Tesla Unit - TheOldScientist
The tesla (symbol T) is the SI derived unit of magnetic flux density, commonly denoted as B (and often loosely called magnetic field strength). One tesla is equal to one weber per square metre. The unit was defined in 1960,^[1] when it was announced during the Conférence Générale des Poids et Mesures, in honour of Nikola Tesla. The strongest fields encountered from permanent magnets come from Halbach spheres, which can exceed 4.5 T.^[2]
A particle carrying a charge of 1 coulomb and passing through a magnetic field of 1 tesla at a speed of 1 meter per second perpendicular to said field experiences a force with magnitude 1 newton,
according to the Lorentz force law. As an SI derived unit, the tesla can also be expressed as
$\mathrm{T} = \dfrac{\mathrm{V}\cdot\mathrm{s}}{\mathrm{m}^2} = \dfrac{\mathrm{N}}{\mathrm{A}\cdot\mathrm{m}} = \dfrac{\mathrm{J}}{\mathrm{A}\cdot\mathrm{m}^2} = \dfrac{\mathrm{Wb}}{\mathrm{m}^2} = \dfrac{\mathrm{kg}}{\mathrm{C}\cdot\mathrm{s}} = \dfrac{\mathrm{kg}}{\mathrm{A}\cdot\mathrm{s}^2} = \dfrac{\mathrm{N}\cdot\mathrm{s}}{\mathrm{C}\cdot\mathrm{m}}$
(The 6th equivalent is in SI base units).^[3]
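As a worked illustration of the 1 C, 1 m/s, 1 T case described above:

$F = |q|\,v\,B\,\sin\theta = (1\ \mathrm{C})\,(1\ \mathrm{m/s})\,(1\ \mathrm{T})\,\sin 90^\circ = 1\ \mathrm{N}$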
Units used:
A = ampere
C = coulomb
kg = kilogram
m = meter
N = newton
s = second
T = tesla
V = volt
J = joule
Wb = weber
Electric vs. magnetic field
In the production of the Lorentz force, the difference between these types of field is that a force from a magnetic field on a charged particle is generally due to the charged particle’s movement^[4]
while the force imparted by an electric field on a charged particle is not due to the charged particle’s movement. This may be appreciated by looking at the units for each. The unit of electric field
in the MKS system of units is newtons per coulomb, N/C, while the magnetic field (in teslas) can be written as N/(C·m/s). The dividing factor between the two types of field is meters/second (m/s),
which is velocity. This relationship immediately highlights the fact that whether a static electromagnetic field is seen as purely magnetic, or purely electric, or some combination of these, is
dependent upon one’s reference frame (that is: one’s velocity relative to the field).^[5]^[6]
In ferromagnets, the movement creating the magnetic field is the electron spin^[7] (and to a lesser extent electron orbital angular momentum). In a current-carrying wire (electromagnets) the movement
is due to electrons moving through the wire (whether the wire is straight or circular).
1 tesla is equivalent to:^[8]
10,000 (or 10^4) G (gauss), used in the CGS system. Thus, 10 kG = 1 T (tesla), and 1 G = 10^−4 T.
1,000,000,000 (or 10^9) γ (gammas), used in geophysics. Thus, 1 γ = 1 nT (nanotesla)
42.6 MHz, the approximate ^1H nucleus (proton) resonance frequency at 1 T in NMR. Thus a 1 GHz NMR magnet corresponds to a field of about 23.5 teslas.
For those concerned with low-frequency electromagnetic radiation in the home, the following conversions are needed most:
1000 nT (nanoteslas) = 1 µT (microtesla) = 10 mG (milligauss)
1,000,000 µT = 1 T
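A minimal Python sketch of the conversions listed above (illustrative only; the function names are mine, and 42.58 MHz/T is the commonly quoted ^1H value behind the 42.6 MHz figure):

```python
GAUSS_PER_TESLA = 1e4        # 1 T = 10,000 G
GAMMA_PER_TESLA = 1e9        # 1 T = 10^9 gamma, i.e. 1 gamma = 1 nT
H1_MHZ_PER_TESLA = 42.58     # approximate 1H NMR frequency per tesla

def tesla_to_gauss(b_tesla):
    return b_tesla * GAUSS_PER_TESLA

def tesla_to_gamma(b_tesla):
    return b_tesla * GAMMA_PER_TESLA

def nmr_field_for_frequency(freq_mhz):
    """Field (in T) at which 1H resonates at the given frequency (in MHz)."""
    return freq_mhz / H1_MHZ_PER_TESLA

print(tesla_to_gauss(1.0))                       # 10000.0 G
print(tesla_to_gamma(1e-9))                      # 1.0 gamma per nanotesla
print(round(nmr_field_for_frequency(1000), 1))   # ~23.5 T for a 1 GHz NMR magnet
```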
For the relation to the units of the magnetizing field (amperes per meter or oersteds) see the article on permeability.
Category: Monty Hall Puzzle
I have previously mentioned in my blog the so-called Monty Hall Puzzle (named after a Canadian-American game show host). The puzzle involves the setup of a recurring game show where you are given a choice between three doors. Behind one door is a car, and behind the other two doors there are goats. You pick one door, and the game show host proceeds to open one of the remaining two doors revealing a goat (note: he knows where the car and the goats are, and after a player makes a choice, he always opens a door revealing a goat). The game show host then asks you if you want to switch your original selection to the other door. The question is: is it to your advantage to switch your choice of doors?
This puzzle was sent in 1990 to an American writer, Marilyn vos Savant, who writes a weekly column for the magazine Parade where she solves puzzles for her readers. Marilyn at the time was recognized by Guinness World Records as the person with the highest IQ in the world, before that category was eliminated from their world record groupings in 1990. Marilyn examined the puzzle and in a very matter-of-fact way replied that if you switch your original door choice, your chances of winning will be 2/3, but if you retain your original choice, your chances of winning will be 1/3. This ignited a huge controversy that degenerated into an insult fest. The effort to verify Marilyn's answer ended up involving tens of thousands of people, ranging from students at schools to mathematicians and statisticians from prestigious research centers in the United States. She was eventually proven to be right.
Over the years I’ve brought up this puzzle several times, and the reactions that I’ve got from people when I mention the solution and try to explain it have amazed me. Their demeanor
changes. Some get impatient, and some even get frankly hostile. The solution to this puzzle seems to be so counterintuitive that people feel that you are peddling nonsense to them when
you reveal the correct answer, and they get mad at you. In their minds it’s as if they are holding a cup, and I come along trying to convince them that they are not holding a cup.
But the solution is correct. Yes, at the end there are two doors. Yes, behind one is a goat, and behind the other is a car. But no, the probability of winning the car is not 50%. If
you keep your original choice, it is 33.3% (1/3) and if you switch it is 66.6% (2/3). And in case this doesn’t amaze/confuse you enough, consider the following: Suppose that while you
are in the game show pondering whether to change your original choice, a person comes in from the street. This person doesn’t know anything about what has been happening in the game
show. Suppose that this person is asked to choose one of the two doors that you are looking at. What is the probability that this person would win the car if the person chooses one of
these two doors at random? The amazing answer is 50%!
In order to understand what’s going on, first we will start with two doors as shown in figure-1.
In a random fashion, I place a car behind one of the doors and a goat behind the other, and I ask you to pick a door. Your chances of winning the car are 50%. If we repeat this trial
18 times, you will win the car 9 times out of 18 (statistically speaking). So far so good. But now suppose that I do not place the car behind the doors in a random fashion. Suppose
that I always place the car behind the door on the right (figure-2).
If we repeat the trial 18 times and you choose a door at random every time, you will still win the car 9 times out of 18 (50%). However, if somebody tips you off that I will always
place the car behind the door on the right, and you adapt your door picking strategy to always select the door on the right (non-random choosing), you will win a car 18 times out of 18
(100%)! Of course, if you instead always pick the door on the left, you will never win a car. Please notice that in this example THERE ARE ONLY TWO DOORS, behind one is a car, and
behind the other is a goat, YET if you pick the door on the left you will NEVER win a car. If you pick the door on the right, you WILL ALWAYS win a car. And if you pick at random
between the two doors you will win the car HALF OF THE TIME!
This illustrates the key point behind probability determination: randomness. If you know the allocation of the car to a given door is not random, you can use this information to
increase your chance of winning (in the above case 100% by choosing the door on the right).
Now suppose that we repeat the trial another 18 times, but I place the car behind the door on the left 6 times out of 18 (6/18 or 1/3: 33.3%), and I place the car behind the door on
the right 12 times out of 18 (12/18 or 2/3: 66.6%) as shown in figure-3.
If you choose a door at random, you will pick on the average the door on the left 9 times and win the car on 3 occasions, and the door on the right 9 times and win the car on 6
occasions for a grand total of 9 (6+3) times out of 18, or 50%. But again, if somebody tips you off to what I’m doing, and you always select the door on the right, you will win a car
12 times out of 18 (12/18 or 2/3: 66.6%). Of course, if you instead always pick the door on the left, you will win the car only 6 times out of 18 (6/18 or 1/3: 33.3%). Again, please
notice: THERE ARE ONLY TWO DOORS, behind one a car, behind the other a goat, YET if you pick the one on the left you win a car 1/3 of the time. If you pick the one on the right, you
win the car 2/3 of the time. And if you pick at random between the two doors, you win 50% of the time!
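Here is a short Monte Carlo sketch of the biased two-door setup of figure-3 (illustrative code only; the function name and trial count are my own choices):

```python
import random

def biased_two_doors(trials=100_000, p_right=2/3):
    """The car is placed behind the right door with probability p_right."""
    wins_random = wins_right = wins_left = 0
    for _ in range(trials):
        car = "right" if random.random() < p_right else "left"
        wins_random += (random.choice(["left", "right"]) == car)
        wins_right += (car == "right")   # strategy: always pick the right door
        wins_left += (car == "left")     # strategy: always pick the left door
    print(f"random pick:  {wins_random / trials:.3f}")  # ~0.50
    print(f"always right: {wins_right / trials:.3f}")   # ~0.667
    print(f"always left:  {wins_left / trials:.3f}")    # ~0.333

biased_two_doors()
```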
At this point, even if you agree with me that lack of randomness can lead to different probabilities of winning the car when there are only two doors (depending on how you choose) you
can still argue that increasing your chances of winning in the above examples depends on somebody tipping you off, in other words: cheating.
But what if you could obtain this information without cheating?
In the Monty Hall puzzle, there are three doors. Behind one there is a car, and behind the other two there are goats. So the three possible arrangements are #1 car-goat-goat, #2
goat-car-goat, and #3 goat-goat-car (see figure-4). IF the car is placed behind the doors at random, and you repeat the Monty Hall trial 18 times, the chance of picking the door with a
car is 6 in 18 (1/3: 33.3%) whether you choose the doors at random or not.
Then, after you make your choice, the game show host opens one of the two remaining doors revealing a goat and asks you if you want to change your initial pick. The key to
understanding the answer to the Monty Hall puzzle is to realize that by opening the door and revealing a goat, the game show host has eliminated the element of randomness in the
allocation of the car to the doors. By eliminating that extra door, the odds now favor the door opposite to the one you picked!
Say that, for the sake of simplicity, out of the three doors (left, center, and right) you have chosen the door on the left (marked with an X under the door for the three possible
arrangements: see below). That door will have a car behind it 1/3 of the time. But now the game show host opens one of the remaining doors revealing a goat (door crossed out).
By doing this, the game show host changes the original possible three-door arrangement of #1 Car-Goat-Goat, #2 Goat-Car-Goat, and #3 Goat-Goat-Car, and converts it into a two-door
arrangement: #1 Car-Goat, #2 Goat-Car, and #3 Goat-Car where your door of choice is the one on the left (marked with the X) as shown in figure-6.
But notice that in the new two door scenario arrangements #2 and #3 are the same. The game show host has created a situation identical to the one depicted in the example of figure-3
where there are only two doors, and one of the doors is favored over the other when it comes to placing the car behind the doors (in this case the one opposite to the one you picked:
the one on the right). Therefore, just like in the situation of figure-3, you can exploit this information by switching to the other door and increasing your probability of winning
from 1/3 to 2/3. The difference, of course, is that in the example of figure-3 there was cheating involved (somebody tipped you off), whereas in the actual Monty Hall puzzle, your
knowledge about how the setup came into being (the opening of the door revealing a goat) allows you to exploit it to improve your odds of winning the car. On the other hand, the person
walking in from the street, who doesn’t have the information you have, will choose between the doors at random, so their chance of winning the car is 50%.
Many people automatically assume randomness when gauging the probability of an either-or event. At the end there are two doors, behind one is a car and behind the other is a goat, so thinking that there is a 50% chance of winning seems like a no-brainer. This may be why people are so confused and exasperated by the correct answer to the Monty Hall puzzle.
But what my explanation illustrates is that if you can gain information about a setup and figure out that it is not random, you can use this information to increase your odds of
winning by changing your picking strategy.
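Here is a simple simulation of the full three-door game, in the spirit of the computer simulations that eventually vindicated Marilyn (this particular code is only my own illustration of the idea):

```python
import random

def monty_hall(trials=100_000):
    stay_wins = switch_wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that is neither the player's pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        switched = next(d for d in doors if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    print(f"stay:   {stay_wins / trials:.3f}")    # ~1/3
    print(f"switch: {switch_wins / trials:.3f}")  # ~2/3

monty_hall()
```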
The Monty Hall puzzle is, of course, just a puzzle, but it bears on how we conduct ourselves in the real world when making choices about either-or outcomes. Should I take out a mortgage on this house? Will the housing market go up or down? Should I buy the stock of this company? Will the stock go up or down? Should I begin looking for work? Will I get laid off or not? The probability of most real-life either-or events is determined by forces which are not random. If we understand probability and we identify these forces, we can make the odds work in our favor. Unfortunately, many people misjudge their chances or get duped into believing false probability determinations, and they end up with, well…a goat.
The Monty Hall Puzzle image by Cepheus is in the public domain, all other images belong to the author and can only be reproduced with permission.
Marilyn vos Savant is an American writer who was recognized by Guinness World Records as the person with the highest IQ in the world before that category was eliminated from their world record groupings in 1990. Marilyn writes a weekly column for the magazine Parade, where, among other things, she solves puzzles and answers questions that her readers send to her. In
1990, one reader sent her a puzzle (named the Monty Hall Puzzle after a Canadian-American game show host) that involved a game show where you are given a choice between 3 doors. Behind
one door is a car, and behind the other 2 doors there are goats. You pick one door, and the game show host proceeds to open one of the remaining 2 doors revealing a goat. The game show
host then asks you if you want to switch your original selection to the other remaining door. The question is: is it to your advantage to switch your choice of doors?
Marilyn replied in a very matter-of-fact way that the answer is "Yes, you should switch": if you keep your original choice, the odds of winning the car are 1/3, but if you switch, the odds of winning the car are 2/3. This ignited a firestorm among her readers, which included quite a number of scientists. She received thousands of letters telling her that there is no advantage in switching because, as there are 2 doors left, one with a goat and the other with a car, the probability of winning the car is 1/2. Of those who wrote letters to her, only 8% of the general public and 35% of scientists thought she was right. Marilyn wrote another column maintaining she was indeed right and tried to explain her reasoning, but to no avail.
The insults started coming in. Many laypeople and scientists (including mathematicians and statisticians from prestigious research centers in the country) lectured her on probability
and berated her intellect, some even suggesting that maybe women think about statistics differently.
In response, Marilyn wrote another column asking for a nationwide experiment to be carried out in math classes and labs, in essence reproducing the problem using 3 cups and a penny. After this she was vindicated. The experiment she suggested, along with simulations performed using computers, proved that she was indeed correct, and many former skeptics wrote letters of contrition apologizing for insulting her. By the time she published her last column on the subject, 56% of the general public and 71% of scientists (the majority) accepted that she was right.
The process outlined above displayed an initial phase of skepticism, followed by a second phase of analysis and corroboration of the claim. However, the case of the puzzle is clear-cut. There is no ambiguity. Everyone could perform the experiment and convince themselves of the truth (there are even online sites that allow you to do this now). And yet, despite this, a significant percentage of individuals still did not accept Marilyn's conclusion.
The two phases mentioned above are also seen in the acceptance of counterintuitive scientific theories, although the complexity of the analyses is much greater and not accessible to
everyone, and the opposition from the skeptics is much stronger. This is especially true in some dramatic situations involving science where the debate spreads into the social and political realms, spawning conspiracy theories. One such case is the conspiracy theory that states that the collapse of the World Trade Center Towers after the attacks of 9/11 was
produced by demolition charges and not as a direct result of the attacks. Among the buildings that collapsed, the case of Building 7 became a lightning rod for the conspiracy theorists
because of the way it was damaged and the way it fell.
Building 7 was one of the buildings in the World Trade Center complex. It was not targeted by the terrorists; rather, when the World Trade Center Towers collapsed, this knocked out the pipes carrying water to the sprinkler system of Building 7, and burning debris from the towers ignited fires within the offices. The fires burned for several hours, and then Building 7 collapsed in a manner that reminded both laypeople and experts of a controlled demolition. Additionally, at the time, the fire-induced collapse of a steel-frame building such as Building 7 was unheard of. This, along with a series of interpretations of actions and communications taking place that day, led a large number of people to express skepticism that Building 7 could have collapsed due to the fire.
The above state of affairs represented the initial phase of what happens when people are confronted by something that counters their sense of how things should work. Skepticism in this
phase is a reasonable reaction to the information being received.
Among the several investigations conducted after 9/11, the National Institute of Standards and Technology (NIST) conducted a thorough three-year investigation that explained why Building 7 collapsed in a manner reminiscent of a controlled demolition. In doing this they discovered a new type of progressive collapse, which they dubbed fire-induced progressive collapse, that accounted for the collapse of the building. Using simulations, they conclusively explained how a steel-frame building such as Building 7 could be brought down by fires, and
they ruled out other explanations
. Some reasonable skeptics were still left unconvinced because, after all, no steel frame building had ever collapsed due to fire alone. However, this changed when the Plasco High-Rise
building in Tehran (a steel frame building like Building 7) collapsed as a result of a fire in 2017. A very clear explanation of the above facts is presented by Edward Current in the
video below.
Because of this and other investigations, the scientific community today accepts the explanation that Building 7 collapsed due to fire. This was the second phase, where facts were gathered, research was carried out, and the issues were explained to the satisfaction of the scientific community. This is not to say that there aren't some holdouts. For example, Architects and Engineers for 9/11 Truth is a group that still refuses to accept these conclusions, and on their website they boast of having more than 3,141 architects and engineers who still espouse skepticism of the accepted explanation. However, considering there were 113,554 licensed architects in the US in 2017 and 1.6 million employed engineers in the US in 2015, you can see that these individuals represent just a small minority of their professions, still clinging to an unwarranted, irrational skepticism.
Such is the resistance some human beings display to accepting counterintuitive facts, whether they are the solutions to a fun puzzle or the explanations behind world changing events.
The Monty Hall Problem image by Cepheus is in the public domain.
Where to find professionals who can handle algorithmic approaches in Linear Programming assignments? | Linear Programming Assignment Help
Where to find professionals who can handle algorithmic approaches in Linear Programming assignments? For example, the most efficient way for individual programmers to assign a variable-based
programming language to a target program is without the fear of figuring out that the source or target program itself is the first variable they are assigned an algorithm to check and match. Leaflet
and Linear Programmer Functional programmers can write for a computer program some functions such as evaluate the function(X, Y,…) that take the topology of X as a result and Y as a result of the
evaluation X and Y may take more or less time to compute together to form a result of the algorithm. This algorithm that runs everywhere is a very efficient way for the program to actually evaluate
the definition of the function and to ultimately compute a result of the algorithm (this is one of the goals of these efforts). Function-Based Programming for Systems of Linear Types It is usually
this type of model that is most useful when working with Linear Programming. The programming language most commonly used, the formalism for working with the language (L*-H), is linear
programming. Using the linear programing language (L*-LU) to control the compiler or other objective machinery is one of the most common and widely used approaches and the best people recommend your
methods. Definition of Formalism for Linear Programming Many systems of linear type programming (Type of Program for A Primer) are designed to be analyzed directly using the style syntax, such as
Type-1-R, type-1-GE, ephir-3-AB,…, type-1-C which is very similar to the Style syntax in type-1, but instead of the style syntax, the formatting syntax can be derived directly and thus you are
creating a type system that is not more complex, not more primitive, but not less specialized to the type system. The style syntax can be chosen statically, in whichever way you want, or canWhere to
find professionals who can handle algorithmic approaches in Linear Programming assignments? Algorithmic approaches have led for the first time to be helpful in solving linear operator assignment
problems in many applications. In fact, the few such solutions ever have been found have been too slow and/or incorrect. However, when trying to apply some algorithms in a difficult, multidimensional
setting, it is often important to find the right combination of algorithmic techniques. We are very excited to announce that a special study has been done to answer this question in a linear
programming setting, addressing a group of experts in distributed algorithms whose work is gaining new momentum. As it stands now, algorithmic approaches to solving linear assignment problems have usually been found in a set-based format. We believe that this study may also change a few things. The first one is a relatively new phenomenon, since now there can be multiple
solutions to problems in a set-like form.
Therefore, it is essential to keep computing machines as simple as possible in order to make effective use of these approaches. In this course we will discuss that at least one of the
algorithms found in this subject was a classifier, the so-called “subclassifier”. This classifier takes the assignment problem you list as input and calculates the relationship that leads to a given
alternative assignment answer set. In the following sections, we would like to give more concrete examples of algorithms below. At the end, we will discuss some of the algorithms on the subject that
were found in our earlier case. The motivation for this study was to show that the classifier does not exist for the actual cases, and that there is a fundamental reason to use it. As we are at that
point, the purpose is to understand how non-linear load-balance is responsible for the lack of efficiency in dealing with the load distribution in a multi-dimensional assignment. We have a technique
for expressing information about the load on a specific solution that is known. The classifier is then able to answer the following questions: Where to find professionals who can handle algorithmic
approaches in Linear Programming assignments? If you’re going to get high school or you’re planning to work at an API level — with some of the best clients in the world (at least for me), and then
maybe you’re going to be working in IT domains by yourself — and you’re wondering if you’re at the mercy of current-day coders and web programmers who can solve the job tasks that they want to. While
this is a subject for a future blog, with a few random notes on my favorite old laptop I was asked to work full time on a new project from Stanford University in the ’80’s. This was when I was at
first on a job call to serve up my project as a review on the idea of how to solve two sets of linear programming problems and a new program, MSPIRIT-200. I was told that at the time, this was when I
wasn’t using any programming languages if that was what I was doing in that project. When I was at least in undergrad, I brought all these folks over here to share their work assignments. They were
given the assignment question-and-answer session — and within their first week along with their research assignments were given the goal to complete the assignment in the next hour or two and work on
the program. They met in class and watched their students come in and check in. They picked me up without any exceptions for the study part, back my laptop and then left me alone in class for two
hours with my students for a nap. But all the while I was listening to whatever they asked about the way that we got our PhD students and senior professors on a conference call with the relevant
folks. The story was heard by faculty in the course team who were in the room with me for the assessment part, and shortly afterward, they were in touch with a few folks from Stanford University who
wanted to plan out the assignment tasks I was doing. [I've heard that in my
Divide Multi-Digit Numbers: CCSS.Math.Content.4.NBT.B.6 - Common Core: 4th Grade Math
Steve Fleming on Explaining Statistical Significance
My name is Steve Fleming, and I work for the National Center for Educational Achievement, a department of ACT, Inc. whose mission is to help people achieve education and workplace success. I also
earned an M.S. in Statistics from the University of Texas at Austin.
I have been thinking a lot lately about how to explain statistical significance. Leaving behind the problem of overemphasis on statistical significance compared to practical significance of results,
my objective for this post is to provide a visual explanation of statistical significance testing and suggest a display for the statistical significance of results.
Statistical significance testing begins with a null hypothesis, which we typically want to show not to be true, and an alternative hypothesis. From sample data, a p-value is generated which summarizes the evidence against the null hypothesis. The p-value is compared to a fixed significance level, α. If the p-value is smaller than the significance level, the null hypothesis is rejected; otherwise we fail to reject the null hypothesis.
Hot Tip: What effect does choosing a different significance level have? In the following diagram the combined blue and red regions represent the possible sample data results if the null hypothesis is true. The blue regions show where we would fail to reject the null hypothesis and the red regions where we would reject it. It is clear that smaller levels of α make it less likely to reject the null hypothesis. In the language of errors, smaller levels of α offer more protection against false positives.
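To make the decision rule concrete, here is a small illustrative sketch (my own hypothetical data and names; it uses scipy's two-sample t-test as just one example of a test that produces a p-value):

```python
import numpy as np
from scipy import stats

# Hypothetical data for two groups (e.g., program vs. comparison).
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=50)
group_b = rng.normal(loc=0.4, scale=1.0, size=50)

t_stat, p_value = stats.ttest_ind(group_a, group_b)

# The same p-value leads to different decisions at different significance levels.
for alpha in (0.10, 0.05, 0.01):
    decision = "reject H0" if p_value < alpha else "fail to reject H0"
    print(f"alpha = {alpha:.2f}   p = {p_value:.4f}   -> {decision}")
```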
Hot Tip: The APA style guide suggests using asterisks next to the sample estimates to indicate the p-value when space does not allow printing the p-value itself. Using increasing intensities of color
as an alternative way to indicate the most significant results saves even more space. Consider:
Rad Resource: How do you choose a consistent set of colors of increasing intensity? I have found Color Brewer to be a good source for this information.
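As a sketch of combining the asterisk convention with increasing colour intensity (my own function names; the palette is matplotlib's 'Blues' colormap, which follows the Color Brewer sequential scheme of the same name):

```python
import matplotlib.pyplot as plt
from matplotlib.colors import to_hex

def significance_stars(p):
    # APA-style asterisks for common thresholds.
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return ""

def significance_color(p, cmap=plt.cm.Blues):
    # More significant results are mapped to a more intense shade.
    if p < 0.001:
        return to_hex(cmap(0.9))
    if p < 0.01:
        return to_hex(cmap(0.6))
    if p < 0.05:
        return to_hex(cmap(0.35))
    return to_hex(cmap(0.1))

for p in (0.0004, 0.008, 0.03, 0.20):
    print(f"p = {p:<6}  {significance_stars(p):<3}  {significance_color(p)}")
```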
What do you think? Does this visual clarify or obfuscate the meaning of statistical significance? I look forward to the discussion online.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community
of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
6 thoughts on “Steve Fleming on Explaining Statistical Significance”
Shelly, good point about the importance of effect size. I think you could also apply a color gradient to reporting effect sizes. I think this is known as a heat map.
I wonder what an effective joint display of effect size and statistical significance might be.
I hadn’t heard that one before–I look forward to trying it out soon! Thanks!
I’m all for the visual display of information, and particularly using gradients to convey intensity or magnitude. Thanks for the tip about Color Brewer!
Steve, great post! Often times, my clients confuse significance with effect size. That is, they interpret a significance of **p<.01 as being of more "importance" than a significance of *p<.05.
Have you given any thought to visually explaining effect size to clients with little to no statistical background? It seems to me that communicating effect size is often of greater value to
clients than p-value as effect size captures the strength of the intervention (which is ultimately what we want to evaluate).
Elena, I agree the statistical jargon can be overwhelming to some folks. Have you tried a legal proceedings metaphor? The defendant is either innocent or guilty with the American system presuming
(null hypothesis) innocence. The preponderance of evidence (p-value) is used to make a decision.
Thank you for your post. I think the graphic you present would be useful with audiences with a beginner level statistics background. For me the larger challenge is how to explain statistical
significance to folks with no statistical background (e.g., for whom the terms “null hypothesis,” “alpha” are meaningless), who see a large difference in means and think it is significant despite
statistical analysis indicating otherwise. Any tips for this conversation?
1 Introduction
Upscaling is a required step to adapt a fine grid geostatistical simulation (to reproduce spatial variability at a different scale) to equivalent parameters on (usually irregular) grids used by
transport codes. A new renormalisation method is proposed, based on the properties of a Voronoï grid used in the code Hytec, to compute the inter-block permeability; this method does not require the
knowledge of the local flow direction. Our computation of the scalar block permeability is compared to two other classical renormalisation methods, based on a parametric sensitivity analysis to the
spatial variability, in two dimensions. The chosen observable is the cumulative flux of a non-reactive tracer through the outlet of the domain.
2 Bibliographical elements
Since the Renard and de Marsily synthesis [19], developments on upscaling methods are still carried out, in various domains: multiphase flow [6], fractured media [10,11,21], reactive transport [14],
etc. Two general classes of methods can be highlighted:
• • on the one hand cell aggregation, which aims at adapting the grid to the medium properties and the flow boundary conditions [1,3,12];
• • on the other hand, methods which aim at attributing equivalent values (or at least an interval) for properties on a fixed geometry.
The second seems better adapted for numerical models with a high spatial variability [7], particularly when permeability or transmissivity are obtained by geostatistical simulations based on the
anamorphosed Gaussian model. Furthermore, for reactive transport applications, the possible evolution of the pore structure, due to precipitation or dissolution reactions, is still a serious
limitation to the use of adaptive grids.
2.1 Block permeability: exact results or inequalities
The equivalent permeability of a block is precisely known in a few particular cases [16]. According to the fundamental inequality, a block permeability is bounded by the harmonic- and the
arithmetic-mean of its elements. When the permeability is simulated on a fine grid, the interval can become very large, if the blocks are constituted of numerous fine cell elements, or if the
dispersion variance of the permeability of the cells inside the block is high. Several authors suggested tighter bounds, for particular media or boundary conditions [18]. For instance, for relatively
general conditions (linear pressure or constant flux on the boundary), Pouya [17] shows the rise of two particular tensors, linked by an inequality: equivalent permeabilities in the direction of the
mean gradient and the mean flux respectively. He then proposes to use these tensors as upper and lower bounds for the results, according to the problem to solve. While bounding the result is more
rigorous, "realistic" values are sometimes sufficient, particularly when the computation times are large, as in reactive transport.
2.2 Empirical formulas
For lack of a general rule for permeability composition, several empirical formulas have been studied, notably renormalisation methods: iterative treatments to remove small scale fluctuations [8]. In
the following, we retain two expressions of block permeability.
The first one is based on Matheron's formula [16], a composition of the arithmetic mean $\mu_a$ and the harmonic mean $\mu_h$, as a function of the dimension D of the space:
$K_{\mathrm{block}} = \mu_a^{\alpha}\,\mu_h^{\,1-\alpha}$
where $\alpha = \dfrac{D-1}{D}$. In dimension 2, $\alpha = 1/2$, so that the equivalent transmissivity is the geometric mean of the harmonic and the arithmetic means of the transmissivities of the cells inside the block. This formula has since been generalised, e.g., [2,4].
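A minimal Python sketch of this composition (illustrative only; the function name and sample values are mine):

```python
import numpy as np

def matheron_block_permeability(cells, dim=2):
    """Compose cell permeabilities as mu_a**alpha * mu_h**(1 - alpha), with alpha = (D - 1)/D."""
    cells = np.asarray(cells, dtype=float)
    mu_a = cells.mean()                 # arithmetic mean
    mu_h = 1.0 / (1.0 / cells).mean()   # harmonic mean
    alpha = (dim - 1) / dim
    return mu_a**alpha * mu_h**(1.0 - alpha)

# In dimension 2 (alpha = 1/2) this is the geometric mean of the two means.
print(matheron_block_permeability([1e-3, 5e-4, 2e-3, 1e-4]))
```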
The second method is based on the simplified renormalisation by Renard and Renard et al. [18,20]: it consists in an iterative composition of the values of the elementary cells contained by the block, alternating arithmetic and harmonic means. This method yields two bounds, following each axis of the grid: the lower ($c_{\min}$) and upper ($c_{\max}$) bounds are obtained when the first iteration uses a harmonic and an arithmetic mean, respectively. These bounds are then composed by power-average, taking into account the medium anisotropy if needed. Let u be the assumed direction of the tensor, taken parallel to one of the axes for simplicity; let $c_{\min}^u$ and $c_{\max}^u$ be the associated lower and upper bounds, respectively; let $K^u$ be the diagonal component of the permeability tensor on the elementary cell in this direction. Then, for a 2D isotropic medium, the simplified renormalisation is the geometric mean of the two bounds given by the alternating means:
$K^u = \sqrt{c_{\min}^u\, c_{\max}^u}$
This relationship is similar to Matheron's formula, but applies with tighter bounds. For an anisotropic medium, the upscaling transforms a scalar permeability into a tensorial permeability. If the flow solver accepts scalar values only, the block permeability can be computed by assuming the direction of the head gradient to be known. In 2D, for a diagonal permeability tensor with components $(K_x, K_y)$ in the directions (Ox, Oy), the permeability along the flow direction $\vec{s} = (\cos\theta, \sin\theta)$ is:
$K_s = K_x \cos^2\theta + K_y \sin^2\theta$
When the local direction of the flow $\vec{s}$ is not known, the consistency of the equivalent permeability obtained by this method decreases as the effective direction of $\vec{s}$ diverges from its supposed direction.
3 Irregular grid renormalisation
In his Ph.D. thesis, Renard extends the simplified renormalisation to irregular blocks, by weighting the successive means depending on the block geometry ([18], p. 92–93). In the following, we investigate the transmissivity composition for a 2D medium, discretised by an irregular grid. While we work in the plane, we still use the term "permeability": this is consistent, e.g., with a flow in a constant-depth aquifer. The flow code is briefly introduced, then a new upscaling method is proposed, based on the specific properties of the spatial discretisation scheme.
3.1 The Hytec code: finite volumes on a Voronoï polygon grid
Hytec [13,14] is a coupled hydrodynamic and geochemistry code. It is based on the resolution of the macroscopic equations of flow and advective/dispersive transport on a finite volumes scheme (or
integrated finite differences), using a spatial discretisation by irregular Voronoï polygons. The convex polygons are built, starting from an arbitrary set of points, by intersection of the
perpendicular bisector of each pair of nodes. In a finite volumes formulation, all parameters are considered uniform inside each polygon.
For a one-phase stationary flow, Hytec solves the diffusivity equation and the associated Darcy's equation [15]:
$0 = \operatorname{div}\left(K \cdot \overrightarrow{\operatorname{grad}}\, h\right), \qquad \vec{U} = -K \cdot \overrightarrow{\operatorname{grad}}\, h$
where h is the head (in m), $\vec{U}$ is the Darcy velocity vector (in m/s), and K the permeability (in m/s). The equation is resolved with respect to the unknown h; then the normal component of $\vec{U}$ is computed, on each cell boundary, via Darcy's equation. Once the flow velocity field is determined, the transport equation is written:
$\dfrac{\partial (\omega c)}{\partial t} = \operatorname{div}\left[\left(D_e + \alpha \|\vec{U}\|\right) \overrightarrow{\operatorname{grad}}\, c - c\, \vec{U}\right]$
where ω is the porosity, c is the tracer concentration, $D_e$ is the effective diffusion coefficient (in m^2/s) and α the dispersivity (in m).
Hytec proposes a choice of algorithms for the resolution of the transport equation. The time and space interpolations are one-step, but the centring can be configured by the user. We chose for this
study a time-centred pattern (Crank–Nicholson scheme), which has little effect on the result apart from the maximum admissible time step.
As to the space discretisation, Hytec proposes both an upwind and a centred scheme. The first is unconditionally stable, but creates a numerical dispersion term (mathematically equivalent to a
physical dispersivity α), locally of the order of a half-cell diameter. The second scheme is space-centred: it does not create numerical dispersion, but its stability is conditional on the local
value of α. The first scheme is generally preferred for coupled reactive transport problems: indeed, the geochemistry usually limits the effects of dispersion by a chemical sharpening of the fronts.
However, the numerical dispersion is proportional to the (local) size of the cells, so that an accurate comparison between several grid computations is difficult. The effects of the space centring
are discussed further in the results, section 5.
3.2 Simplified renormalisation for a polygonal cell (Fig. 1)
The simplified renormalisation algorithm can be readily adapted to an irregular mesh, by weighting the means at each iteration according to the actual number of cells inside the polygon. This is
indeed a local algorithm.
Fig. 1
In the following, a “cell” will refer to the fine regular mesh of the geostatistical simulation. The simplified renormalisation algorithm on an irregular mesh is then as follow (see Fig. 1):
• 1. each polygonal mesh is discretised into cells, following the circum-rectangle of the polygon;
• 2. a weight 1 is attributed to each cell if the centre of the cell is inside the polygon, 0 otherwise;
• 3. the permeabilities of the rectangle are iteratively composed, weighted by the sum of the cells used at the iteration; therefore, if the centre of mass of a cell falls out of the polygon, the
permeability of the cell is not taken into account.
The last step of the algorithm is repeated until a unique value is obtained for each polygon, for the considered mean.
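A sketch of this weighted composition (illustrative Python with my own names; for brevity it composes the masked cells once along the assumed flow axis and once across it to obtain the two bounds, then takes their geometric mean, rather than reproducing the exact pairwise iteration of the simplified renormalisation):

```python
import numpy as np

def _arith(values, weights, axis):
    # Weighted arithmetic mean along one axis.
    return (weights * values).sum(axis=axis) / weights.sum(axis=axis)

def _harm(values, weights, axis):
    # Weighted harmonic mean along one axis (zero-weight cells drop out).
    return weights.sum(axis=axis) / (weights / values).sum(axis=axis)

def block_permeability(k, w, flow_axis=0):
    """Geometric mean of the weighted series-parallel bounds along flow_axis.

    k : 2D array of cell permeabilities on the circum-rectangle of the polygon
    w : 2D array of weights, 1 if the cell centre lies inside the polygon, 0 otherwise
        (every row and column is assumed to contain at least one interior cell)
    """
    across = 1 - flow_axis
    # Lower bound: harmonic mean along the flow axis, then arithmetic mean across.
    c_min = _arith(_harm(k, w, flow_axis), w.sum(axis=flow_axis), axis=0)
    # Upper bound: arithmetic mean across the flow axis, then harmonic mean along it.
    c_max = _harm(_arith(k, w, across), w.sum(axis=across), axis=0)
    return float(np.sqrt(c_min * c_max))

k_cells = np.array([[1e-3, 2e-3], [5e-4, 1e-4]])
w_mask = np.ones_like(k_cells)
print(block_permeability(k_cells, w_mask))
```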
3.3 Normal component renormalisation
Using the properties of a Voronoï mesh, we can introduce an empirical upscaling, without hypothesis on the flow direction and on the principal directions of the permeability tensor. Indeed, the mass
balance equation is computed using only the normal component of the flux between adjacent polygons.
For each polygon, an equivalent permeability is computed by considering the polygon as a sum of triangles: their bases are two adjacent vertices of the polygon and their summit the centre of the
polygon (Fig. 2, right). By construction of the Voronoï mesh, the triangles with common base from two adjacent polygons are identical, in particular their surface areas are the same. These triangles
are gathered by pairs to form “kite” figures. The inter-block permeability used by the finite volume scheme is precisely the block permeability of these quadrilaterals, following a similar reasoning
to [9]. In the mass balance, only the normal component of the flow through the shared side of two adjacent polygons is used; for this normal flow, the equivalent permeability can be computed, e.g.,
by simplified renormalisation.
Fig. 2
Due to historical choices, the code Hytec uses only scalar components: indeed, in coupled reactive transport phenomena, the geochemistry often limits the effects on the transport of a possible
anisotropy of the medium. Assuming isotropic conditions, the diagonal terms of the permeability tensor obtained by this method are combined following the direction of the normal flow, which is
determined by the mesh geometry (by construction of the Voronoï mesh). This inter-block permeability is used directly in the Hytec code: it corresponds to the mid-nodal permeability of finite
differences. It is worth mentioning that the discretisation obtained by this method is finer than the finite volume partition of space for which the permeability would be uniform inside each polygon.
To preserve the consistency with the finite volume formalism, an intra-block upscaling of the permeability can also be computed: each polygon can be divided into as many triangles as it has vertices (two adjacent vertices form the basis of the triangle, the centre of the polygon forms its summit); on these triangles, the simplified renormalisation yields a value of the equivalent permeability for a flow normal to the opposite side. The equivalent permeabilities are then composed to get the equivalent permeability of the polygon; an arithmetic composition has been chosen, weighted by the relative surface of each elementary triangle of the polygon.
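The area-weighted arithmetic composition of the triangle values can be sketched as follows (illustrative; the names `k_tri` and `area_tri`, holding the equivalent permeability and the surface of each elementary triangle, are mine):

```python
import numpy as np

def polygon_permeability(k_tri, area_tri):
    """Arithmetic composition of the triangle permeabilities, weighted by their surfaces."""
    k_tri = np.asarray(k_tri, dtype=float)
    area_tri = np.asarray(area_tri, dtype=float)
    return float((k_tri * area_tri).sum() / area_tri.sum())

# Example: a square polygon split into four triangles of equal area.
print(polygon_permeability([1e-3, 5e-4, 2e-3, 1e-4], [0.25, 0.25, 0.25, 0.25]))
```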
For anisotropic cases, the reasoning is not so straightforward. Beyond the necessity to take the permeability tensor into account, it is no longer possible to use the simplification based on the normal
component of the flow at each polygon interface, so that the renormalisation scheme has to be heavily modified.
In the following, all three modes of composition are tested: Matheron's mean, simplified renormalisation, renormalisation along the normal component of the flow. Each yields two values: an
intra-block permeability (uniform inside each polygon), and an inter-block permeability (on the quadrilateral between the centre and adjacent side of each pair of adjacent polygons). For the
simplified renormalisation, the (intra- or inter-block) scalar permeability is computed assuming a local direction of the flow parallel to the macroscopic head gradient, determined by the boundary conditions.
4 Design of the numerical experiments
The sensitivity study aims at discriminating between the influence of the spatial variability of the medium (represented by the geostatistical simulation on the fine grid) and the representation of
this variability, which depends on the spatial discretisation for the finite volumes and the upscaling scheme. To this end, numerical experiments are carried out on geostatistical simulations of the
permeability only; inter- or intra-block equivalent values are computed using all three upscaling methods, for several types of Hytec grids.
The domain of study is a permeameter: it consists of a rectangle, with a uniform inflow along a boundary, a uniform constant head condition along the opposite boundary, while the other two boundaries
are impermeable (Fig. 3). The permeameter has been extended with two lines of cells just outside the control lines, for better control of the boundary conditions independently of the polygonal grid
inside the permeameter. At the initial time, a perfect tracer A is at constant concentration 1 inside the permeameter; the incoming flow flushes the permeameter with a tracer B at concentration 1
(and no tracer A).
Fig. 3
In the bulk of the permeameter, the permeability K is assumed lognormally distributed, obtained from a (multi-) Gaussian random function Y of zero mean and unit variance:
$K = M \exp\left(\sigma Y - \dfrac{\sigma^2}{2}\right)$
where M is the arithmetic mean, and σ^2 the logarithmic variance of the permeability. In this model, the spatial variability is characterised by the variogram γ of the random function Y. Once fixed
the variogram model type (spherical, exponential), the set of parameters which fully describe the spatial model is limited to the range a (correlation length) and the standard deviation σ[ln K] of
the Napierian logarithm of the permeability. Due to the choice of boundary conditions, the flow is independent of the mean M of the permeability. We chose M=10^−3m/s.
The results of the study are all presented adimensionally; e.g., the length of the field is fixed to L=100m: all the other parameters of the same dimension (range of the variogram, dispersivity, etc.)
are given relative to the unit L. For the hydrodynamics resolution, the time is expressed adimensionally, as a number of pore volumes injected (the total pore volume is equal to the surface of the
permeameter in 2D multiplied by its mean porosity); we refer to this adimensional time also as “water renewal cycles”.
The observable chosen to describe the system is the cumulative flux of tracer through a control line at the outlet of the permeameter: tracer A, initially at concentration 1 inside the permeameter,
is flushed out by the injection of fresh (A-free) water at the inlet. Throughout the study, we will show the cumulative flux of A through the outlet after an adimensional time 0.8 (i.e. after 80% of
the pore volume has been renewed). This particular time is particularly interesting: later on, most of the initial tracer has been flushed, so that the cumulative fluxes are close to the initial
total amount of A; for shorter amounts of time, the effects of the spatial variability are still limited to the neighbourhood of the inlet, and have low effects on the fluxes at the outlet.
Two series of numerical experiments were carried out. The first one is based on a 64×64 geostatistical simulation, which allows for a Hytec reference simulation, without preliminary upscaling. The
upscaling techniques are then applied to build coarser Hytec grids: 16×16 and 8×8 rectangular grids. For both sizes, three classes of Hytec grids are investigated: regular rectangular, diamonds
(actually 45°-tilted rectangles), and rectangular with diamond inclusion (Fig. 4); so that six coarse grids are investigated. For the second series of numerical experiments, 500×500 reference
simulations have been attempted. However, the implicit geochemistry in Hytec (even for a non-reactive tracer) leads to a degradation of the precision of the results for such fine grids; in our specific case, where precise comparisons of fluxes across a boundary are performed, the quality of the results was not sufficient to provide a definite reference. This point should be improved in further
versions of Hytec.
Fig. 4
For the first series of experiments, the 64×64 geostatistical simulations were performed by the discrete spectral method: the method is very CPU-efficient, and offers the possibility to vary the
range a at “constant random draw”, i.e. using the same realisation of random numbers. We can thus investigate the effect of the range without interference from the random variability between two
draws [5]. For the second series of experiments, we chose the turning bands method; the code has not been adapted for this specific purpose, so that the 500×500 grid simulations for different
ranges correspond to different random draws.
Three values were tested for the range of the spherical variogram of the Napierian logarithm of the permeability (Eq. (3)): a=10, 30 and 50% of the length L of the permeameter. The five values
chosen for the logarithmic standard deviation σ[ln K] are listed in Table 1. The other parameters were chosen as follows:
• • high dispersivity: α=0.1×L;
• • uniform porosity: ω=0.3;
• • effective diffusion coefficient: D[e]=10^−10m^2/s.
Table 1. The logarithmic standard deviation σ[ln K].

Variance (m^2/s^2): 10^−4    5×10^−5    10^−5    5×10^−6    10^−6
σ[ln K]:            2.15     1.98       1.55     1.34       0.83
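For a lognormal K with arithmetic mean M, the standard identity σ[ln K]^2 = ln(1 + Var(K)/M^2) reproduces the values of Table 1 with M = 10^−3 m/s; a quick illustrative check:

```python
import numpy as np

M = 1e-3  # arithmetic mean of the permeability (m/s), as chosen above
for var_k in (1e-4, 5e-5, 1e-5, 5e-6, 1e-6):
    sigma_ln_k = np.sqrt(np.log(1.0 + var_k / M**2))
    print(f"Var(K) = {var_k:.0e} m^2/s^2  ->  sigma_lnK = {sigma_ln_k:.2f}")
# Prints 2.15, 1.98, 1.55, 1.34 and 0.83, matching Table 1.
```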
It is important to note the need for a careful control of the computation conditions on the precision of the result, for the chosen observable, particularly, the time step. Hytec determines an
optimal time step, following several criteria (e.g., speed of convergence for the coupling); the maximum time step is bounded by the Courant–Friedrichs–Lewy stability criterion; however, a precise
comparison between computations at different time discretisations revealed that the criterion was not stringent enough. We finally made all simulations with the same time step, based on the smaller
time step obtained for the finer grid.
5 Experimental results
The transport computations were performed in an advective/dispersive framework, with a dispersion around 10^3 times larger than diffusion. The dispersivity coefficient is quite high, around 0.1 times
the length of the permeameter.
5.1 Simulations without numerical dispersion
The transport simulation using the centred scheme, without numerical dispersion, shows that the cumulative flux on a homogeneous medium is independent of the grid. In Fig. 5, the cumulative flux curves are strictly identical for the 64, 16 and 8 grids.
Fig. 5
On the other hand, the upwind scheme, with added numerical dispersion, produces a delay on the arrival of the tracer; the delay increases with the size of the polygons. The cumulative flux curves
converge towards the theoretical limit (a proof that the code does not loose mass), but the time needed to converge towards the limit increases with coarser grids: indeed, for coarse grids, the
numerical dispersion is larger, which clearly creates longer dispersion tails, and delays the complete flushing of the tracer.
In the following, results are given using the numerically dispersive scheme.
5.2 Upscaling between rectangular cells 64×64, 16×16, and 8×8
The parametric study is performed systematically using the same initial random realisation, and the several values for the geostatistical parameters: 5 values for σ, 3 for the range, 3
renormalisation methods, intra- or inter-block variation, fine initial grid and the two other grids.
Fig. 6 shows the cumulative fluxes obtained on the reference simulation (64×64 grid), for media with range 0.1L and 0.5L. For a=0.1L, the curves are superimposed for all variance values, and
close to the uniform permeability reference. For a=0.5L, the variance plays a major role: the cumulative fluxes diverge from the uniform permeability reference. A possible explanation is that
longer range geostatistical simulations create spatially well-structured fluctuations in the flow velocity field; increasing variances accentuate the discrepancy between low and high permeability
zones, so that the effect of preferential pathways increases.
Fig. 6
The spatial variability of permeability systematically introduces a delay on the cumulative flux curves, compared to the homogeneous reference or small range or small variance simulations. In the
short term, preferential pathways allow for a faster circulation of the tracer; however, in the longer term, slow circulation areas delay the complete flush of the tracer. The spatial variability has
thus a dual effect on the system behaviour: the breakthrough of the tracer B is faster, but the complete leaching of the initial tracer A takes longer.
Let Q[ref](t) be the cumulative flux in the reference simulation, on the initial 64×64 grid, and Q[ups](t) the results after upscaling. Figs. 7–10 show the effect of the upscaling, as the
difference Q[ups](t)−Q[ref](t). Indeed, in this case, the ratio Q[ups]/Q[ref] tends to decrease the difference between simulation results, so that it is not a good discriminating observable.
Fig. 7
Fig. 8
Fig. 9
Fig. 10
The variance of the permeability discriminates between the curves. Unexpectedly, the maximum difference compared to the reference is obtained for small values of the variance. This can be explained by the transformations due to the upscaling, which modify the preferential pathways for a small variance. On the contrary, for larger values of σ, the preferential pathways are more pronounced and resist all the upscaling techniques. The differences between upscaling methods can be seen on the 16×16 grid (Figs. 7 and 8), and even more on the 8×8 grid (Figs. 9 and 10, with different scales from those of the 16×16 grid figures).
Logically, the effect of the range on the upscaling is more related to its ratio to the mean size of the blocks. For the same range, the influence of the upscaling (of the hydrodynamic grid)
increases with the block sizes (Figs. 7 and 9 on the one hand and Figs. 8 and 10 on the other hand). For a fixed hydrodynamic grid, the differences between the three upscaling methods increase with
the range (Figs. 7 and 8 on the one hand and Figs. 9 and 10 on the other hand), and are greater when the variance increases.
It is not straightforward to set a hierarchy between the upscaling methods. For the inter-block method, the results of the normal component renormalisation are a little closer to the reference,
compared to the simplified renormalisation. Matheron's mean is systematically less precise for the 8×8 grid, but better for the 16×16 grid.
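For readers unfamiliar with these averaging rules, the sketch below shows the most elementary version of block upscaling: a fine lognormal permeability field is averaged over coarse blocks with the harmonic, geometric and arithmetic means. The geometric mean is Matheron's classical result for 2D isotropic lognormal media; the simplified and normal-component renormalisations used above are more elaborate, and the field here is uncorrelated white noise rather than a turning-bands simulation, so this is an illustration of the idea only.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_lnK = 1.55                        # one of the values of sigma_lnK in Table 1
K = np.exp(rng.normal(0.0, sigma_lnK, size=(64, 64)))   # fine 64x64 lognormal field

def upscale(K, block, how):
    """Average K over square blocks of side `block` with an elementary mean."""
    n = K.shape[0] // block
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            cell = K[i*block:(i+1)*block, j*block:(j+1)*block]
            if how == "arithmetic":
                out[i, j] = cell.mean()
            elif how == "harmonic":
                out[i, j] = 1.0 / (1.0 / cell).mean()
            else:                        # geometric mean (Matheron, 2D isotropic)
                out[i, j] = np.exp(np.log(cell).mean())
    return out

for how in ("harmonic", "geometric", "arithmetic"):
    K_coarse = upscale(K, block=8, how=how)   # 64x64 field -> 8x8 grid of blocks
    print(how, float(K_coarse.mean()))
```

The three averages bracket each other (harmonic ≤ geometric ≤ arithmetic block values), which is one way to see why the choice of renormalisation matters more when the variance is large.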
5.3 Experiments on complex grids
A second series of numerical experiments was carried out, with higher relative influence of the upscaling, and with more complex spatial discretisations for the hydrodynamic resolution (Fig. 4). For
that purpose, the geostatistical simulations were performed on a much finer grid 500×500. Two independent realisations for each range a=0.1L, 0.3L and 0.5L were drawn using the turning bands
method. It was thus possible to control the influence of the random draw, but as explained in section 4, a direct comparison of simulations with different range was no longer possible. A major
difference, compared to the previous section, is the absence of a reference hydrodynamic simulation, as discussed in section 4.
The hydrodynamic simulation results for the different grids are presented via the cumulative flux of tracer A through the outlet relative to the flux for the same grid with uniform permeability. In
the absence of reference, this smoothes the effects of the spatial discretisation.
The results are displayed in Figs. 11 and 12, respectively on the square 16×16 and 8×8 grids in the intra-block formulation. The different upscaling methods are represented by different line styles (dashes, dots, etc.). The differences between the upscaling methods are visible only for the larger values of the range and the variance. The discrepancy also increases with larger block sizes. The other types of grids (diamonds and inclusions) give similar results.
Fig. 11
Fig. 12
The impact of the random draw itself is far more important than the effect of the upscaling technique, particularly for range a=0.5L. In this latter case, the field is not large enough to ensure
the ergodicity of the realisation: the spatial mean can thus be different from its expectation.
Owing to its limited influence (at least compared to the impact of the spatial variability itself), it is difficult to find systematic effects due to the upscaling techniques. Figs. 13 and 14 show a
comparison for two 16×16 and 8×8 diamond-shape grids, in intra- and inter-block respectively. The inter-block formulation seems more robust relative to the mean size of the cells. This is
understandable, since the implicit underlying grid for permeability is at least twice as fine for inter- than for intra-block. At any rate, the effects of upscaling towards coarser grids are more pronounced for the intra-block formulation.
Fig. 13
Fig. 14
The cumulative fluxes obtained on a dense grid are generally lower than fluxes on a coarser grid (the points are below the first bisector on the scatter diagram), with the exception of the
diamond-shape grids.
Figs. 15 and 16 (for the 16×16 and 8×8 diamond-shape grids, respectively) show a comparison between the inter- and intra-block formulations. It appears that the inter-block simulations yield
systematically higher cumulative fluxes than the intra-block (the points are over the first bisector on the scatter diagram).
Fig. 15
Fig. 16
Finally, the simulations do not show systematic differences of behaviour between the three upscaling methods.
6 Conclusion
Several conclusions can be drawn from the comparison of the results on different simulated media and different hydrodynamic grids. The upscaling method itself has less influence on the cumulative
flux of tracer at the outlet than the actual spatial discretisation (the hydrodynamic grid), or the effective spatial variability of the medium. The discrepancy between the three upscaling techniques
for the inter-block permeability is even negligible, except for the larger values of the variance of the permeability and for the larger ranges. The three techniques do not display systematic effects
(e.g., under-estimation). The geometry of the hydrodynamic grid should be carefully evaluated, and should be fully considered as one of the influential parameters for the observable.
Finally, the importance of the random draw has been displayed. Consequently, it could be worth devising a probabilistic quality criterion for the simulations; i.e., the best method should provide an
unbiased estimation of the results distribution, at least in the mean or better in the mean and variance.
The study received a financial support by the European Union and University of Bologna. The authors are grateful for the positive reviews by B. Noetinger and R. Ababou who greatly helped improve the | {"url":"https://comptes-rendus.academie-sciences.fr/geoscience/articles/10.1016/j.crte.2008.11.014/","timestamp":"2024-11-11T20:57:54Z","content_type":"text/html","content_length":"110711","record_id":"<urn:uuid:39ff206c-b183-4db3-9146-09e71cfe2eb3>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00895.warc.gz"} |
Physics 11 Non-Conservative Work Lab
\documentclass[a4paper,11pt]{apa6} %"man" means manuscript. Replace "man" with "jou" four \usepackage[english]{babel} \usepackage[utf8x]{inputenc} \usepackage{amsmath, siunitx} \usepackage{graphicx}
\usepackage{array} \usepackage{float} \usepackage{placeins} \usepackage[colorinlistoftodos]{todonotes} \title{Physics 11 Non-Conservative Work Lab} \shorttitle{Non-Conservative Work Lab} \author
{Frank Wang} \affiliation{Magee Secondary School\linebreak1-4\linebreak March 17, 2017} %for three names you can uncomment this: %\threeauthors{Mr. Sheldan}{Author Two}{Author Three} %\
threeaffiliations{Magee Secondary}{Magee Secondary}{Magee Secondary School} \abstract{} \begin{document} \maketitle \section{} \section{Objective and purpose} This lab is designed to explore the
relationship between distance(d) and work done by non conservative force(Wnc) for a ball rolling down on a plane. The lab will be made to observe and record the motion of the ball as it rolls down
from the incline plane and to measure the time the ball takes to roll down to the bottom and distance the ball has travelled in order to calculate the final velocity (Vf) of the ball and figure out
the kinematic energy (KE) of the ball as well. \section{Problem and question} Since the dependent variable is work done by non conservative forces (Wnc) and independent variable is the distance (d),
does the Wnc of the ball have a linear relationship with the distance the ball has travelled in this rolling ball experiment, and does the Wnc of the ball increase as the distance travelled by the ball increases? \section{Background} In this non conservative work lab, it is supposed that the work done by the non conservative forces (Wnc) of the ball could be determined by measuring the time it takes to roll different distances, and therefore the Wnc will be determined by the distance the ball has travelled on the plane. Wnc is the work done by non conservative forces, and it equals the
total amount of energy of gravitational potential energy (PE) and kinematic energy (KE), which makes Wnc=KE+PE. In this lab, when a small ball rolls down from the plane, its potential energy(PE)
decreases due to the decrease of height since "PE=mgh" (m-mass, g-9.8N/Kg, h-height) and meanwhile, its kinematic energy increases due to the increase of velocity of the ball since "KE=1/2mv²"
(m-mass of the ball, v-final velocity of the ball). In this lab, the ball rolling down from the plane mainly involves the forces of friction (Ffr) and gravity (fg). Potential energy (PE) is mainly
about the work done by gravity which is a conservative force, and then as the ball rolls down from the plane, its gravitational potential energy (PE) transformed into the kinematic energy (KE)
gradually. In addition, potential energy (PE) will be negative because the height of ball decreases and kinematic energy (KE) will be positive because the velocity of the ball increases, and
therefore PE+KE basically equals the work done by the force of friction (Ffr) which is a non conservative force, that makes the equation of Wnc=PE+KE. \section{Hypothesis} In this lab, if the ball
has rolled down from exactly the same plane for a longer distance (d), then its work done by non conservative force (Wnc) would be greater. \section{Variables Description} Variable Description
demonstrates those factors which are changing, which are being changed and measured, and which are being held the same for a fair test in this lab. \section{Materials} In this lab, the following
materials are needed in order to complete the trials. -A metal ball:about 1.0 cm diameter, mass about 10g -A meter stick or ruler and a triangular ruler -A stopwatch or a smart phone -A scientific
calculator -A wooden plane -Masking tape -A wooden plate \includegraphics[scale=0.05]{Materials.pdf} \begin{table} \centering (Variables Description) \caption{A table} \begin{tabular}{|c|c|c|}
Independent&Dependent&Controlled \\\hline Distance, d(\SI{}{m} )& Time, t(\SI{}{s})&Slope of plane(theta ) \\ &Height, h(\SI{}{m})\\ &Vf (\SI{}{m/s})\\ &∆PE (\SI{}{J})\\ &KE(\SI{}{J})\\ &Wnc (\SI{}
{J})\\ \end{tabular} \end{table} \section{Procedures} \subsection{Part1: Setting up the tools} 1.Using the marking tape to label the different lengths of the plane (2.20m, 2.00m, 1.80m...). 2.Place
one end of the wooden plane on the first step of a ladder,and put the other end on the ground. 3.Find a distance(e.g 2.20m) and measure the relative height (e.g 0.56m) and calculate the slope of the
plane(e.g theta=6.0) 4.Take out the metal ball , the stopwatch or smart phone, a meter stick and a ruler and get ready to use them to measure the factors. 5.Take out a pen or a pencil, paper, and a
calculator, and draw a table for the factors such as distance, height, and time on the paper and prepare to record the data from the experiment. \subsection{Part 2: Measurement} 1.Place the metal
ball on the exact position (marking tape) and record the distance and measure the relative height. 2. Release the ball and let the ball roll down and hit the plate on the bottom, and the recorder
uses the stopwatch to record the exact time that the ball hits the plate by hearing the sound of crashing. 3.Since it is impossible to measure the exact time, and for avoiding as much uncertainties
as possible, rolling the ball down from the same distance and height on the plane for three times and calculate the average time to minimize the uncertainties and errors in the measurement. 4.Record
the distance, height, and average time for each different distance in each trial on the paper. 5. After complete recording data for different distances of one slope of the plane, adjusting the
position of the plane and changing the slope of the plane(change theta) 6. Measure the distance, height, and time for each trial in every different slope of the plane. \subsection{Part3: Calculation}
1.Use the kinematic equation "d=(Vo+Vf)/2*t" to calculate the final velocity(Vf=2d/t, since Vo=0 m/s). 2.Measure the mass of the metal ball in kilogram (e.g mass of ball=0.00835 kg). 3.After
calculating the final velocity, use the formula "∆PE=mg∆h" (m-mass of the ball, g=9.8N/kg, h-height) to calculate the change in gravitational potential energy. 4.In order to calculate the change in
kinematic energy, the mass of metal ball need to be measured, and then use the formula " ∆KE= 1/2mv² " (m-mass of the ball, v-final velocity of the ball) to calculate the change in kinematic energy.
5.Finally, calculate the work done by non conservative force (Wnc) by using the work and energy principle "Wnc=∆PE+∆KE". \begin{figure*} \section{Data Collecting and Processing:} (Raw data of
rolling ball experiment) \caption(Theta=6.0, [slope of plane],mass of ball,m=0.00835 kg) \label{tab:first} \begin{tabular}{|c|c|c|c|c|c|c|} d(\SI{}{m})& h(\SI{}{m})& t(\SI{}{s})& Vf(\SI{}{m/s})& ∆PE
(\SI{}{J})& ∆ KE(\SI{}{J})& Wnc(\SI{}{J}) \\\\\\\hline 2.20&0.23&2.98&1.477&-0.0188&0.0091&-0.0097\\ 2.00&0.21&2.72&1.471&-0.0172&0.0090&-0.0082\\ 1.80&0.20&2.56&1.406&-0.0164&0.0083&-0.0081\\ 1.60&
0.18&2.43&1.317&-0.0147&0.0072&-0.0075\\ 1.40&0.16&2.12&1.321&-0.0131&0.0073&-0.0058\\ 1.20&0.14&1.89&1.269&-0.0115&0.0067&-0.0048\\ 1.00&0.13&1.91&1.047&-0.0106&0.0046&-0.0060\\ 0.80&0.11&1.65&
0.9697&-0.0090&0.0039&-0.0051\\ \end{tabular} \end{figure*} \begin{figure*} (raw data of rolling ball experiment) \caption(Theta=14.8, [slope of the plane], mass of the ball, m=0.00835 kg) \begin
{tabular}{|c|c|c|c|c|c|c|} d(\SI{}{m})& h(\SI{}{m})& t(\SI{}{s})& Vf(\SI{}{m/s})& ∆PE(\SI{}{J})& ∆ KE(\SI{}{J})& Wnc(\SI{}{J}) \\\\\\\hline 2.20&0.56&1.59&2.77&-0.0458&0.0320&-0.0138\\ 2.00&0.53&1.48
&2.70&-0.0434&0.0304&-0.0130\\ 1.80&0.49&1.48&2.43&-0.0401&0.0247&-0.0154\\ 1.60&0.44&1.50&2.13&-0.0360&0.0189&-0.0171\\ 1.40&0.39&1.40&2.00&-0.0319&0.0167&-0.0152\\ 1.20&0.34&1.11&2.16&-0.0278&
0.0195&-0.0083\\ 1.00&0.29&1.08&1.85&-0.0237&0.0143&-0.0094\\ 0.80&0.23&0.87&1.84&-0.0188&0.0141&-0.0047\\ \end{tabular} \end{figure*} \begin{figure} (raw data of rolling ball experiment) \caption
(Theta=20.0, [slope of the plane], mass of the ball, m=0.00835 kg) \begin{tabular}{|c|c|c|c|c|c|c|} d(\SI{}{m})& h(\SI{}{m})& t(\SI{}{s})& Vf(\SI{}{m/s})& ∆PE(\SI{}{J})& ∆ KE(\SI{}{J})& Wnc(\SI{}{J})
\\\\\\\hline 2.20&0.75&1.46&3.01&-0.0614&0.0378&-0.0236\\ 2.00&0.68&1.42&2.82&-0.0556&0.0332&-0.0224\\ 1.80&0.62&1.30&2.77&-0.0507&0.0320&-0.0187\\ 1.60&0.55&1.16&2.76&-0.0450&0.0318&-0.0132\\ 1.40&
0.48&1.11&2.52&-0.0393&0.0265&-0.0128\\ 1.20&0.41&1.03&2.33&-0.0336&0.0227&-0.0109\\ 1.00&0.35&0.94&2.13&-0.0286&0.0189&-0.0097\\ 0.80&0.29&0.96&1.67&-0.0237&0.0116&-0.0121\\ \end{tabular} \end
{figure} \begin{figure*} (raw data of rolling ball experiment) \caption(Theta=24.1, [slope of the plane], mass of the ball, m=0.00835 kg) \begin{tabular}{|c|c|c|c|c|c|c|} d(\SI{}{m})& h(\SI{}{m})& t
(\SI{}{s})& Vf(\SI{}{m/s})& ∆PE(\SI{}{J})& ∆ KE(\SI{}{J})& Wnc(\SI{}{J}) \\\\\\\hline 2.20&0.90&1.29&3.41&-0.0736&0.0485&-0.0251\\ 2.00&0.83&1.25&3.20&-0.0679&0.0428&-0.0251\\ 1.80&0.76&1.25&2.88&
-0.0622&0.0346&-0.0276\\ 1.60&0.68&1.15&2.78&-0.0556&0.0323&-0.0233\\ 1.40&0.60&1.02&2.74&-0.0491&0.0313&-0.0178\\ 1.20&0.51&0.93&2.58&-0.0417&0.0278&-0.0139\\ 1.00&0.42&0.96&2.08&-0.0344&0.0181&
-0.0163\\ 0.80&0.33&0.88&1.82&-0.0270&0.0138&-0.0132\\ \end{tabular} \end{figure*} \begin{figure*} \begin{center} (raw data of rolling ball experiment) \caption(Theta=30.0, [slope of the plane], mass
of the ball, m=0.00835 kg) \begin{tabular}{|c|c|c|c|c|c|c|} d(\SI{}{m})& h(\SI{}{m})& t(\SI{}{s})& Vf(\SI{}{m/s})& ∆PE(\SI{}{J})& ∆ KE(\SI{}{J})& Wnc(\SI{}{J}) \\\\\\\hline 2.20&1.11&1.10&4.00&
-0.0908&0.0668&-0.0240\\ 2.00&1.02&1.09&3.67&-0.0835&0.0562&-0.0273\\ 1.80&0.92&1.04&3.46&-0.0753&0.0500&-0.0253\\ 1.60&0.82&0.96&3.33&-0.0671&0.0463&-0.0208\\ 1.40&0.72&1.01&2.77&-0.0589&0.0320&
-0.0269\\ 1.20&0.62&0.94&2.55&-0.0507&0.0271&-0.0236\\ 1.00&0.52&0.75&2.66&-0.0425&0.0295&-0.0130\\ 0.80&0.42&0.75&2.13&-0.0344&0.0189&-0.0155\\ \end{tabular} \end{center} \end{figure*} \begin
{figure*} \caption{Graph 1: Theta=6.0} \includegraphics[width=\linewidth]{Plot_6.png} \end{figure*} \begin{figure*} \caption{Graph 2: Theta=14.8} \includegraphics[width=\linewidth]{Plot_8_copy.png} \
end{figure*} \begin{figure*} \caption{Graph 3: Theta=20.0} \includegraphics[width=\linewidth]{Plot_15.png} \end{figure*} \begin{figure*} \caption{Graph 4: Theta=24.1} \includegraphics[width=\
linewidth]{Plot_17.png} \end{figure*} \begin{figure*} \caption{Graph 5: Theta=30.0} \includegraphics[width=\linewidth]{Plot_19.png} \end{figure*} \begin{figure*} \section{Conclusion and Evaluation} \
subsection{Conclusion} In conclusion, since the work done by friction (Wnc)is negative, this lab illustrates that work done by friction which is non conservative force (Wnc) increases as the greater
distance that the ball has travelled. As for the question mentioned before, the results of the experiment proves that Work done by non conservative force, Wnc of the ball has a linear relationship
with the distance that the ball has travelled and Wnc of the ball increases as the ball has travelled for a greater distance on the plane. In addition, since the results of the rolling ball
experiment cannot be absolutely accurate, there are technical and random errors and uncertainties. In general, the data gained from the experiment follow a similar trend, which is that work done by friction (Wnc) increases as the distance increases. However, the results from a few trials are not clear and accurate (e.g. in Figure 5 (theta=30.0), when the distance is between 1.80m - 2.20m, Wnc shows a downward trend as the distance increases, which is the opposite of the conclusion). \subsection{Evaluation} Comparing the measured values to the literature values, the actual data of the experiment differ from the theoretical data because there are still weaknesses in the design and method of the experiment. But generally, the results follow a scientific and logical trend and they can be reliable, reasonable and close to the true values. However, a lot of good methods and designs for improvements can be employed in order to perform this lab better and achieve more
precise and reliable results and conclusion. \subsection{Suggestions for Improvements} \begin{center} \begin{tabular}{|c|c|c|} \label{Lab Errors and Suggestions for Further Improvements} The errors
and problems &How that error affected data&A suggestion for improvements \\\hline Time of &The Recorded Time was either &Record videos for\\ reaction&shorter or longer, kinematic energy &the movement
of \\ of releasing& KE was either higher or lower&the ball and measure\\ the ball& than it should have been& accurate time on the computer\\\hline Force of &Recorded time appeared&Put the plane on\\
Friction increases& longer, kinematic energy(KE)&a flat and stable\\ due to the&and Wnc were lower than&surface and support it\\ Tilted plane&it should have been& with heavy objects(e.g books).\\\
hline \end{tabular} \end{center} \end{figure*} \end{document} % % Please see the package documentation for more information % on the APA6 document class: % % http://www.ctan.org/pkg/apa6 % | {"url":"https://ru.overleaf.com/articles/physics-11-non-conservative-work-lab/pjjwdsxkbymc","timestamp":"2024-11-10T14:18:07Z","content_type":"text/html","content_length":"50979","record_id":"<urn:uuid:19778df1-cb8d-438a-b5de-262eabed773c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00724.warc.gz"} |
TERNARY CONDITION ( ? : )
The ternary condition is a compact and efficient method to evaluate a condition and return one of two possible results, depending on whether the condition is true or false.
It is a shortcut often used in development to avoid writing long conditional structures like if...else.
➡️ Syntax:
condition ? result_if_true : result_if_false
➡️ The Example:
(Present > 5 ? "Yes" : "No")
In this example, we will explain this simple formula:
This formula checks if the number of people present is greater than 5 and returns a response based on the result.
Let's use colored points to distinguish each part of the formula:
• 1 field 🟡: Present represents the data or value that we want to check.
• 1 operator 🟣: > is used to compare the field with a value.
• 1 ternary condition 🟣: The ? : allows you to choose between two results, depending on whether the condition is true or false.
🟰 The expected result:
• If Present = 8:
The condition becomes 8 > 5, which is true. The formula will then return "Yes".
• If Present = 3:
The condition becomes 3 > 5, which is false. The formula will then return "No".
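For readers who know a general-purpose language, the same decision can be written as a conditional expression. The snippet below is only an outside-of-Timetonic illustration of the logic, with the variable `present` standing in for the Present field.

```python
present = 8  # value of the Present field (example)
answer = "Yes" if present > 5 else "No"   # same logic as: Present > 5 ? "Yes" : "No"
print(answer)   # prints "Yes"; with present = 3 it would print "No"
```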
| {"url":"https://support.timetonic.com/hc/en-001/articles/16033671517724-TERNARY-CONDITION","timestamp":"2024-11-07T06:48:17Z","content_type":"text/html","content_length":"39454","record_id":"<urn:uuid:37462bb0-cad1-4b0d-8ed2-5d9250f58264>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00712.warc.gz"}
Natalie earns $2.50 for each CD she sells and $3.50 for each DVD she sells. Natalie sold 45 DVDs last year. She earned a total of $780 last year selling CDs and DVDs.
Natalie earns $2.50 for each cd she sells and $3.50 for each dvd she sells. Natalie sold 45 dvds last year. She earned a total of $780 last year selling cds and dvds.
Write an equation that can be used to determine the number of cds [c] Natalie sold last year.
How many cd's did Natalie sell last year?
Answer:
Equation: 2.50c + 3.50(45) = 780, where c is the number of CDs sold.
45 x 3.50 = 157.50
780 - 157.50 = 622.50
622.50 / 2.50 = 249
Natalie sold 249 CDs last year
| {"url":"http://cahayasurya.ac.id/question/15222","timestamp":"2024-11-07T18:32:09Z","content_type":"text/html","content_length":"155494","record_id":"<urn:uuid:967fa155-327a-4b4f-ac6c-693fa68affd3>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00775.warc.gz"}
Finance Part 2
Course Project Part II
You will assume that you still work as a financial analyst for AirJet Best Parts, Inc. The company is considering a capital investment in a new machine and you are in charge of making a
recommendation on the purchase based on (1) a given rate of return of 15% (Task 4) and (2) the firm’s cost of capital (Task 5).
Task 4. Capital Budgeting for a New Machine
A few months have now passed and AirJet Best Parts, Inc. is considering the purchase on a new machine that will increase the production of a special component significantly. The anticipated cash
flows for the project are as follows:
Year 1 $1,100,000
Year 2 $1,450,000
Year 3 $1,300,000
Year 4 $950,000
You have now been tasked with providing a recommendation for the project based on the results of a Net Present Value Analysis. Assume that the required rate of return is 15% and that the initial cost of the machine is $3,000,000. 1. What is the project's IRR? (10 pts)
2. What is the project’s NPV? (15 pts)
3. Should the company accept this project and why (or why not)? (5 pts)
4. Explain how depreciation will affect the present value of the project. (10 pts)
5. Provide examples of at least one of the following as it relates to the project: (5 pts each) a. Sunk Cost b. Opportunity cost c. Erosion
6. Explain how you would conduct a scenario and sensitivity analysis of the project. What would be some project-specific risks and market risks related to this project? (20 pts)
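For orientation only (this is a coursework task, so the figures below are not an answer key), one quick way to sanity-check questions 1 and 2 is to discount the four cash flows against the $3,000,000 outlay. A minimal sketch, assuming the numpy-financial package is available:

```python
import numpy_financial as npf

cash_flows = [-3_000_000, 1_100_000, 1_450_000, 1_300_000, 950_000]  # t = 0..4
rate = 0.15

npv = npf.npv(rate, cash_flows)   # first element is treated as the t=0 outlay
irr = npf.irr(cash_flows)
print(f"NPV at 15%: ${npv:,.0f}")   # roughly $450,900 (positive)
print(f"IRR: {irr:.1%}")            # roughly 22.4%, above the 15% hurdle
```

A positive NPV at the required 15% return (equivalently, an IRR above 15%) is the usual basis for recommending acceptance in question 3.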
Task 5: Cost of Capital
AirJet Best Parts Inc. is now considering that the appropriate discount rate for the new machine should be the cost of capital and would like to determine it. You will assist in the process of
obtaining this rate. 1. Compute the cost of debt. Assume AirJet Best Parts Inc. is considering issuing new bonds. Select current bonds from one of the main competitors as a benchmark. Key competitors | {"url":"https://www.studymode.com/essays/Finance-Part-2-771735.html","timestamp":"2024-11-06T23:51:38Z","content_type":"text/html","content_length":"90286","record_id":"<urn:uuid:2ead7392-d274-4a05-a36a-448449828eca>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00801.warc.gz"} |
Compassionate Inquiry: Blending Medical Research with Empathetic Care
In the multifaceted realm of statistical inference, Maximum Likelihood Estimation (MLE) has emerged as a cornerstone methodology. MLE serves as a mathematical framework for optimally estimating the
parameters of a given statistical model. The core idea of MLE is to identify the parameter values that maximize the likelihood of the observed data being generated by the model (1). This
comprehensive guide aims to shed light on the intricate details of MLE, thereby providing valuable insights to both novice and experienced statisticians.
Consider a jar filled with marbles of red and green colors. Drawing 10 marbles from this jar without peeking, you find 7 red and 3 green ones. The objective now is to estimate the actual proportion
of red marbles in the jar. In the language of statistics, the jar represents the 'model,' and the proportion of red marbles is the 'parameter' that needs estimation. MLE enables the calculation of a
'likelihood function,' which is then optimized to find the most probable proportion of red marbles based on the 7 red and 3 green marbles observed (2).
Building on the marble analogy, MLE uses the likelihood function to quantify the conditional probability of observing a given data sample under specific distribution parameters. This function
facilitates the exploration of a space filled with potential distributions and parameters, aiming to find those that maximize the likelihood of the observed data (3).
In rigorous terms, MLE aims to identify the parameter (θ) that maximizes the likelihood function (L(θ | X)), conditional on the observed data (X). The likelihood function is often expressed as L(θ |
X) = Π f(xᵢ | θ), and the objective is to find the parameter that maximizes this function. Commonly, the natural logarithm of L(θ | X) is taken to simplify computations. Optimization techniques such
as the Newton-Raphson method are frequently employed to find the maximum likelihood estimates (4).
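Returning to the marble example, the short sketch below maximizes the binomial log-likelihood for the proportion of red marbles numerically and recovers the intuitive answer of 0.7. It is a minimal illustration using SciPy's bounded scalar minimizer; the closed-form answer k/n makes it easy to check.

```python
import numpy as np
from scipy.optimize import minimize_scalar

n, k = 10, 7   # 10 marbles drawn, 7 of them red

def neg_log_likelihood(p):
    # log L(p | data) = k*log(p) + (n - k)*log(1 - p), up to an additive constant
    return -(k * np.log(p) + (n - k) * np.log(1 - p))

result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(result.x)   # ≈ 0.7, matching the closed-form MLE k/n
```

In practice the same pattern (write down the negative log-likelihood, hand it to an optimizer) carries over to models where no closed-form solution exists, which is where methods such as Newton-Raphson come in.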
For those interested in diving deeper into MLE, an array of resources like online tutorials, textbooks, and scholarly articles are available (6-10). These resources can be invaluable in overcoming
the challenges and filling the knowledge gaps commonly encountered while learning MLE.
Conceptualizing MLE as a 'tuning knob' for a statistical model can aid understanding. It is similar to fine-tuning a radio to get to the clearest station. Mastering the calculation of the likelihood
function and the optimization techniques can yield 80% of the practical utility of MLE (18-20).
The inception of MLE can be traced back to the 18th century with Daniel Bernoulli. However, it gained prominence in the early 20th century, largely due to the work of Ronald A. Fisher, making it a
staple in statistical inference methodologies (22).
Aligned with the Frequentist approach to statistics, MLE operates under the assumption of an existing 'true' parameter that generates the observed data. Ontologically, MLE posits that there is an
underlying model that is responsible for generating the observed data (23).
For practical implementations, various software packages like R, Stata, and Python offer built-in functionalities for MLE. For instance, R employs the 'optim' function, Stata uses the 'ml' command,
and Python leverages libraries such as 'scipy' (15, 26, 27).
Maximum Likelihood Estimation stands as a robust tool in modern statistics, providing a methodologically sound approach for parameter estimation. From the simple exercise of estimating the number of
red marbles in a jar to the complex mathematical derivations, MLE is foundational to statistical modeling. | {"url":"https://www.medical-research.org/2023_08_26_archive.html","timestamp":"2024-11-14T23:25:17Z","content_type":"text/html","content_length":"61004","record_id":"<urn:uuid:aed630d9-1384-4825-be52-024c42bcc267>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00856.warc.gz"} |
Interface ConditionalBlock
All Superinterfaces:
Block, org.plumelib.util.UniqueId
All Known Implementing Classes:
public interface ConditionalBlock extends Block
Represents a conditional basic block.
• Nested Class Summary
Nested classes/interfaces inherited from interface org.checkerframework.dataflow.cfg.block.Block
• Method Summary
Modifier and Type
Returns the flow rule for information flowing from this block to its else successor.
Returns the entry block of the else branch.
Returns the flow rule for information flowing from this block to its then successor.
Returns the entry block of the then branch.
Set the flow rule for information flowing from this block to its else successor.
Set the flow rule for information flowing from this block to its then successor.
Methods inherited from interface org.plumelib.util.UniqueId
getClassAndUid, getUid | {"url":"https://checkerframework.org/releases/3.45.0/api/org/checkerframework/dataflow/cfg/block/ConditionalBlock.html","timestamp":"2024-11-09T06:11:57Z","content_type":"text/html","content_length":"15447","record_id":"<urn:uuid:8fa485c4-5c55-4cd4-bf26-3f943a4b2ac9>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00046.warc.gz"} |
TGD diary: Does M^8-H duality reduce classical TGD to octonionic algebraic geometry?
I have used the last month to develop a detailed vision about M^8-H duality and now I dare to speak about a genuine breakthrough. I attach below the abstract of the resulting article.
TGD leads to several proposals for the exact solution of field equations defining space-time surfaces as preferred extremals of twistor lift of Kähler action. So called M^8-H duality is one of these
approaches. The beauty of M^8-H duality is that it could reduce classical TGD to algebraic geometry and would immediately provide deep insights to cognitive representation identified as sets of
rational points of these surfaces.
In the sequel I shall consider the following topics.
1. I will discuss basic notions of algebraic geometry such as algebraic variety, surface, and curve, rational point of variety central for TGD view about cognitive representation, elliptic curves
and surfaces, and rational and potentially rational varieties. Also the notion of Zariski topology and Kodaira dimension are discussed briefly. I am not a mathematician and what hopefully saves
me from horrible blunders is physical intuition developed during 4 decades of TGD.
2. It will be shown how M^8-H duality could reduce TGD at fundamental level to algebraic geometry. Space-time surfaces in M^8 would be algebraic surfaces identified as zero loci for imaginary part
IM(P) or real part RE(P) of octonionic polynomial of complexified octonionic variable o[c] decomposing as o[c]= q^1[c]+q^2[c] I^4 and projected to a Minkowskian sub-space M^8 of complexified O.
Single real valued polynomial of real variable with algebraic coefficients would determine space-time surface! As proposed already earlier, spacetime surfaces would form commutative and
associative algebra with addition, product and functional composition.
One can interpret the products of polynomials as correlates for free many-particle states with interactions described by added interaction polynomial, which can vanish at boundaries of CDs thanks
to the vanishing in Minkowski signature of the complexified norm q[c]q[c]^* appearing in RE(P) or IM(P) caused by the quaternionic non-commutativity. This leads to the same picture as the view
about preferred extremals reducing to minimal surfaces near boundaries of CD. Also zero energy ontology (ZEO) could emerge naturally from the failure of the number field property for quaternions at light-cone boundaries.
3. The fundamental challenge is to prove that the octonionic polynomials with real coefficients determine associative/quaternionic surfaces as the zero loci of their imaginary/real parts in
quaternionic sense. Here the intuition comes from the idea that the octonionic polynomials map from octonionic space O to second octonionic space W. Real and imaginary parts in W are quaternionic
/co-quaternionic. These planes correspond to surfaces in O defined by the vanishing of real/imaginary parts, and the natural guess is that they are quaternionic/co-quaternionic, that is, associative/co-associative.
The hierarchy of notions involved is well-ordering for 1-D structures, commutativity for complex numbers, and associativity for quaternions. This suggests a generalization of Cauchy-Riemann
conditions for complex analytic functions to quaternions and octonions. Cauchy Riemann conditions are linear and constant value manifolds are 1-D and thus well-ordered. Quaternionic polynomials
with real coefficients define maps for which the 2-D spaces corresponding to vanishing of real/imaginary parts of the polynomial are complex/co-complex or equivalently commutative/co-commutative.
Commutativity is expressed by conditions bilinear in partial derivatives. Octonionic polynomials with real coefficients define maps for which 4-D surfaces for which real/imaginary part are
quaternionic/co-quaternionic, or equivalently associative/co-associative. The conditions are now 3-linear.
In fact, all algebras obtained by Cayley-Dickson construction adding imaginary units to octonionic algebra are power associative so that polynomials with real coefficients define an associative
and commutative algebra. Hence octonion analyticity and M^8-H correspondence could generalize.
4. It turns out that in the generic case associative surfaces are 3-D and are obtained by requiring that one of the coordinates RE(Y)^i or IM(Y)^i in the decomposition Y^i=RE(Y)^i +IM(Y)^iI[4] of
the gradient of RE(P)=Y=0 with respect to the complex coordinates z[i]^k, k=1,2, of O vanishes, that is, that RE(P) is critical as a function of the quaternionic components z[1]^k or z[2]^k associated with q[1] and q[2] in the decomposition o= q[1]+q[2]I[4]; call the vanishing component X[i]. In the generic case this gives a 3-D surface.
In this generic case M^8-H duality can map only the 3-surfaces at the boundaries of CD and light-like partonic orbits to H, and only determines the boundary conditions of the dynamics in H
determined by the twistor lift of Kähler action. M^8-H duality would allow to solve the gauge conditions for SSA (vanishing of infinite number of Noether charges) explicitly.
One can also have criticality. 4-dimensionality can be achieved by posing conditions on the coefficients of the octonionic polynomial P so that the criticality conditions do not reduce the
dimension: X[i] would have possibly degenerate zero at space-time variety. This can allow 4-D associativity with at most 3 critical components X[i]. Space-time surface would be analogous to a
polynomial with a multiple root. The criticality of X[i] conforms with the general vision about quantum criticality of TGD Universe and provides polynomials with universal dynamics of
criticality. A generalization of Thom's catastrophe theory emerges. Criticality should be equivalent to the universal dynamics determined by the twistor lift of Kähler action in H in regions,
where Kähler action and volume term decouple and dynamics does not depend on coupling constants.
One obtains two types of space-time surfaces. Critical and associative (co-associative) surfaces can be mapped by M^8-H duality to preferred critical extremals for the twistor lift of Kähler
action obeying universal dynamics with no dependence on coupling constants and due to the decoupling of Kähler action and volume term: these represent external particles. M^8-H duality does not
apply to non-associative (non-co-associative) space-time surfaces except at 3-D boundary surfaces. These regions correspond to interaction regions in which Kähler action and volume term couple
and coupling constants make themselves visible in the dynamics. M^8-H duality determines boundary conditions.
5. Cognitive representations are identified as sets of rational points for algebraic surfaces with "active" points containing fermion. The representations are discussed at both M^8- and H level.
Rational points would be now associated with 4-D algebraic varieties in 8-D space. General conjectures from algebraic geometry support the vision that these sets are concentrated at
lower-dimensional algebraic varieties such as string world sheets and partonic 2-surfaces and their 3-D orbits, which can be also identified as singularities of these surfaces.
6. Some aspects related to homology charge (Kähler magnetic charge) and genus-generation correspondence are discussed. Both are central in the proposed model of elementary particles and it is
interesting to see whether the picture is internally consistent and how algebraic surface property affects the situation. Also possible problems related to h[eff]/h=n hierarchy realized in terms
of n-fold coverings of space-time surfaces are discussed from this perspective.
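As a small numerical illustration of the power-associativity invoked in point 3 above, the sketch below builds octonions by Cayley-Dickson doubling, evaluates a real-coefficient polynomial at an octonion point, and checks that powers do not depend on the bracketing. The sign convention used is one common choice (conventions differ but give isomorphic algebras), and the split into the two quaternionic halves mirrors the decomposition o = q[1] + q[2]I[4] used above.

```python
import numpy as np

def conj(x):
    y = x.copy()
    y[1:] = -y[1:]          # conjugation flips all imaginary components
    return y

def cd_mult(x, y):
    """Cayley-Dickson product for arrays of length 1, 2, 4, 8, ...
    Convention: (a, b)(c, d) = (ac - d*b, da + bc*), with * the conjugate."""
    if len(x) == 1:
        return x * y
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    return np.concatenate([cd_mult(a, c) - cd_mult(conj(d), b),
                           cd_mult(d, a) + cd_mult(b, conj(c))])

def poly(coeffs, o):
    """Evaluate sum_k coeffs[k] * o**k for real coefficients and an octonion o."""
    result = np.zeros(8); result[0] = coeffs[0]
    power = np.zeros(8); power[0] = 1.0
    for c in coeffs[1:]:
        power = cd_mult(power, o)
        result = result + c * power
    return result

o = np.arange(1.0, 9.0)                               # a generic octonion
o2 = cd_mult(o, o)
print(np.allclose(cd_mult(o2, o), cd_mult(o, o2)))    # True: powers are unambiguous
P_o = poly([1.0, -2.0, 0.5], o)                       # P(o) = 1 - 2 o + 0.5 o^2
q1, q2 = P_o[:4], P_o[4:]                             # quaternionic "real" and "imaginary" halves
print(q1, q2)
```

This only demonstrates that real-coefficient polynomials of a single octonionic argument are well defined; the associativity/co-associativity of the zero loci themselves is the nontrivial conjecture discussed above.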
In order to get more perspective I add an FB response relating to this.
Octonions and quaternions are a 20-year-old part of TGD: one of the three threads in the vision of physics as generalized number theory. The second vision is quantum physics as the geometry of WCW. The question has been how to fuse the geometric and number-theoretic visions. Algebraic geometry would do it, since it is both geometry and algebra, and it has also been part of TGD, but only now I realized how to get access to its enormous power.
Even the proposal discussed now about the algebra of octonionic polynomials with real coefficients was made about two decades ago but only now I managed to formulate it in detail. Here the general
wisdom gained from adelic physics helped enormously. I dare say that classical TGD at the most fundamental level is solved exactly.
From the point of view of pure mathematics, the generalization of complex analyticity and of the linear Cauchy-Riemann conditions to multilinear variants for quaternions, octonions and even for the entire hierarchy of algebras obtained by the Cayley-Dickson construction is a real breakthrough. Consider only the enormous importance of complex analyticity in mathematics and physics, including string models. I do not believe that this generalization has been discovered before: otherwise it would be a key part of mathematical physics. Quaternionic and octonionic analyticities will certainly mean a huge evolution in mathematics. I would never have ended up with these discoveries without TGD: TGD forced them.
At these moments I feel deep sadness, knowing that the communication of these results to colleagues is impossible in practice. This stupid professional arrogance is something which I find very difficult to accept even after 4 decades. I feel that when society pays a monthly salary to a person for being a scientist, he should feel that his duty is to be keenly aware of what is happening in his field. When some idiot proudly tells me that he reads only prestigious journals, I get really angry.
For details see the article Does M^8-H duality reduce classical TGD to octonionic algebraic geometry? or the articles Does M^8-H duality reduce classical TGD to octonionic algebraic geometry?: part I
and Does M^8-H duality reduce classical TGD to octonionic algebraic geometry?: part II.
For a summary of earlier postings see Latest progress in TGD. | {"url":"https://matpitka.blogspot.com/2017/08/does-m-8-h-duality-reduce-classical-tgd.html","timestamp":"2024-11-11T14:16:23Z","content_type":"application/xhtml+xml","content_length":"137018","record_id":"<urn:uuid:ea5b4854-fb54-40af-9a74-c66569cdcbbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00305.warc.gz"} |
We introduce a new, non-parametric method to infer deprojected 3D mass profiles $M(r)$ of galaxy clusters from weak gravitational lensing observations. The method assumes spherical symmetry and a
moderately small convergence, $\kappa \lesssim 1$. The assumption of spherical symmetry is an important restriction, which is, however, quite common in practice, for example in methods that fit
lensing data to an NFW profile. Unlike other methods, our method relies on spherical symmetry only at radii larger than the radius $r$ at which the mass $M$ is inferred. That is, the method works
even if there is a non-symmetric inner region. We provide an efficient implementation in Julia code that runs in a few milliseconds per galaxy cluster. We explicitly demonstrate the method by using
data from KiDS DR4 to infer mass profiles for two example clusters, Abell 1835 and Abell 2744, finding results consistent with existing literature.
The Aether-Scalar-Tensor (AeST) theory is an extension of General Relativity (GR) which can support Modified Newtonian Dynamics (MOND) behaviour in its static weak-field limit, and cosmological
evolution resembling ΛCDM. We consider static spherically symmetric weak-field solutions in this theory and show that the resulting equations can be reduced to a single equation for the gravitational
potential. The reduced equation has apparent isolated singularities at the zeros of the derivative of the potential and we show how these are removed by evolving, instead, the canonical momentum of
the corresponding Hamiltonian system that we find. We construct solutions in three cases: (i) in vacuum outside a bounded spherical object, (ii) within an extended prescribed source, and (iii) for an
isothermal gas in hydrostatic equilibrium, serving as a simplified model for galaxy clusters. We show that the oscillatory regime that follows the Newtonian and MOND regimes, obtained in previous
works in the vacuum case, also persists for isothermal spheres, and we show that the gas density profiles in AeST can become more compressed than their Newtonian or MOND counterparts. We construct
the Radial Acceleration Relation (RAR) in AeST for isothermal spheres and find that it can display a peak, an enhancement with respect to the MOND RAR, at an acceleration range determined by the
value of the AeST weak-field mass parameter, the mass of the system and the boundary value of the gravitational potential. For lower accelerations, the AeST RAR drops below the MOND expectation, as
if there is a negative mass density. Similar observational features of the galaxy cluster RAR have been reported. This illustrates the potential of AeST to address the shortcomings of MOND in galaxy
clusters, but a full quantitative comparison with observations will require going beyond the isothermal case.
We have developed a precise dictionary between the spectrum of primordial density fluctuations and the parameters of the effective field theory (EFT) of inflation that determine the primordial power
spectrum (PPS). At lowest order the EFT contains two parameters: the slow-roll parameter $\epsilon$, which acts as an order parameter, and the speed of sound $c_s$. Applying second-order perturbation
theory, we provide maps from the PPS to the EFT parameters that are precise up to the cube of the fractional change in the PPS $(\Delta \mathcal{P}/\mathcal{P})^3$, or less than $1\%$ for spectral
features that modulate the PPS by $20\%$. While such features are not required when the underlying cosmological model is assumed to be $\Lambda$CDM they are necessary for alternative models that have
no cosmological constant/dark energy. We verify the dictionary numerically and find those excursions in the slow-roll parameter that reproduce the PPS needed to fit Planck data for both $\Lambda$ and
no-$\Lambda$ cosmological models.
Reconstructions of the primordial power spectrum (PPS) of curvature perturbations from cosmic microwave background anisotropies and large-scale structure data suggest that the usually assumed power-law PPS has localised features (up to $\sim 10\%$ in amplitude), although of only marginal significance in the framework of $\Lambda$CDM cosmology. On the other hand, if the cosmology is taken to be Einstein-de Sitter, larger features in the PPS (up to $\sim 20\%$ in amplitude) are required to accurately fit the observed acoustic peaks. Within the context of single clock inflation, we show that any given reconstruction of the PPS can be mapped onto functional parameters of the underlying effective theory of the adiabatic mode within a 2nd-order formalism, provided the best-fit fractional change of the PPS, $\Delta P_{\mathcal{R}}/P_{\mathcal{R}}$, is such that $(\Delta P_{\mathcal{R}}/P_{\mathcal{R}})^3$ falls within the $1\,\sigma$ confidence interval of the reconstruction for features induced by variations of either the sound speed $c_\mathrm{s}$ or the slow-roll parameter $\epsilon$. Although there is a degeneracy amongst these functional parameters (and the models that project onto them), we can identify simple representative inflationary models that yield such features in the PPS. Thus we provide a dictionary (more accurately, a thesaurus) to go from observational data, via the reconstructed PPS, to models that reproduce them to per cent level precision.
We consider the possibility that the primordial curvature perturbation is direction-dependent. To first order this is parameterised by a quadrupolar modulation of the power spectrum and results in
statistical anisotropy of the CMB, which can be quantified using `bipolar spherical harmonics'. We compute these for the Planck DR2-2015 SMICA map and estimate the noise covariance from Planck Full
Focal Plane 9 simulations. A constant quadrupolar modulation is detected with 2.2 σ significance, dropping to 2σ when the primordial power is assumed to scale with wave number k as a power law. Going
beyond previous work we now allow the spectrum to have arbitrary scale-dependence. Our non-parametric reconstruction then suggests several spectral features, the most prominent at k ∼ 0.006 Mpc^−1. When a constant quadrupolar modulation is fitted to data in the range 0.005 ⩽ k/Mpc^−1 ⩽ 0.008, its preferred directions are found to be related to the cosmic hemispherical asymmetry and the CMB
dipole. To determine the significance we apply two test statistics to our reconstructions of the quadrupolar modulation from data, against reconstructions of realisations of noise only. With a test
statistic sensitive only to the amplitude of the modulation, the reconstructions from the multipole range 30 ⩽ ℓ ⩽ 1200 are unusual with 2.1σ significance. With the second test statistic, sensitive
also to the direction, the significance rises to 6.9σ. Our approach is easily generalised to include other data sets such as polarisation, large-scale structure and forthcoming 21-cm line
observations which will enable these anomalies to be investigated further.
In stochastic quantisation, quantum mechanical expectation values are computed as averages over the time history of a stochastic process described by a Langevin equation. Complex stochastic
quantisation, though theoretically not rigorously established, extends this idea to cases where the action is complex-valued by complexifying the basic degrees of freedom, all observables and
allowing the stochastic process to probe the complexified configuration space. We review the method for a previously studied one-dimensional toy model, the U(1) one link model. We confirm that
complex Langevin dynamics only works for a certain range of parameters, misestimating observables otherwise. A curious effect is observed where all moments of the basic stochastic variable are
misestimated, although these misestimated moments may be used to construct, by a Taylor series, other observables that are reproduced correctly. This suggests a subtle but not completely resolved
relationship between the original complex integration measure and the higher-dimensional probability distribution in the complexified configuration space, generated by the complex Langevin process.
Of the four exclusive normed division algebras, only the real and complex numbers prevail in both mathematics and physics. The noncommutative quaternions and the nonassociative octonions have found
limited physical applications. In mathematics, division algebras unify both classical and exceptional Lie algebras with the exceptional ones appearing in a table known as the magic square generated
by tensor products of division algebras. This work reviews the normed division algebras and the magic square as well as necessary preliminaries for its construction. Space-time transformations, pure
super Yang-Mills theories in space-time dimensions D = 3, 4, 6, 10, dimensional reduction and truncation of supersymmetry are also described here by the four division algebras. Supergravity theories,
seen as tensor products of super Yang-Mills theories, are described as tensor products of division algebras leading to the identi cation of a magic square of supergravity theories with their
U-duality groups as the magic square entries, providing applications of all division algebras to physics and suggesting division algebraic underpinnings of supersymmetry. Other curious uses of
octonions are also mentioned.
| {"url":"https://akademskiimenik.ba/profil/1346","timestamp":"2024-11-07T19:20:08Z","content_type":"text/html","content_length":"61482","record_id":"<urn:uuid:6989dfe0-3c02-458d-a4a4-7ce568e28b1f>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00026.warc.gz"}
The Value of Short Volatility Strategies
Executive Summary
■ Beware of “derivatives of derivatives.” When evaluating whether a given volatility strategy is appropriate for their portfolio, investors should seek to understand the primary drivers of returns.
■ Put writing strategies can deliver equity-like returns over the long term with less sensitivity to market valuations and smaller drawdowns compared to the equity market.
■ Only twice have equity valuations been higher than they are today—in 1929 and in 1999—but volatility is no longer cheap. We think this should prompt investors to look at index put writing as a
substitute for equities.
With the sharp rise in both realized volatility and the VIX index of implied volatility at the beginning of February, the topic of volatility trading has taken center stage. Investors have been duly
inundated with commentary on volatility and volatility related products. However, among the deluge of market chatter, and plenty of confusion, we think it is worth taking a step back to consider
three broader questions that pertain to the role that equity volatility can play in institutional portfolios.^1 First, how should investors think about the plethora of volatility linked products
currently offered? Second, how can investors incorporate volatility based strategies into their portfolios without taking undue risks? And third, are these strategies attractive given where we are in
the market cycle? We tackle each question in turn.
Not all volatility products are created equal
There are, broadly speaking, two categories of short volatility strategies that are accessible to institutional investors. The first category consists of plain vanilla strategies in which the
underlying asset is a standard asset, such as a broad equity index. Common strategies in this category include selling index put options, writing covered call options and selling both puts and calls
(e.g., straddles). Expected levels of future volatility determine the price of the put and call options when they are sold, but the outcome of these strategies is determined by the value of the
underlying equity index when the options expire.
When the driver of returns is the performance of an equity index over the life of the options, then this primary risk could be hedged with old fashioned equities. Such a strategy is only indirectly
exposed to the level of stock market volatility as a secondary risk.^2 As we will discuss below, index put writing and similar strategies are most appropriately viewed as equity replacements given
their meaningful exposure to broad equity indices.
The second class of short volatility strategies includes those in which the underlying “asset” is volatility itself. Common strategies in this category include selling variance swaps and shorting VIX
futures. The returns to these strategies are determined by the difference between the level of volatility when the trade is made and the level of volatility when the derivatives expire. As the profit
or loss for these strategies depends on a future level of volatility^3 of an equity index and not the value of the equity index itself, these can be viewed as derivatives of derivatives or
“derivatives squared.” The now infamous exchange-traded note (ETN), which traded under the ticker XIV, was based on a strategy that combined shorting VIX futures with a very specific daily
rebalancing rule.^4
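To make the "derivatives squared" point concrete, the toy calculation below contrasts a daily-rebalanced -1x note on a volatility futures index with the index itself when a single large spike occurs. The daily returns are purely illustrative round numbers, not the actual XIV mechanics or the realized February 2018 path.

```python
# Daily returns of a hypothetical short-term volatility futures index.
index_returns = [0.01, -0.02, 0.015, 0.90, -0.10]   # day 4: a 90% one-day spike

nav = 1.0     # -1x daily-rebalanced note, starting NAV
level = 1.0   # the futures index itself, starting level
for r in index_returns:
    nav *= max(1.0 - r, 0.0)   # inverse exposure is reset every day; floored at zero
    level *= 1.0 + r

print(f"futures index level: {level:.2f}")   # about 1.72
print(f"-1x daily note NAV:  {nav:.2f}")     # about 0.11: the spike day alone erases 90% of NAV
```

The asymmetry is the point: the note's loss on the spike day cannot be recovered by an equal-sized subsequent decline in volatility, because the exposure is rebalanced on a much smaller capital base.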
Generally speaking, if the strategy is primarily a bet on future realized or future implied volatility, then the underlying asset is itself, in essence, a derivative. The primary risk for these
strategies is changes in levels of volatility. Hedging this risk would require trading volatility directly and so the strategy is exposed to the volatility of volatility as a secondary risk. As
recent events have clearly demonstrated, volatility can easily double or even triple in a single day’s trading session, exposing such products to explosive risks. It is no coincidence then that the
recent rapid rise in the VIX was catastrophic for several of these complex products. | {"url":"https://api.advisorperspectives.com/commentaries/2018/02/23/the-value-of-short-volatility-strategies","timestamp":"2024-11-10T04:34:39Z","content_type":"text/html","content_length":"134098","record_id":"<urn:uuid:e6e34885-5c9d-44d0-950e-fec529a85216>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00686.warc.gz"} |
Wave Optics Mind Map for JEE, NEET & Class12 - Download PDF
Wave Optics Mind Map
Wave optics, also known as physical optics, focuses on studying light phenomena like polarization, diffraction, and interference. It explores situations where the simple ray models of geometric
optics don’t apply. This branch of optics examines the wave properties of light.
In wave optics, we sometimes use ray optics to make estimates about the light field on a surface. We then combine these estimates when dealing with mirrors, lenses, or openings to figure out how
light scatters or passes through. We’ll delve deeper into wave optics for IIT JEE preparation in this section.
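As a quick taste of the kind of calculation wave optics enables, the sketch below evaluates Young's double-slit fringe width β = λD/d for typical classroom values; the numbers are illustrative, not from any particular textbook exercise.

```python
wavelength = 600e-9    # λ = 600 nm (orange-red light)
slit_sep = 0.5e-3      # d = 0.5 mm between the slits
screen_dist = 1.0      # D = 1 m from slits to screen

beta = wavelength * screen_dist / slit_sep    # fringe width β = λD/d
print(f"fringe width ≈ {beta * 1e3:.2f} mm")  # prints ≈ 1.20 mm
```

Doubling the slit separation halves the fringe width, which is why interference fringes are only visible when the slits are very close together.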
Wave Optics Mind Map – Download PDF
Wave Optics Mind Map
Wave Optics Theories
Wave optics tells us about a famous disagreement between two big groups of scientists who wanted to understand light. One group thinks light acts like particles, while the other thinks it acts like waves.
Sir Isaac Newton believed in the idea that light is made of tiny particles called corpuscles. He said these particles travel very fast from a light source and produce vision when they strike our eyes. With his particle theory, Newton could explain things like reflection and refraction. However, he couldn't explain other light behaviors like interference, diffraction, and polarization. One big problem with Newton's theory was that it predicted light should speed up in denser materials, whereas light actually slows down there compared to air or a vacuum.
Wave Optics FAQs
What is called wave optics?
Wave optics deals with the study of light as waves rather than particles.
What is the principle of wave optics?
The principle of wave optics is that light waves can undergo interference, diffraction, and polarization.
What is the optical wave theory?
The optical wave theory explains how light behaves as waves when traveling through different mediums.
What is wave optics class 12 physics?
In Class 12 physics, wave optics covers topics like interference, diffraction, and polarization of light.
Which chapter comes under wave optics?
In Class 12 physics, the chapter on wave optics covers topics related to the behavior of light waves.
What is the wave theory Class 12?
The wave theory in Class 12 physics explains how light behaves as waves and the phenomena associated with it.
What is a wavefront Class 12 Ncert?
In Class 12 NCERT, a wavefront is defined as a surface containing all the points that are in the same phase of a wave. | {"url":"https://infinitylearn.com/surge/mind-map/physics-wave-optics/","timestamp":"2024-11-06T04:22:13Z","content_type":"text/html","content_length":"174434","record_id":"<urn:uuid:da173942-d602-43bb-b9e3-d71d271e3866>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00356.warc.gz"} |
How to Find the Generalized Eigenvector in a Matrix ODE?
In summary, the conversation discusses finding the generalized eigenvector matrix for a set of ODE's represented in matrix format. The matrix has algebraic multiplicity 3 and geometric multiplicity
2. The conversation also addresses a possible error in the journal manuscript, as well as the issue of choosing eigenvectors when there are degenerate eigenvalues. The final conclusion is that the
sixth column of the eigenvector matrix is not a random vector, but one that allows for the calculation of a Jordan chain.
TL;DR Summary
To find the generalized eigenvectors of a 6x6 matrix
I have a set of ODE's represented in matrix format as shown in the attached file. The matrix A has algebraic multiplicity equal to 3 and geometric multiplicity 2. I am trying to find the generalized eigenvector via (A-λI)w = v, where w is the generalized eigenvector and v is the eigenvector found from (A-λI)v = 0. But I am not able to get the eigenvector matrix M as shown in the attached file.
Any help could be useful. Thanks,
I'm a little confused, it says the eigenvalues are ##\pm k^2## but then the Jordan normal form only has ##\pm k## on the diagonal. Aren't the diagonal elements supposed to be the eigenvalues?
Would you be a little more specific about the problem you're having? You say you can't find the matrix of eigenvectors. Does that mean you are unable to calculate any particular eigenvector? If I
understand your question, we can set aside the generalized eigenvectors for now, correct?
Office_Shredder said:
I'm a little confused, it says the eigenvalues are ##\pm k^2## but then the Jordan normal form only has ##\pm k## on the diagonal. Aren't the diagonal elements supposed to be the eigenvalues?
This is part of a manuscript published in the journal. I think they have mentioned the eigenvalues wrongly as k^2. When I calculated, it's only k. I hope the Jordan matrix is correct.
Haborix said:
Would you be a little more specific about the problem you're having? You say you can't find the matrix of eigenvectors. Does that mean you are unable to calculate any particular eigenvector? If I
understand your question, we can set aside the generalized eigenvectors for now, correct?
Thanks Haborix. Specifically, I cannot find the first and sixth columns of eigenvectors in matrix M. When I set up the characteristic equation (A-λI)X = 0 for each eigenvalue, both of these vectors satisfy it upon back substitution, but I cannot find them directly. Or have the authors taken an arbitrary vector?
Haborix said:
Would you be a little more specific about the problem you're having? You say you can't find the matrix of eigenvectors. Does that mean you are unable to calculate any particular eigenvector? If I
understand your question, we can set aside the generalized eigenvectors for now, correct?
For λ = -sqrt(α^2+β^2), solving the characteristic equation in MATLAB, the fourth column vector in M is displayed as the solution. However, if we do it manually, we get four equations, as shown in the attached figure, and the first equation repeats three times, meaning infinite solutions. The sixth column vector will satisfy the system. But is it just a random vector? Or can it be obtained as a solution?
When you have degenerate eigenvalues you usually pick three eigenvectors such that they are all mutually orthogonal. I think there are some typos in your original attachment in post #1. In a few
instances if a ##k## were a ##k^2## then two eigenvectors would be orthogonal. The big-picture message is that there is freedom in choosing the eigenvectors for a given eigenvalue when its multiplicity is greater than 1.
Alwar said:
But is it just a random vector? Or can it be attained as a solution.
I solved for the two eigenvectors ##\vec v_1, \vec v_2## for ##\lambda = -k## the usual way, and I found that the system ##(A-\lambda I)\vec x = \vec v_i## was inconsistent for both eigenvectors. But
that was okay because any linear combination of the two is still an eigenvector, and there is one that results in a system with a solution, namely the sixth column of ##M##. So it's not a random
vector, but the one that allows you to calculate a Jordan chain.
I found the vector by setting up an augmented matrix where the seventh column was a linear combination of the two eigenvectors and row-reduced until I ended up with a row of zeros in the left six
columns. Then I solved for coefficients that caused the corresponding value in the seventh column to vanish.
By the way, I noticed a typo: the bottom diagonal element of ##J## should be ##-k##, not ##k##.
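For readers following along at home, here is a rough numpy sketch of that procedure. The actual 6x6 matrix from the attachment is not reproduced in the thread, so the code below uses a stand-in 3x3 matrix with the same kind of degeneracy (algebraic multiplicity 3, geometric multiplicity 2); the idea, picking the combination of eigenvectors that lies in the column space of A − λI, is the same.

```python
import numpy as np

# Stand-in matrix: eigenvalue lam with algebraic multiplicity 3 and
# geometric multiplicity 2, hidden behind a random change of basis.
lam = 2.0
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 0.0],
              [0.0, 0.0, lam]])
P = np.random.default_rng(0).normal(size=(3, 3))
A = P @ J @ np.linalg.inv(P)
B = A - lam * np.eye(3)

# Ordinary eigenvectors: a basis of the null space of B (2-dimensional here).
U, s, Vt = np.linalg.svd(B)
small = s < 1e-10 * s.max()
eigvecs = Vt[small].T            # columns are eigenvectors v1, v2
left_null = U[:, small]          # basis of the null space of B.T

# B w = c1*v1 + c2*v2 is solvable exactly when the right-hand side is
# orthogonal to the left null space of B; pick a nontrivial (c1, c2).
M = left_null.T @ eigvecs
c = np.linalg.svd(M)[2][-1]      # null vector of M
v_chain = eigvecs @ c            # the eigenvector that starts a Jordan chain

# The generalized eigenvector is any particular solution of B w = v_chain.
w = np.linalg.lstsq(B, v_chain, rcond=None)[0]
print(np.linalg.norm(B @ w - v_chain))   # ~0: the chain equation is consistent
```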
vela said:
I solved for the two eigenvectors ##\vec v_1, \vec v_2## for ##\lambda = -k## the usual way, and I found that the system ##(A-\lambda I)\vec x = \vec v_i## was inconsistent for both eigenvectors.
But that was okay because any linear combination of the two is still an eigenvector, and there is one that results in a system with a solution, namely the sixth column of ##M##. So it's not a
random vector, but the one that allows you to calculate a Jordan chain.
I found the vector by setting up an augmented matrix where the seventh column was a linear combination of the two eigenvectors and row-reduced until I ended up with a row of zeros in the left six
columns. Then I solved for coefficients that caused the corresponding value in the seventh column to vanish.
Hi Vela,
Thanks for your effort. Can you upload the solution of how you got the sixth column of eigenvector. I hope I will be able to grasp what you have done.
Sorry, no. I did the calculations using Mathematica and didn't save the notebook.
Alwar said:
Hi Vela,
Thanks for your effort. Can you upload the solution of how you got the sixth column of eigenvector. I hope I will be able to grasp what you have done.
You can probably find a solution using mathworld
FAQ: How to Find the Generalized Eigenvector in a Matrix ODE?
1. What is a generalized eigenvector?
A generalized eigenvector is a vector that satisfies a specific condition when multiplied by a matrix. It is a generalization of the concept of eigenvectors, which are only defined for square matrices.
2. How is a generalized eigenvector different from a regular eigenvector?
A regular eigenvector is only defined for square matrices and satisfies the equation Av = λv, where A is the matrix, v is the eigenvector, and λ is the corresponding eigenvalue. A generalized
eigenvector, on the other hand, satisfies the equation (A - λI)^k v = 0, where k is the size of the matrix and I is the identity matrix.
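A tiny, hypothetical 2x2 example (not taken from the thread) makes the definition concrete:

```python
import numpy as np

# Jordan block with eigenvalue 3: e2 is a generalized eigenvector,
# since (A - 3I)^2 e2 = 0, but (A - 3I) e2 != 0, so it is not ordinary.
A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
N = A - 3.0 * np.eye(2)
e2 = np.array([0.0, 1.0])
print(N @ e2)        # [1. 0.] -> not an ordinary eigenvector
print(N @ N @ e2)    # [0. 0.] -> satisfies (A - 3I)^2 v = 0
```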
3. What is the significance of generalized eigenvectors in linear algebra?
Generalized eigenvectors play a crucial role in the theory of linear algebra, particularly in the study of diagonalizable and non-diagonalizable matrices. They allow us to find a basis for the
generalized eigenspace, which is the set of all vectors that satisfy the generalized eigenvector equation for a given eigenvalue.
4. How are generalized eigenvectors used in practical applications?
In practical applications, generalized eigenvectors are used to solve systems of differential equations, compute matrix exponentials, and diagonalize non-diagonalizable matrices. They also have
applications in fields such as physics, engineering, and economics.
5. Can a matrix have both regular and generalized eigenvectors?
Yes, a matrix can have both regular and generalized eigenvectors. In fact, a matrix may have more generalized eigenvectors than regular eigenvectors, as the generalized eigenvector equation is more
general and can have multiple solutions for a given eigenvalue. | {"url":"https://www.physicsforums.com/threads/how-to-find-the-generalized-eigenvector-in-a-matrix-ode.1005533/","timestamp":"2024-11-04T23:04:14Z","content_type":"text/html","content_length":"130009","record_id":"<urn:uuid:befdf680-3723-4c1a-b594-1ce7cec8fc09>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00314.warc.gz"} |
String algebras over local rings: admissibility and biseriality.
String algebras are classically defined using path algebras over fields. Path algebras have also been considered over any noetherian local ground ring. Raggi-Cardenas and Salmeron generalised the definition of
an admissible ideal in this context. A generalisation of string algebras from my PhD thesis likewise replaced the ground field with a local ring. In this talk I will explain how this definition
relates to admissibility, and yields biserial rings in a sense used by Kirichenko and Yaremenko. I will also provide examples coming from metastable homotopy theory following work of Baues and Drozd.
Time permitting, I will present an example of a clannish algebra over a local ring that is related to modular representations of the Mathieu 11-group, following Roggenkamp. This is based on an arXiv
preprint 2305.12885. | {"url":"https://projects.au.dk/homologicalalgebra/seminaraarhus/event/activity/6542?cHash=1107c10d11e1fae2c12a29fd27d49cd7","timestamp":"2024-11-10T05:45:52Z","content_type":"text/html","content_length":"16780","record_id":"<urn:uuid:5a8aa104-9e0b-494c-9a7c-2492696f346d>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00807.warc.gz"} |
Modeling Inductors with LTSpice
This article, published by All About Circuits and authored by Ignacio de Mendizábal, discusses how to model inductors using the LTspice circuit simulation software.
Inductors are pillars among electronic components. In this article, we’ll learn how we can model inductors using LTspice, a circuit simulation program where the accuracy of the simulations depends on
the accuracy of the models used.
Here, we’ll discuss three different simulation models, starting with the lowest complexity (linear), discussing a middle ground (non-linear), and moving to the highest complexity (the CHAN model).
Along the way, you’ll also learn some tricks about LTspice.
Inductor Saturation Current and Hysteresis
Inductors present an upper limit to the storage of magnetic energy. When the saturation current is reached, the inductor loses magnetic properties such as permeability. When this happens, inductors
are not able to continue storing energy.
This situation is reversed as soon as the current circulating through the inductor is reduced. This concept of saturation must be considered in models in order to have accurate simulations for
applications such as power supplies, where magnetic components are crucial.
A particularity of inductors is that even if we remove the magnetizing current circulating through an inductor, the magnetic flux density associated with the inductor’s core material does not
decrease to zero on its own. We need to apply current in the opposite direction to restore the inductor to a non-magnetized state. This phenomenon is called hysteresis and is one of the main
characteristics that determine the application of a magnetic material.
As demonstrated in the visual above, we can see that the amount of flux present in an inductor depends not only on the applied current but also on the previous state of the inductor.
Resistance, Capacitance, and Temperature in Inductors
Ideally, inductors present only inductance, which is measured in henries (H). In the real world, however, we must contend with parasitics, which are always present in inductive components. Because
these parasitics make an inductor’s behavior non-ideal, we can’t overlook them when simulating an inductor.
While we won’t spend much time in this article discussing the magnetic properties of inductors, here’s a list of relevant parameters that will help us improve the accuracy of our model when we
simulate inductors in LTspice:
• Rseries: Series resistance due to the finite resistivity of the copper (also known as DC resistance)
• Rparallel: Parallel resistance caused by core losses
• Cparallel: Capacitance of consecutive windings
• Temperature coefficient: A consideration to account for the fact that inductors can alter their magnetic properties by self-heating (due to the current that circulates through them and parasitic resistances)
Adding these values into a simulation will help you yield more realistic results that will more closely correspond to the real-life behavior of a given inductor.
Simulation Option 1: A Simple Linear Model
A first model includes all the parameters listed above and executes a simulation as it happens in a linear circuit.
Luckily, it’s not necessary for us to add each parasitic component by hand. In order to make simulations run faster, LTspice includes internal models.
If you right-click on an inductor, you will see the following window:
Parasitic components of inductors in LTspice
Here’s a trick for LTspice! If you do not introduce any value for the parallel resistance, LTspice will include a default value. If you’d like to deactivate this option, go through the Tools menu and
select the Control Panel. From here, select the Hacks! tab, as shown below.
Option for default parallel resistance (Rpar) in LTspice
You’ll want to uncheck the box that says “Supply a min. inductor damping if no Rpar is given”.
Simulation Option 2: A Non-Linear Model
When linear models are not enough, LTspice provides the means to consider inductor saturation. We can define the function that determines the inductor flux.
In order to define the inductor flux, we need to modify the netlist. This can be done by pressing the “CTRL” key and then right-clicking an inductor. This will bring up the following window:
The variable “x” refers to the inductor current. We can enter our own information into the “Value” field and then press the “OK” button. Now, in order to verify our input, we select “View” in the
menu and then select “SPICE Netlist”. This brings us to the schematic editor.
In our example here, our simulated circuit consists of an inductor in series with a current source.
The voltage across an inductor can be expressed as V = L·(di/dt).
Since the current source ramps the current at a steady, known rate, the inductance can be obtained directly by measuring the voltage of the inductor (node ind).
For clarity, we plot the expression: V(ind)/1V to remove voltage units. Don’t forget to set the vertical scale to linear.
We can see why the inductance decreases in this way if we recall that the inductor’s magnetic flux is equal to inductance times current. Current increases at a steady rate during the 1-second
simulation—but, because of saturation, the magnetic flux does not increase steadily. The decrease in inductance reflects this change in the relationship between current and magnetic flux.
For further analysis, we can plot the inductance as a function of the current. We make the current increase from -3 amps to 3 amps in steps of 0.01.
This circuit yields the following plot:
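If you want to sanity-check the same behaviour outside the simulator, the short Python sketch below does the equivalent calculation with a made-up, tanh-shaped flux expression. The function and its parameters are illustrative assumptions, not LTspice's internal model.

```python
import numpy as np

L0   = 100e-6      # assumed small-signal inductance, henries
Isat = 1.5         # assumed saturation current, amperes

def flux(i):
    # Saturating flux (webers): linear near zero, flat beyond ~Isat.
    return L0 * Isat * np.tanh(i / Isat)

i = np.linspace(-3.0, 3.0, 601)     # same -3 A to 3 A sweep as the plot
L = np.gradient(flux(i), i)         # differential inductance dPhi/di

print(L[len(i) // 2])   # near i = 0: close to L0
print(L[-1])            # near 3 A: the inductance has collapsed
```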
Simulation Option 3: The CHAN Model
When designing our magnetics, we need to control all the parameters of the inductors we discussed earlier. Sometimes it can be difficult to model all of them in LTspice or any other simulation tool.
There is a third model available in LTspice, the CHAN model, created by John Chan and discussed in a research paper entitled “Nonlinear transformer model for circuit simulation.” The accuracy of this
model has been widely proven and it can perform the modelization of the hysteresis loop with only three parameters:
• Coercive force (Hc), in Amp-turns/meter
• Residual flux density (Br), in Tesla
• Saturation flux density (Bs), in Tesla
Also, the mechanical aspects of the inductor need to be added:
• Magnetic Length (Lm), in meters
• Length of the gap (Lg), in meters
• Cross-sectional area (A), in square meters
• Number of turns (N)
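Before running the simulation, it can be worth estimating the unsaturated inductance implied by these mechanical parameters. The sketch below uses the standard gapped-core reluctance formula with invented example values; the relative permeability is an extra assumption, since it is not among the CHAN parameters listed above.

```python
from math import pi

mu0 = 4 * pi * 1e-7   # H/m
mur = 2000            # assumed relative permeability of the core material
N   = 50              # number of turns
A   = 50e-6           # cross-sectional area, m^2
Lm  = 60e-3           # magnetic path length, m
Lg  = 0.5e-3          # gap length, m

# Small-signal inductance of a gapped core (reluctances in series).
L = mu0 * N**2 * A / (Lg + Lm / mur)
print(L)              # roughly 0.3 mH with these example numbers
```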
Let’s see what happens to the same circuit we used before if we include all these parameters.
And now we plot the inductance as a function of current:
Inductors are complex and critical components in electronics. LTspice allows designers to make the design cycle easier by providing fast and accurate methods to simulate them. Depending on the
complexity of your circuit, you can use one of the three models presented here.
The circuits in this article are quite basic, but they are a good starting point for further analysis. There is a tradeoff between speed and accuracy, but LTspice is generally quite fast, so it is
always advisable to use the most precise model when possible. | {"url":"https://passive-components.eu/modeling-inductors-with-ltspice/","timestamp":"2024-11-07T05:41:04Z","content_type":"text/html","content_length":"459056","record_id":"<urn:uuid:c4709386-3cae-421c-94c1-0898ce2cf6e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00765.warc.gz"} |
Getting this msg "Excel ran out of resources while attempting to calculate one or more formulas." | Microsoft Community Hub
Forum Discussion
Getting this msg "Excel ran out of resources while attempting to calculate one or more formulas."
I am having an issue with an Excel sheet which keeps showing me this error: "Excel ran out of resources while attempting to calculate one or more formulas. As a result, these formulas can not be evaluated."
I have never seen this before; please guide.
• In cell D36 you have the formula =30:294001, which instructs Excel to return all rows from 30 to 294001. That range includes the row the formula itself sits in, so it is a kind of circular reference and a never-ending loop.
If you remove this formula it works.
□ SergeiBaklan - Amazing it worked well for me. Thanks a million.
Happy Holidays and Happy New Year in advance!
Any chance you could help me figure out what's wrong with my workbook? I am getting the message as well. I am a complete novice at Excel but am trying to put together a workbook to calculate show estimates/costs for venue events. I have been reading lots of answers on how to fix this error but can't figure it out. Any help is appreciated!
☆ I see no problems with your file, it works correctly (at least without mentioned error).
□ SergeiBaklan hi! I have the same issue. How do I find the cell containing the error?
Thank you
☆ First issue in 2021
Second one in 2019
I changed both on 2021-12-13 but have no idea what it should be.
• Cannot move from cell to cell without an error message showing: "Excel ran out of resources". Can you please advise?
• SergeiBaklan I am also getting that error. Could you please take a look and advise/fix anything you see wrong?
• I am also having this error, which never occurred before I upgraded from Excel 11 recently. I am on a Mac.
□ I commented formulas in 'Food What'!C91 (circular reference) and in 'Payroll 2021'!AA40 (reference on entire column, '='C:\Users\LEFDPDirector\Downloads\[report1628717061286.xls]
Not sure what the formulas should be, but now there is no "out of resources" error.
☆ @sergei bakln thank you, this is so very helpful. What did you do to find the problems? They were ruining my experience of Excel.
• SergeiBaklan Hi. How do you find the formula errors in the spreadsheet? I'm getting the same error message as the original post. Thank you for your time.
• I too am having the same error message; any assistance would be much appreciated.
□ It was a circular reference in November!J222. Change the ranges here from IF(COUNTIF(J$5:J$230,... to IF(COUNTIF(J$5:J$220,...
Also in
(all commented)
In November!J237 formula contains ...*(H242ROW(J$5:J$218).. . Commented it.
Regarding Ecology Planner 270821.xlsx, this file has two problems. One is circular references, a problem addressed by Sergei Baklan. Another is the "Excel ran out of resources ..." error, and this problem has been left unaddressed so far.
Actually, the style reduction tool sees exactly one "slow calculation" in it, the one triggering the blocking error message in Excel.
It's in September!N266
Replace the formula in it :
and that fixes the problem. Hope it helps.
Any chance you could help me figure out what's wrong with my workbook? I am getting the message as well.
I am trying to put together a stock dashboard. One of the functions I want to implement is to sort out my current holdings. But my formula (on Dashboard!B5, highlighted in yellow) returned the
same error that everybody here did. Not sure why it is happening, since there should be no cyclic referencing and the lady on YouTube could do it. Attaching the link for the YouTube tutorial as well. Many thanks in advance!
□ Not sure what you'd like to calculate, but this formula
won't work; I guess it cycles internally.
• If you change this
to a proper formula or constant, it will work.
□ SergeiBaklan Hi Sergei, appreciate all the help you provide here. I tried getting the inquire add-in to find this error myself, but not having any luck. Are you able to identify where the
issue is in the attached spreadsheet? It is so simple and there are not any complicated formulas.
In D825, the formula is {=D795:DD764844:D803} which is non-sense, and most likely a typo.
□ That is here
I disabled above formula, please see attached.
Sir, I am getting the "Excel ran out of resources" error from the following file.
Can you please help me rectify it? Further, can you please explain how to diagnose it?
□ I can do nothing without the file. If you have Inquire in your version of Excel, you may analyse the workbook and check which array formula is wrong.
[Haskell] Applicative translucent functors in Haskell
oleg at pobox.com oleg at pobox.com
Fri Aug 27 19:51:43 EDT 2004
ML is known for its sophisticated, higher-order module system, which
is formalized in Dreyer-Crary-Harper language. A few months ago Ken
Shan showed a complete translation of that language into System Fw:
Ken Shan has concluded that languages based on Fw, such as Haskell
with common extensions, already support higher-order modular
programming. In fact, his paper showed a sample Haskell translation of
the most complex and interesting ML module expression: a translucent,
applicative functor. Different instantiations of the functor with
respect to type-compatible arguments are type-compatible; and yet the
functor hides the representation details behind the unbreakable
abstraction barrier.
Dreyer-Crary-Harper language and System Fw are deliberately
syntactically starved, to simplify formal analyses. Therefore, using
the results of Ken's paper _literally_ may be slightly
cumbersome. This message is an attempt to interpret some of Ken's
results in idiomatic Haskell with the full use of type classes. We
will also show that type sharing constraints can be expressed in a
scalable manner, so that the whole translation is practically
usable. Thus we can enjoy the sophisticated, first-class higher-order
module system in today's Haskell. No new extensions are required;
furthermore, even undecidable instances (let alone overlapping
instances) are not used. This message has been inspired by Ken Shan's
paper and has greatly benefited from several conversations with him.
Throughout this message we will be using an example of polymorphic
sets parameterized by an element-comparison function. Our example is
an extended version of the example in Ken's paper. We will also be
using OCaml syntax for module expressions, which, IMHO, makes a little
bit more sense. We abuse the terminology and use the word 'module' to
also mean what ML calls 'structure'.
This message is both Haskell and OCaml literate code. It can be loaded in
GHCi or Hugs -98 as it is. To get the OCaml code, please filter the text
of the message with "sed -n -e '/^}/ s/^} //p'"
> {-# OPTIONS -fglasgow-exts #-}
> module Functors where
Our goal in this message is to produce implementations of a SET
} module type SET = sig
} type element
} type set
} val empty : set
} val add : element -> set -> set
} val member : element -> set -> bool
} end;;
This is the OCaml declaration of a type of a collection made of two
type declarations and three value declarations. Such collections are
called structures (or, modules, by abuse of the terminology) and
their types are called signatures. The concise description of the
OCaml module language can be found at
We should point out that the types 'element' and 'set' are abstract --
the right-hand side of the corresponding type declarations is
empty. In ML, a single colon adds a type annotation whereas a double
colon is a list constructor -- in Haskell, it is just the opposite.
The corresponding SET signature in Haskell is well-known:
> class SET element set | set -> element where
> empty :: set
> add :: element -> set -> set
> member :: element -> set -> Bool
We shall build an implementation of SET parameterized by the element
comparison function. The comparison function's interface is to be
described by its own signature, ORD. We shall define two different
instances of ORD and instantiate the SET functor with those two
instances. To make the game more interesting, our implementation of
the ORD interface will itself be parameterized by the ENUM interface,
which maps elements in a totally ordered domain into integers. So, the
comparison function will use that mapping to derive the element
comparison. Thus our game plan is:
- introduce the ENUM interface,
- define two implementations of the ENUM interface,
- introduce two different ENUM->ORD transparent functors,
- instantiate the functors yielding four implementations of the
ORD interface,
- introduce a translucent ORD->SET applicative functor,
- instantiate it obtaining different implementations of SET
- run a few tests, to illustrate applicativity of the functor and
the abstraction barrier
We start with the ENUM interface:
} module type ENUM = sig
} type e
} val fromEnum: e -> int
} end;;
and its two implementations. One of them is
} module IntEnum : (ENUM with type e = int) = struct
} type e = int
} let fromEnum x = x
} end;;
We wrote a module expression -- a collection of one type definition
and one (functional) value definition, and told the compiler its
explicit type, ENUM. The stuff after 'with' is a type sharing
constraint: the type of IntEnum is ENUM such that the type ENUM.e is
int. The explicit type annotation can be dropped:
} module CharEnum = struct
} type e = char
} let fromEnum x = Char.code x
} end;;
The corresponding code in Haskell is
Indeed, the standard Enum type class in Haskell serves our
purpose. The class declaration plays the role of the signature
declaration, with instances providing the implementation. It's all
part of the Prelude, so we have nothing to write here.
One may ask -- what if I want several instances that correspond to the
same type? How to do that in Haskell? Please read on.
We use the ENUM module to build element comparison functions -- or
modules of the signature:
} module type ORD = sig
} type el
} val compr: el -> el -> bool
} end;;
The interface includes the type 'el' and the value of the comparison
function. Our comparison function is actually an equality predicate --
but this is enough for our purposes here. This message isn't about the
efficient implementation of sets. We promised to build two
implementations of ORD, both of which parameterized by ENUM. The
first implementation is:
} module Ord_LR(Elt: ENUM) : (ORD with type el = Elt.e) = struct
} type el = Elt.e
} let compr x y = (Elt.fromEnum x) = (Elt.fromEnum y)
} end;;
The type sharing constraint became more interesting: the result of the
Ord_LR functor (a mapping from modules to modules) is the module of
the signature ORD whose type 'el' is not explicitly specified and
remains abstract. Yet the type is constrained to be the same as the
element type of the argument ENUM module -- whatever that happens to
be. The other functor is:
} module Ord_LE(Elt: ENUM) = struct
} type el = Elt.e
} let compr x y = ((Elt.fromEnum x) mod 2) = ((Elt.fromEnum y) mod 2)
} end;;
Now, our ML code gives two different implementations of ORD, both of
which may have the same element type. How can we do that in Haskell?
Very simple: by introducing a discriminator label type.
> class ORD label elem where
> compr:: label -> elem -> elem -> Bool
We translate Ord_LR and Ord_LE into the following Haskell code:
> -- Labels
> data LR = LR
> data LE = LE
> instance Enum a => ORD LR a where
> compr _ a b = fromEnum a == fromEnum b
> instance Enum a => ORD LE a where
> compr _ a b = even (fromEnum a) == even (fromEnum b)
As before, a class declaration corresponds to an ML signature, and
instances correspond to implementations of the signature. The two
Haskell instances above are actually parameterized implementations --
that is, functors -- parameterized by Enum. The type sharing
constraint ENUM.e = ORD.el is expressed by using the same type
variable 'a' in the instance declarations. We thus observe that an
abstract type of ML corresponds to a type variable (here:
uninstantiated type variable) and the type sharing is expressed
by sharing of the type variable names.
Our goal, the ORD->SET functor, is as follows:
} module SETF(Elt: ORD) : (SET with type element = Elt.el) = struct
} type element = Elt.el
} type set = element list
} let empty = []
} let member element set = List.fold_left
} (fun seed e -> Elt.compr element e || seed) false set
} let add element set = if member element set then set else (element::set)
} end;;
Recall that the signature SET had two abstract types: 'element' and
'set'. The type sharing constraint makes the former to be the same as
ORD.el. The type SET.set remains abstract. Here's the complete type
that OCaml infers for SETF
module SETF :
  functor (Elt : ORD) ->
    sig
      type element = Elt.el
      and set
      val empty : set
      val add : element -> set -> set
      val member : element -> set -> bool
    end
Although our implementation of the functor used "element list" for the
type set, in the result of the functor, 'set' remains abstract. The
precise implementation of 'set' is hidden behind the abstraction
barrier. The functor SETF is therefore translucent -- the user can see
SET.element (if he can see ORD.el) -- but SET.set is hidden. The
functor SETF is also applicative -- if Ord1 == Ord2, then SETF(Ord1)
== SETF(Ord2). Applications of an applicative functor to type
compatible arguments yield type-compatible results. Let us illustrate
that point first, before considering the Haskell implementation.
} module Set1 = SETF(Ord_LR(IntEnum));;
} module Set2 = SETF(Ord_LR(IntEnum));;
We create two Sets, by instantiating SETF over ORD, which, in turn,
are instantiated over IntEnum. We have nested functor
applications. Both functors are applicative, so that Set1 and Set2 are
type compatible. That is, we can freely use the methods of Set1 on
Set2, and vice versa:
} let s1 = Set1.add 1 Set1.empty;;
} let s2 = Set2.add 2 s1;;
} let r1 = Set1.member 1 s2;;
Set 's1' was created by the module Set1 -- and yet we use Set2.add to
add more elements to it. We then have Set1.member check for
membership in the resulting set.
Of course if we apply the functor to type-incompatible modules, the
results aren't type-compatible either. Given
} module Set3 = SETF(Ord_LE(IntEnum));;
} let s3 = Set3.add 3 Set3.empty;;
An attempt to evaluate
let r2 = Set1.member 3 s3;;
This expression has type Set3.set = SETF(Ord_LE(IntEnum)).set
but is here used with type Set1.set = SETF(Ord_LR(IntEnum)).set
leads to a type error. And so does this:
} module Set4 = SETF(Ord_LE(CharEnum));;
} let s4 = Set4.add 'a' (Set4.add 'b' Set4.empty);;
} let r4 = Set4.member 'c' s4;;
let rw = Set1.add 1 Set4.empty;;
This expression has type Set4.set = SETF(Ord_LE(CharEnum)).set
but is here used with type Set1.set = SETF(Ord_LR(IntEnum)).set
The error message itself is remarkable. It says that the type
SETF(Ord_LR(IntEnum)).set is different from SETF(Ord_LE(IntEnum)).set
and from the type SETF(Ord_LE(CharEnum)).set. And yet
SETF(Ord_LR(IntEnum)).set produced in two different applications of
SETF(Ord_LR(IntEnum)) is the same. However, 'set' itself remains
abstract. Indeed, if we attempt to break the barrier and access 'set'
as if it were a list (which it is, operationally), we get a type
List.length Set1.empty;;
This expression has type Set1.set = SETF(Ord_LR(IntEnum)).set
but is here used with type 'a list
How can we implement such an applicative and translucent functor in
Haskell? As Ken Shan pointed out, we need higher-ranked types, and
First, we introduce an auxiliary class, over a higher-ranked type
> class SETE r where
> lempty :: r l elem
> ladd :: (ORD l elem) => elem -> r l elem -> r l elem
> lmember:: (ORD l elem) => elem -> r l elem -> Bool
The class describes the signature of a transparent functor: notice the
parameterization by ORD. Now we have to claim that the result of
applying SETE to the ORD `module' is a SET. In Haskell terms, we have
to make the result of `applying' SETE an instance of the class SET:
> newtype SetESet a = SetESet a
> instance (SETE setlabel, ORD ordlabel element) =>
> SET element (SetESet (setlabel ordlabel element)) where
> empty = SetESet $ lempty
> add e (SetESet s) = SetESet $ ladd e s
> member e (SetESet s) = lmember e s
This is clearly the boilerplate. One may hope that it could be
automated, eventually. It is worth pointing out the sharing
constraint: the fact that 'element' type of a set is the same as the
element type of ORD is expressed here by using the same type variable
'element' in both places. So, the type sharing constraint of ML is
expressed by sharing a name in Haskell.
Now, we need to make the 'set' type hidden while at the same type
making sure we do not hide the element type:
> data SETFE = forall f. SETE f => SETFE (forall a b. (SetESet (f a b)))
As before, an abstract type of ML corresponds to a type variable in
Haskell -- only here it is an explicitly quantified type variable. The
existential quantification provides the unbreakable abstraction
barrier. As Ken Shan repeatedly emphasized in his paper, the insight
here is skolemization, needed to universally quantify _under_ the
existential quantification. And for that, we need a higher-ranked type.
We haven't yet provided the implementation for the functor SETE:
> newtype SLE l e = SLE [e]
> instance SETE SLE where
> lempty = SLE []
> ladd e s@(SLE set) = if lmember e s then s else SLE$ e:set
> lmember e (SLE set::SLE l e) =
> not (null (filter (compr (undefined::l) e) set))
And here is the applicative functor itself:
> setfe = SETFE (SetESet(lempty::SLE l e))
We must emphasize that it is an ordinary Haskell value. Thus we gained
not only a higher-order module system, but also a first-class one.
We only need to introduce a way to appropriately instantiate the functor
> inst:: w (f a b) -> a -> b -> w (f a b)
> inst f a b = f
and we are ready for examples:
> testf = case setfe of
> (SETFE fs) -> let set1_empty = inst fs LR (undefined::Int)
> s1 = add 1 (add 0 set1_empty)
> rm1 = member 2 s1
Let us make another instantiation of the functor, to the same
arguments: LR and Int:
> set2_empty = inst fs LR (undefined::Int)
> set2_add e s = add e (s `asTypeOf` set2_empty)
> set1_member e s = member e (s `asTypeOf` set1_empty)
> s2 = set2_add 3 s1
> r1 = set1_member 1 s2
we observe applicativity: two instantiations of the functor to type
compatible arguments are type compatible: set1_empty and set2_empty
are of the same type. Of course, if we instantiate the functor to type
incompatible arguments, the results are not type compatible:
> set3_empty = inst fs LE (undefined::Int)
> s3 = add 1 (add 0 set3_empty)
> r3 = member 2 s3
> -- rw1 = set1_member 3 s3
If we uncomment the last line, we get a type error. The error message
is especially revealing in Hugs:
ERROR - Type error in application
*** Expression : set1_member 3 s3
*** Term : s3
*** Type : SetESet (_3 LE Int)
*** Does not match : SetESet (_3 LR Int)
It is worth comparing with the corresponding OCaml message cited
earlier. Note that the type indicate the particular implementation of
the set is abstract: printed as _3 here. We can't know which
implementation of set we're dealing with.
> set4_empty = inst fs LE 'x'
> s4 = add 'a' (add 'b' set4_empty)
> r4 = member 'c' s4
> -- rw2 = set1_member 'a' s4
Again, if we try to break the abstraction barrier, as in the following
statement, we just get a type error.
> -- rw3 = null x where (SetESet x) = set1_empty
> in (rm1,r1,r3,r4)
The example illustrates that Haskell already has a higher-order module
language integrated with the core language and with the module
checking being a part of the regular typechecking.
The upshot of the translation:
ML signatures correspond to Haskell type classes, and their
implementations to the instances
Abstract types in ML correspond to either uninstantiated or
explicitly quantified type variables in Haskell
Type sharing is expressed via type variable name sharing
Functor (signatures or structures) correspond to Haskell (class
declarations or instances) with type class constraints
The argument of functors is expressed via types, with additional labels
when needed for finer differentiation
Functor applications are done by instance selection based on types
at hand plus the additional labels
OCaml signature attribution operation -- casting the module or
the result of the functor into a desired signature and hiding
the extras -- sometimes involves additional tagging/untagging tricks
(cf. SetESet). This tagging, done via newtype, is syntactic only and
has no run-time effect.
Hiding of information (`sealing', in ML-speak) is done by
existential quantification. To gain applicativity, we quantify over
a higher-ranked type variable (Skolem function proper).
More information about the Haskell mailing list | {"url":"https://mail.haskell.org/pipermail/haskell/2004-August/014463.html","timestamp":"2024-11-12T16:51:38Z","content_type":"text/html","content_length":"20423","record_id":"<urn:uuid:677ff697-1c87-4db1-85b9-48be35430071>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00547.warc.gz"} |
期刊界 (All Journals): search the world's journals, disseminate academic results, professional journal search, journal informatization, academic search
The statistical theory of certain complex wave interference phenomena, like the statistical fluctuations of transmission and reflection of waves, is of considerable interest in many fields of
physics. In this article, we shall be mainly interested in those situations where the complexity derives from the quenched randomness of scattering potentials, as in the case of disordered
conductors, or, more in general, disordered waveguides. In studies performed in such systems one has found remarkable statistical regularities, in the sense that the probability distribution for various macroscopic quantities involves a rather small number of relevant physical parameters, while the rest of the microscopic details serves as
mere “scaffolding”. We shall review past work in which this feature was captured following a maximum-entropy approach, as well as later studies in which the existence of a limiting distribution, in
the sense of a generalized central-limit theorem, has been actually demonstrated. We then describe a microscopic potential model that was developed recently, which gives rise to a further
generalization of the central-limit theorem and thus to a limiting macroscopic statistics.
New polymer inclusion membranes (PIMs) containing 18-membered crown ethers and dialkylnaphthalenesulfonic acid are proposed for Sr2+ and Pb2+ removal from nitric acid solutions. The influence of source phase composition and stripping agents was characterized and permeability coefficients were calculated. The PIMs are easy to prepare and may be useful in separation and concentration procedures for these cations from complex mixtures such as nuclear waste. Long-term stability was obtained for at least several weeks of constant use during which no significant change of permeability was observed.
The heat capacity of water in the form of hexagonal ice was measured between T = 0.5 K and T = 38 K using a semi-adiabatic calorimetric method. Since heat capacity data below T = 2 K have never been measured for water, this study presents the lowest measured values of the specific heat of water to date. Fits of the data were used to generate thermodynamic functions of water at smoothed temperatures between 0.5 K and 38 K. Both our experimental heat capacities and calculated enthalpy increments agree well with previously published values and thus supplement other studies.
Lipid A is the causative agent of Gram-negative sepsis, a leading cause of mortality among hospitalized patients. Compounds that bind lipid A can limit its detrimental effects. Polymyxin B, a
cationic peptide antibiotic, is one of the simplest molecules capable of selectively binding lipid A and may serve as a model for further development of lipid A binding agents. However, association
of polymyxin B with lipid A is not fully understood, primarily due to the low solubility of lipid A in water and inhomogeneity of lipid A preparations. To better understand lipid A-polymyxin B
interaction, pure lipid A derivatives were prepared with incrementally varied lipid chain lengths. These compounds proved to be more soluble in water than lipid A, with higher aggregation
concentrations. Isothermal titration calorimetric studies of these lipid A derivatives with polymyxin B and polymyxin B nonapeptide indicate that binding stoichiometries (peptide to lipid A
derivative) are less than 1 and that affinities of these binding partners correlate with the aggregation states of the lipid A derivatives. These studies also suggest that cooperative ionic
interactions dominate association of polymyxin B and polymyxin B nonapeptide with lipid A.
The quark-meson RPA equations, which describe small oscillations of a bound quark-meson system about the stationary configuration, are derived through linearization of the classical time-dependent
Euler-Lagrange problem. The method has an immediate application in phenomenological quark-meson models for baryons. It provides a test of the classical stability of these systems. A number of
measurable quantities, such as the spectrum of excited states and the meson-soliton phase shifts can be calculated. We demonstrate the QMRPA on a simple, 3 + 1 dimensional model of the nucleon — the
chiral quark-meson model.
Let X be a compact metric space with no isolated points. Then we may embed X as a subset of the Hilbert cube Q so that the only homeomorphism of X onto itself that extends to a homeomorphism of Q is the identity homeomorphism. Such an embedding is said to be rigid. In fact, there are uncountably many rigid embeddings of X into Q such that, for any two distinct embeddings in the family and any homeomorphism of X, the intersection of the corresponding copies of X is a Z-set in Q and a nowhere dense subset of each of them.
In an earlier paper, formulae for det A as a ratio of products of principal minors of A were exhibited, for any given symmetric zero-pattern of A^-1. These formulae may be presented in terms of a spanning tree of the intersection graph of certain index sets associated with the zero pattern of A^-1. However, just as the determinant of a diagonal and of a triangular matrix are both the product of the diagonal entries, the symmetry of the zero pattern is not essential for these formulae. We describe here how analogous formulae for det A may be obtained in the asymmetric-zero-pattern case by introducing a directed spanning tree. We also examine the converse question of determining all possible zero patterns of A^-1 which guarantee that a certain determinantal formula holds.
Diallyloxyphosphorylation of nucleoside hydroxyls followed by palladium(0)-catalyzed deallylation provides a new, general method for the preparation of the 3′- and 5′-monophosphates. | {"url":"https://slh.alljournals.cn/search.aspx?subject=mathematical_chemical&major=sx&orderby=referenced&field=institution&q=%20Provo","timestamp":"2024-11-08T18:09:52Z","content_type":"application/xhtml+xml","content_length":"57333","record_id":"<urn:uuid:c6f3c1a3-f47b-462a-b2e4-c6be1e004eb8>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00265.warc.gz"} |
Codes for letters using binary representation
How do you think a computer knows which letter to show on the screen?
Discussion may start at the 26 letters of the English alphabet, and then expand to other characters on the keyboard, including capital letters, digits and punctuation. Students may be aware that
other languages can have thousands of characters, and the range of characters is also expanding as emoticons are invented!
There is also an online interactive version of the binary cards here, from the Computer Science Field Guide, but it is preferable to work with physical cards.
Can we match letters to numbers so that we can send coded messages to each other?
How many letters are there in the alphabet? Let’s count them together on our alphabet cards.
How can we represent the letters using numbers? (Guide students to the idea of using 1 for A, 2 for B, and so on.)
We can represent numbers using binary, but in the last lesson with 4 bits, what was the biggest number we could represent? (15)
How can we represent bigger numbers? (Add a card). How many dots on the extra card? (16)
Give out the cards, and have the students place them on the table in the correct order (16, 8, 4, 2, 1).
Now give them a number by saying "No, Yes, No, No, No" for the 5 cards. Ask them how many dots this produces. (The "Yes" is for the 8 card, so it's the number 8.) Which letter is number 8? ("H").
This can be written on the board.
Now give the next number, "No, Yes, No, No, Yes" (9). Which letter is number 9? ("I", which can be written after the "H".)
Now try sending a different word to the class. The Binary to Alphabet resource below shows the binary patterns for the 26-letter alphabet; you can use yes/no for 1/0, but you could also use other
ways of saying them, such as "on/off", "up/down", or even "one/zero". In particular, it may be helpful to represent a number higher than 16 to give them experience with the 5th bit.
Note that if your local alphabet is slightly different (e.g. has diacritics such as macrons or accents.) then you may wish to adapt the code to match the common characters; this issue is also
considered below.
Let’s work this out how to write our own binary code for “dad”.
You sing/say the alphabet slowly and I’ll count how many letters go by until we get to ‘D’ (D is the 4th letter).
You sing the alphabet slowly and I’ll count how many letters go by until we get to ‘A’. A is the 1st letter .
Hang on! Haven’t we already written the binary code for D? We can reuse that! Computer Scientists always look for ways to reuse any work they have done before. It’s much faster this way.
Now let’s try this with someone’s name? Whose name should we translate into binary numbers?
Choose a student and work through the steps to create their name.
To reinforce students' alphabet knowledge, you could translate all students' names into binary numbers on pieces of card and display them around the room.
Some languages have more or fewer characters, which might include those with diacritic marks such as macrons and accents. If students ask about an alphabet with more than 32 characters, then 5 bits
won't be sufficient. Also, students may have realised that a code is needed for a space (0 is a good choice for that), so 5 bits only covers 31 alphabet characters.
A typical English language keyboard has about 100 characters (which includes capital and lowercase letters, punctuation, digits, and special symbols). How many bits are needed to give a unique number
to every character on the keyboard? (Typically 7 bits will be enough, since 7 bits provide 128 different codes.)
Now have students consider larger alphabets. How many bits are needed if you want a number for each of 50,000 Chinese characters? (16 bits allows for up to 65,536 different representations).
It may be a surprise that only 16 bits is needed for tens of thousands of characters. This is because each bit doubles the range, so you don't need to add many bits to cover a large alphabet. This is
an important property of binary representation that students should become familiar with.
The rapid increase in the number of different values that can be represented as bits are added is exponential growth i.e. it doubles with each extra bit. After doubling 16 times we can represent
65,536 different values, and 20 bits can represent over a million different values. Exponential growth is sometimes illustrated with folding paper in half, and half again. After these two folds, it
is 4 sheets thick, and one more fold is 8 sheets thick. 16 folds will be 65,536 sheets thick! In fact, around 6 or 7 folds is already impossibly thick, even with a large sheet of paper.
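For teachers who want to check these figures programmatically, the short Python sketch below computes the number of bits needed for a few example alphabet sizes (the sizes are just the examples discussed above):

```python
from math import ceil, log2

# Number of bits needed to give every character in an alphabet its own code.
for size in [26, 32, 50, 100, 50_000]:
    print(size, "characters need", ceil(log2(size)), "bits")
```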
Throughout the lessons there are links to computational thinking. Below we've noted some general links that apply to this content.
Teaching computational thinking through CSUnplugged activities supports students to learn how to describe a problem, identify what are the important details they need to solve this problem, break it
down into small logical steps so that they can then create a process which solves the problem, and then evaluate this process. These skills are transferable to any other curriculum area, but are
particularly relevant to developing digital systems and solving problems using the capabilities of computers.
These Computational Thinking concepts are all connected to each other and support each other, but it’s important to note that not all aspects of Computational Thinking happen in every unit or lesson.
We’ve highlighted the important connections for you to observe your students in action. For more background information on what our definition of Computational Thinking is see our notes about
computational thinking.
We used multiple algorithms in this lesson: one to convert a letter into a decimal number and then into a binary number, and vice versa. These are algorithms because they are a step-by-step process
that will always give the right solution for any input you give it as long as the process is followed exactly.
Here’s an algorithm for converting a letter into a decimal number:
Choose a letter to convert into a decimal number. Find the letter's numerical position in the alphabet as follows:
- Say A (the first letter in the alphabet)
- Say 1 (the first number in our sequence of numbers)
- Repeat the following instructions until you come to the letter you are looking to convert:
  - Say the next letter in the alphabet
  - Say the next number (counting up by 1)
- The number you just said is the decimal number that your letter converts to.
For example, to convert the letter E, the algorithm would have you counting A,1; B, 2; C, 3; D, 4; E, 5.
(A more efficient algorithm would have a table to look up, like the one created at the start of the activity, and most programming languages can convert directly from characters to numbers, with the
notable exception of Scratch, which needs to use the above algorithm.)
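For teachers who want to check answers quickly, here is a small Python sketch of both directions of the conversion. It uses the lesson's A=1 ... Z=26 convention with 5 bits per letter, and the direct character-to-number conversion that most programming languages (unlike Scratch) provide; it handles letters only, not spaces or punctuation.

```python
def letter_to_binary(letter):
    number = ord(letter.upper()) - ord('A') + 1   # position in the alphabet
    return format(number, '05b')                  # decimal -> 5-bit binary

def binary_to_letter(bits):
    return chr(int(bits, 2) + ord('A') - 1)

codes = [letter_to_binary(c) for c in "DAD"]
print(codes)                                        # ['00100', '00001', '00100']
print(''.join(binary_to_letter(b) for b in codes))  # DAD
```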
The next algorithm takes the algorithm from lesson 1 which we use to represent a decimal number as a binary number:
Can students create instructions for, or demonstrate, converting a letter into a decimal number, and then convert a decimal number into binary; are they able to show a systematic solution?
This activity is particularly relevant to abstraction, since we are representing written text with a simple number, and the number can be represented using binary digits, which, as we know from
lesson 1, are an abstraction of the physical electronics and circuits inside a computer. We could also expand our abstraction because we don’t actually have to use 0s and 1s to represent our message.
We could use any two values, for example you could represent your message by flashing a torch on and off, or drawing a line of squares and triangles on the whiteboard.
Abstraction helps us simplify things because we can ignore the details we don’t currently need to know. Binary number representation is an abstraction that hides the complexity of the electronics and
hardware inside a computer that stores data. Letters are an abstraction that a human can make sense of quickly; talking about the letter H is generally more meaningful than calling it "the 8th letter of the alphabet", and when we are reading or speaking we don't need to know it is the 8th letter anyway.
Have students create instructions for, or demonstrate how to represent new language elements, such as a comma.
Recognising patterns in the way the binary number system works helps give us a deeper understanding of the concepts involved, and assists us in generalising these concepts and patterns so that we can
apply them to other problems.
Have students decode a binary message from another student, by converting the binary numbers into text to view the message. Can they recognise patterns in the binary to anticipate what the word is?
Can they work with a different set of letters using the same principles?
Logical thinking means recognising what logic you are using to work these things out. If you memorise that the letter H is represented as binary 01000, that's not as generally applicable as learning the logic that any character can be represented by the process described in this activity.
Observe the systems students have created to translate their letters into binary and vice versa. What logic has been applied to these? Are they efficient systems?
An example of decomposition is breaking a long message such as 00001000100001011001 into 5-bit components (00001 00010 00010 11001), each of which can now be converted to a letter. The 5-bit
components are then decomposed into the value of individual bits.
Can students convert a coded message with no spacing in it?
An example of evaluation is working out how many different characters can be represented by a given number of bits (e.g. 5 bits can represent 26 characters comfortably, but 6 bits are needed if you
have more than 32 characters, and 16 bits are needed for a language with 50,000 characters).
Can a student work out how many bits are needed to represent the characters in a language with 50 characters? (6 bits are needed.) How about representing emojis, if you have about 1,000 emojis available? (10 bits will be needed for each one.)
E ^ x = x ^ 2
∫ e^(x^2) · x dx ≡ ∫ e^u · ½ du: all that I am doing here is replacing x^2 with u (so that du = 2x dx and x dx = ½ du), and also replacing x dx with the expression found earlier. ½ ∫ e^u du: this is
just a bit of tidying up with the ½ pulled in front of the integral. As you can see, all you have to do now is to integrate e^u with respect to u, which is a simple and straightforward integration
(giving ½ e^(x^2) + C).
The common log function log(x) has the property that if log(c) = d then 10^d = c.
Sep 16, 2009: e^x = y + √(y^2 + 1) (dump the negative root, since e^x > 0), hence x = ln[y + √(y^2 + 1)].
Sep 17, 2016: (dy)/(dx) = 2x·e^(x^2) by the chain rule — in order to differentiate a function of a function, say y = f(g(x)), where we have to find dy/dx, we need to (a) substitute u = g(x), which
gives us y = f(u). A Taylor series is an expansion of some function into an infinite sum of terms, where each term has a larger exponent, like x, x^2, x^3, etc.
This is a typical transcendental equation which one solves using the Lambert W function. So what we need to do is to change the form of this equation in a manner which is identical to f(x) = x·e^x,
and get the solution through x = W(…).
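As a quick numerical check (my own addition, not part of the quoted snippets): the real solution of e^x = x^2 can be written via the Lambert W function as x = -2·W(1/2), which SciPy can evaluate.

from scipy.special import lambertw
import numpy as np

x = -2 * lambertw(0.5).real      # principal branch of W, real part
print(x)                          # ~ -0.7035
print(np.exp(x), x**2)            # both ~ 0.4949, confirming e^x = x^2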
To find the solution of the equation e^x + e^(-x) = 2, we have to see the problem as simply as possible: change e^(-x) to 1/e^x and add it to e^x. Multiplying through by e^x gives
e^(2x) - 2e^x + 1 = 0, i.e. (e^x - 1)^2 = 0, so e^x = 1 and x = 0.
2015. 5. 8. A power like x 2/3 should be typed as x^(2/3); with more complicated fractions you have to use parentheses. Writing e^x as the series 1 + x + x^2/2! + x^3/3! + x^4/4! + …, the equation
e^x = x^2 is an equation of infinite degree and has an infinite number of (complex) solutions. log_2(x) + log_2(x-3) = 2. Solution: using the product rule, log_2(x·(x-3)) = 2. Changing the logarithm
form according to the logarithm definition: x·(x-3) = 2^2 = 4, or x^2 - 3x - 4 = 0. Solving the quadratic equation: x_{1,2} = [3 ± √(9+16)]/2 = [3 ± 5]/2 = 4, -1; only x = 4 is valid, since both
logarithm arguments must be positive.
The Integral Calculator has to detect these cases and insert the multiplication sign. The parser is implemented in JavaScript, based on the Shunting-yard algorithm, and can run directly in the
browser. Instead, you should type it like this: (x^2+1)/(x-5).
plot x e^-x, x^2 e^-x, x=0 to 8.
How do you solve e^x = 2? (Precalculus, Properties of Logarithmic Functions, Natural Logs.) 1 Answer, Jim G., Sep 4, 2016: x = ln 2 — make use of the natural logarithm. Here integral
∫ e^x (x^2+1)/(x+1)^2 dx: by converting to an integral of the form ∫ e^x [f(x) + f'(x)] dx = e^x·f(x) + c, adding and subtracting 1 in the numerator and split writing of the given integral and
follo… A specialty in
mathematical expressions is that the multiplication sign can be left out sometimes, for example we write "5x" instead of "5*x". | {"url":"https://forsaljningavaktierdrbz.web.app/6475/48537.html","timestamp":"2024-11-15T00:12:36Z","content_type":"text/html","content_length":"16931","record_id":"<urn:uuid:a32ca611-413d-45f3-97aa-69128a5fec4f>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00041.warc.gz"} |
This string library provides generic functions for string manipulation, such as, finding and extracting substrings, and pattern matching. The patterns are described in the subsequent sections.
When indexing a string, the first character is at position 1. indexes can be negative and are interpreted as indexing backwards, from the end of the string. Therefore, the last character is at
position -1.
string.find(s, pattern [, init [, plain]])
This function looks for the first match of the pattern in the string s. If it finds a match, then find returns the indexes of s where this occurrence starts and ends. Otherwise, it returns nil.
A third, optional numerical argument init specifies where to start the search. Its default value is 1 and may be negative. A value of true as a fourth optional argument plain turns off the pattern
matching facilities, so the function does a plain "find substring" operation, with no characters in pattern being considered "magic".
You must specify init if plain is specified. If the pattern has captures, then in a successful match the captured values are also returned, after the two indexes.
local x, y = string.find('hello world', 'world')
local a, b, c, d = string.find('my street number is 54a', '(%d+)(%a+)')
string.match(s, pattern [, init])
Looks for the first match of pattern in the string s. If it finds one, then match returns the captures from the pattern. Otherwise it returns nil. If pattern specifies no captures, then the whole
match is returned. A third optional numerical argument init specifies where to start the search. Its default value is 1 and may be negative.
string.gmatch(s, pattern)
Returns an iterator function, in each call it returns the next captures from pattern over string s. If pattern specifies no captures, then the whole match is produced in each call.
The example collects all pairs key=value from the given string into a dictionary table.
local t = {}
local s = 'from=world, to=moon'
for k, v in string.gmatch(s, '(%w+)=(%w+)') do
t[k] = v
end
string.sub(s, i [, j])
Returns the substring of s that starts at i and continues until j. i and j may be negative. If j is absent, then it is assumed to be equal to -1 (which is the same as the string length). In
particular, the call string.sub(s,1,j) returns a prefix of s with length j, and string.sub(s, -i) returns a suffix of s with length i.
string.gsub(s, pattern, repl [, n])
Returns a copy of s in which all occurrences of the pattern have been replaced by a replacement string specified by repl. It also returns a second value containing the total number of substitutions
made. The optional last parameter n limits the maximum number of substitutions to occur. For instance, when n is 1 only the first occurrence of pattern is replaced.
The string repl is used for replacement. The character % works as an escape character: Any sequence in repl of the form %n, with n between 1 and 9, stands for the value of the n-th captured substring
(see the following example). The sequence %0 stands for the whole match. The sequence %% stands for a single %.
local x = string.gsub('hello world', '(%w+)', '%1 %1')
local x = string.gsub('hello world', '%w+', '%0 %0', 1)
local x = string.gsub('hello world from Lua', '(%w+)%s*(%w+)', '%2 %1')
string.len(s)
Receives a string and returns its length. The empty string '' has length 0.
string.rep(s, n)
Returns a string that is the concatenation of n copies of the string s.
string.reverse(s)
Returns a string that is the string s reversed.
string.lower(s)
Receives a string and returns a copy of this string with all uppercase letters changed to lowercase. All other characters are left unchanged.
string.upper(s)
Receives a string and returns a copy of this string with all lowercase letters changed to uppercase. All other characters are left unchanged.
string.format(formatstring, e1, e2, ...)
Returns a formatted version of all arguments following the formatstring. The format string follows the same rules as the printf() family of standard C functions and must be a string. The only
differences are that the options/modifiers *, l, L, n, p, and h are not supported and there is an extra option q. The q option formats a string in a form suitable to be safely read back by the Lua
interpreter. The options c, d, E, e, f, g, G, i, o, u, x, and X all expect a number as argument, whereas q and s expect a string.
• %d, %i: Signed integer
• %u: Unsigned integer
• %f, %g, %G: Floating point
• %e, %E: Scientific notation
• %o: Octal integer
• %x, %X: Hexadecimal integer
• %c: Character
• %s: String
• %q: Safe string
Patterns are similar to regular expressions. A character class is used to represent a set of characters.
The following combinations are allowed in describing a character class:
Pattern Description
x Represents the character x itself (if x is not one of the magic characters ^$()%.[]*+-?)
. (a dot) represents all characters
%a Represents all letters
%c Represents all control characters
%d Represents all digits
%l Represents all lowercase letters
%p Represents all punctuation characters
%s Represents all space characters
%u Represents all uppercase letters
%w Represents all alphanumeric characters
%x Represents all hexadecimal digits
%z Represents the character with representation 0
%x Represents the character x (where x is any non-alphanumeric character). This is the standard way to escape the magic characters. Any punctuation character (even the non magic) can be preceded
by a `%´ when used to represent itself in a pattern.
[set] Represents the class which is the union of all characters in set. A range of characters may be specified by separating the end characters of the range with a `-´. All classes %x described
above may also be used as components in set. All other characters in set represent themselves. The interaction between ranges and classes is not defined. Therefore, patterns like [%a-z] or
[a-%%] have no meaning.
Examples of character sets:
• [%w_] (or [_%w]): All alphanumeric characters plus the underscore.
• [0-7]: Represents the octal digits.
• [0-7%l%-] represents the octal digits plus the lowercase letters plus the `-´ character.
[^set] Represents the complement of set, where set is interpreted as above.
For all classes represented by single letters (%a, %c, and so on), the corresponding uppercase letter represents the complement of the class. For instance, %S represents all non-space characters.
A pattern item may be:
• a single character class, which matches any single character in the class.
• a single character class followed by `*´, which matches 0 or more repetitions of characters in the class. These repetition items will always match the longest possible sequence.
• a single character class followed by `+´, which matches 1 or more repetitions of characters in the class. These repetition items will always match the longest possible sequence.
• a single character class followed by `-´, which also matches 0 or more repetitions of characters in the class. Unlike `*´, these repetition items will always match the shortest possible sequence.
• a single character class followed by `?´, which matches 0 or 1 occurrence of a character in the class.
• %n, for n between 1 and 9; such item matches a substring equal to the n-th captured string (see below).
• %bxy, where x and y are two distinct characters; such item matches strings that start with x, end with y, and where the x and y are balanced. This means that, if one reads the string from left to
right, counting +1 for an x and -1 for a y, the ending y is the first y where the count reaches 0. For instance, the item %b() matches expressions with balanced parentheses.
A pattern is a sequence of pattern items. A `^´ at the beginning of a pattern anchors the match at the beginning of the subject string. A `$´ at the end of a pattern anchors the match at the end of
the subject string. At other positions, `^´ and `$´ have no special meaning and represent themselves.
A pattern may contain sub-patterns enclosed in parentheses; they describe captures. When a match succeeds, the substrings of the subject string that match captures are stored (captured) for future
use. Captures are numbered according to their left parentheses. For instance, in the pattern "(a*(.)%w(%s*))", the part of the string matching "a*(.)%w(%s*)" is stored as the first capture (and
therefore has number 1); the character matching "." is captured with number 2, and the part matching "%s*" has number 3.
As a special case, the empty capture () captures the current string position (a number). For instance, if we apply the pattern "()aa()" on the string "flaaap", there will be two captures: 3 and 5.
If you want to process unicode characters, you can use the library unicode.utf8, which contains functions similar to those of the string library, but with Unicode support. There is another library
called unicode.ascii that has the same functionality as the library string.
XML Parsing
The library lxp contains several features to process XML data. The official reference to the functions are available at LuaExpat.
Here are the supported methods:
Methods Description
lxp.new(callbacks [, separator]) The parser is created by a call to the function lxp.new, which returns the created parser or raises a Lua error. It receives the callbacks table and optionally the
parser separator character used in the namespace expanded element names.
close() Closes the parser, freeing all memory used by it. A call to parser:close() without a previous call to parser:parse() could result in an error.
getbase() Returns the base for resolving relative URIs.
getcallbacks() Returns the callbacks table.
parse(s) Parses some more of the document. The string s contains part (or perhaps all) of the document. When called without arguments the document is closed (but the parser still has to be
closed). The function returns a non nil value when the parser has been successful, and when the parser finds an error it returns five results: nil, msg, line, col, and pos, which are
the error message, the line number, column number and absolute position of the error in the XML document.
pos() Returns three results: the current parsing line, column, and absolute position.
setbase(base) Sets the base to be used for resolving relative URIs in system identifiers.
setencoding(encoding) Sets the encoding to be used by the parser. There are four built-in encodings, passed as strings: "US-ASCII", "UTF-8", "UTF-16", and "ISO-8859-1".
stop() Abort the parser and prevent it from parsing any further through the data it was last passed. Use to halt parsing the document when an error is discovered inside a callback, for
example. The parser object cannot accept more data after this call.
SQL Parsing
The self developed library sqlparsing contains many functions to process SQL statements. For more information about the functions and their usage, see SQL Preprocessor.
Internet Access
Through the library socket, you can open HTTP, ftp, and smtp connections. More information is available at http://w3.impa.br/~diego/software/luasocket/.
Math Library
The math library is an interface to the standard C math library and provides the following functions and values:
Library Description
math.abs(x) Absolute value of x
math.acos(x) Arc cosine of x (in radians)
math.asin(x) Arc sine of x (in radians)
math.atan(x) Arc tangent of x (in radians)
math.atan2(y,x) Arc tangent of y/x (in radians), but uses the signs of both operands to find the quadrant of the result
math.ceil(x) Smallest integer larger than or equal to x
math.cos(x) Cosine of x (assumed to be in radians)
math.cosh(x) Hyperbolic cosine of x
math.deg(x) Angle x (given in radians) in degrees
math.exp(x) Return natural exponential function of x
math.floor(x) Largest integer smaller than or equal to x
math.fmod(x,y) Modulo
math.frexp(x) Returns m and n so that x = m·2^n, n is an integer and the absolute value of m is in the range [0.5;1) (or zero if x is zero)
math.huge Value HUGE_VAL which is larger than any other numerical value
math.ldexp(m,n) Returns m·2^n (n should be an integer)
math.log(x) Natural logarithm of x
math.log10(x) Base-10 logarithm of x
math.max(x, ...) Maximum of the given values
math.min(x, ...) Minimum of the given values
math.modf(x) Returns two numbers, the integral part of x and the fractional part of x
math.pi Value of π
math.pow(x,y) Returns value x^y
math.rad(x) Angle x (given in degrees) in radians
math.random([m [, n]]) Interface to the random generator function rand from ANSI C (no guarantees can be given for statistical properties). When called without arguments, returns a pseudorandom
real number in the range [0;1). If integer m is specified, then the range is [1;m]. If called with m and n, then the range is [m;n].
math.randomseed(x) Sets x as the seed for the random generator
math.sin(x) Sine of x (assumed to be in radians)
math.sinh(x) Hyperbolic sine of x
math.sqrt(x) Square root of x
math.tan(x) Tangent of x
math.tanh(x) Hyperbolic tangent of x
Table Library
This library provides generic functions for table manipulation. Most functions in the table library assume that the table represents an array or a list. For these functions, the length of a table
means the result of the length operator (#table). The functions and their descriptions are as follows:
• table.insert(table, [pos,] value): Inserts element value at position pos in table, shifting up other elements if necessary. The default value for pos is n+1, where n is the length of the table,
so that a call table.insert(t,x) inserts x at the end of table t.
• table.remove(table [, pos]): Removes from table the element at position pos, shifting down other elements if necessary. Returns the value of the removed element. The default value for pos is n,
where n is the length of the table, so that a call table.remove(t) removes the last element of table t.
• table.concat(table [, sep [, i [, j]]]): Returns table[i]..sep..table[i+1] ... sep..table[j]. The default value for sep is the empty string, the default for i is 1, and the default for j is the
length of the table. If i is greater than j, returns the empty string.
• table.sort(table [, comp]): Sorts table elements in a given order, in-place, from table[1] to table[n], where n is the length of the table. If comp is given, then it must be a function that
receives two table elements, and returns true when the first is less than the second (so that not comp(a[i+1],a[i]) will be true after the sort). If comp is not given, then the standard Lua
operator < is used instead. The sort algorithm is not stable; that is, elements considered equal by the given order may have their relative positions changed by the sort.
System Tables
The following system tables show the existing database scripts:
• EXA_USER_SCRIPTS
• EXA_ALL_SCRIPTS
• EXA_DBA_SCRIPTS
For more information, see Metadata System Tables. | {"url":"https://docs.exasol.com/db/latest/database_concepts/scripting/libraries.htm","timestamp":"2024-11-11T14:14:38Z","content_type":"text/html","content_length":"115163","record_id":"<urn:uuid:326574a4-bec3-4b59-9337-271e154ccbc5>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00794.warc.gz"} |
Index of Papers
Most papers are available on-line in PDF.
For more information, click on the paper titles.
• Authors: R. Chang and S. Purini
• Reference: In Proceedings of the 23rd Conference on Computational Complexity, 41–52, June 2008.
• In Short: Yes, we can amplify ZPP^SAT[1] and we use this to show that PH collapses to ZPP^SAT[1] if ZPP^SAT[1] = ZPP^SAT[2].
• Authors: R. Chang and S. Purini
• Reference: In Proceedings of the 22nd Conference on Computational Complexity, 52–59, June 2007.
• In Short: What can we prove about the 1 vs 2 queries problem if we assume the NP Machine Hypothesis holds?
• Authors: H. Buhrman, R. Chang and L. Fortnow
• Reference: NEC Research Institute Technical Note #2002-017N. In Proceedings of the 20th Annual Symposium on Theoretical Aspects of Computer Science (STACS 2003), Springer-Verlag Lecture Notes in
Computer Science #2607, 547–558, February-March 2003.
• In Short: What if coNP is contained in NP/1 (NP with one bit of advice)?
• Authors: R. Chang and J. S. Squire
• Reference: In Proceedings of the 16th IEEE Conference on Computational Complexity, 90–98, June 2001.
• In Short: What if bounded query functions were limited to 2 bits of output? Would 3 parallel queries to SAT do more than 2 serial queries?
• Author: R. Chang
• Reference: unpublished manuscript
• In Short: If MaxClique reduces to 2-approximating MaxClique, then TSP reduces to 2-approximating TSP.
• Author: R. Chang
• Reference: Information and Computation, 169(2):129–159, September 2001.
• In Short: Connections between the Boolean Hierarchy and the complexity of NP-approximation problems.
• Authors: R. Beigel and R. Chang
• Reference: Information and Computation, 166(1):71–91, 2001.
• In Short: The order of queries to two oracles matter when a polynomial time machine is computing a function and querying oracles that are complete for the Polynomial Hierarchy, but the order does
not matter when the machine is recognizing a language.
• Authors: R. Chang, W. Gasarch and J. Torán
• Reference: Chicago Journal of Theoretical Computer Science, 1999(1), February 1999.
• In Short: The enumerability of the number of graph automorphisms (#GA) is related to the complexity of the Graph Isomorphism problem (GI). For example, if #GA is polynomially enumerable, then GI
can be decided in randomized polynomial time.
• Author: R. Chang.
• Reference: In Current Trends in Theoretical Computer Science: Entering the 21st Century, G. Paun, G. Rozenberg and A. Salomaa, editors, pp. 4–24, World Scientific, 2001.
• In Short: A new model that measures the complexity of finding solutions to NP-approximation problems and how the Boolean Hierarchy helps us resolve natural questions about approximating
NP-optimization problems.
• Author: R. Chang.
• Reference: Journal of Computer and System Sciences, 53(2):298–313, October 1996.
• In Short: comparing the complexity of approximating clique size and maximum satisfiability using bounded queries as a complexity measure.
• Authors: R. Chang, W. I. Gasarch and C. Lund.
• Reference: SIAM Journal on Computing, 26(1):188–209, February 1997.
• In Short: examines bounded queries as a complexity measure for NP-approximation problems; obtains upper and lower bounds for Clique Size, Chromatic Number and Set Cover.
• Authors: J. Hartmanis, R. Chang, S. Chari, D. Ranjan and P. Rohatgi.
• Reference: In Current Trends in Theoretical Computer Science, G. Rozenberg and A. Salomaa, editors, pp. 537–547, World Scientific, 1993.
• In Short: a review of the relativization principle and its effect on 15 years of research in complexity theory.
• Authors: R. Beigel, R. Chang and M. Ogiwara.
• Reference: Mathematical Systems Theory, 26(3):293–310, July 1993.
• In Short: generalizing theorems about the Boolean hierarchy to the difference hierarchy over counting classes.
• Authors: R. Chang
• Reference: PhD Thesis, Cornell University, 1991.
• In Short: an amalgamation of previously published papers.
• Authors: R. Chang, J. Kadin and P. Rohatgi.
• Reference: Journal of Computer and System Sciences, 50(3):359–373, June 1995.
• In Short: some new results on unique satisfiability; also, completeness under randomized reductions makes sense when probability of success exceeds certain thresholds.
• Authors: R. Chang and P. Rohatgi.
• Reference: In Current Trends in Theoretical Computer Science, G. Rozenberg and A. Salomaa, editors, pp. 494–503, World Scientific, 1993.
• In Short: expository remarks about the meaning of completeness under randomized reductions, especially for unique satisfiability.
• Authors: R. Chang and J. Kadin.
• Reference: Mathematical Systems Theory, 28(3): 173–198, May/June 1995.
• In Short: using AND and OR to build the Boolean hierarchy.
• Authors: J. Hartmanis, R. Chang, D. Ranjan and P. Rohatgi.
• Reference: In Current Trends in Theoretical Computer Science, G. Rozenberg and A. Salomaa, editors, pp. 484–493, World Scientific, 1993.
• In Short: expository remarks about the relationship between the existence of short proofs and IP = PSPACE.
• Authors: R. Chang, B. Chor, O. Goldreich, J. Hartmanis, J. Hastad, D. Ranjan and P. Rohatgi.
• Reference: Journal of Computer and System Sciences, 49(1):24–39, August 1994.
• In Short: the IP = PSPACE result relativizes in the wrong direction with probability one; hence, the Random Oracle Hypothesis is debunked.
• Authors: D. Ranjan, R. Chang and J. Hartmanis.
• Reference: Theoretical Computer Science, 80(2):289–302, 1991.
• In Short: some results about the constructibility of log log space bounds.
• Author: R. Chang.
• Reference: In Bulletin of the European Association for Theoretical Computer Science, 42:172–173, October 1990.
• In Short: some instances of the Gap Theorem do not relativize.
• Authors: R. Chang and J. Kadin.
• Reference: SIAM Journal on Computing, 25(2):340–354, April 1996.
• In Short: a refinement of Kadin's proof that collapsing the Boolean hierarchy also collapses the polynomial hierarchy.
• Author: R. Chang.
• Reference: SIAM Journal on Computing, 21(4):743–754, August 1992.
• In Short: sets in NP − low3 can be used to construct bounded query hierarchies with distinct levels.
• Authors: J. Hartmanis, R. Chang, J. Kadin and S. Mitchell.
• Reference: In Current Trends in Theoretical Computer Science, G. Rozenberg and A. Salomaa, editors, pp. 423–433, World Scientific, 1993.
• In Short: ruminations on deterministic versus nondeterministic space computations and how to relativize log space computations.
Last Modified: 22 Jul 2024 11:27:53 EDT by Richard Chang | {"url":"https://userpages.cs.umbc.edu/chang/papers/","timestamp":"2024-11-03T00:40:17Z","content_type":"text/html","content_length":"11834","record_id":"<urn:uuid:c912bb0e-c82d-4f12-99d0-426aaff10a1b>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00353.warc.gz"} |
Comparing Fractions Calculator - Online Math Calculators | beGalileo
Comparing fractions can get difficult if done manually, especially with a complex pair of fractions. This calculator can help you with the comparison easily. It can tell you which is the bigger and
which is the smaller fraction among a given set of fractions.
How to compare fractions in the calculator?
Step 1 : Enter the first fraction
Step 2 : Enter the second fraction
Step 3 : Click the Compare button.
On clicking the compare button, the calculation of the comparing fraction will be shown.
Examples to try on the Comparing fractions calculator
Q1. Compare 15/5 and 50/5
Solution: 15/5 < 50/5 (15/5 is less than 50/5)
Q2. Compare 25/15 and 75/55
Solution: 25/15 > 75/55 (25/15 is greater than 75/55, since 25/15 ≈ 1.67 while 75/55 ≈ 1.36)
Fractions are a fundamental concept in mathematics, and they are used in many different applications. Comparing fractions is an important skill, particularly in everyday situations such as shopping,
cooking, and sharing. Understanding how to compare fractions can help you to make informed decisions, and it can also be a useful skill when working with more complex mathematical concepts. In this
article, we will discuss what fractions are, how to compare fractions, and provide a detailed guide on how to use an online comparing fractions calculator.
What are fractions?
A fraction is a mathematical representation of a part of a whole. It is written as a numerator and a denominator, with a horizontal line between them. For example, the fraction 3/4 represents three
parts out of four equal parts. The numerator is the top number, which represents the number of parts, and the denominator is the bottom number, which represents the total number of parts.
Comparing fractions
Comparing fractions involves determining which fraction is greater or lesser than another fraction. When comparing fractions, there are three possible outcomes:
1. The first fraction is greater than the second fraction.
2. The first fraction is equal to the second fraction.
3. The first fraction is less than the second fraction.
To compare two fractions, there are several methods that you can use, including:
1. Finding a common denominator: When fractions have different denominators, it can be difficult to compare them. One way to simplify the comparison is to find a common denominator. This involves
finding a number that is a multiple of both denominators. Once you have a common denominator, you can compare the numerators to determine which fraction is greater.
2. Cross-multiplying: Another method for comparing fractions is to cross-multiply. This involves multiplying the numerator of one fraction by the denominator of the other fraction. The fraction with
the greater result is the greater fraction.
3. Converting to decimals: A third method for comparing fractions is to convert them to decimals. This involves dividing the numerator by the denominator. The fraction with the greater decimal value
is the greater fraction.
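A minimal sketch (not from the article) of the cross-multiplication method described in point 2 above, in Python; positive denominators are assumed.

def compare(n1, d1, n2, d2):
    """Compare n1/d1 with n2/d2 by cross-multiplying (denominators assumed positive)."""
    left, right = n1 * d2, n2 * d1      # cross-multiply
    if left > right:
        return ">"
    if left < right:
        return "<"
    return "="

print("15/5", compare(15, 5, 50, 5), "50/5")      # 15/5 < 50/5
print("25/15", compare(25, 15, 75, 55), "75/55")  # 25/15 > 75/55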
Benefits of using an online comparing fractions calculator
Using an online comparing fractions calculator has many benefits, including:
1. Speed: An online comparing fractions calculator is much faster than comparing fractions by hand.
2. Accuracy: An online comparing fractions calculator is less prone to errors than comparing fractions by hand.
3. Ease of use: An online comparing fractions calculator is easy to use, even for people who are not familiar with mathematical concepts.
Comparing fractions is an important skill, particularly in everyday situations such as shopping, cooking, and sharing. By using an online comparing fractions calculator, you can make these comparisons quickly and accurately. | {"url":"https://www.begalileo.com/math/math-calculators/comparing-fractions-calculator/","timestamp":"2024-11-09T20:52:22Z","content_type":"text/html","content_length":"70171","record_id":"<urn:uuid:43e5d62f-6975-48e8-8841-50b31317c696>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00215.warc.gz"}
On the spectral radius of a random matrix: An upper bound without fourth moment
Consider a square matrix with independent and identically distributed entries of zero mean and unit variance. It is well known that if the entries have a finite fourth moment, then, in high
dimension, with high probability, the spectral radius is close to the square root of the dimension. We conjecture that this holds true under the sole assumption of zero mean and unit variance. In
other words, that there are no outliers in the circular law. In this work, we establish the conjecture in the case of symmetrically distributed entries with a finite moment of order larger than two.
The proof uses the method of moments combined with a novel truncation technique for cycle weights that might be of independent interest.
All Science Journal Classification (ASJC) codes
• Statistics and Probability
• Statistics, Probability and Uncertainty
• Combinatorics
• Digraph
• Heavy tail
• Random matrix
• Spectral radius
Dive into the research topics of 'On the spectral radius of a random matrix: An upper bound without fourth moment'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/on-the-spectral-radius-of-a-random-matrix-an-upper-bound-without-","timestamp":"2024-11-10T10:13:44Z","content_type":"text/html","content_length":"49582","record_id":"<urn:uuid:b79ffe53-09b9-4947-96f6-208d5e0ace2d>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00424.warc.gz"} |
FPGA Implementation of Image Processing Architecture for Various Dip Applications
Volume 03, Issue 01 (January 2014)
FPGA Implementation of Image Processing Architecture for Various Dip Applications
DOI : 10.17577/IJERTV3IS10994
V. Balaji, R. Sakthi Kumar, 2014, FPGA Implementation of Image Processing Architecture for Various Dip Applications, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT) Volume 03,
Issue 01 (January 2014),
• Open Access
• Total Downloads : 593
• Authors : V. Balaji, R. Sakthi Kumar
• Paper ID : IJERTV3IS10994
• Volume & Issue : Volume 03, Issue 01 (January 2014)
• Published (First Online): 28-01-2014
• ISSN (Online) : 2278-0181
• Publisher Name : IJERT
• License: This work is licensed under a Creative Commons Attribution 4.0 International License
FPGA Implementation of Image Processing Architecture for Various Dip Applications
V. Balaji, R. Sakthi Kumar
PG scholar
Digital image processing is mainly focused on ever expanding and dynamic area with applications reaching out into our day today life such as medicine, security purpose, space exploration,
surveillance, identification & authentication, automatic industry inspection etc. Applications such as these involve different operations like image compression, image enhancement, object detection
and Noise removing. Implementing the image processing applications on a computer can be easier one, but not efficient due to additional constraints on memory and other peripheral devices. However,
most general purpose hardware is not suited for strong real-time constraints. This paper gives the implementation of median filter image processing on an FPGA. The processor's architecture combines
a reconfigurable binary processing module, input and output image controller units, and peripheral circuits. The reconfigurable binary processing module performs the DCT application and the Sobel
filter for a 256×256 image. The periphery circuits control the whole image processing and dynamic reconfiguration process. The simulation and experimental results demonstrate that the processor is
suitable for real-time binary image processing applications.
1. Introduction
Image processing is any form of signal processing for which the input is an image, such as a photograph or video signal; the output of image processing may be either an image or a set of
characteristics or parameters related to the image. Most of the image-processing techniques involves treating the image as a two-dimensional signal and applying standard signal processing
techniques to it. Digital image processing is the method of computer algorithms to perform image processing on digital images, digital image processing has many advantages over analog image
Processing. It allows many algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over
2-dimensions digital image processing may be modeled in the form of multidimensional systems. General-purpose chips
have the architecture of a digital processor, in which each digital processor handles pixel by pixel. When larges sized images are processed, the chip size will become extremely large. Thus,
further analyzing needed to design a high performance, small size, and wide range of application for real- time binary image processing applications.
This paper presents a binary image processor that consists of a reconfigurable binary processing module, including reconfigurable binary computational units and output control logic, input and
output image controller units, and peripheral circuits. The reconfigurable binary compute units are mixed grained architecture, which has the advantages of more flexibility, efficiency and high
speed and performance. The processor performance is enhanced by using dynamic reconfiguration method. The processor is implemented to perform real time binary image processing applications. It is
found that the processor can process pixel-level images and extract image features. Basic mathematical median operations and complicated algorithms can easily be implemented on it. The processor
has the advantages of small size, high speed and simple structure, and wide range of applications. CSD (canonical sign digit) is a simple and hardware-efficient algorithm for the implementation
of various elementary, especially trigonometric, functions. Instead of using Calculus based methods such as polynomial or rational functional approximation, it uses simple shift, add, subtract
and table look-up operations to achieve this objective Discrete Cosine Transformation (DCT) is the most widely used transformation algorithm. DCT, first proposed by Ahmed [9] et al, 1974, has got
more importance in recent years, especially in the fields of Image Compression and Video Compression. This chapter focuses on efficient hardware implementation of DCT by decreasing the number of
computations, enhancing the accuracy of reconstruction of the original data, and decreasing chip area. As a result of which the power consumption also decreases. DCT also improves speed, as
compared to other standard Image compression algorithms like JPEG. A programmable single instruction multiple data (SIMD) real time vision chip was presented to
achieve high-speed target tracking [10]. In [24], a programmable binary morphology coprocessor was introduced to the visual content analysis engine of the chip used for visual surveillance. A
reconfigurable image processing accelerator incorporating eight macro processing elements was designed to support real-time change detection and background registration based on video object
segmentation algorithm. Recently, a vision chip with the architecture of a massively parallel cellular array of processing elements was presented for image processing by using the asynchronous or
synchronous processing technique Other general- purpose chips have the architecture of a digital processor array, in which each digital processor handles one pixel. When large sized images are
processed, the chips will become extremely large. Thus, further studies are needed to design a high performance, small size, and wide application range chip for real-time binary image processing
DCT applications. This paper presents a binary image processor that consists of a reconfigurable binary processing module, including reconfigurable binary compute units and output control logic,
input and output image control units, and peripheral circuits
2. Reconfigurable Image Processor
FIELD-PROGRAMMABLE GATE ARRAYS
(FPGA) were introduced a decade ago, they have only recently become very popular. This is not only because programmable logic saves development cost and time over increasingly complex
ASIC designs, but also because the gate count per FPGA chip has reached numbers that allow for the implementation of more complex applications [11]. Many present-day applications utilize a
processor and other logic on two or more individual chips. However, with the anticipated ability to build chips with over ten million transistors, it will become possible to implement a processor
within a sea of programmable logic, all on one chip.
Such a design approach would allow a great degree of programmability freedom, both in hardware and in software: EDA tools could decide which parts of a source code program are actually to be
executed in software and which other parts are enhanced with hardware. The hardware implementation may be needed for application interfacing reasons or may simply represent a coprocessor used to
improve execution time. Programmable logic need not only be used for application speed-up, it can also be employed as intelligent glue logic for custom interfacing
purposes such as in embedded controller applications. Current single-chip embedded processors attempt to provide very flexible interfaces that can be used in a large number of applications.
1. Implementation of on chip processer
Fig. 1. Reconfigurable Image Processor
However, they can often result in interfaces that are less efficient than intended. Furthermore, it might be desirable to perform some bit-level data computations in-between the main
processor and the actual I/O interface. This paper also investigates the requirements for providing a general purpose field-configurable interface for embedded processor applications. The
reconfigurable image processor is shown in Fig. 1. The processor's architecture is a combination of a reconfigurable binary processing module, input and output image controller units, and
peripheral circuits and on chip memory unit and NIOS-2 processor. The reconfigurable binary processing module will perform image compression operations and edge detection operation. The input
image is given to the pre-processing controller unit; after processing, the image is loaded into the on-chip memory unit. Initially the analogue image is converted into digital form and impulse
noise is added using MATLAB. The image is converted into 180 x 180 size and a total of 3600 blocks are stored in a text file. The text file is accessed by ModelSim, which calculates the median
values and removes the salt and pepper noise. The NIOS II processor is used as the controller circuit. A gated clock is used to disable idle blocks to reduce unnecessary transitions. FIFO
synchronization is used to synchronize all the units.
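The paper implements the median filter in hardware; purely as an algorithmic illustration (not the authors' Verilog, and the 180x180 random image below is only a stand-in), a 3x3 median filter of the kind used against salt-and-pepper noise can be sketched in Python as:

import numpy as np

def median3x3(img):
    """Apply a 3x3 median filter to a 2-D image (border pixels left unfiltered)."""
    out = img.copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            window = img[r-1:r+2, c-1:c+2]      # 3x3 neighbourhood
            out[r, c] = int(np.median(window))  # replace centre by the median value
    return out

noisy = np.random.randint(0, 256, (180, 180)).astype(np.uint8)  # stand-in noisy image
clean = median3x3(noisy)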
DISCRETE COSINE TRANSFORM –
To Compress Image
SOBEL FILTER – To detect edges
2. Image Processing Applications
The reconfigurable binary compute units are of a mixed grained architecture, which has the characteristics of high flexibility, efficiency, and
performance. The performance of the processor is enhanced by using the dynamic reconfiguration approach. The processor is implemented to perform real time binary image processing. It is found
that the
Processor can process pixel-level images and extract image features, such as boundary and motion images. Basic mathematical median operations and complicated algorithms can easily be
implemented on it. The processor has the merit of high speed, simple structure, and wide application range. Although field programmable gate arrays (FPGA) were introduced a decade ago, they
have only recently become more popular. This is not only due to the fact that programmable logic saves development cost and time over increasingly complex ASIC designs, but also because the
gate count per FPGA chip has reached numbers that allow for the implementation of more complex applications.
3. Discrete Cosine Transform
Multimedia data processing, which encompasses almost every aspects of our daily life such as communication broad casting, data search, advertisement, video games, etc has become an integral part
of our life style. The most significant part of multimedia systems is application involving image or video, which require computationally intensive data processing. Moreover, as the use of mobile
device increases exponentially, there is a growing demand for multimedia application to run on these portable devices. In order to reduce the volume of multimedia data over wireless channel
compression techniques are widely used. Discrete cosine transform (DCT) is one of the major compression schemes owing to its near optimal performance. Its energy compaction efficiency is also
greater than any other transform.
4. Low Complexity 2-D Dct Using 1-D Dct
Decomposed Matrix
The 1-D 8-point DCT can be expressed as follows:
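(The display equation here was an image and did not survive extraction; for reference, the standard 1-D 8-point DCT consistent with the symbol definitions given just below is the following reconstruction, not the paper's own typesetting: $Z_n = \frac{K_n}{2}\sum_{m=0}^{7} x_m \cos\left[\frac{(2m+1)\,n\,\pi}{16}\right]$, $n = 0,\dots,7$, with $K_0 = \sqrt{1/2}$ and $K_n = 1$ for $n > 0$.)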
Where xm denotes the input data;
Zn denotes the transform output; Kn = sqrt(1/2) for n=0 .
By neglecting the scaling factor 1/2, the 1-D 8-point DCT in (2) can be divided into even and odd parts:
Fig.2 Decomposed DCT
In the 8-point DCT, 8 input values are multiplied with an 8 x 8 DCT matrix, so obtaining all 8 outputs takes 64 multipliers. In the decomposed DCT architecture, by adding one pre-processing unit we
reduce the multiplier usage by 50% (only 32 multipliers are used). The pre-processing unit uses only adders, so overall the hardware complexity is reduced.
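As a numerical illustration of the even/odd split (my own Python/NumPy check, not the paper's hardware description; scaling constants are omitted, as in the text):

import numpy as np

N = 8
C = np.array([[np.cos((2*m + 1) * n * np.pi / 16) for m in range(N)]
              for n in range(N)])            # full 8x8 DCT matrix (no scaling)

x = np.arange(8, dtype=float)                 # any test input
z_full = C @ x                                # 64 multiplications

s = x[:4] + x[7:3:-1]                         # pre-processing: sums x_m + x_{7-m}
d = x[:4] - x[7:3:-1]                         # pre-processing: differences x_m - x_{7-m}
Ce = C[0::2, :4]                              # 4x4 matrix for even outputs Z0, Z2, Z4, Z6
Co = C[1::2, :4]                              # 4x4 matrix for odd outputs Z1, Z3, Z5, Z7
z_even, z_odd = Ce @ s, Co @ d                # 2 x 16 = 32 multiplications

print(np.allclose(z_full[0::2], z_even))      # True
print(np.allclose(z_full[1::2], z_odd))       # True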
5. Binary Conversion
Many techniques have been used to efficiently convert this floating point values into binary representation for digital implementation. Then only we can implement DCT in VLSI.
The two ways of floating point to binary conversion are
(1) The integral and fractional parts are converted separately: the fractional part is repeatedly multiplied by 2, taking as each bit the digit that appears to the left of the decimal point (the integral part is converted by repeated division by 2).
1. DCT coefficients
The 1-D DCT given by equation (5) can be split into two matrices, the odd and the even.
The odd 1-D DCT can be expressed as
The even 1-D DCT can be expressed as
where ck = cos(kπ/16), a = c1, b = c2, c = c3, d = c4, e = c5, f = c6, g = c7 are the cosine basis.
From the equations (3) and (4), it can be stated that the DCT operation involves multiplication of various cosine coefficients with a fixed input sequence. Hence sub structure sharing
technique is used to reduce the number of operators [6]. The
cosine basis is quantized to 8-bits for energy efficiency. The cosine coefficients are represented as CSD number which has the advantage of reduced number of ones compared to the binary
representation. The cosine basis is chosen up to four decimal places and each one is represented as 7 bit binary number. The number of bits has an impact on the quality of the system. The
values of the cosine basis are shown in the Table below. The stronger operator, multiplication is transformed to simple shift and adds operations by applying Horners rule. This reduces the
power consumption. For example, consider the cosine coefficients c and g: c·X = (2^5 + 2^4 + 2^2 + 1)(X) = (2^4·(3) + 5)(X) and g·X = (2^3 + 2^2)(X) = 2^2·(3)(X),
and the common term they share is 3X. The
common terms among the cosine basis are 1X, 3X, 5X, and -1X and are shared to compute the partial outputs.
Table 1. Cosine Basis Set
These blocks are termed as precomputing units and an unit is shown in the Figure. The intermediate results from the precomputing blocks are added in the final stage yielding the DCT
coefficients. The 3A is constructed by expressing it as 3A = 1A+2A
= {1A + (1A<<1)}. Similarly the 5A can be expressed as {1A + (1A<<2)}.
☆ Multiplication is expensive in hardware
☆ Decompose constant multiplications into shifts and additions
○ 13*X = (1101)2*X = X + X<<2 + X<<3
○ Signed digits can reduce the number of additions/subtractions
○ Canonical Signed Digits (CSD)
○ (57)10 = (0111001)2 = (100-1001)CSD
Up to 50% reduction
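A small sketch (not from the paper; the helper name to_csd is my own) of canonical-signed-digit recoding, reproducing the 57 = 100-1001 example above:

def to_csd(n):
    """Return CSD/NAF digits of a positive integer n, least-significant first (digits in {-1, 0, 1})."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            d = 0
        else:
            d = 2 - (n % 4)          # 1 if n ≡ 1 (mod 4), -1 if n ≡ 3 (mod 4)
        digits.append(d)
        n = (n - d) // 2
    return digits

csd = to_csd(57)
print(csd)                                        # [1, 0, 0, -1, 0, 0, 1]  ->  100-1001 (MSB first)
print(sum(d << i for i, d in enumerate(csd)))     # 57, sanity check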
6. Performance Results
The image is converted into pixels using MATLAB and the values are stored as a text file. The text file is accessed by the Model sim ALTERA and the corresponding 2-D DCT coefficients are calculated.
These values are then fed to the IDCT module which returns the spatial data sequence. These data are written to a text file. The image can be reconstructed from the text file using MATLAB coding.
Fig 3. Simulated output
Table 2. Area comparison table
Fig 4. Input and reconstructed image
In this paper, a reconfigurable binary image processor was proposed to perform real-time binary image processing applications. The processor is a combination of a reconfigurable binary processing
module, input and output image controller units, and peripheral circuits. The reconfigurable binary processing module has a mixed-grained architecture with the characteristics of high efficiency,
and it increases the processor performance. Basic DCT applications and mathematical morphology operations can be easily implemented on its simple structure. The processor, featured by simple
structure, high speed, and wide range of applications, is suitable for binary image processing. This increases the efficiency of the system. The filter can remove noise even at higher noise
densities and preserves the edges and fine details. The performance of the filter is better when compared to other filters of this type. The developed filters are tested using 180x180, 8-bit/pixel
images at different noise levels, and the results are compared with the MATLAB implementation.
1. Y. Liu and C. Pomalaza-Raez, A low- complexity algorithm for the on-chip moment computation of binary images, in Proc. Int. Conf. Mechatron. Autom., 2009, pp. 18711876.
2. E. C. Pedrino, O. Morandin, Jr., and V. O. Roda, Intelligent FPGA based system for shape recognition, in Proc. 7th Southern Conf. Programmable Logic, 2011, pp. 197202.
3. M. F. Talu and I. Turkoglu, A novel object recognition method based on improved edge tracing for binary images, in Proc. Int. Conf. Appl. Inform. Commun. Technol., 2009, pp. 15.
4. A. J. Lipton, H. Fujiyoshi, and R. S. Patil, Moving target classification and tracking from real-time video, in Proc. Workshop Appl. Comput. Vision, 1998, pp. 814.
5. J. Kim, J. Park, K. Lee et al., A portable surveillance camera architecture using one-bit motion detection, IEEE Trans. Consumer Electron., vol. 53, no. 4, pp. 12541259, Nov. 2007.
6. D. J. Dailey, F. W. Cathey, and S. Pumrin, An algorithm to estimate mean traffic speed using uncalibrated cameras, IEEE Trans. Intell. Transportation Syst., vol. 1, no. 2, pp. 98107, Jun. 2000.
7. T. Ikenaga and T. Ogura, A fully parallel 1-Mb CAM LSI for real-time pixel-parallel image processing, IEEE J. Solid-State Circuits, vol. 35, no. 4, pp. 536544, Apr. 2000.
8. E. C. Pedrino, J. H. Saito, and V. O. Roda, Architecture for binary mathematical morphology reconfigurable by genetic programming, in Proc. 6th Southern Programmable Logic Conf., 2010, pp. 9398.
9. M. R. Lyu, J. Song, and M. Cai, A comprehensive method for multilingual video text detection, localization, and extraction, IEEE Trans. Circuit Syst. Video Technol., vol. 15, no. 2, pp. 243255,
Feb. 2005.
10. W. Miao, Q. Lin, W. Zhang et al., A programmable SIMD vision chip for real-time vision applications, IEEE J. Solid-State Circuits, vol. 43, no. 6, pp. 14701479, Jun. 2008.
11. Bin Zhang, Kuizhi Mei and Nanning Zheng,(MAY 2013), Reconfigurable Processor for Binary Image Processing, IEEE Transactions On Circuits And Systems For Video Technology.
12. K. Fujii, M. Nakanishi, S. Shigematsu et al., A 500-dpi cellular-logic processing array for fingerprint-image enhancement and verification, in Proc. IEEE Custom Integr. Circuits Conf., May 2002,
pp. 261264.
13. H. J. Park, K. B. Kim, J. H. Kim et al., A novel motion detection pointing device using a binary CMOS image sensor, in Proc. IEEE Int. Symp. Circuits Syst., May 2007, pp. 837840.
14. M. Laiho, J. Poikonen, and A. Paasio, Space- dependent binary image processing within a 64×64 mixed-mode array processor, in Proc. Eur. Conf. Circuit Theory Design, 2009, pp. 189192.
15. E. N. Malamas, A. G. Malamos, and T. A. Varvarigou, Fast implementation of binary morphological operations on hardware-efficient systolic architectures, J. VLSI Signal Process., vol. 25, no. 1,
pp. 7993, 2000.
16. J. Velten and A. Kummert, Implementation of a high-performance hardware architecture for binary morphological image processing
operations, in Proc. 47th IEEE Int. Midwest Symp. Circuits Syst., Jul. 2004, pp. 241244.
17. R. Dominguez-Castro, S. Espejo, A. Rodriguez-Vazquez et al., A 0.8-m CMOS 2-D programmable mixed-signal focal-plane array processor with on-chip binary imaging and instructions storage, IEEE J.
Solid-State Circuits, vol. 32, no. 7, pp. 10131026, Jul. 1997
V.Balaji received the B.E Degree in Electronics and Communication Engineering from the Sri Ramakrishna Engineering College, Coimbatore in 2011. He is currently pursuing the M.E Degree in VLSI Design
in Kalaignar Karunanidhi Institute of Technology, Coimbatore. His areas of interest are Image Processing and very large scale integration Architecture design for embedded vision systems.
R.Sakthikumar received the B.E Degree in Electronics and Communication Engineering from the Sri Subramanya college of Engineering and Technology, Palani in 2011. He is currently pursuing the M.E
Degree in VLSI Design in Sengunthar Engineering College, Tiruchengode. His areas of interest are Image Process.
| {"url":"https://www.ijert.org/fpga-implementation-of-image-processing-architecture-for-various-dip-applications","timestamp":"2024-11-04T11:17:34Z","content_type":"text/html","content_length":"83276","record_id":"<urn:uuid:f41dccca-0ece-4a94-a31e-c367a6090fde>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00151.warc.gz"}
Numpy Where
Numpy Where Functionality Numpy is a powerful library in Python for performing mathematical operations on arrays and matrices. One useful function in Numpy is numpy.where(), which allows you to
perform element-wise conditional operations on arrays. Syntax of numpy.where() The syntax for the numpy.where() function is as follows: numpy.where(condition, [x, y]) Where: – condition is the... | {"url":"https://numpywhere.com/","timestamp":"2024-11-11T20:19:03Z","content_type":"text/html","content_length":"64504","record_id":"<urn:uuid:f3507476-9b99-46df-8327-8defd0ae6b5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00708.warc.gz"} |
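A short usage sketch of numpy.where(), matching the syntax quoted in the entry above:

import numpy as np

a = np.array([1, 4, 2, 7, 5])

result = np.where(a > 3, a, 0)    # three-argument form: element-wise choice -> array([0, 4, 0, 7, 5])
idx = np.where(a > 3)             # one-argument form: indices where the condition holds -> (array([1, 3, 4]),)
print(result, idx)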
Shower Faucet | formulatw
A-260A A-8611 A-237
A-260 A-250 A-710
A-8650 A-261 A-8630
A-8610 A-8620 A-8710
A-590 A-8520 A-2830
A-8730 玫瑰金 A-8720 古銅 A-8730
A-8520 A-2990 A-2982
A-2980 A-2960 A-2981
A-2970 A-2950 A-2931
A-2930 A-2940 A-2920
A-2910 A-2900 A-2840
A-2800 A-2810 A-2720
A-2710 A-315-2 A-2700
A-314-3A A-314-2 A-311-3A
A-213-4 A-211-3A A-211-3
A-211-4 A-211-2 A-210-3
A-210- A-210-4 A-210-3A
A-2760 A-2780 A-2840A
A-2770 古銅
| {"url":"https://www.formulatw.com/zh-shower-faucets","timestamp":"2024-11-09T21:51:31Z","content_type":"text/html","content_length":"634354","record_id":"<urn:uuid:32d93fb1-b7d7-43e0-90ca-9c4cbf1b9882>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00093.warc.gz"}
There are two types of rolls on a counter, plain rolls and seeded roll
Question Stats:
91% 9% (01:17) based on 247 sessions
There are two types of rolls on a counter, plain rolls and seeded rolls. What is the total number of rolls on the counter?
(1) The ratio of the number of seeded rolls on the counter to the number of plain rolls on the counter Is 1 to 5.
(2) There are 16 more plain rolls than seeded rolls on the counter.
Given: There are two types of rolls on a counter, plain rolls and seeded rolls.
Let P = # of plain rolls
Let S = # of seeded rolls
Target question: What is the value of P + S? Statement 1: The ratio of the number of seeded rolls on the counter to the number of plain rolls on the counter Is 1 to 5.
In other words, S/P = 1/5
Cross multiply to get:
5S = P
There are several values of S and P that satisfy statement 1. Here are two:
Case a: S = 1 and P = 5. In this case, the answer to the target question is S + P = 1 + 5 = 6
Case b: S = 2 and P = 10. In this case, the answer to the target question is S + P = 2 + 10 = 12
Since we cannot answer the target question with certainty, statement 1 is NOT SUFFICIENT
Statement 2: There are 16 more plain rolls than seeded rolls on the counter.
In other words, P = S + 16
There are several values of S and P that satisfy statement 2. Here are two:
Case a: S = 1 and P = 17. In this case, the answer to the target question is S + P = 1 + 17 = 18
Case b: S = 2 and P = 18. In this case, the answer to the target question is S + P = 2 + 18 = 20
Since we cannot answer the target question with certainty, statement 2 is NOT SUFFICIENT
Statements 1 and 2 combined
Statement 1 tells us that
5S = P
Statement 2 tells us that
P = S + 15
ASIDE: Although we COULD solve the above system of equations (and then find the value of
P + S
), we would never waste valuable time on test day doing so. We need only determine that we COULD answer the target question.
Since we COULD answer the
target question
with certainty, the combined statements are SUFFICIENT
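For completeness, the quick algebra check (not something you would actually spend time on during the test):

$$5S = P \;\text{ and }\; P = S + 16 \;\Rightarrow\; 5S = S + 16 \;\Rightarrow\; S = 4,\; P = 20,\; S + P = 24$$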
Answer: C | {"url":"https://gmatclub.com/forum/there-are-two-types-of-rolls-on-a-counter-plain-rolls-and-seeded-roll-283490.html","timestamp":"2024-11-03T22:41:57Z","content_type":"application/xhtml+xml","content_length":"703475","record_id":"<urn:uuid:9192c776-9e40-4ac8-b0b2-17ce69122e87>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00776.warc.gz"} |
How a Minimum Time Step Leads to a Causal Structure Used to Form Initial Entropy Production and High Frequency Gravitons, with 7 Subsequent Open Questions
1. Examination of the Minimum Time Step, in Pre-Planckian Space-Time as a Root of a Polynomial Equation
We initiate our work, citing [1] to the effect that we have a polynomial equation for the formation of a root finding procedure for
From here, we then cited, in [1] , using [2] , a criteria as to formation of entropy, i.e. if
In short, our view is that the formation of a minimum time step, if it satisfies Equation (2) which is a necessary and sufficient condition for the formation of an arrow of time, at the start of
cosmological evolution, we have a necessary and sufficient condition for the initiation of an arrow of time. In other words, Equation (2) being non zero with a minimum time step, is necessary and
sufficient for the formation of an arrow of time. The remainder of our article is focused upon the issues of a necessary and sufficient condition for causal structure being initiated, along the lines
of Dowker, as in [3] .
2. Considerations as to the Start of Causal Structure of Space-Time
In [1] , we make our treatment of the existence of causal structure, as given by writing its emergence as contingent upon having
We have assumed in writing this, that our initial starting point for which we can write a Friedman Equation with H = 0 is a finite, very small ball of space- time and that within this structure that
the Friedman Equation follows the following conventions, namely
The relativistically correct Friedman equation assumes, that within the confines of the regime for where H = 0 that we write
i.e. the regime of where we have the initiation of causal structure, if allowed would be contingent upon the behavior of [5] [6] [7]
i.e. the right hand side of Equation (6) is the square of the scale factor, which we assume is ~10^-110, due to [5] [6] , and an inflaton given by [8]
These are the items which were enfolded into the derivation of Equation (1) of reference [1] i.e. our following claim is that Causal structure commences if we can say the following,
3. So What Is the Root of Our Approximation for a Time Step?
Here for the satisfying of Equation (8) is contingent upon
Furthermore, this is not incommensurate with what Penrose wrote himself in [10] , namely reviewing the Weyl Curvature hypothesis, as given in [10] , i.e. singularities as presumed in initial
space-time are very different from singularities of black holes, and that modification of the Weyl curvature hypothesis, may be allowing for what Penrose referred to as gravitational clumping
initially to boost the initial entropy, above a presumed initial value. i.e. this we believe is commensurate with Equation (2) above, and is crucially important.
We close this inquiry by noting that what we have done is also conditional upon [11] [12] to the effect that we can write the genesis of our time step formula, as given by Equation (1) above as
crucially dependent upon, the following
In the third line of Equation (9) the essential substitution, in going from the 2^nd to the 3^rd relativistic case, involves changing from a mass density,
So, in the Pre Planckian regime of Space-time, our initial assumption is twofold, i.e. we assume that we cannot reference either
4. Links to Entropy Production
We claim that what we are doing is contingent upon having
The key to this development is accessing the negative energy density in pre Planckian space-time , which if one crosses a causal barrier at H = 0, having this initial energy density, in Pre Causal
space-time as negative, which once past the Causal barrier becomes positive, whereas the magnitude of the initial energy would be set at
In doing so, we have the mass of a graviton as specified in [15] . And then the open question to be asked is, do we have in this case a situation where, say, the gravitons act as information carriers from a prior universe? Our intuition says yes, and we will follow up upon this with necessary and sufficient conditions for cyclic universe interpretations of this model in a future publication.
In this case, the mass of a graviton, would be of the order of 10^−62 grams, which would specify, then a very small initial energy if we have that we are also using the Ng approximation for infinite
quantum statistics of [16]
As well as an initial frequency of the “particles” given by
We also claim, that this procedure, is in its own way tandem with [17] which in turn has another “bubble” in the start of space-time.
We furthermore claim that additional development of this methodology will entail use of reconciling this work with page 428 of Baez, and Muniain, [18] , specifically as an alternative to the
well-researched section on Canonical quantization, used in ADM relativity i.e. what we are doing is by default coming up with an alternative to what has been done in [18] and other places, as well as
making a semi classical linkage to gravitons and entropy.
5. Seven Open Questions, Which Remain to Be Answered
To close this section, one of the remaining problems which have to be addressed in this methodology is to address what was brought up by Tolman, [19] , i.e. if we have a cyclical universe that from
each cyclical “bounce”, from cycle to cycle, entropy will increase especially at the beginning.
Our Causal structure argument has to be tweaked in order to avoid this development, which will be a topic of a future publication i.e. we need to have exact referencing of a nonzero, but not
incrementally increasing initial entropy, per start of a cosmological cycle.
The final set up of our problem will also entail the use of, also reconciling the H = 0 structure of the Causal structure boundary, with what is given for the initial expansion of the Universe as
given by [20] , i.e.
Here, the term
Last but not least, is that we have in our pre causal Pre Planckian structure,
What values of the Cosmological constant are we assuming in the Pre Planckian to Planckian Universe transition?
Either it is of the sort where
This has to be worked out, for the obvious reasons, as well as looking at, if we have an iterative process for the generation of
Not only this, we should also consider if we are looking at massive gravitons, an analytical bridge between Pre Planckian representations of Gravitons and the following Planckian, to post Planckian
space-time physics as given by a regime of space-time where we go from close to zero, or initially zero Pre Planckian space-time temperatures, to the super-hot initial conditions of inflation, i.e.
note that as given by Giovanni, the figure of 10^88 as due to gravitons can be seen to come from [5] , page 156 as
There are two questions which this raises: What would be the driving impetus to go from a low temperature pre space time temperature, then to Planck time entropy, then to the entropy of today as
given in Equation (18)?
One way to look at it would be to suggest, as done by H. Kadlecova [24] at the 12th Marcel Grossmann meeting, modifying the typical energy stress tensor by using, instead, Gyratons, with an electro-magnetic energy
density addition to effective Electromagnetic cosmological value as given by
i.e. that there be, due to effective E and M fields, a boost from an initially low vacuum energy to a higher ones, as given by Kadlecova [24] [25]
How would Equation (25), if used in Equation (11) to Equation (15), affect our Pre Planckian to Planckian physics results? This needs to be considered.
Last but not least, if we are considering massive gravitons, we should look at the following perturbative terms added to a metric tensor by massive gravitons. I.e. understand the
Here, we have that these are solutions to the following equation, as given by [27]
So the question remains how to be bridge Equation (23) and Equation (24) to the massive graviton conditions we are considering for Pre Planckian space- time? Clearly, Equation (23) and Equation (24)
are for Planckian to Post Planckian space-time physics.
Here, we are assuming for Equation (23) and Equation (24)
And use the value of the radius of the universe, as given by
How do they get bridged to the Pre Planckian regime?
One possible benefit, if we get this matter of information theory and entropy settled i.e. does the following make sense?
In an earlier document the author submitted to FQXI, in 2012, the author tried to make the following linkage between presumed super partners (SUSY), in the electroweak regime of space-time, and the
mass of non-super partner particles i.e. in 2012, the supposition was that
In an earlier document, the idea was to make a bridge between presumed total mass of Gravitinos,
Gravitons, today, which we called,
Equation (26) was a presumed “conservation law”.
The problem, in all this, is that there is still not definitive evidence of super partners in CERN! Nor may there ever be found either! Can we, then if we abandon the idea of super partners, come up
with a bridge between Pre Planckian to Planckian physics, using gravitons, along the lines of Equation (26)? This requires a review, of the issues, brought up in [28] , and to see if they are
We should note in passing that [29] [30] [31] are important considerations, with [29] giving the details of gravitational wave detection, as noted by LIGO in 2016. [30] Confirms the essence of scalar
tensor theories as a competitor to General Relativity which should be falsified, if possible, and [31] goes into more of massive body generation of gravitational waves which certainly is appropriate
here in the early universe.
With that the questions are laid out for review and consideration.
This work is supported in part by National Nature Science Foundation of China Grant No. 11375279. | {"url":"https://www.scirp.org/journal/paperinformation?paperid=78084","timestamp":"2024-11-06T00:15:28Z","content_type":"application/xhtml+xml","content_length":"125226","record_id":"<urn:uuid:f6ab9b92-ef77-45b6-aa0d-c2140fc605fe>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00447.warc.gz"} |
ssc cgl math previous year question Archives -
Hi students, Welcome to Amans Maths Blogs (AMBIPi). Are you preparing for SSC CGL Tier 1 and 2 and looking for SSC CGL Exam Math Number System Question with Solutions AMBIPi? In this article, you
will get Previous Year Mathematics Questions asked in SSC CGL Tier 1 and Tier 2, which
Read More
SSC CGL Math Number System Question Paper with Solutions
Hi students, Welcome to Amans Maths Blogs (AMBIPi). Are you preparing for SSC CGL Tier 1 and 2 and looking for SSC CGL Exam Math Number System Question with Solutions AMBIPi? In this article, you
will get Previous Year Mathematics Questions asked in SSC CGL Tier 1 and Tier 2, which
Read More
SSC CGL Math Previous Year Number System Question with Solutions
Hi students, Welcome to Amans Maths Blogs (AMBIPi). Are you preparing for SSC CGL Tier 1 and 2 and looking for SSC CGL Exam Math Number System Question with Solutions AMBIPi? In this article, you
will get Previous Year Mathematics Questions asked in SSC CGL Tier 1 and Tier 2, which
Read More
SSC CGL Previous Year Math Number System Paper with Solutions
Hi students, Welcome to Amans Maths Blogs (AMBIPi). Are you preparing for SSC CGL Tier 1 and 2 and looking for SSC CGL Exam Math Number System Paper with Solutions AMBIPi? In this article, you will
get Previous Year Mathematics Questions asked in SSC CGL Tier 1 and Tier 2, which
Read More | {"url":"https://www.amansmathsblogs.com/tag/ssc-cgl-math-previous-year-question/","timestamp":"2024-11-10T21:38:35Z","content_type":"text/html","content_length":"101158","record_id":"<urn:uuid:561041ed-d283-4454-b644-887496970aa1>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00877.warc.gz"} |
calculate normals
How do I calculate normals?
normals must indeed be at a 90° angle to the surface … how do I calculate that for a mesh?
I can't use RecalculateNormals because this smooths my mesh
you need to set the normal for each vertex. Just use an array for your normals.
Like so:
normals[i] = Vector3.up;   // set one entry per vertex index i
Vector3.up should mean a 90° angle. (correct me if I’m wrong) After you’ve set your normals in the array just use:
mesh.normals = normals;
(sry my english isn’t the best but I hope it helps. ^^’ )
Hi, meshes have the same amount of normals than vertices so if you want to have different normals for 2 faces that share 1 or more vertices: you have to duplicate vertices (and then normals).
When it’s done, I guess that Unity’s RecalculateNormals will compute a normal for the face and not an average of all the faces that share the same vertex.
If not: you can use the cross product of 2 vectors from each triangle to compute the normal. If the triangle is ABC, a cross product between AB and AC will give you a vector to normalize to create a
normal (maybe you’ll need to negate the normal to point “outside” or “inside” the mesh, as you need) | {"url":"https://discussions.unity.com/t/calculate-normals/115093","timestamp":"2024-11-06T17:31:35Z","content_type":"text/html","content_length":"28698","record_id":"<urn:uuid:6721c65b-fd68-4762-b8da-603999c1f975>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00352.warc.gz"} |
Common Core
4th Grade Common Core: 4.NBT.4
Common Core Identifier: 4.NBT.4 / Grade: 4
Curriculum: Number And Operations In Base Ten: Use Place Value Understanding And Properties Of Operations To Perform Multi-Digit Arithmetic.
Detail: Fluently add and subtract multi-digit whole numbers using the standard algorithm.
212 Common Core State Standards (CCSS) aligned worksheets found: | {"url":"https://www.superteacherworksheets.com/common-core/4.nbt.4.html","timestamp":"2024-11-12T02:34:41Z","content_type":"text/html","content_length":"536096","record_id":"<urn:uuid:bd957546-0e9e-453e-a06c-b47cac480c31>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00248.warc.gz"} |
DI - Routes & Networks
DI - Routes & Networks - Previous Year CAT/MBA Questions
The best way to prepare for DI - Routes & Networks is by going through the previous year DI - Routes & Networks CAT questions. Here we bring you all previous year DI - Routes & Networks CAT questions
along with detailed solutions.
It would be best if you clear your concepts before you practice previous year DI - Routes & Networks CAT questions.
Answer the following questions based on the information given below:
A, B, C, D, E and F are the six police stations in an area, which are connected by streets as shown below. Four teams – Team 1, Team 2, Team 3 and Team 4 – patrol these streets continuously between
09:00 hrs. and 12:00 hrs. each day.
The teams need 30 minutes to cross a street connecting one police station to another. All four teams start from Station A at 09:00 hrs. and must return to Station A by 12:00 hrs. They can also pass
via Station A at any point on their journeys.
The following facts are known.
1. None of the streets has more than one team traveling along it in any direction at any point in time.
2. Teams 2 and 3 are the only ones in stations E and D respectively at 10:00 hrs.
3. Teams 1 and 3 are the only ones in station E at 10:30 hrs.
4. Teams 1 and 4 are the only ones in stations B and E respectively at 11:30 hrs.
5. Team 1 and Team 4 are the only teams that patrol the street connecting stations A and E.
6. Team 4 never passes through Stations B, D or F.
1. CAT 2023 LRDI Slot 3 | DI - Routes & Networks CAT Question
Which one among the following stations is visited the largest number of times?
• (a)
Station C
• (b)
Station D
• (c)
Station E
• (d)
Station F
Answer: Option C
Text Explanation :
We get the following schedule based on the conditions given in the question.
Number of times each station is visited (excluding the starting station)
C - 3
D - 2
E - 4/5
F - 3
Of the stations mentioned in options, E is visited the maximum number of times.
Hence, option (c).
2. CAT 2023 LRDI Slot 3 | DI - Routes & Networks CAT Question
How many times do the teams pass through Station B in a day?
Answer: 2
Text Explanation :
We get the following schedule based on the conditions given in the question.
Station B is passed through twice by T1 at 9:30 am and at 11:30 am.
Hence, 2.
3. CAT 2023 LRDI Slot 3 | DI - Routes & Networks CAT Question
Which team patrols the street connecting Stations D and E at 10:15 hrs?
• (a)
Team 3
• (b)
Team 1
• (c)
Team 2
• (d)
Team 4
Answer: Option A
Text Explanation :
We get the following schedule based on the conditions given in the question.
Team 3 is patrolling D-E route from 10 to 10:30.
Hence, option (a).
4. CAT 2023 LRDI Slot 3 | DI - Routes & Networks CAT Question
How many times does Team 4 pass through Station E in a day?
Answer: 2
Text Explanation :
We get the following schedule based on the conditions given in the question.
Team 4 passes through station E at 9:30 am and at 11:30 am, i.e., 2 times.
Hence, 2.
5. CAT 2023 LRDI Slot 3 | DI - Routes & Networks CAT Question
How many teams pass through Station C in a day?
Answer: Option D
Text Explanation :
We get the following schedule based on the conditions given in the question.
Team 3 and 4, i.e., 2 teams pass through C at least once.
Hence, option (d).
Directions for next 5 questions
Given above is the schematic map of the metro lines in a city with rectangles denoting terminal stations (e.g. A), diamonds denoting junction stations (e.g. R) and small filled-up circles denoting
other stations. Each train runs either in east-west or north-south direction, but not both. All trains stop for 2 minutes at each of the junction stations on the way and for 1 minute at each of the
other stations. It takes 2 minutes to reach the next station for trains going in east-west direction and 3 minutes to reach the next station for trains going in north-south direction. From each
terminal station, the first train starts at 6 am; the last trains leave the terminal stations at midnight. Otherwise, during the service hours, there are metro service every 15 minutes in the
north-south lines and every 10 minutes in the east west lines. A train must rest for at least 15 minutes after completing a trip at the terminal station, before it can undertake the next trip in the
reverse direction. (All questions are related to this metro service only. Assume that if someone reaches a station exactly at the time a train is supposed to leave, (s)he can catch that train.)
6. CAT 2022 LRDI Slot 1 | DI - Routes & Networks CAT Question
If Hari is ready to board a train at 8:05 am from station M, then when is the earliest that he can reach station N?
• (a)
9:06 am
• (b)
9:01 am
• (c)
9:13 am
• (d)
9:11 am
Answer: Option D
Text Explanation :
A train leaves M every 10 minutes starting at 6 am.
If Hari reaches M at 8:05, he can catch the train that leaves at 8:10 am.
Time taken by Hari to reach N:
There are a total of 20 station-to-station segments from M till N, hence total travelling time = 20 × 2 = 40 minutes.
There are 2 junctions, hence stoppage time = 2 × 2 = 4 mins
There are 17 other stations, hence stoppage time = 17 × 1 = 17 mins
∴ Total time taken = 40 + 4 + 17 = 61 mins.
∴ Hari reaches N at 8:10 am + 61 minutes = 9:11 am.
Hence, option (d).
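The same arithmetic as a small sketch (the segment and halt counts are read off the map, so treat them as assumptions):

```python
def travel_minutes(segments, junction_halts, other_halts, minutes_per_segment):
    """Running time plus 2-minute halts at junctions and 1-minute halts elsewhere."""
    return segments * minutes_per_segment + 2 * junction_halts + 1 * other_halts

# M -> N on the east-west line: 20 segments of 2 min, 2 junction halts, 17 other halts.
print(travel_minutes(20, 2, 17, 2))   # 61 -> board at 8:10 am, arrive 9:11 am
```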
7. CAT 2022 LRDI Slot 1 | DI - Routes & Networks CAT Question
If Priya is ready to board a train at 10:25 am from station T, then when is the earliest that she can reach station S?
• (a)
11:12 am
• (b)
11:07 am
• (c)
11:28 am
• (d)
11:22 am
Answer: Option A
Text Explanation :
Priya can go from T to S via R or V.
Case 1: T – R – S
The first train to depart T will be at (6 am + 9 + 2 + 2 =) 6:13 am.
∴ Now a train will depart T every 15 minutes, hence the train which Priya can catch for T → R is at 10:28.
Time taken to reach R = 12 + 3 = 15 mins.
∴ Priya reaches R at 10:43 am.
The first train to depart R will be at (6 am + 8 + 3 + 2 =) 6:13 am.
∴ Now a train will depart R every 10 minutes, hence the train which Priya can catch for R → S is at 10:43 itself.
Time taken to reach S = 20 + 9 = 29 mins.
∴ Priya reaches S at 11:12 am.
Case 2: T – V – S
The first train to depart T will be at (6 am + 8 + 3 + 2 =) 6:13 am.
∴ Now a train will depart T every 10 minutes, hence the train which Priya can catch for T → V is at 10:33 itself.
Now time taken from T → V is 29 mins and time taken from V → S is 15 mins.
Even if Priya does not have to wait at V, she would reach S only by 10:33 am + 29 + 15 = 11:17 am.
∴ The earliest Priya can reach S is 11:12 am (via R).
Hence, option (a).
8. CAT 2022 LRDI Slot 1 | DI - Routes & Networks CAT Question
Haripriya is expected to reach station S late. What is the latest time by which she must be ready to board at station S if she must reach station B before 1 am via station R?
• (a)
11:35 pm
• (b)
11:49 pm
• (c)
11:39 pm
• (d)
11:43 pm
Answer: Option C
Text Explanation :
Route taken by Haripriya: S → R → B
Time taken from S → R:
Travel time = 10 × 2 = 20 mins
Stoppage time = 9 × 1 = 9 mins
Total time = 20 + 9 = 29 mins
Similarly, Time taken from R → B: 7 × 3 + 7 = 28 mins
In the north-south direction, the first train from A arrives at R at time = 6 am + (3 × 3) + (2 × 1) mins = 6:11 am.
Since R is a junction so this train will halt for 2 minutes at R and leave at 6:13.
Every 15 minutes, a train starts from A in the north-south direction.
The last train that leaves A will be at 12:00 am and it will leave R at 12:13 am, so Haripriya must reach R till 12:13 am.
If she catches this train from R at 12:13, she will reach B by 12:13 + 28 = 12:41 am.
Travelling time between S and R = (10 × 2) + (9 × 1) = 29
So Haripriya must board the train at S by (12:13 - 29 =) 11:44 pm
In the east-west direction, the first train from N arrives at S at time = 6 am + (6 × 2) + (5 × 1) = 6:17 am.
Since S is a junction so this train will halt for 2 minutes at S and leave at 6:19.
Every 10 minutes, a train starts from N in the east-west direction.
Therefore, Haripriya should board the train which leaves S at 11:39.
Hence, option (c).
9. CAT 2022 LRDI Slot 1 | DI - Routes & Networks CAT Question
What is the minimum number of trains that are required to provide the service on the AB line (considering both north and south directions)?
Answer: 8
Text Explanation :
A train leaves A for B and B to A every 15 minutes starting from 6 am.
Total time taken by a north-south train:
There are a total of 10 station-to-station segments from A till B, hence total travelling time = 10 × 3 = 30 minutes.
There are 2 junctions, hence stoppage time = 2 * 2 = 4 mins
There are 7 other stations, hence stoppage time = 7 * 1 = 7 mins
∴ Total time taken = 30 + 4 + 7 = 41 mins.
The train which starts from A to B at 6 am reaches B at 6:41 am. It needs to rest for 15 minutes before it can start the journey back to A i.e., it can start it’s journey at 6:56 or after that.
As per schedule north-south trains leave every 15 minutes starting from 6 am. Hence, the train that starts from A to B at 6 am, can start its backward journey at 7 am.
∴ There have to be trains starting at 6 am, 6:15 am, 6:30 am and 6:45 am before trains coming from A can start their journey back.
Hence, we need at least 4 trains from B to A and similarly from A to B.
∴ We need at least 8 trains on the route AB.
Hence, 8.
10. CAT 2022 LRDI Slot 1 | DI - Routes & Networks CAT Question
What is the minimum number of trains that are required to provide the service in this city?
Answer: 48
Text Explanation :
Consider the solution to previous question.
We need at least 8 trains on AB route as well as 8 trains on CD route.
∴ 16 trains on north-south routes.
Total time taken by a east-west train = 61 mins (calculated in first question of this set)
A train going from M to N at 6 am will reach N at 7:01 and will be ready for journey back at 7:16 am. A train starts from N every 10 mins. Hence, this train can start its journey at 7:20 am.
∴ There have to be trains starting at 6 am, 6:10 am, 6:20 am, 6:30 am, 6:40 am, 6:50 am, 7 am and 7:10 am before trains coming from M can start their journey back.
Hence, we need at least 8 trains from N to M and similarly from M to N.
∴ We need at least 16 trains on the route MN and 16 trains on PQ route
∴ 32 trains on east-west routes.
⇒ we need at least 16 + 32 = 48 trains.
Hence, 48.
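A small sketch of the counting argument used in the last two answers: the ceiling simply counts how many scheduled departures happen before the first train of the day is ready to turn around (trip times and headways are the ones derived above):

```python
import math

def min_trains_on_line(trip_min, rest_min, headway_min):
    """Trains needed per direction before the first arrival can be reused, both directions."""
    per_direction = math.ceil((trip_min + rest_min) / headway_min)
    return 2 * per_direction

print(min_trains_on_line(41, 15, 15))   # 8  -> each north-south line (A-B, C-D)
print(min_trains_on_line(61, 15, 10))   # 16 -> each east-west line (M-N, P-Q)
print(2 * 8 + 2 * 16)                   # 48 trains for the whole network
```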
Answer the next 5 questions based on the information given below:
Every day a widget supplier supplies widgets from the warehouse (W) to four locations – Ahmednagar (A), Bikrampore (B), Chitrachak (C), and Deccan Park (D). The daily demand for widgets in each
location is uncertain and independent of each other. Demands and corresponding probability values (in parenthesis) are given against each location (A, B, C, and D) in the figure below. For example,
there is a 40% chance that the demand in Ahmednagar will be 50 units and a 60% chance that the demand will be 70 units. The lines in the figure connecting the locations and warehouse represent
two-way roads connecting those places with the distances (in km) shown beside the line. The distances in both the directions along a road are equal. For example, the road from Ahmednagar to
Bikrampore and the road from Bikrampore to Ahmednagar are both 6 km long.
Every day the supplier gets the information about the demand values of the four locations and creates the travel route that starts from the warehouse and ends at a location after visiting all the
locations exactly once. While making the route plan, the supplier goes to the locations in decreasing order of demand. If there is a tie for the choice of the next location, the supplier will go to
the location closest to the current location. Also, while creating the route, the supplier can either follow the direct path (if available) from one location to another or can take the path via the
warehouse. If both paths are available (direct and via warehouse), the supplier will choose the path with minimum distance.
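A sketch of the supplier's routing rule in code, following the description above. The road table and demands below are placeholders (only the 6 km Ahmednagar-Bikrampore road is quoted in the text), so the printed number is illustrative rather than an answer to any question; see the solutions below for the actual figures.

```python
WAREHOUSE = 'W'
# Placeholder two-way roads (km); the real values come from the figure.
roads = {frozenset(e): d for e, d in [(('W', 'A'), 5), (('W', 'B'), 8),
                                      (('W', 'C'), 12), (('A', 'B'), 6),
                                      (('B', 'C'), 4)]}

def direct(x, y):
    return roads.get(frozenset((x, y)))              # None if there is no direct road

def leg(x, y):
    """Shorter of the direct road and the detour via the warehouse."""
    options = [direct(x, y)]
    if direct(x, WAREHOUSE) and direct(WAREHOUSE, y):
        options.append(direct(x, WAREHOUSE) + direct(WAREHOUSE, y))
    return min(d for d in options if d is not None)

def route_length(demands):
    """Visit every location once, in decreasing demand; ties go to the nearer one."""
    current, total, remaining = WAREHOUSE, 0, dict(demands)
    while remaining:
        nxt = max(remaining, key=lambda loc: (remaining[loc], -leg(current, loc)))
        total += leg(current, nxt)
        current = nxt
        del remaining[nxt]
    return total

print(route_length({'A': 70, 'B': 60, 'C': 70}))      # 26 with these placeholder numbers
```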
11. CAT 2022 LRDI Slot 2 | DI - Routes & Networks CAT Question
If the last location visited is Ahmednagar, then what is the total distance covered in the route (in km)?
[Note: There is an ambiguity in this question and hence was discarded by IIM Bangalore.]
Answer: 35
Text Explanation :
This questions was discarded by IIM Bangalore.
A cannot be the last city to be visited while satisfying all the conditions given in the caselet.
A – 50 (40%); 70 (60%)
B – 40 (30%); 60 (70%)
C – 70 (30%); 100 (70%)
D – 30 (40%); 50 (60%)
For Ahmednagar to be last, it should have the least demand of the 4 cities.
⇒ The only way Ahmednagar’s demand can be the least of the 4 cities is when its demand is 50.
Now, demand of all other cities should be greater than or equal to 50.
⇒ Demand at
B = 60
C = 70 or 100
D = 50
∴ Sequence of cities according to demand will be C → B → D → A
Distance travelled from
Warehouse → C = 12
C → B = 4
B → W → D = 12
D → W → A = 7 [shortest route from D to A is through Warehouse and not the direct route]
∴ Total distance travelled = 12 + 4 + 12 + 7 = 35.
Ambiguity: There is some ambiguity in this question. Once you reach B, demand at both A and D is same (i.e., 50). You would go the nearest of A and D which is A and hence A cannot be the last city to
be visited then.
Hence, this question was discarded.
Note: The answer given by IIM-B in the cadidate response sheet was 35.
12. CAT 2022 LRDI Slot 2 | DI - Routes & Networks CAT Question
If the total number of widgets delivered in a day is 250 units, then what is the total distance covered in the route (in km)?
Answer: 38
Text Explanation :
A – 50 (40%); 70 (60%)
B – 40 (30%); 60 (70%)
C – 70 (30%); 100 (70%)
D – 30 (40%); 50 (60%)
Maximum demand possible = 70 + 60 + 100 + 50 = 280
Actual demand is 250. This is possible only when demand at C is 70 instead of 100.
∴ Actual demands at various cities is:
A → 70
B → 60
C → 70
D → 50
Sequence of cities visited is: A → C → B → D
[A is closer to warehouse than C, hence first city to be visited will be A.]
∴ Total distance travelled = 5 + 17 + 4 + 12 = 38.
Hence, 38.
13. CAT 2022 LRDI Slot 2 | DI - Routes & Networks CAT Question
What is the chance that the total number of widgets delivered in a day is 260 units and the route ends at Bikrampore?
• (a)
• (b)
• (c)
• (d)
Answer: Option A
Text Explanation :
A – 50 (40%); 70 (60%)
B – 40 (30%); 60 (70%)
C – 70 (30%); 100 (70%)
D – 30 (40%); 50 (60%)
For route to end at B, B should have least demand i.e., 40.
Total demand is 260, hence demand at other cities should be higher of the two values.
∴ Demand at A = 70 (60%)
Demand at B = 40 (30%)
Demand at C = 100 (70%)
Demand at D = 50 (60%)
∴ Required probability = 60% × 30% × 70% × 60%
= 0.6 × 0.3 × 0.7 × 0.6 = 0.0756 = 7.56%
Hence, option (a).
14. CAT 2022 LRDI Slot 2 | DI - Routes & Networks CAT Question
If the first location visited from the warehouse is Ahmednagar, then what is the chance that the total distance covered in the route is 40 km?
• (a)
• (b)
• (c)
• (d)
Answer: Option C
Text Explanation :
A – 50 (40%); 70 (60%)
B – 40 (30%); 60 (70%)
C – 70 (30%); 100 (70%)
D – 30 (40%); 50 (60%)
If first city visited is Ahmednagar, this is possible when A’s demand is highest. This is only possible when A’s demand is 70.
∴ Demand at C should be 70
Demand at B = 40 or 60
Demand at D = 30 or 50
∴ Sequence of cities can be
A → C → B → D: distance travelled = 38 kms
A → C → D → B: distance travelled = 40 kms
∴ Demand at D ≥ B
⇒ Demand at D = 50 (60%) and demand at B = 40 (30%)
⇒ Required probability = 60% × 30% = 18%
Hence, option (c).
15. CAT 2022 LRDI Slot 2 | DI - Routes & Networks CAT Question
If Ahmednagar is not the first location to be visited in a route and the total route distance is 29 km, then which of the following is a possible number of widgets delivered on that day?
• (a)
• (b)
• (c)
• (d)
Answer: Option A
Text Explanation :
A – 50 (40%); 70 (60%)
B – 40 (30%); 60 (70%)
C – 70 (30%); 100 (70%)
D – 30 (40%); 50 (60%)
If A is not the first city to be visited, the first city will have to be C.
Distance travelled from Warehouse to C = 12 kms.
∴ To visit the remaining 3 cities, distance travelled should be (29 – 12 =) 17 kms.
There are two possibilities for this.
Case 1: W → C → B → A → D
highest demand is from C i.e., 70 or 100
2^nd highest demand is from B i.e., 60
3^rd highest demand is from A i.e., 50
4^th highest demand is from D i.e., 30
Total widgets delivered can be 210 or 240
Case 2: W → C → D → A → B
highest demand is from C i.e., 70 or 100
2^nd highest demand is from D i.e., 50
3^rd highest demand is from A i.e., 50
4^th highest demand is from B i.e., 40
Total widgets delivered can be 210 or 240.
[Note: shortest route from A to D or vice-versa is through the warehouse.]
Hence, option (a).
Answer the following question based on the information given below.
The figure below shows the street map for a certain region with the street intersections marked from a through l. A person standing at an intersection can see along straight lines to other
intersections that are in her line of sight and all other people standing at these intersections. For example, a person standing at intersection g can see all people standing at intersections b, c,
e, f, h, and k. In particular, the person standing at intersection g can see the person standing at intersection e irrespective of whether there is a person standing at intersection f.
Six people U, V, W, X, Y, and Z, are standing at different intersections. No two people are standing at the same intersection.
The following additional facts are known.
1. X, U, and Z are standing at the three corners of a triangle formed by three street segments.
2. X can see only U and Z.
3. Y can see only U and W.
4. U sees V standing in the next intersection behind Z.
5. W cannot see V or Z. 6. No one among the six is standing at intersection d.
16. CAT 2019 LRDI Slot 1 | DI - Routes & Networks CAT Question
Who is standing at intersection a?
• (a)
No one
• (b)
• (c)
• (d)
Answer: Option A
Text Explanation :
From (1), X, U, Z are at b,c, g or at b, f, g in some order. Thus, X, U or Z is definitely at g.
Let X is at g.
Case (i): U and Z at b and f.
From (4), U has to be at b, Z at f and V at j. From (2), no one is at c, e, k and h. As Y sees both U and W, Y must be at a and W at i. But then W sees V, which contradicts (5).
Thus, this case is not valid.
Case (ii): U and Z at b and c.
From (4), U has to be at c, Z at b and V at a. From (2), no one is at c, e,f, k and h. But then Y must be at I, j or l. But in that case Y cannot see U. Thus, this case is not valid.
Therefore, X cannot be at g.
Let Z is at g.
From (4), U is at c or f.
Case (i) U is at c and hence V is at k and X is at a. Again there is no place for V. This case is invalid.
Case (ii) U is at f, X is at b and V at h. V will be at e as he sees U. But then he will be able to see Z also. So this case is also invalid.
Thus, U is at g. Therefore, Z is at f, V at e and X at b. So, from (2) no one will be at a, c and j. From (3) and (5), it can be concluded that Y is at k and W at l. Thus, we have
No one is standing at intersection a.
Hence, option (a).
17. CAT 2019 LRDI Slot 1 | DI - Routes & Networks CAT Question
Who can V see?
• (a)
Z only
• (b)
U only
• (c)
U and Z only
• (d)
U, W and Z only
Answer: Option C
Text Explanation :
Consider the solution to first question of this set.
V can see only U and Z.
Hence, option (c).
18. CAT 2019 LRDI Slot 1 | DI - Routes & Networks CAT Question
What is the minimum number of street segments that X must cross to reach Y?
Answer: Option A
Text Explanation :
Consider the solution to first question of this set.
X must reach Y via g so that he would cross minimum street segments. i.e., he would cross 2 street segments.
Hence, option (a).
19. CAT 2019 LRDI Slot 1 | DI - Routes & Networks CAT Question
Should a new person stand at intersection d, who among the six would she see?
• (a)
V and X only
• (b)
W and X only
• (c)
U and Z only
• (d)
U and W only
Answer: Option B
Text Explanation :
Consider the solution to first question of this set.
A new person standing at d would see W and X only.
Hence, option (b).
Answer the following question based on the information given below.
A new airlines company is planning to start operations in a country. The company has identified ten different cities which they plan to connect through their network to start with. The flight
duration between any pair of cities will be less than one hour. To start operations, the company has to decide on a daily schedule.
The underlying principle that they are working on is the following:
Any person staying in any of these 10 cities should be able to make a trip to any other city in the morning and should be able to return by the evening of the same day.
20. CAT 2017 LRDI Slot 1 | DI - Routes & Networks CAT Question
If the underlying principle is to be satisfied in such a way that the journey between any two cities can be performed using only direct (non-stop) flights, then the minimum number of direct flights
to be scheduled is:
• (a)
• (b)
• (c)
• (d)
Answer: Option C
Text Explanation :
Since there are a total of 10 cities we can have a combination of 2 cities in 10C2 or 45 ways. Now, if for example we have 2 cities City 1 and City 2, then the cities can be connected in the
following 4 ways:
Morning flight from city 1 to city 2
Morning flight from city 2 to city 1
Evening flight from city 1 to city 2
Evening flight from city 2 to city 1
So the minimum number of direct flights to connect all cities is 45 × 4 or 180 ways.
Hence, option (c).
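The same count in two lines of Python, for anyone who wants to sanity-check it:

```python
import math

pairs = math.comb(10, 2)   # 45 unordered city pairs
print(pairs * 4)           # 180: each pair needs both directions, morning and evening
```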
21. CAT 2017 LRDI Slot 1 | DI - Routes & Networks CAT Question
Suppose three of the ten cities are to be developed as hubs. A hub is a city which is connected with every other city by direct flights each way, both in the morning as well as in the evening. The
only direct flights which will be scheduled are originating and / or terminating in one of the hubs. Then the minimum number of direct flights that need to be scheduled so that the underlying
principle of the airline to serve all the ten cities is met without visiting more than one hub during one trip is:
• (a)
• (b)
• (c)
• (d)
Answer: Option C
Text Explanation :
Let us suppose that City 1, City 2 and City 3 are the hubs and City 4, City 5... upto City 10 are 7 of the other 10 cities.
Now City 1, City 2 and City 3 connect with each other in 4 possible ways (as mentioned in the answer to the previous question).
Now 2 out of the 3 hub cities can be chosen in 3C2 or 3 ways. So the total number of ways City 1, City 2 and City 3 connect with each other is 3 × 4 or 12 ways.
Now City 1 will connect with each of City 4, City 5, City 6 ..... City 10 in 4 possible ways (as explained in the previous questions answer).
So, total number of flights between City 1 and the cities 4 to 10 is 28.
Similarly there will be 28 flights each for City 2 and City 3 connecting them with the 7 cities. So the total minimum number of direct flights will be 12 + 28 + 28 + 28 = 96.
Hence, option (c).
22. CAT 2017 LRDI Slot 1 | DI - Routes & Networks CAT Question
Suppose the 10 cities are divided into 4 distinct groups G1, G2, G3, G4 having 3, 3, 2 and 2 cities respectively and that G1 consists of cities named A, B and C. Further suppose that direct flights
are allowed only between two cities satisfying one of the following:
1. Both cities are in G1
2. Between A and any city in G2
3. Between B and any city in G3
4. Between C and any city in G4
Then the minimum number of direct fights that satisfies the underlying principle of the airline is:
Answer: 40
Text Explanation :
Now A, B and C are in G1. As there is no restriction on flights within G1, the total number of flights connecting pairs of cities in G1 is 3C2 × 4 = 12
A can connect with any of the 3 cities in G2 in 3C1 × 4 = 12 ways
B can connect with any of the 2 cities in G3 in 2C1 × 4 = 8 ways
C can connect with any of the 2 cities in G4 in 2C1 × 4 = 8 ways
Total minimum number of direct flights that satisfy the underlying principle is 12 + 12 + 8 + 8 = 40
Answer: 40
23. CAT 2017 LRDI Slot 1 | DI - Routes & Networks CAT Question
Suppose the 10 cities are divided into 4 distinct groups G1, G2, G3, G4 having 3, 3, 2 and 2 cities respectively and that G1 consists of cities named A, B and C. Further, suppose that direct flights
are allowed only between two cities satisfying one of the following:
1. Both cities are in G1
2. Between A and any city in G2
3. Between B and any city in G3
4. Between C and any city in G4
However, due to operational difficulties at A, it was later decided that the only flights that would operate at A would be those to and from B. Cities in G2 would have to be assigned to G3 or to G4.
What would be the maximum reduction in the number of direct flights as compared to the situation before the operational difficulties arose?
Answer: 4
Text Explanation :
Given the above conditions, we know that in G1, A would connect to B and B would connect with C. B would connect with any of the 2 cities in G3 and C would connect with any of the 2 cities in G4.
Now the 3 cities from G2 that were connected to A earlier can now connect to one of the cities in G3 or G4. There is no flight connection possible now between A and C in G1.
So the total number of flight connections as compared to the number of flight connections in the previous question will be less by number of flights connecting A and C i.e., 4.
Answer: 4
Answer the following question based on the information given below.
Four cars need to travel from Akala (A) to Bakala (B). Two routes are available, one via Mamur (M) and the other via Nanur (N). The roads from A to M, and from N to B, are both short and narrow. In
each case, one car takes 6 minutes to cover the distance, and each additional car increases the travel time per car by 3 minutes because of congestion. (For example, if only two cars drive from A to
M, each car takes 9 minutes.) On the road from A to N, one car takes 20 minutes, and each additional car increases the travel time per car by 1 minute. On the road from M to B, one car takes 20
minutes, each additional car increases the travel time per car by 0.9 minute.
The police department orders each car to take a particular route in such a manner that it is not possible for any car to reduce its travel time by not following the order, while the other cars are
following the order.
24. CAT 2017 LRDI Slot 1 | DI - Routes & Networks CAT Question
How many cars would be asked to take the route A-N-B, that is Akala-Nanur-Bakala route, by the police department?
Answer: 2
Text Explanation :
The police department would ask 2 cars to take the A-N-B route and 2 cars to take the A-M-B route, because 2 cars taking each of these routes would minimize the time for each of the 4 cars.
2 cars taking A-M-B route – 9 + 20.9 = 29.9 minutes.
2 cars taking A-N-B route – 21 + 9 = 30 minutes.
Increasing to 3 cars on the A-M-B route would increase travel time of each car by 3.9 minutes and increasing to 3 cars on the A-N-B route would increase travel time of each car by 4 minutes. So 2 cars
would take the A-N-B route.
Answer: 2
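A small sketch of the congestion arithmetic, which also shows why no car gains by deviating from the 2-2 split (the +3, +1 and +0.9 minute increments are the ones given in the set):

```python
def per_car_time(cars_on_AMB, cars_on_ANB):
    """Per-car travel time on each route for a given split of the four cars."""
    amb = (6 + 3 * (cars_on_AMB - 1)) + (20 + 0.9 * (cars_on_AMB - 1))   # A-M then M-B
    anb = (20 + 1 * (cars_on_ANB - 1)) + (6 + 3 * (cars_on_ANB - 1))     # A-N then N-B
    return amb, anb

print(per_car_time(2, 2))   # (29.9, 30) -- the ordered split
print(per_car_time(3, 1))   # a third car on A-M-B would face 33.8 min, worse than its current 30
print(per_car_time(1, 3))   # a third car on A-N-B would face 34 min, worse than its current 29.9
```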
25. CAT 2017 LRDI Slot 1 | DI - Routes & Networks CAT Question
If all the cars follow the police order, what is the difference in travel time (in minutes) between a car which takes the route A-N-B and a car that takes the route A-M-B?
• (a)
• (b)
• (c)
• (d)
Answer: Option B
Text Explanation :
The police department would ask 2 cars to take the A-N-B route and 2 cars to take the A-M-B route, because 2 cars taking each of these routes would minimize the time for each of the 4 cars.
2 cars taking A-M-B route – 9 + 20.9 = 29.9 minutes.
2 cars taking A-N-B route – 21 + 9 = 30 minutes.
Increasing to 3 cars on the A-M-B would increase travel time of each car by 3.9 minutes and increasing to 3 cars on the A-N-B route would increase travel time of each car by 4 minutes.
Minimum travel time would be when 2 cars are assigned to each of the 2 routes by the police department.
The difference in travel time when the cars follow the police order is 30-29.9 or 0.1 minutes.
Hence, option (b). | {"url":"https://www.apti4all.com/cat-mba/topic-wise-preparation/lrdi/cat/di-routes-networks","timestamp":"2024-11-05T23:25:52Z","content_type":"text/html","content_length":"186220","record_id":"<urn:uuid:861b5cce-aaa4-42e0-8033-85bd36151062>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00628.warc.gz"} |
Ben Lund gave a talk on radial projections in a vector space over a finite field at the Discrete Math Seminar - Discrete Mathematics Group
Ben Lund gave a talk on radial projections in a vector space over a finite field at the Discrete Math Seminar
On June 27, 2022, Ben Lund from the IBS Discrete Mathematics Group gave a talk at the Discrete Math Seminar on a large set of points with small radial projections in a vector space over a finite
field. The title of his talk was “Radial projections in finite space“. | {"url":"https://dimag.ibs.re.kr/2022/radial-projections/","timestamp":"2024-11-12T13:27:11Z","content_type":"text/html","content_length":"141402","record_id":"<urn:uuid:32c8c75f-4a72-4694-b48d-e9cbb913f906>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00809.warc.gz"} |
Cost-sharing scheduling games on restricted unrelated machines
We study a very general cost-sharing scheduling game. An instance consists of k jobs and m machines and an arbitrary weighted bipartite graph denoting the jobs strategies. An edge connecting a job
and a machine specifies that the job may choose the machine; edge weights correspond to processing times. Each machine has an activation cost that needs to be covered by the jobs assigned to it. Jobs
assigned to a particular machine share its cost proportionally to the load they generate. Our game generalizes singleton cost-sharing games with weighted players. We provide a complete analysis of
the game with respect to equilibrium existence, computation, convergence and quality – with respect to the total cost and the maximal cost. We study both unilateral and coordinated deviations. We
show that the main factor in determining the stability of an instance and the quality of a stable assignment is the machines' activation-cost. Games with unit-cost machines are generalized ordinal
potential games, and every instance has an optimal solution which is also a pure Nash equilibrium (PNE). On the other hand, with arbitrary-cost machines, a PNE is guaranteed to exist only for very
limited instances, and the price of stability is linear in the number of players. Also, the problem of deciding whether a given game instance has a PNE is NP-complete. In our analysis of coordinated
deviations, we characterize instances for which a strong equilibrium exists and can be calculated efficiently, and show tight bounds for the strong price of anarchy and the price of stability.
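A minimal sketch of the proportional cost-sharing rule described in the abstract (the function and the example numbers are mine, not the paper's notation):

```python
def cost_shares(activation_cost, loads):
    """Each job assigned to a machine pays the activation cost in proportion to its load."""
    total = sum(loads)
    return [activation_cost * load / total for load in loads]

# A hypothetical machine with activation cost 10 shared by two jobs of load 2 and 3.
print(cost_shares(10, [2, 3]))   # [4.0, 6.0]
```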
Bibliographical note
Publisher Copyright:
© 2016 Elsevier B.V.
• Cost-sharing games
• Equilibrium inefficiency
• Nash equilibrium
• Scheduling on restricted unrelated machines
• Strong equilibrium
ASJC Scopus subject areas
• Theoretical Computer Science
• General Computer Science
Related research output
• 1 Conference contribution
• Avni, G. & Tamir, T., in Algorithmic Game Theory - 8th International Symposium, SAGT 2015, Hoefer, M. (ed.), Springer Verlag, p. 69-81 (Lecture Notes in Computer Science; vol. 9347).
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review | {"url":"https://cris.haifa.ac.il/en/publications/cost-sharing-scheduling-games-on-restricted-unrelated-machines","timestamp":"2024-11-10T06:50:57Z","content_type":"text/html","content_length":"66456","record_id":"<urn:uuid:512df993-d091-4266-a0aa-4bb2d4afa695>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00021.warc.gz"} |
Fred Schenkelberg, Author at No MTBF
“What’s the MTBF of a Human?” That’s a bit of a strange question?
Guest post by Adam Bahret
I ask this question in my Reliability 101 course. Why ask such a weird question? I’ll tell you why. Because MTBF is the worst, most confusing, crappy metric used in the reliability discipline. Ok
maybe that is a smidge harsh, it does have good intentions. But the amount of damage that has been done by the misunderstanding it has caused is horrendous.
MTBF stands for “Mean Time Between Failure.” It is the inverse of failure rate. An MTBF of 100,000 hrs/failure is a failure rate of 1/100,000 fails/hr = .00001 fails/hr. Those are numbers, what does
that look like in operation? Continue reading MTBF of a Human
MTBF Paradox: Case Study
Guest Post by Msc Teofilo Cortizo
The MTBF calculation is widely used to evaluate the reliability of parts and equipment, in the industry is usually defined as one of the key performance indicators. This short article is intended to
demonstrate in practice how we can fool ourselves by evaluating this indicator in isolation. Continue reading MTBF Paradox: Case Study
A Novel Reason to Use MTBF
Thanks to a reader that noticed my question on why MTBF came into existence, we have a new (new to me at least) rationale for using MTBF. Basically, MTBF provides clarity on the magnitude of a
number, because a number in scientific notation is potentially confusing.
What is doubly concerning is the use of MTTF failure rate values in ISO standards dealing with system safety.
Let’s explore the brief email exchange and my thoughts. Continue reading A Novel Reason to Use MTBF
What Does the MTBF Mean?
Guest post by Msc Teofilo Cortizo
Within maintenance management, MTBF (Mean Time Between Failures) is the most important KPI after Physical Availability. Unlike MTTF (Mean Time To Failure), which relates directly to available equipment time, MTBF also includes the time spent in repair. That is, its count starts at a given failure and stops only when that fault has been remedied, the equipment restarted, and the next failure has occurred. According to ISO 12849:2013, this indicator can only be used for repairable equipment; MTTF is the equivalent for non-repairable equipment.
The graphic below illustrates these occurrences:
Figure 01: Mean Time Between Failures
Calculating the MTBF in the Figure 01, we have added the times T1 and T2 and divided by two. That is, the average of all times between one failure and another and its return is calculated. It is,
therefore, a simple arithmetical calculation. But what does it mean?
Generally speaking, this indicator is associated with a reliability quality of assets or asset systems, and may even reach a repairable item, although it is rarer to have data available to that
detail. Maintenance managers set some benchmark numbers and track performance on a chart over time. In general, the higher the MTBF the better, or fewer times of breaks and repairs over the analyzed
Once we have fixed the concepts, some particularities need to be answered:
1. Can we establish periodicity of a maintenance plan based on MTBF time?
2. Can I calculate my failure rates based on my MTBF?
3. Can I calculate my probability of failure based on my MTBF?
4. If the MTBF of my asset or system is 200 hours, after that time will it fail?
It is interesting to answer these questions separately:
1. Can we establish periodicity of a maintenance plan based on MTBF time?
The MTBF is an average number calculated from a set of values. That is, these values can be grouped into a histogram to generate a data distribution where the average value is its MTBF, or the
average of the data. Imagine that this distribution follows the Gaussian law and we have a Normal curve that was modeled based on the failure data. The chart below shows that the MTBF is positioned
in the middle of the chart.
Figure 02: Normal Distribution Model
In a modeled PDF (probability density function) curve of failure times, the mean value, or the MTBF, is only reached after 50% of the failures have occurred (for the symmetric Normal model above). If we implement the preventive plan with a frequency equal to the MTBF time, the item already has a 50% probability of failing before the intervention. Therefore, the MTBF is not a number that indicates the optimal time for a scheduled intervention.
2. Can I calculate my failure rates based on my MTBF?
Considering the modeling of the failure data used to calculate the MTBF, only the exponential distribution gives a constant failure rate, equal to the inverse of the MTBF:
MTBF = 1 / λ
In this distribution, the MTBF time already corresponds to 63.2% probability of failure.
Figure 03: Exponential Distribution Model
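For reference, the 63.2% figure is just the exponential cumulative distribution evaluated at the MTBF, assuming the constant failure rate λ = 1/MTBF discussed here:

$$F(\text{MTBF}) = 1 - e^{-\lambda \cdot \text{MTBF}} = 1 - e^{-1} \approx 0.632$$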
For any model other than the exponential, the failure rate is variable and time dependent, so its calculation also depends on factors such as the probability density function f(t) and the reliability function R(t):
λ(t) = h(t) = f(t) / R(t)
Although the exponential distribution is the most adopted in reliability projects, which would generate a constant failure rate over time, most of the assets have variations within their “bathtub
curve”, as exemplified by Moubray:
Figure 04: Different Bathtub Curves (Moubray, 1997)
This means that the exponential expression is not best suited to reflect the behavior of most assets in an industrial plant.
3. Can I calculate my probability of failure based on my MTBF?
As seen above, only the exponential distribution has a constant failure rate, which can be calculated as the inverse of the MTBF. In this case, yes, we can calculate the probability of failure of an asset by time t using the formula below (F(t) is the cumulative probability of failure and f(t) is its density):
F(t) = 1 − exp(−λt), with f(t) = λ·exp(−λt)
For other models where the failure rate depends on the time, it is only possible to calculate the probability of failure through a data modeling and determination of a parametric statistical curve.
4. If the MTBF of my asset or system is 200 hours, after that time will it fail?
The question is, what exactly does that number mean? It was shown that the MTBF is not suitable as a maintenance plan frequency. According to the items explained above, this time means nothing unless it is compared against its own history over the months. If the parametric model governing the behavior of the assets in a reliability study is not determined, the time of 200 hours has no meaning for a probability of failure. The case of the MTBF provided by equipment manufacturers is different: through life tests they fit exponential curves and thus calculate the time by which 63.2% of the samples will have failed.
I hope the article has helped us to reflect on the definitions of an indicator that is as widely used as it is misunderstood within industrial maintenance management.
Msc Teofilo Cortizo
Reliability Engineer
Consider the Decision Making First
Consider the Decision Making First
Reliability activities serve one purpose, to support better decision making.
That is all it does. Reliability work may reveal design weaknesses, which we can decide to address. Reliability work may estimate the longevity of a device, allowing decisions when compared to
objectives for reliability.
Creating a report that no one reads is not the purpose of reliability. Running a test or analysis to simply ‘do reliability’ is not helpful to anyone. Anything with MTBF involved … well, you know how
I feel about that. Continue reading Consider the Decision Making First
What is Wrong With Success Testing?
What is Wrong With Success Testing?
Three prototypes survive the gauntlet of stresses and none fail. That is great news, or is it? No failure testing is what I call success testing.
We often want to create a design that is successful and therefore enjoy successful testing results, i.e. no failures means we are successful, right?
Another aspect of success testing is in pass/fail type testing we can minimize the sample size by planning for all prototypes passing the test. If we plan on running the test till we have a failure
or two, we need more samples. While it improves the statistics of the results, we have to spend more to achieve the results. We nearly always have limited resources for testing.
Let’s take a closer look at success testing and some of the issues you should consider before planning your next success test. Continue reading What is Wrong With Success Testing?
Defining a Product Life Time
An Elusive Product Life Time Definition
The following note and question appear in my email the other day. I had given the definition of reliability quite a bit of thought, yet have not really thought too much about a definition of ‘product
life time’.
So after answering Najib’s question I thought it may make a good conversation starter here. Give it a quite read, and add how you would answer the questions Najib poses. Continue reading Defining a
Product Life Time
MTBF is Not a Duration
MTBF is Not a Duration
Despite standing for the time between failures, MTBF does not represent a duration. Despite having units of hours (months, cycles, etc.) it is not a duration-related metric.
This little misunderstanding seems to cause major problems. Continue reading MTBF is Not a Duration
The Fear of Reliability
The Fear of Reliability
MTBF is a symptom of a bigger problem. It is possibly a lack of interest in reliability. Which I doubt is the case. Or it is a bit of fear of reliability.
Many shy away from the statistics involved. Some simply do not want to know the currently unknown. It could be the fear of potential bad news that the design isn't reliable enough. Some do not care to know about problems that will require solving.
What ever the source of the uneasiness, you may know one or more coworkers that would rather not deal with reliability in any direct manner. Continue reading The Fear of Reliability
Being In The Flat Part of the Curve
What Does Being In The Flat Part of the Curve Mean?
To me it means very little, as it rarely occurs. Products fail for a wide range of reasons and each failure follows its own path to failure.
As you may understand, some failures tend to occur early, some later. Some we call early life failures, out-of-box failures, etc. Some we deem end of life or wear out failures. There are a few that
are truly random in nature, just as a drop or accident causing an overstress fracture, for example. Continue reading Being In The Flat Part of the Curve
A Series of Unfortunate MTBF Assumptions
A Series of Unfortunate MTBF Assumptions
The calculation of MTBF results in a larger number if we make a series of MTBF assumptions. We just need more time in the operating hours and fewer failures in the count of failures.
While we really want to understand the reliability performance of field units, we often make a series of small assumptions that impact the accuracy of MTBF estimates.
Here are just a few of these MTBF assumptions that I've seen, in some cases nearly all of them with one team. Reliability data holds useful information if we gather and treat it well. Continue
reading A Series of Unfortunate MTBF Assumptions
Time to Update the Reliability Metric Book
It is Time to Update the Reliability Metric Book with Your Help
Let's think of this as a crowdsourced project. The first version of this book is a compilation of NoMTBF.com articles. It lays out why we do not want to use MTBF and what to do instead (to some degree).
With your input of success stories, how to make progress using better metrics, and input of examples, stories, case studies, etc. the next version of the book will be much better and much more
practical. Continue reading Time to Update the Reliability Metric Book
We Need to Try Harder to Avoid MTBF
We Need to Try Harder to Avoid MTBF
Just back from the Reliability and Maintainability Symposium and not happy. While there are signs, a proudly worn button, regular mentions of progress and support, we still talk about reliability
using MTBF too often. We need to avoid MTBF actively, no, I mean aggressively.
Let’s get the message out there concerning the folly of using MTBF as a surrogate to discuss reliability. We need to work relentlessly to avoid MTBF in all occasions.
Teaching reliability statistics does not require the teaching of MTBF.
Describing product reliability performance does not benefit by using MTBF.
Creating reliability predictions that create MTBF values doesn’t make sense in most if not all cases. Continue reading We Need to Try Harder to Avoid MTBF
3 Ways to Expose MTBF Problems
3 Ways to Expose MTBF Problems
MTBF use and thinking is still rampant. It affects how our peers and colleagues approach solving problems.
There is a full range of problems that come from using MTBF, yet how do you spot the signs of MTBF thinking even when MTBF is not mentioned? Let's explore three approaches that you can use to ferret out MTBF thinking and move your organization toward making informed decisions concerning reliability. Continue reading 3 Ways to Expose MTBF Problems
The Army Memo to Stop Using Mil HDBK 217
The Army Memo to Stop Using Mil HDBK 217
Over 20 years ago the Assistant Secretary of the Army directed the Army not to use MIL-HDBK-217 in requests for proposals, even for guidance. Exceptions by waiver only.
217 is still around and routinely called out. That is a lot of waivers.
Why is 217 and other parts count database prediction packages still in use? Let’s explore the memo a bit more, plus ponder what is maintaining the popularity of 217 and ilk. Continue reading The Army
Memo to Stop Using Mil HDBK 217
Linux-Blog – Dr. Mönchmeyer / anracon
A reader who follows my article series on MLP-coding has asked me, which books I would recommend for beginners in “Machine Learning”. I assume that he did not mean introductory books on Python, but
books on things like SciKit, Artificial Neural Networks, Keras, Tensorflow, …. I also assume that he had at least a little interest in the basic mathematical background.
I should also say that the point regarding “beginners” is a difficult one. With my own limited experience I would say:
Get an overview, but then start playing around on your computer with something that interests you. Afterwards extend your knowledge about tools and methods. As with computers in general it is
necessary to get used to terms and tools – even if you do not or not fully understand the theory behind them. Meaning: Books open only a limited insight into Machine Learning, you will probably learn
more from practical exercises. So, do not be afraid of coding in Python – and use the tools the authors of the following books worked with. And: You should get familiar with Numpy and matplotlib
pretty soon!
Well, here are my recommendations for reading – and the list of books actually implies an order:
“Machine Learning – An applied mathematics introduction”, Paul Wilmott, 2019, Panda Ohana Publishing
This is one of the best introductory books on Machine learning I have read so far – if not simply the best. One of its many advantages is: It is relatively short – around 220 pages – and concise.
Despite the math it is written in a lively, personal style and I like the somewhat dry humor of the author.
This book will give you a nice overview of the most important methodologies in ML. It does not include any (Python) coding – and in my opinion this is another advantage for beginners. Coding does not
distract you from the basic concepts. A thing this book will not give you is an abstract introduction into “Convolutional Neural Networks” [CNNs].
Nevertheless: Read this book before you read anything else!
“Python Machine Learning”, Sebastian Rashka, 2016, Packt Publishing Ltd
When I started reading books on ML this was one of the first I came across. Actually, I did not like it at my first reading trial. One of my problems was that I had not enough knowledge regarding
Python and Matplotlib. After some months and some basics in Python I changed my mind. The book is great! It offers a lot of coding examples and the introduction into SciKit and at some points also
Pandas is quite OK.
It is a “hands on” type of book – you should work with the code examples given and modify them. This first edition of the book does, however, not provide you with an introduction into CNNs and
advanced tools like Tensorflow and the Keras interface. Which in my opinion was not a disadvantage at the time of reading. What I still do not like are the mathematical explanations at some points
where the author argues more on the level of hints than real explanations. But it is a great book to start with SciKit-Learn experiments – and you will deepen your insight in methodologies learned in
the book of Wilmott.
Note: There is a new edition available: “Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow 2”, 3rd Edition, by S. Rashka and V.Mirjalili.
I have not read it, but if you think about a book of Rashka, you should probably buy this edition.
For my German Readers: There is a German version of the 2nd edition available – in much better hard cover and printing quality (mitp verlag), but also more expensive. But it does not cover Tensorflow
2, to my knowledge.
“Hands-On Machine learning with SciKit-Learn, Keras & Tensorflow”, 2nd edition, Aurelien Geron, 2019, O’Reilly
This is a pure treasure trove! I have read the 1st edition, but a week ago I bought the 2nd edition. It seems to be far better than the first edition! Not only because of the colored graphics – which really help the reader to understand things better. The book has also been revised and extended! Compared with the first edition it is partially a new book. In my opinion all important topics in ML have been covered. So, if you want to extend your practical knowledge to CNNs, RNNs and Reinforcement Learning, go for it. But around 780 pages will require their time ….
“Deep Learning mit Python und Keras”, Francois Chollet, 2018, mitp Verlag
I have only read and worked with the German version. The English version was published by Manning Publications. A profound introduction into Keras and at the same time a nice introduction into CNNs.
I also liked the chapters on Generative Deep Learning. Be prepared to have a reasonable GPU ready when you start working with this book!
For theorists: "Neural Networks and Analog Computation – Beyond the Turing Limit", H.T. Siegelmann, 1999, Springer Science+Business Media, New York
This book is only for those with a background in mathematics and theoretical information and computation science. You have been warned! But it is a strong, strong book on ANN-theory which provides
major insights on the relation between ANNs, super-Turing systems and even physical systems. Far ahead of its time …
There are of course many more books on the market on very different levels – and I meanwhile own quite a bunch of them. But I limited the list intentionally. Have fun!
VMware WS 12.5.9 on Opensuse Leap 15.1
A blog reader has asked me how to get the VMware workstation WS 12.5.9 running again after he installed and updated Leap to version 15.1 on some older hardware. The reader referred to my article on
running Win 10 as a VMware guest on an Opensuse Leap system.
Upgrading Win 7 to Win 10 guests on Opensuse/Linux based VMware hosts – I – some experiences
There I mentioned that the latest VMware WS version does not run on older hardware … Then you have to use VMware WS 12.5.9. But not only during your initial VMware installation, but also after kernel updates on a Leap 15.1 system, you are confronted with an error message similar to the following one when the VMware WS modules must be (re-)compiled:
I have actually written about this problem in 2018. But I admit that the relevant article is somewhat difficult to find as it has no direct reference to Leap 15.1. See:
Upgrade auf Opensuse Leap 15.0 – Probleme mit Nvidia-Treiber aus dem Repository und mit VMware WS 12.5.9
This article has a link to a VMware community article with a remedy to the mentioned problem:
https://communities.vmware.com / message / 2777306#2777306
The necessary steps are the following:
wget https://github.com/mkubecek/vmware-host-modules/archive/workstation-12.5.9.tar.gz
tar -xzf workstation-12.5.9.tar.gz
cd vmware-host-modules-workstation-12.5.9
make install
cd /usr/lib/vmware/lib/libfontconfig.so.1
mv libfontconfig.so.1 libfontconfig.so.1.old
ln -s /usr/lib64/libfontconfig.so.1
All credits are due to the guys “Kubecek” and “portsample”. Thx!
A simple Python program for an ANN to cover the MNIST dataset – XII – accuracy evolution, learning rate, normalization
We continue our article series on building a Python program for a MLP and training it to recognize handwritten digits on images of the MNIST dataset.
A simple Python program for an ANN to cover the MNIST dataset – XI – confusion matrix
A simple Python program for an ANN to cover the MNIST dataset – X – mini-batch-shuffling and some more tests
A simple Python program for an ANN to cover the MNIST dataset – IX – First Tests
A simple Python program for an ANN to cover the MNIST dataset – VIII – coding Error Backward Propagation
A simple Python program for an ANN to cover the MNIST dataset – VII – EBP related topics and obstacles
A simple Python program for an ANN to cover the MNIST dataset – VI – the math behind the „error back-propagation“
A simple Python program for an ANN to cover the MNIST dataset – V – coding the loss function
A simple Python program for an ANN to cover the MNIST dataset – IV – the concept of a cost or loss function
A simple Python program for an ANN to cover the MNIST dataset – III – forward propagation
A simple Python program for an ANN to cover the MNIST dataset – II – initial random weight values
A simple Python program for an ANN to cover the MNIST dataset – I – a starting point
In the last article we used our prediction data to build a so called “confusion matrix” after training. With its help we got an overview about the “false negative” and “false positive” cases, i.e.
cases of digit-images for which the algorithm made wrong predictions. We also displayed related critical MNIST images for the digit “4”.
In this article we first want to extend the ability of our class "ANN" such that we can measure the level of accuracy (more precisely: the recall) on the full test and training data sets during training. We shall see that the resulting curves trigger some new insights. We shall e.g. get an answer to the question at which epoch the accuracy on the test data set no longer changes while the accuracy on the training set still improves. Meaning: We can find out after which epoch we spend CPU time on overfitting.
In addition we want to investigate the efficiency of our present approach a bit. So far we have used a relatively small learning rate of 0.0001 with a decrease rate of 0.000001. This gave us
relatively smooth curves during convergence. However, it took us a lot of epochs and thus computational time to arrive at a cost
minimum. The question is:
Is a small learning rate really required? What happens if we use bigger initial learning rates? Can we reduce the number of epochs until learning converges?
Regarding the last point we should not forget that a bigger learning rate may help to move out of local minima on our way to the vicinity of a global minimum. Some of our experiments will indeed
indicate that one may get stuck somewhere before moving deep into a minimum valley. However, our forthcoming experiments will also show that we have to take care about the weight initialization. And
this in turn will lead us to a major deficit of our present code. Resolving it will help us with bigger learning rates, too.
Class interface changes
We introduce some new parameters, whose usage will become clear later on. They are briefly documented within the code. In addition we no longer call the method _fit() automatically when we create a Python object instance of the class. This means that you have to call "_fit()" on your own in your Jupyter cells in the future.
To be able to use some additional features later on we first need some more import statements.
New import statements of the class MyANN
import numpy as np
import math
import sys
import time
import tensorflow
# from sklearn.datasets import fetch_mldata
from sklearn.datasets import fetch_openml
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.cluster import MiniBatchKMeans
from keras.datasets import mnist as kmnist
from scipy.special import expit
from matplotlib import pyplot as plt
from symbol import except_clause
from IPython.core.tests.simpleerr import sysexit
from math import floor
Extended My_ANN interface
We extend our interface substantially – although we shall not use every new parameter, yet. Most of the parameters are documented shortly; but to really understand what they control you have to look
into some other changed parts of the class’s code, which we present later on. You can, however, safely ignore parameters on “clustering” and “PCA” in this article. We shall yet neither use the option
to import MNIST X- and y-data (X_import, y_import) instead of loading them internally.
def __init__(self,
my_data_set = "mnist",
X_import = None, # imported X dataset
y_import = None, # imported y dataset
num_test_records = 10000, # number of test data
# parameter for normalization of input data
b_normalize_X = False, # True: apply StandardScaler on X input data
# parameters for clustering of input data
b_perform_clustering = False, # shall we cluster the X_data before learning?
my_clustering_method = "MiniBatchKMeans", # Choice between 2 methods: MiniBatchKMeans, KMeans
cl_n_clusters = 200, # number of clusters (often "k" in literature)
cl_max_iter = 600, # number of iterations for centroid movement
cl_n_init = 100, # number of different initial centroid positions tried
cl_n_jobs = 4, # number of CPU cores (jobs to start for investigating n_init variations
cl_batch_size = 500, # batch size, only used for MiniBatchKMeans
#parameters for PCA of input data
perform_pca = False,
num_pca_categories = 155,
# parameters for MLP structure
n_hidden_layers = 1,
ay_nodes_layers = [0, 100, 0], # array which should have as much elements as n_hidden + 2
n_nodes_layer_out = 10, # expected number of nodes in output layer
my_activation_function = "sigmoid",
my_out_function = "sigmoid",
my_loss_function = "LogLoss",
n_size_mini_batch = 50, # number of data elements in a mini-batch
n_epochs = 1,
n_max_batches = -1, # number of mini-batches to use during epochs - > 0 only for testing
# a negative value uses all mini-batches
lambda2_reg = 0.1, # factor for quadratic regularization term
lambda1_reg = 0.0, # factor for linear regularization term
vect_mode = 'cols',
init_weight_meth_L0 = "sqrt_nodes", # method to init weights => "sqrt_nodes", "const"
init_weight_meth_Ln = "sqrt_nodes", # sqrt_nodes", "const"
init_weight_intervals = [(-0.5, 0.5), (-0.5, 0.5), (-0.5, 0.5)], # size must fit number of hidden layers
init_weight_fact = 2.0, # extends the interval
learn_rate = 0.001, # the learning rate (often called epsilon in textbooks)
decrease_const = 0.00001, # a factor for decreasing the learning rate with epochs
learn_rate_limit = 2.0e-05, # a lower limit for the learn rate
adapt_with_acc = False, # adapt learning rate with additional factor depending on rate of acc change
reduction_fact = 0.001, # small reduction factor - should be around 0.001 because of an exponential reduction
mom_rate = 0.0005, # a factor for momentum learning
b_shuffle_batches = True, # True: we mix the data for mini-batches in the X-train set at the start of each epoch
b_predictions_train = False, # True: At the end of periodic epochs the code performs predictions on the train data set
b_predictions_test = False, # True: At the end of periodic epochs the code performs predictions on the test data set
prediction_test_period = 1, # Period of epochs for which we perform predictions
prediction_train_period = 1, # Period of epochs for which we perform predictions
print_period = 20, # number of epochs for which to print the costs and the averaged error
figs_x1=12.0, figs_x2=8.0,
legend_loc='upper right',
b_print_test_data = True
Initialization of MyANN
data_set: type of dataset; so far only the "mnist", "mnist_784" datasets are known
We use this information to prepare the input data and learn about the feature dimension.
This info is used in preparing the size of the input layer.
X_import: external X dataset to import
y_import: external y dataset to import - must fit in dimension to X_import
num_test_records: number of test data
b_normalize_X: True => Invoke the StandardScaler of sklearn to center and normalize the input data X
Preprocessing of input data treatment before learning
b_perform_clustering # True => Cluster the X_data before learning?
my_clustering_method # string: 2 methods: MiniBatchKMeans, KMeans
cl_n_clusters = 200 # number of clusters (often "k" in literature)
cl_max_iter = 600 # number of iterations for centroid movement
cl_n_init = 100 # number of different initial centroid positions tried
cl_n_jobs = 4, # number of CPU cores => jobs - only used for "KMeans"
cl_batch_size = 500 # batch size used for "MiniBatchKMeans"
b_perform_pca: True => perform a pca analysis
num_pca_categories: 155 - choose a reasonable number
n_hidden_layers = number of hidden layers => between input layer 0 and output layer n
ay_nodes_layers = [0, 100, 0 ] : We set the number of nodes in input layer_0 and the output_layer to zero
Will be set to real number afterwards by infos from the input dataset.
All other numbers are used for the node numbers of the hidden layers.
n_nodes_out_layer = expected number of nodes in the output layer (is checked);
this number corresponds to the number of categories NC = number of labels to be distinguished
my_activation_function : name of the activation function to use
my_out_function : name of the "activation" function of the last layer which produces the output values
my_loss_function : name of the "cost" or "loss" function used for optimization
n_size_mini_batch : Number of elements/samples in a mini-batch of training data
The number of mini-batches will be calculated from this
n_epochs : number of epochs to calculate during training
n_max_batches : > 0: maximum of mini-batches to use during training
< 0: use all mini-batches
lambda2_reg: The factor for the quadratic regularization term
lambda1_reg: The factor for the linear regularization term
vect_mode: Are 1-dim data arrays (vectors) ordered by columns or rows?
init_weight_meth_L0: Method to calculate the initial weights at layer L0: "sqrt_nodes" => sqrt(number of nodes) / "const" => interval borders
init_weight_meth_Ln: Method to calculate the initial weights at hidden layers
init_weight_intervals: list of tuples with interval limits [(-0.5, 0.5), (-0.5, 0.5), (-0.5, 0.5)],
size must fit number of hidden layers
init_weight_fact: interval limits get scaled by this factor, e.g. 2 * (0.5, 0.5)
learn_rate: Learning rate - defines by how much we correct weights in the indicated direction of the gradient on the cost hyperplane.
decrease_const: Controls a systematic decrease of the learning rate with epoch number
learn_rate_limit = 2.0e-05, # a lower limit for the learning rate
adapt_with_acc: True => adapt learning rate with additional factor depending on rate of acc change
reduction_fact: around 0.001 => almost exponential reduction during the first 500 epochs
mom_const: Momentum rate. Controls a mixture of the last with the present weight
corrections (momentum learning)
b_shuffle_batches: True => vary composition of mini-batches with each epoch
# The next two parameters enable the measurement of accuracy and total cost function
# by making predictions on the train and test datasets
b_predictions_train: True => perform a prediction run on the full training data set => get accuracy
b_predictions_test: True => perform a prediction run on the full test data set => get accuracy
prediction_test_period: period of epochs for which to perform predictions
prediction_train_period: period of epochs for which to perform predictions
print_period: number of periods between printing out some intermediate data
on costs and the averaged error of the last mini-batch
figs_x1=12.0, figs_x2=8.0 : Standard sizing of plots ,
legend_loc='upper right': Position of legends in the plots
b_print_test_data: Boolean variable to control the print out of some tests data
# Array (Python list) of known input data sets
self._input_data_sets = ["mnist", "mnist_784", "mnist_keras", "imported"]
self._my_data_set = my_data_set
# X_import, y_import, X, y, X_train, y_train, X_test, y_test
# will be set by method handle_input_data()
# X: Input array (2D) - at present status of MNIST image data, only.
# y: result (=classification data) [digits represent categories in the case of Mnist]
self._X_import = X_import
self._y_import = y_import
# number of test data
self._num_test_records = num_test_records
self._X = None
self._y = None
self._X_train = None
self._y_train = None
self._X_test = None
self._y_test = None
# perform a normalization of the input data
self._b_normalize_X = b_normalize_X
# relevant dimensions
# from input data information; will be set in handle_input_data()
self._dim_X = 0 # total num of records in the X,y input sets
self._dim_sets = 0 # num of records in the TRAINING sets X_train, y_train
self._dim_features = 0
self._n_labels = 0 # number of unique labels - will be extracted from y-data
# Img sizes
self._dim_img = 0 # should be sqrt(dim_features) - we assume square like images
self._img_h = 0
self._img_w = 0
# Preprocessing of input data
# ---------------------------
self._b_perform_clustering = b_perform_clustering
self._my_clustering_method = my_clustering_method # for the related dictionary see below
self._kmeans = None # pointer to object used for clustering
self._cl_n_clusters = cl_n_clusters # number of clusters (often "k" in literature)
self._cl_max_iter = cl_max_iter # number of iterations for centroid movement
self._cl_n_init = cl_n_init # number of different initial centroid positions tried
self._cl_batch_size = cl_batch_size # batch size used for MiniBatchKMeans
self._cl_n_jobs = cl_n_jobs # number of parallel jobs (on CPU-cores) - only used for KMeans
# Layers
# ------
# number of hidden layers
self._n_hidden_layers = n_hidden_layers
# Number of total layers
self._n_total_layers = 2 + self._n_hidden_layers
# Nodes for hidden layers
self._ay_nodes_layers = np.array(ay_nodes_layers)
# Number of nodes in output layer - will be checked against information from target arrays
self._n_nodes_layer_out = n_nodes_layer_out
# Weights
# --------
# empty List for all weight-matrices for all layer-connections
# Numbering :
# w[0] contains the weight matrix which connects layer 0 (input layer ) to hidden layer 1
# w[1] contains the weight matrix which connects layer 1 (first hidden layer) to layer 2
self._li_w = []
# Arrays for encoded output labels - will be set in _encode_all_mnist_labels()
# -------------------------------
self._ay_onehot = None
self._ay_oneval = None
# Known Randomizer methods ( 0: np.random.randint, 1: np.random.uniform )
# ------------------
self.__ay_known_randomizers = [0, 1]
# Types of activation functions and output functions
# ------------------
self.__ay_activation_functions = ["sigmoid"] # later also relu
self.__ay_output_functions = ["sigmoid"] # later also softmax
# Types of cost functions
# ------------------
self.__ay_loss_functions = ["LogLoss", "MSE" ] # later also other types of cost/loss functions
# dictionaries for indirect function calls
self.__d_activation_funcs = {
'sigmoid': self._sigmoid,
'relu': self._relu
self.__d_output_funcs = {
'sigmoid': self._sigmoid,
'softmax': self._softmax
self.__d_loss_funcs = {
'LogLoss': self._loss_LogLoss,
'MSE': self._loss_MSE
# Derivative functions
self.__d_D_activation_funcs = {
'sigmoid': self._D_sigmoid,
'relu': self._D_relu
self.__d_D_output_funcs = {
'sigmoid': self._D_sigmoid,
'softmax': self._D_softmax
self.__d_D_loss_funcs = {
'LogLoss': self._D_loss_LogLoss,
'MSE': self._D_loss_MSE
self.__d_clustering_functions = {
'MiniBatchKMeans': self._Mini_Batch_KMeans,
'KMeans': self._KMeans
# The following variables will later be set by _check_and set_activation_and_out_functions()
self._my_act_func = my_activation_function
self._my_out_func = my_out_function
self._my_loss_func = my_loss_function
self._act_func = None
self._out_func = None
self._loss_func = None
self._cluster_func = None
# number of data samples in a mini-batch
self._n_size_mini_batch = n_size_mini_batch
self._n_mini_batches = None # will be determined by _get_number_of_mini_batches()
# maximum number of epochs - we set this number to an assumed maximum
# - as we shall build a backup and reload functionality for training, this should not be a major problem
self._n_epochs = n_epochs
# maximum number of batches to handle ( if < 0 => all!)
self._n_max_batches = n_max_batches
# actual number of batches
self._n_batches = None
# regularization parameters
self._lambda2_reg = lambda2_reg
self._lambda1_reg = lambda1_reg
# parameters to control the initialization of the weights (see _create_WM_Input(), _create_WM_Hidden())
self._init_weight_meth_L0 = init_weight_meth_L0
self._init_weight_meth_Ln = init_weight_meth_Ln
self._init_weight_intervals = init_weight_intervals # list of tuples with interval borders
self._init_weight_fact = init_weight_fact # extends weight intervals
# parameters for adaption of the learning rate
self._learn_rate = learn_rate
self._decrease_const = decrease_const
self._learn_rate_limit = learn_rate_limit
self._adapt_with_acc = adapt_with_acc
self._reduction_fact = reduction_fact
# parameters for momentum learning
self._mom_rate = mom_rate
self._li_mom = [None] * self._n_total_layers
# shuffle data in X_train?
self._b_shuffle_batches = b_shuffle_batches
# perform predictions on train and test data set and related analysis
self._b_predictions_train = b_predictions_train
self._b_predictions_test = b_predictions_test
self._prediction_test_period = prediction_test_period
self._prediction_train_period = prediction_train_period
# epoch period for printing
self._print_period = print_period
# book-keeping for epochs and mini-batches
# -------------------------------
# range for epochs - will be set by _prepare-epochs_and_batches()
self._rg_idx_epochs = None
# range for mini-batches
self._rg_idx_batches = None
# dimension of the numpy arrays for book-keeping - will be set in _prepare_epochs_and_batches()
self._shape_epochs_batches = None # (n_epochs, n_batches, 1)
# training evolution:
# +++++++++++++++++++
# List for error values at outermost layer for minibatches and epochs during training
# we use a numpy array here because we can redimension it
self._ay_theta = None
# List for cost values of mini-batches during training - The list will later be split into sections for epochs
self._ay_costs = None
# List for test accuracy values and error values at epoch periods
self._ay_period_test_epoch = None # x-axis for plots of the following 2 quantities
self._ay_acc_test_epoch = None
self._ay_err_test_epoch = None
# List for train accuracy values and error values at epoch periods
self._ay_period_train_epoch = None # x-axis for plots of the following 2 quantities
self._ay_acc_train_epoch = None
self._ay_err_train_epoch = None
# Data elements for back propagation
# ----------------------------------
# 2-dim array of partial derivatives of the elements of an additive cost function
# The derivative is taken with respect to the output results a_j = ay_ANN_out[j]
# The array dimensions account for nodes and samples of a mini_batch. The array will be set in function
# self._initiate_bw_propagation()
self._ay_delta_out_batch = None
# parameter to allow printing of some test data
self._b_print_test_data = b_print_test_data
# Plot handling
# --------------
# Alternatives to resize plots
# 1: just resize figure 2: resize plus create subplots() [figure + axes]
self._plot_resize_alternative = 1
# Plot-sizing
self._figs_x1 = figs_x1
self._figs_x2 = figs_x2
self._fig = None
self._ax = None
# alternative 2 does resizing and (!) subplots()
# ***********
# operations
# ***********
# check and handle input data
# set the ANN structure
# Prepare epoch and batch-handling - sets ranges, limits num of mini-batches and initializes book-keeping arrays
self._rg_idx_epochs, self._rg_idx_batches = self._prepare_epochs_and_batches()
Code modifications to create precise accuracy information on the full test and training sets during training
You certainly noticed the following set of control parameters in the class’s new interface:
• b_predictions_train = False, # True: At the end of periodic epochs the code performs predictions on the train data set
• b_predictions_test = False, # True: At the end of periodic epochs the code performs predictions on the test data set
• prediction_test_period = 1, # Period of epochs for which we perform predictions
• prediction_train_period = 1, # Period of epochs for which we perform predictions
These parameters control whether we perform predictions during training for the full test dataset and/or the full training dataset – and if so, at which epoch period. Actually, during all of the following experiments we shall evaluate the accuracy data after every single epoch.
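As a minimal sketch of how such a run could be set up in a Jupyter cell – the module name "myann" and most of the parameter values below are just assumptions for illustration – the new prediction flags and the now explicit call of _fit() would be used roughly like this:

from myann import MyANN   # hypothetical module name for the class file

ANN = MyANN( my_data_set="mnist_keras",
             n_hidden_layers=2,
             ay_nodes_layers=[0, 70, 30, 0],
             n_size_mini_batch=500,
             n_epochs=1800,
             learn_rate=0.0001,
             decrease_const=0.000001,
             mom_rate=0.00005,
             lambda2_reg=0.2,
             b_predictions_train=True,    # predict on the full training set ...
             b_predictions_test=True,     # ... and on the full test set
             prediction_train_period=1,   # ... after every epoch
             prediction_test_period=1,
             print_period=20 )

# _fit() is no longer called automatically in __init__()
ANN._fit(b_print=True)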
We need an array to gather accuracy information. We therefore modify the method “_prepare_epochs_and_batches()”, where we fill some additional Numpy arrays with initialization values. Thus we avoid a
costly “append()” later on; we just overwrite the array entries successively. This overwriting happens in our method _fit(); see below.
Changes to function “_prepare_epochs_and_batches()”:
''' -- Main Method to prepare epochs, batches and book-keeping arrays -- '''
def _prepare_epochs_and_batches(self, b_print = True):
# range of epochs
ay_idx_epochs = range(0, self._n_epochs)
# set number of mini-batches and array with indices of input data sets belonging to a batch
# limit the number of mini-batches
self._n_batches = min(self._n_max_batches, self._n_mini_batches)
ay_idx_batches = range(0, self._n_batches)
if (b_print):
if self._n_batches < self._n_mini_batches :
print("\nWARNING: The number of batches has been limited from " +
str(self._n_mini_batches) + " to " + str(self._n_max_batches) )
# Set the book-keeping arrays
self._shape_epochs_batches = (self._n_epochs, self._n_batches)
self._ay_theta = -1 * np.ones(self._shape_epochs_batches) # float64 numbers as default
self._ay_costs = -1 * np.ones(self._shape_epochs_batches) # float64 numbers as default
shape_test_epochs = ( floor(self._n_epochs / self._prediction_test_period), )
shape_train_epochs = ( floor(self._n_epochs / self._prediction_train_period), )
self._ay_period_test_epoch = -1 * np.ones(shape_test_epochs) # float64 numbers as default
self._ay_acc_test_epoch = -1 * np.ones(shape_test_epochs) # float64 numbers as default
self._ay_err_test_epoch = -1 * np.ones(shape_test_epochs) # float64 numbers as default
self._ay_period_train_epoch = -1 * np.ones(shape_train_epochs) # float64 numbers as default
self._ay_acc_train_epoch = -1 * np.ones(shape_train_epochs) # float64 numbers as default
self._ay_err_train_epoch = -1 * np.ones(shape_train_epochs) # float64 numbers as default
return ay_idx_epochs, ay_idx_batches
We then create two new methods to calculate accuracy values by predicting results on all records of both the training dataset and the test dataset. The attentive reader certainly recognizes the
methods’ structure from a previous article where we used
similar code in a Jupyter cell:
New functions “_predict_all_test_data()” and “_predict_all_train_data()”:
''' Method to predict values for the full set of test data '''
def _predict_all_test_data(self):
size_set = self._X_test.shape[0]
li_Z_in_layer_test = [None] * self._n_total_layers
li_Z_in_layer_test[0] = self._X_test
# Transpose input data matrix
ay_Z_in_0T = li_Z_in_layer_test[0].T
li_Z_in_layer_test[0] = ay_Z_in_0T
li_A_out_layer_test = [None] * self._n_total_layers
# prediction by forward propagation of the whole test set
self._fw_propagation(li_Z_in = li_Z_in_layer_test, li_A_out = li_A_out_layer_test, b_print = False)
ay_predictions_test = np.argmax(li_A_out_layer_test[self._n_total_layers-1], axis=0)
# accuracy
ay_errors_test = self._y_test - ay_predictions_test
acc_test = (np.sum(ay_errors_test == 0)) / size_set
# print ("total acc for test data = ", acc)
# return acc, ay_predictions_test
return acc_test
''' Method to predict values for the full set of training data '''
def _predict_all_train_data(self):
size_set = self._X_train.shape[0]
li_Z_in_layer_train = [None] * self._n_total_layers
li_Z_in_layer_train[0] = self._X_train
# Transpose
ay_Z_in_0T = li_Z_in_layer_train[0].T
li_Z_in_layer_train[0] = ay_Z_in_0T
li_A_out_layer_train = [None] * self._n_total_layers
self._fw_propagation(li_Z_in = li_Z_in_layer_train, li_A_out = li_A_out_layer_train, b_print = False)
ay_predictions_train = np.argmax(li_A_out_layer_train[self._n_total_layers-1], axis=0)
ay_errors_train = self._y_train - ay_predictions_train
acc_train = (np.sum(ay_errors_train == 0)) / size_set
#print ("total acc for train data = ", acc)
return acc_train
Finally, we modify our method "_fit()" with a series of statements on the level of the epoch loop. You may ignore most of the statements for learning rate adaption; we only use the "simple" adaption methods. The really important changes are those regarding predictions.
Modifications of function “_fit()”:
''' -- Method to perform training in epochs for defined mini-batches -- '''
def _fit(self, b_print = False, b_measure_epoch_time = True, b_measure_batch_time = False):
b_print: Do we print intermediate results of the training at all?
b_print_period: For which period of epochs do we print?
b_measure_epoch_time: Measure CPU-Time for an epoch
b_measure_batch_time: Measure CPU-Time for a batch
rg_idx_epochs = self._rg_idx_epochs
rg_idx_batches = self._rg_idx_batches
if (b_print):
print("\nnumber of epochs = " + str(len(rg_idx_epochs)))
print("max number of batches = " + str(len(rg_idx_batches)))
# Some intial parameters
acc_old = 0.0000001
acc_test = 0.001
orig_rate = self._learn_rate
adapt_fact = 1.0
n_predict_test = 0
n_predict_train = 0
# loop over epochs
# ****************
start_train = time.perf_counter()
for idxe in rg_idx_epochs:
if b_print and (idxe % self._print_period == 0):
if b_measure_epoch_time:
start_0_e = time.perf_counter()
print("\n ---------")
print("Starting epoch " + str(idxe+1))
# simple adaption of the learning rate
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
orig_rate /= (1.0 + self._decrease_const * idxe)
self._learn_rate /= (1.0 + self._decrease_const * idxe)
if self._learn_rate < self._learn_rate_limit:
self._learn_rate = self._learn_rate_limit
# adapt with acc. - not working well, yet
acc_change_rate = math.fabs((acc_test - acc_old) / acc_old)
if b_print and (idxe % self._print_period == 0):
print("acc_chg_rate = ", acc_change_rate)
ratio = self._learn_rate / orig_rate
if ratio > 0.33 and acc_change_rate < 1/3 and self._adapt_with_acc:
if acc_change_rate < 0.001:
acc_change_rate = 0.001
#adapt_fact = 2.0 * acc_change_rate / (1.0 - acc_change_rate)
adapt_fact = 1.0 - 0.001 * (1.0 - acc_change_rate / (1.0 - acc_change_rate))
if b_print and (idxe % self._print_period == 0):
print("adapt_fact = ", adapt_fact)
self._learn_rate *= adapt_fact
acc_old = acc_test # for adaption of learning rate
# shuffle indices for a variation of the mini-batches with each epoch
# ******************************************************************
if self._b_shuffle_batches:
shuffled_index = np.random.permutation(self._dim_sets)
self._X_train, self._y_train, self._ay_onehot = self._X_train[shuffled_index], self._y_train[shuffled_index], self._ay_onehot[:, shuffled_index]
# loop over mini-batches
# **********************
for idxb in rg_idx_batches:
if b_measure_batch_time:
start_0_b = time.perf_counter()
# deal with a mini-batch
self._handle_mini_batch(num_batch = idxb, num_epoch=idxe, b_print_y_vals = False, b_print = False)
if b_measure_batch_time:
end_0_b = time.perf_counter()
print('Time_CPU for batch ' + str(idxb+1), end_0_b - start_0_b)
# predictions
# ***********
# Control and perform predictions on the full test data set
if self._b_predictions_test and idxe % self._prediction_test_period == 0:
self._ay_period_test_epoch[n_predict_test] = idxe
acc_test = self._predict_all_test_data()
self._ay_acc_test_epoch[n_predict_test] = acc_test
n_predict_test += 1
# Control and perform predictions on the full training data set
if self._b_predictions_train and idxe % self._prediction_train_period == 0:
self._ay_period_train_epoch[n_predict_train] = idxe
acc_train = self._predict_all_train_data()
self._ay_acc_train_epoch[n_predict_train] = acc_train
n_predict_train += 1
# printing some evolution and epoch information
if b_print and (idxe % self._print_period == 0):
if b_measure_epoch_time:
end_0_e = time.perf_counter()
print('Time_CPU for epoch' + str(idxe+1), end_0_e - start_0_e)
print("learning rate = ", self._learn_rate)
print("orig learn rate = ", orig_rate)
print("\ntotal costs of last mini_batch = ", self._ay_costs[idxe, idxb])
print("avg total error of last mini_batch = ", self._ay_theta[idxe, idxb])
# print presently reached accuracy values on the test and training sets
print("presently reached train accuracy = ", acc_train)
print("presently reached test accuracy = ", acc_test)
# print out required secs for training
# **************************************
end_train = time.perf_counter()
print('\n\n ------')
print('Total training Time_CPU: ', end_train - start_train)
print("\nStopping program regularily")
return None
The method we apply in the above code to reduce the learning rate with every epoch is by the way called “power scheduling“. The book of Aurelien Geron [“Hands on Machine learning ….”, 2019, 2nd
edition, O’Reilly], quoted already in previous articles, lists a bunch of other methods, e.g. “exponential scheduling”, where we multiply the learning rate with a constant factor < 1 at every epoch.
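Just to make the difference concrete – this is a generic sketch, not part of the class code, and the numbers are chosen only for illustration – the two schedules can be written as small functions of the epoch number (note that _fit() above applies its division cumulatively, which decreases the rate even faster than the pure textbook formula):

# power scheduling: divide the initial rate by (1 + decrease_const * epoch)
def power_schedule(eta0, decrease_const, epoch):
    return eta0 / (1.0 + decrease_const * epoch)

# exponential scheduling: multiply by a constant factor < 1 for every epoch
def exponential_schedule(eta0, factor, epoch):
    return eta0 * factor**epoch

eta0 = 0.001
print(power_schedule(eta0, 0.00001, 1000))        # ~0.00099
print(exponential_schedule(eta0, 0.999, 1000))    # ~0.00037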
A further change of code happens in the functions _create_WM_Input() and _create_WM_Hidden(), which initiate the weight values.
Modifications of functions _create_WM_Input() and _create_WM_Hidden():
Addendum 25.03.2020: Changed _create_WM_hidden() because of errors in the code
'''-- Method to create the weight matrix between L0/L1 --'''
def _create_WM_Input(self):
Method to create the input layer
The dimension will be taken from the structure of the input data
We need to fill self._li_w[0] with a matrix for connections of all nodes in L0 with all nodes in L1
We fill the matrix with random numbers between [-1, 1]
# the num_nodes of layer 0 should already include the bias node
num_nodes_layer_0 = self._ay_nodes_layers[0]
num_nodes_with_bias_layer_0 = num_nodes_layer_0 + 1
num_nodes_layer_1 = self._ay_nodes_layers[1]
# Set interval borders for randomizer
if self._init_weight_meth_L0 == "sqrt_nodes": # sqrt(nodes) - rule of Prof. J. Frochte
rand_high = self._init_weight_fact / math.sqrt(float(num_nodes_layer_0))
rand_low = - rand_high
else: # "const" - take the interval borders directly from the parameter list
rand_low = self._init_weight_intervals[0][0]
rand_high = self._init_weight_intervals[0][1]
print("\nL0: weight range [" + str(rand_low) + ", " + str(rand_high) + "]" )
# fill the weight matrix at layer L0 with random values
randomizer = 1 # method np.random.uniform
rand_size = num_nodes_layer_1 * (num_nodes_with_bias_layer_0)
w0 = self._create_vector_with_random_values(rand_low, rand_high, rand_size, randomizer)
w0 = w0.reshape(num_nodes_layer_1, num_nodes_with_bias_layer_0)
# put the weight matrix into our array of matrices
self._li_w.append(w0)
print("\nShape of weight matrix between layers 0 and 1 " + str(self._li_w[0].shape))
'''-- Method to create the weight-matrices for hidden layers--'''
def _create_WM_Hidden(self):
Method to create the weights of the hidden layers, i.e. between [L1, L2] and so on ... [L_n, L_out]
We fill the matrix with random numbers between [-1, 1]
# The "+1" is required due to range properties !
rg_hidden_layers = range(1, self._n_hidden_layers + 1, 1)
# Check parameter input for weight intervals
if self._init_weight_meth_Ln == "const":
if len(self._init_weight_intervals) != (self._n_hidden_layers + 1):
print("\nError: we shall initialize weights with values from intervals, but wrong number of intervals provided!")
for i in rg_hidden_layers:
print ("\nCreating weight matrix for layer " + str(i) + " to layer " + str(i+1) )
num_nodes_layer = self._ay_nodes_layers[i]
num_nodes_with_bias_layer = num_nodes_layer + 1
# Set interval borders for randomizer
if self._init_weight_meth_Ln == "sqrt_nodes": # sqrt(nodes) - rule of Prof. J. Frochte
rand_high = self._init_weight_fact / math.sqrt(float(num_nodes_layer))
rand_low = - rand_high
else: # "const" - take the interval borders directly from the parameter list
rand_low = self._init_weight_intervals[i][0]
rand_high = self._init_weight_intervals[i][1]
print("L" + str(i) + ": weight range [" + str(rand_low) + ", " + str(rand_high) + "]" )
# the number of the next layer is taken without the bias node!
num_nodes_layer_next = self._ay_nodes_layers[i+1]
# fill the weight matrices at the hidden layers with random values
rand_size = num_nodes_layer_next * num_nodes_with_bias_layer
randomizer = 1 # np.random.uniform
w_i_next = self._create_vector_with_random_values(rand_low, rand_high, rand_size, randomizer)
w_i_next = w_i_next.reshape(num_nodes_layer_next, num_nodes_with_bias_layer)
# put the weight matrix into our array of matrices
self._li_w.append(w_i_next)
print("Shape of weight matrix between layers " + str(i) + " and " + str(i+1) + " = " + str(self._li_w[i].shape))
As you see, we distinguish between different cases depending on the parameters "init_weight_meth_L0" and "init_weight_meth_Ln". Obviously, a choice is made there regarding the borders of the intervals from which we randomly pick our initial weight values. In the case of the method "sqrt_nodes" the interval borders are determined by the number of nodes of the neighboring layer. Otherwise we read the interval borders from the parameter list "init_weight_intervals". You will understand these options better later on.
Plotting accuracies
We use a very simple piece of code in a Jupyter cell to get a plot of the accuracy values on the training and the test datasets. The orange line will show the accuracy reached at each epoch for the training dataset when we apply the weight set available at that epoch. The blue line shows the accuracy reached for the test dataset. In the text below we shall use the following abbreviations:
acc_train = accuracy reached for the X_train dataset of MNIST
acc_test = accuracy reached for the X_test dataset of MNIST
Code for plotting
plt.ylim(y_min, y_max)
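Extending this fragment into a complete Jupyter cell – with ANN as the assumed name of the trained instance and direct access to its book-keeping arrays – might look like this:

import matplotlib.pyplot as plt

fig = plt.figure(figsize=(12, 8))

# blue: accuracy on the test dataset, orange: accuracy on the training dataset
plt.plot(ANN._ay_period_test_epoch,  ANN._ay_acc_test_epoch,  label='acc_test')
plt.plot(ANN._ay_period_train_epoch, ANN._ay_acc_train_epoch, label='acc_train')

plt.xlabel('epochs')
plt.ylabel('accuracy')
y_min, y_max = 0.85, 1.0     # assumed limits; adjust them to the run at hand
plt.ylim(y_min, y_max)
plt.legend(loc='upper right')
plt.show()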
Experiment 1: Accuracy plot for a Reference Run
The next test run will be used as a reference run for comparisons later on. It shows where we stand right now.
Test 1:
Parameters: learn_rate = 0.0001, decrease_rate = 0.000001, mom_rate = 0.00005, n_size_mini_batch = 500, Lambda2 = 0.2, n_epochs = 1800, initial weights for all layers in [-0.5, +0.5]:
Results: acc_train: 0.996 , acc_test: 0.961, convergence after ca. 1150 epochs
Note that we use a very small learn_rate, but an even smaller decrease rate. The evolution of the accuracy values looks as follows:
The x-axis measures the number of epochs during training. The y-axis the degree of accuracy – given as a fraction; multiply by 100 to get percentage values. By the way, applying the reached weight
set on the full training and test datasets in each epoch cost us at least 20% rise in CPU time (45 minutes).
What does our new way of representing the “learning” of our MLP by the evolution of the accuracy levels tell us?
Noise: There is substantial noise visible along the lines. If you go back to previous articles you may detect the same type of noise in the plots of the evolution of the cost function. Obviously, our
mini-batches and the constant change of their composition with each epoch lead to wiggles around the visible trends.
Tendencies: Note that there is a tendency of linear rise of the accuracy acc_train between periods 350 and 900. And, actually, the accuracy even decreases a bit around epoch 1550. This is a warning
that the very last epoch of a run may not reveal the optimal solution.
Overfitting and a typical related splitting of the evolution of the two accuracies: One clearly sees that after a certain epoch (here: around epoch 300) the accuracy on the training dataset deviates systematically from the accuracy on the test dataset. In the end the gap is bigger than 3.5 percent. And in our case the accuracy on the test dataset reaches its final level of 0.96 significantly earlier – at around epoch 750 – and remains there, while the accuracy on the training set still rises up to epoch 1000.
However, I would like to add a warning:
Warning: Later on we shall see that there are cases for which both curves turn into a convergence at almost the same epoch. So, yes, there almost always occurs some overfitting during training of a
MLP. However, we cannot set up a rule which says that convergence of the accuracy on the test dataset always occurs much earlier than for the training set. You always have to watch the evolution of
both during your training experiments!
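A quick way to watch this numerically – again with ANN as the assumed instance name and under the assumption that predictions were made after every epoch – is to look up where the stored test accuracy peaks and how large the gap to the training accuracy has become:

import numpy as np

acc_test  = ANN._ay_acc_test_epoch
acc_train = ANN._ay_acc_train_epoch
epochs    = ANN._ay_period_test_epoch

best = int(np.argmax(acc_test))               # epoch index with the best test accuracy
gap_at_end = acc_train[-1] - acc_test[-1]     # overfitting gap after the last epoch

print(f"best test accuracy {acc_test[best]:.4f} at epoch {int(epochs[best])}")
print(f"train/test gap at the end: {gap_at_end:.4f}")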
Experiment 2: Increasing the learning rate – better efficiency?
Let us now be brave and increase the learning rate by a factor of 10:
Test 2:
Parameters: learn_rate = 0.001, decrease_rate = 0.000001, mom_rate = 0.00005, n_size_mini_batch = 500, Lambda2 = 0.2, n_epochs = 1800, initial weights for all layers in [-0.5, +0.5]:
Results: acc_train: 0.971 , acc_test: 0.959, no convergence after ca. 1800 epochs, yet
Ooops! Our algorithm ran into real difficulties! We seem to hop in and out of a minimum area until epoch 400, and despite a following systematic linear improvement there is no sign of a real convergence – yet!
The learning rate seems too big to lead to a consistent, quick path into a common minimum of all mini-batches! This may have to do with the size of the mini-batches, too – see below. The increase of the learning rate did not do us any good.
Experiment 3: Increased learning rate – but a higher decrease rate, too
As the larger learning rate seems to be a problem after period 50, we may think of a faster reduction of the learning rate.
Test 3:
Parameters: learn_rate = 0.001, decrease_rate = 0.00001, mom_rate = 0.00005, n_size_mini_batch = 500, Lambda2 = 0.2, n_epochs = 2000, initial weights for all layers in [-0.5, +0.5]:
Results: acc_train: 0.9909, acc_test: 0.9646, convergence after ca. 800 epochs
The evolution looks strange, too, but better than experiment 2! We see a real convergence again after some rather linear development! As a lesson learned I would say: Yes we can work with an
initially bigger learning rate – but we need a stronger decrease of it, too, to really run into a global minimum eventually.
Experiment 4: Increased learning rate, higher decrease rate and smaller initial weights
Maybe the weight initialization has some impact? According to a rule published by Prof. Frochte in his book "Maschinelles Lernen" [2019, 2nd edition, Carl Hanser Verlag] I limited the initial random weight values to a range between [-1.0/sqrt(784), +1.0/sqrt(784)] – instead of [-0.5, 0.5] for all layers.
Test 4:
Parameters: learn_rate = 0.001, decrease_rate = 0.00001, mom_rate = 0.00005, n_size_mini_batch = 500, Lambda2 = 0.2, n_epochs = 2000, initial weights for all layers within [-0.36 0.36]:
Results: acc_train: 0.987 , acc_test: 0.967, convergence after ca. 900 epochs
The interesting part in this case happens below and at epoch 200: There we see a clear indication that something has “trapped” us for a while before we could descend into some minimum with the
typical split of the accuracy for the training set and the accuracy for the test set. Remember that smaller initial weights also mean an initially smaller contribution of the regularization term to
the cost function!
Did we run into a side minimum? Or walk around the edge between two minima? Too complex to analyze in a space with 7000 dimensions! But I think this gives you some impression of what might happen on the surface of a varying, bumpy hyperplane …
Experiment 5: Reduced weights only between the L0/L1 layers
The next test shows the same as the last experiment, but with the initial weights only reduced for the L0/L1 matrix.
Test 5:
Parameters: learn_rate = 0.001, decrease_rate = 0.00001, mom_rate = 0.00005, n_size_mini_batch = 500, Lambda2 = 0.2, n_epochs = 2000, initial weights for the matrix of the first layers L0/L1 within
[-0.36 0.36], otherwise in [-0.5, 0.5]:
Results: acc_train: 0.988 , acc_test: 0.967, convergence after ca. 900 epochs
All in all – the trouble the code has with finding a way into a global minimum got even more pronounced around epoch 100. It seems as if the algorithm has to find a new path direction there. The
lesson learned is: Weight initialization is important!
Experiment 6: Enlarged mini-batch-size – do we get a smoother evolution?
Now we keep the parameters of experiment 5, but we enlarge the batch size – could be helpful to align and deepen the different minima for the different mini-batches – and thus maybe lead to a
smoothing. We choose a batch-size of 1200 (i.e. 50 batches instead of 120 in the training set):
Test 6:
Parameters: learn_rate = 0.001, decrease_rate = 0.00001, mom_rate = 0.00005, n_size_mini_batch = 1200, Lambda2 = 0.2, n_epochs = 2000, initial weights for the matrix first layers L0/L1 [-0.36 0.36],
otherwise in [-0.5, 0.5]:
Results: acc_train: 0.959 , acc_test: 0.946, not yet converged after ca. 750 epochs
Would you say that enlarging the mini-batch-size really helped us? I would say: Bigger batch-sizes do not help an algorithm on the verge of trouble! Nope, the structural problems do not disappear.
Experiment 7: Reduced learn-rate, increased decrease-rate
Let us face it: For our present state of the MLP-algorithm and the MNIST X-data values directly fed into the input nodes the learn-rate must be chosen significantly smaller to overcome the initial
problems of restructuring the weight matrices. So, we give up our trials to work with larger learn-rates – but only for a moment. Let us for confirmation now reduce the initial learning-rate again,
but increase the “decrease rate”. At the same time we also decrease the values of the weights.
Test 7:
Parameters: learn_rate = 0.0002, decrease_rate = 0.00001, mom_rate = 0.00005, n_size_mini_batch = 500, Lambda2 = 0.2, n_epochs = 1200, initial weights for the matrix of the first layers L0/L1 in [-0.36 0.36] and for the next layers L1/L2 + L2/L3 in [-0.08, 0.08]:
Results: acc_train: 0.9943 , acc_test: 0.9655, convergence after ca. 600 epochs
OK, nice again! There is some trouble, but we only need 600 epochs to come to a pretty good accuracy value for the test data set!
Intermediate conclusion
Quite often you may read in literature that a bigger learning rate (often abbreviated with the Greek letter eta) can save computational time in terms of required epochs – as long as convergence is guaranteed.
Hmmm – we saw in the tests above that this may not be true under certain conditions. It
is better to say that – depending on the data, the depth of the network and the size of the mini-batches – you may have to control a delicate balance of an initial rate and a rate decline to find an
optimum in terms of epochs.
Initial learning rates which are chosen too big together with a too small decrease rate may lead into trouble: the algorithm may get trapped after a few hundred epochs or even stay a long time in
some side minimum until it finds a deepening which it really can descend into.
With a smaller learning rate, however, you may find a reasonable path much faster and descend into the minimum much more steadily and smoothly – in the end requiring remarkably fewer epochs until convergence.
But as we saw with our experiment 4: Even a wiggled start can end up in a pretty good minimum with a really good accuracy. Reducing the learning rate too fast, on the other hand, may lead to a circular path at some distance from the minimum. We are talking here about the last < 0.5 percent.
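To make the role of the two rate parameters a bit more tangible, here is a minimal sketch of one possible decay schedule. This is an assumption for illustration only: the exact update formula used by our code is not reproduced in this excerpt, and whether the decay is counted per epoch or per mini-batch step changes the effective decay considerably.

def effective_rate(learn_rate, decrease_rate, step):
    # inverse-time decay (illustrative assumption): the rate shrinks the longer the training runs
    return learn_rate / (1.0 + decrease_rate * step)

# decay counted per epoch vs. per mini-batch step (120 batches per epoch for a batch size of 500)
print(effective_rate(0.001, 0.00001, 2000))         # ~0.00098 -> hardly any decay
print(effective_rate(0.001, 0.00001, 120 * 2000))   # ~0.00029 -> a noticeable decay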
Which minimum level you reach in the end depends on many parameters, but in particular also on the initial weight values. In general setting the initial weight values small enough with respect to the
number of nodes on the lower neighbor layer seems to be reasonable.
The sigmoid function – and a major problem
It is time to think a bit deeper before we start more experiments. All in all, one cannot get rid of the feeling that something profound is wrong with our algorithm or our setup of the experiments.
In my youth I have seen similar things in simulations of non-linear physics – and almost always a basic concept was missing or wrongly applied. Time to care about the math.
An important ingredient in the whole learning via back-propagation was the activation function, which due to its non-linearity has an impact on the gradients we need to calculate. The sigmoid
function is a smooth function, but it has some properties which obviously can lead to trouble.
One is that it produces function values extremely close to 1 already for arguments x larger than about 10 (see the values below and the small snippet after them):
sig(10) = 0.9999546021312976
sig(12) = 0.9999938558253978
sig(15) = 0.9999998874648379
sig(20) = 0.9999999979388463
sig(25) = 0.9999999999948910
sig(30) = 0.9999999999999065
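If you want to reproduce such values and see the corresponding gradients yourself, a few lines of Python suffice. This is a stand-alone snippet, not part of our MLP class; the helper "sig()" is introduced only for this illustration.

import numpy as np

def sig(x):
    return 1.0 / (1.0 + np.exp(-x))

for x in (10, 12, 15, 20, 25, 30):
    s = sig(x)
    # the gradient of the sigmoid is sig(x) * (1 - sig(x)) and becomes tiny for large arguments
    print(f"sig({x}) = {s:.16f}   gradient = {s * (1.0 - s):.3e}")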
So, function values for bigger arguments can hardly be distinguished any more, and the resulting gradients during backward propagation will get extremely small. Keeping this in mind we turn towards the initial
steps of forward propagation. What happens to our input data there?
We directly present the feature values of the MNIST data images at 784 input nodes in layer L0. The following sketch only shows the basic architecture of an MLP; the node numbers do NOT correspond to
our present MLP.
Then we multiply by the weights (randomly chosen initially from some interval) and accumulate 784 contributions at each of the 70 nodes of layer L1. Even if we choose the initial weight values to be
in the range of [-0.5, +0.5], this will potentially lead to big input values at layer L1 due to summing up all contributions. Thus at the output side of layer L1 our sigmoid function will produce many
almost indistinguishable values and pretty small gradients in the first steps. This is certainly not good for a clear adjustment of weights during backward propagation.
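A quick numerical check illustrates the point. The figures used below (ca. 10% active pixels with values around 220, initial weights uniformly distributed in [-0.5, 0.5], 70 nodes at L1) are rough assumptions for illustration:

import numpy as np

rng = np.random.default_rng(0)
x = np.zeros(784)
active = rng.choice(784, size=78, replace=False)    # ca. 10% of the 784 pixels are "active"
x[active] = 220.0                                   # typical grey value of an active MNIST pixel
W = rng.uniform(-0.5, 0.5, size=(784, 70))          # initial weights towards the 70 nodes of L1
z = x @ W                                           # pre-activation values at layer L1
print(np.round(np.abs(z).mean(), 1))                # typically some hundreds -> the sigmoid saturates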
There are two remedies one can think about:
• We should adapt the initial weight values to the number of nodes of the lower layer in forward propagation direction (see the small sketch after this list). A first guess would be something on the
order of ±1.0e-3 for weights between layers L0 and L1 – assuming that ca. 10% of the 784 input features show values around 220. Weights between layers L1 and L2 should be in the range of
[-0.05, 0.05] and between layers L2 and L3 in the range [-0.1, 0.1] to prevent maximum values above 5.
• We should scale down the input data, i.e. we should normalize them such that they cover a reasonable value range which leads to distinguishable output values of the sigmoid function.
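To make the first remedy a bit more concrete, here is a small stand-alone sketch of a weight initialization whose interval borders scale with the number of nodes in the lower layer. It is not the code of our class; the hidden-layer sizes besides the 70 nodes at L1 are assumptions for illustration only:

import numpy as np

def init_weights(num_nodes_lower, num_nodes_upper, rng, factor=1.0):
    # interval borders scale as +- factor / sqrt(num_nodes_lower)
    border = factor / np.sqrt(num_nodes_lower)
    return rng.uniform(-border, border, size=(num_nodes_lower, num_nodes_upper))

rng = np.random.default_rng(42)
W0 = init_weights(784, 70, rng)   # L0 -> L1: borders ~ +-0.036
W1 = init_weights(70, 30, rng)    # L1 -> L2: borders ~ +-0.12  (30 nodes at L2 assumed)
W2 = init_weights(30, 10, rng)    # L2 -> L3: borders ~ +-0.18  (10 output nodes assumed)
print([float(np.round(np.abs(W).max(), 3)) for W in (W0, W1, W2)])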
A plot for the first option with a reasonably small learn-rate as in experiment 7 and weights following the 1/sqrt(num_nodes)-rule at every layer (!) is the following:
Quite OK, but not a breakthrough. So, let us look at normalization.
Normalization – Standardization
There are different methods by which one can normalize the values of a set of instances. One basic method is to subtract the minimum value "x_min" of all instances from the value of each instance,
followed by a division by the difference between the maximum value (x_max) and the minimum value (x_min): x => (x – x_min) / (x_max – x_min).
A more clever version – which is called “standardization” – subtracts the mean value “x_mean” of all instances and divides by the standard deviation of the set. The resulting values have a mean of
zero and a variance of 1. The advantage of this normalization approach is that it does not react strongly to extreme data values in the set; still it reduces big values to a very moderate scale.
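For readers who want to see the difference between the two variants directly, here is a tiny self-contained comparison (illustration only, not part of our class code):

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[0.0], [50.0], [100.0], [255.0]])

# variant 1: min-max scaling  x => (x - x_min) / (x_max - x_min)
X_minmax = (X - X.min()) / (X.max() - X.min())

# variant 2: standardization  x => (x - x_mean) / x_std
X_standard = StandardScaler().fit_transform(X)

print(X_minmax.ravel())                      # values within [0, 1]
print(X_standard.mean(), X_standard.std())   # ~0.0 and 1.0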
SciKit-Learn provides the second normalization variant via the class "StandardScaler" – this is the reason why we introduced a corresponding import statement at the top of our code.
Code modifications to address standardization of the input data
Let us include standardization in our method to handle the input data:
Modifications to method "_handle_input_data()":
    ''' -- Method to handle different types of input data sets --'''
    def _handle_input_data(self):
        '''
        Method to deal with the input data:
        - check if we have a known data set ("mnist" so far)
        - reshape as required
        - analyze dimensions and extract the feature dimension(s)
        '''
        # check for known dataset
        try:
            if (self._my_data_set not in self._input_data_sets):
                raise ValueError
        except ValueError:
            print("The requested input data set " + self._my_data_set + " is not known!")

        # MNIST datasets
        # **************

        # handle the mnist original dataset - is not supported any more
        #if ( self._my_data_set == "mnist"):
        #    mnist = fetch_mldata('MNIST original')
        #    self._X, self._y = mnist["data"], mnist["target"]
        #    print("Input data for dataset " + self._my_data_set + " : \n" + "Original shape of X = " + str(self._X.shape) +
        #          "\n" + "Original shape of y = " + str(self._y.shape))

        # handle the mnist_784 dataset
        if (self._my_data_set == "mnist_784"):
            mnist2 = fetch_openml('mnist_784', version=1, cache=True, data_home='~/scikit_learn_data')
            self._X, self._y = mnist2["data"], mnist2["target"]
            print("data fetched")
            # the target categories are given as strings not integers
            self._y = np.array([int(i) for i in self._y])
            print("data modified")
            print("Input data for dataset " + self._my_data_set + " : \n" + "Original shape of X = " + str(self._X.shape) +
                  "\n" + "Original shape of y = " + str(self._y.shape))

        # handle the mnist_keras dataset
        if (self._my_data_set == "mnist_keras"):
            (X_train, y_train), (X_test, y_test) = kmnist.load_data()
            len_train = X_train.shape[0]
            len_test  = X_test.shape[0]
            X_train = X_train.reshape(len_train, 28*28)
            X_test  = X_test.reshape(len_test, 28*28)
            # Concatenation required due to possible later normalization of all data
            self._X = np.concatenate((X_train, X_test), axis=0)
            self._y = np.concatenate((y_train, y_test), axis=0)
            print("Input data for dataset " + self._my_data_set + " : \n" + "Original shape of X = " + str(self._X.shape) +
                  "\n" + "Original shape of y = " + str(self._y.shape))

        # common MNIST handling
        if (self._my_data_set == "mnist" or self._my_data_set == "mnist_784" or self._my_data_set == "mnist_keras"):
            # (common reshaping/checks not shown in this excerpt)
            pass

        # handle IMPORTED datasets
        # ************************
        if (self._my_data_set == "imported"):
            if (self._X_import is not None) and (self._y_import is not None):
                self._X = self._X_import
                self._y = self._y_import
            else:
                print("Shall handle imported datasets - but they are not defined")

        # number of total records in X, y
        # *******************************
        self._dim_X = self._X.shape[0]

        # Give control to preprocessing - has to happen before normalizing and splitting
        # *******************************************************************************
        self._preprocess_input_data()

        # Common dataset handling
        # ************************
        # normalization
        if self._b_normalize_X:
            # normalization by sklearn.preprocessing.StandardScaler
            scaler = StandardScaler()
            self._X = scaler.fit_transform(self._X)

        # mixing the training indices - MUST happen BEFORE encoding
        shuffled_index = np.random.permutation(self._dim_X)
        self._X, self._y = self._X[shuffled_index], self._y[shuffled_index]

        # Splitting into training and test datasets
        if self._num_test_records > 0.25 * self._dim_X:
            print("\nNumber of test records bigger than 25% of available data. Too big, we stop.")
            return None
        num_sep = self._dim_X - self._num_test_records
        self._X_train, self._X_test, self._y_train, self._y_test = \
            self._X[:num_sep], self._X[num_sep:], self._y[:num_sep], self._y[num_sep:]

        # numbers, dimensions
        self._dim_sets = self._y_train.shape[0]
        self._dim_features = self._X_train.shape[1]
        print("\nFinal dimensions of training and test datasets of type " + self._my_data_set +
              " : \n" + "Shape of X_train = " + str(self._X_train.shape) +
              "\n" + "Shape of y_train = " + str(self._y_train.shape) +
              "\n" + "Shape of X_test = " + str(self._X_test.shape) +
              "\n" + "Shape of y_test = " + str(self._y_test.shape))
        print("\nWe have " + str(self._dim_sets) + " data records for training")
        print("Feature dimension is " + str(self._dim_features))

        # encoding the y-values = categories // MUST happen AFTER shuffling
        # (encoding call not shown in this excerpt)

        ''' Remark: Other input data sets can not yet be handled '''
        return None
Well, this looks a bit different compared to our original function. Actually, we perform normalization twice. Once inside the new function “_preprocess_input_data()” and once afterwards.
New method "_preprocess_input_data()":
    ''' Method to preprocess the input data
        ----------------------------------- '''
    def _preprocess_input_data(self):
        # normalization ahead
        if self._b_normalize_X:
            # normalization by sklearn.preprocessing.StandardScaler
            scaler = StandardScaler()
            self._X = scaler.fit_transform(self._X)

        # Clustering
        if self._b_perform_clustering:
            print("\nClustering started")
            # (clustering steps will be added in a forthcoming article)
        else:
            print("\nNo Clustering requested")

        return None
The reason is that we have to take into account other transformations of the input data by other methods, too. One of these methods will be clustering, which we shall investigate in a forthcoming
article. (For the nervous ones among the readers: the StandardScaler handles features with zero variance gracefully, so calling it a second time on already standardized data does not lead to divisions by zero!)
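A small check (illustration only) confirms this behaviour of scikit-learn's StandardScaler, including a column with zero variance:

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 5.0],
              [1.0, 7.0],
              [1.0, 9.0]])                   # the first column has zero variance

X1 = StandardScaler().fit_transform(X)       # constant columns are mapped to zero (scale falls back to 1)
X2 = StandardScaler().fit_transform(X1)      # second call: means ~0, stds ~1 -> nothing changes
print(np.allclose(X1, X2))                   # True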
Experiment 8: Standardized input data, reduced learn-rate, increased decrease-rate and the "1/sqrt(nodes)"-rule for the initial weights of all layers
We shall call our class My_ANN now with the parameter “b_normalize_X = True”, i.e. we standardize the whole MNIST input data set X before we split it into a training and a test data set.
In addition we apply the rule to set the interval-borders for initial weights to [-1.0/sqrt(num_nodes_layer), 1.0/sqrt(num_nodes_layer)], with “num_nodes_layer” being the number of nodes in the lower
layer which the weights act upon during forward propagation.
Test 8:
Parameters: learn_rate = 0.0002, decrease_rate = 0.00001, mom_rate = 0.00005, n_size_mini_batch = 500, Lambda2 = 0.2, n_epochs = 2000, weights at all layers in [-1.0/sqrt(num_nodes_layer), 1.0/sqrt(num_nodes_layer)]:
Results: acc_train: 0.9913 , acc_test: 0.9689, convergence after ca. 650 epochs
Wow, extremely smooth curves now – and we got the highest accuracy so far!
Experiment 9: Standardized input, bigger initial learning rate, enlarged intervals for weight initialization
We get brave again! We enlarge the learning-rate back to 0.001. In addition we enlarge the intervals for a random distribution of initial weights for each layer by a factor of 2 => [-2*1.0/sqrt(num_nodes_layer), 2*1.0/sqrt(num_nodes_layer)].
Test 9:
Parameters: learn_rate = 0.001, decrease_rate = 0.00001, mom_rate = 0.00005, n_size_mini_batch = 500, n_epochs = 1200, weights at all layers in [-2*1.0/sqrt(num_nodes_layer), 2*1.0/sqrt(num_nodes_layer)]:
Results: acc_train: 0.9949 , acc_test: 0.9754, convergence after ca. 550-600 epochs
Not such smooth curves as in the previous plot. But WoW again – now we broke the 0.97-threshold – already at an epoch as small as 100!
I admit that a very balanced initial statistical distribution of digit images across the training and the test datasets helped in this specific test run, but only a bit. You will easily and regularly
pass a value of 0.972 for the accuracy on the test dataset during multiple consecutive runs. Compared to our reference value of 0.96 this is a solid improvement!
But what is really convincing is the fact that even with a relatively high initial learning rate we see no major trouble on our way to the minimum! I would call this a breakthrough!
We learned today that working with mini-batch training can be tricky. In some cases we may need to control a balance between a sufficiently small initial learning rate and a reasonable reduction rate
during training. We also saw that it is helpful to get some control over the weight initialization. The rule to create randomly distributed initial weight values within intervals given
by [-n*1/sqrt(num_nodes), n*1/sqrt(num_nodes)] appears to be useful.
However, the real lesson of our experiments was that we do our MLP learning algorithm a major favor by normalizing and centering the input data.
At least if the sigmoid function is applied as the activation function at the MLP’s nodes, an initial standardization of the input data should always be tested and compared to training runs without such a standardization.
In the next article of this series
A simple Python program for an ANN to cover the MNIST dataset – XIII – the impact of regularization
we shall have a look at the impact of the regularization parameter Lambda2, which we kept constant so far. An interesting question in this context is: How does the ratio between the (quadratic)
regularization term and the standard cost term in our total loss function change during a training run?
In a further article to come we will then look at a method to detect clusters in the feature parameter space and use related information for gradient descent. The objective of such a step is the
reduction of input features and input nodes. Stay tuned! | {"url":"https://linux-blog.anracom.com/2020/03/25/","timestamp":"2024-11-06T17:42:44Z","content_type":"text/html","content_length":"191360","record_id":"<urn:uuid:f75f59b8-5c56-487c-a399-4cb8522e34b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00273.warc.gz"} |
All papers (23118 results)
Non-Interactive Zero-Knowledge Proofs with Certified Deletion
We introduce the notion of non-interactive zero-knowledge (NIZK) proofs with certified deletion. Our notion enables the recipient of a quantum NIZK proof for a (quantumly hard) NP statement to delete
the proof and collapse it into a classical deletion certificate. Once this certificate is successfully validated, we require the recipient of the proof to lose their ability to find accepting inputs
to NIZK verification. We formally define this notion and build several candidate constructions from standard cryptographic assumptions. In particular, we propose a primary construction from classical
NIZK for NP and one-way functions, albeit with two limitations: (i) deletion certificates are only privately verifiable, and (ii) both prover and verifier are required to be quantum algorithms. We
resolve these hurdles in two extensions that assume the quantum hardness of the learning with errors problem. The first one achieves publicly verifiable certificates, and the second one requires
merely classical communication between classical provers and quantum verifiers.
Notions of Quantum Reductions and Impossibility of Statistical NIZK
Non-Interactive Zero-Knowledge Arguments (NIZKs) are cryptographic protocols that enable a prover to demonstrate the validity of an $\mathsf{NP}$ statement to a verifier with a single message,
without revealing any additional information. The soundness and zero-knowledge properties of a NIZK correspond to security against a malicious prover and a malicious verifier respectively.
Statistical NIZKs (S-NIZKs) are a variant of NIZKs for which the zero-knowledge property is guaranteed to hold information-theoretically. Previous works have shown that S-NIZKs satisfying a weak
version of soundness known as static soundness exist based on standard assumptions. However, the work of Pass (TCC 2013) showed that S-NIZKs with the stronger \emph{adaptive} soundness property are
inherently challenging to obtain. The work proved that standard (black-box) proof techniques are insufficient to prove the security of an S-NIZK based on any standard (falsifiable) assumption. We
extend this result to the setting where parties can perform quantum computations and communicate using quantum information, while the quantum security reduction is restricted to query the adversary
classically. To this end, we adapt the well-known meta-reduction paradigm for showing impossibility results to the quantum setting. Additionally, we reinterpret our result using a new framework for
studying quantum reductions, which we believe to be of independent interest.
The LaZer Library: Lattice-Based Zero Knowledge and Succinct Proofs for Quantum-Safe Privacy
The hardness of lattice problems offers one of the most promising security foundations for quantum-safe cryptography. Basic schemes for public key encryption and digital signatures are already close
to standardization at NIST and several other standardization bodies, and the research frontier has moved on to building primitives with more advanced privacy features. At the core of many such
primitives are zero-knowledge proofs. In recent years, zero-knowledge proofs for (and using) lattice relations have seen a dramatic jump in efficiency and they currently provide arguably the shortest, and
most computationally efficient, quantum-safe proofs for many scenarios. The main difficulty in using these proofs by non-experts (and experts!) is that they have a lot of moving parts and a lot of
internal parameters depend on the particular instance that one is trying to prove. Our main contribution is a library for zero-knowledge and succinct proofs which consists of efficient and flexible
C code underneath a simple-to-use Python interface. Users without any background in lattice-based proofs should be able to specify the lattice relations and the norm bounds that they would like
to prove and the library will automatically create a proof system, complete with the intrinsic parameters, using either the succinct proofs of LaBRADOR (Beullens and Seiler, Crypto 2023) or the
linear-size, though smaller for certain applications, proofs of Lyubashevsky et al. (Crypto 2022). The Python interface also allows for common operations used in lattice-based cryptography which will
enable users to write and prototype their full protocols within the syntactically simple Python environment. We showcase some of the library’s usefulness by giving protocol implementations for
blind signatures, anonymous credentials, the zero-knowledge proof needed in the recent Swoosh protocol (Gajland et al., Usenix 2024), proving knowledge of Kyber keys, and an aggregate signature
scheme. Most of these are the most efficient, from a size, speed, and memory perspective, known quantum-safe instantiations.
Single-Server Client Preprocessing PIR with Tight Space-Time Trade-off
This paper partly solves the open problem of tight trade-off of client storage and server time in the client preprocessing setting of private information retrieval (PIR). In the client preprocessing
setting of PIR, the client is allowed to store some hints generated from the database in a preprocessing phase and use the hints to assist online queries. We construct a new single-server client
preprocessing PIR scheme. For a database with $n$ entries of size $w$, our protocol uses $S=O((n/T) \cdot (\log n + w))$ bits of client storage and $T$ amortized server probes over $n/T$ queries,
where $T$ is a tunable online time parameter. Our scheme matches (up to constant factors) a $ST = \Omega(nw)$ lower bound generalized from a recent work by Yeo (EUROCRYPT 2023) and a communication
barrier generalized from Ishai, Shi, and Wichs (CRYPTO 2024). From a technical standpoint, we present a novel organization of hints where each PIR query consumes a hint, and entries in the consumed
hint are relocated to other hints. We then present a new data structure to track the hint relocations and use small-domain pseudorandom permutations to make the hint storage sublinear while
maintaining efficient lookups in the hints.
KLaPoTi: An asymptotically efficient isogeny group action from 2-dimensional isogenies
We construct and implement an efficient post-quantum commutative cryptographic group action based on combining the SCALLOP framework for group actions from isogenies of oriented elliptic curves on
one hand with the recent Clapoti method for polynomial-time evaluation of the CM group action on elliptic curves on the other. We take advantage of the very attractive performance of
$(2^e, 2^e)$-isogenies between products of elliptic curves in the theta coordinate system. To successfully apply Clapoti in dimension $2$, it is required to resolve a particular quadratic diophantine norm
equation, for which we employ a slight variant of the KLPT algorithm. Our work marks the first practical instantiation of the CM group action for which both the setup as well as the online phase can
be computed in (heuristic) polynomial time.
Khatam: Reducing the Communication Complexity of Code-Based SNARKs
We prove that Basefold (Crypto 2024) is secure in the $\textit{list decoding regime}$, within the double Johnson bound and with error probability $\frac{O(n)}{|F|}$. At the heart of this proof is a
new, stronger statement for $\textit{correlated agreement}$, which roughly states that if a linear combination of vectors $\pi_L + r \pi_R$ agrees with a codeword at every element in $S \subset [n]$,
then so do $\pi_L, \pi_R$. Our result is purely combinatorial and therefore extends to any finite field and any linear code. As such, it can be applied to any coding-based multilinear Polynomial
Commitment Scheme to reduce its communication complexity.
Zero-Knowledge Location Privacy via Accurate Floating-Point SNARKs
We introduce Zero-Knowledge Location Privacy (ZKLP), enabling users to prove to third parties that they are within a specified geographical region while not disclosing their exact location. ZKLP
supports varying levels of granularity, allowing for customization depending on the use case. To realize ZKLP, we introduce the first set of Zero-Knowledge Proof (ZKP) circuits that are fully
compliant to the IEEE 754 standard for floating-point arithmetic. Our results demonstrate that our floating point circuits amortize efficiently, requiring only $64$ constraints per multiplication for
$2^{15}$ single-precision floating-point multiplications. We utilize our floating point implementation to realize the ZKLP paradigm. In comparison to a baseline, we find that our optimized
implementation has $15.9 \times$ less constraints utilizing single precision floating-point values, and $12.2 \times$ less constraints when utilizing double precision floating-point values. We
demonstrate the practicability of ZKLP by building a protocol for privacy preserving peer-to-peer proximity testing - Alice can test if she is close to Bob by receiving a single message, without
either party revealing any other information about their location. In such a configuration, Bob can create a proof of (non-)proximity in $0.26 s$, whereas Alice can verify her distance to about $470$
peers per second.
Verifying Jolt zkVM Lookup Semantics
Lookups are a popular way to express repeated constraints in state-of-the art SNARKs. This is especially the case for zero-knowledge virtual machines (zkVMs), which produce succinct proofs of correct
execution for programs expressed as bytecode according to a specific instruction set architecture (ISA). The Jolt zkVM (Arun, Setty & Thaler, Eurocrypt 2024) for RISC-V ISA employs Lasso (Setty,
Thaler & Wahby, Eurocrypt 2024), an efficient lookup argument for massive structured tables, to prove correct execution of instructions. Internally, Lasso performs multiple lookups into smaller
subtables, then combines the results. We present an approach to formally verify Lasso-style lookup arguments against the semantics of instruction set architectures. We demonstrate our approach by
formalizing and verifying all Jolt 32-bit instructions corresponding to the RISC-V base instruction set (RV32I) using the ACL2 theorem proving system. Our formal ACL2 model has undergone extensive
validation against the Rust implementation of Jolt. Due to ACL2's bit-blasting, rewriting, and developer-friendly features, our formalization is highly automated. Through formalization, we also
discovered optimizations to the Jolt codebase, leading to improved efficiency without impacting correctness or soundness. In particular, we removed one unnecessary lookup each for four instructions,
and reduced the sizes of three subtables by 87.5\%.
Pseudorandom codes are error-correcting codes with the property that no efficient adversary can distinguish encodings from uniformly random strings. They were recently introduced by Christ and Gunn
[CRYPTO 2024] for the purpose of watermarking the outputs of randomized algorithms, such as generative AI models. Several constructions of pseudorandom codes have since been proposed, but none of
them are robust to error channels that depend on previously seen codewords. This stronger kind of robustness is referred to as adaptive robustness, and it is important for meaningful applications to
watermarking. In this work, we show the following. - Adaptive robustness: We show that the pseudorandom codes of Christ and Gunn are adaptively robust, resolving a conjecture posed by Cohen, Hoover,
and Schoenbach [S&P 2025]. Our proof involves several new ingredients, combining ideas from both cryptography and coding theory and taking hints from the analysis of Boolean functions. - Ideal
security: We define an ideal pseudorandom code as one which is indistinguishable from the ideal functionality, capturing both the pseudorandomness and robustness properties in one simple definition.
We show that any adaptively robust pseudorandom code for single-bit messages can be bootstrapped to build an ideal pseudorandom code with linear information rate, under no additional assumptions. -
CCA security: In the setting where the encoding key is made public, we define a CCA-secure pseudorandom code in analogy with CCA-secure encryption. We show that any adaptively robust public-key
pseudorandom code for single-bit messages can be used to build a CCA-secure pseudorandom code with linear information rate, in the random oracle model. Together with the result of Christ and Gunn, it
follows that there exist ideal pseudorandom codes assuming the $2^{O(\sqrt{n})}$-hardness of LPN. This extends to CCA security in the random oracle model. These results immediately imply stronger
robustness guarantees for generative AI watermarking schemes, such as the practical quality-preserving image watermarks of Gunn, Zhao, and Song (2024).
Cryptographically Secure Digital Consent
In the digital age, the concept of consent for online actions executed by third parties is crucial for maintaining trust and security in third-party services. This work introduces the notion of
cryptographically secure digital consent, which aims to replicate the traditional consent process in the online world. We provide a flexible digital consent solution that accommodates different use
cases and ensures the integrity of the consent process. The proposed framework involves a client (referring to the user or their devices), an identity manager (which authenticates the client), and an
agent (which executes the action upon receiving consent). It supports various applications and ensures compatibility with existing identity managers. We require the client to keep no more than a
password. The design addresses several security and privacy challenges, including preventing offline dictionary attacks, ensuring non-repudiable consent, and preventing unauthorized actions by the
agent. Security is maintained even if either the identity manager or the agent is compromised, but not both. Our notion of an identity manager is broad enough to include combinations of different
authentication factors such as a password, a smartphone, a security device, biometrics, or an e-passport. We demonstrate applications for signing PDF documents, e-banking, and key recovery.
Pushing the QAM method for finding APN functions further
APN functions offer optimal resistance to differential attacks and are instrumental in the design of block ciphers in cryptography. While finding APN functions is very difficult in general, a
promising way to construct APN functions is through symmetric matrices called Quadratic APN matrices (QAM). It is known that the search space for the QAM method can be reduced by means of orbit
partitions induced by linear equivalences. This paper builds upon and improves these approaches in the case of homogeneous quadratic functions over $\mathbb{F}_{2^n}$ with coefficients in the
subfield $\mathbb{F}_{2^m}$. We propose an innovative approach for computing orbit partitions for cases where it is infeasible due to the large search space, resulting in the applications for the
dimensions $(n,m)=(8,4)$, and $(n,m)=(9,3)$. We find and classify, up to CCZ-equivalence, all quadratic APN functions for the cases of $(n,m)=(8,2),$ and $(n,m)=(10,1)$, discovering a new APN
function in dimension $8$. Also, we show that an exhaustive search for $(n,m) = (10,2)$ is infeasible for the QAM method using currently available means, following partial searches for this case.
A Query Reconstruction Attack on the Chase-Shen Substring-Searchable Symmetric Encryption Scheme
Searchable symmetric encryption (SSE) enables queries over symmetrically encrypted databases. To achieve practical efficiency, SSE schemes incur a certain amount of leakage; however, this leads to
the possibility of leakage cryptanalysis, i.e., cryptanalytic attacks that exploit the leakage from the target SSE scheme to subvert its data and query privacy guarantees. Leakage cryptanalysis has
been widely studied in the context of SSE schemes supporting either keyword queries or range queries, often with devastating consequences. However, little or no attention has been paid to
cryptanalysing substring-SSE schemes, i.e., SSE schemes supporting arbitrary substring queries over encrypted data. This is despite their relevance to many real-world applications, e.g., in the
context of securely querying outsourced genomic databases. In particular, the first ever substring-SSE scheme, proposed nearly a decade ago by Chase and Shen (PoPETS '15), has not been cryptanalysed
to date. In this paper, we present the first leakage cryptanalysis of the substring-SSE scheme of Chase and Shen. We propose a novel inference-based query reconstruction attack that: (i) exploits a
reduced version of the actual leakage profile of their scheme, and (ii) assumes a weaker attack model as compared to the one in which Chase and Shen originally claimed security. We implement our
attack and experimentally validate its success rate and efficiency over two real-world datasets. Our attack achieves high query reconstruction rate with practical efficiency, and scales smoothly to
large datasets containing $100,000$ strings. To the best of our knowledge, ours is the first and only query reconstruction attack on (and the first systematic leakage cryptanalysis of) any
substring-SSE scheme till date.
Symmetric Encryption on a Quantum Computer
Classical symmetric encryption algorithms use $N$ bits of a shared secret key to transmit $N$ bits of a message over a one-way channel in an information theoretically secure manner. This paper
proposes a hybrid quantum-classical symmetric cryptosystem that uses a quantum computer to generate the secret key. The algorithm leverages quantum circuits to encrypt a message using a one-time
pad-type technique whilst requiring a shorter classical key. We show that for an $N$-qubit circuit, the maximum number of bits needed to specify a quantum circuit grows as $N^{3/2}$ while the maximum
number of bits that the quantum circuit can encode grows as $N^2$. We do not utilise the full expressive capability of the quantum circuits as we focus on second order Pauli expectation values only.
The potential exists to encode an exponential number of bits using higher orders of Pauli expectation values. Moreover, using a parameterised quantum circuit (PQC), we could further augment the
amount of securely shared information by introducing a secret key dependence on some of the PQC parameters. The algorithm may be suitable for an early fault-tolerant quantum computer implementation
as some degree of noise can be tolerated. Simulation results are presented along with experimental results on the 84-qubit Rigetti Ankaa-2 quantum computer.
Hybrid Zero-Knowledge from Garbled Circuits
We present techniques for constructing zero-knowledge argument systems from garbled circuits, extending the GC-to-ZK compiler by Jawurek, Kerschbaum, and Orlandi (ACM CCS 2013) and the GC-to-Σ
compiler by Hazay and Venkitasubramaniam (J. Crypto, 2020) to the following directions: - Our schemes are hybrid, commit-and-prove zero-knowledge argument systems that establish a connection between
secrets embedded in algebraic commitments and a relation represented by a Boolean circuit. - Our schemes incorporate diverse cross-domain secrets embedded within distinct algebraic commitments,
simultaneously supporting Pedersen-like commitments and lattice-based commitments. As an application, we develop circuit-represented compositions of Σ-protocols that support attractive access
structures, such as weighted thresholds, that can be easily represented by a small circuit. For predicates P1, ..., Pn individually associated with a Σ-protocol, and a predicate C represented by a
Boolean circuit, we construct a Σ-protocol for proving C(P1, ..., Pn) = 1. This result answers positively an open question posed by Abe et al. at TCC 2021.
Scutum: Temporal Verification for Cross-Rollup Bridges via Goal-Driven Reduction
Scalability remains a key challenge for blockchain adoption. Rollups—especially zero-knowledge (ZK) and optimistic rollups—address this by processing transactions off-chain while maintaining
Ethereum’s security, thus reducing gas fees and improving speeds. Cross-rollup bridges like Orbiter Finance enable seamless asset transfers across various Layer 2 (L2) rollups and between L2 and
Layer 1 (L1) chains. However, the increasing reliance on these bridges raises significant security concerns, as evidenced by major hacks like those of Poly Network and Nomad Bridge, resulting in
losses of hundreds of millions of dollars. Traditional security analysis methods such as static analysis and fuzzing are inadequate for cross-rollup bridges due to their complex designs involving
multiple entities, smart contracts, and zero-knowledge circuits. These systems require reasoning about temporal sequences of events across different entities, which exceeds the capabilities of
conventional analyzers. In this paper, we introduce a scalable verifier to systematically assess the security of cross-rollup bridges. Our approach features a comprehensive multi-model framework that
captures both individual behaviors and complex interactions using temporal properties. To enhance scalability, we approximate temporal safety verification through reachability analysis of a graph
representation of the contracts, leveraging advanced program analysis techniques. Additionally, we incorporate a conflict-driven refinement loop to eliminate false positives and improve precision.
Our evaluation on mainstream cross-rollup bridges, including Orbiter Finance, uncovered multiple zero-day vulnerabilities, demonstrating the practical utility of our method. The tool also exhibited
favorable runtime performance, enabling efficient analysis suitable for real-time or near-real-time applications.
Private Neural Network Training with Packed Secret Sharing
We present a novel approach for training neural networks that leverages packed Shamir secret sharing scheme. For specific training protocols based on Shamir scheme, we demonstrate how to realize the
conversion between packed sharing and Shamir sharing without additional communication overhead. We begin by introducing a method to locally convert between Shamir sharings with secrets stored at
different slots. Building upon this conversion, we achieve free conversion from packed sharing to Shamir sharing. We then show how to embed the conversion from Shamir sharing to packed sharing into
the truncation used during the training process without incurring additional communication costs. With free conversion between packed sharing and Shamir sharing, we illustrate how to utilize the
packed scheme to parallelize certain computational steps involved in neural network training. On this basis, we propose training protocols with information-theoretic security between general $n$
parties under the semi-honest model. The experimental results demonstrate that, compared to previous work in this domain, applying the packed scheme can effectively improve training efficiency.
Specifically, when packing $4$ secrets into a single sharing, we observe a reduction of more than $20\%$ in communication overhead and an improvement of over $10\%$ in training speed under the WAN setting.
How to Delete Without a Trace: Certified Deniability in a Quantum World
Is it possible to comprehensively destroy a piece of quantum information, so that nothing is left behind except the memory of that one had it at some point? For example, various works, most recently
Morimae, Poremba, and Yamakawa (TQC '24), show how to construct a signature scheme with certified deletion where a user who deletes a signature on $m$ cannot later produce a signature for $m$.
However, in all of the existing schemes, even after deletion the user is still able to keep irrefutable evidence that $m$ was signed, and thus they do not fully capture the spirit of deletion. In this
work, we initiate the study of certified deniability in order to obtain a more comprehensive notion of deletion. Certified deniability uses a simulation-based security definition, ensuring that any
information the user has kept after deletion could have been learned without being given the deleteable object to begin with; meaning that deletion leaves no trace behind! We define and construct two
non-interactive primitives that satisfy certified deniability in the quantum random oracle model: signatures and non-interactive zero-knowledge arguments (NIZKs). As a consequence, for example, it is
not possible to delete a signature/NIZK and later provide convincing evidence that it used to exist. Notably, our results utilize uniquely quantum phenomena to bypass Pass's (CRYPTO '03) celebrated
result showing that deniable NIZKs are impossible even in the random oracle model.
Fast Two-party Threshold ECDSA with Proactive Security
We present a new construction of two-party, threshold ECDSA, building on a 2017 scheme of Lindell and improving his scheme in several ways. ECDSA signing is notoriously hard to distribute securely,
due to non-linearities in the signing function. Lindell's scheme uses Paillier encryption to encrypt one party's key share and handle these non-linearities homomorphically, while elegantly avoiding
any expensive zero knowledge proofs over the Paillier group during the signing process. However, the scheme pushes that complexity into key generation. Moreover, avoiding ZK proofs about Paillier
ciphertexts during signing comes with a steep price -- namely, the scheme requires a "global abort" when a malformed ciphertext is detected, after which an entirely new key must be generated. We
overcome all of these issues with a proactive Refresh procedure. Since the Paillier decryption key is part of the secret that must be proactively refreshed, our first improvement is to radically
accelerate key generation by replacing one of Lindell's ZK proofs -- which requires 80 Paillier ciphertexts for statistical security $2^{-40}$ -- with a much faster "weak" proof that requires only 2
Paillier ciphertexts, and which proves a weaker statement about a Paillier ciphertext that we show is sufficient in the context of our scheme. Secondly, our more efficient key generation procedure
also makes frequent proactive Refreshes practical. Finally, we show that adding noise to one party's key share suffices to avoid the need to reset the public verification key when certain bad
behavior is detected. Instead, we prove that our Refresh procedure, performed after each detection, suffices for addressing the attack, allowing the system to continue functioning without disruption
to applications that rely on the verification key. Our scheme is also very efficient, competitive with the best constructions that do not provide proactive security, and state-of-the-art among the
few results that do. Our optimizations to ECDSA key generation speed up runtime and improve bandwidth over Lindell's key generation by factors of 7 and 13, respectively. Our Key Generation protocol
requires 20% less bandwidth than existing constructions, completes in only 3 protocol messages, and executes much faster than all but OT-based key generation. For ECDSA signing, our extra Refresh
protocol does add a 10X latency and 5X bandwidth overhead compared to Lindell. However, this still fits in 150 ms runtime and about 5.4 KB of messages when run in our AWS cluster benchmark.
A Tight Analysis of GHOST Consistency
The GHOST protocol has been proposed as an improvement to the Nakamoto consensus mechanism that underlies Bitcoin. In contrast to the Nakamoto fork-choice rule, the GHOST rule justifies selection of
a chain with weights computed over subtrees rather than individual paths. This mechanism has been adopted by a variety of consensus protocols, and is a part of the currently deployed protocol
supporting Ethereum. We establish an exact characterization of the security region of the GHOST protocol, identifying the relationship between the rate of honest block production, the rate of
adversarial block production, and network delays that guarantee that the protocol reaches consensus. In contrast to the closely related Nakamoto consensus protocol, we find that the region depends on
the convention used by the protocol for tiebreaking; we establish tight results for both adversarial tiebreaking, in which ties are broken adversarially in order to frustrate consensus, and
deterministic tiebreaking, in which ties between pairs of blocks are broken consistently throughout an execution. We provide explicit attacks for both conventions which stall consensus outside of the
security region. Our results conclude that the security region of GHOST can be strictly improved by incorporating a tiebreaking mechanism; in either case, however, the final region of security is
inferior to the region of Nakamoto consensus.
Compiled Nonlocal Games from any Trapdoor Claw-Free Function
A recent work of Kalai et al. (STOC 2023) shows how to compile any multi-player nonlocal game into a protocol with a single computationally-bounded prover. Subsequent works have built on this to
develop new cryptographic protocols, where a completely classical client can verify the validity of quantum computation done by a quantum server. Their compiler relies on the existence of quantum
fully-homomorphic encryption. In this work, we propose a new compiler for transforming nonlocal games into single-prover protocols. Our compiler is based on the framework of measurement-based quantum
computation. It can be instantiated assuming the existence of \emph{any} trapdoor function that satisfies the claw-freeness property. Leveraging results by Natarajan and Zhang (FOCS 2023) on compiled
nonlocal games, our work implies the existence of new protocols to classically verify quantum computation from potentially weaker computational assumptions than previously known.
Classic McEliece Hardware Implementation with Enhanced Side-Channel and Fault Resistance
In this work, we propose the first hardware implementation of Classic McEliece protected with countermeasures against Side-Channel Attacks (SCA) and Fault Injection Attacks (FIA). Classic Mceliece is
one of the leading candidates for Key Encapsulation Mechanisms (KEMs) in the ongoing round 4 of the NIST standardization process for post-quantum cryptography. In particular, we implement a range of
generic countermeasures against SCA and FIA, particularly protecting the vulnerable operations such as additive Fast Fourier Transform (FFT) and Gaussian elimination, which have been targeted by prior
SCA and FIA attacks. We also perform a detailed SCA evaluation demonstrating no leakage even with 100000 traces (improvement of more than 100× the number of traces compared to unprotected
implementation). This comes at a modest total area overhead of between 4× to 7×, depending on the type of implemented SCA countermeasure. Furthermore, we present a thorough ASIC benchmark for SCA and
FIA protected Classic McEliece design.
OPTIMSM: FPGA hardware accelerator for Zero-Knowledge MSM
The Multi-Scalar Multiplication (MSM) is the main barrier to accelerating Zero-Knowledge applications. In recent years, hardware acceleration of this algorithm on both FPGA and GPU has become a
popular research topic and the subject of a multi-million dollar prize competition (ZPrize). This work presents OPTIMSM: Optimized Processing Through Iterative Multi-Scalar Multiplication. This novel
accelerator focuses on the acceleration of the MSM algorithm for any Elliptic Curve (EC) by improving upon the Pippenger algorithm. A new iteration technique is introduced to decouple the required
buckets from the window size, resulting in fewer EC computations for the same on-chip memory resources. Furthermore, we combine known optimizations from the literature for the first time to achieve
additional latency improvements. Our enhanced MSM implementation significantly reduces computation time, achieving a speedup of up to $\times 12.77$ compared to recent FPGA implementations.
Specifically, for the BLS12-381 curve, we reduce the computation time for an MSM of size $2^{24}$ to 914 ms using a single compute unit on the U55C FPGA or to 231 ms using four U55C devices. These
results indicate a substantial improvement in efficiency, paving the way for more scalable and efficient Zero-Knowledge proof systems.
Cloning Games, Black Holes and Cryptography
The no-cloning principle has played a foundational role in quantum information and cryptography. Following a long-standing tradition of studying quantum mechanical phenomena through the lens of
interactive games, Broadbent and Lord (TQC 2020) formalized cloning games in order to quantitatively capture no-cloning in the context of unclonable encryption schemes. The conceptual contribution of
this paper is the new, natural, notion of Haar cloning games together with two applications. In the area of black-hole physics, our game reveals that, in an idealized model of a black hole which
features Haar random (or pseudorandom) scrambling dynamics, the information from infalling entangled qubits can only be recovered from either the interior or the exterior of the black hole---but
never from both places at the same time. In the area of quantum cryptography, our game helps us construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, thereby,
for the first time, bridging the gap between ``MicroCrypt'' and unclonable cryptography. The technical contribution of this work is a tight analysis of Haar cloning games which requires us to
overcome many long-standing barriers in our understanding of cloning games: 1. Are there cloning games which admit no non-trivial winning strategies? Resolving this particular question turns out to
be crucial for our application to black-hole physics. Existing work analyzing the $n$-qubit BB84 game and the subspace coset game only achieve the bounds of $2^{-0.228n}$ and $2^{-0.114n+o(n)}$,
respectively, while the trivial adversarial strategy wins with probability $2^{-n}$. We show that the Haar cloning game is the hardest cloning game, by demonstrating a worst-case to average-case
reduction for a large class of games which we refer to as oracular cloning games. We then show that the Haar cloning game admits no non-trivial winning strategies. 2. All existing works analyze
$1\mapsto 2$ cloning games; can we prove bounds on $t\mapsto t+1$ games for large $t$? Such bounds are crucial in our application to unclonable cryptography. Unfortunately, the BB84 game is not even
$2\mapsto 3$ secure, and the subspace coset game is not $t\mapsto t+1$ secure for a polynomially large $t$. We show that the Haar cloning game is $t\mapsto t+1$ secure provided that
$t = o(\log n / \log\log n)$, and we conjecture that this holds for $t$ that is polynomially large (in $n$). Answering these questions provably requires us to go beyond existing methods (Tomamichel, Fehr, Kaniewski and
Wehner, New Journal of Physics 2013). In particular, we show a new technique for analyzing cloning games with respect to binary phase states through the lens of binary subtypes, and combine it with
novel bounds on the operator norms of block-wise tensor products of matrices.
BrakingBase - a linear prover, poly-logarithmic verifier, field agnostic polynomial commitment scheme
We propose a Polynomial Commitment Scheme (PCS), called BrakingBase, which allows a prover to commit to multilinear (or univariate) polynomials with $n$ coefficients in $O(n)$ time. The evaluation
protocol of BrakingBase operates with an $O(n)$ time-complexity for the prover, while the verifier time-complexity and proof-complexity are $O(\lambda \log^2 n)$, where $\lambda$ is the security parameter.
Notably, BrakingBase is field-agnostic, meaning it can be instantiated over any field of sufficiently large size. Additionally, BrakingBase can be combined with the Polynomial Interactive Oracle
Proof (PIOP) from Spartan (Crypto 2020) to yield a Succinct Non-interactive ARgument of Knowledge (SNARK) with a linear-time prover, as well as poly-logarithmic complexity for both the verifier
runtime and the proof size. We obtain our PCS by combining the Brakedown and Basefold PCS. The commitment protocol of BrakingBase is similar to that of Brakedown. The evaluation protocol of
BrakingBase improves upon Brakedown’s verifier work by reducing it through multiple instances of the sum-check protocol. Basefold PCS is employed to commit to and later evaluate the multilinear
extension (MLE) of the witnesses involved in the sum-check protocol at random points. This includes the MLE corresponding to the parity-check matrix of the linear-time encodable code used in
Brakedown. We show that this matrix is sparse and use the Spark compiler from Spartan to evaluate its multilinear extension at a random point. We implement BrakingBase and compare its performance to
Brakedown and Basefold over a 128 bit prime field.
Constructing Dembowski–Ostrom permutation polynomials from upper triangular matrices
We establish a one-to-one correspondence between Dembowski-Ostrom (DO) polynomials and upper triangular matrices. Based on this correspondence, we give a bijection between DO permutation polynomials
and a special class of upper triangular matrices, and construct a new batch of DO permutation polynomials. To the best of our knowledge, almost all other known DO permutation polynomials are located
in finite fields of $\mathbb{F}_{2^n}$, where $n$ contains odd factors (see Table 1). However, there are no restrictions on $n$ in our results, and especially the case of $n=2^m$ has not been studied
in the literature. For example, we provide a simple necessary and sufficient condition to determine when $\gamma\, Tr(\theta_{i}x)Tr(\theta_{j}x) + x$ is a DO permutation polynomial. In addition,
when the upper triangular matrix degenerates into a diagonal matrix and the elements on the main diagonal form a basis of $\mathbb{F}_{q^{n}}$ over $\mathbb{F}_{q}$, this diagonal matrix corresponds
to all linearized permutation polynomials. In a word, we construct several new DO permutation polynomials, and our results can be viewed as an extension of linearized permutation polynomials.
A Composability Treatment of Bitcoin's Transaction Ledger with Variable Difficulty
As the first proof-of-work (PoW) permissionless blockchain, Bitcoin aims at maintaining a decentralized yet consistent transaction ledger as protocol participants (“miners”) join and leave as they
please. This is achieved by means of a subtle PoW difficulty adjustment mechanism that adapts to the perceived block generation rate, and important steps have been taken in previous work to provide a
rigorous analysis of the conditions (such as bounds on dynamic participation) that are sufficient for Bitcoin’s security properties to be ascertained. Such existing analysis, however, is
property-based, and as such only guarantees security when the protocol is run $\textbf{in isolation}$. In this paper we present the first (to our knowledge) simulation-based analysis of the Bitcoin
ledger in the dynamic setting where it operates, and show that the protocol abstraction known as the Bitcoin backbone protocol emulates, under certain participation restrictions, Bitcoin’s intended
specification. Our formulation and analysis extend the existing Universally Composable treatment for the fixed-difficulty setting, and develop techniques that might be of broader applicability, in
particular to other composable formulations of blockchain protocols that rely on difficulty adjustment.
Anonymous Public-Key Quantum Money and Quantum Voting
Quantum information allows us to build quantum money schemes, where a bank can issue banknotes in the form of authenticatable quantum states that cannot be cloned or counterfeited: a user in
possession of k banknotes cannot produce k +1 banknotes. Similar to paper banknotes, in existing quantum money schemes, a banknote consists of an unclonable quantum state and a classical serial
number, signed by bank. Thus, they lack one of the most fundamental properties cryptographers look for in a currency scheme: privacy. In this work, we first further develop the formal definitions of
privacy for quantum money schemes. Then, we construct the first public-key quantum money schemes that satisfy these security notions. Namely, • Assuming existence of indistinguishability obfuscation
and hardness of Learning with Errors, we construct a public-key quantum money scheme with anonymity against users and traceability by authorities. Since it is a policy choice whether authorities
should be able to track banknotes or not, we also construct an untraceable money scheme, where no one (not even the authorities) can track banknotes. • Assuming existence of indistinguishability
obfuscation and hardness of Learning with Errors, we construct a public-key quantum money scheme with untraceability. Further, we show that the no-cloning principle, a result of quantum mechanics,
allows us to construct schemes, with security guarantees that are classically impossible, for a seemingly unrelated application: voting! • Assuming existence of indistinguishability obfuscation and
hardness of Learning with Errors, we construct a universally verifiable quantum voting scheme with classical votes. Finally, as a technical tool, we introduce the notion of publicly rerandomizable
encryption with strong correctness, where no adversary is able to produce a malicious ciphertext and a malicious random tape such that the ciphertext before and after rerandomization (with the
malicious tape) decrypts to different values! We believe this might be of independent interest. • Assuming the (quantum) hardness of Learning with Errors, we construct a (post-quantum) classical
publicly rerandomizable encryption scheme with strong correctness
SCIF: Privacy-Preserving Statistics Collection with Input Validation and Full Security
Secure aggregation is the distributed task of securely computing a sum of values (or a vector of values) held by a set of parties, revealing only the output (i.e., the sum) in the computation.
Existing protocols, such as Prio (NSDI’17), Prio+ (SCN’22), Elsa (S&P’23), and Whisper (S&P’24), support secure aggregation with input validation to ensure inputs belong to a specified domain.
However, when malicious servers are present, these protocols primarily guarantee privacy but not input validity. Also, malicious server(s) can cause the protocol to abort. We introduce SCIF, a novel
multi-server secure aggregation protocol with input validation, that remains secure even in the presence of malicious actors, provided fewer than one-third of the servers are malicious. Our protocol
overcomes previous limitations by providing two key properties: (1) guaranteed output delivery, ensuring malicious parties cannot prevent the protocol from completing, and (2) guaranteed input
inclusion, ensuring no malicious party can prevent an honest party’s input from being included in the computation. Together, these guarantees provide strong resilience against denial-of-service
attacks. Moreover, SCIF offers these guarantees without increasing client costs over Prio and keeps server costs moderate. We present a robust end-to-end implementation of SCIF and demonstrate the
ease with which it can be instrumented by integrating it in a simulated Tor network for privacy-preserving measurement.
On the Power of Oblivious State Preparation
We put forth Oblivious State Preparation (OSP) as a cryptographic primitive that unifies techniques developed in the context of a quantum server interacting with a classical client. OSP allows a
classical polynomial-time sender to input a choice of one out of two public observables, and a quantum polynomial-time receiver to recover an eigenstate of the corresponding observable -- while
keeping the sender's choice hidden from any malicious receiver. We obtain the following results: - The existence of (plain) trapdoor claw-free functions implies OSP, and the existence of dual-mode
trapdoor claw-free functions implies round-optimal (two-round) OSP. - OSP implies the existence of proofs of quantumness, test of a qubit, blind classical delegation of quantum computation, and
classical verification of quantum computation. - Two-round OSP implies quantum money with classical communication, classically-verifiable position verification, and (additionally assuming classical
FHE with log-depth decryption) quantum FHE. Thus, the OSP abstraction helps separate the cryptographic layer from the information-theoretic layer when building cryptosystems across classical and
quantum participants. Indeed, several of the aforementioned applications were previously only known via tailored LWE-based constructions, whereas our OSP-based constructions yield new results from a
wider variety of assumptions, including hard problems on cryptographic group actions. Finally, towards understanding the minimal hardness assumptions required to realize OSP, we prove the following:
- OSP implies oblivious transfer between one classical and one quantum party. - Two-round OSP implies public-key encryption with classical keys and ciphertexts. In particular, these results help to
''explain'' the use of public-key cryptography in the known approaches to establishing a ''classical leash'' on a quantum server. For example, combined with a result of Austrin et al. (CRYPTO 22), we
conclude that perfectly-correct OSP cannot exist unconditionally in the (quantum) random oracle model.
VCVio: A Formally Verified Forking Lemma and Fiat-Shamir Transform, via a Flexible and Expressive Oracle Representation
As cryptographic protocols continue to become more complex and specialized, their security proofs have grown more complex as well, making manual verification of their correctness more difficult.
Formal verification via proof assistants has become a popular approach to solving this, by allowing researchers to write security proofs that can be verified correct by a computer. In this paper we
present a new framework of this kind for verifying security proofs, taking a foundational approach to representing and reasoning about protocols. We implement our framework in the Lean programming
language, and give a number of security proofs to demonstrate that our system is both powerful and usable, with comparable automation to similar systems. Our framework is especially focused on
reasoning about and manipulating oracle access, and we demonstrate the usefulness of this approach by implementing both a general forking lemma and a version of the Fiat-Shamir transform for sigma
protocols. As a simple case study we then instantiate these to an implementation of a Schnorr-like signature scheme.
SoK: On the Physical Security of UOV-based Signature Schemes
Multivariate cryptography currently centres mostly around UOV-based signature schemes: All multivariate round 2 candidates in the selection process for additional digital signatures by NIST are
either UOV itself or close variations of it: MAYO, QR-UOV, SNOVA, and UOV. Schemes that were previously a focus of the multivariate research community but have since been broken - like Rainbow and LUOV - are also based on UOV. Both UOV and the schemes based on it have been frequently analyzed regarding their physical security in the course of the NIST process. However, a comprehensive analysis
regarding the physical security of UOV-based signature schemes is missing. In this work, we want to bridge this gap and create a comprehensive overview of physical attacks on UOV and its variants
from the second round of NIST’s selection process for additional post-quantum signature schemes, which just started. First, we collect all existing side-channel and fault attacks on UOV-based schemes
and transfer them to the current UOV specification. Since UOV was subject to significant changes over the past few years, e.g., adaptions to the expanded secret key, some attacks need to be
reassessed. Next, we introduce new physical attacks in order to obtain an overview as complete as possible. We then show how all these attacks would translate to MAYO, QR-UOV, and SNOVA. To improve
the resistance of UOV-based signature schemes against physical attacks, we discuss and introduce dedicated countermeasures. As a related result, we observe that certain implementation decisions, like
key compression techniques and randomization choices, also have a large impact on the physical security, in particular on the effectiveness of the considered fault attacks. Finally, we provide
implementations of UOV and MAYO for the ARM Cortex-M4 architecture that feature first-order masking and protection against selected fault attacks. We benchmark the resulting overhead on a
NUCLEO-L4R5ZI board and validate our approach by performing a TVLA on original and protected subroutines, yielding significantly smaller t-values for the latter.
Improved ML-DSA Hardware Implementation With First Order Masking Countermeasure
We present the protected hardware implementation of the Module-Lattice-Based Digital Signature Standard (ML-DSA). ML-DSA is an extension of Dilithium 3.1, which is the winner of the Post Quantum
Cryptography (PQC) competition in the digital signature category. The proposed design is based on the existing high-performance Dilithium 3.1 design. We implemented existing Dilithium masking gadgets
in hardware, which had previously been implemented only in software. The masking gadgets are integrated with the unprotected ML-DSA design, and the complete design is functionally verified against the Known Answer Tests (KATs) generated from the ML-DSA reference software. We also present practical power side-channel attack results, obtained by implementing the masking gadgets on a standard side-channel evaluation FPGA board and collecting up to 1 million power traces. The protected design incurs overheads of 1.127× in LUTs, 1.2× in flip-flops, and 378× in execution time compared to the unprotected design. The experimental results show that it resists side-channel attacks.
Attacking Automotive RKE Security: How Smart are your ‘Smart’ Keys?
Remote Keyless Entry (RKE) systems are ubiquitous in modern-day automobiles, providing convenience for vehicle owners - occasionally at the cost of security. Most automobile companies have proprietary implementations of RKE; these are sometimes built on insecure algorithms and authentication mechanisms. This paper presents a comprehensive study conducted on the RKE systems of
multiple cars from four automobile manufacturers not previously explored. Specifically, we analyze the design, implementation, and security levels of 7 different cars manufactured by Honda,
Maruti-Suzuki, Toyota, and Mahindra. We also do a deep dive into the RKE system of a particular Honda model. We evaluate the susceptibility of these systems to known vulnerabilities (such as RollJam
and RollBack attacks). This is accomplished using a novel tool – ‘Puck-py’ – which helps analyze RKE protocols. Our tool automates several aspects of the protocol analysis process, reducing time and
logistical constraints in RKE research; we provide standardized protocols to execute various attacks using our Puck-Py tool. We find that, despite having a long period of time to fix security issues,
several popular automobiles remain susceptible to attacks, including the basic RollJam attack.
Succinct Randomized Encodings from Non-compact Functional Encryption, Faster and Simpler
Succinct randomized encodings allow encoding the input $x$ of a time-$t$ uniform computation $M(x)$ in sub-linear time $o(t)$. The resulting encoding $\tilde{x}$ allows recovering the result of the
computation $M(x)$, but hides any other information about $x$. Such encodings are known to have powerful applications such as reducing communication in MPC, bootstrapping advanced encryption schemes,
and constructing time-lock puzzles. Until not long ago, the only known constructions were based on indistinguishability obfuscation, and in particular they were not based on standard post-quantum
assumptions. In terms of efficiency, these constructions' encoding time is $\rm{polylog}(t)$, essentially the best one can hope for. Recently, a new construction was presented based on Circular
Learning with Errors, an assumption similar to the one used in fully-homomorphic encryption schemes, and which is widely considered to be post-quantum resistant. However, the encoding efficiency
falls significantly behind the obfuscation-based schemes and is $\approx \sqrt{t} \cdot s$, where $s$ is the space of the computation. We construct, under the same assumption, succinct randomized
encodings with encoding time $\approx t^{\varepsilon} \cdot s$ for arbitrarily small constant $\varepsilon<1$. Our construction is relatively simple, generic and relies on any non-compact single-key
functional encryption that satisfies a natural {\em efficiency preservation} property.
SophOMR: Improved Oblivious Message Retrieval from SIMD-Aware Homomorphic Compression
Privacy-preserving blockchains and private messaging services that ensure receiver-privacy face a significant UX challenge: each client must scan every payload posted on the public bulletin board
individually to avoid missing messages intended for them. Oblivious Message Retrieval (OMR) addresses this issue by securely outsourcing this expensive scanning process to a service provider using
Homomorphic Encryption (HE). In this work, we propose a new OMR scheme that substantially improves upon the previous state-of-the-art, PerfOMR (USENIX Security'24). Our implementation demonstrates
reductions of 3.3x in runtime, 2.2x in digest size, and 1.5x in key size, in a scenario with 65536 payloads (each 612 bytes), of which up to 50 are pertinent. At the core of these improvements is a
new homomorphic compression mechanism, where ciphertexts of length proportional to the number of total payloads are compressed into a digest whose length is proportional to the upper bound on the
number of pertinent payloads. Unlike previous approaches, our scheme fully exploits the native homomorphic SIMD structure of the underlying HE scheme, significantly enhancing efficiency. In the
setting described above, our compression scheme achieves 7.4x speedup compared to PerfOMR.
Revisiting Leakage-Resilient MACs and Succinctly-Committing AEAD: More Applications of Pseudo-Random Injections
Pseudo-Random Injections (PRIs) have had several applications in symmetric-key cryptography, such as in the idealization of Authenticated Encryption with Associated Data (AEAD) schemes, building
robust AEAD, and, recently, in converting a committing AEAD scheme into a succinctly committing AEAD scheme. In Crypto 2024, Bellare and Hoang showed that if an AEAD scheme is already committing, it
can be transformed into a succinctly committed scheme by encrypting part of the plaintext using a PRI. In this paper, we revisit the applications of PRIs in building Message Authentication Codes
(MACs) and AEAD schemes. First, we look at some of the properties and definitions of PRIs, such as collision resistance and unforgeability when used as a MAC with a small plaintext space, under different
leakage models. Next, we show how they can be combined with collision-resistant hash functions to build a MAC for long plaintexts, offering flexible security depending on how the PRI and equality
check are implemented. If both the PRI and equality check are leak-free, the MAC provides almost optimal security, but the security only degrades a little if the equality check is only
leakage-resilient (rather than leak-free). If the equality check has unbounded leakage, the security drops to a baseline level, rather than being completely lost. Next, we show how to use PRIs to build, from scratch, a succinctly committing online AEAD scheme dubbed scoAE, which achieves succinct CMT4 security, privacy, and Ciphertext Integrity with Misuse and Leakage (CIML2) security. Last but not least, we show how to build a succinct nonce Misuse-Resistant (MRAE) AEAD scheme, dubbed scMRAE. The construction combines the SIV paradigm with PRI-based encryption (e.g. the
Encode-then-Encipher (EtE) framework).
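For background on the SIV composition mentioned above, the following is a minimal, self-contained sketch of the generic SIV paradigm with toy stand-in primitives (an HMAC-SHA256 PRF and an HMAC-derived keystream); it illustrates only the paradigm and is not the paper's scoAE or scMRAE construction, and all names and parameters are assumptions.

    import hmac, hashlib

    # Toy sketch of the generic SIV paradigm (NOT the paper's scoAE/scMRAE scheme).
    # The synthetic IV is a PRF over (nonce, associated data, message); the message is
    # then encrypted with a keystream derived from that IV, and the IV doubles as the tag.

    def prf(key, *parts):
        data = b"".join(len(p).to_bytes(8, "big") + p for p in parts)
        return hmac.new(key, data, hashlib.sha256).digest()

    def keystream(key, iv, length):
        out, ctr = b"", 0
        while len(out) < length:
            out += prf(key, iv, ctr.to_bytes(8, "big"))
            ctr += 1
        return out[:length]

    def siv_encrypt(k1, k2, nonce, ad, msg):
        iv = prf(k1, nonce, ad, msg)                  # synthetic IV, also serves as the tag
        ct = bytes(a ^ b for a, b in zip(msg, keystream(k2, iv, len(msg))))
        return iv, ct

    def siv_decrypt(k1, k2, nonce, ad, iv, ct):
        msg = bytes(a ^ b for a, b in zip(ct, keystream(k2, iv, len(ct))))
        if not hmac.compare_digest(iv, prf(k1, nonce, ad, msg)):
            raise ValueError("authentication failed")
        return msg

Because the IV is recomputed from the full message during decryption, reusing a nonce only reveals message equality rather than breaking confidentiality, which is the misuse-resistance property the SIV paradigm is designed for.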
Batching Adaptively-Sound SNARGs for NP
A succinct non-interactive argument (SNARG) for NP allows a prover to convince a verifier that an NP statement $x$ is true with a proof whose size is sublinear in the length of the traditional NP
witness. Moreover, a SNARG is adaptively sound if the adversary can choose the statement it wants to prove after seeing the scheme parameters. Very recently, Waters and Wu (STOC 2024) showed how to
construct adaptively-sound SNARGs for NP in the plain model from falsifiable assumptions (specifically, sub-exponentially-secure indistinguishability obfuscation, sub-exponentially-secure one-way
functions, and polynomial hardness of discrete log). We consider the batch setting where the prover wants to prove a collection of $T$ statements $x_1, \ldots, x_T$ and its goal is to construct a
proof whose size is sublinear in both the size of a single witness and the number of instances $T$. In this setting, existing constructions either require the size of the public parameters to scale
linearly with $T$ (and thus, can only support an a priori bounded number of instances), or only provide non-adaptive soundness, or have proof size that scales linearly with the size of a single NP
witness. In this work, we give two approaches for batching adaptively-sound SNARGs for NP, and in particular, show that under the same set of assumptions as those underlying the Waters-Wu
adaptively-sound SNARG, we can obtain an adaptively-sound SNARG for batch NP where the size of the proof is $\mathsf{poly}(\lambda)$ and the size of the CRS is $\mathsf{poly}(\lambda + |C|)$, where $\lambda$ is a security parameter and $|C|$ is the size of the circuit that computes the associated NP relation. Our first approach builds directly on top of the Waters-Wu construction and relies on
indistinguishability obfuscation and a homomorphic re-randomizable one-way function. Our second approach shows how to combine ideas from the Waters-Wu SNARG with the chaining-based approach by Garg,
Sheridan, Waters, and Wu (TCC 2022) to obtain a SNARG for batch NP.
Pseudorandom Function-like States from Common Haar Unitary
Recent active studies have demonstrated that cryptography without one-way functions (OWFs) could be possible in the quantum world. Many fundamental primitives that are natural quantum analogs of OWFs
or pseudorandom generators (PRGs) have been introduced, and their mutual relations and applications have been studied. Among them, pseudorandom function-like state generators (PRFSGs) [Ananth, Qian,
and Yuen, Crypto 2022] are one of the most important primitives. PRFSGs are a natural quantum analogue of pseudorandom functions (PRFs), and imply many applications such as IND-CPA secret-key
encryption (SKE) and EUF-CMA message authentication codes (MACs). However, the only known constructions of (many-query-secure) PRFSGs are from OWFs or pseudorandom unitaries (PRUs). In this paper, we
construct classically-accessible adaptive secure PRFSGs in the invertible quantum Haar random oracle (QHRO) model which is introduced in [Chen and Movassagh, Quantum]. The invertible QHRO model is an
idealized model where any party can access a public single Haar random unitary and its inverse, which can be considered as a quantum analog of the random oracle model. Our PRFSG constructions
resemble the classical Even-Mansour encryption based on a single permutation, and are secure against any unbounded polynomial number of queries to the oracle and construction. To our knowledge, this
is the first application in the invertible QHRO model without any assumption or conjecture. The previous best construction in an idealized model is a PRFSG secure up to o(λ/log λ) queries in the
common Haar state model [Ananth, Gulati, and Lin, TCC 2024]. We develop new techniques on Haar random unitaries to prove the selective and adaptive security of our PRFSGs. For selective security, we
introduce a new formula, which we call the Haar twirl approximation formula. For adaptive security, we show the unitary reprogramming lemma and the unitary resampling lemma. These have their own
interest, and may have many further applications. In particular, by using the approximation formula, we give an alternative proof of the non-adaptive security of the PFC ensemble [Metger, Poremba,
Sinha, and Yuen, FOCS 2024] as an additional result. Finally, we prove that our construction is neither a PRU nor a quantum-accessible non-adaptive PRFSG by presenting quantum polynomial-time attacks. Our
attack is based on generalizing the hidden subgroup problem where the relevant function outputs quantum states.
Linear Proximity Gap for Reed-Solomon Codes within the 1.5 Johnson Bound
We establish a linear proximity gap for Reed-Solomon (RS) codes within the one-and-a-half Johnson bound. Specifically, we investigate the proximity gap for RS codes, revealing that any affine
subspace is either entirely $\delta$-close to an RS code or nearly all its members are $\delta$-far from it. When $\delta$ is within the one-and-a-half Johnson bound, we prove an upper bound on the
number of members (in the affine subspace) that are $\delta$-close to the RS code for the latter case. Our bound is linear in the length of codewords. In comparison, Ben-Sasson, Carmon, Ishai,
Kopparty and Saraf [FOCS 2020] prove a linear bound when $\delta$ is within the unique decoding bound and a quadratic bound when $\delta$ is within the Johnson bound. Note that when the rate of the
RS code is smaller than 0.23, the one-and-a-half Johnson bound is larger than the unique decoding bound. Proximity gaps for Reed-Solomon (RS) codes have implications in various RS code-based
protocols. In many cases, a stronger property than individual distance—known as correlated agreement—is required, i.e., functions in the affine subspace are not only $\delta$-close to an RS code, but
also agree on the same evaluation domain. Our results support this stronger property.
Foundations of Adaptor Signatures
Adaptor signatures extend the functionality of regular signatures through the computation of pre-signatures on messages for statements of NP relations. Pre-signatures are publicly verifiable; they
simultaneously hide and commit to a signature of an underlying signature scheme on that message. Anybody possessing a corresponding witness for the statement can adapt the pre-signature to obtain the
"regular" signature. Adaptor signatures have found numerous applications for conditional payments in blockchain systems, like payment channels (CCS'20, CCS'21), private coin mixing (CCS'22, SP'23),
and oracle-based payments (NDSS'23). In our work, we revisit the state of the security of adaptor signatures and their constructions. In particular, our two main contributions are: - Security Gaps
and Definitions: We review the widely-used security model of adaptor signatures due to Aumayr et al. (ASIACRYPT'21) and identify gaps in their definitions that render known protocols for private
coin-mixing and oracle-based payments insecure. We give simple counterexamples of adaptor signatures that are secure w.r.t. their definitions but result in insecure instantiations of these protocols.
To fill these gaps, we identify a minimal set of modular definitions that align with these practical applications. - Secure Constructions: Despite their popularity, all known constructions are (1)
derived from identification schemes via the Fiat-Shamir transform in the random oracle model or (2) require modifications to the underlying signature verification algorithm, thus making the
construction useless in the setting of cryptocurrencies. More concerningly, all known constructions were proven secure w.r.t. the insufficient definitions of Aumayr et al., leaving us with no
provably secure adaptor signature scheme to use in applications. Firstly, in this work, we salvage all current applications by proving the security of the widely-used Schnorr adaptor signatures under
our proposed definitions. We then provide several new constructions, including presenting the first adaptor signature schemes for Camenisch-Lysyanskaya (CL), Boneh-Boyen-Shacham (BBS+), and Waters
signatures, all of which are proven secure in the standard model. Our new constructions rely on a new abstraction of digital signatures, called dichotomic signatures, which covers the essential
properties we need to build adaptor signatures. Proving the security of all constructions (including identification-based schemes) relies on a novel non-black-box proof technique. Both our digital
signature abstraction and the proof technique could be of independent interest to the community.
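To make the pre-sign/adapt/extract interface concrete, one common textbook formulation of Schnorr adaptor signatures, over a group of prime order $q$ with generator $G$, public key $X = xG$, and statement $T = tG$ with witness $t$, is the following (written only as an illustration; the paper's definitions and exact hash inputs may differ):
$$\textbf{PreSign}(x, m, T):\ r \leftarrow \mathbb{Z}_q,\ \ R = rG,\ \ c = H(X, R+T, m),\ \ \hat{s} = r + c\,x,\ \ \text{output } (R, \hat{s});$$
$$\textbf{PreVerify}(X, m, T, (R, \hat{s})):\ \hat{s}\,G \stackrel{?}{=} R + c\,X; \qquad \textbf{Adapt}(\hat{s}, t):\ s = \hat{s} + t; \qquad \textbf{Extract}(s, \hat{s}):\ t = s - \hat{s}.$$
The adapted pair $(R + T, s)$ verifies as an ordinary Schnorr signature on $m$, since $sG = (R + T) + c\,X$, and anyone who sees both $\hat{s}$ and $s$ recovers the witness $t$.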
We provide several attacks on the BASS signature scheme introduced by Grigoriev, Ilmer, Ovchinnikov and Shpilrain in 2023. We lay out a trivial forgery attack which generates signatures passing the
scheme's probabilistic signature verification with high probability. Generating these forgeries is faster than generating signatures honestly. Moreover, we describe a key-only attack which allows us
to recover an equivalent private key from a signer's public key. The time complexity of this recovery is asymptotically the same as that of signing messages.
An Unstoppable Ideal Functionality for Signatures and a Modular Analysis of the Dolev-Strong Broadcast
Many foundational results in the literature of consensus follow the Dolev-Yao model (FOCS '81), which treats digital signatures as ideal objects with perfect correctness and unforgeability. However,
no work has yet formalized an ideal signature scheme that is both suitable for this methodology and possible to instantiate, or a composition theorem that ensures security when instantiating it
cryptographically. The Universal Composition (UC) framework would ensure composition if we could specify an ideal functionality for signatures and prove it UC-realizable. Unfortunately, all signature
functionalities heretofore proposed are problematic when used to construct higher-level protocols: either the functionality internally computes a computationally secure signature, and therefore
higher-level protocols must rely upon computational assumptions, or else the functionality introduces a new attack surface that does not exist when the functionality is realized. As a consequence, no
consensus protocol has ever been analyzed in a modular way using existing ideal signature functionalities. We propose a new unstoppable ideal functionality for signatures that is UC-realized exactly
by the set of standard EUF-CMA signature schemes that are consistent and linear time. No adversary can prevent honest parties from obtaining perfectly ideal signature services from our functionality.
We showcase its usefulness by presenting the first modular analysis of the Dolev-Strong broadcast protocol (SICOMP '83) in the UC framework. Our result can be interpreted as a step toward a sound
realization of the Dolev-Yao methodology.
Encrypted RAM Delegation: Applications to Rate-1 Extractable Arguments, Homomorphic NIZKs, MPC, and more
In this paper we introduce the notion of encrypted RAM delegation. In an encrypted RAM delegation scheme, the prover creates a succinct proof for a group of two input strings $x_\mathsf{pb}$ and $x_\mathsf{pr}$, where $x_\mathsf{pb}$ corresponds to a large \emph{public} input and $x_\mathsf{pr}$ is a \emph{private} input. A verifier can check correctness of computation of $\mathcal{M}$ on $(x_\mathsf{pb}, x_\mathsf{pr})$, given only the proof $\pi$ and $x_\mathsf{pb}$. We design encrypted RAM delegation schemes from a variety of standard assumptions such as DDH, LWE, or $k$-linear. We
prove strong knowledge soundness guarantee for our scheme as well as a special input hiding property to ensure that $\pi$ does not leak anything about $x_\mathsf{pr}$. We follow this by describing
multiple applications of encrypted RAM delegation. First, we show how to design a rate-1 non-interactive zero-knowledge (NIZK) argument system with a straight-line extractor. Despite over 30 years
of research, the only known construction in the literature for rate-1 NIZKs from standard assumptions relied on fully homomorphic encryption. Thus, we provide the first rate-1 NIZK scheme based
purely on DDH or $k$-linear assumptions. Next, we also design fully-homomorphic NIZKs from encrypted RAM delegation. The only prior solution crucially relied on algebraic properties of pairing-based
NIZKs, thus was only known from the decision linear assumption. We provide the first fully-homomorphic NIZK system from LWE (thus post-quantum security) and from DDH-hard groups. We also provide a
communication-complexity-preserving compiler for a wide class of semi-malicious multiparty computation (MPC) protocols to obtain fully malicious MPC protocols. This gives the first such compiler for
a wide class of MPC protocols as any comparable compiler provided in prior works relied on strong non-falsifiable assumptions such as zero-knowledge succinct non-interactive arguments of knowledge
(zkSNARKs). Moreover, we also show many other applications to composable zero-knowledge batch arguments, succinct delegation of committed programs, and fully context-hiding multi-key multi-hop
homomorphic signatures.
Smoothing Parameter and Shortest Vector Problem on Random Lattices
Lattice problems have many applications in various domains of computer science. There is currently a gap in the understanding of these problems with respect to their worst-case complexity and their
average-case behaviour. For instance, the Shortest Vector problem (SVP) on an n-dimensional lattice has worst-case complexity $2^{n+o(n)}$ \cite{ADRS15}. However, in practice, people rely on
heuristic (unproven) sieving algorithms of time complexity $2^{0.292n+o(n)}$ \cite{BeckerDGL16} to assess the security of lattice-based cryptography schemes. Those heuristic algorithms are
experimentally verified for lattices used in cryptography, which are usually random in some way. In this paper, we try to bridge the gap between worst-case and heuristic algorithms. Using the
formalism of random real lattices developed by Siegel, we show a tighter upper bound on an important lattice parameter called the smoothing parameter that applies to almost all random lattices. This
allows us to obtain a $2^{n/2+o(n)}$ time algorithm for an approximation version of the SVP on random lattices with a small constant approximation factor.
Quantum Chosen-Cipher Attack on Camellia
The Feistel structure represents a fundamental architectural component within the domain of symmetric cryptographic algorithms, with a substantial body of research conducted within the context of
classical computing environments. Nevertheless, research into specific symmetric cryptographic algorithms utilizing the Feistel structure is relatively scarce in quantum computing environments. This
paper builds upon a novel 4-round distinguisher proposed by Ito et al. for the Feistel structure under the quantum chosen-ciphertext attack (qCCA) setting. It introduces a 5-round distinguisher for
Camellia. The efficacy of the distinguisher has been empirically validated. Furthermore, this paper combines Grover's algorithm with Simon's algorithm, utilizing an analysis of Camellia's key
scheduling characteristics to construct a 9-round key recovery attack on Camellia algorithm. The time complexity for acquiring the correct key bits is $2^{61.5}$, and it requires 531 quantum bits.
This represents the inaugural chosen-ciphertext attack on Camellia under the Q2 model.
Siniel: Distributed Privacy-Preserving zkSNARK
Zero-knowledge Succinct Non-interactive Argument of Knowledge (zkSNARK) is a powerful cryptographic primitive, in which a prover convinces a verifier that a given statement is true without leaking
any additional information. However, existing zkSNARKs suffer from high computation overhead in the proof generation. This limits the applications of zkSNARKs, such as private payments, private smart
contracts, and anonymous credentials. Private delegation has become a prominent way to accelerate proof generation. In this work, we propose Siniel, an efficient private delegation framework for
zkSNARKs constructed from polynomial interactive oracle proof (PIOP) and polynomial commitment scheme (PCS). Our protocol allows a computationally limited prover (a.k.a. delegator) to delegate its
expensive prover computation to several workers without leaking any information about the private witness. Most importantly, compared with the recent work EOS (USENIX'23), the state-of-the-art
zkSNARK prover delegation framework, a prover in Siniel need not engage in the MPC protocol after sending its shares of the private witness. This means that a Siniel prover can outsource the entire computation to the workers. We compare Siniel with EOS and show significant performance advantages of the former. The experimental results show that, under low bandwidth conditions (10 MBps), Siniel saves delegators about 16% of the time required by EOS, whereas under high bandwidth conditions (1000 MBps), it saves about 80%.
ColliderScript: Covenants in Bitcoin via 160-bit hash collisions
We introduce a method for enforcing covenants on Bitcoin outputs without requiring any changes to Bitcoin by designing a hash collision based equivalence check which bridges Bitcoin's limited Big
Script to Bitcoin's Small Script. This allows us to evaluate the signature of the spending transaction (available only to Big Script) in Small Script. As Small Script enables arbitrary computations, we
can introspect into the spending transaction and enforce covenants on it. Our approach leverages finding collisions in the $160$-bit hash functions: SHA-1 and RIPEMD-160. By the birthday bound this
should cost $\sim2^{80}$ work. Each spend of our covenant costs $\sim2^{86}$ hash queries and $\sim2^{56}$ bytes of space. For security, we rely on an assumption regarding the hardness of finding a
$3$-way collision (with short random inputs) in $160$-bit hash functions, arguing that if the assumption holds, breaking covenant enforcement requires $\sim2^{110}$ hash queries. To put this in
perspective, the work to spend our covenant is $\sim33$ hours of the Bitcoin mining network, whereas breaking our covenant requires $\sim 450,000$ years of the Bitcoin mining network. We believe
there are multiple directions of future work that can significantly improve these numbers. Evaluating covenants and our equivalence check requires performing many operations in Small Script, which
must take no more than $4$ megabytes in total size, as Bitcoin does not allow transactions greater than $4$ megabytes. We only provide rough estimates of the transaction size because, as of this
writing, no Small Script implementations of the hash functions required, SHA-1 and RIPEMD-160, have been written.
Investigation of the Optimal Linear Characteristics of BAKSHEESH (Full Version)
This paper aims to provide a more comprehensive understanding of the optimal linear characteristics of BAKSHEESH. Initially, an explicit formula for the absolute correlation of the $R$-round optimal
linear characteristic of BAKSHEESH is proposed when $R \geqslant 12$. By examining the linear characteristics of BAKSHEESH with three active S-boxes per round, we derive some properties of the three
active S-boxes in each round. Furthermore, we demonstrate that there is only one 1-round iterative linear characteristic with three active S-boxes. Since the 1-round linear characteristic is unique,
it must be included in any $R$-round ($R \geqslant 12$) linear characteristics of BAKSHEESH with three active S-boxes per round. Finally, we confirm that BAKSHEESH's total number of $R$-round optimal
linear characteristics is $3072$ for $R \geqslant 12$. All of these characteristics are generated by employing the 1-round iterative linear characteristic.
Privacy-Preserving Multi-Party Search via Homomorphic Encryption with Constant Multiplicative Depth
We propose a privacy-preserving multiparty search protocol using threshold-level homomorphic encryption, which we prove correct and secure against honest-but-curious adversaries. Unlike existing
approaches, our protocol maintains a constant circuit depth. This feature enhances its suitability for practical applications involving dynamic underlying databases.
Consensus Under Adversary Majority Done Right
A spectre is haunting consensus protocols—the spectre of adversary majority. The literature is inconclusive, with possibilities and impossibilities running abound. Dolev and Strong in 1983 showed an
early possibility for up to 99% adversaries. Yet, we have known impossibility results for adversaries above 1/2 in synchrony, and above 1/3 in partial synchrony. What gives? It is high time that we
pinpoint the culprit of this confusion: the critical role of the modeling details of clients. Are the clients sleepy or always-on? Are they silent or communicating? Can validators be sleepy too? We
systematize models for consensus across four dimensions (sleepy/always-on clients, silent/communicating clients, sleepy/always-on validators, and synchrony/partial-synchrony), some of which are new,
and tightly characterize the achievable safety and liveness resiliences with matching possibilities and impossibilities for each of the sixteen models. To this end, we unify folklore and earlier
results, and fill gaps left in the literature with new protocols and impossibility theorems.
Quantum One-Time Protection of any Randomized Algorithm
The meteoric rise in power and popularity of machine learning models dependent on valuable training data has reignited a basic tension between the power of running a program locally and the risk of
exposing details of that program to the user. At the same time, fundamental properties of quantum states offer new solutions to data and program security that can require strikingly few quantum
resources to exploit, and offer advantages outside of mere computational run time. In this work, we demonstrate such a solution with quantum one-time tokens. A quantum one-time token is a quantum
state that permits a certain program to be evaluated exactly once. One-time security guarantees, roughly, that the token cannot be used to evaluate the program more than once. We propose a scheme for
building quantum one-time tokens for any randomized classical program, including generative AI models. We prove that the scheme satisfies an interesting definition of one-time security as long as
outputs of the classical algorithm have high enough min-entropy, in a black box model. Importantly, the classical program being protected does not need to be implemented coherently on a quantum
computer. In fact, the size and complexity of the quantum one-time token is independent of the program being protected, and additional quantum resources serve only to increase the security of the
protocol. Due to this flexibility in adjusting the security, we believe that our proposal is parsimonious enough to serve as a promising candidate for a near-term useful demonstration of quantum
computing in either the NISQ or early fault tolerant regime.
FLock: Robust and Privacy-Preserving Federated Learning based on Practical Blockchain State Channels
\textit{Federated Learning} (FL) is a distributed machine learning paradigm that allows multiple clients to train models collaboratively without sharing local data. Numerous works have explored
security and privacy protection in FL, as well as its integration with blockchain technology. However, existing FL works still face critical issues. (i) It is difficult to achieve \textit{poisoning robustness} and \textit{data privacy} while ensuring high \textit{model accuracy}. Malicious clients can launch \textit{poisoning attacks} that degrade the global model. Besides,
aggregators can infer private data from the gradients, causing \textit{privacy leakages}. Existing privacy-preserving poisoning defense FL solutions suffer from decreased model accuracy and high
computational overhead. (ii) Blockchain-assisted FL records iterative gradient updates on-chain to prevent model tampering, yet existing schemes are not compatible with practical
blockchains and incur high costs for maintaining the gradients on-chain. Besides, incentives are overlooked, where unfair reward distribution hinders the sustainable development of the FL community.
In this work, we propose FLock, a robust and privacy-preserving FL scheme based on practical blockchain state channels. First, we propose a lightweight, secure \textit{Multi-party Computation} (MPC)-friendly robust aggregation method based on quantization, the median, and the Hamming distance, which resists poisoning attacks as long as fewer than $50\%$ of the clients are malicious. Besides, we propose
communication-efficient Shamir's secret sharing-based MPC protocols to protect data privacy with high model accuracy. Second, we utilize blockchain off-chain state channels to achieve immutable model
records and incentive distribution. FLock achieves cost-effective compatibility with practical cryptocurrency platforms, e.g. Ethereum, along with fair incentives, by merging the secure aggregation
into a multi-party state channel. In addition, a pipelined \textit{Byzantine Fault-Tolerant} (BFT) consensus is integrated where each aggregator can reconstruct the final aggregated results. Lastly,
we implement FLock and the evaluation results demonstrate that FLock enhances robustness and privacy, while maintaining efficiency and high model accuracy. Even with 25 aggregators and 100 clients,
FLock can complete one secure aggregation for ResNet in $2$ minutes over a WAN. FLock successfully implements secure aggregation with such a large number of aggregators, thereby enhancing the fault
tolerance of the aggregation.
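To illustrate the robustness idea behind quantize-then-median aggregation, here is a plaintext sketch with assumed parameters; the paper performs a related computation inside an MPC protocol together with Hamming-distance checks, so this is only a simplified illustration, not FLock's aggregation rule.

    import statistics

    # Plaintext sketch (assumed parameters): a coordinate-wise median over quantized client
    # updates tolerates a minority of poisoned updates, since the median ignores outliers.

    def quantize(vec, scale=2**8):
        return [round(v * scale) for v in vec]

    def robust_aggregate(updates):
        # updates: list of client gradient vectors, already quantized to integers
        dim = len(updates[0])
        return [statistics.median(u[i] for u in updates) for i in range(dim)]

    honest = [[0.10, -0.20], [0.12, -0.18], [0.09, -0.21]]
    poisoned = [[9.0, 9.0]]                 # a malicious client's update
    agg = robust_aggregate([quantize(u) for u in honest + poisoned])
    # agg stays close to the honest updates despite the poisoned contribution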
Isogeny interpolation and the computation of isogenies from higher dimensional representations
The Supersingular Isogeny Diffie-Hellman (SIDH) scheme is a public key cryptosystem that was submitted to the National Institute of Standards and Technology's competition for the standardization of
post-quantum cryptography protocols. The private key in SIDH consists of an isogeny whose degree is a prime power. In July 2022, Castryck and Decru discovered an attack that completely breaks the
scheme by recovering Bob's secret key, using isogenies between higher dimensional abelian varieties to interpolate and reconstruct the isogenies comprising the SIDH private key. The original attack
applies in theory to any prime power degree, but the implementation accompanying the original attack required one of the SIDH keys involved in a key exchange to have degree equal to a power of $2$.
An implementation of the power of $3$ case was published subsequently by Decru and Kunzweiler. However, despite the passage of several years, nobody has published any implementations for prime powers
other than $2$ or $3$, and for good reason --- the necessary higher dimensional isogeny computations rapidly become more complicated as the base prime increases. In this paper, we provide for the
first time a fully general isogeny interpolation implementation that works for any choice of base prime, and provide timing benchmarks for various combinations of SIDH base prime pairs. We remark
that the technique of isogeny interpolation now has constructive applications as well as destructive applications, and that our methods may open the door to increased flexibility in constructing
isogeny-based digital signatures and cryptosystems.
How Fast Does the Inverse Walk Approximate a Random Permutation?
For a finite field $\mathbb{F}$ of size $n$, the (patched) inverse permutation $\operatorname{INV}: \mathbb{F} \to \mathbb{F}$ computes the inverse of $x$ over $\mathbb{F}$ when $x\neq 0$ and outputs
$0$ when $x=0$, and the $\operatorname{ARK}_K$ (for AddRoundKey) permutation adds a fixed constant $K$ to its input, i.e., $$\operatorname{INV}(x) = x^{n-2} \hspace{.1in} \mbox{and} \hspace{.1in} \operatorname{ARK}_K(x) = x + K \;.$$ We study the process of alternately applying the $\operatorname{INV}$ permutation followed by a random linear permutation $\operatorname{ARK}_K$, which is a
random walk over the alternating (or symmetric) group that we call the inverse walk. We show both lower and upper bounds on the number of rounds it takes for this process to approximate a random
permutation over $\mathbb{F}$. We show that the inverse walk over the field of size $n$, run for $$r = \Theta\left(n\log^2 n + n\log n\log \frac{1}{\epsilon}\right)$$ rounds, generates a permutation that is $\epsilon$-close (in variation distance) to a uniformly random even permutation (i.e. a permutation from the alternating group $A_{n}$). This is tight, up to logarithmic factors.
Our result answers an open question from the work of Liu, Pelecanos, Tessaro and Vaikuntanathan (CRYPTO 2023) by providing a missing piece in their proof of $t$-wise independence of (a variant of)
AES. It also constitutes a significant improvement on a result of Carlitz (Proc. American Mathematical Society, 1953) who showed a reachability result: namely, that every even permutation can be
generated eventually by composing $\operatorname{INV}$ and $\operatorname{ARK}$. We show a tight convergence result, namely a tight quantitative bound on the number of rounds to reach a random (even) permutation.
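As a concrete illustration of the process under study, the following is a minimal sketch of the inverse walk over a small prime field $\mathbb{F}_p$ (so $n = p$), applying $\operatorname{INV}$ followed by $\operatorname{ARK}$ with a fresh uniformly random key in each round; the field size and round count are illustrative assumptions only, not the paper's parameters.

    import random

    p = 257  # small prime, so the field of size n is F_p

    def inv(x):
        # patched inverse: x^(p-2) mod p, with INV(0) = 0
        return pow(x, p - 2, p)

    def ark(x, k):
        # AddRoundKey: add the fixed round constant k
        return (x + k) % p

    def inverse_walk(rounds, seed=0):
        rng = random.Random(seed)
        perm = list(range(p))  # start from the identity permutation on F_p
        for _ in range(rounds):
            k = rng.randrange(p)
            perm = [ark(inv(y), k) for y in perm]  # one round: INV then ARK_k
        return perm

    perm = inverse_walk(rounds=64)
    assert sorted(perm) == list(range(p))  # each round is a bijection, so this stays a permutation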
How Much Public Randomness Do Modern Consensus Protocols Need?
Modern blockchain-based consensus protocols aim for efficiency (i.e., low communication and round complexity) while maintaining security against adaptive adversaries. These goals are usually achieved
using a public randomness beacon to select roles for each participant. We examine to what extent this randomness is necessary. Specifically, we provide tight bounds on the amount of entropy a
Byzantine Agreement protocol must consume from a beacon in order to enjoy efficiency and adaptive security. We first establish that no consensus protocol can simultaneously be efficient, be
adaptively secure, and use $O(\log n)$ bits of beacon entropy. We then show this bound is tight and, in fact, a trilemma by presenting three consensus protocols that achieve any two of these three properties.
On the Jordan-Gauss graphs and new multivariate public keys
We suggest two families of multivariate public keys defined over an arbitrary finite commutative ring \(K\) with unity. The first one has a quadratic multivariate public rule; this family is an obfuscation of a previously defined cryptosystem built on the well-known algebraic graphs \(D(n, K)\) with partition sets isomorphic to \(K^n\). The other family of cryptosystems combines an Eulerian transformation of \(K[x_1, x_2, \ldots, x_n]\), which sends each variable \(x_i\) to a monomial term, with the quadratic encryption map of the first cryptosystem. The resulting map has unbounded degree and density \(O(n^4)\), like a cubic multivariate map. The space of plaintexts of the second cryptosystem is the variety \((K^*)^n\) and the space of ciphertexts is the
affine space \(K^n\).
Towards Explainable Side-Channel Leakage: Unveiling the Secrets of Microarchitecture
We explore the use of microbenchmarks, small assembly code snippets, to detect microarchitectural side-channel leakage in CPU implementations. Specifically, we investigate the effectiveness of
microbenchmarks in diagnosing the predisposition to side-channel leaks in two commonly used RISC-V cores: Picorv32 and Ibex. We propose a new framework that involves diagnosing side-channel leaks,
identifying leakage points, and constructing leakage profiles to understand the underlying causes. We apply our framework to several realistic case studies that test our framework for explaining
side-channel leaks and showcase the subtle interaction of data via order-reducing leaks.
Discrete gaussian sampling for BKZ-reduced basis
Discrete Gaussian sampling on lattices is a fundamental problem in lattice-based cryptography. In this paper, we revisit the Markov chain Monte Carlo (MCMC)-based Metropolis-Hastings-Klein (MHK)
algorithm proposed by Wang and Ling and study its complexity under the Geometric Series Assumption (GSA) when the given basis is BKZ-reduced. We give experimental evidence that the GSA is accurate in
this context, and we give a very simple approximate formula for the complexity of the sampler that is accurate over a large range of parameters and easily computable. We apply our results to the dual
attack on LWE of [Pouly and Shen 2024] and significantly improve the complexity estimates of the attack. Finally, we provide some results of independent interest on the Gaussian mass of random $q$-ary lattices.
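For reference, here is a minimal sketch of the Geometric Series Assumption profile that such complexity analyses rely on; the root-Hermite factor below is an assumed, illustrative parameter, and this is only the GSA shape, not the MHK sampler itself.

    import math

    # Under the GSA, the log Gram-Schmidt norms of a reduced basis decrease linearly:
    # ||b_1|| = delta0^n * vol^(1/n) and ||b_i*|| = alpha^(i-1) * ||b_1||
    # with alpha = delta0^(-2n/(n-1)), where delta0 is the root-Hermite factor.

    def gsa_profile(n, delta0, log_vol=0.0):
        log_b1 = n * math.log(delta0) + log_vol / n
        log_alpha = -2.0 * n / (n - 1) * math.log(delta0)
        return [log_b1 + i * log_alpha for i in range(n)]

    profile = gsa_profile(n=100, delta0=1.01)   # delta0 ~ 1.01 is a typical BKZ-like value
    assert abs(sum(profile)) < 1e-9             # the norms multiply back to vol = 1 here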
Revisiting subgroup membership testing on pairing-friendly curves via the Tate pairing
In 2023, Koshelev proposed an efficient method for subgroup membership testing on a list of non-pairing-friendly curves via the Tate pairing. In fact, this method can also be applied to certain
pairing-friendly curves, such as the BLS and BW13 families, at a cost of two small Tate pairings. In this paper, we revisit Koshelev's method to enhance its efficiency for these curve families.
First, we present explicit formulas for computing the two small Tate pairings. Compared to the original formulas, the new versions offer shorter Miller iterations and reduced storage requirements.
Second, we provide a high-speed software implementation on a 64-bit processor. Our results demonstrate that the new method is up to $62.0\%$ and $22.4\%$ faster than the state-of-the-art on the
BW13-310 and BLS24-315 curves, respectively, while being $14.1\%$ slower on BLS12-381. When precomputation is utilized, our method achieves speed improvements of up to $34.8\%$, $110.6\%$, and $63.9\%$ on the BLS12-381, BW13-310, and BLS24-315 curves, respectively.
Stealth and Beyond: Attribute-Driven Accountability in Bitcoin Transactions
Bitcoin enables decentralized, pseudonymous transactions, but balancing privacy with accountability remains a challenge. This paper introduces a novel dual accountability mechanism that enforces both
sender and recipient compliance in Bitcoin transactions. Senders are restricted to spending Unspent Transaction Outputs (UTXOs) that meet specific criteria, while recipients must satisfy legal and
ethical requirements before receiving funds. We enhance stealth addresses by integrating compliance attributes, preserving privacy while ensuring policy adherence. Our solution introduces a new
cryptographic primitive, Identity-Based Matchmaking Signatures (IB-MSS), which supports streamlined auditing. Our approach is fully compatible with existing Bitcoin infrastructure and does not
require changes to the core protocol, preserving both privacy and decentralization while enabling transaction auditing and compliance.
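For readers unfamiliar with the primitive being extended, the classical dual-key stealth-address derivation, which this work augments with compliance attributes (the paper's enhanced construction may differ in details), works as follows: the recipient publishes a scan key pair $(a, A = aG)$ and a spend key pair $(b, B = bG)$; the sender samples an ephemeral $r$, publishes $R = rG$, and pays to the one-time address
$$P = H(rA)\,G + B,$$
which the recipient detects by checking $P = H(aR)\,G + B$ (using $rA = aR$) and can spend with the one-time private key $H(aR) + b$, so outside observers cannot link $P$ to the recipient's published keys.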
Advanced Transparency System
In many contemporary settings, users need to verify that their information is correctly retained by servers, while servers need to maintain transparency logs. Many
algorithms have been designed to address this problem. For example, Certificate Transparency (CT) helps track certificates issued by Certificate Authorities (CAs), while CONIKS aims to provide key
transparency for end users. However, these algorithms often suffer from either high append time or imbalanced inclusion-proof cost and consistency-proof cost. To find an optimal solution, we
constructed two different but similar authenticated data structures tailored to two different lookup protocols. We propose ATS (Advanced Transparency System), which uses only linear storage cost to
reduce append time and balances the time costs for both servers and users. When addressing the value-lookup problem, this system allows servers to append user information in constant time and enables
radical-level inclusion proofs and consistency proofs. For the key transparency problem, the system requires logarithmic time complexity for the append operation and offers inclusion proofs and consistency proofs of acceptable cost.
An Efficient and Secure Boolean Function Evaluation Protocol
Boolean functions play an important role in designing and analyzing many cryptographic systems, such as block ciphers, stream ciphers, and hash functions, due to their unique cryptographic properties
such as nonlinearity, correlation immunity, and algebraic properties. The secure evaluation of Boolean functions or Secure Boolean Evaluation (SBE) is an important area of research. SBE allows
parties to jointly compute Boolean functions without exposing their private inputs. SBE finds applications in privacy-preserving protocols and secure multi-party computations. In this manuscript, we
present an efficient and generic two-party protocol (namely $\textsf{BooleanEval}$) for the secure evaluation of Boolean functions by utilizing a 1-out-of-2 Oblivious Transfer (OT) as a building
block. $\textsf{BooleanEval}$ employs only XOR operations as the core computational step, making it lightweight and fast. Unlike other lightweight state-of-the-art SBE designs, $\textsf{BooleanEval}$ avoids the use of additional cryptographic primitives, such as hash functions and commitment schemes, to reduce the computational overhead.
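As generic background on OT-based Boolean evaluation, the sketch below shows the textbook technique of producing XOR shares of an AND using one 1-out-of-2 OT; it is not necessarily the $\textsf{BooleanEval}$ protocol, and the idealized ot_1of2 function stands in for a real OT instantiation.

    import secrets

    # Generic illustration: two parties obtain XOR shares of (x AND y) using one 1-out-of-2 OT.
    # The sender holds bit x, the receiver holds bit y; only local XOR/AND operations are needed.

    def ot_1of2(m0, m1, choice):
        # Idealized 1-out-of-2 OT: the receiver learns m_choice and nothing else
        # (stand-in for an actual OT protocol).
        return m1 if choice else m0

    def shared_and(x, y):
        r = secrets.randbits(1)            # sender's fresh random mask
        m0, m1 = r, r ^ x                  # m_y equals r XOR (x AND y)
        s_receiver = ot_1of2(m0, m1, y)    # receiver's share
        s_sender = r                       # sender's share
        return s_sender, s_receiver        # shares XOR to x AND y

    for x in (0, 1):
        for y in (0, 1):
            a, b = shared_and(x, y)
            assert a ^ b == (x & y)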
Black-Box Timed Commitments from Time-Lock Puzzles
A Timed Commitment (TC) with time parameter $t$ is hiding for time at most $t$, that is, commitments can be force-opened by any third party within time $t$. In addition to various cryptographic
assumptions, the security of all known TC schemes relies on the sequentiality assumption of repeated squarings in hidden-order groups. The repeated squaring assumption is therefore a security
bottleneck. In this work, we give a black-box construction of TCs from any time-lock puzzle (TLP) by additionally relying on one-way permutations and collision-resistant hashing. Currently, TLPs are
known from (a) the specific repeated squaring assumption, (b) the general (necessary) assumption on the existence of worst-case non-parallelizing languages and indistinguishability obfuscation, and
(c) any iteratively sequential function and the hardness of the circular small-secret LWE problem. The latter admits a plausibly post-quantum secure instantiation. Hence, thanks to the generality of
our transform, we get i) the first TC whose timed security is based on the existence of non-parallelizing languages and ii) the first TC that is plausibly post-quantum secure. We first define
quasi publicly-verifiable TLPs (QPV-TLPs) and construct them from any standard TLP in a black-box manner without relying on any additional assumptions. Then, we devise a black-box commit-and-prove
system to transform any QPV-TLPs into a TC.
A General Quantum Duality for Representations of Groups with Applications to Quantum Money, Lightning, and Fire
Aaronson, Atia, and Susskind [Aaronson et al., 2020] established that efficiently mapping between quantum states $\ket{\psi}$ and $\ket{\phi}$ is computationally equivalent to distinguishing their
superpositions $\frac{1}{\sqrt{2}}(|\psi\rangle + |\phi\rangle)$ and $\frac{1}{\sqrt{2}}(|\psi\rangle - |\phi\rangle)$. We generalize this insight into a broader duality principle in quantum
computation, wherein manipulating quantum states in one basis is equivalent to extracting their value in a complementary basis. In its most general form, this duality principle states that for a
given group, the ability to implement a unitary representation of the group is computationally equivalent to the ability to perform a Fourier subspace extraction from the invariant subspaces
corresponding to its irreducible representations. Building on our duality principle, we present the following applications: * Quantum money, which captures quantum states that are verifiable but
unclonable, and its stronger variant, quantum lightning, have long resisted constructions based on concrete cryptographic assumptions. While (public-key) quantum money has been constructed from
indistinguishability obfuscation (iO)—an assumption widely considered too strong—quantum lightning has not been constructed from any such assumptions, with previous attempts based on assumptions that
were later broken. We present the first construction of quantum lightning with a rigorous security proof, grounded in a plausible and well-founded cryptographic assumption. We extend Zhandry's
construction from Abelian group actions [Zhandry, 2024] to non-Abelian group actions, and eliminate Zhandry's reliance on a black-box model for justifying security. Instead, we prove a direct
reduction to a computational assumption—the pre-action security of cryptographic group actions. We show how these group actions can be realized with various instantiations, including with the group
actions of the symmetric group implicit in the McEliece cryptosystem. * We provide an alternative quantum money and lightning construction from one-way homomorphisms, showing that security holds
under specific conditions on the homomorphism. Notably, our scheme exhibits the remarkable property that four distinct security notions—quantum lightning security, security against both worst-case
cloning and average-case cloning, and security against preparing a specific canonical state—are all equivalent. * Quantum fire captures the notion of a samplable distribution on quantum states that
are efficiently clonable, but not efficiently telegraphable, meaning they cannot be efficiently encoded as classical information. These states can be spread like fire, provided they are kept alive
quantumly and do not decohere. The only previously known construction relied on a unitary quantum oracle, whereas we present the first candidate construction of quantum fire in the plain model.
Fine-Grained Non-Interactive Key-Exchange without Idealized Assumptions
In this paper, we study multi-party non-interactive key exchange (NIKE) in the fine-grained setting. More precisely, we propose three multi-party NIKE schemes in three computation models, namely, the
bounded parallel-time, bounded time, and bounded storage models. Their security is based on a very mild assumption (e.g., NC1 ⊊ ⊕L/poly), or even holds without any complexity assumption. This improves the
recent work of Afshar, Couteau, Mahmoody, and Sadeghi (EUROCRYPT 2023) that requires idealized assumptions, such as random oracles or generic groups. Additionally, we show that all our constructions
satisfy a natural desirable property that we refer to as extendability, and we give generic transformations from extendable multi-party NIKE to multi-party identity-based NIKEs in the fine-grained setting.
PriSrv: Privacy-Enhanced and Highly Usable Service Discovery in Wireless Communications
Service discovery is essential in wireless communications. However, existing service discovery protocols provide no or very limited privacy protection for service providers and clients, and they
often leak sensitive information (e.g., service type, client’s identity and mobility pattern), which leads to various network-based attacks (e.g., spoofing, man-in-the-middle, identification and
tracking). In this paper, we propose a private service discovery protocol, called PriSrv, which allows a service provider and a client to respectively specify a fine-grained authentication policy
that the other party must satisfy before a connection is established. PriSrv consists of a private service broadcast phase and an anonymous mutual authentication phase with bilateral control, where
the private information of both parties is hidden beyond the fact that a mutual match to the respective authentication policy occurred. As a core component of PriSrv, we introduce the notion of
anonymous credential-based matchmaking encryption (ACME), which exerts dual-layer matching in one step to simultaneously achieve bilateral flexible policy control, selective attribute disclosure and
multi-show unlinkability. As a building block of ACME, we design a fast anonymous credential (FAC) scheme to provide constant size credentials and efficient show/verification mechanisms, which is
suitable for privacy-enhanced and highly usable service discovery in wireless networks. We present a concrete PriSrv protocol that is interoperable with popular wireless communication protocols, such
as Wi-Fi Extensible Authentication Protocol (EAP), mDNS, BLE and Airdrop, to offer privacy-enhanced protection. We present formal security proof of our protocol and evaluate its performance on
multiple hardware platforms: desktop, laptop, mobile phone and Raspberry Pi. PriSrv accomplishes private discovery and secure connection in less than 0.973 s on the first three platforms, and in less
than 2.712 s on Raspberry Pi 4B. We also implement PriSrv into IEEE 802.1X in the real network to demonstrate its practicality.
Is Periodic Pseudo-randomization Sufficient for Beacon Privacy?
In this paper, we investigate whether the privacy mechanism of periodically changing the pseudorandom identities of Bluetooth Low Energy (BLE) beacons is sufficient to ensure privacy. We consider a
new natural privacy notion for BLE broadcasting beacons which we call ``Timed-sequence-indistinguishability'' of beacons. This new privacy definition is stronger than the well-known indistinguishability, since it considers not just the advertisements' content, but also the advertisements' broadcasting times which are observable in the physical world. We then prove that beacons with periodically changing pseudorandom identities do not achieve timed-sequence-indistinguishability. We do this by presenting a novel privacy attack against BLE beacons, which we call the ``Timer
Manipulation Attack.'' This new time-based privacy attack can be executed by merely inserting or reinserting the beacon's battery at the adversary's chosen time. We performed this attack against an
actually deployed beacon. To mitigate the ``Timer Manipulation Attack'' and other attacks associated with periodic signaling, we propose a new countermeasure involving quasi-periodic randomized
scheduling of identity changes. We prove that our countermeasure ensures timed-sequence indistinguishability for beacons, thereby enhancing the beacon's privacy. Additionally, we show how to
integrate this countermeasure in the attacked system while essentially preserving its feasibility and utility, which is crucial for practical industrial adoption.
New results in Share Conversion, with applications to evolving access structures
We say there is a share conversion from a secret sharing scheme $\Pi$ to another scheme $\Pi'$ implementing the same access structure if each party can locally apply a deterministic function to their
share to transform any valid secret sharing under $\Pi$ to a valid (but not necessarily random) secret sharing under $\Pi'$ of the same secret. If such a conversion exists, we say that $\Pi\ge\Pi'$.
This notion was introduced by Cramer et al. (TCC'05), where they proved in particular that for any access structure (AS), any linear secret sharing scheme over a given field $\mathbb{F}$ has a conversion from a CNF scheme and is convertible to a DNF scheme. In this work, we initiate a systematic study of convertibility between secret sharing schemes, and present a number of results with
implications to the understanding of the convertibility landscape. - In the context of linear schemes, we present two key theorems providing necessary conditions for convertibility, proved using
linear-algebraic tools. These have several implications, such as the fact that the Shamir secret sharing scheme can be neither maximal nor minimal. Another implication is that for a broad class of access structures, a linear scheme where some party has sufficiently small share complexity may not be minimal. - Our second key result is a necessary condition for convertibility to CNF from a
broad class of (not necessarily linear) schemes. This result is proved via information-theoretic techniques and implies non-maximality for schemes with share complexity smaller than that of CNF. We
also provide a condition which is both necessary and sufficient for the existence of a share conversion to some linear scheme. The condition is stated as a system of linear equations, such that a
conversion exists if and only if a solution to the linear system exists. We note that the impossibility results for linear schemes may be viewed as identifying a subset of contradicting equations in the system. Another contribution of our paper is in defining and studying share conversion for evolving secret sharing schemes. In such schemes, recently introduced by Komargodski et al. (IEEE ToIT'18), the number of parties is not bounded a priori, and every party receives a share as it arrives, which never changes in the sequel. Our impossibility results have implications for the evolving setting as well. Interestingly, unlike the standard setting, there is no maximum or minimum in a broad class of evolving schemes, even without any restriction on the share size. Finally, we show
that, generally, there is no conversion between additive schemes over different fields, however by degrading to statistical security, it may be possible to create convertible schemes.
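To make the notion of a local share conversion concrete, the following small Python sketch walks through the classical CNF-to-Shamir conversion (the Cramer et al. direction alluded to above) for the 2-out-of-3 threshold access structure. It is not code from the paper; the prime modulus and the evaluation points are arbitrary choices made here for the demo.

```python
# Toy illustration: CNF (replicated) -> Shamir share conversion for 2-of-3 threshold.
import random

P = 2**61 - 1            # a prime modulus chosen for the demo
X = {1: 1, 2: 2, 3: 3}   # Shamir evaluation points for parties 1..3

def inv(a):              # modular inverse in Z_P
    return pow(a, -1, P)

def cnf_share(secret):
    """Replicated (CNF) sharing for 2-of-3: s = r1 + r2 + r3; party i misses r_i."""
    r = {1: random.randrange(P), 2: random.randrange(P)}
    r[3] = (secret - r[1] - r[2]) % P
    return {i: {j: r[j] for j in r if j != i} for i in (1, 2, 3)}

def convert_to_shamir(i, cnf_i):
    """Local conversion: party i evaluates p(x_i) = sum_j r_j * (1 - x_i / x_j).
    The j = i term would be zero anyway, so party i never needs the missing r_i."""
    return sum(r_j * (1 - X[i] * inv(X[j])) for j, r_j in cnf_i.items()) % P

def reconstruct(i, k, y_i, y_k):
    """Lagrange interpolation at 0 from two Shamir shares of a degree-1 polynomial."""
    li = X[k] * inv((X[k] - X[i]) % P) % P
    lk = X[i] * inv((X[i] - X[k]) % P) % P
    return (y_i * li + y_k * lk) % P

s = 123456789
shares = cnf_share(s)
y = {i: convert_to_shamir(i, shares[i]) for i in (1, 2, 3)}
assert all(reconstruct(i, k, y[i], y[k]) == s for i, k in [(1, 2), (1, 3), (2, 3)])
print("CNF -> Shamir conversion reconstructs the secret from any 2 parties")
```

As in the definition above, the resulting Shamir sharing is valid but not necessarily a uniformly random one.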
ABE for Circuits with $\mathsf{poly}(\lambda)$-sized Keys from LWE
We present a key-policy attribute-based encryption (ABE) scheme for circuits based on the Learning With Errors (LWE) assumption whose key size is independent of the circuit depth. Our result
constitutes the first improvement for ABE for circuits from LWE in almost a decade, given by Gorbunov, Vaikuntanathan, and Wee (STOC 2013) and Boneh, et al. (EUROCRYPT 2014) -- we reduce the key size
in the latter from $\mathsf{poly}(\mbox{depth},\lambda)$ to $\mathsf{poly}(\lambda)$. The starting point of our construction is a recent ABE scheme of Li, Lin, and Luo (TCC 2022), which achieves $\mathsf{poly}(\lambda)$ key size but requires pairings and generic bilinear groups in addition to LWE; we introduce new lattice techniques to eliminate the additional requirements.
Ciphertext-Policy ABE from Inner-Product FE
The enormous potential of Attribute-Based Encryption (ABE) in the context of IoT has driven researchers to propose pairing-free ABE schemes that are suitable for resource-constrained devices.
Unfortunately, many of these schemes turned out to be insecure. This fact seems to reinforce the point of view of some authors according to which instantiating an Identity-Based Encryption (IBE) in
plain Decision Diffie-Hellman (DDH) groups is impossible. In this paper, we provide a generic AND gate access structured Ciphertext-Policy ABE (CP-ABE) scheme with secret access policy from
Inner-Product Functional Encryption (IPFE). We also propose an instantiation of that generic CP-ABE scheme from the DDH assumption. From our generic CP-ABE scheme we derive an IBE scheme by
introducing the concept of Clustered Identity-Based Encryption (CIBE). Our schemes show that it is indeed possible to construct practical and secure IBE and ABE schemes based on the classical DDH assumption.
Construction of quadratic APN functions with coefficients in $\mathbb{F}_2$ in dimensions $10$ and $11$
Yu et al. described an algorithm for conducting computational searches for quadratic APN functions over the finite field $\mathbb{F}_{2^n}$, and used this algorithm to give a classification of all
quadratic APN functions with coefficients in $\mathbb{F}_{2}$ for dimensions $n$ up to 9. In this paper, we speed up the running time of that algorithm by a factor of approximately $\frac{n \times 2^n}{n^3}$. Based on this result, we give a complete classification of all quadratic APN functions over $\mathbb{F}_{2^{10}}$ with coefficients in $\mathbb{F}_{2}$. We also perform some partial
computations for quadratic APN functions over $\mathbb{F}_{2^{11}}$ with coefficients in $\mathbb{F}_{2}$, and conjecture that they form 6 CCZ-inequivalent classes which also correspond to known APN functions.
Masking Gaussian Elimination at Arbitrary Order, with Application to Multivariate- and Code-Based PQC
Digital signature schemes based on multivariate- and code-based hard problems are promising alternatives for lattice-based signature schemes, due to their smaller signature size. Hence, several
candidates in the ongoing additional standardization for quantum secure digital signature (DS) schemes by the National Institute of Standards and Technology (NIST) rely on such alternate hard
problems. Gaussian Elimination (GE) is a critical component in the signing procedure of these schemes. In this paper, we provide a masking scheme for GE with back substitution to defend against
first- and higher-order attacks. To the best of our knowledge, this work is the first to analyze and propose masking techniques for multivariate- or code-based DS algorithms. We propose a masked
algorithm for transforming a system of linear equations into row-echelon form. This is realized by introducing techniques for efficiently making leading (pivot) elements one while avoiding costly
conversions between Boolean and multiplicative masking at all orders. We also propose a technique for efficient masked back substitution, which eventually enables a secure unmasking of the public
output. We evaluate the overhead of our countermeasure for several post-quantum candidates and their different security levels at first-, second-, and third-order, including UOV, MAYO, SNOVA, QR-UOV,
and MQ-Sign. Notably, the operational cost of first-, second-, and third-order masked GE is 2.3$\times$ higher, and the randomness cost is 1.2$\times$ higher in MAYO compared to UOV for security
levels III and V. In contrast, these costs are similar in UOV and MAYO for one version of level I. We also show detailed performance results for masked GE implementations for all three security
versions of UOV on the Arm Cortex-M4 and compare them with unmasked results. Our first-order implementations targeting UOV parameters have overheads of factor 6.5$\times$, 5.9$\times$, and 5.7$\times$ compared to the unprotected implementation for NIST security levels I, III, and V.
An efficient collision attack on Castryck-Decru-Smith’s hash function
In 2020, Castryck-Decru-Smith constructed a hash function, using the (2,2)-isogeny graph of superspecial principally polarized abelian surfaces. In their construction, the initial surface was chosen
from vertices very "close" to the square of a supersingular elliptic curve with a known endomorphism ring. In this paper, we introduce an algorithm for detecting a collision on their hash function.
Under some heuristic assumptions, the time and space complexities of our algorithm are estimated to be $\widetilde{O}(p^{3/10})$, which are smaller than the complexity $\widetilde{O}(p^{3/2})$ the authors claimed to be necessary to detect a collision, where $p$ is the characteristic of the base field. In the particular case where $p$ has a special form, both the time and space
complexities of our algorithm are polynomial in $\log{p}$. We implemented our algorithm in Magma, and succeeded in detecting a collision in 17 hours (using 64 parallel computations) under a parameter
setting which the authors had claimed to be 384-bit secure.
zkMarket : Privacy-preserving Digital Data Trade System via Blockchain
In this paper, we introduce zkMarket, a privacy-preserving fair trade system on the blockchain. zkMarket addresses the challenges of transaction privacy and computational efficiency. To ensure
transaction privacy, zkMarket is built upon an anonymous transfer protocol. By combining encryption with zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARK), both the seller and
the buyer are enabled to trade fairly. Furthermore, by encrypting the decryption key, we make the data registration process more concise and improve the seller's proving time by leveraging
commit-and-prove SNARK (CP-SNARK) and our novel pseudorandom generator, the matrix-formed PRG (MatPRG). Our evaluation demonstrates that zkMarket significantly reduces the computational overhead
associated with traditional blockchain solutions while maintaining robust security and privacy. The seller can register 1MB of data in 3.2 seconds, while the buyer can generate the trade transaction
in 0.2 seconds, and the seller can finalize the trade in 0.4 seconds.
PANTHER: Private Approximate Nearest Neighbor Search in the Single Server Setting
Approximate nearest neighbor search (ANNS), also known as vector search, is an important building block for various applications, such as databases, biometrics, and machine learning. In this work, we are interested in the private ANNS problem, where the client wants to learn (and can only learn) the ANNS results without revealing the query to the server. Previous private ANNS works either suffer from high communication cost (Chen et al., USENIX Security 2020) or work under a weaker security assumption of two non-colluding servers (Servan-Schreiber et al., SP 2022). We present Panther, an efficient private ANNS framework under the single-server setting. Panther achieves its high performance via several novel co-designs of private information retrieval (PIR), secret sharing, garbled circuits, and homomorphic encryption. We conducted extensive experiments with Panther on four public datasets; the results show that Panther can answer an ANNS query on 10 million points in 23 seconds with 318 MB of communication. This is more than 6× faster and 18× more compact than Chen et al.
Universal Adaptor Signatures from Blackbox Multi-Party Computation
Adaptor signatures (AS) extend the functionality of traditional digital signatures by enabling the generation of a pre-signature tied to an instance of a hard NP relation, which can later be turned
(adapted) into a full signature upon revealing a corresponding witness. The recent work by Liu et al. [ASIACRYPT 2024] devised a generic AS scheme that can be used for any NP relation---which here we
will refer to as universal adaptor signatures scheme, in short UAS---from any one-way function. However, this generic construction depends on the Karp reduction to the Hamiltonian cycle problem,
which adds significant overhead and hinders practical applicability. In this work, we present an alternative approach to construct universal adaptor signature schemes relying on the multi-party
computation in the head (MPCitH) paradigm. This overcomes the reliance on the costly Karp reduction, while inheriting the core property of the MPCitH---which makes it an invaluable tool in efficient
cryptographic protocols---namely, that the construction is black-box with respect to the underlying cryptographic primitive (while it remains non-black-box in the relation being proven). Our
framework simplifies the design of UAS and enhances their applicability across a wide range of decentralized applications, such as blockchain and privacy-preserving systems. Our results demonstrate
that MPCitH-based UAS schemes offer strong security guarantees, making them a promising tool in the design of real-world cryptographic protocols.
Byte-wise equal property of ARADI
ARADI is a low-latency block cipher proposed by the NSA (National Security Agency) in 2024 for memory encryption. Bellini et al. experimentally demonstrated that in specific cubes of 5-round ARADI,
the cube sums are byte-wise equal, for example, to 0x9d9dc5c5. This paper modifies the MILP-based division property algorithm to prove this and observes that the rotation amount of 8 in ARADI causes
cancellations of monomials, allowing us to extend the byte-wise equal property up to 8 rounds. As a result, we obtained distinguishers for rounds 6 and 7 with lower data complexities of $2^{77}$ and
$2^{112}$, respectively, compared to previous methods.
PRIME: Differentially Private Distributed Mean Estimation with Malicious Security
Distributed mean estimation (DME) is a fundamental and important task as it serves as a subroutine in convex optimization, aggregate statistics, and, more generally, federated learning. The inputs
for distributed mean estimation (DME) are provided by clients (such as mobile devices), and these inputs often contain sensitive information. Thus, protecting privacy and mitigating the influence of
malicious adversaries are critical concerns in DME. A surge of recent works has focused on building multiparty computation (MPC) based protocols tailored for the task of secure aggregation. However,
MPC fails to directly address these two issues: (i) the potential manipulation of input by adversaries, and (ii) the leakage of information from the underlying function. This paper presents a novel
approach that addresses both these issues. We propose a secure aggregation protocol with a robustness guarantee, effectively protecting the system from "faulty" inputs introduced by malicious
clients. Our protocol further ensures differential privacy, so that the underlying function will not leak significant information about individuals. Notably, this work represents the first
comprehensive effort to combine robustness and differential privacy guarantees in the context of DME. In particular, we capture the security of the protocol via a notion of "usefulness" combined with
differential privacy inspired by the work of Mironov et al. (CRYPTO 2009) and formally analyze this security guarantee for our protocol.
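For orientation only (this is not the paper's MPC protocol), the following minimal centralized sketch shows what differentially private mean estimation looks like with the Laplace mechanism, assuming each client value lies in [0, 1] so the mean has sensitivity 1/n.

```python
# Plain epsilon-DP mean estimation (centralized, non-robust reference point).
import random

def dp_mean(values, epsilon):
    n = len(values)
    true_mean = sum(values) / n
    laplace = random.expovariate(1.0) - random.expovariate(1.0)   # Laplace(0, 1) sample
    return true_mean + laplace * (1.0 / (n * epsilon))            # scale = sensitivity / epsilon

data = [random.random() for _ in range(10_000)]
print(dp_mean(data, epsilon=0.5))
```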
Improved Attacks for SNOVA by Exploiting Stability under a Group Action
SNOVA is a post-quantum digital signature scheme based on multivariate polynomials. It is a first-round candidate in an ongoing NIST standardization process for post-quantum signatures, where it
stands out for its efficiency and compactness. Since its initial submission, there have been several improvements to its security analysis, both on key recovery and forgery attacks. All these works
reduce to solving a structured system of quadratic polynomials, which we refer to as a SNOVA system. In this work, we propose a polynomial solving algorithm tailored for SNOVA systems, which exploits the stability of the system under the action of a commutative group of matrices. This new algorithm reduces the complexity of solving SNOVA systems compared to generic ones. We show how to adapt the
reconciliation and direct attacks in order to profit from the new algorithm. Consequently, we improve the reconciliation attack for all SNOVA parameter sets with speedup factors ranging between $2^3$
and $2^{22}$. Our algorithm also reduces the complexity of the direct attack for several parameter sets. It is particularly effective for the parameters that give the best performance to SNOVA ($l=4$), which were not taken below NIST's security threshold by previous attacks. Our attack brings these parameter sets ($l=4$) below that threshold with speedup factors between $2^{33}$ and $2^{52}$ over the state-of-the-art.
Falcon is a winner of NIST's six-year post-quantum cryptography standardisation competition. Based on the celebrated full-domain-hash framework of Gentry, Peikert and Vaikuntanathan (GPV) (STOC'08),
Falcon leverages NTRU lattices to achieve the most compact signatures among lattice-based schemes. Its security hinges on a Rényi divergence-based argument for Gaussian samplers, a core element of
the scheme. However, the GPV proof, which uses statistical distance to argue closeness of distributions, fails when applied naively to Falcon due to parameter choices resulting in statistical
distances as large as $2^{-34}$. Additional implementation-driven deviations from the GPV framework further invalidate the original proof, leaving Falcon without a security proof despite its
selection for standardisation. This work takes a closer look at Falcon and demonstrates that introducing a few minor, conservative modifications allows for the first formal proof of the scheme in the
random oracle model. At the heart of our analysis lies an adaptation of the GPV framework to work with the Rényi divergence, along with an optimised method for parameter selection under this measure.
Furthermore, we obtain a provable version of the GPV framework over NTRU rings. Both these tools may be of independent interest. Unfortunately, our analysis shows that despite our modification of
Falcon-512 and Falcon-1024 we do not achieve strong unforgeability for either scheme. For plain unforgeability we are able to show that our modifications to Falcon-512 barely satisfy the claimed
120-bit security target and for Falcon-1024 we confirm the claimed security level. As such we recommend revisiting Falcon and its parameters.
Push-Button Verification for BitVM Implementations
Bitcoin, while being the most prominent blockchain with the largest market capitalization, suffers from scalability and throughput limitations that impede the development of ecosystem projects like
Bitcoin Decentralized Finance (BTCFi). Recent advancements in BitVM propose a promising Layer 2 (L2) solution to enhance Bitcoin's scalability by enabling complex computations off-chain with on-chain
verification. However, Bitcoin's constrained programming environment—characterized by its non-Turing-complete Script language lacking loops and recursion, and strict block size limits—makes
developing complex applications labor-intensive, error-prone, and necessitates manual partitioning of scripts. Under this complex programming model, subtle mistakes could lead to irreversible damage
in a trustless environment like Bitcoin. Ensuring the correctness and security of such programs becomes paramount. To address these challenges, we introduce the first formal verification tool for
BitVM implementations. Our approach involves designing a register-based, higher-level domain-specific language (DSL) that abstracts away complex stack operations, allowing developers to reason about
program correctness more effectively while preserving the semantics of the original Bitcoin Script. We present a formal computational model capturing the semantics of BitVM execution and Bitcoin
Script, providing a foundation for rigorous verification. To efficiently handle large programs and complex constraints arising from unrolled computations that simulate loops, we summarize repetitive
"loop-style" computations using loop invariant predicates in our DSL. We leverage a counterexample-guided inductive synthesis (CEGIS) procedure to lift low-level Bitcoin Script into our DSL,
facilitating efficient verification without sacrificing accuracy. Evaluated on 98 benchmarks from BitVM's SNARK verifier, our tool successfully verifies 94% of cases within seconds, demonstrating its
effectiveness in enhancing the security and reliability of BitVM.
ECPM Cryptanalysis Resource Estimation
Elliptic Curve Point Multiplication (ECPM) is a key component of the Elliptic Curve Cryptography (ECC) hierarchy protocol. However, the specific estimation of resources required for this process
remains underexplored despite its significance in the cryptanalysis of ECC algorithms, particularly binary ECC in GF(2^m). Given the extensive use of ECC algorithms in various security protocols and
devices, it is essential to conduct this examination to gain valuable insights into its cryptanalysis, specifically in terms of providing precise resource estimations, which serve as a solid basis
for further investigation in solving the Elliptic Curve Discrete Logarithm Problem. Expanding on several significant prior works, in this work, which we refer to as ECPM cryptanalysis, we estimate
quantum resources, including qubits, gates, and circuit depth, by integrating point addition (PA) and point-doubling (PD) into the ECPM scheme, culminating in a Shor’s algorithm-based binary ECC
cryptanalysis circuit. Focusing on optimizing depth, we elaborate on and implement the most efficient PD circuit and incorporate optimized Karatsuba multiplication and FLT-based inversion algorithms
for PA and PD operations. Compared to the latest PA-only circuits, our preliminary results showcase significant resource optimization for various ECPM implementations, including single-step ECPM,
ECPM with combined or selective PA/PD utilization, and total-step ECPM (2n PD + 2 PA).
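As background (unrelated to the paper's quantum circuits), the sketch below recalls how PA and PD compose into a classical double-and-add ECPM. The curve y^2 = x^3 + 2x + 2 over GF(17) and the base point are small example choices made here, not taken from the paper.

```python
# Classical double-and-add scalar multiplication on a toy short-Weierstrass curve.
p, a = 17, 2
INF = None                                 # point at infinity

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:    # P + (-P) = infinity (also covers doubling a 2-torsion point)
        return INF
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # PD slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # PA slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ecpm(k, P):                            # left-to-right double-and-add
    R = INF
    for bit in bin(k)[2:]:
        R = add(R, R)                      # one PD per bit
        if bit == "1":
            R = add(R, P)                  # one PA per set bit
    return R

G = (5, 1)                                 # a point on y^2 = x^3 + 2x + 2 mod 17
naive = INF
for _ in range(11):
    naive = add(naive, G)
assert ecpm(11, G) == naive                # double-and-add agrees with repeated addition
```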
Critical Round in Multi-Round Proofs: Compositions and Transformation to Trapdoor Commitments
In many multi-round public-coin interactive proof systems, challenges in different rounds serve different roles, but a formulation that actively utilizes this aspect has not been studied extensively.
In this paper, we propose new notions called critical-round special honest verifier zero-knowledge and critical-round special soundness. Our notions are simple, intuitive, easy to apply, and capture
several practical multi-round proof protocols including, but not limited to, those from the MPC-in-the-Head paradigm. We demonstrate the usefulness of these notions with two fundamental applications
where three-round protocols are known to be useful, but multi-round ones generally fail. First, we show that critical-round proofs yield trapdoor commitment schemes. This result also enables the
instantiation of post-quantum secure adaptor signatures and threshold ring signatures from MPCitH, resolving open questions in (Haque and Scafuro, PKC 2020) and in (Liu et al., ASIACRYPT 2024).
Second, we show that critical-round proofs can be securely composed using the Cramer-Schoenmakers-Damgård method. This solves an open question posed by Abe et al. in CRYPTO 2024. Overall, these
results shed new light on the potential of multi-round proofs in both theoretical and practical cryptographic protocol design.
Compact and Tightly Secure (Anonymous) IBE from Module LWE in the QROM
We present a new compact and tightly secure (anonymous) identity-based encryption (IBE) scheme based on structured lattices. This is the first IBE scheme that is (asymptotically) as compact as the
most practical NTRU-based schemes and tightly secure under the module learning with errors (MLWE) assumption, known as the standard lattice assumption, in the (quantum) random oracle model. In
particular, our IBE scheme is the most compact lattice-based scheme (except for NTRU-based schemes). We design our IBE scheme by instantiating the framework of Gentry, Peikert, and Vaikuntanathan
(STOC`08) using the compact trapdoor proposed by Yu, Jia, and Wang (CRYPTO'23). The tightness of our IBE scheme is achieved by extending the proof technique of Katsumata et al. (ASIACRYPT'18, JoC'21)
to the Hermite normal form setting. To achieve this, we developed some new results on module lattices that may be of independent interest.
Fully Homomorphic Encryption with Efficient Public Verification
We present an efficient Publicly Verifiable Fully Homomorphic Encryption scheme that, along with being able to evaluate arbitrary boolean circuits over ciphertexts, also generates a succinct proof of
correct homomorphic computation. Our scheme is based on FHEW proposed by Ducas and Micciancio (Eurocrypt'15), and we incorporate the GINX homomorphic accumulator (Eurocrypt'16) for improved
bootstrapping efficiency. In order to generate the proof efficiently, we generalize the widely used Rank-1 Constraint System (R1CS) to the ring setting and obtain Ring R1CS, to natively express
homomorphic computation in FHEW. In particular, we develop techniques to efficiently express in our Ring R1CS the "non-arithmetic" operations, such as gadget decomposition and modulus switching used
in the FHEW construction. We further construct a SNARG for Ring R1CS instances, by translating the Ring R1CS instance into a sum-check protocol over polynomials, and then compiling it into a succinct
non-interactive proof by incorporating the lattice-based polynomial commitment scheme of Cini, Malavolta, Nguyen, and Wee (Crypto'24). Putting together, our Publicly Verifiable FHE scheme relies on
standard hardness assumptions about lattice problems such that it generates a succinct proof of homomorphic computation of circuit $C$ in time $O(|C|^2\cdot poly(\lambda))$ and of size $O(\log^2{|C|} \cdot poly(\lambda))$. Besides, our scheme achieves the recently proposed IND-SA (indistinguishability under semi-active attack) security by Walter (EPrint 2024/1207) that exactly captures client
data privacy when a homomorphic computation can be verified.
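For readers unfamiliar with R1CS, the toy checker below shows the plain prime-field version that the Ring R1CS above generalizes. It is not the paper's construction; the instance simply encodes the single constraint z1 · z2 = z3.

```python
# Plain R1CS over a prime field: (A z) * (B z) = C z entrywise, with z = (1, z1, z2, z3).
import numpy as np

P = 97
A = np.array([[0, 1, 0, 0]])   # selects z1
B = np.array([[0, 0, 1, 0]])   # selects z2
C = np.array([[0, 0, 0, 1]])   # selects z3

def satisfied(z):
    z = np.array(z) % P
    return np.array_equal((A @ z % P) * (B @ z % P) % P, C @ z % P)

print(satisfied([1, 6, 7, 42]), satisfied([1, 6, 7, 41]))   # True False
```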
Quantum Black-Box Separations: Succinct Non-Interactive Arguments from Falsifiable Assumptions
In their seminal work, Gentry and Wichs (STOC'11) established an impossibility result for the task of constructing an adaptively-sound SNARG via black-box reduction from a falsifiable assumption. An
exciting set of recent SNARG constructions demonstrated that, if one adopts a weaker but still quite meaningful notion of adaptive soundness, then impossibility no longer holds (Waters-Wu,
Waters-Zhandry, Mathialagan-Peters-Vaikuntanathan ePrint'24). These fascinating new results raise an intriguing possibility: is there a way to remove this slight weakening of adaptive soundness,
thereby completely circumventing the Gentry-Wichs impossibility? A natural route to closing this gap would be to use a quantum black-box reduction, i.e., a reduction that can query the SNARG
adversary on superpositions of inputs. This would take advantage of the fact that Gentry-Wichs only consider classical reductions. In this work, we show that this approach cannot succeed.
Specifically, we extend the Gentry-Wichs impossibility result to quantum black-box reductions, and thereby establish an important limit on the power of such reductions.
Homomorphic Matrix Operations under Bicyclic Encoding
Homomorphically encrypted matrix operations are extensively used in various privacy-preserving applications. Consequently, reducing the cost of encrypted matrix operations is a crucial topic on which
numerous studies have been conducted. In this paper, we introduce a novel matrix encoding method, named bicyclic encoding, under which we propose two new algorithms BMM-I and BMM-II for encrypted
matrix multiplication. BMM-II outperforms the state-of-the-art algorithms in theory, while BMM-I, combined with the segmented strategy, performs well in practice, particularly for matrices with high dimensions. Another noteworthy advantage of bicyclic encoding is that it allows for transposing an encrypted matrix entirely for free. A comprehensive experimental study based on our proof-of-concept
implementation shows that each algorithm introduced in this paper has specific scenarios outperforming existing algorithms, achieving speedups ranging from 2x to 38x.
Resilience-Optimal Lightweight High-threshold Asynchronous Verifiable Secret Sharing
Shoup and Smart (SS24) recently introduced a lightweight asynchronous verifiable secret sharing (AVSS) protocol with optimal resilience directly from cryptographic hash functions (JoC 2024), offering
plausible quantum resilience and computational efficiency. However, SS24 AVSS only achieves standard secrecy to keep the secret confidential against $n/3$ corrupted parties if no honest party publishes its share. In contrast, from ``heavyweight'' public-key cryptography, one can realize so-called high-threshold asynchronous verifiable secret sharing (HAVSS), with a stronger high-threshold secrecy to tolerate $n/3$ corrupted parties and additional leaked shares from $n/3$ honest parties. This raises the following question: can we bridge the remaining gap to
design an efficient HAVSS using only lightweight cryptography? We answer the question in the affirmative by presenting a lightweight HAVSS with optimal resilience. When executing across $n$ parties
to share a secret, it attains a worst-case communication complexity of $\widetilde{O}(\lambda n^3)$ (where $\lambda$ is the cryptographic security parameter) and realizes high-threshold secrecy to
tolerate a fully asynchronous adversary that can control $t= \lfloor \frac{n-1}{3} \rfloor$ malicious parties and also learn $t$ additional secret shares from some honest parties. The (worst-case)
communication complexity of our lightweight HAVSS protocol matches that of SS24 AVSS---the state-of-the-art lightweight AVSS without high-threshold secrecy. Notably, our design is a direct and
concretely efficient reduction to hash functions in the random oracle model, without extra setup assumptions like CRS/PKI or heavy intermediate steps like hash-based zk-STARK.
Somewhat Homomorphic Encryption from Linear Homomorphism and Sparse LPN
We construct somewhat homomorphic encryption schemes from the learning sparse parities with noise (sparse LPN) problem, along with an assumption that implies linearly homomorphic encryption (e.g.,
the decisional Diffie-Hellman or decisional composite residuosity assumptions). Our resulting schemes support an a-priori bounded number of homomorphic operations: $O(\log \lambda/\log \log \lambda)$
multiplications followed by poly($\lambda$) additions, where $\lambda \in \mathbb{N}$ is a security parameter. These schemes have compact ciphertexts: after homomorphic evaluation, the bit-length of
each ciphertext is a fixed polynomial in the security parameter $\lambda$, independent of the number of homomorphic operations applied to it. This gives the first somewhat homomorphic encryption
schemes that can evaluate the class of bounded-degree polynomials with a bounded number of monomials without relying on lattice assumptions or bilinear maps. Much like in the Gentry-Sahai-Waters
fully homomorphic encryption scheme, ciphertexts in our scheme are matrices, homomorphic addition is matrix addition, and homomorphic multiplication is matrix multiplication. Moreover, when
encrypting many messages at once and performing many homomorphic evaluations at once, the bit-length of ciphertexts in some of our schemes (before and after homomorphic evaluation) can be arbitrarily
close to the bit-length of the plaintexts. The main limitation of our schemes is that they require a large evaluation key, whose size scales with the complexity of the homomorphic computation
performed, though this key can be re-used across any polynomial number of encryptions and evaluations.
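The GSW-style "matrix ciphertext" view referenced above can be illustrated with a noise-free linear-algebra toy: if every ciphertext matrix has a common secret eigenvector whose eigenvalue is the message, then matrix addition and multiplication act homomorphically. The sketch below is only that toy identity, not the paper's (noisy, sparse-LPN-based) scheme.

```python
# Noise-free eigenvector toy of "matrix ciphertexts": C_i @ v = mu_i * v.
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(4)                     # shared "secret" eigenvector

def encrypt(mu):
    # mu*I plus a random matrix whose rows are orthogonal to v (so R @ v = 0)
    R = rng.standard_normal((4, 4))
    R -= np.outer(R @ v, v) / (v @ v)
    return mu * np.eye(4) + R

C1, C2 = encrypt(2.0), encrypt(5.0)
assert np.allclose((C1 + C2) @ v, 7.0 * v)     # addition of ciphertexts adds messages
assert np.allclose((C1 @ C2) @ v, 10.0 * v)    # multiplication of ciphertexts multiplies messages
print("matrix add/mult act homomorphically on the eigenvalue")
```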
A Forgery Attack on a Code-based Signature Scheme
With the advent of quantum computers, the security of cryptographic primitives, including digital signature schemes, has been compromised. To deal with this issue, some signature schemes have been
introduced to resist against these computers. These schemes are known as post-quantum signature schemes. One group of these schemes is based on the hard problems of coding theory, called code-based
cryptographic schemes. Several code-based signature schemes are inspired by the McEliece encryption scheme using three non-singular, parity-check, and permutation matrices as the only components of
the private keys, and their product as the public key. In this paper, we focus on the analysis of a class of such signature schemes. For this purpose, we first prove that the linear relationships
between the columns of the parity-check/generator matrix appear in the public key matrix, and by exploiting this feature we perform a forgery attack on one of the signature schemes of this class as evidence. The complexity of this attack is O(n^4).
A comprehensive analysis of Regev's quantum algorithm
Public key cryptography can be based on integer factorization and the discrete logarithm problem (DLP), applicable in multiplicative groups and elliptic curves. Regev’s recent quantum algorithm was
initially designed for the factorization and was later extended to the DLP in the multiplicative group. In this article, we further extend the algorithm to address the DLP for elliptic curves.
Notably, based on celebrated conjectures in Number Theory, Regev’s algorithm is asymptotically faster than Shor’s algorithm for elliptic curves. Our analysis covers all cases where Regev’s algorithm
can be applied. We examine the general framework of Regev’s algorithm and offer a geometric description of its parameters. This preliminary analysis enables us to certify the success of the algorithm
on a particular instance before running it. In the case of integer factorization, we demonstrate that there exists an infinite family of RSA moduli for which the algorithm always fails. On the other hand, when the parameters align with the Gaussian heuristics, we prove that Regev’s algorithm succeeds. By noting that the algorithm naturally adapts to the multidimensional DLP, we prove that
it succeeds for a certain range of parameters.
On the Sample Complexity of Linear Code Equivalence for all Code Rates
In parallel with the standardization of lattice-based cryptosystems, the research community in Post-quantum Cryptography focused on non-lattice-based hard problems for constructing public-key
cryptographic primitives. The Linear Code Equivalence (LCE) Problem has gained attention regarding its practical applications and cryptanalysis. Recent advancements, including the LESS signature
scheme and its candidacy in the NIST standardization for additional signatures, supported LCE as a foundation for post-quantum cryptographic primitives. However, recent cryptanalytic results have
revealed vulnerabilities in LCE-based constructions when multiple related public keys are available for one specific code rate. In this work, we generalize the proposed attacks to cover all code
rates. We show that the complexity of recovering the private key from multiple public keys is significantly reduced for any code rate scenario. Thus, we advise against constructing specific
cryptographic primitives using LCE.
$\mathsf{Graphiti}$: Secure Graph Computation Made More Scalable
Privacy-preserving graph analysis allows performing computations on graphs that store sensitive information while ensuring all the information about the topology of the graph, as well as data
associated with the nodes and edges, remains hidden. The current work addresses this problem by designing a highly scalable framework, $\mathsf{Graphiti}$, that allows securely realising any graph
algorithm. $\mathsf{Graphiti}$ relies on the technique of secure multiparty computation (MPC) to design a generic framework that improves over the state-of-the-art framework of GraphSC by Araki et
al. (CCS'21). The key technical contribution is that $\mathsf{Graphiti}$ has round complexity independent of the graph size, which in turn allows attaining the desired scalability. Specifically, this
is achieved by (i) decoupling the $\mathsf{Scatter}$ primitive of GraphSC into separate operations of $\mathsf{Propagate}$ and $\mathsf{ApplyE}$, (ii) designing a novel constant-round approach to
realise $\mathsf{Propagate}$, as well as (iii) designing a novel constant-round approach to realise the $\mathsf{Gather}$ primitive of GraphSC by leveraging the linearity of the aggregation
operation. We benchmark the performance of $\mathsf{Graphiti}$ for the application of contact tracing via BFS for 10 hops and observe that it takes less than 2 minutes when computing over a graph of
size $10^7$. Concretely it improves over the state-of-the-art up to a factor of $1034\times$ in online run time. Similar to GraphSC by Araki et al., since $\mathsf{Graphiti}$ relies on a secure
protocol for shuffle, we additionally design a shuffle protocol secure against a semi-honest adversary in the 2-party with a helper setting. Given the versatility of shuffle protocol, the designed
solution is of independent interest. Hence, we also benchmark the performance of the designed shuffle where we observe improvements of up to $1.83\times$ in online run time when considering an input
vector of size $10^7$, in comparison to the state-of-the-art in the considered setting.
Exponential sums in linear cryptanalysis
It is shown how bounds on exponential sums derived from modern algebraic geometry, and l-adic cohomology specifically, can be used to upper bound the absolute correlations of linear approximations
for cryptographic constructions of low algebraic degree. This is illustrated by applying results of Deligne, Denef and Loeser, and Rojas-León, to obtain correlation bounds for a generalization of the
Butterfly construction, three-round Feistel ciphers, and a generalization of the Flystel construction. For each of these constructions, bounds obtained using other methods are significantly weaker.
In the case of the Flystel construction, our bounds resolve a conjecture by the designers. Correlation bounds of this type are relevant for the development of security arguments against linear
cryptanalysis, especially in the weak-key setting or for primitives that do not involve a key. Since the methods used in this paper are applicable to constructions defined over arbitrary finite
fields, the results are also relevant for arithmetization-oriented primitives such as Anemoi, which uses S-boxes based on the Flystel construction.
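To make the bounded quantity concrete, the snippet below exhaustively computes the correlations of linear approximations for the cube (Gold) map x ↦ x^3 over GF(2^5), a low-degree example in the same spirit as the Butterfly/Flystel components; the field size and modulus are choices made here, not taken from the paper.

```python
# Exhaustive linear-approximation correlations for S(x) = x^3 over GF(2^5).
def gf32_mul(a, b, mod=0b100101):                  # field modulus x^5 + x^2 + 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b100000:
            a ^= mod
        b >>= 1
    return r

S = [gf32_mul(gf32_mul(x, x), x) for x in range(32)]   # S(x) = x^3

def dot(a, x):                                      # GF(2) inner product of bit strings
    return bin(a & x).count("1") & 1

def correlation(a, b):
    # c(a, b) = 2^{-n} * sum_x (-1)^{<a,x> + <b,S(x)>}
    return sum(1 if dot(a, x) == dot(b, S[x]) else -1 for x in range(32)) / 32

# maximum absolute correlation over nonzero output masks; 0.25 for this almost-bent map
print(max(abs(correlation(a, b)) for a in range(32) for b in range(1, 32)))
```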
PQNTRU: Acceleration of NTRU-based Schemes via Customized Post-Quantum Processor
Post-quantum cryptography (PQC) has rapidly evolved in response to the emergence of quantum computers, with the US National Institute of Standards and Technology (NIST) selecting four finalist
algorithms for PQC standardization in 2022, including the Falcon digital signature scheme. The latest round of digital signature schemes introduced Hawk; both Falcon and Hawk are based on the NTRU lattice, offering compact signatures and fast generation and verification suitable for deployment on resource-constrained Internet-of-Things (IoT) devices. Despite the popularity of CRYSTALS-Dilithium and CRYSTALS-Kyber,
research on NTRU-based schemes has been limited due to their complex algorithms and operations. Falcon and Hawk's performance remains constrained by the lack of parallel execution in crucial
operations like the Number Theoretic Transform (NTT) and Fast Fourier Transform (FFT), with data dependency being a significant bottleneck. This paper enhances NTRU-based schemes Falcon and Hawk
through hardware/software co-design on a customized Single-Instruction-Multiple-Data (SIMD) processor, proposing new SIMD hardware units and instructions to expedite these schemes along with software
optimizations to boost performance. Our NTT optimization includes a novel layer merging technique for SIMD architecture to reduce memory accesses, and the use of modular algorithms (Signed Montgomery
and Improved Plantard) targets various modulus data widths to enhance performance. We explore applying layer merging to accelerate fixed-point FFT at the SIMD instruction level and devise a
dual-issue parser to streamline assembly code organization to maximize dual-issue utilization. A System-on-chip (SoC) architecture is devised to improve the practical application of the processor in
real-world scenarios. Evaluation on 28 nm technology and FPGA platform shows that our design and optimizations can increase the performance of Hawk signature generation and verification by over 7
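For context on the modular-arithmetic building block mentioned above, here is textbook (unsigned) Montgomery reduction in Python; the signed Montgomery and Plantard variants used in the paper differ in detail, and the modulus below is just an example.

```python
# Textbook Montgomery reduction with R = 2^16 and an example odd modulus.
q = 3329                              # example modulus (the Kyber prime), arbitrary here
R = 1 << 16
q_neg_inv = pow(-q, -1, R)            # -q^{-1} mod R

def redc(t):                          # returns t * R^{-1} mod q, for 0 <= t < q * R
    m = (t * q_neg_inv) % R
    u = (t + m * q) >> 16
    return u - q if u >= q else u

def mont_mul(a_mont, b_mont):         # product of two values in Montgomery form
    return redc(a_mont * b_mont)

a, b = 1234, 2345
a_m, b_m = a * R % q, b * R % q       # convert to Montgomery form
assert mont_mul(a_m, b_m) == (a * b * R) % q   # result stays in Montgomery form
```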
HTCNN: High-Throughput Batch CNN Inference with Homomorphic Encryption for Edge Computing
Homomorphic Encryption (HE) technology allows for processing encrypted data, breaking through data isolation barriers and providing a promising solution for privacy-preserving computation. The
integration of HE technology into Convolutional Neural Network (CNN) inference shows potential in addressing privacy issues in identity verification, medical imaging diagnosis, and various other
applications. The CKKS HE algorithm stands out as a popular option for homomorphic CNN inference due to its capability to handle real number computations. However, challenges such as computational
delays and resource overhead present significant obstacles to the practical implementation of homomorphic CNN inference, largely due to the complex nature of HE operations. In addition, current
methods for speeding up homomorphic CNN inference primarily address individual images or large batches of input images, lacking a solution for efficiently processing a moderate number of input images
with fast homomorphic inference capabilities, which is more suitable for edge computing applications. In response to these challenges, we introduce a novel leveled homomorphic CNN inference scheme
aimed at reducing latency and improving throughput using the CKKS scheme. Our proposed inference strategy involves mapping multiple inputs to a set of ciphertexts by exploiting the sliding window
properties of convolutions to utilize CKKS's inherent Single-Instruction-Multiple-Data (SIMD) capability. To mitigate the delay associated with homomorphic CNN inference, we introduce optimization
techniques, including mask-weight merging, rotation multiplexing, stride convolution segmentation, and folding rotations. The efficacy of our homomorphic inference scheme is demonstrated through
evaluations carried out on the MNIST and CIFAR-10 datasets. Specifically, results from the MNIST dataset on a single CPU thread show that inference for 163 images can be completed in 10.4 seconds
with an accuracy of 98.9%, which is a 6.9 times throughput improvement over state-of-the-art works. Comparative analysis with existing methodologies highlights the superior performance of our
proposed inference scheme in terms of latency, throughput, communication overhead, and memory utilization.
DEEP Commitments and Their Applications
This note studies a method of committing to a polynomial in a way that allows executions of low degree tests such as FRI to be batched and even deferred. In particular, it achieves (unlimited-depth)
aggregation for STARKs.
Offline-Online Indifferentiability of Cryptographic Systems
The indifferentiability framework has become a standard methodology that enables us to study the security of cryptographic constructions in idealized models of computation. Unfortunately, while
indifferentiability provides strong guarantees whenever the security of a construction is captured by a ``single-stage'' security game, it may generally provide no meaningful guarantees when the
security is captured by a ``multi-stage'' one. In particular, the indifferentiability framework does not capture offline-online games, where the adversary can perform an extensive offline computation
to later speed up the online phase. Such security games are extremely common, both in practice and in theory. Over the past decade, there have been numerous attempts to meaningfully extend the
indifferentiability framework to offline-online games, however, they all ultimately met with little success. In this work, our contribution is threefold. First, we propose an extension of the
classical indifferentiability framework, which we refer to as *offline-online-indifferentiability*, that applies in the context of attackers with an expensive offline phase (à la Ghoshal and Tessaro,
CRYPTO '23). Second, we show that our notion lends itself to a natural and meaningful composition theorem for offline-online security games. Lastly, as our main technical contribution, we analyze the
offline-online-indifferentiability of two classical variants of the Merkle-Damgård hashing mechanism, one where the key is fed only to the first block in the chain and the other where the key is
fed to each block in the chain. For both constructions, we prove a *tight* bound on their offline-online-indifferentiability (i.e., an upper bound and an attack that matches it). Notably, our bound
for the second variant shows that the construction satisfies *optimal* offline-online-indifferentiability.
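To fix ideas, the sketch below spells out the two keyed Merkle-Damgård variants described above, using a truncated SHA-256 call as a stand-in compression function (an illustrative choice made here, not the paper's model).

```python
# Schematic of the two keyed Merkle-Damgard variants discussed above.
import hashlib

def f(h: bytes, m: bytes) -> bytes:          # stand-in compression function
    return hashlib.sha256(h + m).digest()[:16]

IV = bytes(16)

def md_key_first(key: bytes, blocks: list) -> bytes:
    # Variant 1: the key is fed only to the first block in the chain.
    h = f(IV, key)
    for m in blocks:
        h = f(h, m)
    return h

def md_key_every(key: bytes, blocks: list) -> bytes:
    # Variant 2: the key is fed to every block in the chain.
    h = IV
    for m in blocks:
        h = f(h, key + m)
    return h

msg = [b"block-1", b"block-2", b"block-3"]
print(md_key_first(b"k", msg).hex(), md_key_every(b"k", msg).hex())
```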
Robust Double Auctions for Resource Allocation
In a zero-knowledge proof market, we have two sides. On one side, bidders with proofs of different sizes and some private value to have this proof computed. On the other side, we have distributors
(also called sellers) which have compute available to process the proofs by the bidders, and these distributors have a certain private cost to process these proofs (dependent on the size). More
broadly, this setting applies to any online resource allocation where we have bidders who desire a certain amount of a resource and distributors that can provide this resource. In this work, we study
how to devise double auctions for this setting which are truthful for users, weak group strategy proof, weak budget balanced, computationally efficient, and achieve a good approximation of the
maximum welfare possible by the set of bids. We denote such auctions as $\textit{robust}$.
Revisiting the “improving the security of multi-party quantum key agreement with five-qubit Brown states”
In 2018 Cai et al. proposed a multi-party quantum key agreement with five-qubit Brown states. They confirmed the security of their proposed scheme. However, Elhadad, Ahmed, et al. found the scheme
cannot resist the collusion attack launched by legal participants. They suggested a modification and declared that their improved version is capable of resisting this type of attack. Nevertheless,
after analysis, we found that the collusion attack still exists. Subsequently, we proposed a straightforward modification to prevent the attack. After analysis, we conclude that our modification
meets the required security and collusion attack requirements, which are very important in the quantum key agreement scheme. | {"url":"https://eprint.iacr.org/complete/","timestamp":"2024-11-14T20:32:06Z","content_type":"text/html","content_length":"279439","record_id":"<urn:uuid:4dcf855d-9834-4cda-92d8-85b9d0216dd5>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00786.warc.gz"} |
Flux phase of the half-filled band
The conjecture is verified that the optimum, energy minimizing, magnetic flux for a half-filled band of electrons hopping on a planar, bipartite graph is π per square plaquette. We require only that
the graph has periodicity in one direction and the result includes the hexagonal lattice (with flux 0 per hexagon) as a special case. The theorem goes beyond previous conjectures in several ways: (1)
It does not assume, a priori, that all plaquettes have the same flux (as in Hofstadter's model). (2) A Hubbard-type on-site interaction of any sign, as well as certain longer range interactions, can
be included. (3) The conclusion holds for positive temperature as well as the ground state. (4) The results hold in D ≥ 2 dimensions if there is periodicity in D-1 directions (e.g., the cubic lattice has the lowest energy if there is flux π in each square face).
All Science Journal Classification (ASJC) codes
• General Physics and Astronomy
| {"url":"https://collaborate.princeton.edu/en/publications/flux-phase-of-the-half-filled-band","timestamp":"2024-11-03T19:59:06Z","content_type":"text/html","content_length":"47271","record_id":"<urn:uuid:fb191877-93f6-4c44-9670-bfe3133d3cde>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00577.warc.gz"}
Texas Go Math Grade 5 Unit 3 Answer Key Algebraic Reasoning
Refer to our Texas Go Math Grade 5 Answer Key Pdf to score good marks in the exams. Test yourself by practicing the problems from Texas Go Math Grade 5 Unit 3 Answer Key Algebraic Reasoning.
Texas Go Math Grade 5 Unit 3 Answer Key Algebraic Reasoning
Show What You Know
Check your understanding of important skills.
Question 1.
Since 3 × _______ = 12
then 12 ÷ 3 = _________.
Since 3 × 4 = 12
then 12 ÷ 3 = 4.
Multiplication is also known as the inverse (or “opposite”) of Division. For example, since 3 × 4 = 12, then 12 ÷ 3 = 4. As such, knowing the multiplication tables can be helpful with division.
Question 2.
Since 48 ÷ 6 = _______.
then _______ × 6 = 48.
Since 48 ÷ 6 = 8.
then 8 × 6 = 48.
Division is a basic algebraic operation where we split a number into equal parts or groups. This is expressed as 48 ÷ 6 = 8. Division is also known as the inverse (or “opposite”) of
Multiplication. For example, since 48 ÷ 6 = 8 , then 8 x 6 = 48.
Question 3.
Since 7 × _______ = 70
then 70 ÷ 7 = _________.
Since 7 × 10 = 70
then 70 ÷ 7 = 10.
Multiplication is also known as the inverse (or “opposite”) of Division. For example, since 7 × 10 = 70, then 70 ÷ 7 = 10. As such, knowing the multiplication tables can be helpful with division.
Question 4.
Since 54 ÷ 9 = _______.
then _______ × 9 = 54.
Since 54 ÷ 9 = 6.
then 6 × 9 = 54.
Division is a basic algebraic operation where we split a number into equal parts or groups. This is expressed as 54 ÷ 9 = 6. Division is also known as the inverse (or “opposite”) of
Multiplication. For example, since 54 ÷ 9 = 6 , then 6 x 9 = 54.
Missing Factors Find the missing factor.
Question 5.
4 × ______ = 24
4 x 6 = 24
The missing factor in the above multiplication operation is 6.
Question 6.
6 × _______ = 36
6 x 6 = 36
The missing factor in the above multiplication operation is 6.
Question 7.
________ × 9 = 63
7 x 9 = 63
The missing factor in the above multiplication operation is 7.
Question 8.
_________ × 5 = 40
8 x 5 = 40
The missing factor in the above multiplication operation is 8.
Question 9.
_________ × 8 = 16
2 x 8 = 16
The missing factor in the above multiplication operation is 2.
Question 10.
11 × ________ = 88
11 x 8 = 88
The missing factor in the above multiplication operation is 8.
Area Write the area of each figure.
Question 11.
____________ square units
Area of the rectangle = length x breadth
Area of the rectangle = 3 x 6 = 18 square units
The area is 18 square units.
In the above figure we can observe that some area is covered. The covered area is in the shape of a rectangle. We have to calculate the area of the rectangle for the above figure. The covered area is 18
square units.
Question 12.
____________ square units
Area is 26 square units.
In the above figure we can observe that some area is covered. We have to calculate the area for the above figure. The covered area is 26 square units.
Question 13.
____________ square units
Area of the rectangle = length x breadth
Area of the rectangle = 1 x 7 = 7 square units
Area is 7 square units.
In the above figure we can observe that some area is covered. The covered area is in the shape of a rectangle. We have to calculate the area of the rectangle for the above figure. The covered area is 7
square units.
Question 14.
____________ square units
Area of square = side x side
Area of square = 4 x 4 = 16 square units
Area is 16 square units.
In the above figure we can observe that some area is covered. The covered area is in the shape of a square. We have to calculate the area of the square for the above figure. The covered area is 16
square units.
Vocabulary Builder
Visualize It
Match the ✓ words with their examples.
Understand Vocabulary
Complete the sentences with the words.
Question 1.
A ___________ is an ordered set of numbers or objects.
A pattern is an ordered set of numbers or objects.
Question 2.
___________ is the distance around a closed plane figure.
Perimeter is the distance around a closed plane figure.
Question 3.
A ___________ is a mathematical phrase that has numbers and operation signs but does not have an equal sign.
A numerical expression is a mathematical phrase that has numbers and operation signs but does not have an equal sign.
Question 4.
___________ is the measure of the number of unit squares.
Area is the measure of the number of unit squares.
Question 5.
An ___________ is an algebraic or numerical sentence that shows that two quantities are equal.
An equation is an algebraic or numerical sentence that shows that two quantities are equal.
Question 6.
___________ are the symbols used to show which operation or operations in an expression should be done first.
Parentheses are the symbols used to show which operation or operations in an expression should be done first.
Question 7.
___________ is the measure of the space a solid figure occupies.
Volume is the measure of the space a solid figure occupies.
Question 8.
A ___________ is a number multiplied by another number to find a product.
A factor is a number multiplied by another number to find a product.
Reading & Writing Math
Reading When you read a story, you interpret the words, sentences, and paragraphs. When you read math, you have to go beyond words. Often you must analyze graphs and tables to get the information you need.
Read the problem. Study the input/output table.
Vann works part-time for the Transit Authority. He earns $12 an hour. The input column of the input/output table shows the number of hours, x, that he works. The output column shows how much he
earns, y. What is the rule for the table?
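The rule itself is not stated above; one way to write it: since Vann earns $12 an hour, the output is 12 times the input, that is, y = 12 × x.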
Writing Keith drives a limo to the airport. He can take 4 passengers at a time. Write a problem that can be answered by using this information and an input/output table. Ask a classmate to solve the problem.
Get Ready Game
Make It Even
Object of the Game Make an expression equal to an even number.
Number/Symbol Cards: 3 sets of number cards labeled 0–9, 2 sets of symbol cards labeled + and −
Number of Players 2
Set Up
Give each player a set of symbol cards. Shuffle the number cards and place them face down in a stack.
How to Play
(1) Each player draws 3 number cards from the stack. Players use the number cards and one or two of his or her symbol cards to make an expression.
(2) A player's score is the value of the expression. If the value is an even number, the score is doubled.
(3) Return all the cards to the deck and shuffle. Repeat steps 1-2.
(4) The player with the highest score after 4 rounds wins.
You must be logged in to post a comment. | {"url":"https://gomathanswerkey.com/texas-go-math-grade-5-unit-3-answer-key/","timestamp":"2024-11-14T07:37:05Z","content_type":"text/html","content_length":"244743","record_id":"<urn:uuid:96da9951-4cc6-493a-bfe8-5ab21891a893>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00053.warc.gz"} |
Create a 3d axis-plane
8362 Views
6 Replies
1 Total Likes
Hello, I have 2 questions:
1 ) In geogebra I can create a 3d axis and a plane with it. I can ''mark'' the x,y axis also. I can rotate all this coordinate system. Can I do the same with Mathematica 9? Are there any
commands for creating a 3d axis-plane (2d-3d)?
2) Is there any official or third party ebook for Mathematica 9?
6 Replies
Geogebra is a very nifty tool for construction and manipulation of geometric structures and graphics.
I can see it as a complementary resource to Mathematica, not just as a teaching/learning path but also for construction of basic layouts that can be so tedious from the keyboard.
I haven't dug into it very deeply, but the way the Geogebra output is structured (an xml file among others) suggests that it can be parsed with Mathematica tools and the output incorporated into
Mathematica data structures.
It looks like it would be a fun thing to explore if I didn't already have thirty or forty other winter projects. I'd be happy to join anyone else in the Community who thought this would be a good
For the ebook: is there any ebook from the official site to buy?
Thanks for the second. I mean a geometry plane. How can I create it? Also, is there any ebook from the official site?
I have only a rudimentary knowledge of Geogebra, but I think you would find that anything you can do in Geogebra you can duplicate in Mathematica, but Mathematica is immensely more powerful.
In Mathematica you don't "create a 3D axis (itself a problematic idea) and a plane" as much as define geometric objects in relation to a coordinate system. All your geometric and display operations
are carried out on that data set.
With Geogebra, as with other drag-n-drop programming systems, you can produce dazzling, expressive, illuminating results very quickly, but you're ultimately limited to the scope the package authors
thought of.
It's going to be a universal problem, figuring out how to make the transition between the drag-n-drop sensibilities and the bits-n-bytes approach. Maybe, the drag-n-drop systems will advance to the
point that they'll be capable of doing everything.
I think you must be on the leading edge of this, so you're going to have to tell us.
Fred Klingener
Unfortunately I don't really understand what you're asking in (1), but for (2) you'll find a book list here:
Two freely available books are
Mathematica programming: an advanced introduction
by Leonid Shifrin and
Power programming with Mathematica
by David Wagner. Both of these will teach you the fundamentals of the language, up to an advanced level, but they won't discuss domain specific functionality (such as image processing, etc.)
However, once you're comfortable with the base language itself, it's easy to use the documentation to look up specific functionality (that would be provided by libraries/packages in most other languages).
Be respectful. Review our Community Guidelines to understand your role and responsibilities. Community Terms of Use | {"url":"https://community.wolfram.com/groups/-/m/t/170129","timestamp":"2024-11-14T00:59:54Z","content_type":"text/html","content_length":"122314","record_id":"<urn:uuid:3c31c85e-e399-4c34-b5cb-cf21ed8cb622>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00471.warc.gz"} |
Exploring the Properties of a Rectangle in Geometry
Rectangles are one of the most common shapes that students encounter in geometry. A rectangle is a four-sided shape with four right angles and two pairs of parallel sides. It has special properties
that make it an interesting subject to explore. Let’s take a look at some of these properties and how they can be used in everyday life.
Opposite Sides are Congruent
The first property that makes a rectangle unique is that its opposite sides are congruent, or equal in length: each side matches the side across from it, and all four angles are the same size. In
everyday life, this property can be seen when constructing a rectangular frame for something like a window or door, as each pair of opposite sides must be cut to exactly the same length to ensure the frame is indeed
rectangular once installed.
Rectangles Have Congruent Diagonals
The second property of rectangles is that their diagonals (lines that connect opposite corners) are always equal in length and bisect each other, crossing at the center of the rectangle. (They are perpendicular to each other only in the special case of a square.) This property is handy in practice: a builder can check that a frame is truly rectangular by measuring both diagonals and confirming they are the same length, and each diagonal divides the rectangle into two identical right triangles.
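As a quick check with made-up numbers: in a 3-by-4 rectangle, each diagonal has length √(3² + 4²) = 5 by the Pythagorean theorem, so the two diagonals are indeed the same length.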
The Sum of Angles Equals 360 Degrees
The third property of rectangles involves their angle measurements. The four interior angles of any quadrilateral add up to 360 degrees; in a rectangle all four angles are equal, so each must measure exactly 90 degrees for it to be
considered a true rectangle. In construction, this property can help builders ensure accuracy when creating walls and other rectilinear structures using multiple straight edges and right angles that
fit together perfectly without any gaps or overlapping sections.
Rectangles have many interesting properties which make them one of the most commonly used shapes in geometry and design alike. Understanding these properties can help students better understand why
certain kinds of problems require specific formulas or techniques when being solved mathematically, as well as help them recognize rectangles in everyday life situations such as construction projects
or furniture building tasks. With this knowledge, students will gain not only a better understanding of geometry but also practical applications for their knowledge outside the classroom!
What are the properties of a rectangle in geometry?
The properties of a rectangle in geometry are that its opposite sides are congruent, its diagonals are congruent and bisect each other, and the sum of its angles is equal to 360 degrees.
What are the 7 properties of rectangles?
The seven properties of rectangles are: opposite sides are congruent, the diagonals are congruent and bisect each other, the sum of its angles is equal to 360 degrees, two pairs of parallel sides, all angles are right
angles, four-sided shape and a closed figure. | {"url":"https://www.intmath.com/functions-and-graphs/exploring-the-properties-of-a-rectangle-in-geometry.php","timestamp":"2024-11-09T15:53:08Z","content_type":"text/html","content_length":"101036","record_id":"<urn:uuid:eace1eb3-eb21-474d-860b-73c5d621a9e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00047.warc.gz"} |
NC State University Libraries
MSE FINDR: A Shiny R Application to Estimate Mean Square Error Using Treatment Means and Post Hoc Test Results
Garnica, V. C., Shah, D. A., Esker, P. D., & Ojiambo, P. S. (2024, June 21). PLANT DISEASE.
author keywords: meta-analysis; missing summary statistics; R Shiny; residual variance recovery; unreported variability
UN Sustainable Development Goal Categories
Research synthesis methods such as meta-analysis rely primarily on appropriate summary statistics (i.e., means and variance) of a response of interest for implementation to draw general conclusions
from a body of research. A commonly encountered problem arises when a measure of variability of a response across a study is not explicitly provided in the summary statistics of primary studies.
Typically, these otherwise credible studies are omitted in research synthesis, leading to potential small-study effects and loss of statistical power. We present MSE FINDR, a user-friendly Shiny R
application for estimating the mean square error (i.e., within-study residual variance, [Formula: see text]) for continuous outcomes from analysis of variance (ANOVA)-type studies, with specific
experimental designs and treatment structures (Latin square, completely randomized, randomized complete block, two-way factorial, and split-plot designs). MSE FINDR accomplishes this by using
commonly reported information on treatment means, significance level (α), number of replicates, and post hoc mean separation tests (Fisher’s least significant difference [LSD], Tukey’s honest
significant difference [HSD], Bonferroni, Šidák, and Scheffé). Users upload a CSV file containing the relevant information reported in the study and specify the experimental design and post hoc test
that was applied in the analysis of the underlying data. MSE FINDR then proceeds to recover [Formula: see text] based on user-provided study information. The recovered within-study variance can be
downloaded and exported as a CSV file. Simulations of trials with a variable number of treatments and treatment effects showed that the MSE FINDR-recovered [Formula: see text] was an accurate
predictor of the actual ANOVA [Formula: see text] for one-way experimental designs when summary statistics (i.e., means, variance, and post hoc results) were available for the single factor.
Similarly, [Formula: see text] recovered by the application accurately predicted the actual [Formula: see text] for two-way experimental designs when summary statistics were available for both
factors and the sub-plot factor in split-plot designs, irrespective of the post hoc mean separation test. The MSE FINDR Shiny application, documentation, and an accompanying tutorial are hosted at
https://garnica.shinyapps.io/MSE_FindR/ and https://github.com/vcgarnica/MSE_FindR/ . With this tool, researchers can now easily estimate the within-study variance absent in published reports that
nonetheless provide appropriate summary statistics, thus enabling the inclusion of such studies that would have otherwise been excluded in meta-analyses involving estimates of effect sizes based on a
continuous response. | {"url":"https://ci.lib.ncsu.edu/citations/1201123","timestamp":"2024-11-14T20:50:31Z","content_type":"text/html","content_length":"40285","record_id":"<urn:uuid:b570e77f-99b2-4577-8cb1-cdb8bed5f0aa>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00166.warc.gz"} |
What is the doubling and halving method?
To use halving and doubling, you simply halve one of the factors and double the other. Take this example. To solve 25×16, we could double the 25 to make 50 and then halve the 16 to make 8. Suddenly
this problem becomes much easier to solve!
How do you teach halving and doubling?
To introduce this activity, children could explore ‘doubles’ and ‘halves’ in the world around them. Invite them to find items that could be described in this way. For example, pairs of items, the
number of eyelets on each side of a shoe for laces, legs on each side of a spider, dots on a ‘double’ domino.
What is halving method?
The bisection method, which is alternatively called binary chopping or interval halving, is a type of incremental search method in which the interval is always divided in half. If a function changes
sign over an interval, the function value at the midpoint is evaluated.
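A minimal sketch of interval halving in Python (the target function and tolerance are arbitrary choices for illustration):

```python
def bisect(f, lo, hi, tol=1e-9):
    """Find a root of f in [lo, hi], assuming f(lo) and f(hi) have opposite signs."""
    if f(lo) * f(hi) > 0:
        raise ValueError("f(lo) and f(hi) must have opposite signs")
    while hi - lo > tol:
        mid = (lo + hi) / 2           # halve the interval
        if f(lo) * f(mid) <= 0:       # the sign change is in the left half
            hi = mid
        else:                         # otherwise the root is in the right half
            lo = mid
    return (lo + hi) / 2

# example: the square root of 2 is the positive root of x**2 - 2
print(bisect(lambda x: x**2 - 2, 0.0, 2.0))   # ~1.41421356
```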
Does doubling and halving work in division?
Doubling and halving: find half of even numbers to 100 using partitioning. Use halving as a strategy in dividing by 2, e.g. 36 ÷ 2 is half of 36. Grouping: recognise that division is not commutative,
e.g. 16 ÷ 8 does not equal 8 ÷ 16.
What is the doubling strategy?
Doubling is a strategy that people of all ages frequently use. Young children first learn doubles as an addition of two groups. What multiplication facts can be learned by using a doubling strategy? If
you said the twos, fours, and eights facts then you are correct! That’s what makes this strategy so powerful.
What is the doubling sequence?
The doubling sequence is my name for the sequence of powers of 2: D = 〈1, 2, 4, 8, 16, 32, 64, 128, . . .〉. Terms in the doubling sequence form the basis for the binary number system: any natural
number can be written as a sum of distinct terms of the sequence.
How do you introduce halving?
To teach the halving of smaller numbers, count out the required amount of counters and move them one by one into two equal piles. For larger numbers, break them down into tens and ones, halve each
separately and then add up the result. It may help to write this down.
What is the doubling method?
As long as you know a few multiplication facts, you can use this strategy to figure out and learn new facts. To use this strategy, find a set of facts that is known to you. Then, double one factor
(or add the number to itself), and double the product, or answer from the first set of facts.
What is a doubling function?
A double exponential function is a constant raised to the power of an exponential function. The general formula is f(x) = a^(b^x) (where a > 1 and b > 1), which grows much more quickly than an exponential function.
For example, if a = b = 10: f(0) = 10.
What is the relationship between growth rate and doubling time?
There is an important relationship between the percent growth rate and its doubling time known as “the rule of 70”: to estimate the doubling time for a steadily growing quantity, simply divide the
number 70 by the percentage growth rate.
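To see how good the approximation is, here is a small Python sketch comparing the rule of 70 with the exact compound-growth doubling time (the growth rates are arbitrary examples):

```python
import math

def doubling_time_rule_of_70(r_percent):
    return 70 / r_percent

def doubling_time_exact(r_percent):
    # exact doubling time for compound growth at r% per period
    return math.log(2) / math.log(1 + r_percent / 100)

for r in (0.9, 2, 7):
    print(r, round(doubling_time_rule_of_70(r), 1), round(doubling_time_exact(r), 1))
# 0.9% -> ~77.8 vs ~77.4 periods; 2% -> 35.0 vs ~35.0; 7% -> 10.0 vs ~10.2
```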
What is the concept of doubling?
(ˈdʌblɪŋ ) noun. the activity of multiplying by two or repeating. her doubling of her prayers for him.
What is the difference between doubling time and half life?
The doubling time is a characteristic unit (a natural unit of scale) for the exponential growth equation, and its converse for exponential decay is the half-life. For example, given Canada’s net
population growth of 0.9% in the year 2006, dividing 70 by 0.9 gives an approximate doubling time of 78 years.
How do you calculate doubling?
What is Doubling Time and How is it Calculated?
1. Doubling time is the amount of time it takes for a given quantity to double in size or value at a constant growth rate.
2. Note: growth rate (r) must be entered as a percentage and not a decimal fraction.
3. dt = 70/r.
4. 35 = 70/2. | {"url":"https://liverpoololympia.com/what-is-the-doubling-and-halving-method/","timestamp":"2024-11-11T04:06:01Z","content_type":"text/html","content_length":"75503","record_id":"<urn:uuid:491c5bf4-112c-4649-afc7-b5d4bd35eaf4>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00000.warc.gz"} |
A Simple Metric for the Stance of Monetary Policy: Nominal GDP Growth Rate minus the Federal Funds Rate
I wrote up a note that briefly examines the role monetary policy may have played in the housing boom of 2003-2005. In that note I use a simple metric to measure the stance of monetary policy: the
nominal GDP growth rate minus the federal funds rate. This metric is popular outside of academia and based on the idea that if the cost of borrowing is too low relative to the growth rate of the
American economy, then excessive leverage and investment is encouraged and vice
versa. The
Economist describes
it this way: "One way to interpret this [measure] is to see America's nominal GDP growth as a proxy for the average return on investment in America Inc. If this return is higher than the cost of
borrowing, investment and growth will expand."
Using this metric, a
monetary policy would be one where the federal funds rate never wandered too far from the nominal GDP growth rate. Here, I calculate this metric as the year-on-year growth rate each quarter of
nominal GDP less the nominal federal funds rate for the quarter. Since the early 1980s the average difference between these two series, called the policy rate gap, was about 0.50%. From the 1960s to
the early 1980s the policy rate gap averaged almost 2.00%. If, in fact, this is a reasonable measure of the stance of monetary policy, these two averages shed some light on the 'Great Moderation'
debate in macroeconomics.
I used this metric in an
earlier post
where I looked at the role the Federal Reserve may have played in the housing sector. Now, I want to see how well it predicts the
recessions. To begin, take a look at the figure below, which plots the policy gap rate for 1956:Q1 through 2007:Q2 and shades in those quarters that fall within a recession. Every
recession was preceded by a negative value for the policy gap rate, but not every negative policy gap rate was followed by a
recession. This problem also arises when using the yield curve spread to predict recessions, but my impression is that it is not as pronounced. This information can be used in a
probit model to estimate the probability
of a recession. Specifically, the policy rate gap is used as the explanatory variable for a
recession dummy variable that is 1 if a recession is present and 0 otherwise. Below are the results from two forms of this
regression. The first regression uses the contemporaneous value of the policy gap rate to explain the recession dummy; the second regression uses 4 lags of the policy gap rate to explain the
recession dummy.
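To make the setup concrete, here is a hedged Python sketch of such a probit; the file and column names are hypothetical, and this is an illustration rather than the post's original code:

```python
import pandas as pd
import statsmodels.api as sm

# hypothetical quarterly data with columns: 'recession' (0/1) and 'gap'
# (nominal GDP growth rate minus the federal funds rate)
df = pd.read_csv("policy_gap.csv")

X = sm.add_constant(df["gap"])            # contemporaneous specification
model = sm.Probit(df["recession"], X).fit()
print(model.summary())

# fitted probability of recession for each observed policy rate gap
df["p_recession"] = model.predict(X)
```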
These results look promising, but still need refining (e.g. need to account for serial correlation). Nonetheless, I took a first stab at the data by taking these estimates, the actual policy gap
measure, and plugging them into a standard normal cumulative distribution function to get the following figures. These figures show the probability of a recession given the policy rate gap:
Comparing the two figures, one model appears to do better with the misses (compare 1984 in both models). Overall, the policy rate gap appears to be a promising way--in need of further refinement--to measure the stance of monetary policy.
In response to a commentator's suggestion, I have enlarged the
results and the last two graphs to make them more readable. I also went ahead and redid the analysis using a longer time series.
6 comments:
1. Thoroughly fascinating posting. From your first chart, it appears that The Fed has plenty of room to cut up to one percent off the fed funds rate.
The probit charts shows that at about 35%, the chance of a recession occuring is about 90%.
As I recall in the late 80's and in 1994, interest rates were creeping up. Today the bias is down, so I would hazard a guess that the recession will not occur.
2. The last three tables/graphs are too small to read. (Not that I'd know what the numbers in the table meant anyway.)
3. Anonymous: I hope you are right concerning the recession...I moved to Texas from Michigan this summer and am still trying to sell my house.
4. Brian:
Yes, they are small and I am not sure how to fix them. Blogger is free and usually does everything I want, but sometimes I find it challenging to use.
5. Have you compared the forecasting power of your simple metric with some of those other "real-time" forecasting methods like yield curve ?
6. Paul:
No, I have not compared the two, but I have been thinking about doing it. I would also like to see if using both measures in a forecasting model reduces the forecast error. Any suggestions? | {"url":"https://macromarketmusings.blogspot.com/2007/09/simple-metric-for-stance-of-monetary.html","timestamp":"2024-11-02T05:47:31Z","content_type":"text/html","content_length":"124773","record_id":"<urn:uuid:a7c9bb36-ca1a-4641-ba84-fb97fb4796c1>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00395.warc.gz"} |
Gerandy Brito : Alon's conjecture in random bipartite biregular graphs with applications.
This talk concerns the spectral gap in random regular graphs. We prove that almost all bipartite biregular graphs are almost Ramanujan by providing a tight upper bound for the second eigenvalue of their
adjacency operators. The proof relies on a technique introduced recently by Massoulié, which we developed for random regular graphs. The same analysis allows us to recover hidden communities in random
networks via spectral algorithms.
Comments Disabled For This Video | {"url":"https://www4.math.duke.edu/media/watch_video.php?v=d69765dd2c6d9824e2cb5f35a374de96","timestamp":"2024-11-04T14:16:13Z","content_type":"text/html","content_length":"46434","record_id":"<urn:uuid:05121859-a741-41b5-b952-52b342340d07>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00758.warc.gz"} |
Problem C
March of the Penguins
Somewhere near the south pole, a number of penguins are standing on a number of ice floes. Being social animals, the penguins would like to get together, all on the same floe. The penguins do not
want to get wet, so they have to use their limited jump distance to get together by jumping from piece to piece. However, temperatures have been high lately, and the floes are showing cracks, and they
get damaged further by the force needed to jump to another floe. Fortunately the penguins are real experts on cracking ice floes, and know exactly how many times a penguin can jump off each floe
before it disintegrates and disappears. Landing on an ice floe does not damage it. You have to help the penguins find all floes where they can meet.
Input contains a single test case, consisting of:
• One line with the integer $N$ ($1 \le N \le 100$) and a real number $D$ ($0 \le D \le 100\, 000$, at most $2$ decimals after the decimal point), denoting the number of ice pieces and the maximum
distance a penguin can jump.
• $N$ lines, each line containing four integers $x_ i$, $y_ i$, $n_ i$ and $m_ i$, denoting for each ice piece its $X$ and $Y$ coordinate, the number of penguins on it and the maximum number of
times a penguin can jump off this piece before it disappears ($-10\, 000 \le x_ i, y_ i \le 10\, 000$, $0 \le n_ i \le 10$, $1 \le m_ i \le 200$).
Output
One line containing a space-separated list of $0$-based indices of the pieces on which all penguins can meet. If no such piece exists, output a line with the single number $-1$.
Sample Input 1 Sample Output 1
5 3.5
Sample Input 2 Sample Output 2
3 1.1
-1 0 5 10 -1 | {"url":"https://kth.kattis.com/courses/DD2458/popup17/assignments/hitevp/problems/marchofpenguins","timestamp":"2024-11-05T10:26:13Z","content_type":"text/html","content_length":"27926","record_id":"<urn:uuid:99774b27-cd65-4c49-85e6-0efc0316374a>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00473.warc.gz"} |
what is the difference between estimate and estimator? – Q&A Hub – 365 Data Science
what is the difference between estimate and estimator?
In the Practice exam I did not understand the last question: what is the difference between estimate and estimator? Can you help me with that, please? The answer says the number is the estimate.
2 answers ( 0 marked as helpful)
An estimator is the rule we apply to a specific data set, and the estimate is the resulting value.
The formula we apply to approximate a population parameter is the estimator, while the numeric value it produces is the estimate. The sample mean, x bar, for example, is an estimator that
equals the sample mean rule, but the value like: 20, is an estimate. | {"url":"https://365datascience.com/question/what-is-the-difference-between-estimate-and-estimator/","timestamp":"2024-11-07T09:36:22Z","content_type":"text/html","content_length":"111260","record_id":"<urn:uuid:f1cbc828-4309-449c-83bd-6f035bae120a>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00476.warc.gz"} |
Diagramming Solutions for PrepTest 56 | The LSAT Trainer
Game 1
Step 1
Per the given scenario, we can write out the six elements to be placed - F, G, H, J, K, and L - and the six positions to be filled, in order.
Step 2
Per the third and fourth rules, we can create two separate frames, one in which we have J G L consecutively, and another in which we have J (one other element) L G consecutively.
Step 3
Per the first rule, we can notate on both frames that H comes after J. In the first frame, that means it must come after the entire J G L chain.
Step 4
Per the second rule, we can notate on both frames that K comes after G.
Step 5
We can notate that F is not directly restricted by any of the given rules.
Game 2 (Option 1)
Step 1
Per the given scenario, we can write out the four elements to be placed - G, H, J, and M, and the six positions to be filled, two each for groups R, S, and T.
Step 2
Per the first rule, we can notate the biconditional that G is assigned to S if and only if H is assigned to R, as well as the contrapositive.
Step 3
Per the second rule, we can notate that if J is assigned to T, M will be assigned to R, as well as the contrapositive.
Step 4
Per the third rule, we can notate that G and J cannot be grouped together.
Game 2 (Option 2)
Step 1
Per the given scenario, we can write out the four elements to be placed - G, H, J, and M, and the six positions to be filled, two each for groups R, S, and T.
Step 2
Per the first rule, we can create two frames, one in which G is assigned to S and H to R, and another in which G is not assigned to S and H is not assigned to R.
Step 3
Per the second rule, we can notate that if J is assigned to T, M will be assigned to R, as well as the contrapositive.
Step 4
Per the third rule, we can notate that G and J cannot be grouped together. We can infer from this that in frame 1 J can’t be in group S.
Game 3 (Option 1)
Step 1
Per the given scenario and the first rule, we can write out the four elements to be placed - M, O, S, and T - and we can lay out our six total assignments - three each in groups G and L. It’s helpful
to notice that for each group only one of the four elements will not be chosen.
Step 2
Per the second rule, we can create two frames, one in which M and S are assigned to G, and another in which M and S are assigned to L. Note that assigning M and S to one group does not exclude the
possibility of M and S also appearing together in the other group.
Step 3
Per the third rule, we can infer that T can never be the element not selected. If T were not selected, O would have to be, in order to get us up to the three varieties that we would need, and we
would then violate this third rule. Thus we can simply notate this rule by placing T into every group. And, with T placed, we no longer have a need to represent this rule conditionally.
Step 4
Per the fourth rule, in the second frame, we can place M into G.
Game 3 (Option 2)
Step 1
Per the given scenario and the first rule, we can write out the four elements to be placed - M, O, S, and T - and we can lay out our six total assignments - three each in groups G and L. It’s helpful
to notice that for each group only one of the four elements will not be chosen.
Step 2
Per the second rule, we can create three frames, one in which M and S are assigned to G (but not assigned together to L), and another in which M and S are assigned to L (but not assigned together to
G), and a third in which M and S are assigned to both G and L.
Step 3
Per the third rule, we can infer that T can never be the element not selected. If T were not selected, O would have to be, in order to get us up to the three varieties that we would need, and we
would then violate this third rule. Thus we can simply notate this rule by placing T into every group. And, with T placed, we no longer have a need to represent this rule conditionally.
Step 4
In the first frame, we have three elements - M, S, and O - left to fill two positions, and, per how we designed our frames, we know M and S can’t fill them together. So, the final two positions in
frame 1 must be filled by O and either M or S.
Step 5
In the second frame, per the fourth rule, we need to place an M into group G. Since we can’t have M and S together in group G in this frame, the final position in group G must be filled by O.
Game 4 (Option 1)
Step 1
Per the given scenario, we can write out the elements to be placed - executives Q, R, S, T, and V, as well as the subsets to be placed - plants F, H, M, and we can lay out the positions to be filled,
ordered first through third, and separated out by subset or element. For the executives, we can indicate that each site must get at least one executive.
Step 2
Per the first rule, we can notate that F must come before H.
Step 3
Per the second rule, we can notate that F will get exactly one assignment.
Step 4
Per the third rule, we can notate that Q must come before both R and T.
Step 5
Per the fourth rule, we can notate that V cannot come before S.
Step 6
We can notate that plant M is not directly restricted by any of the given rules.
Game 4 (Option 2)
Step 1
Per the given scenario, we can write out the elements to be placed - executives Q, R, S, T, and V, as well as the subsets to be placed - plants F, H, M, and we can lay out the positions to be filled,
ordered first through third, and separated out by subset or element. For the executives, we can indicate that each site must get at least one executive.
Step 2
Per the first rule, we can create two frames, one in which F is the first plant visited, and the second in which F is the second plant visited. When F is the first plant visited, H or M could be
second or third. When F is the second plant visited, H must be third and M first.
Step 3
Per the second rule, we can notate in both frames that F will have exactly one assignment.
Step 4
Per the third rule, we can notate that Q must come before both R and T.
Step 5
Per the fourth rule, we can notate that V cannot come before S.
Step 6
We can note a small inference, per the third and fourth rules, that either Q or S must occupy the first slot in the first frame, and that we must, per the same reason, have a Q or S in the first
group in the second frame as well. (Additionally, we could also split the first frame into two, one with Q as the only executive assigned to F, and the other with S as the only executive assigned to
Step 7
We can notate that plant M is not directly restricted by any of the given rules. | {"url":"https://www.trainertestprep.com/lsat/logic-game-diagramming-solutions/prep-test-56","timestamp":"2024-11-05T02:29:54Z","content_type":"text/html","content_length":"21956","record_id":"<urn:uuid:97b20b0d-9a36-4806-b8f5-54e38e3127fb>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00796.warc.gz"} |
Latoya Bought 8 Pounds Of Potatoes At A Local Wholesaler For $18. Her Friend Herman Bought 6 Pounds Of Potatoes For $15. Who Got The Better Deal?
Latoya got a better deal because she paid 25 cents less per pound than Herman.
Step-by-step explanation:
Latoya 18 divided by 8= $2.25
Herman 15 divided by 6= $2.50
So Latoya saves $0.25 per pound.
The Fourier transform of this sequence does not exist
What is Fourier Transform?
The Fourier transform is a mathematical transformation that breaks a function down into its frequency components; its output is a function of frequency.
To determine the z-transform of a given sequence, we need to find the sum of the sequence's terms multiplied by z raised to the negative of the term's position. For example, given a sequence x[n],
the z-transform X(z) is given by
$$X(z) = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n}$$
To sketch the pole-zero plot of a z-transform, we need to plot the poles (values of z where the z-transform is infinite) and zeros (values of z where the z-transform is zero) in the complex plane.
The region of convergence (ROC) is the set of values of z for which the sum defining the z-transform converges.
x[n] = a^n u[n], where a is a constant and u[n] is the unit step function (u[n] = 1 for n ≥ 0 and u[n] = 0 for n < 0).
The z-transform of this sequence is given by
$$X(z) = \sum_{n=0}^{\infty} a^n z^{-n} = \frac{1}{1 - a z^{-1}} = \frac{z}{z - a}$$
This z-transform has a pole at z = a and a zero at z = 0, so the pole-zero plot consists of a single pole at a and a zero at the origin. The ROC is the set of values of z such that |a z^{-1}| < 1, i.e. |z| > |a|: the region outside the circle of radius |a| centered at the origin. The Fourier transform of this sequence exists (the ROC includes the unit circle) provided |a| < 1.
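As a quick numerical cross-check of the pole, zero, and series above (a sketch; a = 0.5 is an arbitrary illustrative value):

```python
import numpy as np
from scipy.signal import tf2zpk

a = 0.5  # illustrative constant with |a| < 1
# X(z) = z / (z - a): numerator z, denominator z - a (coefficients in descending powers of z)
zeros, poles, gain = tf2zpk([1, 0], [1, -a])
print(zeros, poles)    # zero at 0, pole at a

# partial-sum check of X(z) = sum a^n z^{-n} at a point inside the ROC (|z| > |a|)
z0 = 2.0
partial = sum((a / z0) ** n for n in range(200))
print(partial, 1 / (1 - a / z0))   # the two values agree
```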
x[n] = (-1)^n u[n].
The z-transform of this sequence is given by
$$X(z) = \sum_{n=0}^{\infty} (-1)^n z^{-n} = \frac{1}{1 + z^{-1}} = \frac{z}{z + 1}$$
This z-transform has a pole at z = -1 and a zero at z = 0, so the pole-zero plot consists of a single pole at -1 and a zero at the origin. The ROC is the set of values of z such that |z^{-1}| < 1, i.e. |z| > 1: the region outside the unit circle centered at the origin. Because the pole lies on the unit circle, the ROC does not include it, and the Fourier transform of this sequence does not converge in the ordinary sense.
x[n] = n^2 u[n].
The z-transform of this sequence is given by
$$X(z) = \sum_{n=0}^{\infty} n^2 z^{-n} = \frac{z^{-1}(1 + z^{-1})}{(1 - z^{-1})^3} = \frac{z(z+1)}{(z-1)^3}$$
This z-transform has a triple pole at z = 1 and zeros at z = 0 and z = -1. The ROC is the set of values of z such that |z^{-1}| < 1, i.e. |z| > 1: the region outside the unit circle centered at the origin.
Hence, the ROC does not include the unit circle (the poles lie on it), and the Fourier transform of this sequence does not exist in the ordinary sense.
To know more about Fourier Transform visit, | {"url":"https://www.cairokee.com/homework-solutions/latoya-bought-8-pounds-of-potatoes-at-a-local-wholesaler-for-j8ux","timestamp":"2024-11-09T12:53:15Z","content_type":"text/html","content_length":"93326","record_id":"<urn:uuid:c963d4e3-9177-4d47-8669-a0205d8890e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00372.warc.gz"} |
Physics with Calculus/Electromagnetism/Gauss' Law - Wikibooks, open books for an open world
Gauss' Law relates the amount of charge contained within a volume to the electric flux passing through the surface of that volume. It was developed by Carl Friedrich Gauss, a 19th century
mathematician. Electric flux can only truly be described by mathematics, but has an intuitive meaning as well. Essentially, electric flux is the amount of 'electric field' going through a surface.
Think of a cage which is immersed in a flowing river, where the cage is meshed so that the water runs through it fairly unabated. The 'flux' is the amount of water passing through the cage's surface
at any one moment. For this discussion we shall not concern ourselves with electric fields which change in time, so consider the electric flux to be constant in time.
When using Gauss' law we create something called a "Gaussian surface", or just the "Gaussian". A Gaussian is an imaginary surface which is completely closed.
Electric flux (think of it as field line flow), denoted Φ, measures how much field passes through a surface S. For a uniform field (parallel field lines), Φ = E A cos θ, where θ is the angle between the direction of the field lines and the normal to the surface.
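As a small worked example (values chosen only for illustration): a uniform field of E = 200 N/C crossing a flat surface of area A = 0.5 m² at θ = 60° gives Φ = E A cos θ = 200 × 0.5 × cos 60° = 50 N·m²/C.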
${\displaystyle \Phi _{f}=\int _{S}\mathbf {F} \cdot \mathbf {dA} }$
where F is a vector field, dA is the area element of the surface S, directed along the surface normal, and Φ_f is the resulting flux. For a closed surface that encloses the source of the field, the flux is given by Gauss' law.
Gauss' Law
${\displaystyle \Phi =\oint _{S}\mathbf {E} \cdot d\mathbf {A} ={1 \over \epsilon _{o}}\int _{V}\rho \cdot dV={\frac {Q_{A}}{\epsilon _{o}}}}$
where ${\displaystyle \mathbf {E} }$ is the electric field, ${\displaystyle d\mathbf {A} }$ is the area of a differential square on the closed surface ${\displaystyle S}$ with an outward facing
surface normal defining its direction, ${\displaystyle \mathrm {Q} _{A}}$ is the charge enclosed by the surface, ${\displaystyle \rho }$ is the charge density at a point in ${\displaystyle V}$ , ${\
displaystyle \epsilon _{o}}$ is the permittivity of free space and ${\displaystyle \oint _{S}}$ is the integral over the surface ${\displaystyle S}$ enclosing volume ${\displaystyle V}$ . | {"url":"https://en.m.wikibooks.org/wiki/Physics_with_Calculus/Electromagnetism/Gauss%27_Law","timestamp":"2024-11-10T12:10:28Z","content_type":"text/html","content_length":"40307","record_id":"<urn:uuid:d2b8dd75-fc5e-4004-b0f4-d958b20f9606>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00874.warc.gz"} |
Bioequivalence and Bioavailability Forum
Indirect adjusted comparisons in BE [General Statistics]
Dear all (and esp. Shuanghe),
I don’t want to hi-jack ElMaestro’s
thread about spreadsheets any longer
Shuanghe and I had an OT-discussion about indirect adjusted comparisons in BE starting
I must confess that I don’t get what Luther Gwaza
et al.^1,2
means by his “pragmatic method”:
This method does not require the assumption of homogeneity of variances (\(SE_d=SE_{A}^{2}+SE_{B}^{2}\)), since it is unlikely verifiable, between studies with small sample sizes that follow
Student’s t-test distribution (t[0.9,d.f.]), whose degrees of freedom are approximated for simplicity as if the variances were homogeneous (n[1]+n[2]–2).
Sounds great but
does he pool variances here? If they are weighted by the studies’ sample sizes – which IMHO, is the correct method – we end up in the homoscedastic method, where (in his notation):$$SD_{pooled}^{2}=\
frac{(n_1-1)SD_{1}^{2}+(n_2-1)SD_{2}^{2}}{n_1+n_2-2}$$ BTW, is this a typo in the quote? A pooled variance as the sum? I suspect that he simply used the arithmetic mean which would not be correct
even for equal samples & variances.
@Shuanghe: How did you do it?
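As an aside, the pooled-variance formula above is easy to script; a minimal Python sketch with made-up numbers (an illustration only, not code from the paper):

```python
import math

def sd_pooled(n1, sd1, n2, sd2):
    """Pooled SD under the homoscedastic assumption."""
    var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return math.sqrt(var)

# made-up example: two small studies
print(sd_pooled(n1=12, sd1=0.25, n2=18, sd2=0.30))
```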
1. Gwaza L, Gordon J, Welink J, Potthast H, Hansson H, Stahl M, García-Arieta A. Statistical approaches to indirectly compare bioequivalence between generics: a comparison of methodologies employing
artemether / lumefantrine 20/120 mg tablets as prequalified by WHO. Eur J Clin Pharmacol. 2012;68(12):1611–8. doi:10.1007/s00228-012-1396-1.
Dif-tor heh smusma 🖖🏼 Довге життя Україна! []
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked.
🚮 Science Quotes
Complete thread: | {"url":"https://forum.bebac.at/forum_entry.php?id=20417","timestamp":"2024-11-03T23:36:17Z","content_type":"text/html","content_length":"15628","record_id":"<urn:uuid:7ee6f143-127e-49e5-ade6-9a69ecde0308>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00441.warc.gz"} |
Math Day 15 (age 15)
During the Math Day 15 secondary school students (3havo/vwo, age 15) work for a day (ca. 09:00-14:00 hr) in teams of three or four on a 'large' mathematical (thinking) task, which focuses on
problem solving. The final product is a paper. The task matches as closely as possible to the new goals for the lower grades of secondary education and the cTWO thinking activities. | {"url":"https://wiskundeinteams.sites.uu.nl/events/math-day-15/?lang=en","timestamp":"2024-11-02T08:44:13Z","content_type":"text/html","content_length":"36136","record_id":"<urn:uuid:8bab5ad7-24e2-4b8b-bc92-6f46861ada6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00102.warc.gz"} |
Classical Magnitudes and Scaling Laws | Nanosystems
Classical Magnitudes and Scaling Laws
2.1. Overview
Most physical magnitudes characterizing nanoscale systems differ enormously from those familiar in macroscale systems. Some of these magnitudes can, however, be estimated by applying scaling laws to
the values for macroscale systems. Although later chapters seldom use this approach, it can provide orientation, preliminary estimates, and a means for testing whether answers derived by more
sophisticated methods are in fact reasonable.
The first of the following sections considers the role of engineering approximations in more detail (Section 2.2); the rest present scaling relationships based on classical continuum models and
discuss how those relationships break down as a consequence of atomic-scale structure, mean-free-path effects, and quantum mechanical effects. Section 2.3 discusses mechanical systems, where many
scaling laws are quite accurate on the nanoscale. Section 2.4 discusses electromagnetic systems, where many scaling laws fail dramatically on the nanoscale. Section 2.5 discusses thermal systems,
where scaling laws have variable accuracy. Finally, Section 2.6 briefly describes how later chapters go beyond these simple models.
2.2. Approximation and classical continuum models
When used with caution, classical continuum models of nanoscale systems can be of substantial value in design and analysis. They represent the simplest level in a hierarchy of approximations of
increasing accuracy, complexity, and difficulty.
Experience teaches the value of approximation in design. A typical design process starts with the generation and preliminary evaluation of many options, then selects a few options for further
elaboration and evaluation, and finally settles on a detailed specification and analysis of a single preferred design. The first steps entail little commitment to a particular approach. The ease of
exploring and comparing many qualitatively distinct approaches is at a premium, and drastic approximations often suffice to screen out the worst options. Even the final design and analysis does not
require an exact calculation of physical behavior: approximations and compensating safety margins suffice. Accordingly, a design process can use different approximations at different stages, moving
toward greater analytical accuracy and cost.
Approximation is inescapable because the most accurate physical models are computationally intractable. In macromechanical design, engineers employ approximations based on classical mechanics,
neglecting quantum mechanics, the thermal excitation of mechanical motions, and the molecular structure of matter. Since macromechanical engineering blends into nanomechanical engineering with no
clear line of demarcation, the approximations of macromechanical engineering offer a point of departure for exploring the nanomechanical realm. In some circumstances, these approximations (with a few
adaptations) provide an adequate basis for the design and analysis of nanoscale systems. In a broader range of circumstances, they provide an adequate basis for exploring design options and for
conducting a preliminary analysis. In a yet broader range of circumstances, they provide a crude description to which one can compare more sophisticated approximations.
2.3. Scaling of classical mechanical systems
Nanomechanical systems are fundamental to molecular manufacturing and are useful in many of its products and processes. The widespread use in chemistry of molecular mechanics approximations together
with the classical equations of motion (Sections 3.3, 4.2.3a) indicates the utility of describing nanoscale mechanical systems in terms of classical mechanics. This section describes scaling laws and
magnitudes with the added approximation of continuous media.
2.3.1. Basic assumptions
The following discussion considers mechanical systems, neglecting fields and currents. Like later sections, it examines how different physical magnitudes depend on the size of a system (defined by a
length parameter $L$ ) if all shape parameters and material properties (e.g., strengths, moduli, densities, coefficients of friction) are held constant.
A description of scaling laws must begin with choices that determine the scaling of dynamical variables. A natural choice is that of constant stress. This implies scale-independent ${ }^{\circ}$
elastic deformation, and hence scale-independent shape; since it results in scale-independent speeds, it also implies constancy of the space-time shapes describing the trajectories of moving parts.
Some exemplar calculations are provided, based on material properties like those of diamond (Table 9.1): density $\rho=3.5 \times 10^{3} \mathrm{~kg} / \mathrm{m}^{3}$; Young's modulus $E=10^{12} \
mathrm{~N} / \mathrm{m}^{2}$; and a low working stress ( $\sim .2$ times tensile strength) $\sigma=10^{10} \mathrm{~N} / \mathrm{m}^{2}$. This choice of materials often yields large parameter values
(for speeds, accelerations, etc.) relative to those characteristic of more familiar engineering materials.
2.3.2. Magnitudes and scaling
Given constancy of stress and material strength, both the strength of a structure and the force it exerts scale with its cross-sectional area
$\begin{equation*} \text { total strength } \propto \text { force } \propto \text { area } \propto L^{2} \tag{2.1} \end{equation*}$
Nanoscale devices accordingly exert only small forces: a stress of $10^{10} \mathrm{~N} / \mathrm{m}^{2}$ equals $10^{-8} \mathrm{~N} / \mathrm{nm}^{2}$, or $10 \mathrm{nN} / \mathrm{nm}^{2}$.
Stiffness in ${ }^{\circ}$shear, like stretching stiffness, depends on both area and length
$\begin{equation*} \text { shear stiffness } \propto \text { stretching stiffness } \propto \frac{\text { area }}{\text { length }} \propto L \tag{2.2} \end{equation*}$
and varies less rapidly with scale; a cubic nanometer block of $E=10^{12} \mathrm{~N} / \mathrm{m}^{2}$ has a stretching stiffness of $1000 \mathrm{~N} / \mathrm{m}$. The bending stiffness of a rod
scales in the same way
$\begin{equation*} \text { bending stiffness } \propto \frac{\text { radius }^{4}}{\text { length }^{3}} \propto L \tag{2.3} \end{equation*}$
Given the above scaling relationships, the magnitude of the deformation under load
$\begin{equation*} \text { deformation } \propto \frac{\text { force }}{\text { stiffness }} \propto L \tag{2.4} \end{equation*}$
is proportional to scale, and hence the shape of deformed structures is scale invariant.
The assumption of constant density makes mass scale with volume,
$\begin{equation*} \text { mass } \propto \text { volume } \propto L^{3} \tag{2.5} \end{equation*}$
and the mass of a cubic nanometer block of density $\rho=3.5 \times 10^{3} \mathrm{~kg} / \mathrm{m}^{3}$ equals $3.5 \times 10^{-24} \mathrm{~kg}$.
The above expressions yield the scaling relationship
$\begin{equation*} \text { acceleration } \propto \frac{\text { force }}{\text { mass }} \propto L^{-1} \tag{2.6} \end{equation*}$
A cubic-nanometer mass subject to a net force equaling the above working stress applied to a square nanometer experiences an acceleration of $\sim 3 \times 10^{15} \mathrm{~m} / \mathrm{s}^{2}$.
Accelerations in nanomechanisms commonly are large by macroscopic standards, but aside from special cases (such as transient acceleration during impact and steady acceleration in a small flywheel)
they rarely approach the value just calculated. (Terrestrial gravitational accelerations and stresses usually have negligible effects on nanomechanisms.)
Modulus and density determine the acoustic speed, a scale-independent parameter [along a slim rod, the speed is $(E / \rho)^{1 / 2}$; in bulk material, somewhat higher]. The vibrational frequencies
of a mechanical system are proportional to the acoustic transit time
$\begin{equation*} \text { frequency } \propto \frac{\text { acoustic speed }}{\text { length }} \propto L^{-1} \tag{2.7} \end{equation*}$
The acoustic speed in diamond is $\sim 1.75 \times 10^{4} \mathrm{~m} / \mathrm{s}$. Some vibrational modes are more conveniently described in terms of lumped parameters of stiffness and mass,
$\begin{equation*} \text { frequency } \propto \sqrt{\frac{\text { stiffness }}{\text { mass }}} \propto L^{-1} \tag{2.8} \end{equation*}$
but the scaling relationship is the same. The stiffness and mass associated with a cubic nanometer block yield a vibrational frequency characteristic of a stiff, nanometer-scale object: $\left[(1000
\mathrm{~N} / \mathrm{m}) /\left(3.5 \times 10^{-24} \mathrm{~kg}\right)\right]^{1 / 2} \approx 1.7 \times 10^{13} \mathrm{rad} / \mathrm{s}$.
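For readers who want to reproduce the exemplar numbers, a small Python sketch of the cubic-nanometer calculations above (it simply restates the text's assumed material parameters):

```python
E = 1e12        # Young's modulus, N/m^2
rho = 3.5e3     # density, kg/m^3
L = 1e-9        # edge length of the block, m

stiffness = E * L                 # stretching stiffness of an L x L x L block, ~1000 N/m
mass = rho * L**3                 # ~3.5e-24 kg
omega = (stiffness / mass)**0.5   # characteristic angular frequency, ~1.7e13 rad/s
print(stiffness, mass, omega)
```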
Characteristic times are inversely proportional to characteristic frequencies
$\begin{equation*} \text { time } \propto \text { frequency }^{-1} \propto L \tag{2.9} \end{equation*}$
The speed of mechanical motions is constrained by strength and density. Its scaling can be derived from the above expressions
$\begin{equation*} \text { speed } \propto \text { acceleration } \cdot \text { time }=\text { constant } \tag{2.10} \end{equation*}$
A characteristic speed (only seldom exceeded in practical mechanisms) is that at which a flywheel in the form of a slim hoop is subject to the chosen working stress as a result of its mass and
centripetal acceleration. This occurs when $v=$ $(\sigma / \rho)^{1 / 2} \approx 1.7 \times 10^{3} \mathrm{~m} / \mathrm{s}$ (with the assumed $\sigma$ and $\rho$ ). Most mechanical motions
considered in this volume, however, have speeds between 0.001 and $10 \mathrm{~m} / \mathrm{s}$.
The frequencies characteristic of mechanical motions scale with transit times
$\begin{equation*} \text { frequency } \propto \frac{\text { speed }}{\text { length }} \propto L^{-1} \tag{2.11} \end{equation*}$
These frequencies scale in the same manner as vibrational frequencies, hence the assumption of constant stress leaves frequency ratios as scale invariants. At the above characteristic speed, crossing
a $1 \mathrm{~nm}$ distance takes $\sim 6 \times 10^{-13} \mathrm{~s}$; the large speed makes this shorter than the motion times anticipated in typical nanomechanisms. A modest $1 \mathrm{~m} / \
mathrm{s}$ speed, however, still yields a transit time of only $1 \mathrm{~ns}$, indicating that nanomechanisms can operate at frequencies typical of modern micron-scale electronic devices.
The above expressions yield relationships for the scaling of mechanical power
$\begin{equation*} \text { power } \propto \text { force } \cdot \text { speed } \propto L^{2} \tag{2.12} \end{equation*}$
and mechanical power density
$\begin{equation*} \text { power density } \propto \frac{\text { power }}{\text { volume }} \propto L^{-1} \tag{2.13} \end{equation*}$
A $10 \mathrm{nN}$ force and a $1 \mathrm{~nm}^{3}$ volume yield a power of $17 \mu \mathrm{W}$ and a power density of $1.7 \times 10^{22} \mathrm{~W} / \mathrm{m}^{3}$ (at a speed of $1.7 \times 10^
{3} \mathrm{~m} / \mathrm{s}$ ) or $10 \mathrm{nW}$ and $10^{19} \mathrm{~W} / \mathrm{m}^{3}$ (at a speed of $1 \mathrm{~m} / \mathrm{s}$ ). The combination of strong materials and small devices
promises mechanical systems of extraordinarily high power density, even at low speeds (an example of a mechanical power density is the power transmitted by a gear divided by its volume).
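The power figures can be checked the same way (a sketch using the force and speeds assumed in the text):

```python
force = 10e-9        # N  (10 nN across ~1 nm^2 at the working stress)
volume = 1e-27       # m^3 (one cubic nanometer)

for speed in (1.7e3, 1.0):              # m/s
    power = force * speed               # W
    print(speed, power, power / volume) # ~17 uW and 1.7e22 W/m^3; 10 nW and 1e19 W/m^3
```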
Most mechanical systems use bearings to support moving parts. Macromechanical systems frequently use liquid lubricants, but (as noted by Feynman, 1961), this poses problems on a small scale. The
above scaling law ordinarily holds speeds and stresses constant, but reducing the thickness of the lubricant layer increases shear rates and hence viscous shear stresses:
$\begin{equation*} \text { viscous stress at constant speed } \propto \text { shear rate } \propto \frac{\text { speed }}{\text { thickness }} \propto L^{-1} \tag{2.14} \end{equation*}$
In Newtonian fluids, shear stress is proportional to shear rate. Molecular simulations indicate that liquids can remain nearly Newtonian at shear rates in excess of $100 \mathrm{~m} / \mathrm{s}$
across a $1 \mathrm{~nm}$ layer (e.g., in the calculations of Ashurst and Hoover, 1975), but they depart from bulk viscosity (or even from liquid behavior) when film thicknesses are less than 10
molecular diameters (Israelachvili, 1992; Schoen et al., 1989), owing to interface-induced alterations in liquid structure. Feynman suggested the use of low-viscosity lubricants (such as kerosene)
for micromechanisms (Feynman, 1961); from the perspective of a typical nanomechanism, however, kerosene is better regarded as a collection of bulky molecular objects than as a liquid. If one
nonetheless applies the classical approximation to a $1 \mathrm{~nm}$ film of low-viscosity fluid $\left(\eta=10^{-3} \mathrm{~N} \cdot \mathrm{s} / \mathrm{m}^{2}\right)$, the viscous shear stress
at a speed of $1.7 \times 10^{3} \mathrm{~m} / \mathrm{s}$ is $1.7 \times 10^{9} \mathrm{~N} / \mathrm{m}^{2}$; the shear stress at a speed of $1 \mathrm{~m} / \mathrm{s}, 10^{6} \mathrm{~N} / \
mathrm{m}^{2}$, is still large, dissipating energy at a rate of $1 \mathrm{MW} / \mathrm{m}^{2}$.
The problems of liquid lubrication motivate consideration of dry bearings (as suggested by Feynman, 1961). Assuming a constant coefficient of friction,
$\begin{equation*} \text { frictional force } \propto \text { force } \propto L^{2} \tag{2.15} \end{equation*}$
and both stresses and speeds are once again scale-independent. The frictional power,
$\begin{equation*} \text { frictional power } \propto \text { force } \cdot \text { speed } \propto L^{2} \tag{2.16} \end{equation*}$
is proportional to the total power, implying scale-independent mechanical efficiencies. In light of engineering experience, however, the use of dry bearings would seem to present problems (as it has
in silicon micromachine research). Without lubrication, efficiencies may be low, and static friction often causes jamming and vibration.
A yet more serious problem for unlubricated systems would seem to be wear. Assuming constant interfacial stresses and speeds (as implied by the above scaling relationships), the anticipated surface
erosion rate is independent of scale. Assuming that wear life is determined by the time required to produce a certain fractional change in shape,
$\begin{equation*} \text { wear life } \propto \frac{\text { thickness }}{\text { erosion rate }} \propto L \tag{2.17} \end{equation*}$
and a centimeter-scale part having a ten-year lifetime would be expected to have a $30 \mathrm{~s}$ lifetime if scaled to nanometer dimensions.
Design and analysis have shown, however, that dry bearings with atomically precise surfaces need not suffer these problems. As shown in Chapters 6, 7, and 10 , dynamic friction can be low, and both
static friction and wear can be made negligible. The scaling laws applicable to such bearings are compatible with the constant-stress, constant-speed expressions derived previously.
2.3.3. Major corrections
The above scaling relationships treat matter as a continuum with bulk values of strength, modulus, and so forth. They readily yield results for the behavior of iron bars scaled to a length of $10^
{-12} \mathrm{~m}$, although such results are meaningless because a single atom of iron is over $10^{-10} \mathrm{~m}$ in diameter. They also neglect the influence of surfaces on mechanical
properties (Section 9.4), and give (at best) crude estimates regarding small components, in which some dimensions may be only one or a few atomic diameters.
Aside from the molecular structure of matter, major corrections to the results suggested by these scaling laws include uncertainties in position and velocity resulting from statistical and quantum
mechanics (examined in detail in Chapter 5). Thermal excitation superimposes random velocities on those intended by the designer. These random velocities depend on scale, such that
$\begin{equation*} \text { thermal speed } \propto \sqrt{\frac{\text { thermal energy }}{\text { mass }}} \propto L^{-3 / 2} \tag{2.18} \end{equation*}$
where the thermal energy measures the characteristic energy in a single degree of freedom, not in the object as a whole. For $\rho=3.5 \times 10^{3} \mathrm{~kg} / \mathrm{m}^{3}$, the mean thermal
speed of a cubic nanometer object at $300 \mathrm{~K}$ is $\sim 55 \mathrm{~m} / \mathrm{s}$. Random thermal velocities (commonly occurring in vibrational modes) often exceed the velocities imposed
by planned operations, and cannot be ignored in analyzing nanomechanical systems.
Quantum mechanical uncertainties in position and momentum are parallel to statistical mechanical uncertainties in their effects on nanomechanical systems. The importance of quantum mechanical effects
in vibrating systems depends on the ratio of the characteristic quantum energy ( $\hbar \omega$, the quantum of vibrational energy in a harmonic oscillator of angular frequency $\omega$ ) and the
characteristic thermal energy ( $k T$, the mean energy of a thermally excited harmonic oscillator at a temperature $T$, if $k T \gg \hbar \omega$ ). The ratio $\hbar \omega / k T$ varies directly
with the frequency of vibration, that is, as $L^{-1}$. An object of cubic nanometer size with $\omega=1.7 \times 10^{13} \mathrm{rad} / \mathrm{s}$ has $\hbar \omega / k T_{300} \approx 0.4\left(T_
{300}=300 \mathrm{~K} ; k T_{300} \approx 4.14 \mathrm{maJ}\right)$. The associated quantum mechanical effects (e.g., on positional uncertainty) are smaller than the classical thermal effects, but
still significant (see Figure 5.2).
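As a quick numerical check on these magnitudes, here is a short Python sketch (the density, temperature, and vibrational frequency are the values quoted above; using the Maxwell-Boltzmann mean speed sqrt(8kT/(pi*m)) is an assumption about which average the ~55 m/s figure refers to):

import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s
T = 300.0                # temperature, K

# 1 nm^3 object with density 3.5e3 kg/m^3 (value used in the text)
m = 3.5e3 * (1e-9)**3                              # mass in kg
v_mean = math.sqrt(8 * k_B * T / (math.pi * m))    # Maxwell-Boltzmann mean speed (assumed convention)
print(f"mean thermal speed ~ {v_mean:.0f} m/s")    # ~55 m/s

# ratio of quantum to thermal energy for omega = 1.7e13 rad/s
omega = 1.7e13
print(f"hbar*omega / kT = {hbar * omega / (k_B * T):.2f}")   # ~0.4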
2.4. Scaling of classical electromagnetic systems
2.4.1. Basic assumptions
In considering the scaling of electromagnetic systems, it is convenient to assume that electrostatic field strengths (and hence electrostatic stresses) are independent of scale. With this assumption,
the above constant-stress, constant-speed scaling laws for mechanical systems continue to hold for electromechanical systems, so long as magnetic forces are neglected. The onset of strong
field-emission currents from conductors limits the electrostatic field strength permissible at the negative electrode of a nanoscale system; values of $\sim 10^{9} \mathrm{~V} / \mathrm{m}$ can
readily be tolerated (Section 11.6.2).
2.4.2. Major corrections
Chapter 11 describes several nanometer scale electromechanical systems, requiring consideration of the electrical conductivity of fine wires and of insulating layers thin enough to make tunneling a
significant mechanism of electron transport. These phenomena are sometimes (within an expanding range of conditions) understood well enough to permit design calculations.
Corrections to classical continuum models are more important in electromagnetic systems than in mechanical systems: quantum effects, for example, become dominant and at small scales can render
classical continuum models useless even as crude approximations. Electromagnetic systems on a nanometer scale commonly have extremely high frequencies, yielding large values of $\hbar \omega / k T_
{300}$. Molecules undergoing electronic transitions typically absorb and emit light in the visible to ultraviolet range, rather than the infrared range characteristic of thermal excitation at room
temperature. The mass of an electron is less than $10^{-3}$ that of the lightest atom, hence for comparable confining energy barriers, electron wave functions are more diffuse and permit
longer-range tunneling. At high frequencies, the inertial effects of electron mass become significant, but these are neglected in the usual macroscopic expressions for electrical circuits.
Accordingly, many of the following classical continuum scaling relationships fail in nanoscale systems. The assumption of scale-independent electrostatic field strengths itself fails in the opposite
direction, when scaling up from the nanoscale to the macroscale: the resulting large voltages introduce additional modes of electrical breakdown. In small structures, the discrete size of the
electronic charge unit, $\sim 1.6 \times 10^{-19} \mathrm{C}$, disrupts the smooth scaling of classical electrostatic relationships (Section 11.7.2c).
2.4.3. Magnitudes and scaling: steady-state systems
Given a scale-invariant electrostatic field strength,
$\begin{equation*} \text { voltage } \propto \text { electrostatic field } \cdot \text { length } \propto L \tag{2.19} \end{equation*}$
At a field strength of $10^{9} \mathrm{~V} / \mathrm{m}$, a one nanometer distance yields a $1 \mathrm{~V}$ potential difference. A scale-invariant field strength implies a force proportional to
$\begin{equation*} \text { electrostatic force } \propto \text { area } \cdot(\text { electrostatic field })^{2} \propto L^{2} \tag{2.20} \end{equation*}$
and a $1 \mathrm{~V} / \mathrm{nm}$ field between two charged surfaces yields an electrostatic force of $\sim 0.0044 \mathrm{nN} / \mathrm{nm}^{2}$.
Assuming a constant resistivity,
$\begin{equation*} \text { resistance } \propto \frac{\text { length }}{\text { area }} \propto L^{-1} \tag{2.21} \end{equation*}$
and a cubic nanometer block with the resistivity of copper would have a resistance of $\sim 17 \Omega$. This yields an expression for the scaling of currents,
$\begin{equation*} \text { ohmic current } \propto \frac{\text { voltage }}{\text { resistance }} \propto L^{2} \tag{2.22} \end{equation*}$
which leaves current density constant. In present microelectronics work, current densities in aluminum interconnections are limited to $<10^{10} \mathrm{~A} / \mathrm{m}^{2}$ or less by
electromigration, which redistributes metal atoms and eventually interrupts circuit continuity (Mead and Conway, 1980). This current density equals $10 \mathrm{nA} / \mathrm{nm}^{2}$ (as discussed in
Section 11.6.1b, however, present electromigration limits are unlikely to apply to well-designed eutactic conductors).
For field emission into free space, current density depends on surface properties and the electrostatic field intensity, hence
$\begin{equation*} \text { field emission current } \propto \text { area } \propto L^{2} \tag{2.23} \end{equation*}$
and field emission currents scale with ohmic currents. Where surfaces are close enough together for tunneling to occur from conductor to conductor, rather than from conductor to free space, this
scaling relationship breaks down.
With constant field strength, electrostatic energy scales with volume:
$\begin{equation*} \text { electrostatic energy } \propto \text { volume } \cdot(\text { electrostatic field })^{2} \propto L^{3} \tag{2.24} \end{equation*}$
A field with a strength of $10^{9} \mathrm{~V} / \mathrm{m}$ has an energy density of $\sim 4.4 \mathrm{maJ}$ per cubic nanometer $\left(\approx k T_{300}\right)$.
Scaling of capacitance follows from the above,
$\begin{equation*} \text { capacitance } \propto \frac{\text { electrostatic energy }}{(\text { voltage })^{2}} \propto L \tag{2.25} \end{equation*}$
and is independent of assumptions regarding field strength. The calculated capacitance per square nanometer of a vacuum capacitor with parallel plates separated by $1 \mathrm{~nm}$ is $\sim 9 \times
10^{-21} \mathrm{~F}$; note, however, that electron tunneling causes substantial conduction through an insulating layer this thin.
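The representative numbers quoted in this subsection can be reproduced with a few lines of Python (a sketch for illustration; the force-per-area and energy-density expressions use the standard electrostatic stress eps0*E^2/2, and the copper resistivity of 1.7e-8 ohm*m is a handbook value, not given in the text):

eps0 = 8.854e-12     # vacuum permittivity, F/m
E = 1e9              # field strength, V/m (1 V across 1 nm)

# electrostatic stress on a charged surface: eps0 * E^2 / 2
stress = 0.5 * eps0 * E**2                      # N/m^2
print(stress * 1e-18 * 1e9, "nN/nm^2")          # ~0.0044 nN/nm^2

# energy density of the field (same expression), converted to maJ per nm^3
print(stress * 1e-27 / 1e-21, "maJ/nm^3")       # ~4.4 maJ/nm^3 (~kT at 300 K)

# resistance of a 1 nm cube with the resistivity of copper (assumed handbook value)
rho = 1.7e-8                                    # ohm*m
print(rho * 1e-9 / (1e-9)**2, "ohm")            # ~17 ohm

# parallel-plate capacitance: 1 nm^2 plates, 1 nm vacuum gap
print(eps0 * 1e-18 / 1e-9, "F")                 # ~9e-21 F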
In electromechanical systems dominated by electrostatic forces,
$\begin{equation*} \text { electrostatic power } \propto \text { electrostatic force } \cdot \text { speed } \propto L^{2} \tag{2.26} \end{equation*}$
$\begin{equation*} \text { electrostatic power density } \propto \frac{\text { electrostatic power }}{\text { volume }} \propto L^{-1} \tag{2.27} \end{equation*}$ | {"url":"https://nanosyste.ms/classical_magnitudes_and_scaling_laws/","timestamp":"2024-11-07T17:18:31Z","content_type":"text/html","content_length":"365602","record_id":"<urn:uuid:6ae0d77b-9251-4d91-92c6-54d22a9b4af7>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00225.warc.gz"}
2D NavMesh PathFinding
I am working on a 2D game, and my 2D project needs pathfinding. I searched the official manual and found that Unity's NavMesh pathfinding supports 3D games only… there must be a mesh renderer. It can't
work in 2D because there is no mesh renderer, unless you add a MeshRenderer component to the GameObject, but that conflicts with the 2D SpriteRenderer…
So… I use the vertex coordinates of the Collider2D attached to the GameObject to create a mesh, e.g.:
private MeshTransform CreateEdge2dMesh (EdgeCollider2D e2d)
{
    if (e2d == null) {
        return null;
    }
    Vector2[] points = e2d.points;
    Vector2 offset = e2d.offset;
    Bounds bounds = e2d.bounds;
    Vector3 cv = new Vector3 (bounds.center.x, bounds.center.y, 0);
    Vector3[] vertices = new Vector3[points.Length + 1];
    Vector2[] uvs = new Vector2[vertices.Length];
    int[] triangles = new int[(points.Length - 1) * TRIANGLES_COUNT];
    // last vertex is the centre of the collider bounds; the edge points fan around it
    vertices [points.Length] = cv;
    uvs [points.Length] = Vector2.zero;
    for (int i = 0; i < points.Length; i++) {
        Vector2 pt = e2d.transform.TransformPoint (points [i]);
        vertices [i] = new Vector3 (pt.x + offset.x, pt.y + offset.y, 0);
        uvs [i] = Vector2.zero;
        if (i < points.Length - 1) {
            // each consecutive pair of edge points forms a triangle with the centre vertex
            triangles [i * TRIANGLES_COUNT] = i;
            triangles [(i * TRIANGLES_COUNT) + 1] = i + 1;
            triangles [(i * TRIANGLES_COUNT) + 2] = points.Length;
        }
    }
    Mesh mesh = new Mesh ();
    mesh.vertices = vertices;
    mesh.triangles = triangles;
    mesh.uv = uvs;
    mesh.RecalculateNormals ();
    MeshTransform mt = new MeshTransform ();
    mt.mesh = mesh;
    mt.transform = e2d.transform;
    return mt;
}
Then I call the function eg:
public void AddMesh (MeshTransform mt)
{
    if (mt == null) {
        return;   // early return assumed for the null check
    }
    NavMeshBuildSource nmbs = new NavMeshBuildSource ();
    nmbs.shape = NavMeshBuildSourceShape.Mesh;
    nmbs.transform = mt.transform.localToWorldMatrix;
    nmbs.area = 0;
    nmbs.sourceObject = mt.mesh;
    m_NavMeshSourceList.Add (nmbs);
}
and call NavMeshBuilder.UpdateNavMeshData…
I add the NavMeshAgent to my player, but it doesn’t work.
Console print:
Failed to create agent because it is not close enough to the NavMesh
I think the NavMesh pathfinding system needs the mesh vertex information to create or bake the walkable/unwalkable area. I create a mesh whose shape exactly matches the collider, and use this to
create/bake the walkable area…
Did I go about it the wrong way?..
1 Like
You can’t use navmesh in 2d right now. If you absolutely cannot use a grid for A*, check this out:
1 Like
Thank you. I think the navmesh needs the mesh vertex information to bake the walkable area, so I create a mesh and send the mesh information to Unity's navmesh system. No matter what Unity's pathfinding
algorithm is, it needs something like a mesh to generate a node map or some other helper structure; the algorithm just calculates the walk path data and returns it based on the mesh information we provide… But
I found it can't work… Is there something wrong, or has the thread gone down the wrong path?
Apparently the problem with navmesh in 2d is the coordinates. 2d uses x,y and navmesh uses x,z. People with a lot of time on their hands have figured out how to switch the coordinates around. Check
this out:
1 Like
PolyNav is another asset with strong reviews:
1 Like
I am sorry but this is completely incorrect. NavMesh can be used in 2D since 2017. Use a NavMeshSurface component, and you'll be able to create a navmesh on anything.
Have you gotten this to work? I don’t think it works with sprites due to coordinate mismatch
1 Like
Apparently, it's theoretically possible for this to work; however, it's not really feasible. You would need to rebuild your scene in 3D and rotate all game objects to face XZ instead of XY. You would also
need 3D colliders instead of 2D. I've run some tests and with some tinkering it may be possible, but I strongly recommend using A*!!
3 Likes
thx you so mush, you best! I will try A*.
I dont think it work… I have tried…
Works fine for me?
What did you do to set it up?
Just to confirm before sinking in hours upon hours trying to do the impossible.
You made navmesh pathfinding work just on sprites in a 2D unity project (started as a 2D project)?
1 Like
Not without some workarounds. From my experience I had to do this:
• As mentioned, you need to get the navmeshplane component from unity’s git. This will make it possible to bake on any plane, no matter the orientation. So we can use XY plane.
• 2D Colliders doesn’t bake by default. I found a script from a blog that made it possible on runtime. If you want to bake in editor I would need to use a 3D collider.
• The navmesh agent would rotate itself to ground itself to the plane; this would be a problem because it would rotate the sprite/image along with it. The workaround for this was to just put the agent
component on a child object and drive it from a script. You set it to not move or rotate, but still give it a path - you can then read desiredVelocity and use those values to move the 2D sprite yourself (a rough sketch is shown below).
I think that solved all the problems. But it’s a lot of workaround, so you would need to get used to setting your objects up in this way.
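Here is a rough, hypothetical sketch of that last workaround (this is not code from the original post; the component and field names are made up, and the exact axis mapping depends on how your NavMeshSurface is oriented):

using UnityEngine;
using UnityEngine.AI;

// Hypothetical example: the NavMeshAgent lives on a child object and only plans paths;
// this script moves the 2D sprite itself using the agent's desiredVelocity.
public class SpriteAgentDriver2D : MonoBehaviour
{
    public NavMeshAgent agent;   // agent component on the child object
    public Transform target;     // where we want to go

    void Start()
    {
        agent.updatePosition = false;   // the agent must not move anything itself
        agent.updateRotation = false;   // ...or rotate it
        // In recent Unity versions, agent.updateUpAxis = false; may also be needed for 2D.
    }

    void Update()
    {
        agent.SetDestination(target.position);
        Vector3 v = agent.desiredVelocity;
        // Apply the planned velocity in the 2D (XY) plane.
        transform.position += new Vector3(v.x, v.y, 0f) * Time.deltaTime;
        // Keep the agent's internal position in sync with the sprite.
        agent.nextPosition = transform.position;
    }
}

One caveat: depending on how the surface is rotated, the planned velocity may come back with a z rather than a y component, so the axis mapping may need adjusting for your setup.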
Hope that helped. Just saying it’s possible.
If anyone has an easier way I would love to hear about it!
2 Likes
I'm here to confirm that NavMesh totally works in 2D. I implemented NavMeshSurface2d for a tilemap in a top-down shooter as a proof of concept.
You need:
1. https://docs.unity3d.com/Manual/class-NavMeshSurface.html just because it has base implementation
2. An empty object rotated relative to the Tilemap (90;0;0), with a NavMeshSurface
3. Implement source collector for tiles, because NavMeshBuilder.CollectSources will not work
4. Use X and Z axis for NavMeshBuildSource()
so with something like this
var src = new NavMeshBuildSource();
src.transform = Matrix4x4.Translate(tilemap.GetCellCenterWorld(vec3int));
src.shape = NavMeshBuildSourceShape.Box;
src.size = Vector3.one;
You will able to bake tilemap. With NavMesh API you can implement any complex shapes you want.
Also totally check this “Runtime NavMesh Generation”
2 Likes
here some POC code, if somebody interested.
It will generate NavMesh from first TileMap with TilemapColider2d.
3795436–318604–NavMeshSurface2d.cs (19.2 KB)
3851014–325753–NavMeshSurfaceEditor2d.cs (19.8 KB)
3 Likes
Wish I found this before I implemented my own A* pathing into my 2D game, though I guess now I have something more efficient and optimisable.
Could you please help me out with a basic Unity project where you set these up? I'm stuck with it. I have a simple project, but I'm generating the Tilemap at runtime. To be honest I don't know where
to put the 90,0,0 rotated empty game object, but I've added a NavMeshSurface2d component to the Tilemap and 'Collect Objects' is set to Grid. 'Use Geometry' is set to Render Meshes, but I also tried
Physics Colliders and it doesn't change a thing.
Sorry I’m completely lost. I’m a bit new to Unity but otherwise an experienced programmer.
This is kind of a proof of concept for top-down games, so it is not production code. In any case, you will need to learn it yourself.
But here some hints:
1. If you are generating the Tilemap at runtime, NavMeshSurface2d.BuildNavMesh() should be called after the generation is done (check the videos I linked).
2. An empty GameObject should be placed in the Scene root, with the NavMeshSurface2d component added and configured. Right-click its Transform and enter rotation x:90 y:0 z:0.
3. In NavMeshSurface2d, set Collect Objects to Grid.
So the points of code you are interested are “CollectObjects2d.Grid” (enum that I added)
First - world bounds calculation:
L:351 Bounds CalculateWorldBounds(List sources)
Second - collect source objects, that are tiles:
L:241 List CollectSources()
My implementation looks for the TilemapCollider2D component and iterates through its tiles, marking them as unwalkable. That's all, folks.
You can proxy it with a NavMeshAgent, or use the static CalculatePath methods. Just be sure that the agent touches the navmesh, or it will throw an exception saying the agent is too far away.
after couple of updates it does not work)
After couple of updates code above does not work)
So here is an update, it has better production value, but still its just a POC. Maybe I will put in on GitHub
I updated bounds calculation
if (m_CollectObjects == CollectObjects2d.Grid)
{
    var grid = FindObjectOfType<Grid>();
    var colider = grid.GetComponentInChildren<CompositeCollider2D>();
    var bounds = GetWorldBounds(worldToLocal, colider.bounds);
    return bounds;
}
And Source Collection, that will substract unwalkable areas
It is mandatory to have a CompositeCollider2D, as it is used to get the bounds.
Create root object, rotate x 90, in navmesh2d select “Grid” and “Unwalkable”.
3851014–325753–NavMeshSurfaceEditor2d.cs (19.8 KB)
3851014–325756–NavMeshSurface2d.cs (19.8 KB)
1 Like | {"url":"https://discussions.unity.com/t/2d-navmesh-pathfinding/682119","timestamp":"2024-11-05T01:50:17Z","content_type":"text/html","content_length":"74517","record_id":"<urn:uuid:d2a49f2b-056d-41bc-9d85-89f1757516a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00079.warc.gz"} |
Balayya Wikipedia
Under Controversies section...the below content exists,
In 2024, he violently pushed the actress Anjali publicly on a stage for not being able to properly listen to him amidst the crowd. He faced huge public backlash for this.^[
That is completely one-sided and we all know the facts behind this. Someone went and added it right away. Someone please delete this if they have access to do it.
If it isn't deleted, it will just stay there permanently.
When you are in the movie industry and in politics, you have to pay attention to things like this… otherwise I don't know why they even have PR.
They couldn't even get the IMDB profile photo changed..
Bro! We Don't Care.... Jai Balayya...
27 minutes ago, uma said:
Bro! We Don't Care.... Jai Balayya...
That just amounts to incompetence……
48 minutes ago, uma said:
Bro! We Don't Care.... Jai Balayya...
Those will stay there permanently.... that's why it should be deleted.
24 minutes ago, sskmaestro said:
That just amounts to incompetence……
Let those who laugh, laugh... let those who cry, cry.... don't care.... Bro!... our aim and target are different... if we have to respond to every road side dwag like this... we don't have that much time... and there's no need.... we don't care...
5 minutes ago, Hello26 said:
Those will stay there permanently.... that's why it should be deleted.
no uncle.....everyone can edit it.... at any time...
2 minutes ago, uma said:
Let those who laugh, laugh... let those who cry, cry.... don't care.... Bro!... our aim and target are different... if we have to respond to every road side dwag like this... we don't have that much time... and there's no need.... we don't care...
A batch that can't even control a road side dwag, how is it going to control Airavata, bro? What a comedy!
19 minutes ago, sskmaestro said:
A batch that can't even control a road side dwag, how is it going to control Airavata, bro? What a comedy!
aim big...ignore small uncle...
1 minute ago, uma said:
aim big...ignore small uncle...
Someone who can't do the small things, what big things is he going to do, bro?
Add that he brutally tore into ex CM YS Jagan with his words and called him a Psycho.
Write that he called PM Modi 'Makhi Choos', 'Kojjjaa' and 'Sikhandi' in 2019.
At least that would carry some weight; instead, what is this cheap line about him pushing a heroine?
Only if strong persons and words like these are included will people know what his range really is.
2 hours ago, Hello26 said:
Under Controversies section...the below content exists,
In 2024, he violently pushed the actress Anjali publicly on a stage for not being able to properly listen to him amidst the crowd. He faced huge public backlash for this.^[
That is completely one-sided and we all know the facts behind this. Someone went and added it right away. Someone please delete this if they have access to do it.
If it isn't deleted, it will just stay there permanently.
this cannot be changed as it has a source - NDTV. Even if it is reviewed, they will retain the original sentence as it has a source.
1 hour ago, uma said:
no uncle.....everyone can edit it.... at any time...
53 minutes ago, suravaram said:
this cannot be changed as it has a source - NDTV. Even if it is reviewed, they will retain the original sentence as it has a source.
1 hour ago, Hello26 said:
anyone with an account (a free one) can edit. But it will be reviewed. The reviewers will read the source and verify. If there is no source, it is easy to edit/delete.
Why do we really care?
Will people in Hindupur cast their votes after looking at Wikipedia?
Will they make the next movie a hit after reading Wikipedia?
All of this is just a waste of time.
That Siro guy groped a singer, and there is a video source for that too; go put that on his wiki.
This topic is now archived and is closed to further replies. | {"url":"https://www.nandamurifans.com/forum/index.php?/topic/474455-balayya-wikipedia/","timestamp":"2024-11-06T17:30:30Z","content_type":"text/html","content_length":"251823","record_id":"<urn:uuid:cf50d62e-beb9-46f4-bac1-cbb085c235aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00340.warc.gz"} |
Storage Performance Basics for Deep Learning | NVIDIA Technical Blog
When production systems are not delivering expected levels of performance, it can be a challenging and time-consuming task to root-cause the issue(s). Especially in today’s complex environments,
where the workload is comprised of many software components, libraries, etc, and rely on virtually all of the underlying hardware subsystems (CPU, memory, disk IO, network IO) to deliver maximum
In the last several years we have seen a huge upsurge in a relatively new type of workload that is evolving rapidly and becoming a key component to business-critical computing – Artificial
Intelligence (AI) derived from deep learning (DL). NVIDIA GPU technology – the technology of choice for running computationally-intensive Deep Learning workloads across virtually all vertical market
segments. The software ecosystem, built on top of NVIDIA GPUs and NVIDIA’s CUDA architecture, is experiencing unprecedented growth, driving a steady increase of deep learning in the enterprise data
center deployments.
The complexity of the workloads plus the volume of data required to feed deep-learning training creates a challenging performance environment. Deep learning workloads cut across a broad array of data
sources (images, binary data, etc), imposing different disk IO load attributes, depending on the model and a myriad of parameters and variables. Minimizing potential stalls while pulling data from
storage becomes essential to maximizing throughput. Especially in GPU-driven environments running DL jobs, where AI derived from batch training workloads is processed to drive realtime decision
making, ensuring a steady flow of data from the storage subsystem into the GPU jobs is essential for enabling optimal and timely results.
Given the complexity of these environments, collecting baseline performance data before rolling into production, verifying that the core system — hardware components and operating system — can
deliver expected performance under synthetic loads, is essential. Microbenchmarking uses tools specifically designed to generate loads on a particular subsystem, such as storage. In this blog post,
we use a disk IO load generation tool to measure and evaluate disk IO performance.
Tools of The Trade
Fortunately, when it comes to tools and utilities for microbenchmarking, much of the development work has already been done. Gone are the days when you need to roll up your sleeves and start coding
up a synthetic workload generator. There is almost certainly something already available to meet your requirements.
• iperf is the tool of choice for verifying network IO performance.
• fio has emerged as the go-to tool for generating a storage workload on Linux.
• Vdbench is also an extremely powerful and flexible storage load generator.
Both fio and vdbench work well. Both facilitate the creation of a run file that has a predefined syntax for describing the workload you wish to generate, including the target devices, mountpoints,
etc. Let’s take a look at a few microbenchmarking experiments using fio. Bundled Linux utilities (iostat(1)), combined with the data generated by either fio or vdbench, are well-suited for
determining whether or not your storage meets performance expectations.
What to Expect
Before discussing methods and results, let’s align on key terms directly related to all performance discussions.
• Bandwidth – how big. Theoretical maximum throughput, typically expressed as bytes per second, e.g. MB/sec, GB/sec, etc. Bigger numbers are better.
• Throughput – how much. How much data is really moving, also expressed as bytes per second. The larger the number, the better.
• OPS – how many. Operations per second, device dependent. For disks, reads per second, writes per second, typically expressed as IOPS (IO operations per second). Again, bigger is better.
• Latency – how long. Time to complete an operation, such as a disk read or write. Expressed as a unit of time, typically in the millisecond range (ms) for spinning disks, microsecond range (us)
for SSD’s. With latency, smaller is better.
Determining what to expect requires assigning values to some of the terms above. The good news is that, when it comes to things like storage subsystems and networks, we can pretty much just do the
math. Starting at the bottom layer, the disks, it’s very easy to get the specs on a given disk drive.
For example, looking at the target system configuration:
• We typically see 500MB/sec or so of sequential throughput and about 80k-90k of small, random IOPS (IO operations per second) with modern Solid State Disks (SSD). Aggregate the numbers based on
the number of installed disks.
□ NVMe SSD’s provide substantially higher IOPS and throughput than SATA drives
• If we’re going to put a bunch of those in a box, we need only determine how that box connects to our system.
□ If it’s DAS (Direct Attached Storage), what is the connection path – eSATA, PCIe expansion, etc?
□ If it’s NAS (Network Attached Storage), we need to now factor in the throughput of the network – is the system using a single 10Gb link, or multiple links bonded together, etc.
Looking at the system architecture and understanding what the capabilities are of the hardware helps inform us regarding potential performance expectations and issues. Let’s take a look at some
concrete examples.
A Single SSD
We begin with the simplest possible example, a single SSD installed in an Nvidia DGX Station. First, we need to determine what the correct device name is under Linux:
# lsscsi
[2:0:0:0] disk ATA Mobius H/W RAID0 0962 /dev/sda
[3:0:0:0] disk ATA Samsung SSD 850 3B6Q /dev/sdb
[4:0:0:0] disk ATA SAMSUNG MZ7LM1T9 204Q /dev/sdc
[5:0:0:0] disk ATA SAMSUNG MZ7LM1T9 204Q /dev/sdd
[6:0:0:0] disk ATA SAMSUNG MZ7LM1T9 204Q /dev/sde
[7:0:0:0] disk ATA SAMSUNG MZ7LM1T9 204Q /dev/sdf
We’re specifically looking at the Samsung 850 EVO, device /dev/sdb (/dev/sda is an eSATA connected desktop RAID box that we will test later). The other 4 disks are the 4 internal SSD’s that ship
with the DGX Station. As we have disk model information, it’s a simple matter to look up the specs for the device. Samsung rates the 850 EVO at roughly 98k random reads/sec, 90k random writes, and a
bit over 500MB/sec for large, sequential IO. The second thing to check is the speed of the SATA link. This requires searching around a bit in the output of dmesg(1) and locating the link
initialization information:
[1972396.275689] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[1972396.275910] ata4.00: supports DRM functions and may not be fully accessible
[1972396.276209] ata4.00: disabling queued TRIM support
[1972396.276211] ata4.00: ATA-9: Samsung SSD 850 EVO 1TB, EMT03B6Q, max UDMA/133
[1972396.276213] ata4.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 31/32), AA
Similar information can be derived by reading various subdirectories in /sys/class/. We see the drive on ata4, configured for 6.0 Gbps, or about 600MB/sec. So now we have an expectation for
sequential throughput – the drive spec indicates 510-540MB/sec, and the link is sufficient to enable achieving full drive throughput.
Let’s fio! We’ll create a basic fio run file which generates simple large sequential reads:
# cat seq_r.fio
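A minimal seq_r.fio consistent with the job name and options echoed in the fio output below (targeting the /dev/sdb SSD identified earlier; any size or runtime options are omitted here) would look like:

[seq-read]
filename=/dev/sdb
rw=read
bs=1M
ioengine=psync
iodepth=1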
Here’s a snippet from the output of fio during the run in which we observe throughput (519.0MB) and IOPS (519/0/0 iops) during the fio execution:
# fio -f seq_r.fio
seq-read: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=psync, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [R(1)] [7.2% done] [519.0MB/0KB/0KB /s] [519/0/0 iops] [eta 02m:47s]
We recommend that you monitor the output of iostat(1) in conjunction with the output of whatever disk IO load generator tool is being used. This allows us to validate the various metrics provided by
the load generator tool as well as ensure that we’re seeing disk IO and not cache IO. Iostat captures data at the Linux kernel block layer and thus reflects actual physical disk IO.
We suggest using the command line ‘iostat -cxzk 1’. Let’s look at a sample line from the output:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 0.00 2077.00 0.00 531712.00 0.00 512.00 1.79 0.86 0.86 0.00 0.48 100.00
We see very quickly that the SSD under test indeed delivers the expected throughput, at about 519MB/sec, as reported by fio. A couple of bits of iostat(1) data pique our interest:
• Difference in reported throughput: 519MB/sec (fio) versus 532MB/sec (iostat – the 531712.00 value under the rkB/s heading). This 13MB/sec variance represents only a 2.5% difference so we don’t
consider this an issue. The difference occurs because the fio test workload calculates throughput in the user context of the running process whereas iostat captures results several layers down in
the kernel.
• The reported IOPS (IO operations per second) rates look more interesting. Fio reports 519 IOPS as opposed to 2077 for iostat (r/s – reads-per-second value). This is again attributable to
differences in kernel versus user-space metrics. Fio issues 1MB reads, which apparently get decomposed somewhere down the code path into smaller IOs, which are actually what gets sent down to the
□ The avgrq-sz value is 512; this is the average size, in sectors, of the IOs being issued to the device.
□ Since a disk sector is 512 bytes, 512 x 512 = 256k. Therefore each 1MB read issued by fio is being decomposed into four 256k reads at the block layer.
□ The reads-per-second reported by iostat (2077) is roughly 4X the IOPS reported by fio (519). We will get back to this in just a bit.
We also observed expected results of about 515MB/sec for the sequential mixed read/write results, which we choose not to show due to space constraints. The bottom line: we’re getting expected
throughput with large sequential reads and writes to/from this SSD.
Sequential throughout is just one performance attribute. Random IO performance, measured as IOPS (IO operations per second) is also extremely important. And it’s not just the actual IOPS rate, but
observed latency as well, as reported in the iostat(1) r_await column (for reads, w_await for writes).
Let’s tweak the fio run file to specify random IO and a much smaller IO size – 4k in this case for the IOPS test. Here is the random read fio run file:
# cat rand_r.fio
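Again, a minimal version matching the parameters echoed in the fio output below (4k random reads, psync, iodepth 1, against /dev/sdb; other settings omitted) would be:

[random-read]
filename=/dev/sdb
rw=randread
bs=4k
ioengine=psync
iodepth=1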
Now running 4k random reads:
# fio -f rand_r.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [r(1)] [16.7% done] [98.91MB/0KB/0KB /s] [25.4K/0/0 iops] [eta 02m:30s]
And here’s the matching iostat output:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 0.00 26314.00 0.00 105256.00 0.00 8.00 0.76 0.03 0.03 0.00 0.03 76.00
We see about 25k random read IOPs. This sounds pretty good, but a new 850 EVO SSD should get around 92k random 4k reads per second, so this is substantially less than expected. Before we write off
the SSD as being an issue, let’s make sure the workload is sufficient to maximize the performance of the target device. The speed of modern SSDs means we often need concurrent load (multi-process or
multi-thread) to extract maximum performance. (The same holds true for high speed networks).
The fio run file includes an attribute called iodepth, which determines the number of IOs that are queued up to the target. The iostat data, avgqu-sz, shows that the queue depth to the device is
typically less than 1 (0.76). Let’s try queuing up more IO’s.
We can use the Linux lsblk(1) utility to take a look at the kernels request queue size for the disk devices:
# lsblk -o "NAME,TRAN,RQ-SIZE"
NAME TRAN RQ-SIZE
sda sata 128
sdb sata 128
sdc sata 128
This means the kernel maintains a queue depth of 128 for each device. Let’s try tweaking the iodepth parameter in the fio run file by adding iodepth=32 to the random IO run file shown above.
# fio -f rand_r.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=32
Starting 1 process
Jobs: 1 (f=1): [r(1)] [31.1% done] [100.1MB/0KB/0KB /s] [25.9K/0/0 iops] [eta 02m:04s]
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 0.00 25218.00 0.00 100872.00 0.00 8.00 0.68 0.03 0.03 0.00 0.03 68.00
We show both the fio output and iostat output samples above. With the change in the iodepth from 1 to 32, we observe no improvement in IOPS. Also, in both cases, the avgqu-sz (average queue size
during the sampling period) remained less than 1. It seems the iodepth value change did not result in a larger number of IO’s queued to the device. Let’s take a closer look at the man page for fio(1)
. Zeroing in on the iodepth parameter description, the man page tells us:
“Note that increasing iodepth beyond 1 will not affect synchronous ioengines…”.
Thus, we need to use a different ioengine parameter for fio. Of the available ioengines, libaio seems the logical choice, so we change the run file by replacing ioengine=psync with ioengine=libaio.
The fio results after the change to libaio generated the same IOPS result – about 24k, and the queue depth (avgqu-sz) reported by iostat still showed less than 1 IO in the queue. A trip back to the
man page reveals the problem lies with the iodepth parameter:
“Even async engines may impose OS restrictions causing the desired depth not to be achieved. This may happen on Linux when using libaio and not setting direct=1, since buffered IO is not async on
that OS.”
Let’s add direct=1 to the fio run file, and try again.
# fio -f rand_r.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
Starting 1 process
Jobs: 1 (f=1): [r(1)] [25.0% done] [383.8MB/0KB/0KB /s] [98.3K/0/0 iops] [eta 02m:15s]
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 0.00 98170.00 0.00 392680.00 0.00 8.00 31.35 0.32 0.32 0.00 0.01 100.00
This time we observe 98k random 4k reads per second from the device, which aligns well with the specification for the device. The latency (await_r) is excellent at 320us (sub-millisecond disk IO).
Also, the avgqu-sz value sits right around 32, which aligns with the iodepth value in the run file. The random write and mixed random read/write results also all aligned well with what is expected
from the device, now that we have a correct run parameter file.
The key lesson learned on this simple experiment: ensure you understand your load generation tool. For reference, here is the fio run file used for the random 4k read load:
# cat rand_r.fio
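A minimal version reflecting the final set of options discussed above (4k random reads with libaio, iodepth 32, and direct IO against /dev/sdb; other settings such as runtime are omitted) would be:

[random-read]
filename=/dev/sdb
rw=randread
bs=4k
ioengine=libaio
iodepth=32
direct=1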
File Systems
Testing on the raw block device has benefits because it’s the simplest code path through the kernel for doing disk IO, thus making it easier to test the capabilities of the underlying hardware. But
file systems are a fact of life, and it’s important to understand performance when a file system is in the mix. Let’s create a ext4 file system on /dev/sdb, using all defaults (no parameter tweaks,
straight out of the box), and run another series of fio tests.
Here is the fio run file for file system large sequential reads:
# cat srfs.fio
First, sequential reads:
# fio -f srfs.fio
seq_fs_read: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=psync, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [R(1)] [92.3% done] [5433MB/0KB/0KB /s] [5433/0/0 iops] [eta 00m:14s]
Just a single job doing 1MB sequential reads generates sustained throughput of 5433MB/sec, or just over 5.4GB/sec. Given we’re on a eSATA link capable of a maximum of 600MB/sec, we can assume this
workload is running out of the kernel page cache: in other words, system memory. Running iostat confirms there is no physical disk IO happening. As an experiment, we add concurrency to this load,
using fio’s numjobs parameter, setting numjobs=4 in the run file.
# fio -f srfs.fio
seq_fs_read: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=psync, iodepth=1
Starting 4 processes
Jobs: 4 (f=4): [R(4)] [22.7% done] [22016MB/0KB/0KB /s] [22.2K/0/0 iops] [eta 02m:20s]
With 4 processes running concurrently, throughput increases about 4x, to 22061MB/sec, or 22GB/sec. It’s great that the system can pull a lot of data out of the page cache (memory), but we’re still
not measuring disk IO performance.
Tracking the disk IO via iostat when starting this fio test, the system does perform actual disk reads, with the reported throughput from fio at about 520MB/sec. After the files are read, all
subsequent reads are from the page cache, at which point you see the throughput value reported by fio increases substantially, depending on the IO size, number of jobs, etc.
As an aside, using Intel’s Memory Latency Checker, mlc, we verified that this system is capable of just over 70GB/sec read throughput from memory, so the 23GB/sec we observed with 4 jobs running
falls well within the capabilities of the hardware.
Linux includes direct IO support, using the O_DIRECT flag passed to the kernel via the open(2) system call, instructing the kernel to do unbuffered IO. This bypasses the page cache. Adding direct=1
to the run file enables this. After setting this flag in the run file, we noticed a sustained 520MB/sec throughput during the sequential read test — similar to performance with the block device and
consistent with the SSD performance specification. With direct=1, we also observed physical disk reads that aligned with the metrics reported with fio.
An interesting side-note regarding direct=1: the sequential read experiments on the raw block device resulted in a disparity between the IOPS reported by fio, and the reads-per-second reported by
iostat. Recall that 1MB reads issued by fio broke down into four 256k reads in the kernel block layer. Thus, iostat reported 4X more reads-per-second than fio. That disparity goes away when setting
direct=1 in the run file for the block device.The IOPS reported by fio and iostat aligned similarly to our observations with the file system. Using direct=1 on block devices changes the disk IO
We observed a similar disparity in results for the 4k random read load based on whether we set direct=1 or not. The IOPS value jumped to 6124k IOPS, or roughly 6.1M IOPS without direct IO after the
initial file read — well beyond what a single SSD can do. This resulted in 6.1M read IOPS with 8 jobs running. Increasing the number of concurrent jobs from 8 to 16 brought the IOPS number to 10.2M –
not linear, but an interesting data point nonetheless in terms of having a sense for small random reads from memory.
The 10.2M IOPS may be a limitation in fio, a limitation in the kernel for a single ext4 file system, or several other possibilities. We emphatically do not assert that 10.2M 4k reads-per-second is a
ceiling/limit – this requires more analysis and experimentation. We’re focused on what we can get from physical storage so we’ll defer chasing the cached random read IOPS ceiling for another article.
Finally, setting direct=1 for the random 4k read test resulted in device-limited results – about 96k IOPS. Therefore, the hardware can sustain device-level performance with no noticeable issues when
doing direct IO with a file system in place. We also see a substantial increase in throughput and IOPS when our load is reading from the page cache (kernel memory).
Evaluating write performance with file systems in the mix gets a little trickier because of synchronous write semantics and page cache flushing. For a quick first test, we have an fio run file
configured to do random 4k writes, one job, psync engine. In this test, fio reported an IOP rate of just over 570k IOPS:
# fio -f rwfs.fio
random_fs_write: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [w(1)] [16.6% done] [0KB/2259MB/0KB /s] [0/578K/0 iops] [eta 02m:31s]
We know the SSD can handle peak writes of around 90k IOPS; clearly, these writes were writes to the page cache, not the SSD itself. Watching the iostat data confirms this, as nowhere near that number of
disk writes is seen. We do observe a regular burst of writes to the disk, on the order of 1k write IOPS every few seconds. This is the Linux kernel periodically flushing dirty pages from the page
cache to disk.
In order to force bypassing the page cache, add direct=1 to the run file, and start again.
# fio -f rwfs.fio
random_fs_write: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [w(1)] [51.4% done] [0KB/135.3MB/0KB /s] [0/34.7K/0 iops] [eta 01m:28s]
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdb 0.00 1.00 0.00 32360.00 0.00 129444.00 8.00 0.66 0.02 0.00 0.02 0.02 65.60
The fio output now reports 34.7k IOPS, and the iostat samples indicate this is all physical IO going to disk. Adding the direct flag bypasses the page cache, enabling writing directly to disk. We
know the 34.7k write IOPS falls below the stated spec for the drive; our previous experiment tells us we need to add more processes (concurrency) to max out the SSD. Adding numjobs=4
increased write IOPS to 75k; increasing to numjobs=8 boosts results slightly to 85k write IOPS — pretty close to the 90k specification. Since this microbenchmark is hitting the ext4 file system, we
have a longer code path through the kernel, even with direct IO. We used fio’s numjobs instead of iodepth in this example intentionally, as we wish to illustrate there are multiple ways of increasing
IO load.
We can also force synchronous writes in fio using sync=1 in the run file. In this example, direct=1 was replaced with sync=1.
# fio -f rwfs.fio
random_fs_write: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
Starting 8 processes
Jobs: 8 (f=8): [w(8)] [42.0% done] [0KB/2584KB/0KB /s] [0/646/0 iops] [eta 01m:45s]
Write IOPS dropped to 646 with sync=1, a substantial drop in write performance. In general, this is not unusual when enforcing synchronous write semantics because each write needs to be committed to
non-volatile storage before it returns success to the calling application.
We need to keep several key points in mind when evaluating write performance:
• Concurrency required for maximum IOPS
• Enforcing synchronous writes substantially reduces write IOPS, and increases latency
• Using direct=1 and sync=1 on write loads with fio have two very different effects on the resulting performance. Direct enforces bypassing the page cache, but writes may still get cached at the
device level (NVRAM in SSD’s, track buffers, etc), whereas sync=1 must ensure writes are committed to non-volatile storage. In the case of SSD’s, that will be the backend NAND storage, where
things like write amplification and garbage collection can get in the way of write performance.
External RAID Box
Let’s now examine a desktop storage system that implements hardware RAID which can be configured with up to 5 SSDs. We’ll configure the device as RAID 0 to maximize potential performance. The RAID
system employs five Samsung 850 EVO SSDs, the same make and model used previously, so we can do the math to determine expected performance levels. Each drive offers 520MB/sec sequential throughput
per drive, which translates to 2.6GB/sec total aggregate throughput. The limitations of eSATA throttle throughput – we know we will never get close to that, as we're on eSATA v3, so throughput will
be limited to about 600MB/sec.
Theoretically, we should see roughly 90k IOPS per device for random IO or 450k random IOPS for the array. In reality, the system will never achieve that due to the eSATA limit, since 450k IOPS at 4k
per IO would generate about 1.6GB/sec throughput — substantially more than the eSATA link can sustain. Doing the math, our random IOPS performance will top out at about 140k IOPS.
# lsscsi
[2:0:0:0] disk ATA Mobius H/W RAID0 0962 /dev/sda
[3:0:0:0] disk ATA Samsung SSD 850 3B6Q /dev/sdb
[4:0:0:0] disk ATA SAMSUNG MZ7LM1T9 204Q /dev/sdc
. . .
There’s our RAID device, /dev/sda.
Once again, we begin with large, sequential reads.
# fio -f seq_r.fio
seq-read: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
Starting 1 process
Jobs: 1 (f=1): [R(1)] [17.7% done] [256.0MB/0KB/0KB /s] [256/0/0 iops] [eta 02m:29s]
The fio results come in at around 256MB/sec, well below expectations. We expected to saturate the eSATA link at near 600MB/sec. The iostat data aligned with this, showing around 260MB/sec from /dev/
sda. Our fio run file reflects what we learned the first time around, thus we have iodepth=32 and ioengine=libaio. It looks like we’re not getting what we expect in terms of IO’s getting queued to
the device based on the iostat data:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 1020.00 0.00 261120.00 0.00 512.00 1.86 1.82 1.82 0.00 0.98 100.40
The avgqu-sz field is less than 2 (1.86) which is inconsistent with our earlier test. When we previously set iodepth=32, IO’s queued to a single device increased substantially. Earlier, we solved a
similar issue by setting direct=1.
# fio -f seq_r.fio
seq-read: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
Starting 1 process
Jobs: 1 (f=1): [R(1)] [11.6% done] [262.0MB/0KB/0KB /s] [262/0/0 iops] [eta 02m:40s]
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 277.00 0.00 267264.00 0.00 1929.70 33.94 122.06 122.06 0.00 3.61 100.00
Setting direct=1 in the run file definitely changes things, even to a block device. Throughput increases to 262MB/sec. We also see the IO queue depth at an expected value of just over 32 (avgqu-sz).
The IOPS numbers are also interesting. In the first case, with direct=0 (the default), fio reports 254 read IOPS while iostat reports 1020 reads per second at the device – four times as many IOPS. It's
possible the IO size is being reduced somewhere in the kernel code path. We're generating 1MB reads from fio, but we're sending (512 x 512) 256k IO's to the device at the block layer. That explains why we see
four times as many IOPS at the device: the IO size is reduced to one fourth. The IOPS reported by iostat and fio are virtually the same with direct=1 and we can see the avgrq-sz of 1929.70 reflects an average IO size
of about 1MB. (Remember, this data reflects 277 data points averaged over 1-second intervals, so the math will not align precisely).
Determining why using direct IO on a block device changes things requires some research and likely an excursion through the source code, which we may cover in a future post.
That 262MB/sec result presents a problem. Why is our desktop RAID box delivering substantially less sequential read throughput than a single SSD? Let’s verify the link speed first by looking at
kernel messages via dmesg(1):
[ 10.174282] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 10.174756] ata3.00: ATA-7: Mobius H/W RAID0, 0962, max UDMA/133
. . .
Now we see the problem: the external SATA reports 3.0Gbps, not 6.0Gbps, which implies a maximum theoretical throughput of about 300MB/sec. We’re getting about 262MB/sec which is probably realistic
given protocol overhead.
If our production workload needs maximum available sequential read performance, this is not a good solution to pair with a DGX Station. But there’s still random IOPS; perhaps it delivers on a
different workload. The appropriate changes are made to the fio run file to generate 4k random reads with libaio as the ioengine, and an iodepth of 32.
# fio -f rand_r.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
Starting 1 process
Jobs: 1 (f=1): [r(1)] [6.6% done] [62396KB/0KB/0KB /s] [15.6K/0/0 iops] [eta 02m:49s]
Only 15k random 4k reads per second looks pretty abysmal — much worse than the first go-round with only a single SSD.
Let’s crank up the iodepth to 128 since we have multiple physical SSD’s behind this RAID controller, all striped up as a RAID 0 device. This allows us to see if more IO’s in the queue yield a better
IOPS result.
# fio -f rand_r.fio
random-read: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=128
Starting 1 process
Jobs: 1 (f=1): [r(1)] [17.1% done] [62096KB/0KB/0KB /s] [15.6K/0/0 iops] [eta 02m:30s]
Even pumping up iodepth results in no improvement. Checking for random writes, we observed pretty much the same level of performance. It’s looking like the RAID box under test offers notably
substandard performance. Results such as these need to be a key component of the decision-making process when selecting a storage solution.
A few key points come to mind:
• Microbenchmarking with a synthetic load generator before going into production is relatively easy and yields big benefits.
• It’s important to analyze and understand the tool(s) and the data being reported. Sometimes not getting expected values is a load issue, not a target issue.
• Disk IO has several potential variables that can impact performance in subtle or not-so-subtle ways
□ An active page cache potentially masks actual file system performance.
□ The direct IO flag apparently affects IO with both block and file system. We’ll dive deeper on this in a future article.
□ Synchronous writes used to ensure data integrity are expensive in terms of performance.
It seems that the particular external RAID box we examined isn’t well-suited for deep learning applications if any level of sustained IO performance is important to the workload. The time invested in
performance testing really paid off when evaluating the performance of the storage hardware. Had we just started using this storage for production work, the poor performance may have been blamed on
the overall system.
Both of these examples represent simple and small configurations, but the same methodology can be applied to much larger data center environments. For example, if you’re deploying DGX-1 servers with
NAS storage connected via the DGX-1 10Gb ethernet ports, you know peak performance tops out at about 1.2GB/sec using one port, or 2.4GB/sec using both ports concurrently. That will be your max
deliverable throughput (wire speed of the network link).
If you’re doing small (for example, 4k) random IOs, you’ll hit the throughput limit on the wire at about 300k IOPS for a single 10Gb link or twice that if you’re aggregating across both ports. Five
SSDs employed in a current-generation NAS system will easily saturate dual 10Gbe links. A six-or-seven drive array will saturate dual 10Gbe connections with random IOs.
NVIDIA GPUs provide massive computational power for deep learning workloads. The massive parallelism of NVIDIA GPUs means deep learning jobs can run at very high rates of concurrency – thousands of
threads processing data on NVIDIA GPU cores. Completing those jobs in a timely manner requires high sustained rates of data delivery from the storage. Understanding the throughput capabilities is
critical to properly assessing performance and capacity requirements for DL in the enterprise.
High rates of low-latency IOPS may be just as important as throughput, depending on the type of source data (image files, binary files, etc) and how it is stored. Low-latency minimizes wait time for
data, whether feeding training jobs, writing files as a result of a transformation process prior to DL training, or generating results for use by data scientists and business analysts, maximizing
customer ROI when using NVIDIA Deep Learning accelerators. Spending a little time running benchmarks on your storage subsystem likely reaps significant returns in performance. | {"url":"https://developer.nvidia.com/blog/storage-performance-basics-for-deep-learning/","timestamp":"2024-11-07T08:57:33Z","content_type":"text/html","content_length":"226472","record_id":"<urn:uuid:a33de573-46f4-47ad-a20e-7e525801903f>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00617.warc.gz"}
Mechanical vibrations - Contents
This series of tutorials covers the basic theory of mechanical vibrations, an important subject in mechanical engineering. Establishing the fundamentals of the subject is largely an exercise in
applied mathematics, in particular finding and interpreting solutions to homogeneous and non-homogeneous, second order differential equations. For this reason I've included a maths tutorial covering
this aspect.
Tutorials in the Mechanical vibrations series are as follows.
Mechanical vibrations - introduction and overview
Establishing free body diagrams and equations of motion for a spring and mass vibrating system with one degree of freedom for: (a) free vibrations, (b) free vibrations with damping, (c) forced
vibrations, (d) forced vibrations with damping.
Mechanical vibrations - maths tutorial
Methods for solving ordinary, second order, linear, homogeneous and non-homogeneous equations, specifically: general solutions for homogeneous equations of form y = e^rx obtained from the roots of
characteristic equation ar^2 + br + c = 0; particular solution derived from solving for constant terms of the general solution using two initial conditions; general solution for non-homogeneous
equations obtained from sum of the general solution of the complementary homogeneous equation and the particular solution of the non-homogeneous equation obtained by the method of undetermined
Deriving the expression for displacement x(t) = R.cos(ωt - φ) from solutions to the homogeneous differential equation of motion of a free vibrating spring and mass system where R is the amplitude and
φ the phase angle; establishing the angular frequency of a free vibrating system ω is the natural frequency ω[n] for all initial conditions of the system.
Deriving an expression for displacement from solutions to the homogeneous differential equation of motion of a free vibrating spring and mass system with damping; definition of dimensionless damping
ratio ζ and derivation of solutions for ζ > 1, ζ < 1 and ζ = 1 ; worked examples and plots illustrating heavily damped (ζ > 1), critically damped (ζ = 1) and lightly damped (ζ < 1) behaviour; method
of estimating value of ζ from experimentally derived plots.
Forced vibrations without damping
Deriving an expression for displacement from solutions to the non-homogeneous differential equation of motion of a spring and mass system without damping subjected to a harmonic
forcing function F[0]cos(ωt) ; interpretation of the solution as separable transient and steady state responses; plots illustrating amplification factor and phase relationship for steady state
response where ω < ω[n] and ω > ω[n] ; illustration of resonance condition where ω ≅ ω[n] ; development of solution for combined transient and steady state response illustrating the harmonic
phenomenon of beating;
Forced vibrations with damping
Deriving expressions for steady state displacement and phase angle using the non-homogeneous differential equation of motion of a spring and mass system with damping subjected to a
harmonic forcing function F[0]cos(ωt) ; plots illustrating amplification factor and phase relationship for steady state response where ω < ω[n] and ω > ω[n] ; plot illustrating variation of
amplification factor with damping ratio ζ .
I welcome feedback at: | {"url":"https://alistairstutorials.co.uk/mechanical_vibrations_content.html","timestamp":"2024-11-05T22:37:05Z","content_type":"text/html","content_length":"6208","record_id":"<urn:uuid:79a69d1d-7831-4365-bcf3-9e41f01b7c6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00690.warc.gz"} |
Using Parseval's theorem to check for energy conservation between the time and frequency domain
Note: Energy conservation - nonorm state
This example only applies to the nonorm state. In nonorm, the source is a pulse by default. The pulse carries a finite amount of energy, in joules. If there is no loss in the Fourier transform, the amount
of energy has to be exactly the same in time and frequency domain. Unlike a CW source, the amount of energy accumulated is a function of time. This example does not apply when CWnorm is on.
Suppose that e(t) and E(f) are the fields in the time and frequency domain respectively, where E(f) is obtained by the Fourier transform of e(t). According to Parseval's theorem, the following
equation holds,
$$\int_{-\infty}^{\infty}|e(t)|^2dt= \int_{-\infty}^{\infty}|E(f)|^2df$$
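As a quick numerical illustration of this identity, here is a generic NumPy sketch (independent of the simulation files attached to this example; the pulse parameters are arbitrary):

import numpy as np

dt = 1e-17                                   # time step, s
t = np.arange(-2000, 2000) * dt
# arbitrary short pulse e(t): Gaussian envelope times a carrier
e = np.exp(-(t / (200 * dt))**2) * np.cos(2 * np.pi * 3e14 * t)

# E(f) ~ integral of e(t) exp(-i 2 pi f t) dt, approximated by a scaled DFT
E = np.fft.fft(e) * dt
df = 1.0 / (len(t) * dt)

energy_time = np.sum(np.abs(e)**2) * dt
energy_freq = np.sum(np.abs(E)**2) * df
print(energy_time / energy_freq)             # ~1.0, as Parseval's theorem requires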
For a simple plane wave, the Poynting vector(\(P\)) is directly proportional to \(|E|^2\), where E is the electric field. This relation holds in both time and frequency domain. For a short pulse in
the nonorm state (without time averaging),
$$\mathrm{Power\_time}(t)=\int \mathrm{real}(P_\bot(t))\,dS=n\sqrt{\frac{\epsilon_0}{\mu_0}}\int |e(t)|^2\,dS$$
$$\mathrm{Energy\_spectrum}(f)=\int \mathrm{real}(P_\bot(f))\,dS=n\sqrt{\frac{\epsilon_0}{\mu_0}}\int |E(f)|^2\,dS$$
In other words, for the same area dS, the theorem can be written as a form of conservation of energy. Note that Energy_spectrum(f) has a unit of Watt/Hz^2, while power_time(t) has a unit of Watt.
$$\mathrm{Energy}=\int_{-\infty}^{\infty} \mathrm{power}(t)\,dt=\int_{-\infty}^{\infty}\mathrm{Energy\_spectrum}(f)\,df$$
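As a generic illustration, independent of any particular solver, the discrete form of Parseval's theorem can be checked in a few lines of Python/numpy (this is not a Lumerical script; the pulse shape, sample count, and time step below are arbitrary choices made only for this sketch):

import numpy as np

# Sample a Gaussian-modulated pulse e(t) on a uniform time grid
N = 4096                 # number of samples (arbitrary)
dt = 1e-17               # time step in seconds (arbitrary)
t = (np.arange(N) - N / 2) * dt
e_t = np.exp(-(t / (50 * dt)) ** 2) * np.cos(2 * np.pi * 5e14 * t)

# Continuous-style Fourier transform approximated by the FFT: E(f) ~ dt * FFT(e)
E_f = dt * np.fft.fft(e_t)
df = 1.0 / (N * dt)      # frequency spacing

energy_time = np.sum(np.abs(e_t) ** 2) * dt
energy_freq = np.sum(np.abs(E_f) ** 2) * df
print(energy_time / energy_freq)   # ratio is ~1, as Parseval's theorem requires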
To verify the above equation, a simple simulation with a plane wave and planar monitors (both time and frequency) is performed in the nonorm state. Download the above associated files and run the
script. The script contains two parts: 1) the "power" returned by the monitors and 2) Poynting vector integration.
First, it extracts the "power" recorded by the time and frequency monitors. The reason for starting with this step is that the returned "power" is not computationally expensive to work with, since it is just a 1D
vector. One can integrate power_time(t) and energy_spectrum(f) to calculate the amount of energy carried by a pulse, in Joules. Note that the "power" returned by the time monitor is the instantaneous
power, while the "power" returned by the frequency monitor is the time-averaged power (it therefore needs a factor of 2 to compensate for the 0.5 set by default; see the sourcepower command). The
sourcepower command is also used for comparison, where another factor of 2 is introduced from the integration over the negative frequencies due to the Fourier transform.
By integrating the area under the above curves,
Ratio of energy_time to energy_f = 1.00111
Ratio of energy_time to energy_f_sourcepower = 0.99886
Second, an integration of the Poynting vector is performed. Note that this step can be quite computationally expensive since the field data is usually a 3D matrix, especially the time-domain data. Pz is
used to calculate power in both domains, using the above equations.
Ratio of energy_time_manual to energy_f_manual = 1.00111
This is an example to show that the time and frequency domain monitors record the same data without losing information during Fourier transform.
See also
Frequency domain normalization | {"url":"https://optics.ansys.com/hc/en-us/articles/360034394274-Using-Parseval-s-theorem-to-check-for-energy-conservation-between-the-time-and-frequency-domain","timestamp":"2024-11-04T11:36:29Z","content_type":"text/html","content_length":"36195","record_id":"<urn:uuid:b1462788-82f3-4a2c-bf11-84ecd7258f14>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00342.warc.gz"} |
Multiple Linear Regression Calculator - MathCracker.com
Multiple Linear Regression Calculator
Instructions: You can use this Multiple Linear Regression Calculator to estimate a linear model by providing the sample values for several predictors \((X_i)\) and one dependent variable \((Y)\), by
using the spreadsheet below. Click on the "Add Predictor" button to add more independent variables (up to 5):
Multiple Linear Regression Calculator
More about this Multiple Linear Regression Calculator with steps, so you can have a deeper perspective of the results that will be provided by this calculator.
Multiple Linear Regression is very similar to Simple Linear Regression, only that two or more predictors \(X_1\), \(X_2\), ..., \(X_n\) are used to predict a dependent variable \(Y\).
What is the Multiple Linear Model
The multiple linear regression model formula is
\[ Y = \displaystyle \beta_0 + \beta_1 X_1 + \beta_2 X_2 + ... + \beta_n X_n + \epsilon\]
where \(\epsilon\) is the error term, which has the property of being normally distributed with mean 0 and constant variance: \(\epsilon \sim N(0, \sigma^2)\).
After providing sample values for the predictors \(X_1\), \(X_2\), ..., \(X_n\) and the response variable \(Y\), estimates of the population slope coefficients are obtained by minimizing the total
sum of squared errors . The estimated model is expressed as:
\[ \hat Y = \displaystyle \hat\beta_0 + \hat\beta_1 X_1 + \hat\beta_2 X_2 + ... + \hat\beta_n X_n\]
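For readers who prefer to see the estimation step in code, here is a minimal Python/numpy sketch of the least-squares fit with two predictors (the data values are made up purely for illustration; a real analysis would of course use your own sample):

import numpy as np

# Made-up sample: response Y with two predictors X1 and X2
X1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0])
Y  = np.array([3.1, 4.9, 8.2, 9.8, 14.1])

# Design matrix with a leading column of ones for the intercept
X = np.column_stack([np.ones_like(X1), X1, X2])

# Least-squares estimates of beta_0, beta_1, beta_2
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ beta   # fitted values

print("estimated coefficients:", beta)
print("fitted values:", Y_hat)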
How do you calculate multiple linear regression?
1) First, you need to collect your data. You need to have one dependent variable (Y) and one or more independent variables (X's)
2) Next, you need to make sure that the variables have the appropriate level of measurement, especially the dependent variable. Indeed, you need to make sure that the dependent variable Y is a scale (continuous) variable
3) Next, you need to ensure that the data have relatively bell-shaped distributions, or at least that the data are not strongly skewed, in order to test for the validity of the linear regression model
4) Finally, put the data in tabular form, and use either our calculator, Excel, or your calculator of choice.
Multiple Linear Regression Calculation with Excel
Can I compute a multiple regression with Excel? Absolutely, and in fact it may be one of the most commonly used methods to compute linear regressions.
Excel will provide a very complete summary, with the coefficient of determination, regression coefficients, standard errors and associated p-values, which will determine the statistical significance
of each predictor.
The only thing that Excel lacks is that it does not show step-by-step calculations, like this multiple linear regression calculator does.
Multiple Regression Analysis Interpretation
So, how do you interpret the result of a linear regression analysis? First, and most importantly, you have the regression coefficients, which represent marginal changes in the dependent variable,
when the corresponding independent variable increases by one unit, when keeping all the rest of predictors constant.
You need to be very careful at interpreting these coefficients, as it only makes sense to do so when the corresponding coefficient is statistically significant.
More Regression Calculators
You won't find a more versatile tool than linear regression. It is applied in so many contexts that its fame is certainly well deserved.
Notice that this multiple regression calculator involves several predictors. If, on the other hand, you want to use only one predictor, you can use this simple linear regression calculator instead.
One case that is a combination of a simple linear regression (with one predictor) and a multiple linear regression ( with several predictors) is this polynomial regression calculator. For a
polynomial regression, there is one predictor \(X\), but we also use as predictors a number of integer powers of \(X\). | {"url":"https://mathcracker.com/multiple-linear-regression-calculator","timestamp":"2024-11-12T21:45:40Z","content_type":"text/html","content_length":"112268","record_id":"<urn:uuid:21f52700-0c98-43a1-902c-62632a5c3f44>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00478.warc.gz"} |
Causal structure in spin-foams
SciPost Submission Page
Causal structure in spin-foams
by Eugenio Bianchi, Pierre Martin-Dussaud
Submission summary
Authors (as registered SciPost users): Eugenio Bianchi · Pierre Martin-Dussaud
Submission information
Preprint Link: https://arxiv.org/abs/2109.00986v1 (pdf)
Date submitted: 2021-09-08 18:53
Submitted by: Martin-Dussaud, Pierre
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
• Gravitation, Cosmology and Astroparticle Physics
Specialties: • High-Energy Physics - Theory
• Quantum Physics
Approach: Theoretical
The metric field of general relativity is almost fully determined by its causal structure. Yet, in spin-foam models for quantum gravity, the role played by the causal structure is still largely
unexplored. The goal of this paper is to clarify how causality is encoded in such models. The quest unveils the physical meaning of the orientation of the two-complex and its role as a dynamical
variable. We propose a causal version of the EPRL spin-foam model and discuss the role of the causal structure in the reconstruction of a semiclassical spacetime geometry.
Current status:
Awaiting resubmission
Reports on this Submission
0- the issue of causality in spin foam models is very important, and too little explored in the literature, so this paper is a welcome contribution in this respect
1-the presentation is very good in terms of clarity and structure, very pedagogic and very well written
2-the technical improvements of the construction of causal spin foam amplitudes, and in particular the EPRL model, provided by the authors, are valuable
3- the paper adds several elements of precision in the definition of causal structures in simplicial gravity, and the relevant discussion is very well done
1-In the end, the original contribution of this paper to the literature is not too substantial. The technical improvements in the construction of causal spin foam models seem rather marginal; the
definition of causal structures in simplicial gravity is precise but not new, and the overall strategy for defining causal spin foam models or identifying causal elements in existing ones is the same
that had been proposed earlier.
2-if the paper is to be understood (also) as an introduction to the topic of causality in spin foams or as a review of the existing work, it is definitely lacking in giving a proper account of how
the current knowledge has been achieved, in explaining the connection to other facts about quantum gravity (e.g. those explained in the early QG papers by Teitelboim et al.) and in citing relevant
3-the outstanding open issues concerning causality in spin foam models have to do with the use of the "causal models" and their physical interpretation, as independent (but of course related)
constructions from the orientation-independent ones, again in the spirit of the formal QG path integral constructions or of the QFT analogues. These issues are not even touched by the authors, and in
some points their presentation is on the contrary quite confusing, if not misleading, concerning these aspects.
This paper presents an overview of how causality can be identified at the level of the simplicial complexes and geometries underlying spin foam models and how the latter can be modified to give
"causal spin foam amplitudes". In particular, a specific construction of a causal version of the EPRL amplitudes is presented.
I have several comments about this paper.
a. the basic definitions of discrete bare causality and time-orientability, for the simplicial complex, the translation in terms of dual 1-skeleton, and the Lorentzian simplicial geometry in terms of
Regge calculus, i.e. sections II,III and IV, are basically like in [6], only slightly more detailed in what concerns the global Lorentzian properties of the simplicial complex and its geometry, and
about boundary data. This should be made more clear.
b. in fact, the broader idea could be traced back to the formalism of quantum causal histories, developed in F. Markopoulou, L. Smolin, gr-qc/9702025 [gr-qc]; F. Markopoulou, L.Smolin, gr-qc/9712067
[gr-qc]; F. Markopoulou, hep-th/9904009 [hep-th]; E. Hawkins, F. Markopoulou, H. Sahlmann, hep-th/0302111 [hep-th]; they should be cited
c. also for what concerns the restriction of spin foam amplitudes to causal configurations, i.e. the construction of causal spin foam amplitudes, the strategy outlined at the end of page 7 is exactly
the one argued for and employed in [6], where the relation to Teitelboim's construction of the QG causal propagator was also discussed, and then later in [7]. The authors are possibly improving on
this latter construction, although not in a very dramatic way, but the overall strategy, and procedure is basically the same. Also this should be made clear when introducing the general procedure,
rather than simply mentioning it when comparing the BC and EPRL construction on page 12.
d. it should be made much more evident that the dual 1-skeleton of the simplicial complex, when interpreted in causal terms, contains closed timelike loops. Transitive closure (which gives poset from
dual 1-skeleton) implies contraction of some dual faces (the closed timelike loops), but it is important to note that this contraction changes the combinatorics, thus the duality with the simplicial
complex (it amounts to a specific coarse graining of the simplicial complex). It would also change the spin foam amplitudes, in fact. The role of closed timelike loops in causal spin foam models has
been studied, from a quantum information perspective, in E. Livine, D. Terno, gr-qc/0611135 [gr-qc], which should be cited.
e. the discussion of the PR model is focusing exclusively on the spin representation of the model; this is surprising, since the orientation independence of the amplitudes, the connection to the
discrete gravity path integral, which makes clear where this orientation-independence comes from, and the geometric interpretation of the same amplitudes, is much clearer in the equivalent expression
as lattice BF theory, or, indeed, 3d lattice gravity written in Lie algebra and group variables. The spin foam representation is simply the result of a change of variables, which can be performed
explicitly and straightforwardly in both directions (this is the exact counterpart of writing down the path integral for a particle on the 3-sphere in terms of spherical harmonics). From this point
of view, there is no need to go to the semiclassical limit to see the connection to discrete 3d gravity, as implied by the authors. In fact the same is true for any spin foam model, although the
corresponding lattice gravity path integral is more involved (as it should be) in the 4d case. This has been shown in several papers, for example in M. Finocchiaro, D. Oriti, 1812.03550 [gr-qc]
In particular, it is true for the BC model (whose lattice gravity expression has been introduced in V. Bonzom, E. Livine, 0812.3456 [gr-qc]), and for the EPRL model. This shows that the reliance on
the semi-classical approximation to give a "causal"interpretation to the terms appearing in the spin foam amplitudes is not needed.
f. From this point of view, the strategy applied by the authors to impose a causality restriction on the amplitudes, and by the other authors before them including the authors of [6,7], is rather
artificial. Indeed, it basically amounts to forcing the amplitudes in the spin representation to take a "path integral-like" form, with the appearance of the exponential of an action, and then
restrict the corresponding sum over opposite orientations for the wedge sub-amplitudes, as one would naturally do in a quantum gravity path integral following Teitelboim. Given that the path
integral expression is already available for the same amplitudes, without any artificial splitting and in the classical variables from the underlying phase space, it would seem much more natural to
perform the causal restriction in this expression. In fact, this was done for the Ponzano-Regge model coupled to point particles, in D. Oriti, T. Tlas, gr-qc/0608116 [gr-qc], where the resulting
appearance of the causal propagator for the point particles is also shown.
The authors could explain why the focus is on the spin representation instead, since their strategy is actually motivated and phrased in a path integral perspective (by the way, one can again do the
same for the point particle on the 3-sphere, and construct different versions of the path integral, the causal propagator as well as the orientation-independent Hadamard propagator; both can be
expanded in spherical harmonics; the expression in spherical harmonics of the causal propagator does not coincide with the "causal restriction of the Hadamard propagator written in spherical
harmonics". Even insisting for some reason on applying the restriction in the spin representation, the authors fail to explain why their procedure would be more convenient that using the integral
expression of the 6j-symbol and restricting that, which was the procedure in [6] in the 4d case. Using this integral expression, one sees immediately the terms which will then be selected by the
semiclassical limit as forming the discrete gravity path integral, which is the case also in 4d, for both the BC and EPRL models.
g. the authors are of course free to regard the "causal spin foam models" as simply nicely identified components of the original spin foam models, rather than new models, and their usefulness as
limited to singling out underlying causal properties. However, this is rather unnatural from the QG path integral perspective and quite unnatural also from the point of view of particle propagators
or QFT 2-point functions, where causal and orientation-independent constructions are used alongside one another, and on equal footing (in fact, the Feynman propagator is clearly playing a more
prominent role in QFT), since they correspond to different "observables" or, more generally, answers to different questions one can pose to the theory. In any case, causal models seem to be actually
put forward as new models also by the authors, when they suggest that they can "cure" issues of the non-causal ones. If this was not the case, i.e. if one did not work with causal models as "the
correct models", by dropping all non-causal configurations, they could not cure anything, since one would still be working with the non-causal ones, in the end.
h. I am puzzled by the discussion of the relation with the construction in [7]; in particular, the answer to the question posed in 1. on page 13 seems obviously "yes". It is the same analysis by
Teitelboim, that shows that, as the only difference between the two sectors of solutions of the simplicity constraints (the third being the degenerate configurations) is indeed the orientation of the
resulting bivectors, and this is exactly what is interpreted in causal terms by the authors to motivate their restriction. When the Immirzi parameter is present the difference between gravitational
and topological sectors is basically irrelevant, for what concerns us here. This is also confirmed by the BC case, and by the fact that the simplicity constraints that are actually imposed in the
construction of BC and EPRL models are in fact the linear ones, which are stronger and select away the topological sectors.
i. I am in fact even more puzzled by the authors mentioning the "cosine problem". Their own analysis of causality and of the gravitational path integral, discrete or continuous, should have made
clear to them that there is no cosine problem at all. Spin foam amplitudes which are intended to define the projector onto solutions of the Hamiltonian constraint of canonical quantum gravity must be
orientation independent, and thus sum over opposite orientations also in the semiclassical limit (unless the configurations with opposite orientation are suppressed by boundary conditions/states).
The appearance of the cosine in the asymptotic is a confirmation that they do what they are supposed to do, at least in this regard. By the way, this is also confirmed by the 3d Ponzano-Regge model
and the way the causal restrictions break symmetries that underlie the topological nature of the model and, from the canonical perspective, prevent the imposition of the canonical constraints.
Causal spin foam amplitudes correspond to a different path integral construction, which is possibly equally valid and possibly useful also from the canonical perspective, but do not define such
imposition of the Hamiltonian constraint. They are defining something like a "Green function" for the Hamiltonian constraint, which is in fact more natural from a Lagrangian path integral
perspective, but less easy to interpret from a canonical one. This was discussed in the early spin foam literature, but it is also very clear already in the old papers by Teitelboim. In fact, the
appearance of the cosine from a sum over opposite orientations in the (formal) QG path integral in the continuum, and an explanation of why this is actually needed to have a path integral that
satisfies the canonical constraints, is explained in J. Halliwell, J. Hartle, Phys.Rev.D 43 (1991) 1170-1194 (section IVA, see in particular the end of page 1181) from the Hamiltonian perspective,
relating it in particular to the known difference between diffeomorphisms and canonical transformations, thus adding an important layer to the interpretation.
- in the end, the technical improvements of the spin foam construction provided by the authors seem rather marginal to me, compared to the existing literature, although valuable, and their
contribution to the overall strategy is limited to some added element of precision in the definition of causal structures in simplicial gravity, although this can be appreciated. Thus the original
contribution of this paper is not too substantial.
- the presentation is very good in terms of clarity and structure, very pedagogic and very well written; however, if it is to be understood as an introduction to the topic of causality in spin foams
or as a review of the existing work, it is definitely lacking in giving a proper account of how the current knowledge has been achieved, in explaining the connection to other facts about quantum
gravity (e.g. those explained in the early QG papers by Teitelboim et al.) and in citing relevant work.
- finally, technical improvements in specific spin foam constructions are welcome. However, the outstanding open issues concerning causality in spin foam models have to do with the use of the "causal
models" and their physical interpretation, as independent (but of course related) constructions from the orientation-independent ones, again in the spirit of the formal QG path integral constructions
or of the QFT analogues. These issues are not even touched by the authors, and in some points their presentation is on the contrary quite confusing (if not misleading), concerning these aspects.
Given the above, and while I appreciate very much several aspects of this paper, and its original contribution, I cannot recommend publication in the present form. I recommend instead a profound
revision, and then, in case, resubmission.
Requested changes
The authors should revise substantially the paper, taking into account points a-i in the report.
This is a good paper that brings useful clarity to a thorny issue. It is clearly written and the authors' definition of discrete orientation and causality is of value, and convincing. The comparison
with the existing literature is extensive and useful. The paper is definitely worth publishing. The final discussion leaves a sense of uncertainty. While the abstract appears to suggest that a
modification of the EPRL model is proposed, the text itself refers, instead, to an interpretation of different components of the amplitude in terms of causal structures. This second objective is
achieved, and justifies the interest of the paper. The first one depends on various motivations that the authors quote from the literature, at least on some of which there is no consensus. This
ambiguity in the paper does not necessarily need to be removed, but it would perhaps be better to be more explicit, for instance in the abstract. After all, this is an issue where, more than new
proposals (of which there are too many) what is needed is clarity. The paper does a good job in reviewing motivations, but without assessing them. On the other hand, it does contribute nicely to
clarify the causal nature of the discrete geometry. | {"url":"https://www.scipost.org/submissions/2109.00986v1/","timestamp":"2024-11-01T19:58:04Z","content_type":"text/html","content_length":"47938","record_id":"<urn:uuid:a07c4e1a-be70-4c67-9fe2-5027ec14ec73>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00472.warc.gz"} |
Assignment: Working With Data
Statistical analysis software is a valuable tool that helps researchers perform the complex calculations. However, to use such a tool effectively, the study must be well designed. The social worker
must understand all the relationships involved in the study. He or she must understand the study’s purpose and select the most appropriate design. The social worker must correctly represent the
relationship being examined and the variables involved. Finally, he or she must enter those variables correctly into the software package. This assignment will allow you to analyze in detail the
decisions made in the “Social Work Research: Chi Square” case study and the relationship between study design and statistical analysis. Assume that the data has been entered into SPSS and you’ve been
given the output of the chi-square results. (See Week 4 Handout: Chi-Square findings).
To prepare for this Assignment, review the Week 4 Handout: Chi-Square Findings and follow the instructions.
By Day 7
Submit a 1-page paper of the following:
• An analysis of the relationship between study design and statistical analysis used in the case study that includes:
□ An explanation of why you think that the agency created a plan to evaluate the program
□ An explanation of why the social work agency in the case study chose to use a chi square statistic to evaluate whether there is a difference between those who participated in the program and
those who did not (Hint: Think about the level of measurement of the variables)
□ A description of the research design in terms of observations (O) and interventions (X) for each group.
□ Interpret the chi-square output data. What do the data say about the program?
Week 4 Handout: Chi-Square Findings
The chi-square test for independence is used to determine whether there is a relationship between two variables that are categorical in their level of measurement. In this case, the variables are employment level and treatment condition. It tests whether there is a difference between groups. The research question for the study is: Is there a relationship between the independent variable, treatment, and the dependent variable, employment level? In other words, is there a difference in the number of participants who are not employed, employed part-time, and employed full-time between the program and the control group (i.e., waitlist group)? The hypotheses are:
H0 (The null hypothesis): There is no difference in the proportions of individuals in the three employment categories between the treatment group and the waitlist group. In other words, the frequency distribution for variable 2 (employment) has the same proportions for both categories of variable 1 (program participation). ** It is the null hypothesis that is actually tested by the statistic. A chi-square statistic that is found to be statistically significant (e.g., p < .05) indicates that we can reject the null hypothesis (understanding that there is less than a 5% chance that the relationship between the variables is due to chance).
H1 (The alternative hypothesis): There is a difference in the proportions of individuals in the three employment categories between the treatment group and the waitlist group. ** The alternative hypothesis states that there is a difference. It would allow us to say that it appears that the treatment (voc rehab program) is effective in increasing the employment status of participants.
Assume that the data have been collected to answer the above research question. Someone has entered the data into SPSS. A chi-square test was conducted, and you were given the following:
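Since the actual SPSS output is not reproduced here, the following Python sketch shows how a chi-square test of independence could be run on a contingency table of the same shape; the counts below are invented purely for illustration and are not the case-study data:

import numpy as np
from scipy.stats import chi2_contingency

# Rows: treatment group, waitlist group
# Columns: not employed, employed part-time, employed full-time
# Counts are hypothetical placeholders, not the case-study data.
observed = np.array([
    [10, 15, 25],   # treatment group
    [22, 14, 14],   # waitlist group
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print("chi-square:", chi2)
print("degrees of freedom:", dof)
print("p-value:", p_value)   # p < .05 would lead us to reject H0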
| {"url":"https://www.studypool.com/discuss/17762659/assignment-working-with-data","timestamp":"2024-11-08T20:31:20Z","content_type":"text/html","content_length":"292200","record_id":"<urn:uuid:c7dabb3a-8776-4bea-af59-93ab291e5961>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00498.warc.gz"}
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ESA.2018.1
URN: urn:nbn:de:0030-drops-94646
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2018/9464/
Ahmadian, Sara ; Bhaskar, Umang ; Sanità , Laura ; Swamy, Chaitanya
Algorithms for Inverse Optimization Problems
We study inverse optimization problems, wherein the goal is to map given solutions to an underlying optimization problem to a cost vector for which the given solutions are the (unique) optimal
solutions. Inverse optimization problems find diverse applications and have been widely studied. A prominent problem in this field is the inverse shortest path (ISP) problem [D. Burton and Ph.L.
Toint, 1992; W. Ben-Ameur and E. Gourdin, 2004; A. Bley, 2007], which finds applications in shortest-path routing protocols used in telecommunications. Here we seek a cost vector that is positive,
integral, induces a set of given paths as the unique shortest paths, and has minimum l_infty norm. Despite being extensively studied, very few algorithmic results are known for inverse optimization
problems involving integrality constraints on the desired cost vector whose norm has to be minimized.
Motivated by ISP, we initiate a systematic study of such integral inverse optimization problems from the perspective of designing polynomial time approximation algorithms. For ISP, our main result is
an additive 1-approximation algorithm for multicommodity ISP with node-disjoint commodities, which we show is tight assuming P!=NP. We then consider the integral-cost inverse versions of various
other fundamental combinatorial optimization problems, including min-cost flow, max/min-cost bipartite matching, and max/min-cost basis in a matroid, and obtain tight or nearly-tight approximation
guarantees for these. Our guarantees for the first two problems are based on results for a broad generalization, namely integral inverse polyhedral optimization, for which we also give approximation
guarantees. Our techniques also give similar results for variants, including l_p-norm minimization of the integral cost vector, and distance-minimization from an initial cost vector.
BibTeX - Entry
author = {Sara Ahmadian and Umang Bhaskar and Laura Sanit{\`a} and Chaitanya Swamy},
title = {{Algorithms for Inverse Optimization Problems}},
booktitle = {26th Annual European Symposium on Algorithms (ESA 2018)},
pages = {1:1--1:14},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-081-1},
ISSN = {1868-8969},
year = {2018},
volume = {112},
editor = {Yossi Azar and Hannah Bast and Grzegorz Herman},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2018/9464},
URN = {urn:nbn:de:0030-drops-94646},
doi = {10.4230/LIPIcs.ESA.2018.1},
annote = {Keywords: Inverse optimization, Shortest paths, Approximation algorithms, Linear programming, Polyhedral theory, Combinatorial optimization}
Keywords: Inverse optimization, Shortest paths, Approximation algorithms, Linear programming, Polyhedral theory, Combinatorial optimization
Collection: 26th Annual European Symposium on Algorithms (ESA 2018)
Issue Date: 2018
Date of publication: 14.08.2018
| {"url":"http://dagstuhl.sunsite.rwth-aachen.de/opus/frontdoor.php?source_opus=9464","timestamp":"2024-11-10T22:36:10Z","content_type":"text/html","content_length":"7810","record_id":"<urn:uuid:4984fc27-af6f-4fb4-aacc-e4fc2d836b7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00445.warc.gz"}
PE Petroleum Exam Blog
PE Petroleum Exam Blog discusses the PE exam at length; not just the preparation but also the aftermath is covered in this blog. General questions from previous years' exams can be found here.
PE Petroleum Exam Blog
2w ago
Congrats to all those who took the 2024 exam. I truly enjoy reading your comments. Any comments you make can help future test-takers. And of course suggestions for blog/Guidebook/Companion
improvements are always welcome and appreciated. Please remember the blog rule: specific prior PE Exam questions cannot be discussed. General topics, resource suggestions, and testing techniques only
please. Try not to discuss specific problems from prior exams, such as comments like: "...several of the drilling questions with probability...”, as it is too specific as per the test-writers. Thanks, folks!
..
PE Petroleum Exam Blog
1y ago
To all who were bold enough congratulations on taking the 2023 exam! Any comments you take the time to make can help future test-takers prepare. And of course suggestions for blog/Guidebook/Companion
improvements are always welcome and appreciated. I enjoy hearing from all. Please remember the blog rule: specific prior PE Exam questions cannot be discussed. General topics, resource suggestions,
and testing techniques only please. Try not to discuss specific problems from prior exams, such as comments like: "...several of the drilling questions with probability...” it ..
PE Petroleum Exam Blog
2y ago
A gas reservoir produced 1 MMscf gas and 13 MSTB water. The current and initial gas formation volume factors...reservoir modeling predicts two equally possible scenarios for water influx...The initial
gas in place (MMscf) is most likely closest to: A) 27.1 B) 30.1 C) 33.1 D) There is likely not any water influx. This problem is fairly simple; just watch the units. I try to crank these out quickly
and let the chips fall where they may, so it wouldn't surprise me if I had an error floating around on this one. Just remember on gas reservoir problems, 90% of the errors are units, and the las ..
PE Petroleum Exam Blog
2y ago
Well 24-7X was drilled and capable of 2,000 STB/D. However, production was immediately choked back to 900 STB/D from first production due to a combination of contractual and facility issues. In July
of the fourth year...24-7X’s percentage of total production over the last twelve months was closest to: A) 23; B) 23.5; C) 24; D) 24.5. This is a standard DCA problem (with a few tricks). I'll post
the solution later, but feel free to ask any questions/discuss in the meantime. It does take some time to get used to the format and equations in the new SPE Reference ..
PE Petroleum Exam Blog
2y ago
Effective liquid permeability is found in the lab by graphing gas permeability versus the reciprocal mean flowing pressure and extrapolating the reciprocal mean pressure to zero. In this problem, a
rock core filled with gas “A” has a permeability of 40 md with an average flowing pressure of 1.25 atm and has a permeability of 30 md when said mean flowing pressure is doubled. If this same rock
core is filled with gas “B” and then has a permeability of 40 md with a flowing pressure of 2.5 atm, the permeability (md) for gas “B” at a flowing pressure of 5 atm is closest to: A) 30 B) 25 C) 20
D) 15 ..
PE Petroleum Exam Blog
2y ago
Question: I found your blog, thank you for all the information and practice questions and guide work. I am overwhelmed so far in my quest to study for the exam. Seeing your suggestions that I read
SPE Textbook Series #1, #2, #12, and #4 and the 7 volume Petroleum Engineering books makes me think I need to start with a prep course to hone in on how to study efficiently. Based on commenters or
private correspondence do you know which prep course is best suited for the newer CBT test? Answer: I would merely 1) do as many practice problems as possible using the SPE Exam Resource. Once you've
done ..
PE Petroleum Exam Blog
2y ago
The 2022 exam is now in the past! Any comments you test-takers have will help future test-takers to prepare. And any suggestions for blog/Guidebook/Companion improvements are always welcome and
appreciated. I really enjoy hearing from everyone. Please remember the blog rule: specific prior PE Exam questions, in whole or in part, cannot be discussed. General topics, resource suggestions, and
testing techniques only please. Try not to "cross the line" into discussing specific problems from prior exams, such as comments like: "...several of the drilling questions with probability ..
PE Petroleum Exam Blog
2y ago
A company has two good but highly speculative opportunities to invest in, gas field A and oil field B, yet only enough capital to invest in one due to a stiff interest rate of 15% for both. The
investments and expected cash flows are below. Which alternative should be selected based on NPV analysis? A) Investment A; NPV of A >$50,000 more than “B” B) Investment A; NPV of A <$50,000 more
than “B” C) Investment B; NPV of B <$50,000 more than “A” D) Investment B; NPV of B >$50,000 more than “A” A and B Initial invest: $200M and $300M Annual revenue: $100M and $150M Annual expense: $10
PE Petroleum Exam Blog
2y ago
If a 10,000 ft drillstring’s frictional pressure loss is 1,433 psi, and the 12 lbm/gal mud returns fill a 10 ft x 10 ft tank at 6-1/2 inches per minute, the pressure at the base of the drill collar
is closest to? The ID of the drill collars is 2.5 in and pump pressure is 3,000 psi. A) 7825 B) 7819 C) 7813 D) 7800. This problem is solved using the standard mechanical energy balance equation.
Note the only source truly needed to solve it is the SPE Reference (to calculate the pressure from gravity). If you don't include the KE effect, the answer will be off just slightly. Note KE is generally ignored
in the ..
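As a rough illustration of the mechanical energy balance approach mentioned in that snippet, here is my own back-of-the-envelope Python sketch using the quoted numbers; the 0.052 psi/ft per lbm/gal hydrostatic conversion, the neglect of surface velocity, and the sign conventions are my assumptions, and this is not presented as the blog's official solution:

import math

# Quoted data from the snippet
depth_ft = 10_000          # drillstring length (TVD assumed)
friction_psi = 1_433       # drillstring frictional pressure loss
mud_ppg = 12               # mud weight, lbm/gal
pump_psi = 3_000           # pump (surface) pressure
tank_area_ft2 = 10 * 10    # 10 ft x 10 ft tank
fill_rate_ftpm = 6.5 / 12  # 6-1/2 inches per minute, in ft/min
collar_id_in = 2.5

# Hydrostatic pressure of the mud column
hydrostatic_psi = 0.052 * mud_ppg * depth_ft          # about 6240 psi

# Flow rate and velocity in the drill collar (for the kinetic-energy term)
q_ft3ps = tank_area_ft2 * fill_rate_ftpm / 60          # ft^3/s
collar_area_ft2 = math.pi / 4 * (collar_id_in / 12) ** 2
v_ftps = q_ft3ps / collar_area_ft2                     # about 26 ft/s

rho_lbmft3 = mud_ppg * 7.48                            # lbm/ft^3
ke_psi = rho_lbmft3 * v_ftps ** 2 / (2 * 32.17) / 144  # dynamic pressure, psi

p_base_psi = pump_psi + hydrostatic_psi - friction_psi - ke_psi
print(round(hydrostatic_psi), round(ke_psi, 1), round(p_base_psi))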
PE Petroleum Exam Blog
2y ago
Which mechanisms of water intrusion into oil wells are relatively easily controlled (select four only): __ Watered-out layer without crossflow. __ Fractures between injector and producer. __ Moving
oil/water contact. __ Coning. __ Cusping. __ Casing leaks. __ Edge water from poor areal sweeps. __ Gravity segregated layer in a thick reservoir layer with high-vertical permeability. __ Channel
flow behind the casing from primary cementing that does not isolate water-bearing zones from the pay zone. __ Fractures or faults from behind the water zone. A fairly straightforward problem; you will
eithe .. | {"url":"https://www.feedspot.com/infiniterss.php?_src=feed_title&followfeedid=5480453&q=site:http%3A%2F%2Fpepetroexam.blogspot.com%2Ffeeds%2Fposts%2Fdefault","timestamp":"2024-11-06T20:41:59Z","content_type":"text/html","content_length":"46741","record_id":"<urn:uuid:9d5d1c95-8df8-4d2b-9a4b-171e916e2259>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00068.warc.gz"} |
What are the parameters in LLM?
Let’s understand how parameters play an important role in LLM
The scale of present large language models (LLMs) is quantified through parameter count. GPT-3 is noted to possess 175 billion parameters. Phi-1.5, on the other hand, comprises merely 1.3 billion
parameters, while Llama encompasses versions spanning from 7 billion to 70 billion parameters.
So let’s understand this in a simple analogy.
We have seen that the price of a product depends on various factors.
Price of a product = Manufacturing unit price + quantity.
Now here ‘Quantity’ is one of the parameters that we take into consideration for determining the price of a product. Likewise, we can keep on adding more parameters to determine a more accurate price
of the product.
Example: Price of a product = Manufacturing unit price + quantity + logistics + marketing + taxes ….. +, etc., and so on.
Parameters for the price of a product
We can think of the parameters of Large Language Models (LLMs) in the same way. LLMs are again neural networks. The foundational element of a neural network model closely resembles our current method of
determining the price of a product.
parameters connected each other with nodes
Here we have a 3-layer neural network where the nodes are interconnected, and each connection between nodes is a parameter. Just like in neural networks, in each layer the parameters weight the inputs
based on their importance and the results are carried forward to the next layer to determine the price of a product.
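To make the idea of counting parameters concrete, here is a small Python sketch that counts the weights and biases of a toy 3-layer fully connected network; the layer sizes are arbitrary and chosen only for illustration:

# Layer sizes of a toy fully connected network: input -> hidden -> hidden -> output
layer_sizes = [4, 8, 8, 1]

total_params = 0
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    weights = n_in * n_out   # one weight per connection between the two layers
    biases = n_out           # one bias per neuron in the next layer
    total_params += weights + biases

print("total parameters:", total_params)   # 4*8+8 + 8*8+8 + 8*1+1 = 121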
How Large are they for LLM?
Image: https://microsoft.github.io/
LLM workflow with encoder and decoder layer
We can adjust the parameters in the LLM's layers using a Low-Rank Adaptation of Large Language Models (LoRA) config.
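As a sketch of what such a configuration can look like with the Hugging Face peft library; the rank, alpha, and target module names below are illustrative assumptions, not recommended values for any particular model:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # small model used only as an example

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices (assumed value)
    lora_alpha=16,              # scaling factor (assumed value)
    target_modules=["c_attn"],  # attention projection in GPT-2; varies by architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # shows how few parameters LoRA actually trains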
Thanks again, for your time, if you enjoyed this short article there are tons of topics in advanced analytics, data science, and machine learning available in my medium repo. https://medium.com/
Some of my alternative internet presences are Facebook, Instagram, Udemy, Blogger, Issuu, Slideshare, Scribd, and more.
Also available on Quora @ https://www.quora.com/profile/Rupak-Bob-Roy
Let me know if you need anything. Talk Soon. | {"url":"https://bobrupakroy.medium.com/what-are-the-parameters-in-llm-76da7040e607","timestamp":"2024-11-02T00:35:32Z","content_type":"text/html","content_length":"117385","record_id":"<urn:uuid:d2d174b2-29a2-4984-bcd1-1a703efd18db>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00433.warc.gz"} |
Investigating Solids
There are three parts to this activity:
1. Making models of the solids
2. Investigating Euler's Formula
3. Estimating surface areas and volumes
1. Making Models
You will make models of the solids below. They are all polyhedra and the first five of these solids are known as Platonic Solids.
You can make models of these solids using the templates below. You can print these onto paper - or better still use thin card, if your printer can take thin card. Otherwise glue the template onto
thin card before you start cutting (you will see why I want you to use card later).
│Solid│Template │Details │
│ │Tetrahedron │Tetrahedron │
│ │Cube │Cube │
│ │Octahedron │Octahedron │
│ │Dodecahedron │Dodecahedron │
│ │Icosahedron │Icosahedron │
│ │Square Pyramid │Square Pyramid │
│ │Pentagonal Pyramid │Pentagonal Pyramid │
• Print templates onto thin card
• cut out the templates (including the tabs)
• fold along the lines
• apply glue to the tabs and stick them under the adjacent face of the solid
It gets a bit tricky when you have to stick the final tab, but take your time and be careful.
For more tips on how to make your models see Construction Tips.
Make some beautiful models!
2. Investigating Euler's Formula
Once you've made all your models, count up the numbers of Vertices, Edges and Faces on each solid. You will have to find some way to mark them as you count them, so you don't miss any out or count
some more than once.
So, as you count them (using a felt-tip pen) mark each vertex with one color, each edge with a second color, and each face with a third color. When you've finished counting, fill in your answers in
the following table:
│Solid │F │V │E │F + V − E│
│ │Number of Faces│Number of Vertices │Number of Edges│ │
│Cube │ │ │ │ │
│Tetrahedron │ │ │ │ │
│Octahedron │ │ │ │ │
│Icosahedron │ │ │ │ │
│Dodecahedron │ │ │ │ │
│Square pyramid │ │ │ │ │
│Pentagonal pyramid │ │ │ │ │
Complete the final column of the table by calculating the value of F + V − E in each case.
What did you find?
You should have got the same answer in each case.
This result is known as Euler's Formula.
It applies to all Polyhedra.
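If you like, you can also check Euler's Formula with a short Python script once you have filled in your own table (the counts below are the standard ones for these solids; the point is just to confirm that F + V − E comes out the same every time):

# (faces, vertices, edges) for each solid
solids = {
    "Tetrahedron":        (4, 4, 6),
    "Cube":               (6, 8, 12),
    "Octahedron":         (8, 6, 12),
    "Dodecahedron":       (12, 20, 30),
    "Icosahedron":        (20, 12, 30),
    "Square pyramid":     (5, 5, 8),
    "Pentagonal pyramid": (6, 6, 10),
}

for name, (F, V, E) in solids.items():
    print(name, "F + V - E =", F + V - E)   # the same value for every solid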
3. Estimating Surface Area
If you look at the formulas for volumes and surface areas (given on the pages in the "details" column of the first table above), you will see that some of them are quite complicated. For example, for
the dodecahedron the formulas are:
Surface Area = 3×√(25+10×√5) × (Edge Length)^2
Volume = (15+7×√5)/4 × (Edge Length)^3
Wow! They look pretty complicated, don't they?
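If you are curious what those formulas actually give, you can evaluate them with a few lines of Python (the edge length of 3 cm below is just an assumed example value, not the edge length of your template):

import math

edge = 3.0   # assumed edge length in cm, for illustration only

surface_area = 3 * math.sqrt(25 + 10 * math.sqrt(5)) * edge ** 2
volume = (15 + 7 * math.sqrt(5)) / 4 * edge ** 3

print("Dodecahedron surface area:", round(surface_area, 1), "cm^2")
print("Dodecahedron volume:", round(volume, 1), "cm^3")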
So, let's see how we could estimate the surface areas and volumes using our templates or models:
Surface Areas
To estimate the surface areas, we could just use a grid and count squares.
But it is easier to do this using the template, isn't it? Can you see why? And can you see that the surface area of the solid is exactly the same as the area of the template? Just one thing though -
we mustn't include the tabs. They're not part of the surface area, are they?
Example: Pentagonal Pyramid
I have used a 1 cm^2 grid.
I estimate the surface area of the pentagonal pyramid to be about 98cm^2
Use the same method to estimate the surface area of each of your solids.
Complete the following table:
│Solid │Estimate of surface area (cm^2) │
│Cube │ │
│Tetrahedron │ │
│Octahedron │ │
│Icosahedron │ │
│Dodecahedron │ │
│Square pyramid │ │
│Pentagonal pyramid │ │
4. Estimating Volumes
In the activity Discover Capacity, you used a measuring cup to measure the capacities (or volumes) of different cups or containers.
Well, it isn't a good idea to try to fill our solids with milk or water, is it?
But we could use sand, or salt.
If you don't have any sand at your house, get a 500g or 1 lb packet of salt.
But don't use the salt for cooking after you've finished with it!
Also get a small funnel to help you pour the salt. Then all you need to do is make a small hole in one of the vertices of your model and pour in the salt until it is full.
Perhaps you can see now why I told you to use card to make the solids. If you use paper the faces of the solid lose their shape and you won't get a good answer.
Here is an example for the icosahedron:
Then, pour the salt out from your solid into a measuring cup. The measuring cup may be calibrated in ml, but remember that a capacity of 1 ml is exactly the same as a volume of 1 cm^3.
Make sure the surface of the salt is even.
Then read off the volume in the measuring cup as accurately as you can.
When I did this experiment, I found the amount of salt was 199 cm^3, so the volume of my icosahedron was also estimated to be 199 cm^3.
Do this for each of your solids (you can reuse the salt).
Complete the following table:
│Solid │Estimate of volume (cm^3) │
│Cube │ │
│Tetrahedron │ │
│Octahedron │ │
│Icosahedron │ │
│Dodecahedron │ │
│Square pyramid │ │
│Pentagonal pyramid │ │ | {"url":"https://mathsisfun.com/activity/solids.html","timestamp":"2024-11-12T03:43:02Z","content_type":"text/html","content_length":"12846","record_id":"<urn:uuid:06db4710-e054-4711-929a-e4791ce43574>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00267.warc.gz"} |
Advent of Code 2020: Day 06 using Python sets
Another short one, this will be quick.
Things mentioned in this post: set theory, list comprehension, intersection, map function, destructuring/splat
The Challenge Part 1
Link to challenge on Advent of Code 2020 website
The challenge talks about a task where you need to find out how many unique members there are for each set. One of the examples given was
Regardless of what the actual colour text says about this, ultimately the task is to find the unique set of letters, and count them. In this case, just a, b, and c, or 3.
Python sets
Python has built-in sets, which are very versatile. We can simply grab all the data, and split them into entries as we did before:
data = open("input.txt").read().split("\n\n")
An example entry (one member of data) looks like this:
This has the new-line in it, so we need to strip that out for this first part
Then, we simply stick this in a set() which automatically treats each character as a separate member, and de-duplicates it for us.
set(entry.replace("\n", ""))
{'c', 'd', 'e', 'h', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'v', 'y', 'z'}
The question requires summing the total of this, so this can just go in a len() to find how many set members
len(set(entry.replace("\n", "")))
Repeat this over all entries in the dataset:
sum([len(set(entry.replace("\n",""))) for entry in open("input.txt").read().split("\n\n")])
That's the whole solution for Part 1
The Challenge Part 2
Part 2 switches things up a bit, instead of having to find unique members of the whole lot, the ask is that you find the common members for each item in each entry. So for the example:
Only a is common.
So, we have to break each entry down into individual items. Python's sets have an intersection() method which can give the common items. However, as we have to find the common items across multiple
items per entry, we have to do multiple intersection() calls.
So, taking an entry, we can split it into items using the regular split()
entry.split()
['donpevkjhymzl', 'ezyopckdlnvmj']
Python has a set.intersection() method that takes an intersection of every set argument passed to it. For example, if we make sets out of the two items:
set.intersection(set(items[0]), set(items[1]))
{'d', 'e', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'v', 'y', 'z'}
However, we have to be able to generalize this for an arbitrary number of items. We can do this with destructuring (or "splatting", or "unpacking"), allowing us to pass an arbitrary (and variable)
number of arguments to set.intersection()
set.intersection(*[set(item) for item in items])
(same output)
Here, we use list-comprehension to apply set() to each of the items. We can also use the map() function, which does pretty much the same thing:
set.intersection(*map(set, items))
(same output)
The * here is the "splat" operator, which means "unpack this list, and use each of its members as arguments to the function". The terminology is somewhat unclear here, some call it "splat", some call
it "unpack", some call it "destructure", some call it "expanding".
The challenge wants the length of this set, and to sum the lengths of all entries, so it's a case of putting this in a loop, and summing the results:
total = 0
for entry in data:
    items = entry.split()
    common = set.intersection(*map(set, items))
    total += len(common)
print("total", total)
Or, to code-golf this down to a single line:
sum(len(set.intersection(*map(set, entry.split()))) for entry in open("input.txt").read().split("\n\n"))
The end.
| {"url":"https://dev.to/meseta/advent-of-code-2020-day-06-using-python-sets-59af","timestamp":"2024-11-08T18:22:19Z","content_type":"text/html","content_length":"121871","record_id":"<urn:uuid:8fa8e0a2-a16a-498d-94f8-b914e198ef31>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00020.warc.gz"}
How the PERCENTILE_CONT() function works in Mariadb?
The PERCENTILE_CONT() function is a built-in function in Mariadb that returns the value of a given percentile within a group of values.
The PERCENTILE_CONT() function is a built-in function in Mariadb that returns the value of a given percentile within a group of values. The function is useful for finding the median, quartiles, or
any other percentile of a distribution. The function is also known as PERCENTILE_CONTINUOUS().
The syntax of the PERCENTILE_CONT() function is as follows:
PERCENTILE_CONT(percentile) WITHIN GROUP (ORDER BY expr) OVER (
    PARTITION BY expr1, expr2, ...
)
Where percentile is a decimal value between 0 and 1, inclusive, that specifies the percentile to compute; expr is the expression whose ordered values the percentile is computed over; and expr1, expr2, … are
the expressions that define the partition or the group of values. The function returns a decimal value that is the value of the given percentile.
Example 1: Calculating the median of students’ scores
The following example shows how to use the PERCENTILE_CONT() function to calculate the median of students’ scores in a table called students:
SELECT name, score, PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY score) OVER () AS Median
FROM students;
The output is:
| name | score | Median |
| Bob | 50 | 70 |
| Alice| 60 | 70 |
| Eve | 70 | 70 |
| Dave | 80 | 70 |
| Carol| 90 | 70 |
The function returns the median of the students’ scores, which is 70. The median is the value that divides the distribution into two equal halves, or the 50th percentile. The function uses a linear
interpolation method to compute the percentile value when it is not an integer rank.
Example 2: Calculating the quartiles of products’ sales
The following example shows how to use the PERCENTILE_CONT() function to calculate the quartiles of products’ sales in a table called products:
SELECT product, category, sales,
PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY sales) OVER (PARTITION BY category) AS Q1,
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY sales) OVER (PARTITION BY category) AS Q2,
PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY sales) OVER (PARTITION BY category) AS Q3
FROM products;
The output is:
| product | category | sales | Q1 | Q2 | Q3 |
| A | Books | 100 | 175 | 250 | 325 |
| B | Books | 200 | 175 | 250 | 325 |
| C | Books | 300 | 175 | 250 | 325 |
| D | Books | 400 | 175 | 250 | 325 |
| E | Toys | 50 | 75 | 100 | 125 |
| F | Toys | 100 | 75 | 100 | 125 |
| G | Toys | 150 | 75 | 100 | 125 |
The function returns the quartiles of the products’ sales, which are the values that divide the distribution into four equal parts, or the 25th, 50th, and 75th percentiles. The function computes the
quartiles for each category separately, using the partition by clause. The function uses a linear interpolation method to compute the percentile value when it is not an integer rank.
Related Functions
There are some other functions in Mariadb that are related to the PERCENTILE_CONT() function. They are:
• PERCENTILE_DISC(): This function returns the value of a given percentile within a group of values. The function is similar to the PERCENTILE_CONT() function, but it returns the discrete value
that is closest to the given percentile, rather than using a linear interpolation method. The function is also known as PERCENTILE_DISCRETE().
• PERCENT_RANK(): This function returns the relative rank of a row within a group of rows. The function is similar to the PERCENTILE_CONT() function, but it returns the percentile of a given value,
rather than the value of a given percentile. The function is also known as PERCENTILE_RANK().
• MEDIAN(): This function returns the median of a set of values. The function is equivalent to the PERCENTILE_CONT(0.5) function, but it does not require the WITHIN GROUP clause.
The PERCENTILE_CONT() function is a useful function in Mariadb that allows you to calculate the value of a given percentile within a group of values. The function is helpful for finding the median,
quartiles, or any other percentile of a distribution. The function uses a linear interpolation method to compute the percentile value when it is not an integer rank. You can also use other functions
like PERCENTILE_DISC(), PERCENT_RANK(), and MEDIAN() to manipulate percentiles in different ways. I hope this article helped you understand how the PERCENTILE_CONT() function works in Mariadb. | {"url":"https://www.sqliz.com/posts/how-percentile_cont-works-in-mariadb/","timestamp":"2024-11-13T02:00:02Z","content_type":"text/html","content_length":"22103","record_id":"<urn:uuid:3af405ce-e551-4c27-af4b-7ffd5113674c>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00542.warc.gz"} |
Multiplication Table
76 Times Table
Greetings, fellow math enthusiasts! Today, we embark on an exhilarating journey through the intricate realm of the 76 times table.
Join me as we dive into the captivating world of multiplication and unravel the mysteries that lie within.
Prepare yourself for an exciting adventure filled with numerical discoveries and fascination. Are you ready? Let's begin our expedition!
Setting the Stage
Before we delve into the depths of the 76 times table, let's set the stage. The 76 times table is a sequence of numbers obtained by multiplying 76 by each natural number.
Though it may initially seem like uncharted territory, fear not! Together, we will navigate through the complexities and uncover the secrets that await us.
Unveiling the 76 Times Table
Our journey commences with the most elementary multiplication: 76 multiplied by 1. As expected, the result is simply 76. Take a moment to appreciate the simplicity and elegance of this calculation.
Now, let's progress and multiply 76 by 2. Brace yourself for a leap to 152. Notice how the numbers increase with each multiplication? It's captivating to witness the growth and progression within the
Next, we multiply 76 by 3, resulting in 228. As we venture further, intriguing patterns begin to reveal themselves, captivating our attention and igniting our mathematical curiosity. The
ever-changing outcomes beckon us to explore deeper.
Venturing Deeper into the Table
Let's fast forward a bit and examine the outcome of multiplying 76 by 10. Lo and behold, the result is 760! We have now entered the realm of three digits, witnessing the expansion of numbers right
before our eyes.
Continuing our exploration, we arrive at 76 multiplied by 20, bringing us to a grand total of 1,520. The magnitude of these numbers is awe-inspiring, showcasing the power and exponential growth
inherent in the realm of multiplication.
The Boundless Magnitude
As our expedition progresses, we encounter 76 multiplied by 50, resulting in a staggering 3,800. The numbers soar to new heights, captivating our imagination and stretching our numerical horizons.
Now, brace yourself for a leap into the thousands. 76 multiplied by 100 reveals an impressive 7,600. We have reached a significant milestone in our expedition!
Continuing our exploration, multiplying 76 by 1,000 gives us a remarkable 76,000. And if we push even further, multiplying by 10,000 brings us to an astonishing 760,000.
The magnitude of these numbers is a testament to the vast possibilities that lie within the realm of mathematics.
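For readers who would like to verify these products, or extend the table even further, here is a small Python sketch (purely illustrative; the limit of 20 is an arbitrary choice):

```python
# Print the 76 times table up to a chosen limit and spot-check a few
# of the products mentioned above.
def times_table(base, upto):
    return [base * n for n in range(1, upto + 1)]

table = times_table(76, 20)
print(table[:3])        # [76, 152, 228]
print(76 * 50)          # 3800
print(76 * 10_000)      # 760000
```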
Reflection and Conclusion
What an incredible journey through the 76 times table! From humble beginnings to numbers reaching hundreds of thousands, this expedition has allowed us to witness the beauty and power of multiplication.
The world of multiplication tables is a captivating domain, teeming with endless wonders. The 76 times table provides us with a glimpse into the infinite potential that numbers possess.
So, the next time you encounter the number 76, remember the magic it holds within the realm of multiplication. Numbers are not mere symbols; they are gateways to a world of exploration and discovery.
Until our paths cross again, keep exploring, keep learning, and keep embracing the marvels of mathematics. Happy multiplying, my fellow adventurers!
Seventy-six Multiplication Table
Read, repeat, and learn the seventy-six times table, and check yourself by taking the test below.
Also check these times tables: 71 times table, 72 times table, 73 times table, 74 times table, 75 times table, 76 times table, 77 times table, 78 times table, 79 times table, 80 times table
76 Times Table Chart
Table of 76
76 Times table Test
Multiplication of 76
Reverse Multiplication of 76
Shuffled Multiplication of 76
How much is 76 multiplied by other numbers? | {"url":"https://www.printablemultiplicationtable.net/76-times-table.php","timestamp":"2024-11-03T21:44:35Z","content_type":"text/html","content_length":"28069","record_id":"<urn:uuid:9cd3bed8-c289-4d23-bdf0-547b1e9c3f6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00355.warc.gz"} |
algebraic modeling worksheets
Test Modeling and solving 2 step equations
Quiz #1: Modeling/Interpreting Expressions
Basic Equations and Modeling
9/22 Homework: Modeling With Expressions
Modeling Equations and Review
Modeling with Expressions
Modeling with Algebra Tiles #1
Explore Worksheets by Subjects
Explore printable algebraic modeling worksheets
Algebraic modeling worksheets are an essential tool for teachers looking to enhance their students' understanding of Math and Algebra concepts. These worksheets provide a structured approach to
learning, allowing students to practice and apply their knowledge in a variety of problem-solving scenarios. With a range of difficulty levels and topics covered, algebraic modeling worksheets cater
to the diverse needs of students in different grades. Teachers can utilize these resources to supplement their lesson plans, reinforce key concepts, and assess students' progress. By incorporating
algebraic modeling worksheets into their teaching strategies, educators can effectively engage their students and foster a deeper understanding of Math and Algebra.
Quizizz, a popular online platform for creating and sharing educational quizzes, offers a wealth of resources for teachers, including algebraic modeling worksheets. In addition to quizzes, Quizizz
provides a variety of other offerings such as interactive lessons, flashcards, and collaborative learning activities. This platform allows teachers to easily create and customize content tailored to
their students' needs, making it an invaluable tool for enhancing Math and Algebra instruction. The user-friendly interface and engaging features of Quizizz make learning enjoyable for students,
while providing teachers with valuable insights into their students' progress. By incorporating Quizizz and its diverse offerings into their teaching strategies, educators can effectively support
their students' growth in Math and Algebra. | {"url":"https://quizizz.com/en/algebraic-modeling-worksheets","timestamp":"2024-11-14T07:23:42Z","content_type":"text/html","content_length":"157775","record_id":"<urn:uuid:5ef6eed4-0fdd-4ddf-b927-b99d199e7b42>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00056.warc.gz"} |
A solid consists of a cone on top of a cylinder with a radius equal to that of the cone. The height of the cone is 39 and the height of the cylinder is 17 . If the volume of the solid is 150 pi, what is the area of the base of the cylinder? | HIX Tutor
A solid consists of a cone on top of a cylinder with a radius equal to that of the cone. The height of the cone is #39 # and the height of the cylinder is #17 #. If the volume of the solid is #150 pi
#, what is the area of the base of the cylinder?
Answer 1
The area of the base of the cylinder is $5 \pi$.
The formula for volume of a cone is: #V=pir^2h/3#
The formula for volume of a cylinder is: #V=pir^2h#
Therefore the formula for the total volume (#V_T#) of the given solid is:
#V_T=pir^2h_1/3+pir^2h_2# (where #h_1=#height of cone, and #h_2=#height of cylinder. The radius, #r#, is the same for both.)
We need to calculate #r# in order to calculate the area of the base of the cylinder, hence we fill in the data given:
#150pi=pir^2(39)/3+pir^2(17)#
#150pi=13pir^2+17pir^2#
#150pi=30pir^2#
We cancel the like term (#pi#) on each side:
#150=30r^2#
Divide both sides by #30#:
#r^2=5#
The following is the formula for the area of a circle's (or cylinder's) base:
#A=pir^2#
#A=pi(5)=5pi#
Answer 2
To find the area of the base of the cylinder, we first need to determine the radius, which is shared by the cone and the cylinder. The volume of the solid is the sum of the volumes of the cone and the cylinder:

[ V = \frac{1}{3}\pi r^2 h_{\text{cone}} + \pi r^2 h_{\text{cyl}} ]

Given that the total volume is ( 150\pi ), the height of the cone is ( 39 ), and the height of the cylinder is ( 17 ), we can set up the equation:

[ 150\pi = \frac{1}{3}\pi r^2 (39) + \pi r^2 (17) ]

[ 150\pi = 13\pi r^2 + 17\pi r^2 = 30\pi r^2 ]

Solving for ( r^2 ):

[ r^2 = \frac{150\pi}{30\pi} = 5 ]

Finally, we can find the area of the base of the cylinder using the formula for the area ( A ) of a circle:

[ A = \pi r^2 = \pi \times 5 = 5\pi ]

Therefore, the area of the base of the cylinder is ( 5\pi ), in agreement with Answer 1.
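For a quick numerical cross-check of the algebra above, here is a short Python sketch (purely illustrative, not part of either original answer) that recovers the same base area from the stated total volume:

```python
import math

h_cone, h_cyl = 39, 17
total_volume = 150 * math.pi

# V = (1/3)*pi*r^2*h_cone + pi*r^2*h_cyl  =>  pi*r^2 * (h_cone/3 + h_cyl) = V
r_squared = total_volume / (math.pi * (h_cone / 3 + h_cyl))
base_area = math.pi * r_squared

print(r_squared)            # 5.0
print(base_area / math.pi)  # 5.0  -> the base area is 5*pi
```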
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/a-solid-consists-of-a-cone-on-top-of-a-cylinder-with-a-radius-equal-to-that-of-t-73-8f9afa3fd6","timestamp":"2024-11-09T21:08:30Z","content_type":"text/html","content_length":"587461","record_id":"<urn:uuid:001b9a5c-0c82-450f-af36-4b171da81aba>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00004.warc.gz"} |
3D B-Spline Representation | ECAD WIKI
3D B-Spline Representation
Change History
Id Subject Date
Latest Commit Improved display of incoming & outgoing relations. Added support for deprecation. 2023-01-31
KBLFRM-923 Added clarification for the usage of B-Splines in KBL. 2020-06-26
For historical reasons the documentation of the B_spline_curve is not clear and unfortunately leaves room for interpretation. This Implementation Guideline clarifies the relevant facts and describes
the valid interpretations in the field.
The following section contains a short wrap-up about NURBS (Non-uniform rational B-spline). The description in this section aims primarily at an informal understanding and not at a precise and 100%
correct mathematical definition. It contains just enough information to understand the definitions in this guideline and is a summary from multiple sources. For more details see the following
links ^1^2^3 from which this summary has been derived.
NURBS are a commonly used as a representation of surfaces and curves in computer-aided design and are part of numerous industry wide standards. For the KBL & VEC only the representation of curves
(the 3D centerline of Segments) as NURBS is relevant. NURBS are representing a curve as a mathematical function. The appearance of the curve can be influenced by a set of parameters:
• Degree $d$: This is usually one of $\left [1,2,3,5 \right ]$. Sometimes, there are references to the order of a NURBS, where the order is $d + 1$. The degree determines how many control points (at most $d + 1$) influence any given point of the curve.
• Control Points: The control points define the shape curve. Each NURBS can have $n$ control points, where $n > d$.
• Weight: Each control point can define an individual weight.
• Knot vector: The knot vector defines where and how the control points affect the NURBS curve. The number of knots is equal to $n + d + 1$. The values in the vector have a non-decreasing order.
However, consecutive knots can have the same value, e.g. $(0,0,1,2,3,4,4)$ is a valid vector. A number of coinciding knots is sometimes referred to as a knot with a certain multiplicity. A knot
where the multiplicity is equal to the order ($d+1$) is a full multiplicity knot.
Special Cases of NURBS
The NURBS (Non-Uniform Rational B-Spline) are the most common form. There are groups of NURBS that have special properties:
1. If all control points have the same weight ($w=1.0$) the B-Spline is called non-rational
2. If the knot vector starts and ends with a full multiplicity knot the B-spline is called clamped. A clamped B-spline starts at the first control point and ends at the last one.
3. Uniform: There are some slightly different interpretations about the defintion of uniformity. In general uniform refers to the distribution of the knot values in the knot vector. Some sources
(e.g. ^2) define, that if the knot vector is clamped, all other knots have a multiplicity of one, and all knots (values) have the same distance, the B-spline is called uniform. For example a
NURBS with $d=4$ and with a knot vector $(0,0,0,0,0,1,2,3,4,5,5,5,5,5)$ would be uniform. Other sources (e.g. ^4) differentiate between clamped uniform and unclamped uniform:
□ Clamped uniform would correspond to the defintion above,
□ Unclamped uniform would require all knots to have a multiplicity of one, and all knots (values) to have the same distance (e.g. $(0,1,2,3,4,5,6,7,8,9,10,11,12)$).
Current Situation in the KBL
The intention of the KBL was to keep the B-spline data model as simple as possible. Therefore the data model just contains the control points and the degree, assuming that all other parameters have an unambiguous default when the set of valid NURBS is restricted to Uniform non-rational B-Splines (UNRBS). This is the reason why the KBL model defines neither a weight nor a knot vector.
Unfortunately, the definition of the KBL was not as precise as it could have been. No concrete definition was made as to whether these are clamped or unclamped uniform B-Splines. At the moment
implementations for both variants exist.
A subsequent restriction to one of the two variants was discussed in the relevant committees and considered impracticable. The reasons for this were, on the one hand, the large volume of existing
data and, on the other hand, the non-trivial conversion process between the two variants, which makes it virtually impossible to implement in practice.
The B_spline_curve in the KBL represents a uniform non-rational B-spline (either clamped or unclamped). When rendering 3D KBL data, the renderer has to use external knowledge to determine which
variant is used.
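Purely as an illustration of this ambiguity (this sketch is ours and is not part of the KBL or VEC specifications), the following Python snippet builds both a clamped and an unclamped uniform knot vector for the same degree and control points and evaluates the resulting curves with SciPy. Only the clamped variant starts and ends exactly at the first and last control points; the control point coordinates below are made up.

```python
# Same degree and control points, two different implicit uniform knot vectors.
import numpy as np
from scipy.interpolate import BSpline

def uniform_knots(n_ctrl, degree, clamped):
    """Build a uniform knot vector with n_ctrl + degree + 1 entries."""
    if clamped:
        interior = np.arange(1, n_ctrl - degree)            # interior knots, multiplicity 1
        return np.concatenate(([0.0] * (degree + 1),         # full-multiplicity start knot
                               interior,
                               [float(n_ctrl - degree)] * (degree + 1)))
    return np.arange(n_ctrl + degree + 1, dtype=float)       # all knots with multiplicity 1

ctrl = np.array([[0, 0, 0], [1, 2, 0], [3, 2, 1], [4, 0, 1], [6, 1, 2]], dtype=float)
degree = 3

for clamped in (True, False):
    t = uniform_knots(len(ctrl), degree, clamped)
    spline = BSpline(t, ctrl, degree)
    # The curve is only well defined on [t[degree], t[n_ctrl]].
    u = np.linspace(t[degree], t[len(ctrl)], 5)
    pts = spline(u)
    print("clamped" if clamped else "unclamped", pts[0], pts[-1])
```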
Note: Due to this fact, the B-Spline modeling in VEC version 1.2.0 and higher has been extended in a way so that all information of a NURBS can be represented. | {"url":"https://ecad-wiki.prostep.org/specifications/kbl/guidelines/topology/3d-bsplines/","timestamp":"2024-11-12T13:16:37Z","content_type":"text/html","content_length":"30060","record_id":"<urn:uuid:eac68f0c-e844-4079-aaba-8c8cd36b657a>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00546.warc.gz"} |
Antoine Augustin Cournot (1801-1877) - HKT Consultant
Economic Theorists
Antoine Augustin Cournot (1801-1877)
French philosopher, mathematician and economist, Antoine Augustin Cournot has been rightly hailed as one of the greatest of the Proto-Marginalists. The unique insights of his major economics work,
Researches into the Mathematical Principles of Wealth (1838) were without parallel. Although neglected in his time, the impact of Cournot’s work on modern economics can hardly be overstated.
Antoine Augustin Cournot was born in the small town of Gray (Haute-Saône). He was educated in the schools of Gray until he was fifteen. Subsequently, for the next four years, he worked haphazardly as
a clerk in a lawyer’s office. Cournot directed his own studies throughout this time, mostly centered around philosophy and law. Inspired by the work of Laplace, Cournot realized that he had to learn
mathematics if he was to pursue his philosophical aspirations. So, at the relatively ripe age of nineteen, he enrolled in a mathematical preparatory course at a school in Besançon. He subsequently
won entry into the École Normale Supérieure in Paris in 1821.
For political reasons, the ENS was closed down in 1822 and so Cournot transferred to the Sorbonne, obtaining a licentiate in mathematics in 1823. He threw himself wholeheartedly into the stimulating
intellectual and scientific atmosphere of Paris, attending the seminars at the Academie des Sciences and the salon of the economist Joseph Droz. Among his main intellectual influences were Laplace,
Lagrange and Hachette, a former disciple of MARIE ANTOINE CONDORCET, who instilled in him the principles of mathématique sociale, i.e. the idea that the social sciences, like the natural, could be
dealt with mathematically. Cournot counted the young mathematician Lejeune Dirichlet as a close friend.
From 1823, Antoine Augustin Cournot was employed as a literary advisor to Marshal Gouvoin Saint Cyr and a tutor to his son. For the next ten years, Cournot would remain in Paris in this leisurely
capacity, pursuing his studies and research in his own way. In 1829, Cournot acquired a doctorate in sciences, focusing on mechanics and astronomy. After Saint Cyr’s death in 1830, Cournot took it
upon himself to edit and publish the remaining volumes of his late employer’s memoirs.
Cournot’s thesis and a few of his articles brought him to the attention of the mathematician Siméon-Denis Poisson who urged him to return to academia. Cournot refused at first but, after his
engagement with the Saint Cyr family ended in 1833, he took up a temporary appointment at the Academy in Paris. It was during this time that he translated John Herschel’s Astronomy (1834) and
Dionysus Lardner’s Mechanics (1835).
In 1834, through the good offices of Poisson, Cournot found a permanent appointment as professor of analysis and mechanics at Lyons. A year later, Poisson secured him a rectorship at the Academy of
Grenoble. Although his duties were mostly administrative, Cournot excelled at them. In 1838, (again, at the instigation of the loyal Poisson), Cournot was called to Paris as Inspecteur Général des
Études. In that same year, he was made a Knight of the Légion d’honneur (he was elevated to an Officer in 1845).
It was in this year that Cournot published his economics masterpiece, the Recherches (1838). Cournot begins with some preliminary remarks on the role of mathematics applied to the social sciences.
He announces that his purpose in using mathematics is merely to guide his reasoning and illustrate his argument rather than lead to any numerical calculations. He acknowledges (and disparages) N.F.
Canard as his only predecessor.
In his first three chapters, he runs through the definition of wealth, absolute vs. relative prices and the law of one price. Then, in Chapter 4, he unveils his demand function. He writes it in
general form as D = F(p). He assumes that F(.) is continuous and takes it as an empirical proposition that the demand function is downward-sloping (the loi de débit, “law of demand”) and proceeds to
draw it in price-quantity space. He also introduces the idea of “elasticity”, but does not write it down in a mathematical formula.
It is important to note that Cournot’s “demand function” is not a demand schedule in the modern sense. His curve, D = F(p) merely summarizes the empirical relationship between price and quantity
sold, rather than the conceptual relationship between price and the quantity sought by buyers. Cournot refuses to derive demand from any “utility”-based theories of individual behavior. As he notes,
the “accessory ideas of utility, scarcity, and suitability to the needs and enjoyments of mankind…are variable and by nature indeterminate, and consequently ill suited for the foundation of a
scientific theory” (Cournot, 1838: p.10). He satisfies himself with merely acknowledging that the functional form of F(.) depends on “the utility of the article, the nature of the services it can
render or the enjoyments it can procure, on the habits and customs of the people, on the average wealth, and on the scale on which wealth is distributed.” (1838: p.47).
In Chapter 5, Cournot jumps immediately into an analysis of monopoly. Here, the concept of a profit-maximizing producer is introduced. Cournot introduces the cost function f(D) and discusses
decreasing, constant and increasing costs to scale. He shows mathematically how a producer will choose to produce at a quantity where marginal revenue is equal to marginal cost (he re-expresses
marginal cost as a function of price in its own right, f'(D(p)) = y(p)). In Chapter 6, he examines the impact of various forms of taxes and bounties on price and quantity produced, as well as their
impact on the income of producers and consumers.
In Chapter 7, Cournot presents his famous duopoly model. He sets up a mathematical model with two rival producers of a homogeneous product. Each producer is conscious that his rival’s quantity
decision will also impact the price he faces and thus his profits. Consequently, each producer chooses a quantity that maximizes his profits subject to the quantity reactions of his rival. Cournot
mathematically derives a deterministic solution as the quantities chosen by the rival producers are in accordance with each other’s anticipated reactions. Cournot showed how this equilibrium can be
drawn as the intersection of two “reaction curves”. He depicts a stable and an unstable equilibrium in Figures 2 and 3 respectively.
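In modern textbook notation (not Cournot's own), the logic of these reaction curves can be illustrated with a short numerical sketch; the linear inverse demand and cost parameters below are made up for the example. Each firm best-responds to its rival's quantity, and iterating the reaction functions converges to the equilibrium where both reaction curves intersect.

```python
# Two firms, linear inverse demand P = a - b*(q1 + q2), constant marginal cost c.
# Best response: q_i = (a - c - b*q_j) / (2b); the symmetric equilibrium is (a - c)/(3b).
a, b, c = 100.0, 1.0, 10.0

def best_response(q_rival):
    """Profit-maximizing quantity given the rival's output (the reaction curve)."""
    return max((a - c - b * q_rival) / (2 * b), 0.0)

q1 = q2 = 0.0
for _ in range(50):                      # iterate the two reaction curves
    q1, q2 = best_response(q2), best_response(q1)

analytic = (a - c) / (3 * b)
print(q1, q2, analytic)                  # all three approach 30.0
```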
Comparing solutions, Cournot notes that under duopoly, the price is lower and the total quantity produced greater than under monopoly. He runs with this insight, showing that as the number of
producers increases, the quantity becomes greater and the price lower. In Chapter 8, he introduces the case of unlimited competition, i.e. where the quantity of producers is so great that the entry
or departure of an individual producer has a negligible effect on the total quantity produced. He goes on to derive the prices and quantities in this "perfectly competitive" situation, in particular
showing that, at the solution, price is equal to marginal cost.
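The same made-up parameters also illustrate this limit argument in modern form: in the symmetric model with n producers, each firm's equilibrium output is (a - c) / ((n + 1)b), so total output rises and the market price falls toward marginal cost as n grows.

```python
# Price in the symmetric n-firm Cournot equilibrium with linear demand.
a, b, c = 100.0, 1.0, 10.0

for n in (1, 2, 5, 20, 100):
    q_firm = (a - c) / ((n + 1) * b)     # each firm's equilibrium quantity
    price = a - b * n * q_firm           # market price at total output n*q_firm
    print(n, round(price, 2))            # 55.0, 40.0, 25.0, 14.29, 10.89 -> tends to c = 10
```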
In the remainder of his book, Cournot takes up what he calls the “communication of markets”, or trade of a single good between regions. In Chapter 10, he analyzes two isolated countries and one
homogeneous product. He shows that the impact of opening trade between the two countries leads to the equalization of prices, with the lower cost producer exporting to the higher cost country.
Cournot tries to prove that there are conditions where the opening of trade will lead to a decline in the quantity of the good and lower revenue. He then proceeds to discuss the impact of import and
export taxes and subsidies (an algebraic error here was spotted later by Edgeworth (1894)). On account of this, Cournot raises doubts in Chapter 12 about the "gains from trade" and defends the
profitability of import duties.
Finally, Cournot also acknowledges that the solutions obtained via his partial equilibrium theory are incomplete. He recognizes the need to take multiple markets into account and to try to solve for
the general equilibrium, but ‘this would surpass the powers of mathematical analysis’ (Cournot, 1838: p.127).
Cournot’s 1838 work received hardly any response when it came out. The denizens of the French Liberal School, who dominated the economics profession in France at the time, took no notice of it,
leaving Cournot crushed and bitter. In 1839, plagued by ill-health, Poisson asked Cournot to represent him at the concours d’agrégation de mathématiques at the Conseil Royal. After Poisson died in
1840, Cournot continued on at the Conseil as a deputy to Poisson’s successor, the mathematician Louis Poinsot.
In 1841, Cournot published his lecture notes on analysis from Lyons, dedicating the resulting Traité to Poisson. In 1843, he made his first stab at probability theory in his Exposition. He
differentiated between three types of probabilities: objective, subjective and philosophical. The former two follow their standard ontological and epistemological definitions. The third category
refers to probabilities “which depend mainly on the idea that we have of the simplicity of the laws of nature.” (1843: p.440).
After the 1848 Revolution, Cournot was appointed to the Commission des Hautes Études. It was during this time that he wrote his first treatise on the philosophy of science (1851). In 1854, he became
rector of the Academy at Dijon. However, Cournot’s lifelong eye-sight problem began getting worse. Cournot retired from teaching in 1862 and moved back to Paris.
In 1859, Cournot wrote his Souvenirs, a haunting autobiographical memoir (published posthumously in 1913). Despite the dark pessimism about the decline of his creative powers, he wasn’t quite yet
finished. He published two more philosophical treatises in 1861 and 1872 which sealed his fame in the French philosophy community, but did nothing to advance his economics. He took another stab at
economics with his Principes (1863), which, on the whole, was merely a restatement of the 1838 Recherches without the math and in more popular prose. Once again, it was completely neglected. A
Journal des économistes review churlishly claimed that Cournot had “not gone beyond Ricardo”, and Cournot’s bitterness increased accordingly.
However, by this time the Marginalist Revolution had already started. Léon Walras (1874), who had read Cournot’s work early on, argued that his own theory was but a multi-market generalization of
Cournot’s partial equilibrium theory (indeed, the notation is almost identical). W. Stanley Jevons, who had not read him, nonetheless hailed him as a predecessor in later editions of his Theory
(1871). Francis Ysidro Edgeworth (1881) went to Cournot to pick up his theory of perfect competition. Alfred Marshall claimed to have read him as far back as 1868, and extensively acknowledged
Cournot’s influence in his 1890 textbook, particularly in his discussion of the theory of the firm.
Cournot lived long enough to greet the works of Walras and Jevons with a warm sense of vindication. This is evident in Cournot’s Revue sommaire (1877), a long, non-mathematical statement of his
earlier work. He seemed particularly grateful that Walras had bravely climbed the steps of the Institute de France and accused the academicians of injustice towards Cournot. He died that same year.
Walras, Jevons and the other young blades complained loudly that Cournot had been unjustly neglected by his contemporaries. So, in 1883, the French mathematician Joseph Bertrand took it upon himself
to finally provide the first review of the Cournot’s Recherches (jointly with a Walras book) in the Journal des savants. It was not a kind review. Bertrand argued that Cournot had reached the wrong
conclusion on practically everything, and reworked Cournot’s duopoly model with prices, rather than quantities, as the strategic variables – and obtained the competitive solution immediately.
Edgeworth (1897) revisited the model and assailed both Cournot and Bertrand for obtaining deterministic solutions, arguing that the equilibrium solution in the case of a small number of producers
should always be indeterminate.
The development of monopolistic competition in the 1930s drew much inspiration from Cournot’s work. As the theory of games advanced in the 1950s, Mayberry, JOHN NASH and Shubik (1953) restated
Cournot’s duopoly theory as a non-cooperative game with quantities as strategic variables. They showed that Cournot’s solution was nothing other than its Nash equilibrium (Nash, 1951). Cournot’s
influence on modern theory continues unabated, having been given a particular boost in the attempt to develop non-cooperative foundations for Walrasian general equilibrium theory (e.g. Novshek and
Sonnenschein (1978) and the 1980 JET Symposium).
Major works of Antoine Augustin Cournot
– Translator, Traité d’astronomie, par Sir John F.-W. Herschel, 1834
– Translator, Eléments de Mechanique by Dionysus Lardner, 1835
– Mémoire sur les applications du calcul des chances à la statistique judiciaire, 1838, Journal des mathématiques pures et appliquées, 12. T. 3. p.257-334
– Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth), 1838 (1897, England trans. by N.T. Bacon)
– Traité élémentaire de la théorie des fonctions et du calcul infinitésimal, 1841
– Exposition de la théorie des chances et des probabilités, 1843
– De l’origine et des limites de la correspondence entre l’agèbre et la géométrie, 1847
– Essai sur les fondements de nos connaissances et sur les caractères de la critique philosophique, 1851 – Vol. I, Vol. II
– Traité de l’enchainement des idées fondamentales dans les sciences et dans l’histoire, 1861
– Principes de la théorie des richesses, 1863
– Les institutions d’instruction publiques en France, 1864
– Considérations sur la marche des ideées et des événements dans les temps modernes, 2 vols, 1872
– Materialisme, vitalisme, rationalisme: Études des données de las science en philosophie, 1875
– Revue sommaire des doctrines économiques, 1877
– Souvenirs, 1760-1860, 1913
– A.A. Cournot, Oeuvres Complètes. 5 vols, 1973
I like what you guys are up too. Such clever work and reporting! Keep up the excellent works guys I¦ve incorporated you guys to my blogroll. I think it’ll improve the value of my site 🙂 | {"url":"https://sciencetheory.net/antoine-augustin-cournot-1801-1877/","timestamp":"2024-11-03T16:47:50Z","content_type":"text/html","content_length":"119400","record_id":"<urn:uuid:b46622b8-d2ff-44a0-9363-03c79c652a4f>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00735.warc.gz"} |
We apply the adjoint weighted equation method (AWE) to the direct solution of inverse problems of incompressible plane strain elasticity. Here σ is the Cauchy stress tensor, which for an incompressible isotropic linear elastic material takes the form σ = −p I + 2μ ε, where p represents the pressure, μ is the shear modulus and ε is the strain tensor (the symmetric part of the displacement gradient). With this definition the equilibrium equation ∇ · σ = 0 leads to the inverse problem of isotropic incompressible plane strain elasticity: given a strain field ε in Ω ⊂ ℝ², where the strains are data arising from measured displacements, determine the shear modulus μ and the pressure p. Note that compared to the forward problem of elasticity, which is governed by a second-order elliptic partial differential equation for the displacements, the inverse problem is governed by a first-order system of equations for the shear modulus and pressure. The solution of partial differential equations without boundary data involves arbitrary functions. This is also the case for the inverse problem of incompressible plane strain with a single deformation field available, since boundary data (i.e. the shear modulus and pressure fields) are usually unknown a priori. A solution in terms of arbitrary functions is too general for practical modulus reconstruction purposes.
One way to cope with this challenge is to use an additional deformation field taken from the same elastic body but with different loadings applied. This provides additional equations without increasing the number of unknown shear modulus distributions (since it is the same elastic body). Properties of the inverse problem of isotropic incompressible plane strain elasticity with two strain fields were considered in [18]. It is shown that the most general solution no longer involves arbitrary functions but instead involves 4 arbitrary constants for the shear modulus. One additional arbitrary constant is also attributed to each of the two pressure fields. To remove the arbitrariness associated with the four constants related to the shear modulus, it is sufficient to know its value at four distinct points. For biomedical imaging applications these can often be measured, for example along the exposed skin surface. Finally, since in the context of shear modulus reconstruction the pressure fields are often of no interest, their value at a point can be prescribed arbitrarily. Consider a sub-region of the domain in which the shear modulus is available.
We refer to this region as a “calibration region” and use it to impose the known shear modulus distribution via a single continuous calibration condition, in which the prescribed value is the desired shear modulus. Equations (5)-(7) provide a system of three equations for the three unknown fields; the formulation also involves the calibration region within which the shear modulus is known, a parameter that controls the amount of regularization, and a constant used to assure continuity. Here the regularization level is set to 3 (the suggested regularization level for the stepped inclusion). Figure 9 presents the recovered material properties when regularization is applied in addition to the 3 × 3 averaging process and in addition to the strain interpolation process (the displacement interpolation, which was least accurate, is no longer tested). For the stepped inclusion (Figure 9b) the results have further improved. The solution away from the calibration region is now accurate and the inclusion appears more homogeneous with very little loss of contrast. The 3 × 3 averaging process once more appears to provide better results than strain interpolation. For the smooth inclusion (Figure 9a) the results are now more accurate away from the calibration region, but about 20% of the contrast has also been lost as a result of the regularization. The regularization level suggested in the appendix for the smooth inclusion (0.2) is much lower than the value for the stepped inclusion which we set here. In those cases where smooth inclusions are sought, lower values should be used. When no a-priori information about the shape of the inclusion is available, the general guidelines provided at the end of the appendix can be used.
Figure 9: Inclusion problem with smoothed data and with TV regularization applied (set to 3): (a) reconstructed material properties for the smooth profile and (b) the stepped
profile. Once we saw above when large gradients are present in. | {"url":"https://www.ecolowood.com/2016/07/we-apply-the-adjoint-weighted-equation-method-awe-to-the-direct/","timestamp":"2024-11-08T21:16:07Z","content_type":"text/html","content_length":"34996","record_id":"<urn:uuid:e8a2511e-038d-48a8-96a6-e567d2d3dc53>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00605.warc.gz"} |
% Encoding: UTF-8 @COMMENT{BibTeX export based on data in FAU CRIS: https://cris.fau.de/} @COMMENT{For any questions please write to cris-support@fau.de} @article{faucris.262393308, abstract = {The
European gas market is organized as a so-called entry-exit system with the main goal to decouple transport and trading. To this end, gas traders and the transmission system operator (TSO) sign
so-called booking contracts that grant capacity rights to traders to inject or withdraw gas at certain nodes up to this capacity. On a day-ahead basis, traders then nominate the actual amount of gas
within the previously booked capacities. By signing a booking contract, the TSO guarantees that all nominations within the booking bounds can be transported through the network. This results in a
highly challenging mathematical problem. Using potential-based flows to model stationary gas physics, feasible bookings on passive networks, \ie, networks without controllable elements, have been
characterized in the recent literature. In this paper, we consider networks with linearly modeled active elements such as compressors or control valves. Since these active elements allow the TSO to
control the gas flow, the single-level approaches for passive networks from the literature are no longer applicable. We thus present a bilevel model to decide the feasibility of bookings in networks
with active elements. While this model is well-defined for general active networks, we focus on the class of networks for which active elements do not lie on cycles. This assumption allows us to
reformulate the original bilevel model such that the lower-level problem is linear for every given upper-level decision. Consequently, we derive several single-level reformulations for this case.
Besides the classic Karush--Kuhn--Tucker reformulation, we obtain three problem-specific optimal-value-function reformulations. The latter also lead to novel characterizations of feasible bookings in
networks with active elements that do not lie on cycles. We compare the performance of our methods by a case study based on data from the GasLib.
tions of distributionally robust optimization (DRO) problems. The considered
ambiguity sets can exploit information on moments as well as confidence sets.
Typically, reformulation approaches using duality theory need to make strong
assumptions on the structure of the underlying constraints, such as convexity
in the decisions or concavity in the uncertainty. In contrast, here we present a
duality-based reformulation approach for DRO problems, where the objective of
the adversarial is allowed to depend on univariate indicator functions. This ren-
ders the problem nonlinear and nonconvex. In order to be able to reformulate
the semiinfinite constraints nevertheless, an exact reformulation is presented
that is approximated by a discretized counterpart. The approximation is re-
alized as a mixed-integer linear problem that yields sufficient conditions for
distributional robustness of the original problem. Furthermore, it is proven that
with increasingly fine discretizations, the discretized reformulation converges
to the original distributionally robust problem. The approach is made concrete
for a challenging, fundamental task in particle separation that appears in
material design. Computational results for realistic settings show that the safe
approximation yields robust solutions of high-quality and can be computed
within short time}, author = {Dienstbier, Jana and Liers, Frauke and Rolfes, Jan}, faupublication = {yes}, keywords = {Distributionally Robust Optimization, Robust Optimization, Stochastic
Optimization, Mixed-Integer Optimization, Discrete Optimization}, note = {https://cris.fau.de/converis/publicweb/Publication/288229871}, peerreviewed = {automatic}, title = {{A} {Safe}
{Approximation} {Based} on {Mixed}-{Integer} {Optimization} for {Non}-{Convex} {Distributional} {Robustness} {Governed} by {Univariate} {Indicator} {Functions}}, url = {https://arxiv.org/abs/
2301.11185}, year = {2024} } @article{faucris.307873240, abstract = {We provide a unifying, black-box tool for establishing existence of approximate equilibria in weighted congestion games and, at
the same time, bounding their Price of Stability. Our framework can handle resources with general costs—including, in particular, decreasing ones—and is formulated in terms of a set of parameters
which are determined via elementary analytic properties of the cost functions. We demonstrate the power of our tool by applying it to recover the recent result of Caragiannis and Fanelli [ICALP’19]
for polynomial congestion games; improve upon the bounds for fair cost sharing games by Chen and Roughgarden [Theory Comput. Syst., 2009]; and derive new bounds for nondecreasing concave costs. An
interesting feature of our framework is that it can be readily applied to mixtures of different families of cost functions; for example, we provide bounds for games whose resources are conical
combinations of polynomial and concave costs. In the core of our analysis lies the use of a unifying approximate potential function which is simple and general enough to be applicable to arbitrary
congestion games, but at the same time powerful enough to produce state-of-the-art bounds across a range of different cost functions.}, author = {Giannakopoulos, Yiannis and Poças, Diogo}, doi =
{10.1007/s00224-023-10133-z}, faupublication = {yes}, journal = {Theory of Computing Systems}, keywords = {Approximate equilibria; Atomic congestion games; Potential games; Price of stability}, note
= {CRIS-Team Scopus Importer:2023-07-21}, peerreviewed = {Yes}, title = {{A} {Unifying} {Approximate} {Potential} for {Weighted} {Congestion} {Games}}, year = {2023} } @article{faucris.296104822,
abstract = {We consider the capacitated facility location problem with (partial) single-sourcing (CFLP-SS). A natural mixed integer formulation for the problem involves 0–1 variables xj indicating
whether facility j is used or not and yij variables indicating the fraction of the demand of client i that is satisfied from facility j. When the x variables are fixed, the remaining problem is a
transportation problem with single-sourcing. This structure suggests the use of a Benders' type decomposition algorithm. Here we present three possible variants. Applied to CFLP-SS they are compared
computationally with a commercial solver (CPLEX) on the original formulation, a CPLEX version of Benders and a recent cut-and-solve approach developed specifically for CFLP-SS. Our results show that
for CFLP-SS, when the percentage of clients requiring single-sourcing is less equal than 25%, the Benders’ variants achieve speedups of between 1.2 and 3.7.}, author = {Weninger, Dieter and Wolsey,
Laurence A.}, doi = {10.1016/j.ejor.2023.02.042}, faupublication = {yes}, journal = {European Journal of Operational Research}, keywords = {Benders’ algorithm; Branch-and-cut; Facilities planning and
design; Integer programming; Mixed integer subproblems}, note = {CRIS-Team Scopus Importer:2023-04-14}, peerreviewed = {Yes}, title = {{Benders}-type branch-and-cut algorithms for capacitated
facility location with single-sourcing}, year = {2023} } @article{faucris.260238531, author = {Giannakopoulos, Yiannis}, doi = {10.1016/j.tcs.2015.03.010}, faupublication = {no}, journal =
{Theoretical Computer Science}, pages = {83--96}, peerreviewed = {Yes}, title = {{Bounding} the {Optimal} {Revenue} of {Selling} {Multiple} {Goods}}, volume = {581}, year = {2015} } @unpublished
{faucris.316075428, abstract = {We consider locally recoverable codes (LRCs) and aim to determine the smallest possible length n=n{\_}q (k, d, r) of a linear [n, k, d]{\_}q -code with locality r. For
k ≤ 7 we exactly determine all values of n{\_}2(k, d, 2) and for k ≤ 6 we exactly determine all values of n{\_}2(k, d, 1). For the ternary field we also state a few numerical results. As a general
result we prove that n{\_}q(k, d, r) equals the Griesmer bound if the minimum Hamming distance d is sufficiently large and all other parameters are fixe}, author = {Kurz, Sascha}, faupublication =
{yes}, keywords = {linear codes; locally recoverable codes; data storage; bounds for parameters}, note = {https://cris.fau.de/converis/publicweb/Publication/316075428}, peerreviewed = {automatic},
title = {{Bounds} on the minimum distance of locally recoverable codes}, year = {2024} } @article{faucris.266620219, abstract = {We present an algorithm to solve capacity extension problems that
frequently occur in energy system optimization models. Such models describe a system where certain components can be installed to reduce future costs and achieve carbon reduction goals; however, the
choice of these components requires the solution of a computationally expensive combinatorial problem. In our proposed algorithm, we solve a sequence of linear programs that serve to tighten a
budget—the maximum amount we are willing to spend towards reducing overall costs. Our proposal finds application in the general setting where optional investment decisions provide an enhanced
portfolio over the original setting that maintains feasibility. We present computational results on two model classes, and demonstrate computational savings up to 96% on certain instances.}, author =
{Singh, Bismark and Rehberg, Oliver and Groß, Theresa and Hoffmann, Maximilian and Kotzur, Leander and Stolten, Detlef}, doi = {10.1007/s11590-021-01826-w}, faupublication = {yes}, journal =
{Optimization Letters}, peerreviewed = {Yes}, title = {{Budget}-cut: introduction to a budget based cutting-plane algorithm for capacity expansion models}, year = {2021} } @unpublished
{faucris.320188815, abstract = {Linear codes play a central role in coding theory and have applications in several branches of mathematics. For error correction purposes the minimum Hamming
distance should be as large as possible. Linear codes related to applications in Galois Geometry often require a certain divisibility of the occurring weights. In this paper we present an
algorithmic framework for the classification of linear codes over finite fields with restricted sets of weights. The underlying algorithms are based on lattice point enumeration and integer linear
programming. We present new enumeration and non-existence results for projective two-weight codes, divisible codes, and additive GF(4)-code}, author = {Kurz, Sascha}, faupublication = {yes}, keywords
= {linear codes; classification; enumeration; lattice point enumeration; integer linear programming; two-weight codes}, note = {https://cris.fau.de/converis/publicweb/Publication/320188815},
peerreviewed = {automatic}, title = {{Computer} classification of linear codes based on lattice point enumeration and integer linear programming}, year = {2024} } @inproceedings{faucris.327066209,
abstract = {Linear codes related to applications in Galois Geometry often require a certain divisibility of the occurring weights. In this paper we present an algorithmic framework for the
classification of linear codes over finite fields with restricted sets of weights. The underlying algorithms are based on lattice point enumeration and integer linear programming. We present new
enumeration and non-existence results for projective two-weight codes, divisible codes, and additive F4-codes.}, author = {Kurz, Sascha and Kurz, Sascha}, booktitle = {Lecture Notes in Computer
Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, date = {2024-07-22/2024-07-25}, doi = {10.1007/978-3-031-64529-7{\_}11}, editor = {Kevin
Buzzard, Alicia Dickenstein, Bettina Eick, Anton Leykin, Yue Ren}, faupublication = {yes}, isbn = {9783031645280}, keywords = {classification; enumeration; integer linear programming; lattice point
enumeration; linear codes; two-weight codes}, note = {CRIS-Team Scopus Importer:2024-08-16}, pages = {97-105}, peerreviewed = {unknown}, publisher = {Springer Science and Business Media Deutschland
GmbH}, title = {{Computer} {Classification} of {Linear} {Codes} {Based} on {Lattice} {Point} {Enumeration}}, venue = {Durham, GBR}, volume = {14749 LNCS}, year = {2024} } @inproceedings
{faucris.260251370, author = {Giannakopoulos, Yiannis and Noarov, Georgy and Schulz, Andreas S.}, booktitle = {Proceedings of the 13th Symposium on Algorithmic Game Theory (SAGT)}, doi = {10.1007/
978-3-030-57980-7}, faupublication = {no}, pages = {339}, peerreviewed = {Yes}, title = {{Computing} {Approximate} {Equilibria} in {Weighted} {Congestion} {Games} via {Best}-{Responses}}, year =
{2020} } @article{faucris.267633646, abstract = {We present a deterministic polynomial-time algorithm for computing $d^{d+o(d)}$-approximate (pure) Nash equilibria in (proportional sharing) weighted
congestion games with polynomial cost functions of degree at most d. This is an exponential improvement of the approximation factor with respect to the previously best deterministic algorithm. An
appealing additional feature of the algorithm is that it only uses best-improvement steps in the actual game, as opposed to the previously best algorithms, that first had to transform the game
itself. Our algorithm is an adaptation of the seminal algorithm by Caragiannis at al. [Caragiannis I, Fanelli A, Gravin N, Skopalik A (2011) Efficient computation of approximate pure Nash equilibria
in congestion games. Ostrovsky R, ed. Proc. 52nd Annual Symp. Foundations Comput. Sci. (FOCS) (IEEE Computer Society, Los Alamitos, CA), 532-541; Caragiannis Fanelli A, Gravin N, Skopalik A (2015)
Approximate pure Nash equilibria in weighted congestion games: Existence, efficient computation, and structure. ACM Trans. Econom. Comput. 3(1):2:1-2:32.], but we utilize an approximate potential
function directly on the original game instead of an exact one on a modified game. A critical component of our analysis, which is of independent interest, is the derivation of a novel bound of $[d/W(d/\rho)]^{d+1}$ for the price of anarchy (PoA) of $\rho$-approximate equilibria in weighted congestion games, where $W$ is the Lambert-W function. More specifically, we show that this PoA is exactly equal to $\Phi_{d,\rho}^{d+1}$, where $\Phi_{d,\rho}$ is the unique positive solution of the equation $\rho(x+1)^d = x^{d+1}$. Our upper bound is derived via a smoothness-like argument, and thus holds even for
mixed Nash and correlated equilibria, whereas our lower bound is simple enough to apply even to singleton congestion games.}, author = {Giannakopoulos, Yiannis and Noarov, Georgy and Schulz, Andreas
S.}, doi = {10.1287/moor.2021.1144}, faupublication = {yes}, journal = {Mathematics of Operations Research}, note = {CRIS-Team WoS Importer:2021-12-31}, pages = {1-22}, peerreviewed = {Yes}, title =
{{Computing} {Approximate} {Equilibria} in {Weighted} {Congestion} {Games} via {Best}-{Responses}}, year = {2021} } @misc{faucris.266208597, abstract = {Every optimization problem has a corresponding
verification problem which verifies whether a given optimal solution is in fact optimal. In the literature there are a lot of such ways to verify optimality for a given solution, e.g., the
branch-and-bound tree. To simplify this task, Baes et al. introduced optimality certificates for convex mixed-integer nonlinear programs and proved that these are bounded in the number of integer
variables. We introduce an algorithm to compute the certificates and conduct computational experiments. Through the experiments we show that the optimality certificates can be surprisingly small.
Mathematical optimization can be used to model and solve these planning problems.
However, in order to convince decision-makers of an alternative solution structure, mathematical solutions must be comprehensible and tangible. In this work, we describe optimized decision-support
systems for ambulatory care using the example of four different planning problems that cover a variety of aspects in terms of planning scope and decision support tools. The planning problems that we
present are the problem of positioning centers for vaccination against Covid-19 (strategical) and emergency doctors (strategical/tactical), the out-of-hours pharmacy planning problem (tactical), and
the route planning of patient transport services (operational). For each problem, we describe the planning question, give an overview of the mathematical model and present the implemented decision
support application.
npresented in Bärmann et al. (2016). In Bärmann et al. (2016), we developed a construction for the inner approximation of L^n based on the ideas of Ben-Tal and Nemirovski (2001) and Glineur (2000). We
showed—using the same decomposition as in the aforementioned papers—that it suffices to find an inner approximation of L^2, which in turn can be obtained from an inner approximation of the unit ball
B^2 ⊂ R^2. However, in the statement of the latter two approximations, there was a signing error which we would like to correct here.}, author = {Bärmann, Andreas and Heidt, Andreas and Martin,
Alexander and Pokutta, Sebastian and Thurner, Christoph}, doi = {10.1007/s10287-016-0269-y}, faupublication = {yes}, journal = {Computational Management Science}, note = {CRIS-Team Scopus
Importer:2022-06-05}, pages = {293-296}, peerreviewed = {Yes}, title = {{Erratum} to: {Polyhedral} approximation of ellipsoidal uncertainty sets via extended formulations: a computational case study
({Computational} {Management} {Science}, (2016), 13, 2, (151-193), 10.1007/s10287-015-0243-0)}, volume = {14}, year = {2017} } @article{faucris.277090376, abstract = {We study the existence of
approximate pure Nash equilibria (alpha-PNE) in weighted atomic congestion games with polynomial cost functions of maximum degree d. Previously, it was known that d-PNE always exist, whereas
nonexistence was established only for small constants, namely, for 1.153-PNE. We improve significantly upon this gap, proving that such games in general do not have $\tilde{\Theta}(\sqrt{d})$-PNE,
which provides the first superconstant lower bound. Furthermore, we provide a black-box gap-introducing method of combining such nonexistence results with a specific circuit gadget, in order to
derive NP-completeness of the decision version of the problem. In particular, by deploying this technique, we are able to show that deciding whether a weighted congestion game has an $\tilde{O}(\sqrt{d})$-PNE is NP-complete. Previous hardness results were known only for the special case of exact equilibria and arbitrary cost functions. The circuit gadget is of independent interest, and it
allows us to also prove hardness for a variety of problems related to the complexity of PNE in congestion games. For example, we demonstrate that the question of existence of alpha-PNE, in which a
certain set of players plays a specific strategy profile, is NP-hard for any $\alpha < 3^{d/2}$, even for unweighted congestion games. Finally, we study the existence of approximate equilibria in
weighted congestion games with general (nondecreasing) costs, as a function of the number of players n. We show that n-PNE always exists, matched by an almost tight nonexistence bound of $\tilde{\Theta}(n)$, which we can again transform into an NP-completeness proof for the decision problem.}, author = {Christodoulou, George and Gairing, Martin and Giannakopoulos, Yiannis and Pocas, Diogo and
Waldmann, Clara}, doi = {10.1287/moor.2022.1272}, faupublication = {yes}, journal = {Mathematics of Operations Research}, note = {CRIS-Team WoS Importer:2022-06-24}, peerreviewed = {Yes}, title =
{{Existence} and {Complexity} of {Approximate} {Equilibria} in {Weighted} {Congestion} {Games}}, year = {2022} } @article{faucris.281427039, abstract = {Motivated by a routing problem faced by banks
to enhance their encashment services in the city of Perm, Russia, we solve versions of the traveling salesman problem (TSP) with clustering. To minimize the risk of theft, suppliers seek to operate
multiple vehicles and determine an efficient routing; and, a single vehicle serves a set of locations that forms a cluster. This need to form independent clusters-served by distinct vehicles-allows
the use of the so-called cluster-first route-second approach. We are especially interested in the use of heuristics that are easily implementable and understandable by practitioners and require only
the use of open-source solvers. To this end, we provide a short survey of 13 such heuristics for solving the TSP, five for clustering the set of locations, and three to determine an optimal number of
clusters-all using data from Perm. To demonstrate the practicality and efficiency of the heuristics, we further compare our heuristic solutions against the optimal tours. We then provide statistical
guarantees on the quality of our solution. All of our anonymized code is publicly available allowing extensions by practitioners, and serves as a decision-analytic framework for both clustering data
and solving a TSP.}, author = {Singh, Bismark and Oberfichtner, Lena and Ivliev, Sergey}, doi = {10.1007/s10479-022-04883-1}, faupublication = {yes}, journal = {Annals of Operations Research}, note =
{CRIS-Team WoS Importer:2022-09-09}, peerreviewed = {Yes}, title = {{Heuristics} for a cash-collection routing problem with a cluster-first route-second approach}, year = {2022} } @article
{faucris.281820300, abstract = {Motivated by a routing problem faced by banks to enhance their encashment services in the city of Perm, Russia, we solve versions of the traveling salesman problem
(TSP) with clustering. To minimize the risk of theft, suppliers seek to operate multiple vehicles and determine an efficient routing; and, a single vehicle serves a set of locations that forms a
cluster. This need to form independent clusters—served by distinct vehicles—allows the use of the so-called cluster-first route-second approach. We are especially interested in the use of heuristics
that are easily implementable and understandable by practitioners and require only the use of open-source solvers. To this end, we provide a short survey of 13 such heuristics for solving the TSP,
five for clustering the set of locations, and three to determine an optimal number of clusters—all using data from Perm. To demonstrate the practicality and efficiency of the heuristics, we further
compare our heuristic solutions against the optimal tours. We then provide statistical guarantees on the quality of our solution. All of our anonymized code is publicly available allowing extensions
by practitioners, and serves as a decision-analytic framework for both clustering data and solving a TSP.
9]. We generalize their approach and theoretical results to robust optimization problems in Euclidean spaces with affine uncertainty. Additionally, we demonstrate the value of this approach in an
exemplary manner in the area of robust semidefinite programming (SDP). In particular, we prove that computing a Pareto robustly optimal solution for a robust SDP is tractable and illustrate the
benefit of such solutions at the example of the maximal eigenvalue problem. Furthermore, we modify the famous algorithm of Goemans and Williamson [Assoc Comput Mach 42(6):1115–1145, 8] in order to
compute cuts for the robust max-cut problem that yield an improved approximation guarantee in non-worst-case scenarios.
It is applied to optimize the discrete curtailment of solar feed-in in an electrical distribution network and guarantees network stability under fluctuating feed-in.
This is modeled by a two-stage mixed-integer stochastic optimization problem proposed by Aigner et al. (European Journal of Operational Research, (2021)).
The solution approach is based on the approximation of chance constraints via robust constraints using suitable uncertainty sets.
The resulting robust optimization problem has a known equivalent tractable reformulation.%, which can be solved with standard software.
To compute uncertainty sets that lead to an inner approximation of the stochastic problem, an R-vine copula model is fitted to the distribution of the multi-dimensional power forecast error, i.e.,
the difference between the forecasted solar power and the measured feed-in at several network nodes.
The uncertainty sets are determined by encompassing a sufficient number of samples drawn from the R-vine copula model.
Furthermore, an enhanced algorithm is proposed to fit R-vine copulas which can be used to draw conditional samples for given solar radiation forecasts.
The experimental results obtained for real-world weather and network data demonstrate the effectiveness of the combination of stochastic programming and model-based prediction of uncertainty via
We improve the outcomes of previous work by showing that the resulting uncertainty sets are much smaller and lead to less conservative solutions while maintaining the same probabilistic guarantees.
= 1 across the frontier, between the first price (SP1) and second price (SP infinity) mechanisms. En route to these results, we also provide a definitive answer to an important question related to
the scheduling problem, namely whether nontruthful mechanisms can provide better makespan guarantees in the equilibrium compared with truthful ones. We answer this question in the negative by proving
that the price of anarchy of all scheduling mechanisms is at least n, where n is the number of machines.}, author = {Filos-Ratsikas, Aris and Giannakopoulos, Yiannis and Lazos, Philip}, doi =
{10.1287/moor.2021.1154}, faupublication = {yes}, journal = {Mathematics of Operations Research}, note = {CRIS-Team WoS Importer:2021-12-31}, peerreviewed = {Yes}, title = {{The} {Pareto} {Frontier}
of {Inefficiency} in {Mechanism} {Design}}, year = {2021} } @misc{faucris.267284027, abstract = {The SCIP Optimization Suite provides a collection of software packages for mathematical optimization
centered around the constraint integer programming framework SCIP. This paper discusses enhancements and extensions contained in version 8.0 of the SCIP Optimization Suite. Major updates in SCIP
include improvements in symmetry handling and decomposition algorithms, new cutting planes, a new plugin type for cut selection, and a complete rework of the way nonlinear constraints are handled.
Additionally, SCIP 8.0 now supports interfaces for Julia as well as Matlab. Further, UG now includes a unified framework to parallelize all solvers, a utility to analyze computational experiments has
been added to GCG, dual solutions can be postsolved by PaPILO, new heuristics and presolving methods were added to SCIP-SDP, and additional problem classes and major performance improvements are
available in SCIP-Jack.}, author = {Bestuzheva, Ksenia and Besançon, Mathieu and Chen, Wei-Kun and Chmiela, Antonia and Donkiewicz, Tim and van Doornmalen, Jasper and Eifler, Leon and Gaul, Oliver
and Gamrath, Gerald and Gleixner, Ambros and Gottwald, Leona and Graczyk, Christoph and Halbig, Katrin and Hoen, Alexander and Hojny, Christopher and van der Hulst, Rolf and Koch, Thorsten and
Lübbecke, Marco and Maher, Stephen J. and Matter, Frederic and Mühmer, Erik and Müller, Benjamin and Pfetsch, Marc E. and Rehfeldt, Daniel and Schlein, Steffan and Schlösser, Franziska and Serrano,
Felipe and Shinano, Yuji and Sofranac, Boro and Turner, Mark and Vigerske, Stefan and Wegscheider, Fabian and Wellner, Philipp and Weninger, Dieter and Witzig, Jakob}, faupublication = {yes},
peerreviewed = {automatic}, title = {{The} {SCIP} {Optimization} {Suite} 8.0}, year = {2021} } @misc{faucris.321843198, abstract = {The SCIP Optimization Suite provides a collection of software
packages for mathematical optimization, centered around the constraint integer programming (CIP) framework SCIP. This report discusses the enhancements and extensions included in SCIP Optimization
Suite 9.0. The updates in SCIP 9.0 include improved symmetry handling, additions and improvements of nonlinear handlers and primal heuristics, a new cut generator and two new cut selection schemes, a
new branching rule, a new LP interface, and several bugfixes. SCIP Optimization Suite 9.0 also features new Rust and C++ interfaces for SCIP, new Python interface for SoPlex, along with enhancements
to existing interfaces. SCIP Optimization Suite 9.0 also includes new and improved features in the LP solver SoPlex, the presolving library PaPILO, the parallel framework UG, the decomposition
framework GCG, and the SCIP extension SCIP-SDP. These additions and enhancements have resulted in an overall performance improvement of SCIP in terms of solving time, number of nodes in the
branch-and-bound tree, as well as the reliability of the solve}, author = {Bolusani, Suresh and Besançon, Mathieu and Bestuzheva, Ksenia and Chmiela, Antonia and Dionísio, Joâo and Donkiewicz, Tim
and van Doornmalen, Jasper and Eifler, Leon and Ghannam, Mohammed and Gleixner, Ambros and Graczyk, Christoph and Halbig, Katrin and Hedtke, Ivo and Hoen, Alexander and Hojny, Christopher and van der
Hulst, Rolf and Kamp, Dominik and Koch, Thorsten and Kofler, Kevin and Lentz, Jurgen and Manns, Julian and Mexi, Gioni and Mühmer, Erik and Pfetsch, Marc E. and Schlösser, Franziska and Serrano,
Felipe and Shinano, Yuji and Turner, Mark and Vigerske, Stefan and Weninger, Dieter and Xu, Liding}, faupublication = {yes}, peerreviewed = {automatic}, title = {{The} {SCIP} {Optimization} {Suite}
9.0}, year = {2024} } @article{faucris.268203201, abstract = {The operation of gas pipeline flow with high pressure and small Mach numbers allows to model the flow by a semilinear hyperbolic system
of partial differential equations. In this paper we present a number of transient and stationary analytical solutions of this model. They are used to discuss and clarify why a PDE model is necessary
to handle certain dynamic situations in the operation of gas transportation networks. We show that adequate numerical discretizations can capture the dynamical behavior sufficiently accurate. We also
present examples that show that in certain cases an optimization approach that is based on multi-period optimization of steady states does not lead to approximations that converge to the optimal | {"url":"https://cris.fau.de/bibtex/organisation/251829466.bib","timestamp":"2024-11-11T17:15:33Z","content_type":"application/x-bibtex-text-file","content_length":"81259","record_id":"<urn:uuid:8a9175e8-ea8a-429c-a7f8-727e36d089c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00169.warc.gz"} |
Rings in which elements are a sum of a central and a unit element
BULLETIN OF THE BELGIAN MATHEMATICAL SOCIETY-SIMON STEVIN, vol.26, no.4, pp.619-631, 2019 (SCI-Expanded)
• Publication Type: Article / Article
• Volume: 26 Issue: 4
• Publication Date: 2019
• Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
• Page Numbers: pp.619-631
• Hacettepe University Affiliated: Yes
In this paper we introduce a new class of rings whose elements are a sum of a central and a unit element; namely, a ring R is called CU if each element a of R has a decomposition a = c + u, where c is central and u is a unit. One of the main results in this paper is that if F is a field which is not isomorphic to Z(2), then M_2(F) is a CU ring. This implies, in particular, that any
square matrix over a field which is not isomorphic to Z(2) is the sum of a central matrix and a unit matrix. | {"url":"https://avesis.hacettepe.edu.tr/yayin/fb8481bb-0239-45df-aed8-7f95d8e35bc7/rings-in-which-elements-are-a-sum-of-a-central-and-a-unit-element","timestamp":"2024-11-11T20:41:24Z","content_type":"text/html","content_length":"48302","record_id":"<urn:uuid:a975fbd7-2295-47ce-a74d-301ef7eeabf7>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00149.warc.gz"} |
PostgreSQL round | Complete Guide to PostgreSQL round with Examples
Updated May 12, 2023
Introduction to PostgreSQL round
Whenever we deal with numeric values in a PostgreSQL database, the precision and format in which we retrieve those values matter a great deal. Numeric accuracy is critical in real-life use cases such as measurements from aircraft, machinery, or other instruments, and values related to currency and transactions. In this article we will learn how to round numeric values to an integer, or to a specific number of decimal places, while retrieving or manipulating numeric data in the PostgreSQL database.
returned_value = ROUND (source_value [ , decimal_count ] )
Where the two parameters carry the following meaning –
• source_value: The numeric value or expression to be rounded, either to an integer or to a specific number of decimal places.
• decimal_count: An optional integer parameter specifying how many decimal places source_value should be rounded to. Its default value is 0 when it is not mentioned.
• returned_value: If the second parameter decimal_count is not specified, the round function returns a value that is usually of the same data type as source_value. If the second parameter is specified, the data type of the returned value is numeric.
Examples to Implement PostgreSQL round
Let us learn how we can use the round() function to round the numeric values in PostgreSQL with the help of examples:
Converting the numeric value to integers
Consider a number, say 45.145; when this number is rounded to an integer using the ROUND() function, it rounds to 45 because the first digit after the decimal point (1) is less than 5. Let us perform this on the PostgreSQL terminal and see the result. For that, our query statement will be as follows –
SELECT ROUND(45.145);
Let’s observe the integer value retrieved when the first digit after the decimal point is 5 or greater. For that, let’s take a number, say 98.536. Rounding this number in PostgreSQL can be done using the following query statement –
SELECT ROUND(98.536);
Now let us work with the field of a table and round its values. For this, let us create a table named educbademo with a numeric field named price and an integer field named id, using the following CREATE query.
CREATE TABLE educbademo (id INTEGER PRIMARY KEY, price DECIMAL);
And add a few rows in it using the following query statements –
INSERT INTO educbademo VALUES(1,23.565);
INSERT INTO educbademo VALUES(2,98.148);
INSERT INTO educbademo VALUES(3,94.4616);
INSERT INTO educbademo VALUES(4,352.462);
INSERT INTO educbademo VALUES(5,87.1547);
let us confirm the contents of the table by using the following SELECT query –
SELECT * FROM educbademo;
Now, let us round the values of the column price to integral value using the round() function and the following query statement –
SELECT ROUND(price) FROM educbademo;
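As a quick sanity check outside the database, the expected results of the query above can be reproduced in a few lines of Python, assuming PostgreSQL's half-away-from-zero rounding for numeric values (this snippet is an illustration, not part of the original article):

from decimal import Decimal, ROUND_HALF_UP
# The five sample prices inserted above
prices = ["23.565", "98.148", "94.4616", "352.462", "87.1547"]
for p in prices:
    print(p, "->", Decimal(p).quantize(Decimal("1"), rounding=ROUND_HALF_UP))
# 23.565 -> 24, 98.148 -> 98, 94.4616 -> 94, 352.462 -> 352, 87.1547 -> 87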
Rounding the Decimal Numbers
Instead of rounding to integers, we can round numbers to the particular number of decimal places that we specify in the second parameter of the round() function. Let us see how this works with an example. Consider a decimal number, say 985.561. To round it to two decimal places, the query statement should pass the integer parameter decimal_count as 2, and the statement should be as follows –
SELECT ROUND(985.561,2);
That will result in the output 985.56, because the third decimal digit (1) is less than 5.
Now, if our number is 985.566, then while rounding to two digits, the numeric value that will result is as follows using the below query statement –
SELECT ROUND(985.566,2);
That gives an output of 985.57, because the digit after the second decimal place (6) is greater than or equal to 5; hence the second decimal digit is increased by one, and the output is 985.57 instead of 985.56.
Now, let us round the values of a particular column to decimal values using the round function. Let’s consider the table educbademo and round its price column to two decimal places. For this, the query statement will be as follows –
SELECT ROUND(price,2) FROM educbademo;
When rounding a number to two decimal places, the digit in the second place is increased by one if the digit that follows it is greater than or equal to 5, and left unchanged otherwise. If we round the column values to 3 decimal places, then the query statement will be as follows –
SELECT ROUND(price,3) FROM educbademo;
If we round the column values to 4 digits, then the query statement will be as follows –
SELECT ROUND(price,4) FROM educbademo;
We can see that a 0 is appended at the end of the numeric value if the original value has no digit in that decimal place.
The Round() function is used in the PostgreSQL database while dealing with numeric values. It helps in rounding the number to the integer value or up to any decimal place, as mentioned in the
function’s optional second parameter. If the second parameter is not specified, it is considered zero, and the number is converted to an integer value.
You can use this function directly on numbers or on the numeric values stored in the columns of a database table. The rounded value depends on the digit immediately following the last digit to be kept: if that following digit is 5 or greater, the last kept digit is increased by one; otherwise it is left unchanged.
Recommended Articles
We hope that this EDUCBA information on “PostgreSQL round” was beneficial to you. You can view EDUCBA’s recommended articles for more information. | {"url":"https://www.educba.com/postgresql-round/","timestamp":"2024-11-02T09:38:34Z","content_type":"text/html","content_length":"321104","record_id":"<urn:uuid:ac3dcd3e-3ca6-4e29-ab3e-c283fb145bc4>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00525.warc.gz"} |
How Much Does 4L of Water Weigh: Unveil the Facts!
4 liters of water weigh approximately 4 kilograms. This is based on the density of water at room temperature.
Understanding the weight of water is essential for a range of activities, from scientific experiments to culinary endeavors and even in various industries. Water’s density, which is 1 kilogram per
liter, makes calculations straightforward for most practical purposes. Whether one needs to balance a load for transport, mix solutions for chemical processes, or simply cook with precision, knowing
the weight of water enhances accuracy and efficiency.
Simple and integral, this tidbit of information helps ensure tasks are completed successfully and safely. It is a fundamental fact that serves as a building block in numerous fields, including
physics, engineering, and nutrition.
The Basics Of Water Weight
The Basics of Water Weight begins with understanding that water has a consistent weight. A liter of water, when pure and at its maximum density, has a standard mass. To make sense of how much 4L of
water weighs, knowing water’s density and the metric system is crucial. Let’s explore these fundamental concepts.
Water Density Essentials
Water’s density is key to determining its weight. At room temperature (around 20°C), pure water has a density of 1 kilogram per liter. This means that the weight of water is quite predictable:
• 1 liter (L) of water weighs 1 kilogram (kg).
• 4 liters (L) of water weigh 4 kilograms (kg).
Keep in mind, changes in temperature or impurities can slightly alter this density.
Metric System Primer
The metric system simplifies measuring water weight. It uses kilograms to express mass and liters for volume. Here’s a quick guide:
Volume (Liters) Mass (Kilograms)
1 L 1 kg
4 L 4 kg
This simple correlation between liters and kilograms makes it straightforward to work out water weight. Hence, 4L of water weighs 4kg.
Calculating The Weight Of Water
Understanding the weight of water is essential for various tasks, including cooking, science experiments, and even packing for a hike. This section explores how to calculate the weight of water,
specifically focusing on a volume of 4 liters.
Formula For Determining Weight
To calculate the weight of water, the formula is simple: Weight (kg) = Volume (L) × Density (kg/L).
Since the density of water is roughly 1 kilogram per liter, the weight of 4 liters of water would be approximately 4 kilograms.
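A minimal sketch of this formula in code, assuming the standard density of 1 kg/L (purely illustrative):

def water_weight_kg(volume_liters, density_kg_per_l=1.0):
    # Weight (kg) = Volume (L) x Density (kg/L)
    return volume_liters * density_kg_per_l

print(water_weight_kg(4))  # 4.0 kg for 4 liters of water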
Factors Influencing Water’s Weight
Several factors can alter the weight of water:
• Temperature: Water’s density changes with temperature.
• Salt Content: Salty water is denser than fresh water.
• Altitude: Atmospheric pressure can affect water’s density.
In most cases, these changes are small and the standard density of water (1 kg/L) is sufficiently accurate for everyday use.
4 Liters Of Water In Perspective
Imagine carrying a jug full of water. That jug has 4 liters in it.
Volume-to-weight Conversion
Every liter of water weighs 1 kilogram. So, 4 liters weigh 4 kilograms. That’s like carrying four bags of sugar.
Comparative Examples
Let’s see how 4 liters compare to everyday items:
• A small cat: About the same weight.
• Five basketballs: Together, lighter than 4 liters of water.
• A laptop: Most weigh less than 4 liters of water.
Remember, 4 liters = 4 kilograms. It’s a simple match.
Influence Of Temperature And Pressure
Understanding the influence of temperature and pressure on the weight of water is intriguing. At first thought, 4L of water might seem to have a constant weight. Yet, temperature and pressure subtly
modify this. Let’s dive into the effects these factors have on water density and weight.
Effects Of Temperature Variations
Water’s density changes with temperature. It’s a fascinating scientific fact. Warm water expands and becomes less dense. Cool water contracts and grows denser. The weight of water reflects this
density change.
At 4°C, water reaches its maximum density. Weight measurements are most accurate here. But as temperatures rise or fall from this point, subtle changes in water weight occur. Here’s what happens:
• Above 4°C, water expands and weighs slightly less per liter.
• Below 4°C, water expands again as it approaches freezing, so a liter of it also weighs slightly less.
Pressure Impacts On Water Density
Pressure’s impact on water density is less known but equally vital. Most of us don’t feel the effects daily, but deep underwater, it’s a different story. High pressure increases water’s density,
leading to a heavier weight per liter.
Pressure Change Water Density Change Effect on 4L of Water
Increased Pressure Higher Density Slight Weight Gain
Decreased Pressure Lower Density Slight Weight Loss
This pressure-dictated variation is notably crucial in precision industries. Labs and industries consider these changes to maintain strict measurement standards.
Practical Applications
Understanding the weight of water is essential for daily tasks and numerous scientific industries. It plays a vital role in cooking, farming, and even space travel. Let’s explore how the weight of 4
liters of water is practically applied in various scenarios.
Everyday Situations
In everyday life, knowing that 4 liters of water weighs about 4 kilograms is useful. This information helps in many situations:
• Measuring ingredients for recipes that call for precise water amounts.
• Filling tanks for household items like humidifiers or fish tanks.
• Watering plants with the right volume to ensure healthy growth.
• Managing weight in backpacks for hikers aiming for an efficient load.
Scientific And Industrial Relevance
In professional settings, the weight of water is critical:
Industry Application
Healthcare Dialysis machines use a precise weight of water to aid patients.
Construction Mixing materials like cement requires specific water weights.
Food Production Exact water measurements ensure product consistency.
Pharmaceuticals Drug formulations often require precise water weights.
Scientific experiments often need accurate water weight to validate results. Industrial processes depend on this knowledge to maintain efficiency and safety. From designing life-saving equipment to
engineering food products, the weight of water is a key factor.
Common Misconceptions
Common misconceptions surround the topic of water’s weight. Many believe that the weight of water changes with volume. They are right, but not always how they think. Let’s clear up some myths and get
the facts straight.
Busting Myths
One myth is that 4 liters of water might weigh differently in various conditions. In reality, 4L of water consistently weighs close to 4 kilograms. Weather or altitude don’t change this much. Let’s
break down more myths:
• Cold water is heavier: In truth, water density increases slightly as it cools toward 4°C, then decreases again as it approaches freezing.
• All liquids weigh the same: Actually, each liquid has a different density. Water is a baseline for comparison.
• A gallon is always heavier: Gallons and liters measure volume, and water’s weight stays proportional to its volume.
Accuracy In Measurements
Accurate measurements depend on precision instruments. A kitchen scale can report how much water weighs. Here’s how to check:
Volume Expected Weight
1L of Water Approx. 1kg
4L of Water Approx. 4kg
1 Gallon of Water Approx. 3.785kg
Remember that scales need to be calibrated regularly. For scientific uses, laboratory scales ensure exact results.
Frequently Asked Questions Of How Much Does 4l Of Water Weigh
What Is The Weight Of 4l Water In Kilograms?
Four liters of water typically weigh approximately 4 kilograms. This is because the density of water is 1 kg/L at standard temperature.
Can Temperature Affect 4l Water’s Weight?
Temperature changes can slightly affect water’s density, not its weight. However, for practical purposes, 4 liters of water is always around 4 kilograms.
How To Convert 4l Of Water To Pounds?
To convert 4 liters of water to pounds, take its mass in kilograms (about 4 kg) and multiply by 2.20462. Four liters is roughly equivalent to 8.81849 pounds, considering water's standard density.
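The same conversion expressed as a short calculation (again assuming a density of 1 kg/L):

liters = 4
kilograms = liters * 1.0       # density assumption: 1 kg per liter
pounds = kilograms * 2.20462   # kilograms-to-pounds factor used above
print(pounds)                  # approximately 8.8185 pounds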
Is Water’s Weight Different In Metrics?
Water weight remains constant in metric units: 1 liter equals 1 kilogram. Thus, 4 liters of water are always 4 kilograms in the metric system.
Understanding the weight of water is essential for various practical applications. As we’ve discussed, 4 liters of water typically weigh about 4 kilograms or roughly 8. 8 pounds. Whether you’re
cooking, science experimenting, or filling an aquarium, this knowledge is key.
Always remember to factor in variables like temperature for complete accuracy. Stay hydrated, and keep measuring smartly! | {"url":"https://sizepedia.org/how-much-does-4l-of-water-weigh/","timestamp":"2024-11-05T07:34:07Z","content_type":"text/html","content_length":"89362","record_id":"<urn:uuid:33cf300e-b346-495e-80b8-a0bf8b608f6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00490.warc.gz"} |
Variant 11.6.0.130 (Strictly Unitary $2$-Categories). The contents of this tag can be found in Definition 2.2.7.1.
We say that a $2$-category $\operatorname{\mathcal{C}}$ is strictly unitary if, for every $1$-morphism $f: X \rightarrow Y$ in $\operatorname{\mathcal{C}}$, we have equalities
\[ \operatorname{id}_{Y} \circ f = f = f \circ \operatorname{id}_{X}, \]
and the left and right unit constraints $\lambda _{f}$, $\rho _{f}$ are the identity $2$-morphisms from $f$ to itself. Every strict $2$-category is strictly unitary, but the converse is false: we
will see later that every $2$-category is isomorphic (in an appropriate sense) to a strictly unitary $2$-category (see Example 11.6.0.134). | {"url":"https://kerodon.net/tag/007U","timestamp":"2024-11-10T04:51:53Z","content_type":"text/html","content_length":"10085","record_id":"<urn:uuid:39edcdec-e52b-4782-9e16-a20886516399>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00523.warc.gz"} |
Lesson 10
Dividing by Unit and Non-Unit Fractions
Let’s look for patterns when we divide by a fraction.
10.1: Dividing by a Whole Number
Work with a partner. One person solves the problems labeled “Partner A” and the other person solves those labeled “Partner B.” Write an equation for each question. If you get stuck, consider drawing
a diagram.
1. Partner A:
How many 3s are in 12?
Division equation:
How many 4s are in 12?
Division equation:
How many 6s are in 12?
Division equation:
Partner B:
What is 12 groups of \(\frac 13\)?
Multiplication equation:
What is 12 groups of \(\frac 14\)?
Multiplication equation:
What is 12 groups of \(\frac 16\)?
Multiplication equation:
2. What do you notice about the diagrams and equations? Discuss with your partner.
3. Complete this sentence based on what you noticed:
Dividing by a whole number \(a\) produces the same result as multiplying by ________.
10.2: Dividing by Unit Fractions
To find the value of \(6 \div \frac 12\), Elena thought, “How many \(\frac 12\)s are in 6?” and then she drew this tape diagram. It shows 6 ones, with each one partitioned into 2 equal pieces.
1. For each division expression, complete the diagram using the same method as Elena. Then, find the value of the expression.
Value of the expression: ____________
Value of the expression: ____________
Value of the expression: ____________
2. Examine the expressions and answers more closely. Look for a pattern. How could you find how many halves, thirds, fourths, or sixths were in 6 without counting all of them? Explain your reasoning.
3. Use the pattern you noticed to find the values of these expressions. If you get stuck, consider drawing a diagram.
1. \(6 \div \frac 18\)
2. \(6 \div \frac {1}{10}\)
3. \(6 \div \frac {1}{25}\)
4. \(6 \div \frac {1}{b}\)
4. Find the value of each expression.
1. \(8 \div \frac 14\)
2. \(12 \div \frac 15\)
3. \(a \div \frac 12\)
4. \(a \div \frac {1}{b}\)
10.3: Dividing by Non-unit Fractions
1. To find the value of \(6 \div \frac 23\), Elena started by drawing a diagram the same way she did for \(6 \div \frac 13\).
1. Complete the diagram to show how many \(\frac 23\)s are in 6.
2. Elena says, “To find \(6 \div \frac23\), I can just take the value of \(6 \div \frac13\) and then either multiply it by \(\frac 12\) or divide it by 2.” Do you agree with her? Explain your reasoning.
2. For each division expression, complete the diagram using the same method as Elena. Then, find the value of the expression. Think about how you could find that value without counting all the
pieces in your diagram.
Value of the expression:___________
\(6 \div \frac 43\)
Value of the expression:___________
Value of the expression:___________
3. Elena examined her diagrams and noticed that she always took the same two steps to show division by a fraction on a tape diagram. She said:
“My first step was to divide each 1 whole into as many parts as the number in the denominator. So if the expression is \(6 \div \frac 34\), I would break each 1 whole into 4 parts. Now I have 4
times as many parts.
My second step was to put a certain number of those parts into one group, and that number is the numerator of the divisor. So if the fraction is \(\frac34\), I would put 3 of the \(\frac 14\)s
into one group. Then I could tell how many \(\frac 34\)s are in 6.”
Which expression represents how many \(\frac 34\)s Elena would have after these two steps? Be prepared to explain your reasoning.
□ \(6 \div 4 \boldcdot 3\)
□ \(6 \div 4 \div 3\)
□ \(6 \boldcdot 4 \div 3\)
□ \(6 \boldcdot 4 \boldcdot 3\)
4. Use the pattern Elena noticed to find the values of these expressions. If you get stuck, consider drawing a diagram.
1. \(6 \div \frac27\)
2. \(6\div\frac{3}{10}\)
3. \(6 \div \frac {6}{25}\)
To answer the question “How many \(\frac 13\)s are in 4?” or “What is \(4 \div \frac 13\)?”, we can reason that there are 3 thirds in 1, so there are \((4\boldcdot 3)\) thirds in 4.
In other words, dividing 4 by \(\frac13\) has the same result as multiplying 4 by 3.
\(\displaystyle 4\div \frac13 = 4 \boldcdot 3\)
In general, dividing a number by a unit fraction \(\frac{1}{b}\) is the same as multiplying the number by \(b\), which is the reciprocal of \(\frac{1}{b}\).
How can we reason about \(4 \div \frac23\)?
We already know that there are \((4\boldcdot 3)\) or 12 groups of \(\frac 13\)s in 4. To find how many \(\frac23\)s are in 4, we need to put together every 2 of the \(\frac13\)s into a group. Doing
this results in half as many groups, which is 6 groups. In other words:
\(\displaystyle 4 \div \frac23 = (4 \boldcdot 3) \div 2\)
\(\displaystyle 4 \div \frac23 = (4 \boldcdot 3) \boldcdot \frac 12\)
In general, dividing a number by \(\frac{a}{b}\), is the same as multiplying the number by \(b\) and then dividing by \(a\), or multiplying the number by \(b\) and then by \(\frac{1}{a}\).
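As an added worked example of this rule (not part of the original lesson): \(6 \div \frac34 = (6 \boldcdot 4) \div 3 = 24 \div 3 = 8\). There are \(6 \boldcdot 4 = 24\) fourths in 6, and grouping them 3 at a time gives 8 groups of \(\frac34\).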
• reciprocal
Dividing 1 by a number gives the reciprocal of that number. For example, the reciprocal of 12 is \(\frac{1}{12}\), and the reciprocal of \(\frac25\) is \(\frac52\). | {"url":"https://curriculum.illustrativemathematics.org/MS/students/1/4/10/index.html","timestamp":"2024-11-04T12:27:21Z","content_type":"text/html","content_length":"126781","record_id":"<urn:uuid:56d176fc-2ca4-4e29-897e-2ffa83d4fb59>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00606.warc.gz"} |
The missing asymptotic sector of rotating black-hole spectroscopy
The rotation of a Kerr black hole splits its low-frequency spectrum in two, so it was so far unclear why the known highly-damped resonances show no splitting. We find the missing, split sector, with
spin-s quasinormal modes approaching the total reflection frequencies ω(n ∈ N) = -Ω δJ - iκ(n - s), where Ω, κ, and δJ are the horizon's angular velocity, surface gravity, and induced change in angular
momentum. Surprisingly, the new sector is at least partly polar, and corresponds to reversible J transitions. Its fundamental branch converges quickly, possibly affecting gravitational wave signals.
A simple interpretation of the Carter constant of motion is proposed.
ASJC Scopus subject areas
• Nuclear and High Energy Physics
Dive into the research topics of 'The missing asymptotic sector of rotating black-hole spectroscopy'. Together they form a unique fingerprint. | {"url":"https://cris.bgu.ac.il/en/publications/the-missing-asymptotic-sector-of-rotating-black-hole-spectroscopy-2","timestamp":"2024-11-11T16:24:26Z","content_type":"text/html","content_length":"55518","record_id":"<urn:uuid:8dc44d82-63e6-48f6-8b88-7d0a2d67c836>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00000.warc.gz"} |
Dynamical systems with boundary control: Models and characterization of inverse data
... The inverse problem consists of determining the function ρ in Ω via the response operator R 2T provided T > T * . One of the natural ways to solve this problem is the Boundary Control method
(BC-method, Belishev, 1986, see, e.g., [2][3][4] and works cited therein, and the version of the BC-method proposed in [12,13]). We do not give a BC-solution of the inverse problem in this paper. ...
... In contrast to the works cited above, we use measurements (waves) at the same part of the boundary as controls. This corresponds to the boundary triple technique in [2]. The boundary triple used
in the present paper is associated with the Zaremba Laplacian with mixed boundary conditions studied in [9]. ...
... Different versions of dynamical systems with boundary controls are related (see [2]) to different choices of boundary triples for the operator in the space domain, see definitions in [6,8]. Here,
we introduce the boundary triple corresponding to system (1.1)-(1.4). ... | {"url":"https://www.researchgate.net/publication/230900261_Dynamical_systems_with_boundary_control_Models_and_characterization_of_inverse_data","timestamp":"2024-11-03T14:24:35Z","content_type":"text/html","content_length":"561043","record_id":"<urn:uuid:da232cb1-3571-4e6e-bf75-59bc1cbaccff>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00414.warc.gz"} |
numpy.fft.irfftn(a, s=None, axes=None, norm=None)
Compute the inverse of the N-dimensional FFT of real input.
This function computes the inverse of the N-dimensional discrete Fourier Transform for real input over any number of axes in an M-dimensional array by means of the Fast Fourier Transform (FFT).
In other words, irfftn(rfftn(a), a.shape) == a to within numerical accuracy. (The a.shape is necessary like len(a) is for irfft, and for the same reason.)
The input should be ordered in the same way as is returned by rfftn, i.e. as for irfft for the final transformation axis, and as for ifftn along all the other axes.
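A quick numerical check of that round-trip identity (an added illustration, not part of the official documentation):

>>> import numpy as np
>>> a = np.random.rand(3, 4, 5)
>>> np.allclose(np.fft.irfftn(np.fft.rfftn(a), a.shape), a)
True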
Parameters:

a : array_like
    Input array.
s : sequence of ints, optional
    Shape (length of each transformed axis) of the output (s[0] refers to axis 0, s[1] to axis 1, etc.). s is also the number of input points used along this axis, except for the last axis, where s[-1]//2+1 points of the input are used. Along any axis, if the shape indicated by s is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. If s is not given, the shape of the input along the axes specified by axes is used.
axes : sequence of ints, optional
    Axes over which to compute the inverse FFT. If not given, the last len(s) axes are used, or all axes if s is also not specified. Repeated indices in axes means that the inverse transform over that axis is performed multiple times.
norm : {None, "ortho"}, optional
    New in version 1.10.0.
    Normalization mode (see numpy.fft). Default is None.

Returns:

out : ndarray
    The truncated or zero-padded input, transformed along the axes indicated by axes, or by a combination of s or a, as explained in the parameters section above. The length of each transformed axis is as given by the corresponding element of s, or the length of the input in every axis except for the last one if s is not given. In the final transformed axis the length of the output when s is not given is 2*(m-1) where m is the length of the final transformed axis of the input. To get an odd number of output points in the final axis, s must be specified.

Raises:

ValueError
    If s and axes have different length.
IndexError
    If an element of axes is larger than the number of axes of a.
See also

rfftn
    The forward n-dimensional FFT of real input, of which ifftn is the inverse.
fft
    The one-dimensional FFT, with definitions and conventions used.
irfft
    The inverse of the one-dimensional FFT of real input.
irfft2
    The inverse of the two-dimensional FFT of real input.
See fft for definitions and conventions used.
See rfft for definitions and conventions used for real input.
>>> a = np.zeros((3, 2, 2))
>>> a[0, 0, 0] = 3 * 2 * 2
>>> np.fft.irfftn(a)
array([[[ 1., 1.],
[ 1., 1.]],
[[ 1., 1.],
[ 1., 1.]],
[[ 1., 1.],
[ 1., 1.]]]) | {"url":"https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.fft.irfftn.html","timestamp":"2024-11-05T04:26:50Z","content_type":"text/html","content_length":"15050","record_id":"<urn:uuid:b88bbaa6-1112-4b9f-8561-ffb2df40d34f>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00001.warc.gz"} |
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.APPROX-RANDOM.2015.881
URN: urn:nbn:de:0030-drops-53426
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2015/5342/
Haeupler, Bernhard ; Kamath, Pritish ; Velingker, Ameya
Communication with Partial Noiseless Feedback
We introduce the notion of one-way communication schemes with partial noiseless feedback. In this setting, Alice wishes to communicate a message to Bob by using a communication scheme that involves
sending a sequence of bits over a channel while receiving feedback bits from Bob for delta fraction of the transmissions. An adversary is allowed to corrupt up to a constant fraction of Alice's
transmissions, while the feedback is always uncorrupted. Motivated by questions related to coding for interactive communication, we seek to determine the maximum error rate, as a function of 0 <=
delta <= 1, such that Alice can send a message to Bob via some protocol with delta fraction of noiseless feedback. The case delta = 1 corresponds to full feedback, in which the result of Berlekamp
['64] implies that the maximum tolerable error rate is 1/3, while the case delta = 0 corresponds to no feedback, in which the maximum tolerable error rate is 1/4, achievable by use of a binary
error-correcting code.
In this work, we show that for any delta in (0,1] and gamma in [0, 1/3), there exists a randomized communication scheme with noiseless delta-feedback, such that the probability of miscommunication is
low, as long as no more than a gamma fraction of the rounds are corrupted. Moreover, we show that for any delta in (0, 1] and gamma < f(delta), there exists a deterministic communication scheme with
noiseless delta-feedback that always decodes correctly as long as no more than a gamma fraction of rounds are corrupted. Here f is a monotonically increasing, piecewise linear, continuous function
with f(0) = 1/4 and f(1) = 1/3. Also, the rate of communication in both cases is constant (dependent on delta and gamma but independent of the input length).
BibTeX - Entry
author = {Bernhard Haeupler and Pritish Kamath and Ameya Velingker},
title = {{Communication with Partial Noiseless Feedback}},
booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
pages = {881--897},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-89-7},
ISSN = {1868-8969},
year = {2015},
volume = {40},
editor = {Naveen Garg and Klaus Jansen and Anup Rao and Jos{\'e} D. P. Rolim},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2015/5342},
URN = {urn:nbn:de:0030-drops-53426},
doi = {10.4230/LIPIcs.APPROX-RANDOM.2015.881},
annote = {Keywords: Communication with feedback, Interactive communication, Coding theory Digital}
Keywords: Communication with feedback, Interactive communication, Coding theory Digital
Collection: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)
Issue Date: 2015
Date of publication: 13.08.2015 | {"url":"http://dagstuhl.sunsite.rwth-aachen.de/opus/frontdoor.php?source_opus=5342","timestamp":"2024-11-11T02:00:22Z","content_type":"text/html","content_length":"7455","record_id":"<urn:uuid:bacc6ec0-85b2-477d-af2a-f6d7081c14a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00856.warc.gz"}
Testa - Do we really need 36 herrings or other oily fish for 60 omega 3 capsules?
Yes, the amount of fish necessary for 1 fish oil supplement is indeed this high.
1 Testa capsule contains 450mg omega 3, thus 60 capsules contain 27g of omega 3.
1 herring on the Dutch market weighs on average 68g and contains on average 1.1% extractable omega 3. Thus 750mg omega 3 per herring.
If we then divide the 27g of omega 3 of 1 Testa supplement by 750mg of 1 herring one can see that 36 herrings are used. | {"url":"https://testaomega3.freshdesk.com/support/solutions/articles/27000044733-ben%C3%B6tigen-wir-wirklich-36-heringe-oder-anderen-%C3%B6ligen-fisch-f%C3%BCr-60-omega-3-fisch%C3%B6lkapseln-","timestamp":"2024-11-13T13:57:37Z","content_type":"text/html","content_length":"22982","record_id":"<urn:uuid:1bc8c44c-27fc-4aac-8fb9-8eeee3b684c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00043.warc.gz"} |
Statistics & Probability | Math = Love
This blog post contains Amazon affiliate links. As an Amazon Associate, I earn a small commission from qualifying purchases.
For even more math teaching resources, check out my Math Teaching Resource Index.
Algebra & Functions | Geometry & Measurement | Number & Operations | Statistics & Probability | Trigonometry | Calculus
5 Number Summary
5 W’s and an H of Statistics
Calculator Statistics
Categorical vs Quantitative Variables
Confidence Intervals
Data Collection Ideas
Data Displays
Bar Graphs
Box and Whisker Plots
Circle Graphs (Pie Charts)
Dot Plots
Mosaic Plots
Stem and Leaf Plots
Describing Distributions
Hypothesis Tests
Measures of Central Tendency
Measures of Spread
Normal Distribution
Sampling Methods
Linear Regression
Quadratic Regression
Miscellaneous Regression Activities
Venn Diagrams
Miscellaneous Statistics Resources | {"url":"https://mathequalslove.net/resource-index/statistics-probability/","timestamp":"2024-11-09T00:53:29Z","content_type":"text/html","content_length":"181328","record_id":"<urn:uuid:0024afbb-816e-4c1c-af12-d2cc83fc5e98>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00003.warc.gz"} |
Calculus Archives - WonderSc
Articles for tag: Antiderivatives, Area Under a Curve, Calculus, Fundamental Theorem of Calculus, Integral Calculus, Integral Functions, Mathematics
Unlock the power of calculus with integrals: definite and indefinite integration. Master antiderivatives, the fundamental theorem, techniques, and applications.
Discover the rules, applications, and techniques of derivatives trading, leveraging financial instruments for hedging, speculation, and risk management strategies. | {"url":"https://wondersc.com/tag/calculus/","timestamp":"2024-11-03T20:01:47Z","content_type":"text/html","content_length":"83971","record_id":"<urn:uuid:89b976e6-8d15-4a51-8b96-6273b68ddba2>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00767.warc.gz"} |
Parametric versus nonparametric tests
Parametric statistical tests rely on assumptions about the shape of the distribution and the parameters (i.e. mean and standard deviation), and most rely on an assumption of an approximately normal
Nonparametric statistical tests rely on no or few assumptions about the shape or the parameters of the population distribution from which the sample was drawn. If the data are indeed normal, a
nonparametric test will generally have less power for the same sample size compared to the corresponding parametric test.
• Parametric statistical tests are generally more powerful than non-parametric tests when their assumptions are met.
• Most parametric tests require data to be normally distributed.
• Before you choose a test, you should visualise and evaluate your data by plotting the distribution of your sample (histogram and distribution curve) and see whether it resembles a bell-shaped curve.
• You can test for normality using the Shapiro-Wilk test. This is the most powerful test for normality; a p-value > 0.05 is consistent with normality, and you can use a parametric test (a minimal code sketch of this workflow follows below).
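A minimal sketch of this workflow in Python using SciPy; the data below are simulated placeholders, and the calls used (stats.shapiro, stats.ttest_ind, stats.mannwhitneyu) are standard SciPy functions:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
placebo = rng.normal(120, 10, size=30)   # e.g. baseline systolic BP, placebo group
treated = rng.normal(125, 10, size=30)   # e.g. baseline systolic BP, treatment group

# Shapiro-Wilk: p > 0.05 gives no evidence against normality
looks_normal = all(stats.shapiro(g).pvalue > 0.05 for g in (placebo, treated))

if looks_normal:
    result = stats.ttest_ind(placebo, treated)      # parametric two-sample t-test
else:
    result = stats.mannwhitneyu(placebo, treated)   # nonparametric alternative
print(looks_normal, result.pvalue)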
Why don't we always use nonparametric tests?
Nonparametric tests are convenient because we don't need to make any assumptions about the shape or parameters of the distribution of the underlying population from which the sample was drawn. For
example, if the data deviate strongly from the assumptions of a parametric test (i.e. not normally distributed), using a parametric test could lead to incorrect conclusions.
Nonparametric tests have less statistical power
If the data are truly normal, a nonparametric test will generally have less power for the same sample size compared to the corresponding parametric test. This means that a nonparametric test may fail
to correctly reject the null hypothesis when a parametric test will do so - and we may draw the wrong conclusion from our analysis. This also means that if we are planning a study and trying to
determine how many participants to include, a nonparametric test will require a slightly larger sample size to have the same power as the corresponding parametric test.
Nonparametric tests are more difficult to interpret
Interpretation of nonparametric tests can also be more difficult than for parametric tests. Many nonparametric tests use rankings of the values in the data rather than using the actual data. Knowing
that the difference in mean ranks between two groups does not give us an intuitive understanding of the data. On the other hand, knowing that the mean height of women is 10 cm less than that of men
is both intuitive and useful.
Overview - choice of test
Analysis Type | Example | Parametric test | Nonparametric test
Compare means between two distinct/independent groups | Is the mean systolic blood pressure (at baseline) for patients assigned to placebo different from the mean for patients assigned to the treatment group? | Two-sample t-test | Mann-Whitney U test
Compare two quantitative measurements taken from the same individual | Was there a significant change in systolic blood pressure between baseline and the six-month follow-up measurement in the treatment group? | Paired t-test | Wilcoxon signed-rank test
Compare means between three or more distinct/independent groups | If our experiment had three groups (e.g. placebo, new drug #1, new drug #2), we might want to know whether the mean systolic blood pressure at baseline differed among the three groups? | Analysis of variance (ANOVA) | Kruskal-Wallis test
Estimate the strength of correlation between two quantitative variables | Is systolic blood pressure associated with the patient's age? | Pearson coefficient of correlation | Spearman's rank correlation
| {"url":"https://ledidi.com/academy/parametric-versus-nonparametric-tests","timestamp":"2024-11-14T01:41:25Z","content_type":"text/html","content_length":"26020","record_id":"<urn:uuid:a0935be8-9a12-43a6-8d32-2fdd65e3dbec>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00851.warc.gz"}
Mixed Number Calculator
Posted on:
Discover the Mixed Number Calculator, your go-to tool for effortlessly adding, subtracting, multiplying, and dividing mixed numbers. Simplify your math tasks today!
Simplifying Fractions Calculator
Posted on:
Simplify fractions effortlessly using our Simplifying Fractions Calculator. Learn how to reduce fractions to their simplest form and convert improper fractions to mixed numbers.
Mixed Fraction Calculator
Posted on:
Mixed Fraction Calculator: Simplify conversions of mixed numbers to improper fractions. Learn the step-by-step guide and use the calculator for effortless results!
Least Common Denominator Calculator
Posted on:
Simplify adding and subtracting fractions with the Least Common Denominator Calculator. Learn what LCD is, why it’s essential, and how to effectively use this handy tool.
Percent to Fraction Calculator
Posted on:
Easily convert percentages to fractions with our guide and calculator. Enhance your math skills and simplify calculations with ease using step-by-step instructions.
Mastering Precision with a Significant Figures Calculator
Posted on:
Discover precision in your calculations with a Significant Figures Calculator! Learn how to master accuracy in scientific, mathematical, and financial data effortlessly.
Adding and Subtracting Integers with Integer Calculator
Posted on:
Effortlessly add and subtract integers with an Integer Calculator. Learn concepts, input methods, and real-life applications for precise calculations.
Prime Factorization Calculator
Posted on:
Break down numbers into prime components with our Prime Factorization Calculator guide. Learn its importance, applications, and how to use it efficiently.
Unlocking the Power of the Square Root Calculator
Posted on:
Explore the fundamentals of squares and square roots, and discover practical applications using a square root calculator for enhanced problem-solving skills.
Arithmetic and Geometric Sequence Calculator Online Tool
Posted on:
Effortlessly find terms and sums in arithmetic and geometric sequences with our online calculator. Simplify your math tasks and enhance your understanding today! | {"url":"https://calculatorbeast.com/tag/math-tools/","timestamp":"2024-11-10T17:44:12Z","content_type":"text/html","content_length":"136044","record_id":"<urn:uuid:71b3a246-c618-424f-8ce4-41e0d3ae121b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00645.warc.gz"} |
Please some help! AP macro economics
Country Y
(f) Country Y has a real GDP per capita of $75, and it has a population of 2 million. Calculate Country Y's real GDP.
(g) Four years later, the GDP per capita of Country Y is $90. Assume there has been no technological advancement and no increase in physical capital in that time period. Identify a policy that could
lead to this increase.
(h) Calculate the economic growth rate for Country Y over the time period described in part (g). Show your work.
Final answer:
Country Y's real GDP is calculated at $150,000,000, and the growth in GDP per capita over four years can be attributed to improved labor productivity. The economic growth rate for Country Y is 20%
during this period.
Country Y has a real GDP per capita of $75, and with a population of 2 million, its real GDP is calculated by multiplying the per capita amount by the population. This results in a real GDP of
$150,000,000 (75 * 2,000,000).
After four years, the GDP per capita increases to $90 without technological advancement or an increase in physical capital. This could indicate a policy that focused on improving labor productivity,
such as education and healthcare enhancements, to raise the efficiency and effectiveness of the workforce. To calculate the economic growth rate over the period described, use the formula: Growth
Rate = ((Final GDP per capita - Initial GDP per capita) / Initial GDP per capita) * 100. Substituting the given values: Growth Rate = (($90 - $75) / $75) * 100 = 20%.
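A two-line check of the arithmetic above (purely illustrative):

real_gdp = 75 * 2_000_000            # GDP per capita x population = 150,000,000
growth_rate = (90 - 75) / 75 * 100   # percent change in GDP per capita = 20.0
print(real_gdp, growth_rate)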
Please explain the process of calculating Country Y's real GDP and the policy that could lead to an increase in GDP per capita without technological advancement or physical capital increase. To
calculate Country Y's real GDP, you multiply the real GDP per capita of $75 by the population of 2 million, resulting in a real GDP of $150,000,000. A policy that could lead to an increase in GDP per
capita without technological advancement or physical capital increase is focusing on improving labor productivity through initiatives like education and healthcare enhancements to enhance the
workforce's efficiency and effectiveness. | {"url":"https://theletsgos.com/advanced-placement/please-some-help-ap-macro-economics.html","timestamp":"2024-11-13T19:22:36Z","content_type":"text/html","content_length":"22192","record_id":"<urn:uuid:547268d0-2685-4627-9718-c5e4fc8b2b43>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00011.warc.gz"} |
GMAT Number Properties: Pratice Questions - GMAT Point by Cracku
GMAT Number Properties: Pratice Questions
GMAT Number Properties Questions
Number Properties questions constitute a major portion of the Quant section of the GMAT. These questions can come under both Problem Solving and Data Sufficiency types. In this article, we will be
looking into –
1. GMAT Number Properties Questions with Answers
2. 4 tips that can help you ace these questions.
Examples – GMAT Number Properties Questions with Answers and explanations
Question 1:
Which of the following numbers is odd? a = even b = odd
a) ab + a + b + 1
b) a + b – ab – 1
c) a + b – 1
d) 3b + 2a
e) 4a – 6b
ab = even x odd = even
a) even + even + odd + odd = even
b) even + odd – even – odd = even
c) even + odd – odd = even
d) odd + even = odd
e) even + even = even
Hence, Option D is odd.
Question 2:
The largest two-digit number having an odd number of factors is?
Odd number of factors -> Number is a perfect square.
Hence, 81.
Question 3:
If two numbers are co-prime, and both of them are greater than 1, find out the highest possible difference between these numbers, given that both are less than 100.
The smallest possible number is 2. The largest possible number is 99.
They are co-prime.
99 – 2 = 97.
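The same answer can be confirmed by brute force over all valid pairs (illustrative only):

from math import gcd
best = max(b - a for a in range(2, 100) for b in range(a + 1, 100) if gcd(a, b) == 1)
print(best)  # 97, attained by the coprime pair (2, 99)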
Though these examples provide a good sense of what type of GMAT Number Properties questions you can expect, in no way do they represent the exhaustive list of concepts required for the Quantitative
section of the GMAT.
Tips to keep in mind:
1. Try to take some time out from the Problem-Solving Number Properties questions so that you can use that time to solve the tricky Data Sufficiency Number Properties questions.
2. Do not get stuck in a question for long. If you find yourself trapped in a question for long, take a guess and move on.
3. Look out for negation words. For example: Which of the following are NOT possible values of x?
4. Some questions can be solved faster by the use of options. Make sure you don’t solve these questions in a conventional way.
You can check out the Free GMAT Daily Targets on our platform.
Also, check out the Free GMAT Verbal Tests and Quant Tests.
If you are starting your GMAT preparation from scratch, do check out GMATPOINT.
Hope this article was helpful. Wish you all the best for the GMAT. | {"url":"https://gmatpoint.com/blog/gmat-number-properties-questions/","timestamp":"2024-11-07T02:57:30Z","content_type":"text/html","content_length":"62959","record_id":"<urn:uuid:cb46159e-02ca-4b48-942c-3a1bad6e4d4e>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00232.warc.gz"} |
A = {1, 2, 3, 5} and b = {4, 6, 9}. define a relation r from a -Turito
A = {1, 2, 3, 5} and B = {4, 6, 9}. Define a relation R from A to B by R = {(x, y): the difference between x and y is odd; x ∈ A, y ∈ B}. Write R in roster form.
The Cartesian product of A and B is denoted by A X B and is defined as the set of all ordered pairs (a, b) such that a ∈ A and b ∈ B.
The correct answer is: {(1,4),(1,6),(2,9),(3,4),(3,6),(5,4),(5,6)}
We have given the two sets
A = {1, 2, 3, 5} and B = {4, 6, 9}
R = {(x, y): the difference between x and y is odd; x ∈ A, y ∈ B}
First of all we will find the cartesian product of set A and set B
A X B = {(1,4),(1,6),(1,9),(2,4),(2,6),(2,9),(3,4),(3,6),(3,9),(5,4),(5,6),(5,9)}
We have relation R , with condition difference between x and y is odd, in this cartesian product we will find out the points satisfying the condition.
So, R in roaster form is
R = {(1,4),(1,6),(2,9),(3,4),(3,6),(5,4),(5,6)}
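The roster form can also be generated with a short comprehension, added here only as a check:

A = {1, 2, 3, 5}
B = {4, 6, 9}
R = sorted((x, y) for x in A for y in B if abs(x - y) % 2 == 1)
print(R)
# [(1, 4), (1, 6), (2, 9), (3, 4), (3, 6), (5, 4), (5, 6)]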
| {"url":"https://www.turito.com/ask-a-doubt/Maths--q813a8999","timestamp":"2024-11-10T18:54:45Z","content_type":"application/xhtml+xml","content_length":"295719","record_id":"<urn:uuid:bd51b656-f140-4dce-a128-c60b57ed0449>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00467.warc.gz"}
Who can help with SAS Multivariate Analysis assignment regression analysis? | Hire Someone To Take My SAS Assignment
Who can help with SAS Multivariate Analysis assignment regression analysis? Here’s a simple solution: You could start with a step-by-step guide paper and you could look at the available data in SAS
Multivariate Appraisals from the 10 percent of the users. With H3qoMe, you can learn how you can improve and analyze the distribution values of log and random effect in SAS Multivariate Appraisals. I
can tell you of a few who are of interest in SAS Multivariate Attachance regression analysis as SAS Multivariate. I actually started with a step-by-step guide paper and I applied a couple of those
things I’m considering to understand the multivariate data using SAS Multivariate Appraisals. In short, I decided to start towards SAS Multivariate Analysis Scattered Projections from the 10 percent
of the users in SAS Multivariate Appraisals, so that I can get more information about the data. As a result also my knowledge about your SAS Multivariate Appraisals was started. All these pieces are
listed below: As you know, I’m assuming you need to take your initial raw data from the 10 percent of the users and upload it e.g. in H3qoMe, so I will write down the raw data in SAS Multivariate
Appraisals, (e.g. SAS Multivariate Appraisals Data Processing). From there i will upload them, but you can also define next as pre-processing / distribution modification, or raw data (e.g. SAS
Multivariate Appraisals data processing) and I will write down analysis scripts/methods for in SAS Multivariate Appraisals and then I will let you create, edit, upload, and link all of them. I also
think we need some reference in the SAS Multivariate method libraries. The repository contains the HTML source code and we should be able to use it to visualize when data is drawn to a true uniform
distribution. Check out the following websites and screenshots: This post is for our own use only. Comments are welcome and appreciated! Hi there we are having a lot of programming experience, but in
my opinion if you’re having too much traffic then join my blog 😉 Thank you if you think it’s helpful to share, so I’m sending you a link if you would like to contribute 😉 (i’ll try to google it so
you will have 5 stars 😉 🙂 The blog is being created by ZhiyongZW Hi Jintzu! Are you getting content about this website from IIS and have the requirement to construct a dataset, analyze it and upload
it. If you have reference and a reference to your current dataset share the link to IIS (https://www.ISCI.
com/ IIS Data Processing/Download) so don’t hesitate to contact me 🙂 I’ll add the link to IIS as well if you please. Thank You! — Jintzu ☦🏾 — ZhiyongZW 👏🏬🏼👏🏁 — Jo-an-be (@joanbe) July 03, 2019
Jintzu, Thank you for your support! The data must be available immediately upon IIS uploaded. I really recommend you to purchase it or to make some donation. I was already searching for some help
with Datasphere Analysis, but I thought someone would be happy to help with it. For data analysis that requires a large amount of effort for everything to arrive at some perfect result, of course you
can develop a good model and choose which approach will be best for you. You may also be saving money by analyzing your dataset… at least start with the best dataset possible 😉 — IIS Data Hi
Jo-an-be, we are alsoWho can help with SAS Multivariate Analysis assignment regression analysis? This module helps you fill the following matrix into SAS Multivariate Analysis variable, who can help
with assignment regression analysis? and then help you fill the following matrix into SAS Multivariate analysis Assignment regression analysis. Before we can fill out the last ones, we recommend you
to fill out the SAS Multivariate Analysis, chapter 11. Many problems in SAS Mathematica are beyond the scope of our presentation, allowing you to do it in much less number. Nevertheless, we have been
pleased with SAS Multivariate Analysis working well against SAS, allowing you to do you real-time analysis due to its very simplicity and easy comprehension. All Subscripts 1 and 2, below, provide
the first ten common Subscripts, which you are going to load into the SAS Multivariate Analysis module. In SAS Multivariate Analysis assignment regression analysis, SAS performs a combination of 5
calculations: 1. In this application each of 6 subscripts is analyzed by SAS Multivariate Analysis tool. It can be viewed as follows… 1–2 In this section, I provide each of the 6 MATRIX subscripts
which are related to assigned values. 1.1.1 3.1.1 In this section, I present 3.1.1’s subscripts which are related to subroutine assignment.
In this section, I provide some helpful examples for a SAS Mathbook section assigned to user $g$ in SAS Multivariate Analysis assignment regression analysis module. In this section further two tables
below explain how subrstiples are selected. In step 2 we provide user’s selected subscripts. However, for customers, in SAS Mathbook section 3.1. Here are some common Subscripts. 1.2.1 3.2.1 In this
section, I don’t provide as many examples as suggested already, I want to focus on user’s selection, so take care to only provide listed solutions. However I think, it is easier for users to navigate
automatically, than when entering your order form. Users, for example, with SAS-Mathematical Operations code team should enable more choices than SAS method group. In SQL, how to choose one variable
in a matrix without any idea about its structure or order in tables? In Excel, assign variable names to rows with column names, the first three rows are functions, then the last two are values. By
having this same one variable name, it sets as subtable. Table 2 1.2.1 3.2.1 As per the table bottom, you will have a column named $id which is used in SAS Multivariate Analysis decision regression
It might be useful for the SAS team not to include this column name there. More pay someone to do sas homework please, as necessary: (1).2.3 In SAS Multivariate Analysis assignment regression
analysis module, following is the table that is displayed below. A function $a$, which is named “$a$,” is defined as… I have used SAS Multivariate Analysis bitmap. For example, each value could be
defined as a function in this table, instead of representing one of 12 column names. Similarly, it has the following format. 1 name=row+4, number=Row+3, value=$3:1 In SAS Multivariate Analysis
assignment regression analysis, $a$ sets as bitmap in this instance. It can be viewed as follows. 1 name=row+1, number=2, value=4 In SAS Multivariate Analysis assignment regression analysis, $a$ sets
as bitmap in this instance. It can be viewed as follows. In SAS Multivariate Analysis assignment regression analysis, $a$: In this last display, IWho can help with SAS Multivariate Analysis
assignment regression analysis? – by its nature it contains two functions and while one may not be perfect – is it an attractive alternative for regression analysis even though its development have
taken years to develop? So far, we solved the mystery of multivariate analysis through multivariate regression, according to what we hope to do in many of our applications – but of course the primary
task is through this: how can you find the correct result – is by using this form of regression analysis, which is one of the most effective types of analysis to execute if already you have software
already. In any case, there is an overwhelming amount of evidence to suggest that multivariate analysis is most promising for many purposes thanks to the robustness and ease with which in most cases
you can do it. Certainly no other research or practise in the domain of multivariate regression approaches has received so much empirical attention in the last 20 years! This, however, has been
written using multi layer statistical analysis to answer some more important questions, namely Who can help with SAS Multivariate Analysis assignment regression analysis? – by its nature it contains
two functions and while one may not be perfect – is it an attractive alternative for regression analysis even though its development have taken years to develop? To see what may be the result, we do
take a look at multivariate regression over the past 30 years. Furthermore we look at the paper SC1 – that put together here indicates how the concept of multivariate regression can actually be
improved. However we have to tell you first, the significance of multivariate regression in relation to testis used for SAS Multivariate Analysis is less than what is measured in other research since
it is both experimental and theoretical. The results This is a review of what is measured in the literature. The scottbooks I’ve linked to are the ones that are usually followed and that we read in
the papers using the book. However indeed we do not follow the SC1 chapter which does actually contain some results that will be reported here. Here I have translated from scratch the SC1 chapter on
multivariate regression as well as my own interpretation “scottbooks”.
This took place on the first week of July 2012 and it was read in 9 books, then on 31 other books. So it seems that SC1 “scottbooks” are the more reliable interpretation given what they have been
written; but, the method which I translated is admittedly outdated since I have often applied SC1 methodology in the same publication. In any case, then again rather than using the book itself for
the illustration we often use SC1 methodology to extract only results that are useful for the study. The methods and results presented review clearly demonstrate that SC1 (the Scutil 2.0 version) is
something which can be used in many different ways. 1. Multivariate regression applying the standard method is not described in the standard textbooks With this paper (SC2 | {"url":"https://sashelponline.com/who-can-help-with-sas-multivariate-analysis-assignment-regression-analysis","timestamp":"2024-11-07T10:26:02Z","content_type":"text/html","content_length":"131007","record_id":"<urn:uuid:02db56c5-3418-4284-a12f-bc7cb1644a76>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00781.warc.gz"} |
How To Find An Inflection Point
Inflection points identify where the concavity of a curve changes. This is useful for determining the point at which a rate of change begins to slow or accelerate, and in chemistry for locating the equivalence point of a titration. Finding an inflection point requires setting the second derivative equal to zero, solving, and then checking the sign of the second derivative on either side of each solution.
Find the Inflection Point
Take the second derivative of the function of interest. Next, find all values where that second derivative equals zero or does not exist, such as where a denominator equals zero. These two steps identify all candidate inflection points. To determine which candidates are actual inflection points, check the sign of the second derivative on either side of each one. The second derivative is positive where the curve is concave up and negative where it is concave down, so a candidate at which the second derivative changes sign is an inflection point.
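To make the procedure concrete, here is a minimal sketch (not from the original article) that applies these steps to an example function using SymPy; the function choice and the small offsets used in the sign check are illustrative assumptions:

```python
# Minimal sketch (illustrative): find inflection points of f(x) = x**3 - 3*x**2.
from sympy import symbols, diff, solve, sign, Rational

x = symbols('x', real=True)
f = x**3 - 3*x**2                 # example function (assumed for illustration)

f2 = diff(f, x, 2)                # second derivative: 6*x - 6
candidates = solve(f2, x)         # values where the second derivative is zero

eps = Rational(1, 100)            # small offset for a crude sign check
for c in candidates:
    left, right = f2.subs(x, c - eps), f2.subs(x, c + eps)
    if sign(left) != sign(right): # concavity changes -> inflection point
        print(f"inflection point at x = {c}")  # prints: x = 1
# (Values where the second derivative is undefined would need to be
#  checked separately, as the article notes.)
```

For a polynomial the second derivative exists everywhere; for functions with denominators, the values where it is undefined must be examined separately.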
Bush, Joshua. How To Find An Inflection Point last modified March 24, 2022. https://www.sciencing.com/inflection-point-5880255/ | {"url":"https://www.sciencing.com:443/inflection-point-5880255/","timestamp":"2024-11-10T15:37:36Z","content_type":"application/xhtml+xml","content_length":"69306","record_id":"<urn:uuid:dd5f6da1-b1cf-4382-9885-bfb8cab6c45f>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00480.warc.gz"} |
Determining sets, resolving sets, and the exchange property
A subset U of vertices of a graph G is called a determining set if every automorphism of G is uniquely determined by its action on the vertices of U. A subset W is called a resolving set if every
vertex in G is uniquely determined by its distances to... | {"url":"https://synthical.com/article/Determining-sets%2C-resolving-sets%2C-and-the-exchange-property-b9d4348a-ffb4-11ed-9b54-72eb57fa10b3?","timestamp":"2024-11-02T20:53:29Z","content_type":"text/html","content_length":"59339","record_id":"<urn:uuid:670ba2a5-ba15-4184-b81f-52472521418b>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00019.warc.gz"}
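As a side note for readers who want to experiment with the resolving-set definition in the abstract above, here is a minimal, self-contained sketch; the helper names and the 5-cycle example are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch (illustrative): check whether W is a resolving set of a graph,
# i.e. whether every vertex is uniquely determined by its distances to W.
from collections import deque

def bfs_distances(graph, source):
    """Shortest-path distances from `source` in an adjacency-dict graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_resolving(graph, W):
    dist = {w: bfs_distances(graph, w) for w in W}
    vectors = [tuple(dist[w][v] for w in W) for v in graph]  # assumes a connected graph
    return len(set(vectors)) == len(graph)

# Example: the 5-cycle C5.
C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
print(is_resolving(C5, [0, 2]))  # True  -- {0, 2} resolves C5
print(is_resolving(C5, [0]))     # False -- a single vertex does not
```

Checking a determining set is analogous in spirit but requires enumerating automorphisms rather than computing distances, which is beyond this sketch.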
Fractional Hadwiger
Conjecture For every graph
It is well known and easily proved (see [HW]) that
Hadwiger's famous conjecture asserts that $\chi(G) \le \operatorname{had}(G)$ for every graph $G$, where $\operatorname{had}(G)$ is the Hadwiger number of $G$ (the largest $t$ such that $K_t$ is a minor of $G$).
Note that Reed and Seymour [RS] proved that $\chi_f(G) \le 2\operatorname{had}(G)$ for every graph $G$, where $\chi_f$ denotes the fractional chromatic number.
Conjecture (a) is due to Reed and Seymour [RS]. Conjecture (b) is due to Harvey and Wood [HW]. Conjecture (c) is independently due to Harvey and Wood [HW] and Pedersen [P].
Pedersen [P] presents a natural equivalent formulation of Conjecture (c).
*[HW] Daniel J. Harvey, David R. Wood, Parameters tied to treewidth. arXiv:1312.3401, 2013.
[F] Jacob Fox. Constructing dense graphs with sublinear Hadwiger number. J. Combin. Theory Ser. B (to appear).
*[P] Anders Sune Pedersen. Contributions to the Theory of Colourings, Graph Minors, and Independent Sets, PhD thesis, Department of Mathematics and Computer Science University of Southern Denmark,
*[RS] Bruce A. Reed, Paul D. Seymour, Fractional colouring and Hadwiger's conjecture. J. Combin. Theory Ser. B, 74(2), 147-152.
* indicates original appearance(s) of problem. | {"url":"http://www.openproblemgarden.org/op/fractional_hadwiger","timestamp":"2024-11-01T20:02:45Z","content_type":"application/xhtml+xml","content_length":"14698","record_id":"<urn:uuid:2852f7d7-efc8-410e-b63c-0ff32b0396b6>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00358.warc.gz"} |