text
stringlengths 100
957k
| meta
stringclasses 1
value |
|---|---|
# Puzzle Hunt 02: Echoes of Change
This is part two in my clichéd treasure island puzzle series. The story carries directly on from part one.
You take an incredulous look at the map.
spoiler for part one
"X Marks the spot!?"
"That's a bit of a cliché. "Do you think that means Cross-bone Cliffs?"
"Not a chance." replies your Dad. "That place was barren. Nothing there other than rocks and bird crap. I think the map is just messing with you. Leave it."
But you can't leave it, can you? What sort of story would this be if you just ignored it and binned the map? Terrible. Through bribery and alcohol you convince your best mate, Frank, and his sister, Joan, to join you on an expedition to the island, which just so happens to be within sailing distance, and Joan just so happens to own a yacht. Convenient.
You land on the north-east of the island, on the beach by the echo caves. The three of you trudge up to the mouth of the cave and peer into the uninviting blackness. The caves disappear steeply down into the side of a hill, heading inland. You stand at the caves mouth and decide to test the relevance of the name. You holler into the void: "BOOM!" The cave seems to swallow up the shout, but a few seconds later a deep echo returns.
"BOOT!"
"That's odd. Not exactly what I said, but I suppose close enough."
"JOAN!" shouts Joan. A little egotistically.
"LOAN!" Comes the echo.
"Huh." She muses. "Strange."
It seems clear that the cave is twisting your words. But why? Is it trying to tell you something? To determine what it is saying, you shout different words and record the echo.
The clues below hint to what was spoken and what was echoed. Can you figure out the specifics and determine what the cave is trying to say?
$$\begin{array} {|l|l|}\hline \textbf { Shout } & \textbf { Echo } \\ \hline \text { Past Curfew } & \text { Spike Drink } \\ \hline \text { Vaccine Injection } & \text { Chimney Deposit } \\ \hline \text { Hearing Organs } & \text { Red Planet } \\ \hline \text { First Prize } & \text { Castrate Horse } \\ \hline \text { Heavenly Body } & \text { Fly High } \\ \hline \text { High Temperature } & \text { Without Ice } \\ \hline \text { Road Curve } & \text { Wrap Tightly } \\ \hline \text { Bacon Fat } & \text { Touch Down } \\ \hline \text { Buy and... } & \text { Aquatic Clapper } \\ \hline \text { Supply Meal } & \text { Ward Off } \\ \hline \text { Money Maker } & \text { Watch Temporarily } \\ \hline \text { Sharp Taste } & \text { Piece of } \\ \hline \text { Stocking stuffers } & \text { Snooker Surface } \\ \hline \text { Carnival attraction } & \text { Personal Assistant } \\ \hline \text { Lightly Scorch } & \text { Calendars Lifespan } \\ \hline \end{array}$$
I'm having fun with these tables, so if it doesn't show for any reason here is a CSV of the same:
Shout,Echo
Past Curfew,Spike Drink
Vaccine Injection,Chimney Deposit
Hearing Organs,Red Planet
First Prize,Castrate Horse
Heavenly Body,Fly High
High Temperature,Without Ice
Bacon Fat,Touch Down
Supply Meal,Ward Off
Money Maker,Watch Temporarily
Sharp Taste,Piece of
Stocking stuffers,Snooker Surface
Carnival attraction,Personal Assistant
Lightly Scorch,Calendars Lifespan
• Calendars or Calanders? – Omega Krypton Nov 6 '19 at 1:23
• @OmegaKrypton - Spelling is decidedly not my strong point :) Corrected it now, thanks. – Johnson Nov 6 '19 at 1:29
Arranging the answers in columns, with the relevant letters next to them:
Yielding
Come on in and play.
• well done! +1... – Omega Krypton Nov 6 '19 at 1:41
• Great work @hdsdv - quickly solved, on the money. – Johnson Nov 6 '19 at 2:26
Logic:
What you shout is a crossword clue. Find the answer, change one letter, and the echo is a crossword clue of the new word. The changed letter is in Caps.
Past Curfew, Spike Drink
laTe, laCe
Vaccine Injection, Chimney Deposit
sHot, soOt
Hearing Organs, Red Planet
Ears, Mars
First Prize, Castrate Horse
gOld, gEld
Heavenly Body, Fly High
High Temperature, Without Ice
Heat, Neat
bEnd, bInd
Bacon Fat, Touch Down
laRd, laNd
Supply Meal, Ward Off
feEd, feNd
Money Maker, Watch Temporarily
Sharp Taste, Piece of
Stocking stuffers, Snooker Surface
feEt, feLt
Carnival attraction, Personal Assistant
Aide, Ride
Lightly Scorch, Calanders Lifespan
Sear, Year
Then the answer can be extracted by
looking for the changed letter of the new word, i.e. COME?NIN?N??LRY
• Nice one @OmegaKrypton - definitely on the right track. – Johnson Nov 6 '19 at 1:41
• +1 from me too - sorry, I didn't see this until after I posted. – hdsdv Nov 6 '19 at 1:58
|
{}
|
### 4.3.2. Format of an invariant
As we previously saw, we have graph invariants that hold for any digraph as well as tighter graph invariants for specific graph classes. As a consequence, we partition the database in groups of graph invariants. A group of graph invariants corresponds to several invariants such that all invariants relate the same subset of graph parameters and such that all invariants are variations of the first invariant of the group taking into accounts the graph class. Therefore, the first invariant of a group has no precondition, while all other invariants have a non -empty precondition that characterises the graph class for which they hold.
EXAMPLE: As a first example consider the group of invariants denoted by Proposition 68, which relate the number of arcs $\mathrm{𝐍𝐀𝐑𝐂}$ with the number of vertices of the smallest and largest connected component (i.e., $\mathrm{𝐌𝐈𝐍}_\mathrm{𝐍𝐂𝐂}$ and $\mathrm{𝐌𝐀𝐗}_\mathrm{𝐍𝐂𝐂}$).
$\begin{array}{cc}& \mathrm{𝐌𝐈𝐍}_\mathrm{𝐍𝐂𝐂}\ne \mathrm{𝐌𝐀𝐗}_\mathrm{𝐍𝐂𝐂}⇒\mathrm{𝐍𝐀𝐑𝐂}\ge \mathrm{𝐌𝐈𝐍}_\mathrm{𝐍𝐂𝐂}+\mathrm{𝐌𝐀𝐗}_\mathrm{𝐍𝐂𝐂}-2+\hfill \\ & \left(\mathrm{𝐌𝐈𝐍}_\mathrm{𝐍𝐂𝐂}=1\right)\hfill \end{array}$
$\begin{array}{cc}& \mathrm{𝚎𝚚𝚞𝚒𝚟𝚊𝚕𝚎𝚗𝚌𝚎}:\mathrm{𝐌𝐈𝐍}_\mathrm{𝐍𝐂𝐂}\ne \mathrm{𝐌𝐀𝐗}_\mathrm{𝐍𝐂𝐂}⇒\hfill \\ & \mathrm{𝐍𝐀𝐑𝐂}\ge \mathrm{𝐌𝐈𝐍}_{\mathrm{𝐍𝐂𝐂}}^{2}+\mathrm{𝐌𝐀𝐗}_{\mathrm{𝐍𝐂𝐂}}^{2}\hfill \end{array}$
On the one hand, since the first rule has no precondition it corresponds to a general graph invariant. On the other hand the second rule specifies a tighter condition (since $\mathrm{𝐌𝐈𝐍}_{\mathrm{𝐍𝐂𝐂}}^{2}+\mathrm{𝐌𝐀𝐗}_{\mathrm{𝐍𝐂𝐂}}^{2}$ is greater than or equal to $\mathrm{𝐌𝐈𝐍}_\mathrm{𝐍𝐂𝐂}+\mathrm{𝐌𝐀𝐗}_\mathrm{𝐍𝐂𝐂}-2+\left(\mathrm{𝐌𝐈𝐍}_\mathrm{𝐍𝐂𝐂}=1\right)\right)$, which only holds for a final graph that is reflexive, symmetric and transitive.
EXAMPLE: As a second example, consider the following group of invariants corresponding to Proposition 51, which relate the number of arcs $\mathrm{𝐍𝐀𝐑𝐂}$ to the number of vertices $\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}$ according to the arc generator (see Figure 2.2.4) used for generating the initial digraph:
$\mathrm{𝐍𝐀𝐑𝐂}\le {\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}}^{2}$
$\mathrm{𝐚𝐫𝐜}_\mathrm{𝐠𝐞𝐧}=\mathrm{𝐶𝐼𝑅𝐶𝑈𝐼𝑇}:\mathrm{𝐍𝐀𝐑𝐂}\le \mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}$
$\mathrm{𝐚𝐫𝐜}_\mathrm{𝐠𝐞𝐧}=\mathrm{𝐶𝐻𝐴𝐼𝑁}:\mathrm{𝐍𝐀𝐑𝐂}\le 2·\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}-2$
$\mathrm{𝐚𝐫𝐜}_\mathrm{𝐠𝐞𝐧}=\mathrm{𝐶𝐿𝐼𝑄𝑈𝐸}\left(\le \right):\mathrm{𝐍𝐀𝐑𝐂}\le \frac{\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}·\left(\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}+1\right)}{2}$
$\mathrm{𝐚𝐫𝐜}_\mathrm{𝐠𝐞𝐧}=\mathrm{𝐶𝐿𝐼𝑄𝑈𝐸}\left(\ge \right):\mathrm{𝐍𝐀𝐑𝐂}\le \frac{\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}·\left(\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}+1\right)}{2}$
$\mathrm{𝐚𝐫𝐜}_\mathrm{𝐠𝐞𝐧}=\mathrm{𝐶𝐿𝐼𝑄𝑈𝐸}\left(<\right):\mathrm{𝐍𝐀𝐑𝐂}\le \frac{\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}·\left(\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}-1\right)}{2}$
$\mathrm{𝐚𝐫𝐜}_\mathrm{𝐠𝐞𝐧}=\mathrm{𝐶𝐿𝐼𝑄𝑈𝐸}\left(>\right):\mathrm{𝐍𝐀𝐑𝐂}\le \frac{\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}·\left(\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}-1\right)}{2}$
$\mathrm{𝐚𝐫𝐜}_\mathrm{𝐠𝐞𝐧}=\mathrm{𝐶𝐿𝐼𝑄𝑈𝐸}\left(\ne \right):\mathrm{𝐍𝐀𝐑𝐂}\le {\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}}^{2}-\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}$
$\mathrm{𝐚𝐫𝐜}_\mathrm{𝐠𝐞𝐧}=\mathrm{𝐶𝑌𝐶𝐿𝐸}:\mathrm{𝐍𝐀𝐑𝐂}\le 2·\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}$
$\mathrm{𝐚𝐫𝐜}_\mathrm{𝐠𝐞𝐧}=\mathrm{𝑃𝐴𝑇𝐻}:\mathrm{𝐍𝐀𝐑𝐂}\le \mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}-1$
|
{}
|
# Topic of the Month: March
The March 2021 topic of the month is about nature and its beauty.
Greenery and colorfulness , growth and emergence , organic patterns , beautiful scenery , life simulations .
i'll start making a project for the nature collection soon. also, thanks for doing it on time! (
)
Does this count?
Probably
All I really hope for is luck this month.
BTW why dont you pin this @bromagosa
Good idea
Might join in on this, but I got to finish my current MASSIVE project before I can focus on anything else, might sneak it in though
same here, i am making a huge one, too! but ive already added my one to the totm collection Terrain Generator
I thought you pinned this @bromagosa
Take a look
The topic of the month is at the bottom now
Yay its back to the top @bromagosa
probably because you already read it so now it's unpinned for you
I created something, but it's not really nature, but it was actually meant to be a rain animation but I messed up. Plus there is still raindrops in there, so it should still count as nature XD
Btw @coder2195snap, you don't have to @ mention the creator of this topic when you reply in their topic, they are already gonna get a notification anyways, so instead of @ mentioning the creator of the topic, you can just call them by their username if you reply reply to them in their topic.
Plus it's kinda useless to @ mention the creator of the topic, in their topic.
... except that it's easier to type @fu than to spell out people's long names. :~P
I just copy+paste the long names :) copy+pasting things is useful
|
{}
|
Lakhmir Singh Manjit Kaur Chemistry 2019 Solutions for Class 9 Science Chapter 10 Model Test Paper 5 are provided here with simple step-by-step explanations. These solutions for Model Test Paper 5 are extremely popular among Class 9 students for Science Model Test Paper 5 Solutions come handy for quickly completing your homework and preparing for exams. All questions and answers from the Lakhmir Singh Manjit Kaur Chemistry 2019 Book of Class 9 Science Chapter 10 are provided here for you for free. You will also love the ad-free experience on Meritnation’s Lakhmir Singh Manjit Kaur Chemistry 2019 Solutions. All Lakhmir Singh Manjit Kaur Chemistry 2019 Solutions for class Class 9 Science are prepared by experts and are 100% accurate.
Question 1:
Name the scientist who first studied living cell?
Living cells were first studied by Antonie Van Leeuwenhoek.
Question 2:
What kills bacteria in our food in the mouth and stomach?
In mouth, salivary lysozyme kills the bacteria whereas in stomach, hydrochloric (HCl) acid kills bacteria.
Question 3:
If a balloon filled with air and its mouth untied, is released with its mouth in the downward direction, it moves upwards. Why?
The air filled in the balloon ejects at very high speed, which exerts an upward thrust force on the balloon to conserve the momentum of the whole system. Hence, the balloon moves in an upward direction.
Question 4:
When a ball is thrown vertically upwards, its velocity goes on decreasing. What happens to its potential energy as its velocity becomes zero?
When a ball is thrown in an upward direction, its velocity decreases continuously but its potential energy goes on increasing. As the ball reaches to a maximum height, the kinetic energy gradually gets converted into potential energy of the ball. At the highest point where the velocity of the ball becomes zero, the potential energy of the ball becomes maximum. Because all of the kinetic energy gets converted into potential energy.
Question 5:
Why it is difficult to develop vaccines for some diseases?
Certain infectious agents such as viruses have a relatively simple structure, but they are able to undergo rapid modifications. These modifications allow these infectious agents to easily evade the immune system and most of the vaccines fail to work against these viruses. Thus, certain infections cannot be effectively treated by vaccination.
Question 6:
If 2 mL of acetone is present in 45 mL of its aqueous solution, calculate the concentration of this solution.
Volume of solute (acetone) = 2 mL
Volume of solution (solute+solvent) = 45 mL
Question 7:
What mass of nitrogen, N2, will contain the same number of molecules as 1.8 g of water, H2O? (Atomic masses: N = 14 u; H = 1 u; O = 16 u)
Now,
Mass of 1 mole of nitrogen = Molar mass of nitrogen = 14 g
Mass of 0.1 mole of nitrogen = 14 $×$ 0.1
Mass of 0.1 mole of nitrogen = 1.4 g
So, 1.4 g of nitrogen contains an equal number of molecules as present in 1.8 g of water.
Question 8:
(b) Give any two uses of radioactive isotopes.
(a) Radioactive isotopes are those isotopes of an element which have an unstable nucleus. Because of this, the radioactive isotopes undergo radioactive decay by emitting alpha, beta, and gamma radiation to gain the stability of the nucleus. For eg, O15, Co60, etc.
(i) Radioactive isotopes are used to determine the age of rocks and minerals.
(ii) It is used to treat cancer, detect blood clots and diagnose thyroid.
Question 9:
A ball X of mass 1 kg travelling at 2 m/s has a head-on collision with an identical ball Y at rest. X stops and Y moves off. Calculate the velocity of Y after the collision.
By conservation of momentum,
Total momentum before the collision = Total momentum after the collision
Mass of the ball X, MX = 1 kg
The velocity of the ball X before the collision, uX = 2 m/s
The velocity of the ball X after the collision, vX = 0 m/s
Mass of the ball Y, MY =1 kg
The velocity of the ball Y before the collision, uY = 0 m/s
The velocity of the ball Y after the collision, vY = v
Using conservation of momentum, ${M}_{X}{u}_{X}+{M}_{Y}{u}_{Y}={M}_{X}{v}_{X}+{M}_{Y}{v}_{Y}$
Hence, the velocity of ball Y, after the collision is 2 m/s.
Question 10:
Name these forces:
(a) the upward push of water on a submerged object
(b) the force which wears away two surfaces as they move over one another
(c) the force which pulled the apple off Isaac Newton's tree.
(a) When a body is submerged in water, it feels an upward push. That upward push of the water on the body is known as Buoyant force.
(b) When two rough surfaces come in contact, they move over one another, the force involved is known as Frictional force.
(c) The force which pulled the apple off is the gravitational force of attraction of the earth. It always acts downwards, towards the center of the earth.
Question 11:
When a ball is thrown inside a moving bus, does its kinetic energy depend on the speed of the bus? Explain.
Yes, the kinetic energy of the ball depends upon the speed of the bus. When we are sitting on a bus, we are actually moving along with the bus with a speed equal to the speed of the bus. So, if we are holding a ball, that already means that the ball also has the same speed as ours or the same as the bus. This simply means that the ball and we both are a part of the system, which is moving at the same speed as the speed of the bus. So, if the speed of the bus changes then the speed of the ball changes too.
When we throw a ball inside the bus with some speed, then we are just going to provide some extra speed to the ball along with the speed of the bus. But this can only be observed by a person standing outside the bus, i.e. a stationary observer (with respect to the earth). If the observer would be inside the bus, then he would also become a part of the system which is moving with a speed same as the speed of the bus, then the observer will observe the ball with respect to him.
Hence, we can say that the kinetic energy of the ball depends on the speed of the bus but only if we are a stationary observer (with respect to earth)
Question 12:
What factors may be responsible for losses of grains during storage?
Loss of grains during storage can occur because of following factors:
1. Biotic factors: During storage, insects, fungi, mites and rodents may grow. The growth of these organisms can lead to grain spoilage, which eventually leads to loss of grains.
2. Abiotic factors: Certain abiotic factors such as temperature and moisture content can promote the growth of microorganisms. This leads to grain spoilage as well as grain loss.
Question 13:
Match the contents of the column I and II
Column I Column II 1. Photosynthetic tissue a. Transport 2. Epithelial tissue b. Protection 3. Connective tissue c. Message 4. Blood tissue d. Feeding 5. Nervous tissue e. Strength 6. Collenchyma f. Division 7. Bone g. Flexibility 8. Meristem h. Calcium and phosphorus
The correct match for the given terms is as follows:
Column I Column II 1. Photosynthetic tissue d. Feeding 2. Epithelial tissue b. Protection 3. Connective tissue g. Flexibility 4. Blood tissue a. Transport 5. Nervous tissue c. Message 6. Collenchyma e. Strength 7. Bone h. Calcium and phosphorous 8. Meristem f. Division
Question 14:
List a few flight adaptations in birds.
Some of the flight adaptations in birds are as follows:
1. Some of the bones of birds are hollow, which makes them lightweight and allows them to fly.
2. The forelimbs of birds have been modified into wings, which assist in flying.
3. The body of birds is streamlined, which decreases air resistance.
4. The flight muscles in birds are very strong.
Question 15:
How do organisms contribute in the formation of soil?
Living organism contribute to the formation of soil by performing biological weathering. In biological weathering, the living organisms secrete chemical secretions that help in the breakdown of large rocks into smaller fragments. Typical examples of organisms that help in soil formation are Lichens and Bryophytes.
Lichens induce biological weathering by extracting minerals from the rocks, which results in the formation of small cervices that serve as the site for soil formation.
Bryophytes can readily grow in the cervices formed by lichens and further deepen the cervices. These deeper cervices form cracks. In due course of time, because of growth of bigger plants, rocks eventually pulverise and form soil.
Question 16:
(a) Define the term 'latent heat of fusion' of a solid. How much is the latent heat of fusion of ice?
(b) Draw a labelled diagram of the experimental set-up to study the latent heat of fusion of ice.
(a) Latent heat of fusion is the heat required to change 1 kg of solid substance into the liquid state at the melting point of the substance. For eg. amount of heat required to melt ice at 0 $°$C into the water at 0 $°$C, will be known as the latent heat of fusion of ice.
For ice, the latent heat of fusion is 334 kJ kg-1. This means that 334 kJ of heat is required to convert 1 kg of ice at 0 $°$C into 1 kg of water at 0 $°$C.
(b) Experimental setup to study the latent heat of fusion of ice:
Question 17:
(a) What are valence electrons? Where are valence electrons situated in an atom?
(b) What is the number of valence electrons in the atoms of an element having atomic number 13? Name the valence shell of this atom.
(a) Valence electrons are electrons present in the outermost orbital of the atom. They basically represent the number of electrons which can participate in bond formation.
Valence electrons are present in the outermost orbit.
(b) Electronic configuration of the element with atomic number 13 is 2, 8, 3.
So, the valence electrons present in this element are 3.
The valence shell of the atom will be M.
Question 18:
(a) State and explain the law of conservation of energy with an example.
(b) Explain how, the total energy a swinging pendulum at any instant of time remains conserved. Illustrate your answer with the help of a labelled diagram.
(a) In physics, the law of conservation of energy states that the total energy of an isolated system remains constant or conserved over time. It also says that the energy can neither be created nor be destroyed, but it can be changed from one form to another.
The law of conservation of energy can be seen in the following example of energy transference:
In a hydroelectric plant, the potential energy of the water stored at some height is converted into kinetic energy by setting it into motion. Then the water falls upon the turbine and the turbine rotates. So, the kinetic energy of the water gets converted into mechanical energy. That mechanical energy is used to generate electricity out of a generator attached to the turbine. Hence, the total energy remains conserved, but it changed from one form to another throughout the process.
(b) When a pendulum is stationary, it remains at its mean position or center position as shown in the figure. If the bob of the pendulum is lifted towards one side, then it gains some potential energy. Now, if the bob is left from one of the extreme positions then it will start swinging about the mean position, up to left extreme position then back again at the right extreme position passing through mean position every time.
At both the extremes, the energy possessed by the bob is purely potential. But, when the bob goes down, the potential energy gradually converts into kinetic energy and at the mean position where potential energy becomes zero, the bob has the maximum kinetic energy. That simply means the whole of the potential energy gets converted into kinetic energy at the mean position. In between the motion, the bob has both kinetic and potential energy but the total energy always remains constant or conserved throughout the motion.
Question 19:
(a) Explain the terms 'crests' and 'troughs of a wave? What type of waves consist of crests and troughs?
(b) The flash of a gun is seen by man 3 seconds before the sound is heard. Calculate the distance of the gun from the man (Speed of sound in air is 332 m/s).
(a) When a wave travels through a medium, the particles of the medium oscillate about their mean position, up and down. The maximum displacement of the particles above the mean position is referred to as a 'Crest' and the maximum displacement below the mean position is referred to as a 'Trough'. All the transverse waves consist of crests and troughs.
(b) Let the distance of the man from the gun = d
Speed of the light =
Speed of the sound =
Let the time taken by the sound to reach the man = = $\frac{d}{332}$ .......(1)
Then the time taken by the light to reach the man = t$-$3 = $\frac{d}{3×{10}^{8}}$ ........(2)
Putting the value of 't' from equation (1) into equation (2), we get,
Hence, the distance of the man from the gun is 1000 m.
Question 20:
Write short notes on
(a) Tuberculosis; (b) Polio.
(a) Tuberculosis: Tuberculosis is caused by the infection of bacterium called Mycobacterium tuberculosis. This disease can be transmitted from one organism to another through indirect or direct contact. The common symptoms of tuberculosis include weakness, loss of apetite, weight loss and fever. This disease is treated by the administration of antibiotics such as Streptomycin and Rifampicin.
(b) Polio: Polio is a viral disease which primarily affects the nervous system. This disease is caused by polio virus. This virus destroys the motor neurons present in the spinal cord which regulate muscles. The destruction of these neurons result in a paralytic condition. Early symptoms of polio include headache and soar throat. Polio can be prevented by administration of Oral Polio Vaccine (OPV).
Question 21:
Explain the following:
(i) Eutrophication; (ii) Biomagnification.
(i) Eutrophication: Eutrophication is characterised by the excessive algal growth in water bodies such as ponds. This excessive growth is primarily caused due to the over enrichment of the water body with nutrients which occur due to agricultural runoff and water pollution. The algal growth depletes the oxygen content of the lake and severely affects the lives of aquatic organisms.
(ii) Biomagnification: Biomagnification is a process in which the concentration of a non-biodegradable substance such as insecticide progressively increases from one trophic level to another. These non-biodegradable substances primarily accumulate in the fatty tissues of organisms and when an organism of higher trophic level consumes another organism, these substances are propagated to next trophic level.
Question 22:
If the back of your hand is moistened with alcohol, you will find that it rapidly becomes dry. Why is it that while it is drying, your hand feels cool?
The alcohol absorbs latent heat of vaporization from the surface of our hands and evaporates. Drawing heat and therefore, cooling our hand in the process.
Question 23:
A student wants to have 3.011 × 1023 atoms each of magnesium and carbon elements. For this purpose, how much magnesium and carbon will he weigh in grams?
12 g of carbon and 24.3 g of magnesium, each contains 6.022$×$1023 number of atoms.
Now
Let X g of carbon have 3.011$×$1023 number of atoms.
Therefore,
So, 6 g of carbon contains 3.011$×$1023 number of atoms.
Again,
Let X g of magnesium have 3.011$×$1023 number of atoms.
Therefore,
So, 12 g of magnesium contains 3.011$×$1023 number of atoms.
Hence, he will weigh 12 g of magnesium and 6 g of carbon.
Question 24:
Show by means of a graphical method that: v = u + at where the symbols have their usual meanings.
Let's suppose 'u' is the initial velocity of a body and 'v' is the final velocity of a body after time interval 't'. As you can see in the figure given below.
Then, acceleration 'a' of the body can be written as,
= Slope of the velocity-time graph
Question 25:
The echo of a sound is heard after 5 seconds. If the speed of sound in air be 342 m/s, calculate the distance of the reflecting surface.
An echo is heard when a sound hits any hard surface and the reflected sound reaches back to our ears.
Let the distance of the reflecting surface = d
Speed of the sound in air = 342 m/s
Time taken by an echo to be heard = 5 seconds
Total distance travelled by the sound waves = 2d
Question 26:
What will happen if all RBCs are removed from the blood?
Red Blood Cells (RBCs) contain haemoglobin protein, which is responsible for transportation of oxygen throughout the body. In case all RBCs are removed from the blood, the oxygen transport capacity of blood will be severely affected. The tissues and organs will not get the required amount of oxygen and the organism will eventually die.
Question 27:
"We can treat an infectious disease by killing microbe". Justify the statement with suitable examples.
|
{}
|
0
2466
Top-15 RRB NTPC Physics Problems PDF
Download Top-15 Expected Physics Problems for RRB NTPC Stage-1 exam. Go through the video of Repeatedly asked and most important RRB NTPC physics problems questions. These questions are based on previous year questions in Railways and other Govt exams.
Practice:
Practice 4500+ Solved Questions for RRB NTPC
Question 1: If a person moves a trolley for a distance of 10 m with a force of 50 N, then the work done is:
a) 0.2 J
b) 5 J
c) 20 J
d) 500 J
Question 2: A mass of 20 kg is at a height of 8 m above the ground. Then the potential energy possessed by the body is: [Given g = 9.8$ms^{-2}$]
a) 1568 J
b) 1568 C
c) 1568 W
d) 1568 N
Question 3: The focal length of a lens is 50 centimetre. Its power is:
a) 50 dioptre
b) 1 dioptre
c) 10 dioptre
d) 2 dioptre
Question 4: The potential energy (P.E.) of a body at a certain height is 200 J. The kinetic energy possessed by it when it just touches the surface of the earth is:
a) zero
b) = P.E.
c) <P.E.
d) >P.E.
Question 5: A sound wave has a frequency of 3.5 kHz and wave length 0.1 m. How long will it take to travel 700 m?
a) 3.0 s
b) 1.5 s
c) 2.0 s
d) 1 s
Question 6: A ball thrown up vertically returns to the ground after 10 second. Find the velocity with which it was thrown up? (if g = 10 m/s2).
a) 120 m/s
b) 50 m/s
c) 600 m/s
d) 60 m/s
Question 7: An object is placed 30 cm before a concave mirror of focal length of 20 cm to get a real
image. What will be the distance of the image from the mirror?
a) 60 cm
b) 20 cm
c) 30 cm
d) 40 cm
Question 8: A certain house hold has consumed 320 units of energy during a month. How much energy is this in joules?
a) $9 \times 10^{8} J$
b) $5 \times 10^{8} J$
c) $10 \times 10^{5} J$
d) $1152 \times 10^{6} J$
Question 9: Calculate the work done by the force of gravity when satellite moves in an orbit of
radius 40,000 km around the earth.
a) 8,000 J
b) 4,00,000 J
c) 0 J
d) 4,000 J
Question 10: An object of 1.2 cm height is placed 30 cm before a concave mirror of focal length of 20 cm to get a real image at a distance of 60 cm from the mirror. What is the height of the image formed?
a) -3.6 cm
b) -2.4 cm
c) 1.2 cm
d) 2.4 cm
Question 11: The radius of curvature of a concave mirror is 30 cm. Following Cartesian Sign Convention, its focal length is expressed as:
a) -30 cm
b) -15 cm
c) +30 cm
d) +15 cm
Question 12: Calculate the current flowing through a resistor of 10 ohms when potential difference of 140V is applied across it.
a) 14 Amperes
b) 140 Amperes
c) 1400 Amperes
d) 1.4 Amperes
Question 13: A transformer has 1000 primary turns. It is connected to 250 volts A.C. supply. Find the number of secondary turns to get secondary voltage of 400 volts.
a) 1600
b) 625
c) 100
d) 1250
Question 14: The time period of a vibrating body is 0.04s, then the frequency of the wave is……
a) 25HZ
b) 20Hz
c) 250Hz
d) 200Hz
Question 15: The power of a lens is —2.5 D. The type of lens and its focal length are respectively:
a) Convex, -0.40 m
b) Concave, -0.40 m
c) Concave, 0.40 m
d) Convex, 0.40 m
The velocity of the soundwave is wavelength*frequency = 3500*0.1 = 350 m/s
Hence, the amount of time it takes to travel 700m is 700/350 = 2 seconds.
The ball comes to a stop in the air after 10/2 = 5 seconds.
Hence, the initial velocity of the ball is 5*10 = 50 m/s
Given, height of the object = 1.2 cm
Focal length f = -20 cm (focal length is negative for a concave mirror)
Distance of the image v = -60 cm
We know that
$\dfrac{1}{u} + \dfrac{1}{v} = \dfrac{1}{f}$
Here, u is object distance, v is image distance, f is focal length
$\dfrac{-1}{60} + \dfrac{1}{u} = \dfrac{-1}{20}$
$\dfrac{1}{u} = \dfrac{1}{60} – \dfrac{1}{20}$
$\dfrac{1}{u} = \dfrac{-1}{30}$
=> u = -30 cm
We know that,
$\dfrac{\text{Height of the image}}{\text{Height of the object}} = \dfrac{-v}{u}$
$\dfrac{\text{Height of the image}}{1.2} = \dfrac{-(-60)}{-30}$
Height of the image = -2.4 cm
Focal length is half of radius.
Then, f = 30/2 = 15 cm
For a concave mirror, focal length will be negative. Hence, f = -15 cm
|
{}
|
# Must the action be a Lorentz scalar?
Page 580, Chapter 12 in Jackson's 3rd edition text carries the statement:
From the first postulate of special relativity the action integral must be a Lorentz scalar because the equations of motion are determined by the extemum condition, $\delta A = 0$
Certainly the extremeum condition must be an invariant for the equation of motion between $t_1$ and $t_2$, whereas I don't see how the action integral must be a Lorentz scalar. Using basic classical mechanics as a guide, the action for a free particle isn't a Galilean scalar but still gives the correct equations of motion.
First, observe that although the non-relativistic Lagrangian is not invariant. It changes by a total derivative, thus the equations of motions remain invariant. The reason of the difference between the Lorentzian and the Galilean cases is that the group action of the Lorentz group on the classical variables (positions and momenta) is a by means of a true representation, while in the case of the Galilean group the representation is projective. In the Language of geometric quantization, $exp(i \frac{S}{\hbar})$, where $S$ is the action is a section in $L \otimes \bar{L}$, where $L$ is the prequantization line bundle and $\bar{L}$ its dual. In other words, the action needs not be a scalar, only an exprssion of the form: $\bar{\psi}(t_2)exp(i \frac{S(t_1, t_2)}{\hbar})\psi(t_1)$, where $\psi(t)$ is the wavefunction at time $t$ and $S(t_1, t_2)$ is the classical action between $t_1$ and $t_2$. The reason that the representation in the Galilean case is projective is related to the nontriviality of the cohomology group $H^2(G, U(1))$ in the Galilean case in contrast to the Lorentz case. I have given a more detailed answer on a very similar subject in my answer to Anirbit: Poincare group vs Galilean group and in the comments therein.
• @lalala Unlike the Galilean group, in the relativistic case the Poincaré group in which the Lorentz group is a subgroup has a vanishing second group cohomology group $H^2(G, U(1))=0$. This is elaborated explicitly by Nesta van der Schaaf in (section 5.2.): math.ru.nl/~landsman/Nesta.pdf. – David Bar Moshe Feb 14 '18 at 9:48
|
{}
|
# LaTeX math support ?
I'm hoping we can get the same math support that other SE sites (math.SE in particular) are making use of. Does anyone know how to enable this ?
Here's a test to see if the current system works by default: $t^2$
• Without LaTeX support, I would probably heavily favour MathOverflow instead. Aug 16 '10 at 21:52
• The other sites do have the support, so it's not technically difficult. I've also asked a question on meta.stackoverflow about this. Aug 16 '10 at 21:54
• After just a few questions and answers, I am ready to buckle: dear moderators, PLEEEEASE, pretty please, pretty please with sprinkles and unicorns on top, turn on TeX support! How on earth is one supposed to convey anything in TCS without at least a few Greek letters, reasonable-looking sub/superscripts, and summation/binomials/ceiling&floor/top&bottom/arrows/models/turnstile/subset/square-root/infinity? Aug 17 '10 at 1:26
• András, it took a few days for all the other sites, so we might need to be a little patient :). At least technically it seems merely to be a matter of turning on a flag for this subdomain. Aug 17 '10 at 3:19
• Just posted a question on meta.math: meta.math.stackexchange.com/questions/671/… Aug 17 '10 at 22:34
Alright, community support for this looks pretty high.
I've enabled Tex.
• Great, many thanks! Aug 19 '10 at 20:36
• Huge kudos! Thanks! Aug 21 '10 at 18:17
• It's working great -- thank you. Aug 24 '10 at 12:54
This discussion on meta.math appears to have a solution to the problem, and in fact Geoff Dalgas is one of our moderators too. Geoff, can we have math markup enabled for this site ?
Update: in the meantime, we can use John Gietzen's greasemonkey script directly (if you use firefox). I have it installed and can see the $t^2$ in the original question.
• I've tried installing greasemonkey as well as the script, but I still see the dollar sign notation... Hopefully the LaTeX switch is flipped soon! Aug 17 '10 at 17:17
• you have to edit the script to include cstheory.stackexchange.com and meta.cstheory.stackexchange.com on the list of applicable sites. Aug 17 '10 at 17:21
MathOverflow just switched to MathJax. It looks really nice.
• that's cool. I know they were trying it out. I'm quite happy with the MathJax rendering here (and even installed it on my personal webpage) Aug 29 '10 at 6:02
• I wish Wikipedia would use MathJax. Their math rendering is...not as good. Aug 29 '10 at 7:14
They use jsMath, but some people seem to prefer the newest MathJax, AFAIU. A good starting point may be this thread : add LaTeX support to Markdown/WMD
Hope this helps.
• Problem is that I don't think we have control over this. The SE overlords have to implement the fix, I believe. Aug 16 '10 at 22:04
• The math.SE (math.stackexchange.com) is already using MathJax, which seems to work well. We just need someone at StackExchange to switch it on here. Aug 16 '10 at 22:15
I have javascript errors running MathJax.js on IE 8.0.7600.16385.
• Have you tried mathjax.org/demos ? Aug 23 '10 at 22:30
• Interestingly, the demos work incredibly well. Here I get the following error: Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.0) Timestamp: Tue, 24 Aug 2010 00:04:48 UTC Message: No such element Line: 35 Char: 3 Code: 0 URI: meta.cstheory.stackexchange.com/content/js/third-party/MathJax/… Is anyone else with IE8 having problems?
– Ross Snider
Aug 24 '10 at 0:05
• have you tried poking around meta.{math,stats,tex} ? Aug 24 '10 at 4:03
• Nothing. :-(. Due to the lack of resonance from other IE8 users (if there are any here...) I have to conclude that I am the lonely source of these errors. For the record, the LaTeX scripts running on MathOverflow also break. Luckily this is only my work computer and the LaTeX scripts work just fine from home.
– Ross Snider
Aug 24 '10 at 22:45
• Magically it started working just now!
– Ross Snider
Aug 24 '10 at 23:19
Can anyone tell me why I get a [Math Processing Error] in most of the questions? I see Suresh's t^2 in this quote just fine, but I can't see anything in the parent site.
• It seems that some cache poisoning is causing trouble. Try clearing your browser cache explicitly Sep 1 '10 at 22:20
• That worked, thank you. :-) Sep 1 '10 at 23:56
|
{}
|
# How to decrypt a PGP message with only the two primes and the public exponent?
I'm trying to decrypt a PGP Message that is encrypted with an RSA key, but I only have this information:
Public exponent: 65537
|
{}
|
# How do I calculate the pressure of a known liquid in a sealed container heated above boiling point?
If I have water in a sealed container heated to say 150 degrees, how do I determine the amount of pressure being generated in the container? What about for other liquids? I have searched extensively and cannot figure this out.
I was looking for a formula of sorts, but for example reasons let's say a 5000 ml container with 4500 ml of water in it, with the rest of the space air, heated to 150 degrees Celsius.
• First we would need to know how much water per volume in the sealed container. Also we would need to know if any air is in the container or not. If we knew those things, then the next step is to consult steam tables (either equilibrium or superheated) to determine all the thermodynamic properties of the steam at the right temperature and bulk density. Feb 15, 2015 at 17:27
Note: for convenience, "gas" refers to any gas at a temperature beyond its boiling point; "vapor" refers to any gas evaporated from its liquid state, also implying that the liquid itself is below its boiling point, such as the 150 degrees water in your example.
Dalton's law of partial pressure would come in handy here. The law states that the partial pressure generated by each type of gas particles sum up to the total pressure in a sealed container. The partial pressure of "gasses" can directly be calculated using the ideal gas law. Of course the partial pressure of "gasses" is 0Pa if there is no gas but only vapor in your container.
Now the partial pressure of the vapor of liquids. In a sealed container, if left to attain dynamic equilibrium (which is the steady state in this case) the partial pressure of the vapor of any liquid would be equal to its equilibrium vapor pressure, which is a function of temperature only. Though not excessively accurate, the Antoine equation can be used to estimate this:
$\log P = A- {B \over C+T}$
where P is the equilibrium vapor pressure, A, B and C are substance-specific constants and T is the thermodynamic temperature of the substance. The three substance-specific constants can be checked up on the internet, or provided by other sources, whereas conducting experiments can be a last resort (as it is a very tedious experiment that you need the whole curve of the function to approximate the constants).
If the liquid in question is a mixture of more than one liquid substance, then you'd need a third law - Raoult's law to determine the partial vapor pressure of each component of the mixture:
$p_i = p_i^\star x_i$
where $p_i$ is the partial vapor pressure of component $i$ of the mixture, $p_i^\star$ is the equilibrium vapor pressure of a pure sample of component $i$(in itself, not in a mixture), and $x_i$ is the mole fraction of $i$ in the liquid.
At dynamic equilibrium, combining Dalton's law of partial pressure and Raoult's law yields the expression for total vapor pressure:
$p_{total} = \sum p_i^\star x_i$
Since steam (water vapor) is the most common working fluid in external-combustion engines, steam tables, as mentioned by Curt F above, are widely used and will show the relationship more accurately than the ideal gas law, "PV=nRT", http://en.wikipedia.org/wiki/Ideal_gas_law. See http://www.wolframalpha.com/examples/SteamTables.html, http://www.efunda.com/materials/water/steamtable_sat.cfm or http://www.tlv.com/global/TI/calculator/steam-table-pressure.html for an online steam table.
• DrMoishe, what about for other liquids such as organics? Water was just an example of convenience. Will the ideal gas law alone allow me calculate the pressures that will build up in a tank if I heat a liquid beyond its boiling point? I ask because I feel as if the ideal gas law would be come unapplicable when the pressure is so much that more water cannot evaporate. Is there simply not straightforward way/formula to calculate how much pressure would build up in a container at certain temperatures? Feb 15, 2015 at 19:04
• There are additional factors not included in the ideal gas equation, such as van der Waals force, en.wikipedia.org/wiki/Van_der_Waals_force. In theory, if you know all the forces acting on a substance, you can calculate its pressure/volume/temperature relationship, but in practice the formula is derived empirically. Tables are available for many substances such as CO2 and NH3. J. Willard Gibbs, en.wikipedia.org/wiki/Josiah_Willard_Gibbs, was one of the first to explore chemical thermodynamics; you might want to read his work. Feb 15, 2015 at 22:26
There are various empirically derived equations which are used to predict pressure given a temperature. As already mentioned the Antoine equation is for vapor pressure but can be used (although extrapolation should never be preferred to interpolation, and most empirical equations list the range of temperatures they are applied to (and derived from). These equations all require experimentally determined parameters and are essentially "curve fitting". (but see discussion on Van der Waals equation of state) Note that the ideal gas law rarely has accuracy to +/- 1% and occasionally as bad as 5% error...if I recall, the fit for CO2 near STP is in the "not so good" category for the IGL. Wikipedia discusses several alternatives to the IGL and The Handbook of Chemistry and Physics has tables for some of them (at least it used to, I don't have access to the most recent editions), as does Perry's Handbook of Chemical Engineering.
• Welcome to Chemistry.SE! Take the tour to get familiar with this site. Mathematical expressions and equations can be formatted using $\LaTeX$ syntax. For more information in general have a look at the help center. At the moment this reads more like a comment than an actual answer - could you elaborate a little more. With a bit more rep, you will be able to post comments on any question/answer. Jun 24, 2016 at 6:18
|
{}
|
# Short answer find the quotient 6x^3-x^2-7x-9 ------------ 2x+3
143,010 questions, page 827
1. ## Maths
A picture 40cm long and 30cm wide is surrounded by a 2cm frame.Find the area of the frame?
asked by Remmy on January 26, 2015
2. ## mathematics
find the mean and RMS of v=25sin50πt over the range t=0 to t=20ms using integration. i have integrated to: v=-cos(50πt)/2π+c but i don't know how to get the mean and rms.
asked by jerome on April 21, 2014
3. ## MATHS
Two circles of radii 5 cm and 12 cm are drawn, partly overlapping. Their centers are 13cm apart. Find the area common to the two circles.
asked by Clive on July 17, 2016
4. ## English
Youngsters find it difficult to ------the criticism in the peer group. What phrasal verb can be used to this sentence take in,put through or put across?
asked by PAkkjk on March 26, 2018
5. ## maths
the father is 3 times as old as his son in 10 years time he will be double of his sons age find their peasant age ?
6. ## physics
a stone is dropped from the peak of a hill . IT covers the distance of 30m in last second of its motion. Find the height of the peak?
asked by help on October 12, 2014
7. ## Physics
If the radius of curvature of the bump is r = 29 m, find the apparent weight of a 60 kg person in your car as you pass over the top of the bump at 12 m/s.
asked by Rachelle on January 7, 2010
8. ## math
how do you find the reference angle of csc,sec, cot? for example: csc theta= 2 times square root of 3 / 2
asked by dave on February 4, 2008
9. ## calculus
Find the volume of the solid obtained by rotating the region bounded by the given curves about the specified axis. y=x2x=y2 about the axis x=–3
10. ## Area of a triangle plz help
|BM|,|CN| are altitude of a triangle ABC if AB=3.5 AC=3.2 BN=2.1 calculate the area of the triangle and hence find |CN|.......show working plz i don't no it
asked by Collins on October 19, 2015
11. ## Physics
Each of +3micro C are placed at three corners of a square whose diagonal is 6 m long find feild intensity at the point if intersection of diagonal
asked by Manahil on October 5, 2018
12. ## prealgebra
the height of a tringle is 3 ft longer then the base. the area of the trianlge is 35 sqft. find the hight and base of the triangel.
asked by jackson on November 3, 2008
13. ## pre-calc
find real numbers a,b,c so that the graph of the function y=ax^2+bx+c contains the points (-11,4), (2,3) , and (0,1) Write three equations, with each point in one. Then solve the three equations.
asked by Erica on October 24, 2006
14. ## calculus need help studying now
find the point of coordinates of the point of inflexion on the curves (a):y=(x-2)²(x-7) (b) y=4x^3+3x²-18x-9 plz i tried my best and i got(11/6, -31/216) but keep saying am wrong
asked by ... on November 28, 2016
15. ## Calculus
Find the instantaneous rate of change of the volume V=(1/3) pi r^2 H (all put together) of a cone with respect to the radius r at r=a if the height H does not change. I truly appreciate all of ya'll help! Thanks!
asked by Ryan on January 6, 2010
16. ## Geometry.
Triangles ABC and DEF are similar. If ∠ABC = 121°and ∠BCA = 35°, find the measure of angle FDE.
asked by Summer. on June 3, 2015
17. ## MAths algebra
A diagonal of a square parking lot is 75 meters.Find to the nearest meter, the length of a side of a lot
asked by Firdous on April 2, 2013
18. ## math
In a classroom, the students are 12 boys and 6 girls. If one of the students is selected at random, find the probability that the student is a girl
asked by yorkie16 on April 22, 2009
19. ## Algebra
The length of a rectangular banner is 3 feet longer than its width. If the area is 70 square feet, find the dimensions.
asked by .... on October 20, 2012
20. ## world religons
Can someone help me I am looking for a Christmas carol that mentions the golden,silver,bronze and iron age? I have been looking and can't find anything that mentions it. I think I found it.
asked by teri on January 28, 2007
21. ## Math, Stats
the probability is 0.6 that a person shopping at a certian store will spend less than $20. For group of size 19, find the mean number who spend less then$20.
asked by Olivia on February 21, 2011
22. ## Math, Stats
the probability is 0.6 that a person shopping at a certian store will spend less than $20. For group of size 19, find the mean number who spend less then$20.
asked by Olivia on February 21, 2011
23. ## math
triangle QRS ~ triangle EFG. Find the measures of the missing sides. QR = ? EF = 10 QS = 24 EG = 5 RS = 45 FG = ? Im having trouble setting up the equations. Please help!!
asked by reece on January 21, 2015
24. ## math ( Ms,Sue)ans i got
for part 2 of the questions is it six $1.00 stamps + one$2.50 + one $1.20 and four$4.00 and how do i find the largest number of stamps that she can use from the collection to post a parcel and to list them
asked by luke on May 22, 2008
25. ## Music
Could somebody help me with my music homework,please...Try to find examples of three pieces of music that are played in three different tempos(fast,medium and slow)....Thank you for help.
asked by Damian on October 16, 2014
Use the compound interest formula to find the value of the investment after 5 years, compounded semiannually. $1,000 at 6% annual interest asked by india on September 30, 2013 27. ## ALGEBRA A triangle has sides 3x +7, 4x - 9, and 5x + 6. Find the polynomial that represents its perimeter. Am I suppose to just add these together to get the perimeter 3x+7 4x-9 5x+6 12x+4 that's how easy it is. asked by TOMMY on April 21, 2007 28. ## geometry The sides of a quadrilateral are 3, 4, 5, and 6. Find the length of the shortest side of a similar quadrilateral whose area is 9 times as great. asked by callie on September 29, 2010 29. ## 11th grade A 16.0 kg box is released on a 39.0° incline and accelerates down the incline at 0.267 m/s2. Find the friction force impeding its motion asked by Anonymous on November 27, 2010 30. ## English Does anyone know of a website that translates english into the shakespearean language? I can only find websites that translate shakespeare to modern english. asked by Morgan on January 11, 2010 31. ## Statistics Find the normal approximation for the binomial probability that x = 5, where n = 12 and p = 0.7. Compare this probability to the value of P(x=5) found in Table 2 of Appendix B in your textbook. asked by jane on October 20, 2011 32. ## Music Could somebody help me with my music homework,please...Try to find examples of three pieces of music that are played in three different tempos(fast,medium and slow)....Thank you for help. asked by Damian on October 16, 2014 33. ## calculus For the function y = 9x2 + 6x + 2, at the point x = 7, find the following. (a) the slope of the tangent to the curve (b) the instantaneous rate of change of the function asked by Anonymous on October 20, 2012 34. ## MATH Find the z-score related to the raw score, mean, and standard deviation as follows. Assume a normal probability distribution. asked by Anonymous on August 19, 2010 35. ## Physics In a stair case power lab, if you have the height of the steps, the time and the mass, how do you find the force and the power? asked by Jasmine on January 10, 2010 36. ## math a bag contains 6 white 2 black and 10 green marbles if a marble is selected at random find the probability that it is green asked by Stephany on March 8, 2012 37. ## Algebra Write each repeating decimal as the sum-of two fractions. Find the sum and simplify. Verify . 0.75483 = 0.127 = asked by Carmen on October 17, 2012 38. ## math (SIMPLE INTEREST) 1.find a)simple interest earned b) simple amount for the following investment i)RM20000 for 4 years 6 months at 11% per annum asked by fizz on April 22, 2014 39. ## math (SIMPLE INTEREST) 1.find a)simple interest earned b) simple amount for the following investment ii)RM15000 for 5 1/4 years 6 at 9 percent per annum asked by fizz on April 22, 2014 40. ## statistics Suppose that a given population of trees have an average height of 10.2 ft and a standard deviation of 1.3 ft. Find the proportion of the population with heights above 11.8 ft. asked by Erik on May 3, 2012 41. ## Cal 2 Find the volume of a frustum of a right circular cone with height 25, lower base radius 30 and top radius 13. asked by Lamar on February 10, 2016 42. ## Cal 2 Find the volume of a frustum of a right circular cone with height 25, lower base radius 30 and top radius 13. asked by Michael on February 10, 2016 43. ## Math The yearly returns of a stock are normally distributed with a mean of 5.1% and standard deviation of 2.7%. Find the probability of a yearly return being greater than 6%. asked by Nick on April 15, 2014 44. ## math find to the nearest minute, all positive values in the interval 0° ≤ θ < 180° that satisfy the equation 2 tan^2 θ - tan θ = 3 asked by Natasha on March 7, 2012 45. ## math If a ball is thrown into the air with a velocity of 41 ft/s, its height (in feet) after t seconds is given by y = 41t − 16t2. Find the velocity when t = 1. asked by Yuxiang Nie on February 21, 2019 46. ## Maths Find the absolute maximum and minimum values of the function f (x) = [x^(2/3)]*[x − 1] on the closed interval [0, 4], and state where these values occur. asked by Khyati on October 21, 2014 47. ## maths find volume and curved surface area of a cone of radius 6.00cm and perpendicular height 8.00cm asked by mark on October 16, 2012 48. ## Trig A silo is 40 feet high and 16 feet across. find the angle of depression from the top edge to floor. How do you solve this? asked by Jen on April 28, 2011 49. ## Math Find the exact roots of 0= x^2 -7x +7 I think you have to use the quadratic formula, but I can't figure out how to do this or how to verify. If you could show me step by step, that would be awesome. Thanks asked by Heaven on May 10, 2016 50. ## math Find the area of the region enclosed between y=2sin(x) and y=3cos(x) from x=0 to x=0.7π. Hint: Notice that this region consists of two parts. asked by Josh on April 27, 2019 51. ## math Good evening. I do not know how to solve this kind of problem. Could someone help me. Find both x-intercept and the y-intercept of the line given by the equation for 2.4x+2.9y+6.4=0 Thank you for your kindness. asked by mary ann on January 16, 2015 52. ## Algebra the length of a rectangular road sign is 3 feet more than 2 times its width. Find the dimensions if the perimeter is 30 feet asked by Kara Dalby on February 4, 2015 53. ## solid ABC is a right triangle with C as the right triangle and sides AC=6 cm and BC= 8 cm. line segment CD is drawn perpendicular to both AC and BC at C. if CD= 12 cm, find the distance from D to midpointS of AB asked by edong on February 4, 2015 54. ## Maths A circular hut has a diameter of 10m Find The cost of a mat covering the floor of the hut at R18/m^2 asked by Nompumelelo on January 28, 2019 55. ## Trig A silo is 40 feet high and 16 feet across. find the angle of depression from the top edge to floor. How do you solve this? asked by Jen on April 28, 2011 56. ## Physics 3 A potential energy function for a two-dimensional force is of the form U = 3x3y – 7x. Find the force that acts at the point (x, y). asked by Anonymous on October 20, 2011 57. ## math Find the percent increase in the area of a circular pizza if the radius is increased from nine inches to ten inches. asked by john on February 14, 2011 58. ## Calculus Find two positive numbers whose sum is 8 such hat when the cube of th first number is multiplied by the second number, the result is maximum. asked by Ashley on April 16, 2013 59. ## Math If y blue marbles are removed and 30 orange marbles are added to the bag , the probability of getting a green marble become 1/4.Find the value y. asked by Anonymous on May 27, 2012 60. ## math find to the nearest minute, all positive values in the interval 0° ≤ θ < 180° that satisfy the equation 2 tan^2 θ - tan θ = 3 asked by Natasha on March 7, 2012 61. ## calculus Find the break-even point for the firm whose cost function C and revenue function R are given. C(x) = 16x + 10,000; R(x) = 21x asked by Anonymous on September 24, 2010 62. ## Probability From set of 20 natural no.s 2 are selected. find probability that their sum is 1.odd 2.even 3.selected pair is twin prime asked by Edward on February 8, 2016 63. ## algebra Find the break-even point for the firm whose cost function C and revenue function R are given. C(x) = 16x + 10,000; R(x) = 21x asked by Anonymous on September 24, 2010 64. ## math Note: y^1 means the deriviative of... If f(2)=3 and y^1(2)=5 find an equation of the tangent line and the normal line to the graph of y=f(x) at the point where x=2. asked by joclyn on September 28, 2009 65. ## Math Cutting a circle into equal sections of a small central angle to find the area of a circle by using the formula A=pi*r*r asked by Shubhi on October 22, 2012 66. ## math how would I solve 10=7-m? I don't understand how to find the variable and I use flowchart!none of my math books show me how! so how can I solve problems like these? asked by Emi on January 11, 2013 67. ## maths A girl starts at A and walks 2km south to B.she then walks 3km west to C.find the distance and bearing of C from A. asked by Gift on April 27, 2019 68. ## PHYSICS Mass of empty bucket of capacity 10 liter is 1 kg. find its mass when completely with a liquid of relative density .08 asked by IMTEYAZ on February 17, 2016 69. ## College Algebra If the sides of a square are lenthened by 7 cm, the area becomes 222 cm squared. Find the length of a side of the original square. asked by Jacqueline on August 12, 2010 70. ## Physics An object of mass 6.10 has an acceleration a = (1.13m/s^2)x + (-0.699m/s^)y. Three forces act on this object:F1, F2,and F3.Given that F1=(3.31N) and F2 = (-1.20 N)x + (1.83N)y, find F3. asked by Anonymous on October 2, 2010 71. ## Calculus A ball of radius 16 has a round hole of radius 4 drilled through its center. Find the volume of the resulting solid. asked by Sam on October 30, 2008 72. ## maths for the curve with equation y=(x^2+1)/(x^2-4), find (i) the cordinates of the turning point(s) (ii) the equations of the asymptotes (iii) sketch the curve asked by changaya on October 9, 2013 73. ## math's find the compound interest of principal=rs 12,550, amount=?, rate of interest=9% semi annually, time= 2.5 asked by aryan on November 6, 2015 74. ## TRIANGLE length of AB,BC of scalene triangle ABC are 12,8 rerspectively.the size of angle is 59degree.find the length of side AC asked by PUNAM on September 25, 2014 75. ## physics A car accelerates uniformly from rest to a speed of 73.6 mi/h in 7.68 s. How do i find the distance the car traveled during this time, in units of m asked by tom on October 22, 2014 76. ## U.S history connections academy does anyone know what pages to go to in the text book to find the answers for the 8th grade civics unit test? or the answers \_(-_-)_/ jk asked by dude on November 13, 2016 77. ## Math The capacity of the box is 36000 cubic centimeters. Find the least materials used in making the box if the length is twice its width. asked by mark on October 12, 2018 78. ## maths A particle with a velocity of 2m / s at t = 0 moves along a straight line with a constant acceleration of 0.2m / s s.find the displacement of the particle in10 second asked by ashu on May 11, 2016 79. ## Calculus For the following integral find an appropriate TRIGONOMETRIC SUBSTITUTION of the form x=f(t) to simplify the integral. INT (x)/(sqrt(-191-8x^2+80x))dx x=? asked by Salman on December 14, 2009 80. ## maths for the curve with equation y=(x^2+1)/(x^2-4), find (i) the cordinates of the turning point(s) (ii) the equations of the asymptotes (iii) sketch the curve asked by changaya on October 9, 2013 81. ## maths for the curve with equation y=(x^2+1)/(x^2-4), find (i) the cordinates of the turning point(s) (ii) the equations of the asymptotes (iii) sketch the curve asked by changaya on October 9, 2013 82. ## physics A stone is thrown at 15 m/s an angle \theta below the horizontal from a cliff of height H. It lands 78 m from the base 6 s later. Find \theta and H. asked by karine on September 27, 2014 83. ## Victoria Park Algebra: Measurement One of the equal sides of an isosceles triangle is 3 m less than twice its base. The perimeter is 44 m. Find the lengths of the sides. asked by Anthony on November 4, 2015 84. ## chemistry The vapor pressure of water @ 80 degrees C is 0.467 atm. Find value of Kc for the reaction H2O(l) to H2O(g) @ this temp. asked by Robin on October 9, 2011 85. ## geometry Find the measure of each exterior angle of a regular polygon whose central angle measures 120 degrees. asked by Mark on November 15, 2016 86. ## Quick Question I am trying to find an online radio podcast that middle school students can use to host their own radio show. Can you help me research some? asked by Max on April 11, 2014 87. ## physics A 23.0 kg box is released on a 39.0° incline and accelerates down the incline at 0.268 m/s2. Find the friction force impeding its motion. asked by md8 on March 1, 2011 88. ## PHYSICS Mass of empty bucket of capacity 10 liter is 1 kg. find its mass when completely with a liquid of relative density 0.8 asked by IMTEYAZ KHAN on February 17, 2016 89. ## maths a pendulum 45 cm long swings through a vertical angle of 30 degree.find the height through which the pendulum bob rises. asked by akeb on May 11, 2011 90. ## Math When trying to find the probability of drawing a violet marble from a jar containing 9 red, 4 brown, 10 yellow, and 8 violet marbles asked by Alex on March 16, 2018 91. ## Maths Radius of a circle,13cm. Length of the chords is 10cm find the distance of the chord from the center of a circle asked by Muskan on December 17, 2017 92. ## Calculus For the following integral find an appropriate TRIGONOMETRIC SUBSTITUTION of the form x=f(t) to simplify the integral. INT x(sqrt(8x^2-64x+120))dx x=? asked by Salman on December 14, 2009 93. ## Math Orchard Supply sells lawn fertilizer at a price of$12.50 per bag. If the markup is 25% of the cost, find the cost.
asked by Heyhi on May 9, 2018
94. ## Math
write the equation of the circle in standard form. find the center, radius, intercepts, and graph the circle. x^2+y^2-6x-8y+25=36
asked by Rachel on May 27, 2012
95. ## Maths
By picking a natural number randomly upto 100, find the probability of the number being a perfect cube
asked by Yash on March 21, 2017
96. ## maths
By picking a natural number randomly upto 100 find the probability of the number being perfect cube
asked by probability on March 21, 2017
97. ## Physics
Find the acceleration produced by a force of 1650N against a friction force of 500N while pushing on a mass of 6.50kg.
asked by Brent on March 7, 2012
98. ## Calculus
The velocity of an object t seconds after it started moving is t(t-6). Find the total change in the object's position in the 6th second.
asked by Katie on April 12, 2013
99. ## Math
The cost of pulling a fence around a square field at Rs 15 per meter is 432. Find the length of each side of the field.
asked by Ashok on November 25, 2017
100. ## Precal
Find all values of theta in radians over radians over the set of all real nunbers given the equation: 2sinthetacostheta=sintheta
asked by Manny on February 16, 2016
|
{}
|
# How can I add a field to an existing biblatex type?
I wish to add a new entry to the biblatex bibliography type @online. For example,
@online{abc,
author = {A Author},
title = {Some lengthy title that's awesome},
url = {http://tex.stackexchange.com},
breakurl = {}
}
I've added a breakurl field (that could be blank), that I want to condition on when using the online bibliography driver to possibly insert a line break. However, I'm unable to successfully achieve this, even after following the guidelines in Add field "tome" to biblatex entries.
Here is a minimal example:
\documentclass{article}
\usepackage{filecontents,showframe}
\usepackage{biblatex}
\begin{filecontents*}{mybib.bib}
@online{abc,
author = {A Author},
title = {Some lengthy title that's awesome},
url = {http://tex.stackexchange.com},
breakurl = {}
}
\end{filecontents*}
% http://tex.stackexchange.com/q/163303/5764
\DeclareDatamodelFields[type=field,datatype=verbatim,nullok=true]{breakurl}
\DeclareDatamodelEntryfields{breakurl}
\DeclareFieldFormat[online]{breakurl}{}% Used as a boolean variable
\begin{document}
\nocite{*}
\printbibliography
\end{document}
After compiling with biber, the breakurl field is not visible in the .bbl:
\refsection{0}
\sortlist{nty}{nty}
\entry{abc}{online}{}
\name{author}{1}{}{%
{{hash=1318a946c3fffa54cec1130748f21c17}{Author}{A\bibinitperiod}{A}{A\bibinitperiod}{}{}{}{}}%
}
\strng{namehash}{1318a946c3fffa54cec1130748f21c17}
\strng{fullhash}{1318a946c3fffa54cec1130748f21c17}
\field{sortinit}{A}
\field{sortinithash}{b685c7856330eaee22789815b49de9bb}
\field{labelnamesource}{author}
\field{labeltitlesource}{title}
\field{title}{Some lengthy title that's awesome}
\verb{url}
\verb http://tex.stackexchange.com
\endverb
\endentry
\endsortlist
\endrefsection
The motivation behind adding a new field is that it can easily be ignored by biber or by the driver. An alternative would be to include such line breaks as part of the field entries in the @online source, but that is not acceptable.
The more general question would be: How can I add a field to an existing biblatex type?
-
In short: the data model commands should be put in a .dbx file that you then load in the options: \usepackage[datamodel=mydbxfile]{biblatex}. I'd probably also do: \DeclareDatamodelEntryfields[online]{breakurl} since this seems like a fairly specific use. – jon Jan 10 at 19:47
Can you pass the information to biber via another (existing) field, i.e. define \breakurl which expands to nothing when you are not using it? – ienissei Jan 10 at 19:48
– moewe Jan 10 at 21:31
The answer to Add field “tome” to biblatex entries was written back in the good old days when datamodel commands were still allowed in the document preamble, starting from version 2.9 those commands cannot be used in the preamble in order to avoid complications and unwanted behaviour. The answer has now been amended to reflect this change.
So if you want to use datamodel commands now you will have to use a datamodel file (either a .dbx or biblatex-dm.cfg). You could call this file werner.dbx, its content would be
\DeclareDatamodelFields[type=field,datatype=verbatim,nullok=true]{breakurl}
\DeclareDatamodelEntryfields{breakurl}
You would then have to load biblatex with the option datamodel=werner.
Please note that nullok=true does not ensure that the field appears in the .bbl even if it is empty, it just doesn't generate a warning. So breakurl = {}, does not make the field breakurl appear in the .bbl.
You seem to want to use the breakurl field as a boolean value. This can be done slightly easier with \DeclareEntryOption.
\newtoggle{blx@breakurl}
\DeclareEntryOption{breakurl}[true]{%
\settoggle{blx@breakurl}{#1}%
% or whatever you need to do here
}
Defines a new option breakurl that sets a toggle.
In your .bib entry you then have
options = {breakurl},
to toggle breakurl on.
You can have a look at biblatex - citing dead author, How to remove comma from authoryear citation, Functionality of apacites \nocitemeta with biblatex-apa: adding asterisks to author lastnames (meta-analysis) and Change 'Chapter' in @inbook to 'Appendix' for one BibLaTex entry for more examples and uses of \DeclareEntryOption.
MWE
\documentclass{article}
\usepackage{filecontents}
\usepackage{biblatex}
\begin{filecontents*}{\jobname.bib}
@online{abc,
author = {A Author},
title = {Some lengthy title that's awesome},
url = {http://tex.stackexchange.com},
options = {breakurl},
}
\end{filecontents*}
\newtoggle{blx@breakurl}
\togglefalse{blx@breakurl}
\DeclareEntryOption{breakurl}[true]{%
\settoggle{blx@breakurl}{#1}%
% or whatever you need to do here
}
\begin{document}
\nocite{*}
\printbibliography
\end{document}
-
According to the biblatex manual (texdoc biblatex)
It is not possible to add to a loaded data model by using the macros below in your preamble as the preamble is read after Biblatex has defined critical internal macros based on the data model. If any data model macro is used in a document, it will be ignored and a warning will be generated.
The command to declare data model (and thus new field) should be in configuration files.
To insert them in a document one can use filecontents environment to generate the configuration file.
\begin{filecontents}{biblatex-dm.cfg}
\DeclareDatamodelFields[type=field,datatype=verbatim,nullok=true]{breakurl}
\DeclareDatamodelEntryfields{breakurl}
\DeclareFieldFormat[online]{breakurl}{}% Used as a boolean variable
\end{filecontents}
-
|
{}
|
# mvnBvs: The function to perform variable selection for multivariate... In mBvs: Multivariate Bayesian Variable Selection Method Exploiting Dependence among Outcomes
## Description
The function can be used to perform variable selection for multivariate normal responses incorporating not only information on the mean model, but also information on the variance-covariance structure of the outcomes. A multivariate prior is specified on the latent binary selection indicators to incorporate the dependence between outcomes into the variable selection procedure.
## Usage
1 mvnBvs(Y, lin.pred, data, model = "unstructured", hyperParams, startValues, mcmcParams)
## Arguments
Y a data.frame containing q continuous multivariate outcomes from n subjects. It is of dimension n\times q. lin.pred a list containing two formula objects: the first formula specifies the p covariates for which variable selection is to be performed; the second formula specifies the confounders to be adjusted for (but on which variable selection is not to be performed) in the regression analysis. data a data.frame containing the variables named in the formulas in lin.pred. model a character that specifies the covariance structure of the model: either "unstructured" or "factor-analytic". hyperParams a list containing lists or vectors for hyperparameter values in hierarchical models. Components include, eta (a numeric value for the hyperparameter η that regulates the extent to which the correlation between response variables influences the prior of the variable selection indicator), v (a numeric vector of length q for the standard deviation hyperparameter v of the regression parameter β prior), omega (a numeric vector of length p for the hyperparameter ω in the prior of the variable selection indicator), beta0 (a numeric vector of length q+1 for hyperparameter μ_0 and h_0 in the prior of the intercept β_0), US (a list containing numeric vectors for hyperparameters in the unstructured model: US.Sigma), FA (a list containing numeric vectors for hyperparameters in the factor-analytic model: lambda and sigmaSq). See Examples below. startValues a numeric vector containing starting values for model parameters: c(beta0, B, gamma, Sigma) for the unstructured model; c(beta0, B, gamma, sigmaSq, lambda) for the factor-analytic model. See Examples below. mcmcParams a list containing variables required for MCMC sampling. Components include, run (a list containing numeric values for setting the overall run: numReps, total number of scans; thin, extent of thinning; burninPerc, the proportion of burn-in). tuning (a list containing numeric values relevant to tuning parameters for specific updates in Metropolis-Hastings algorithm: mhProp_beta_var, variance of the proposal density for B; mhrho_prop, degrees of freedom of the inverse-Wishart proposal density for Σ in the unstructured model; mhPsi_prop, scale matrix of inverse-Wishart proposal density for Σ in the unstructured model; mhProp_lambda_var, variance of the proposal density for λ in the factor-analytic model. See Examples below.
## Value
mvnBvs returns an object of class mvnBvs.
## Author(s)
Kyu Ha Lee, Mahlet G. Tadesse, Brent A. Coull
Maintainer: Kyu Ha Lee <klee@hsph.harvard.edu>
## References
Lee, K. H., Tadesse, M. G., Baccarelli, A. A., Schwartz J., and Coull, B. A. (2015), Multivariate Bayesian Variable Selection Exploiting Dependence Structure Among Outcomes: Application to Air Pollution Effects on DNA Methylation, submitted.
## Examples
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 # loading a data set data(simData) Y <- simData$Y data <- simData$X form1 <- as.formula( ~ cov.1+cov.2) form2 <- as.formula( ~ 1) lin.pred <- list(form1, form2) p <- dim(data)[2] p_adj <- 0 q <- dim(Y)[2] ##################### ## Hyperparameters ## ## Common hyperparameters ## eta = 0.1 v = rep(10, q) omega = rep(log(0.5/(1-0.5)), p-p_adj) common.beta0 <- c(rep(0, q), 10^6) ## Unstructured model ## rho0 <- q + 4 Psi0 <- diag(3, q) US.Sigma <- c(rho0, Psi0) ## Factor-analytic model ## FA.lam <- c(rep(0, q), 10^6) FA.sigSq <- c(2, 1) ## hyperParams <- list(eta=eta, v=v, omega=omega, beta0=common.beta0, US=list(US.Sigma=US.Sigma), FA=list(lambda=FA.lam, sigmaSq=FA.sigSq)) ################### ## MCMC SETTINGS ## ## Setting for the overall run ## numReps <- 100 thin <- 1 burninPerc <- 0.5 ## Tuning parameters for specific updates ## ## - those common to all models mhProp_beta_var <- matrix(0.5, p+p_adj, q) ## ## - those specific to the unstructured model mhrho_prop <- 1000 mhPsi_prop <- diag(1, q) ## ## - those specific to the factor-analytic model mhProp_lambda_var <- 0.5 ## mcmc.US <- list(run=list(numReps=numReps, thin=thin, burninPerc=burninPerc), tuning=list(mhProp_beta_var=mhProp_beta_var, mhrho_prop=mhrho_prop, mhPsi_prop=mhPsi_prop)) ## mcmc.FA <- list(run=list(numReps=numReps, thin=thin, burninPerc=burninPerc), tuning=list(mhProp_beta_var=mhProp_beta_var, mhProp_lambda_var=mhProp_lambda_var)) ##################### ## Starting Values ## ## - those common to all models beta0 <- rep(0, q) B <- matrix(sample(x=c(0.3, 0), size=q, replace = TRUE), p+p_adj, q) gamma <- B gamma[gamma != 0] <- 1 ## ## - those specific to the unstructured model Sigma <- diag(1, q) ## ## - those specific to the factor-analytic model lambda <- rep(0.5, q) sigmaSq <- 1 #################################### ## Fitting the unstructured model ## #################################### startValues <- vector("list", 2) startValues[[1]] <- as.vector(c(beta0, B, gamma, Sigma)) beta0 <- rep(0.2, q) Sigma <- diag(0.5, q) startValues[[2]] <- as.vector(c(beta0, B, gamma, Sigma)) fit.us <- mvnBvs(Y, lin.pred, data, model="unstructured", hyperParams, startValues, mcmcParams=mcmc.US) fit.us summ.fit.us <- summary(fit.us); names(summ.fit.us) summ.fit.us ####################################### ## Fitting the factor-analytic model ## ####################################### startValues <- vector("list", 2) startValues[[1]] <- as.vector(c(beta0, B, gamma, sigmaSq, lambda)) beta0 <- rep(0.2, q) sigmaSq <- 0.5 startValues[[2]] <- as.vector(c(beta0, B, gamma, sigmaSq, lambda)) fit.fa <- mvnBvs(Y, lin.pred, data, model="factor-analytic", hyperParams, startValues, mcmcParams=mcmc.FA) fit.fa summ.fit.fa <- summary(fit.fa); names(summ.fit.fa) summ.fit.fa
mBvs documentation built on May 29, 2017, 5:55 p.m.
|
{}
|
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 05 Dec 2013, 19:16
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# A container has 3L of pure wine. 1L from the container is
Question banks Downloads My Bookmarks Reviews Important topics
Author Message
TAGS:
Manager
Joined: 19 Aug 2010
Posts: 78
Followers: 2
Kudos [?]: 8 [0], given: 2
Re: Ratio and Proportion [#permalink] 09 Feb 2011, 06:43
Bunuel,
Do you know some similar questions that are more GMAT-like?
Kaplan GMAT Prep Discount Codes Knewton GMAT Discount Codes GMAT Pill GMAT Discount Codes
Senior Manager
Joined: 08 Nov 2010
Posts: 426
WE 1: Business Development
Followers: 6
Kudos [?]: 28 [0], given: 161
Re: Ratio and Proportion [#permalink] 09 Feb 2011, 22:23
ye, some more questions like that will be great.
Senior Manager
Joined: 08 Nov 2010
Posts: 426
WE 1: Business Development
Followers: 6
Kudos [?]: 28 [0], given: 161
Re: Ratio and Proportion [#permalink] 12 Feb 2011, 11:54
What is the source for this question? is it a real gmat question? can we meet questions similar to this one?
thanks.
Math Expert
Joined: 02 Sep 2009
Posts: 15070
Followers: 2517
Kudos [?]: 15454 [0], given: 1551
Re: Ratio and Proportion [#permalink] 12 Feb 2011, 11:59
Expert's post
144144 wrote:
What is the source for this question? is it a real gmat question? can we meet questions similar to this one?
thanks.
It was mentioned several times on the previous page that it's not a GMAT question!
_________________
Senior Manager
Joined: 08 Nov 2010
Posts: 426
WE 1: Business Development
Followers: 6
Kudos [?]: 28 [0], given: 161
Re: Ratio and Proportion [#permalink] 12 Feb 2011, 13:12
Bunuel - i am sry. i read the post before and i just forgot.
its very late here and im out of focus.
didnt mean to make u angry.
have a good day.
Manager
Joined: 18 Jun 2004
Posts: 107
Location: san jose , CA
Followers: 1
Kudos [?]: 4 [0], given: 0
Re: Ratio and Proportion [#permalink] 01 Aug 2011, 00:45
Alternative explanation from Dabral, always enjoyed his problem solving approach.
http://www.gmatquantum.com/shared-posts ... stion.html
_________________
---- Hero never chooses Destiny
Destiny chooses Him ......
Last edited by rahul on 07 Aug 2011, 23:31, edited 1 time in total.
Manager
Joined: 06 Jul 2011
Posts: 124
Schools: Columbia
Followers: 1
Kudos [?]: 11 [1] , given: 3
Re: Ratio and Proportion [#permalink] 02 Aug 2011, 01:54
1
KUDOS
Wow, it took me quite a while to figure this out.
Basically, just examine how the wine content works out
at n=0 w=3L
n=1 w=2
n=2 w=1.5
n=3 w=1.2
n=4 w=1.0
thank figure out w at n=1 = 3 x (2/3)
w at n=2, w = 3 x (2/3) x (3/4)
at n=3, w = 3 x (2/3) x (3/4) * (4/5) and so forth
therefore at n=x, w = 3 x (2 / (x+2))
n=19 w = 3 x (2/21), w = 6 / 21 = 2 / 7
Took me 5 minutes, which is embarrassing....
Intern
Joined: 27 Feb 2011
Posts: 49
Followers: 0
Kudos [?]: 0 [0], given: 9
Re: Ratio and Proportion [#permalink] 02 Aug 2011, 07:44
Bunuel wrote:
Let's go step by step:
First operation: 3L-1L=2=6/3L of wine left, total 4L;
#2: 6/3L-(6/3)/4=6/3-6/12=18/12=6/4L of wine left, total 5L;
#3: 6/4L-(6/4)/5=6/4-6/20=24/20=6/5L, total 6L;
#4: 6/5L-(6/5)/6=6/5-6/30=30/30=6/6L, total 7L;
....
At this point it's already possible to see the pattern: x=6/(n+2)
n=19 --> x=6/(19+2)=6/21=2/7L
nice.. I just gave up after the first 2 iterations..
Intern
Status: Mission MBA
Joined: 02 Jul 2010
Posts: 45
Schools: ISB, IIMs
Followers: 0
Kudos [?]: 7 [0], given: 5
Re: Ratio and Proportion [#permalink] 16 Sep 2011, 20:04
VeritasPrepKarishma wrote:
The question can be solved in under a minute if you understand the concept of concentration and volume.
Removal and addition happen 19 times so:
C_f = 1 * (\frac{2}{4}) * (\frac{3}{5}) * (\frac{4}{6}) * (\frac{5}{7}) * .......* (\frac{19}{21}) * (\frac{20}{22})
All terms get canceled (4 in num with 4 in den, 5 in num with 5 in den etc) and you are left with C_f = \frac{1}{77}
Since Volume now is 22 lt, Volume of wine = 22*(\frac{1}{77}) = \frac{2}{7}
Theory:
1. When a fraction of a solution is removed, the percentage of either part does not change. If milk:water = 1:1 in initial solution, it remains 1:1 in final solution.
2. When you add one component to a solution, the amount of other component does not change. In milk and water solution, if you add water, amount of milk is the same (not percentage but amount)
3.
Amount of A = Concentration of A * Volume of mixture
Amount = C*V
( e.g. In a 10 lt mixture of milk and water, if milk is 50%, Amount of milk = 50%*10 = 5 lt)
When you add water to this solution, the amount of milk does not change.
So Initial Conc * Initial Volume = Final Conc * Final Volume
C_i * V_i = C_f * V_f
C_f = C_i * (V_i/V_f)
In the question above, we find the final concentration of wine. Initial concentration C_i = 1 (because it is pure wine)
When you remove 1 lt out of 3 lt, the volume becomes 2 lt which is your initial volume for the addition step. When you add 2 lts, final volume becomes 4 lt.
So C_f = 1 * 2/4
Since it is done 19 times, C_f = 1 * (\frac{2}{4}) * (\frac{3}{5}) * (\frac{4}{6}) * (\frac{5}{7}) * .......* (\frac{19}{21}) * (\frac{20}{22})
The concentration of wine is 1/77 and since the final volume is 22 lt (the last term has V_f as 22, you get amount of wine = 1/77 * 22 = 2/7 lt
Karishma!!!!!
Great Explanation. Kudos +1
_________________
If you find my posts useful, Appreciate me with the kudos!! +1
Intern
Status: Mission MBA
Joined: 02 Jul 2010
Posts: 45
Schools: ISB, IIMs
Followers: 0
Kudos [?]: 7 [0], given: 5
Re: Ratio and Proportion [#permalink] 16 Sep 2011, 20:17
rahul wrote:
Alternative explanation from Dabral, always enjoyed his problem solving approach.
http://www.gmatquantum.com/shared-posts ... stion.html
Rahul
Really great explanation at the above mentioned link.
Thanks
Manager
Status: Trying to get 720+ - DIDN'T GIVE UP !!
Joined: 24 Aug 2011
Posts: 218
Location: India
Concentration: Entrepreneurship, Finance
GMAT 1: 600 Q48 V25
GMAT 2: 660 Q50 V29
WE: Engineering (Computer Software)
Followers: 1
Kudos [?]: 16 [0], given: 165
Re: Ratio and Proportion [#permalink] 24 Nov 2011, 22:24
too tough to figure out the pattern during exam pressure
I don't think I would be able to answer it during exam
_________________
Didn't give up !!! Still Trying!!
Intern
Joined: 03 Jan 2013
Posts: 15
Followers: 0
Kudos [?]: 0 [0], given: 7
Re: Ratio and Proportion [#permalink] 06 Feb 2013, 08:08
VeritasPrepKarishma wrote:
The question can be solved in under a minute if you understand the concept of concentration and volume.
Removal and addition happen 19 times so:
C_f = 1 * (\frac{2}{4}) * (\frac{3}{5}) * (\frac{4}{6}) * (\frac{5}{7}) * .......* (\frac{19}{21}) * (\frac{20}{22})
All terms get canceled (4 in num with 4 in den, 5 in num with 5 in den etc) and you are left with C_f = \frac{1}{77}
Since Volume now is 22 lt, Volume of wine = 22*(\frac{1}{77}) = \frac{2}{7}
Theory:
1. When a fraction of a solution is removed, the percentage of either part does not change. If milk:water = 1:1 in initial solution, it remains 1:1 in final solution.
2. When you add one component to a solution, the amount of other component does not change. In milk and water solution, if you add water, amount of milk is the same (not percentage but amount)
3.
Amount of A = Concentration of A * Volume of mixture
Amount = C*V
( e.g. In a 10 lt mixture of milk and water, if milk is 50%, Amount of milk = 50%*10 = 5 lt)
When you add water to this solution, the amount of milk does not change.
So Initial Conc * Initial Volume = Final Conc * Final Volume
C_i * V_i = C_f * V_f
C_f = C_i * (V_i/V_f)
In the question above, we find the final concentration of wine. Initial concentration C_i = 1 (because it is pure wine)
When you remove 1 lt out of 3 lt, the volume becomes 2 lt which is your initial volume for the addition step. When you add 2 lts, final volume becomes 4 lt.
So C_f = 1 * 2/4
Since it is done 19 times, C_f = 1 * (\frac{2}{4}) * (\frac{3}{5}) * (\frac{4}{6}) * (\frac{5}{7}) * .......* (\frac{19}{21}) * (\frac{20}{22})
The concentration of wine is 1/77 and since the final volume is 22 lt (the last term has V_f as 22, you get amount of wine = 1/77 * 22 = 2/7 lt
If the operation is only done 19 times then where and why does "22" Lt pop up in the final volume of mixture I was following how the demoninators increased but dont understand the "22".
Also if 1 L of wine is removed every operation how is the concentration of the wine mixture go up since part of it is being removed...only thing that is increasing the total volume of the solution..
Thanks a lot.
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 3736
Location: Pune, India
Followers: 804
Kudos [?]: 3172 [0], given: 136
Re: Ratio and Proportion [#permalink] 06 Feb 2013, 19:11
Expert's post
pharm wrote:
If the operation is only done 19 times then where and why does "22" Lt pop up in the final volume of mixture I was following how the demoninators increased but dont understand the "22".
Also if 1 L of wine is removed every operation how is the concentration of the wine mixture go up since part of it is being removed...only thing that is increasing the total volume of the solution..
Thanks a lot.
After the first step, the volume is 4 lt. After the second, it will be 5 lt. By the same logic, after the 19th step, it will be 19+3 = 22.
or Initial volume is 3 lt and you add net 1 lt in every step. So after the 19th step you will have 3+19 = 22 lt
From a homogeneous mixture, if you remove some quantity of the mixture, the concentration of the elements stays the same. e.g., say you have a solution of 50% milk. If you take out some solution, what will be the concentration of milk in the leftover solution? It will still be 50%. The quantity of milk will reduce but not the concentration.
Check out this post for more details:
http://www.veritasprep.com/blog/2012/01 ... -mixtures/
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Save $100 on Veritas Prep GMAT Courses And Admissions Consulting Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options. Veritas Prep Reviews Intern Joined: 23 Dec 2012 Posts: 3 GMAT Date: 05-28-2013 GPA: 3.25 WE: Operations (Transportation) Followers: 0 Kudos [?]: 5 [0], given: 4 Re: Ratio and Proportion [#permalink] 06 Feb 2013, 22:54 VeritasPrepKarishma wrote: The question can be solved in under a minute if you understand the concept of concentration and volume. Removal and addition happen 19 times so: C_f = 1 * (\frac{2}{4}) * (\frac{3}{5}) * (\frac{4}{6}) * (\frac{5}{7}) * .......* (\frac{19}{21}) * (\frac{20}{22}) All terms get canceled (4 in num with 4 in den, 5 in num with 5 in den etc) and you are left with C_f = \frac{1}{77} Since Volume now is 22 lt, Volume of wine = 22*(\frac{1}{77}) = \frac{2}{7} Theory: 1. When a fraction of a solution is removed, the percentage of either part does not change. If milk:water = 1:1 in initial solution, it remains 1:1 in final solution. 2. When you add one component to a solution, the amount of other component does not change. In milk and water solution, if you add water, amount of milk is the same (not percentage but amount) 3. Amount of A = Concentration of A * Volume of mixture Amount = C*V ( e.g. In a 10 lt mixture of milk and water, if milk is 50%, Amount of milk = 50%*10 = 5 lt) When you add water to this solution, the amount of milk does not change. So Initial Conc * Initial Volume = Final Conc * Final Volume C_i * V_i = C_f * V_f C_f = C_i * (V_i/V_f) In the question above, we find the final concentration of wine. Initial concentration C_i = 1 (because it is pure wine) When you remove 1 lt out of 3 lt, the volume becomes 2 lt which is your initial volume for the addition step. When you add 2 lts, final volume becomes 4 lt. So C_f = 1 * 2/4 Since it is done 19 times, C_f = 1 * (\frac{2}{4}) * (\frac{3}{5}) * (\frac{4}{6}) * (\frac{5}{7}) * .......* (\frac{19}{21}) * (\frac{20}{22}) The concentration of wine is 1/77 and since the final volume is 22 lt (the last term has V_f as 22, you get amount of wine = 1/77 * 22 = 2/7 lt Kudos +1 Karishma Is there an fast way to compute the result of the multiplacation series like we have for Cf? I actually did the long way . Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 3736 Location: Pune, India Followers: 804 Kudos [?]: 3172 [0], given: 136 Re: Ratio and Proportion [#permalink] 07 Feb 2013, 01:57 Expert's post y7214001 wrote: Kudos +1 Karishma Is there an fast way to compute the result of the multiplacation series like we have for Cf? I actually did the long way . It would have taken forever! C_f = 1 * (\frac{2}{4}) * (\frac{3}{5}) * (\frac{4}{6}) * (\frac{5}{7}) * ....... (\frac{18}{20}) * (\frac{19}{21}) * (\frac{20}{22}) You need to observe here that other than first two numerators and last two denominators, all other terms will cancel out. First term's denominator will cancel out third term's numerator. Second term's denominator will cancel out fourth term's numerator. The last two denominators will have no numerators to cancel them out. The first two numerators have no denominators to cancel them out. Usually, in such expressions (where terms have a pattern), things simplify easily. You just need to observe the pattern. _________________ Karishma Veritas Prep | GMAT Instructor My Blog Save$100 on Veritas Prep GMAT Courses And Admissions Consulting
Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options.
Veritas Prep Reviews
Intern
Joined: 31 Mar 2013
Posts: 46
Followers: 0
Kudos [?]: 5 [0], given: 65
Re: Ratio and Proportion [#permalink] 05 Jul 2013, 05:47
Hi Bunuel and VeritasPrepKarishma,
Would it be possible to use this formula in this case?
New Concentration of wine= Old concentration of wine * (V1/V2)^n
n= number of iterations (it is 19 in this case)
v1 = volume of liquid withdrawn
v1 = initial volume of liquid
I noticed a similar formula being used here:
a-20-litre-mixture-of-milk-and-water-contains-milk-and-water-22212.html
Regards,
Intern
Joined: 20 Feb 2011
Posts: 33
Location: United States
GMAT 1: 560 Q45 V23
WE: Information Technology (Consulting)
Followers: 0
Kudos [?]: 6 [0], given: 42
Re: A container has 3L of pure wine. 1L from the container is [#permalink] 16 Aug 2013, 10:56
I made a guess on this question.
If we are left with 4L of mixture which has 2L of wine and 2L of water after 1st process, the ratio of wine is about 1/2. So after 19 successive processes, ratio must be significantly less than 1/2.
Option B is little less than 1/2 so can't be the answer and we are left with option A and C. At least this helped me narrowed down to two options in 15 sec.
_________________
And many strokes, though with a little axe, hew down and fell the hardest-timbered oak. - William Shakespeare
Senior Manager
Joined: 17 Dec 2012
Posts: 329
Location: India
Followers: 6
Kudos [?]: 106 [0], given: 8
Re: A container has 3L of pure wine. 1L from the container is [#permalink] 16 Aug 2013, 17:20
The trick is to identify there should be a pattern as it is not possible to carry out all the calculations.
1. Initially wine was 3L.
2. After first operation wine was 2L
3. After second operation wine was 1.5L
4. After third operation wine was 1.2L
Now we can see the pattern (2) is 2/3 of (1), (3) is 3/4 of (2), (4) is 4/5 of (3) and so on
So in 3 operations wine left is 3 * 2/3 * 3/4 * 4/5 , after cancelling out of numbers we have 3* 2/5 = 1.2 L
So in 19 operations after cancelling out of numbers 3* 2/21 = 2/7 L of wine left
_________________
Srinivasan Vaidyaraman
sravna@gmail.com
Sravna Test Prep
http://www.sravna.com
Free Online course for the GMAT and the GRE
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 3736
Location: Pune, India
Followers: 804
Kudos [?]: 3172 [0], given: 136
Re: Ratio and Proportion [#permalink] 19 Aug 2013, 04:33
Expert's post
emailmkarthik wrote:
Hi Bunuel and VeritasPrepKarishma,
Would it be possible to use this formula in this case?
New Concentration of wine= Old concentration of wine * (V1/V2)^n
n= number of iterations (it is 19 in this case)
v1 = volume of liquid withdrawn
v1 = initial volume of liquid
I noticed a similar formula being used here:
a-20-litre-mixture-of-milk-and-water-contains-milk-and-water-22212.html
Regards,
Actually, it is a play on the same formula.
Cf = Ci * (V1/V2)*(V3/V4).....
Usually, in replacement questions, you remove n lts and put back n lts. So initial and final volume in each step is the same. That is why you get (V1/V2)^n.
In case V1 and V2 are different in subsequent steps, you use those volumes V1/V2 * V3/V4 *.....
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Save \$100 on Veritas Prep GMAT Courses And Admissions Consulting
Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options.
Veritas Prep Reviews
Intern
Joined: 11 Jun 2012
Posts: 10
Followers: 0
Kudos [?]: 1 [0], given: 6
A container has 3 liters of pure lime juice. 1 liter from [#permalink] 04 Oct 2013, 05:49
A container has 3 liters of pure lime juice. 1 liter from the container is taken out and 2 liter water is added. The process is repeated several times. After 19 such operations, quantity of lime juice in the mixture is
(A) 2/7 L
(B) 3/7 L
(C) 5/14 L
(D) 5/19 L
(E) 6/19L
A container has 3 liters of pure lime juice. 1 liter from [#permalink] 04 Oct 2013, 05:49
Similar topics Replies Last post
Similar
Topics:
A butler stole wine from a butt of sherry which contained 5 05 Feb 2004, 03:15
Is lxl <1? (1) lx+1l = 2lx-1l (2) lx-3l does not equal 0 8 17 Feb 2005, 09:29
Is lxl < 1? (1) lx + 1l = 2lx - 1l (2) lx - 3l ≠ 0 6 04 Jun 2007, 19:19
Is lxl < 1? (1) lx + 1l = 2lx - 1l (2) lx - 3l ≠ 0 8 02 Aug 2007, 09:05
2 A zinc-copper alloy with 3 kg of pure zinc would contain 90% 6 25 May 2012, 00:20
Display posts from previous: Sort by
# A container has 3L of pure wine. 1L from the container is
Question banks Downloads My Bookmarks Reviews Important topics
Go to page Previous 1 2 3 Next [ 42 posts ]
Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
|
{}
|
# Algebra again
1. Sep 29, 2005
### DB
can some 1 give me a hand with this? thanks
$$\frac{150}{v+20}+\frac{150}{v-5}=3.5$$
2. Sep 29, 2005
### Leong
1. make the denominator the same.
2. cross multiply the equation to get rid of the fraction form.
3. Sep 29, 2005
### DB
i get the -65 n 80 which i think is right so thanks leong
4. Sep 29, 2005
### Leong
5. Sep 29, 2005
### DB
80 works, -65 doesnt
6. Sep 29, 2005
### DB
it came down to:
$$3.5v^2+52.5v-650$$
7. Sep 30, 2005
### VietDao29
Uhmmm, I think you should check your calculation again.
Here we go:
$$\frac{150}{v + 20} + \frac{150}{v - 5} = 3.5$$
Multiply both sides by (v + 20) (v - 5)
$$\Leftrightarrow (v + 20)(v - 5)\left(\frac{150}{v + 20} + \frac{150}{v - 5}\right) = 3.5(v + 20)(v - 5)$$
$$\Leftrightarrow 150(v - 5) + 150(v + 20) = 3.5(v ^ 2 + 15v - 100)$$
$$\Leftrightarrow 300v + 2250 = 3.5v ^ 2 + 52.5v - 350$$
$$\Leftrightarrow ...$$
Can you go from here?
By the way, it does not come down to 3.5v2 + 52.5v - 650 = 0.
Viet Dao,
|
{}
|
# Trigonometric Equations
An equation involving one or more trigonometrical ratios of unknown angle is called a trigonometric equation
e.g. cos2x – 4 sinx = 1
It is to be noted that a trigonometrical identity is satisfied for every value of the unknown angle whereas trigonometric equation is satisfied only for some values (finite or infinite) of unknown angle.
e.g. sin2x + cos2x = 1 is a trigonometrical identity as it is satisfied for every value of x ∈ R.
Solution of a Trigonometric Equation:
A value of the unknown angle which satisfies the given equation is called a solution of the equation
e.g. sinθ = 1/2 ⇒ θ = π/6.
General Solution:
Since trigonometrical functions are periodic functions, solutions of trigonometric equations can be generalized with the help of the periodicity of the trigonometrical functions.
The solution consisting of all possible solutions of a trigonometric equation is called its general solution.
We use the following formulae for solving the trigonometric equations:
• sinθ = 0 ⇒ θ = nπ ,
• cosθ = 0 ⇒ θ = (2n+1),
• tanθ = 0 ⇒ θ = nπ ,
• sinθ = sinα ⇒ θ = nπ + (– 1)nα , where α ∈ [– π/2 , π/2]
• cosθ = cosα ⇒ θ = 2nπ ± α , where α ∈ [ 0 , π]
• tanθ = tanα ⇒ θ = nπ + α , where α ∈ (– π/2, π/2)
• sin2θ = sin2α , cos2θ = cos2α , tan2 θ = tan2α ⇒ θ = nπ ± α ,
• sinθ = 1 ⇒ θ = (4n + 1)π/2 ,
• sinθ = –1 ⇒ θ = (4n – 1)π/2 ,
• cosθ = 1 ⇒ θ = 2nπ ,
• cosθ = – 1 ⇒ θ = (2n + 1)π ,
• sinθ = sinα and cosθ = cosα ⇒ θ = 2nπ + α .
### Note:
• Everywhere in this chapter n is taken as an integer, If not stated otherwise.
• The general solution should be given unless the solution is required in a specified interval or range.
• a is taken as the principal value of the angle. Numerically least angle is called the principal value.
### Important points to Remember:
* While solving a trigonometric equation, squaring the equation at any step should be avoided as far as possible. If squaring is necessary, check the solution for extraneous values.
*Never cancel terms containing unknown terms on the two sides, which are in product. It may cause loss of genuine solution.
*The answer should not contain such values of angles which make any of the terms
*Domain should not be changed. If it is changed, necessary corrections must be incorporated.
*Check that denominator is not zero at any stage while solving equations.
*At times you may find that your answers differ from those in the package in their notations. This may be due to the different methods of solving the same problem. Whenever you come across such situation, you must check their authenticity. This will ensure that your answer is correct.
*While solving trigonometric equations, you may get same set of solution repeated in your answer. It is necessary for you to exclude these repetitions. e.g. in nπ + π/2 , kπ/5 + π/10 (n , k ∈ I ) , the set nπ + π/2 forms a part of the second set of solution ( you can check by putting k = 5m +2 (m ∈ I). Hence the final answer should be kπ/5 + π/10 , k ∈ I
* Sometimes the two solution set consist partly of common values. In all such cases the common part must be presented only once.
Solving the different forms of trigonometric equations
Equations Reducible to Lower Degree:
Illustration 1 :
Solve the trigonometric equation
Solution:
Given trigonometric equation can be written as
⇒ (1 − cos2x)2 + (1 + sin2x)2 = 1
⇒ 2 + 1 − 2(cos2x − sin2x) = 1
⇒ cos(2x + π/4) = 1/√2
⇒ 2x + π/4 = 2nπ ± π/4
∴ x = nπ , nπ − π/4
Equations Reducible by Direct Formula / Multiple Angle Formula:
Illustration 2.
Solve the trigonometric equation
Solution :
and ,
for θ = 2nπ ; cotθ/2 and sin θ are undefined. Hence do not satisfy the domain of given equation.
The only solution is ,θ = 4nπ ± 2π/3 where n ∈ I.
Next Page »
|
{}
|
## Want to keep learning?
This content is taken from the UNSW Sydney's online course, Maths for Humans: Linear and Quadratic Relations. Join the course to learn more.
2.1
## UNSW Sydney
Skip to 0 minutes and 12 secondsIn this activity, we're going to be looking at extending our understanding of direct proportions to more general linear relationships. And we're going to be using the Cartesian framework introduced by Descartes to do that. So we're looking at general lines. We'll be able to talk not just about their slopes, but also their intercepts, x and y-intercepts, and really positioning them in the Cartesian plane, getting good control over where they are. We'll then be able to apply this understanding to important examples-- in particular, to temperature measured in the Fahrenheit and Celsius systems and also about the important subject of supply and demand in economics.
Skip to 0 minutes and 53 secondsSo it all rests on really understanding the geometry of the lines and their equations in the Cartesian setup. So to get at a more general line, let's have a look at our line l and translate it so that it goes through, say, this particular point here. What would that involve? Well, if we're wanting a line which is still parallel to this one, it still should have the property that if we go over 2, we go up 1. So if it's going to go through this point, it should also go through this point here. So that's over 2 and up 1, and then over 2 up 1, it should go through that point.
Skip to 1 minute and 31 secondsAnd over 2 up 1, should go through that point. So let's draw such a line.
Skip to 1 minute and 39 secondsTake our ruler and connect some of our points. And we'll draw it in purple.
Skip to 1 minute and 59 secondsGood. So this is a new line. Let's give it a new name. Let's say Line K. And the natural question is, what would be the equation of this line? What is the relationship between points lying on this line? So let's have a look. So this point here is the point minus 1, 1. So this point here is the point 1, 2. This is the point 3, 3. This is the point 5, 4. So what's the relationship between these various x and y-coordinates for this line here? Well, it's going to be pretty close to the line l that it's translate of.
Skip to 2 minutes and 50 secondsIn fact, we'll be able to write it as y equals 1/2 x plus some adjustment, plus or minus some adjustment. Let's see what that might be. So if x is equal to minus 1 and y is equal to 1, then we would plug minus 1 in there and y equals 1 there. Then we see in order to make that happen, we would have to put a 3/2 there. Now they would actually be satisfied. Let's check about this point here-- if x is 1, then the right hand side is 1/2 plus 3/2, which is, in fact, 2. So you can check that all of these points here actually satisfy this equation here.
Skip to 3 minutes and 35 secondsSo this is the equation of the line k, which is a translate of the line l. And it has the same slope. Slope is 1/2 there. That slope is the same. That's still telling us that if we go over 1, we're going up 1/2. What's the significance of this 3/2? Well, the 3/2 is-- it's called the y-intercept.
Skip to 4 minutes and 17 secondsThat's the value of y when x is 0. So when x equals 0, then y has to be exactly 3/2. That's corresponding to this point right here on the y-axis where the line crosses the y-axis. That's why it's called the y-intercept. So another alternate form for this line would be to multiply by 2 and then get the x's and y's on the same side. So that would be x minus 2y equals minus 3. So you can check that that's another form for a line equation that describes this purple line. So the general situation that we're talking about here is we're talking about lines of the form y equals mx plus b.
Skip to 5 minutes and 10 secondsThis is a line with slope m and y-intercept b. In other words, when x is 0, then y equals b.
# Lines and linear relationships
Welcome to Week 2. We hope you have enjoyed learning about the basics of linear relations. Now we are going to move towards linking geometry and algebra through the remarkably efficient use of the Cartesian plane. We are really talking about graph paper here!
Lines are represented in the Cartesian plane by linear equations, usually in the forms $$\normalsize{ax+by=c}$$ or $$\normalsize{y=mx+b}$$. The second form emphasizes the importance of the slope $$\normalsize{m}$$ and the $$\normalsize{y}$$-intercept $$\normalsize{b}$$.
The $$\normalsize{y=mx+b}$$ form of a line is very convenient if we want to think of the line as representing a function which inputs a value $$\normalsize{x}$$, and outputs another value $$\normalsize{y}$$. To emphasize this functional aspect, it is common to also introduce a specific name of the function in question, say $$\normalsize{f}$$. Thus we would write
$\Large{y=f(x)=-2x+4}$
and sometimes we dispense with the reference to $$\normalsize{y}$$, so writing $$\normalsize{f(x)=-2x+4}$$.
For example you can verify that $$\normalsize{f(0)=4}$$, $$\normalsize{f(1)=2}$$, $$\normalsize{f(10)=-16}$$ and $$\normalsize{f(-3)=10}$$.
Any function of the form $$\normalsize{f(x)=ax+b}$$ for fixed $$\normalsize{a,\;b}$$ is called a linear function provided $${\normalsize a \neq 0}$$. We say that the physical line on the page for $$\normalsize{y=ax+b}$$ is the graph of that function.
The line $$\normalsize x=0$$ is not a function. This is because when $$x=0$$ there are too many possible values for $$\normalsize y$$. In this case we use a more general term, and call $$\normalsize x=0$$ a relation.
|
{}
|
# Do aes_256_gcm IVs just need to be unique for that key?
I'm using GCM (via openssl's EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, NULL, NULL))).
The IV's default length is 12 bytes.
It says that the IV must be unique, in the context of a particular key.
Does it matter if the IV is smaller or predictable? I intend re-using a 64-bit sequence number and a 64-bit millisecond timestamp as an 8-byte IV. So long as a restart introduces a 1ms delay, there can be no duplicate IVs. (I will guard against 1970 timestamps.)
Is it necessary to add in some random bytes? (These would need to be transported along with each message; I'm trying to keep the messages as small as is securely possible.)
If so, how many random bytes?
For GCM, IV can be predictable (contrary to some other modes such as CBC, there is no unpredictability or uniformness requirement), but they must not be reused. You are free to use any method you wish to ensure this uniqueness. If an IV is reused, then the "authentication key" is revealed, and the encryption itself becomes an instance of the "two-times pad", i.e. may leak a lot of information on the plaintext, depending on its format.
A timestamp is sort-of fine. The problem with timestamps is that they depend on a local clock, and since there is no such thing as a perfect clock, computers routinely include ways to adjust clocks, either manually or automatically. Note that automatic clock adjustment over the Internet uses NTP, which is usually unprotected (no authentication or encryption). Therefore, remote attackers may feed a connected system with fake NTP packets to force a clock adjustment and possibly an IV reuse.
Using a random value is a common way to achieve uniqueness with a high probability; with 96-bit IV, if you encrypt no more than $2^{32}$ messages with a given key, then probability of an IV collision (assuming a strong random source) will be at most $2^{-32}$, which is low enough to deter attackers (i.e. it's less worthwhile to wait for such an event than to simply buy lottery tickets).
Some extra notes:
• GCM actually supports all IV lengths, not just "12 bytes". An IV with length exactly 12 bytes is used "as is" and lets you know exactly if you have a collision or not. If you use a 64-bit timestamp (assumed non-repeating), you are encouraged to "pad it" with 4 extra bytes (e.g. zeros) so that the GCM implementation receives a 12-byte IV (even if you do not actually transmit the padding bytes in the message on the wire).
• Conversely, you could use a longer random IV. 16-byte random IV are arguably "better" than 12-byte random IV (it makes risks of collisions lower as long as individual messages are substantially shorter than 64 gigabytes, and collisions are less immediately detectable by attackers). But since you try to reduce on-wire size, I suppose you'd prefer not to do that.
• In some contexts, you can get a non-repeating IV "for free". E.g., in a TLS connection, each connection has its own key, so that IV collisions matter only within a given connection. But then, messages ("records" in TLS terminology) are successive; thus, a simple sequence number can be used (first record gets number 0, second record gets number 1, and so on). The sequence number is implicit and thus needs not be transmitted at all.
• There may be other solutions. For instance, a possible method would be to replace GCM with the following mode:
• Let x be the IV (see below for its length) and m the plaintext to encrypt.
• Compute HMAC/SHA-256 over the concatenation of the IV and the plaintext, and truncate it to the AES block size (16 bytes). This yields the authentication tag t.
• Use AES-CTR encryption over m with t as IV; this yields the ciphertext m'.
• Transmit x, t and m'.
With such a mechanism, you could use a very short IV x, even an empty one (of length 0). Note that this saves space, compared to GCM: with GCM, the encrypted message is sent along with an authentication tag (16 bytes) and an IV; here, I suggest use the HMAC-derived authentication tag as IV for encryption.
If you use that kind of mechanism with an empty IV x, then the whole thing becomes deterministic: if you encrypt twice the exact same message with the same key, then you get the exact same encrypted message. However, this should be the full extent of the leak. A small but non-empty IV x can help in hiding that.
Note that this is an encrypt-and-MAC setup, usually frowned upon for theoretical reasons. It is reasonably safe in this case, because HMAC/SHA-256 also protects confidentiality of the input (this is not necessarily the case of any other MAC mechanism), and CTR decryption implies no padding, and thus can be safely implemented even over unvalidated input data.
Summary: if your timestamp are really unique, then they are enough for GCM, and you can pad them to 12 bytes with zeros (that need not be transmitted along with the message). However, if clocks can be adjusted or rewinded, then you should probably use random IV instead, to get "probabilistic uniqueness"; and, in that case, don't go lower than 12 bytes. If you are desperate for size, then there are other possible avenues, but they are outside of existing published standards, which means that you need more external review and development care.
• So is my understanding correct, that when using a 16 byte random nonce we are on the safe side? I cannot imagine when those additional 4 bytes would become a problem... – martinstoeckli Apr 20 '18 at 11:39
• A mention of GCM-SIV (which is currently supported by BoringSSL) might be a useful addition to this answer, since it completely eliminates the potentially catastrophic failure mode of GCM if the nonce is repeated. – Ilmari Karonen May 13 '19 at 15:47
|
{}
|
# Definition of a plane-polarized harmonic plane waves having the same propagation constant
Tags:
1. Nov 15, 2014
### physicsjn
1. The problem statement, all variables and given/known data
Hi! The entire problem is this:
(a) Two plane-polarized harmonic plane waves having the same propagation constant are polarized, respectively, along two perpendicular directions. Show that if the phases of the two waves are different, their superposition yields generally an elliptically polarized plane wave.
(b) Show that the time-average Poynting vector of an elliptically polarized plane wave is equal to the sum of the time-average, Poynting vectors of the two orthogonal plane-polarized waves into which it can be decomposed.
2. Relevant equations
Plane waves
Def: a constant-frequency wave whose wavefronts (surfaces of constant phase) are infinite parallel planes of constant peak-to-peak amplitude normal to the phase velocity vector (Wikipedia).
$A(x,t)=A_ocos(kx-\omega t +\phi)$
$A(\mathbf{r},t)=A_o cos(\mathbf{k} \cdot \mathbf{r}-\omega t +\phi)$
$A(\mathbf{r},t)=A_o e^{i(\mathbf{k} \cdot \mathbf{r}-\omega t +\phi)}$
where
$A(x,t)$ is the wave height at position x and t.
$A_o$ is the amplitude
$k$ is the wave number
$\phi$ is the phase constant
$\omega$ is the angular frequency
Propagation constant:
$\frac{A_o}{A_x}=e^{\gamma x}$
$\gamma=\alpha+i\beta$
$\beta=k=\frac{2\pi}{\lambda}$
where
$A_x$ and $A_o$ are the amplitude at position x and the amplitude at source of propagation, respectively.
$\gamma$ is the propagation constant
$\alpha$ is the attenuation constant
$\beta$ is the phase constant
Equation of an ellipse:
$\frac{x^2}{a}+\frac{y^2}{b}=1$
whose parametric equations are
$x=a ~ cos ~t$
$y=b ~sin ~t$
3. The attempt at a solution
So far these are the things that I am not sure:
• I now know that plane waves have mathematical forms as given above. My question is how will they change if they become harmonic?
• I assume that plane polarization means that if $\mathbf{A}(\mathbf{r},t)$ is a vector, the disturbance is along a certain direction only. That is,$\mathbf{A}(\mathbf{r},t)=A_o e^{i(\mathbf{k} \cdot \mathbf{r}-\omega t +\phi)}\mathbf{\hat{x}}$ is said to be plane polarized along the x direction. Right?
• If the propagation constant is the same, I assume the phase constant is also the same which means that k is the same for both plane waves. Also by the definition of propagation constant above, the amplitude of the two plane waves are equal any time. Right?
• I am utterly confused on which among these quantities are complex and which are real. Hence, I don't know how to manipulate the exponential parts or if I can apply Euler's formula to simplify these.
My attempt for (a):
Let the first plane wave be
$\mathbf{A_1}(\mathbf{r},t)=A_o e^{i(\mathbf{k} \cdot \mathbf{r}-\omega t +\phi)}\mathbf{\hat{x}}=A_o e^{i(\mathbf{k} \cdot \mathbf{r}-\omega t )}e^{\phi}\mathbf{\hat{x}}$
and the second plane wave be
$\mathbf{A_2}(\mathbf{r},t)=A_o e^{i(\mathbf{k} \cdot \mathbf{r}-\omega t +\psi)}\mathbf{\hat{y}}=A_o e^{i(\mathbf{k} \cdot \mathbf{r}-\omega t )}e^{\psi}\mathbf{\hat{y}}$
Taking their superposition:
$\mathbf{A}=\mathbf{A_1}+\mathbf{A_2}$
$\mathbf{A}=A_o e^{i(\mathbf{k} \cdot \mathbf{r}-\omega t )}e^{\phi}\mathbf{\hat{x}}+A_o e^{i(\mathbf{k} \cdot \mathbf{r}-\omega t )}e^{\psi}\mathbf{\hat{y}}$
$\mathbf{A}=A_o e^{i(\mathbf{k} \cdot \mathbf{r}-\omega t )}(e^{\phi}\mathbf{\hat{x}}+e^{\psi}\mathbf{\hat{y}})$
$1=\frac{A_o}{\mathbf{A}}e^{i(\mathbf{k} \cdot \mathbf{r}-\omega t )}(e^{\phi}\mathbf{\hat{x}}+e^{\psi}\mathbf{\hat{y}})$
I want to recast this to the form of equation of an ellipse (see relevant equations above) but I'm stuck.
Thank you very much.
2. Nov 20, 2014
|
{}
|
## shamil98 one year ago Solve algebraically. $\frac{ e^x + e^{-x} }{ e^x - e^{-x} } = 5$ I started out by multiply both sides by the bottom fraction and whatnot and took the natural logs of both sides and resulted in error.. haven't done math in months..
1. ganeshie8
let $$e^x=u$$, rearrange the equation and get a quadratic
2. triciaal
|dw:1441511606981:dw|
3. shamil98
Yeah that's what i did originally ^
4. shamil98
@ganeshie8 i'll try that right now
5. dan815
ya thats good enuff too
6. dan815
|dw:1441511754857:dw|
7. dan815
|dw:1441511833267:dw|
8. shamil98
OH
9. shamil98
IM DUMB
10. triciaal
|dw:1441511696871:dw|
11. shamil98
thanks guys forgot that e^-x = 1/e^x xD
12. ganeshie8
if you're not really interested in the numeric value, you could save all that algebra by simply saying $$\coth x = 5 \implies x = \coth^{-1}(5)$$
13. ganeshie8
|
{}
|
# CTCGreedyDecoder¶
Versioned name : CTCGreedyDecoder-1
Category : Sequence processing
Short description : CTCGreedyDecoder performs greedy decoding on the logits given in input (best path).
Detailed description : Given an input sequence $$X$$ of length $$T$$, CTCGreedyDecoder assumes the probability of a length $$T$$ character sequence $$C$$ is given by
$p(C|X) = \prod_{t=1}^{T} p(c_{t}|X)$
Sequences in the batch can have different length. The lengths of sequences are coded as values 1 and 0 in the second input tensor sequence_mask. Value sequence_mask[j, i] specifies whether there is a sequence symbol at index i in the sequence i in the batch of sequences. If there is no symbol at j -th position sequence_mask[j, i] = 0, and sequence_mask[j, i] = 1 otherwise. Starting from j = 0, sequence_mass[j, i] are equal to 1 up to the particular index j = last_sequence_symbol, which is defined independently for each sequence i. For j > last_sequence_symbol, values in sequence_mask[j, i] are all zeros.
Note : Regardless of the value of ctc_merge_repeated attribute, if the output index for a given batch and time step corresponds to the blank_index, no new element is emitted.
Attributes
• ctc_merge_repeated
• Description : ctc_merge_repeated is a flag for merging repeated labels during the CTC calculation.
• Range of values : true or false
• Type : boolean
• Default value : true
• Required : no
Inputs
• 1 : data - input tensor with batch of sequences of type T_F and shape [T, N, C], where T is the maximum sequence length, N is the batch size and C is the number of classes. Required.
• 2 : sequence_mask - input tensor with sequence masks for each sequence in the batch of type T_F populated with values 0 and 1 and shape [T, N]. Required.
Output
• 1 : Output tensor of type T_F and shape [N, T, 1, 1] which is filled with integer elements containing final sequence class indices. A final sequence can be shorter that the size T of the tensor, all elements that do not code sequence classes are filled with -1.
Types
• T_F : any supported floating point type.
Example
<layer ... type="CTCGreedyDecoder" ...>
<data ctc_merge_repeated="true" />
<input>
<port id="0">
<dim>20</dim>
<dim>8</dim>
<dim>128</dim>
</port>
<port id="1">
<dim>20</dim>
<dim>8</dim>
</port>
</input>
<output>
<port id="0">
<dim>8</dim>
<dim>20</dim>
<dim>1</dim>
<dim>1</dim>
</port>
</output>
</layer>
|
{}
|
# Difference between revisions of "Full-waveform inversion, Part 1: Forward modeling"
Since its re-introduction by Pratt (1999)[1], full-waveform inversion (FWI) has gained a lot of attention in geophysical exploration because of its ability to build high resolution velocity models more or less automatically in areas of complex geology. While there is an extensive and growing literature on the topic, publications focus mostly on technical aspects, making this topic inaccessible for a broader audience due to the lack of simple introductory resources for newcomers to geophysics. We will accomplish this by providing a hands-on walkthrough of FWI using Devito, a system based on domain-specific languages that automatically generates code for time-domain finite-differences.[2]
As usual, this tutorial is accompanied by all the code you need to reproduce the figures. Go to http://github.com/seg/tutorials-2017 and follow the links. In the Notebook, we describe how to simulate synthetic data for a specified source and receiver setup and how to save the corresponding wavefields and shot records. In Part 2 of this series, we will address how to calculate model updates, i.e. gradients of the FWI objective function, via adjoint modeling. Finally, in Part 3 we will demonstrate how to use this gradient as part of an optimization framework for inverting an unknown velocity model.
## Introduction
Devito provides a concise and straightforward computational framework for discretizing wave equations, which underlie all FWI frameworks. We will show that it generates verifiable executable code at run time for wave propagators associated with forward and (in Part 2) adjoint wave equations. Devito frees the user from the recurrent and time-consuming development of performant time-stepping codes and allows the user to concentrate on the geophysics of the problem rather than on low-level implementation details of wave-equation simulators. This tutorial covers the conventional adjoint-state formulation of full-waveform tomography[3] that underlies most of the current methods referred to as full-waveform inversion.[4] While other formulations have been developed to improve the convergence of FWI for poor starting models, in these tutorials we will concentrate on the standard formulation that relies on the combination of a forward/adjoint pair of propagators and a correlation-based gradient. In part one of this tutorial, we discuss how to set up wave simulations for inversion, including how to express the wave equation in Devito symbolically and how to deal with the acquisition geometry.
## What is FWI?
FWI tries to iteratively minimize the difference between data that was acquired in a seismic survey and synthetic data that is generated from a wave simulator with an estimated (velocity) model of the subsurface. As such, each FWI framework essentially consists of a wave simulator for forward modeling the predicted data and an adjoint simulator for calculating a model update from the data misfit. This first part of this tutorial is dedicated to the forward modeling part and demonstrates how to discretize and implement the acoustic wave equation using Devito.
## Wave simulations for inversion
The acoustic wave equation with the squared slowness ${\displaystyle m}$, defined as ${\displaystyle m(x,y)=c^{-2}(x,y)}$ with ${\displaystyle c(x,y)}$ being the unknown spatially varying wavespeed, is given by:
${\displaystyle m{\frac {d^{2}u(t,x,y)}{dt^{2}}}-\Delta u(t,x,y)+\eta {\frac {du(t,x,y)}{dt}}=q(t,x,y;x_{s},y_{s})}$ (1)
where ${\displaystyle \Delta }$ is the Laplace operator, ${\displaystyle q(t,x,y;x_{s},y_{s})}$ is the seismic source, located at ${\displaystyle (x_{s},y_{s})}$ and ${\displaystyle \eta (x,y)}$ is a space-dependent dampening parameter for the absorbing boundary layer.[5] As shown in Figure 1, the physical model is extended in every direction by nbpml grid points to mimic an infinite domain. The dampening term ${\displaystyle \eta du/dt}$ attenuates the waves in the dampening layer and prevents waves from reflecting at the model boundaries. In Devito, the discrete representations of ${\displaystyle m}$ and ${\displaystyle \eta }$ are contained in a model object that contains a grid object with all relevant information such as the origin of the coordinate system, grid spacing, size of the model and dimensions time, x, y:
model = Model(vp=v, # A velocity model.
origin=(0, 0), # Top left corner.
shape=(101, 101), # Number of grid points.
spacing=(10, 10), # Grid spacing in m.
nbpml=40) # boundary layer.
Figure 1: (a) Diagram showing the model domain, with the perfectly matched layer (PML) as an absorbing layer to attenuate the wavefield at the model boundary. (b) The example model used in this tutorial, with the source and receivers indicated. The grid lines show the cell boundaries.
In the Model instantiation, vp is the velocity in km/s, origin is the origin of the physical model in meters, spacing is the discrete grid spacing in meters, shape is the number of grid points in each dimension and nbpml is the number of grid points in the absorbing boundary layer. Is is important to note that shape is the size of the physical domain only, while the total number of grid points, including the absorbing boundary layer, will be automatically derived from shape and nbpml.
## Symbolic definition of the wave propagator
To model seismic data by solving the acoustic wave equation, the first necessary step is to discretize this partial differential equation (PDE), which includes discrete representations of the velocity model and wavefields, as well as approximations of the spatial and temporal derivatives using finite-differences (FD). Unfortunately, implementing these finite-difference schemes in low-level code by hand is error prone, especially when we want performant and reliable code. The primary design objective of Devito is to allow users to define complex matrix-free finite-difference approximations from high-level symbolic definitions, while employing automated code generation to create highly optimized low-level C code. Using the symbolic algebra package SymPy to facilitate the automatic creation of derivative expressions, Devito generates computationally efficient wave propagators.[6]
At the core of Devito's symbolic API are symbolic types that behave like SymPy function objects, while also managing data:
• Function objects represent a spatially varying function discretized on a regular Cartesian grid. For example, a function symbol f = Function(name='f', grid=model.grid, space_order=2) is denoted symbolically as f(x, y). The objects provide auto-generated symbolic expressions for finite-difference derivatives through shorthand expressions like f.dx and f.dx2 for the first and second derivative in x.
• TimeFunction objects represent a time-dependent function that has time as the leading dimension, for example g(time, x, y). In addition to spatial derivatives TimeFunction symbols also provide time derivatives g.dt and g.dt2.
• SparseFunction objects represent sparse components, such as sources and receivers, which are usually distributed sparsely and often located off the computational grid — these objects also therefore handle interpolation onto the model grid.
To demonstrate Devito's symbolic capabilities, let us consider a time-dependent function u(time,x,y) representing the discrete forward wavefield:
u = TimeFunction(name="u", grid=model.grid,
time_order=2, space_order=2,
save=True, time_dim=nt)
where the grid object provided by the model defines the size of the allocated memory region, time_order and space_order define the default discretization order of the derived derivative expressions.
We can now use this symbolic representation of our wavefield to generate simple discretized expressions for finite-difference derivative approximations using shorthand expressions, such as u.dt and u.dt2 to denote du/dt and (d^2 u)/(dt^2 ) respectively:
>>> u.dt
-u(time - dt, x, y)/(2*dt) + u(time + dt, x, y)/(2*dt)
>>> u.dt2
-2*u(time, x, y)/dt**2 + u(time - dt, x, y)/dt**2 + u(time + dt, x, y)/dt**2
Using the automatic derivation of derivative expressions, we can now implement a discretized expression for Equation 1 without the source term q(x,y,t;x_s,y_s). The model object, which we created earlier, already contains the squared discrete slowness model.m and damping term model.damp as Function objects:
pde = model.m * u.dt2 - u.laplace + model.damp * u.dt
If we write out the (second order) second time derivative u.dt2 as shown earlier and ignore the damping term for the moment, our pde expression translates to the following discrete the wave equation:
${\displaystyle {\frac {\mathbf {m} }{dt^{2}}}(\mathbf {u} [time-dt]-2\mathbf {u} [time]+\mathbf {u} [time+dt])-\Delta \mathbf {u} [time]=0,time=1\cdots n_{t-1}}$ (2)
with time being the current time step and dt being the time stepping interval. To propagate the wavefield, we rearrange to obtain an expression for the wavefield u(time+dt) at the next time step. Ignoring the damping term once again, this yields:
${\displaystyle \mathbf {u} [time+dt]=2\mathbf {u} [time]-\mathbf {u} [time-dt])+{\frac {dt^{2}}{\mathbf {m} }}\Delta \mathbf {u} [time]}$ (3)
We can rearrange our pde expression automatically using the SymPy utility function solve, then create an expression which defines the update of the wavefield for the new time step u(time+dt), with the command u.forward:
stencil = Eq(u.forward, solve(pde, u.forward)[0])
stencil represents the finite-difference approximation derived from Equation 3, including the finite-difference approximation of the Laplacian and the damping term.
Although it defines the update for a single time step only, Devito knows that we will be solving a time-dependent problem over a number of time steps because the wavefield u is a TimeFunction object.
## Setting up the acquisition geometry
The expression for time stepping we derived in the previous section does not contain a seismic source function yet, so the update for the wavefield at a new time step is solely defined by the two previous wavefields. However as indicated in Equation 1, wavefields for seismic experiments are often excited by an active (impulsive) source q(x,y,t;x_s), which is a function of space and time (just like the wavefield u). To include such a source term in our modeling scheme, we simply add the the source wavefield as an additional term to Equation 3:
${\displaystyle \mathbf {u} [time+dt]=2\mathbf {u} [time]-\mathbf {u} [time-dt])+{\frac {dt^{2}}{\mathbf {m} }}(\Delta \mathbf {u} [time]+\mathbf {q} [time])}$ (4)
Because the source appears on the right-hand side in the original equation (Equation 1), the term also needs to be multiplied with dt^2/m (this follows from rearranging Equation 2, with the source on the right-hand side in place of 0). Unlike the discrete wavefield u however, the source q is typically localized in space and only a function of time, which means the time-dependent source wavelet is injected into the propagating wavefield at a specified source location. The same applies when we sample the wavefield at receiver locations to simulate a shot record, i.e. the simulated wavefield needs to be sampled at specified receiver locations only. Source and receiver both do not necessarily coincide with the modeling grid.
Here, RickerSource acts as a wrapper around SparseFunction and models a Ricker wavelet with a peak frequency f0 and source coordinates src_coords:
f0 = 0.010 # kHz, peak frequency.
src = RickerSource(name='src', grid=model.grid, f0=f0,
time=time, coordinates=src_coords)
The src.inject function now injects the current time sample of the Ricker wavelet (weighted with dt^2/m as shown in Equation 4) into the updated wavefield u.forward at the specified coordinates.
src_term = src.inject(field=u.forward,
expr=src * dt**2 / model.m,
offset=model.nbpml)
To extract the wavefield at a predetermined set of receiver locations, there is a corresponding wrapper function for receivers as well, which creates a SparseFunction object for a given number npoint of receivers, number nt of time samples, and specified receiver coordinates rec_coords:
rec = Receiver(name='rec', npoint=101, ntime=nt,
grid=model.grid, coordinates=rec_coords)
Rather than injecting a function into the model as we did for the source, we now simply save the wavefield at the grid points that correspond to receiver positions and interpolate the data to their exact possibly of the computational grid location:
rec_term = rec.interpolate(u, offset=model.nbpml)
## Forward simulation
We can now define our forward propagator by adding the source and receiver terms to our stencil object:
op_fwd = Operator([stencil] + src_term + rec_term)
The symbolic expressions used to create Operator contain sufficient meta-information for Devito to create a fully functional computational kernel. The dimension symbols contained in the symbolic function object (time, x, y) define the loop structure of the created code,while allowing Devito to automatically optimize the underlying loop structure to increase execution speed.
The size of the loops and spacing between grid points is inferred from the symbolic Function objects and associated model. grid object at run-time. As a result, we can invoke the generated kernel through a simple Python function call by supplying the number of time steps time and the time-step size dt. The user data associated with each Function is updated in-place during operator execution, allowing us to extract the final wavefield and shot record directly from the symbolic function objects without unwanted memory duplication:
op_fwd(time=nt, dt=model.critical_dt)
When this has finished running, the resulting wavefield is stored in u.data and the shot record is in rec.data. We can easily plot this 2D array as an image, as shown in Figure 2.
Figure 2. The shot record generated by Devito for the example velocity model. As demonstrated in the notebook, a movie of snapshots of the forward wavefield can also be generated by capturing the wavefield at discrete time steps. Figure 3 shows three timesteps from the movie.
Figure 3. Three time steps from the wavefield simulation that resulted in the shot record in Figure 2. You can generate an animated version in the Notebook at github.com/seg.
## Conclusions
In this first part of the tutorial, we have demonstrated how to set up the discretized forward acoustic wave equations and associated wave propagator with run-time code generation. While we limited our discussion to the constant density acoustic wave equation, Devito is capable of handling more general wave equations, but this is a topic beyond this tutorial on simulating waves for inversion. In Part 2 of our tutorial, we will show how to calculate a valid gradient of the FWI objective using the adjoint-state method. In Part 3, we will demonstrate how to set up a complete matrix-free and scalable optimization framework for acoustic FWI.
## Acknowledgments
This research was carried out as part of the SINBAD II project with the support of the member organizations of the SINBAD Consortium. This work was financially supported in part by EPSRC grant EP/L000407/1 and the Imperial College London Intel Parallel Computing Centre.
## References
1. Pratt, R. G., 1999, Seismic waveform inversion in the frequency domain, part 1: Theory and verification in a physical scale model: GEOPHYSICS, 64, 888–901. http://dx.doi.org/10.1190/1.1444597
2. Lange, M., Kukreja, N., Louboutin, M., Luporini, F., Zacarias, F. V., Pandolfo, V., Gorman, G., 2016, Devito: Towards a generic finite difference DSL using symbolic python: 6th workshop on python for high-performance and scientific computing. http://dx.doi.org/10.1109/PyHPC.2016.9
3. Tarantola, A., 1984, Inversion of seismic reflection data in the acoustic approximation: GEOPHYSICS, 49, 1259–1266. http://dx.doi.org/10.1190/1.1441754
4. Virieux, J., and Operto, S., 2009, An overview of full-waveform inversion in exploration geophysics: GEOPHYSICS, 74, WCC1–WCC26. http://dx.doi.org/10.1190/1.3238367
5. Cerjan, C., Kosloff, D., Kosloff, R., and Reshef, M., 1985, A nonreflecting boundary condition for discrete acoustic and elastic wave equations: GEOPHYSICS, 50, 705–708. http://dx.doi.org/10.1190/1.1441945
6. Meurer A, Smith CP, Paprocki M, et al., 2017, SymPy: symbolic computing in Python. PeerJ Computer Science 3:e103 http://dx.doi.org/10.7717/peerj-cs.103
## Corresponding authors
• Corresponding author: Mathias Louboutin, Seismic Laboratory for Imaging and Modeling (SLIM), The University of British Columbia, mloubouteoas.ubc.ca
• Philipp Witte, Seismic Laboratory for Imaging and Modeling (SLIM), The University of British Columbia
• Michael Lange, Imperial College London, London, UK
• Navjot Kukreja, Imperial College London, London, UK
• Fabio Luporini, Imperial College London, London, UK
• Gerard Gorman, Imperial College London, London, UK
• Felix J. Herrmann, Seismic Laboratory for Imaging and Modeling (SLIM), The University of British Columbia, now at Georgia Institute of Technology, USA
|
{}
|
Article Text
A network analysis of relationship dynamics in sexual dyads as correlates of HIV risk misperceptions among high-risk MSM
1. Kayo Fujimoto1,
2. Mark L Williams2,
3. Michael W Ross1
1. 1Department of Health Promotion & Behavioral Sciences, School of Public Health, The University of Texas Health Science Center at Houston, Houston, Texas, USA
2. 2Department of Health Policy and Management, College of Public Health & Sciences, Florida International University, Miami, Florida, USA
1. Correspondence to Dr Kayo Fujimoto, Department of Health Promotion & Behavioral Sciences, School of Public Health, The University of Texas Health Science Center at Houston, 7000 Fannin Street, UCT 2514, Houston, TX 77030-5401, USA; Kayo.Fujimoto{at}uth.tmc.edu
Abstract
Objectives Relationship dynamics influence the perception of HIV risk in sexual dyads. The objective of this study was to examine the effect of relational dynamics on knowledge or perception of a partner's HIV status in a sample of most at-risk men who have sex with men (MSM): drug-using male sex workers. The study identified relationship dimensions and examined their association with misperceptions about a particular partner's HIV status.
Methods The analytical sample for the study consisted of 168 sexual partnerships of 116 male sex workers and their associates. Exploratory factor analysis was conducted to identify dimensions of the interpersonal relationships in sexual partnerships that were then regressed on ‘risky misperceptions’ (misperceiving HIV negative when partner's self-report was positive or unknown).
Results Six relationship dimensions of intimate, commitment, socialising, financial, trust and honesty were extracted. Commitment was found to be protective against misperception (adjusted OR (AOR)=0.45), while trust was not (AOR=2.78). Other factors also were found to be associated with misperception. HIV-negative MSM (AOR=7.69) and partners who were both self-identified as gay (AOR=3.57) were associated with misperception, while encounters identified as sex work (AOR=0.29), in which both partners were Caucasian (AOR=0.16), and involved with an older partner (AOR=0.90) were protective.
Conclusions Couple-based HIV intervention efforts among MSM should consider that less trust and more commitment are protective factors in sexual partnerships.
• SOCIAL SCIENCE
• SEXUAL NETWORKS
• HIV
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Statistics from Altmetric.com
Men who have sex with men (MSM) are at disproportionately high risk of HIV infection.1 Male sex workers, a subgroup of MSM, are at especially elevated risk due to their sexual behaviours, drug use and number of sex partners.2–4 Moreover, male sex workers may provide a bridge for HIV transmission between several at-risk groups and the general population,3 ,5 which suggests that they may be key nodes in HIV transmission networks. A social network perspective defines ‘risk-potential network’ for disease transmission as a pattern of risk-potential linkages between two people who involve an infectious agent, and in the case of HIV, the risk-potential linkage is sex or injection drug use.6 HIV transmission is affected by a social network of confidants, such as close friends, who provide the social and normative contexts in which risky behaviour is facilitated or inhibited.7 These contexts are expected to influence the perception of the risk of HIV infection within a sexual network. For example, MSM often presume the HIV status of a sex partner and rely on personal impressions and beliefs to evaluate the riskiness of a sexual encounter with the partner.8 While male sex workers tend to endorse social norms about unsafe sex,2 the social conditions under which the risk perceptions of their sexual encounters are affected may differ significantly from those of other MSM and have rarely been investigated.
The perception that a partner presents risk may greatly affect whether an individual engages in a preventive behaviour in a sexual situation.9 In turn, interpersonal dynamics within a sexual partnership may influence perception. Familiarity and trust are two such dynamics.10 In general, individuals perceive a sex partner as less likely to be HIV infected if the individual has greater trust in the partner,11 perhaps because trust precludes a belief that a sex partner might present risk.12 At least one study found that having a high level of trust in a primary partner hinders regular HIV testing.13 In sex work encounters, a lack of trust in a sexual relationship has been shown to impede communication about risk between partners.3 Among MSM, interpersonal relationships that involve emotional and/or substantive support, such as money, also have been found to be associated with the disclosure of HIV status to the social network members.14 Other relational dimensions, such as emotional significance, honesty, intimacy, caring, importance and connectedness, also have been found to be associated with disclosure among MSM.15
Interpersonal relations within a sexual partnership may sway the perception of the risk of a sexual encounter. The objective of this study was to examine the relational dynamics that influence the perception of HIV risk associated with a sex partner. The main aims of this study were to: (1) explore the relational dimensions of sexual partnerships and (2) examine their associations with risky misperceptions of a partner’s HIV status. This study examined the perceptions of a sex partner's HIV status8 ,11 ,16 ,17 and used it as the dependent variables. ‘Misperception’ was defined as ‘incorrect knowledge of a sex partner's diagnosed infection’. Independent variables were the relational dynamics of the sexual partnership. To conduct the analysis, the study expanded the concept of the risk-potential network by identifying relational characteristics16 as risk-potential network linkages.
Methods
Study design
Data were collected between May 2003 and February 2004 as part of a larger study of the social, drug use and sexual networks of drug-using male sex workers in Houston, Texas. The sample was recruited using a combination of targeted sampling and participant referral,4 ,18 ,19 as explained in-depth elsewhere.20 ,21 Briefly, focal participants were first recruited and interviewed. Focal participants were eligible to participate if there was a male sex worker 17 years old or older who self-identified as male, had exchanged sex for money with a man in the last 7 days and had smoked crack cocaine or injected an illicit substance in the 48 h before being screened for the study. Focal participants were then asked to recruit individuals with whom they had used drugs and/or had sex, preferably, or whom they knew socially. In turn, secondary contacts were asked to recruit tertiary contacts. Secondary and tertiary participants were eligible to participate if they were 17 years old and linked to the focal or secondary (referring) participant. To increase the rate of successful referrals, participants were given US$20 incentive for recruiting a contact who was then interviewed. Interviewed participants were paid US$30 for their time and to defray the cost of transportation. The study was approved by the Committee for the Protection of Human Subjects at the University of Texas Health Science Center at Houston (IRB# HSC-SPH-02-009).
Measures
Respondent's HIV status. Respondent's HIV status was measured by self-report and coded as positive, negative and unknown (including indeterminate).
Sex partner's HIV status. Knowledge of a sex partner's HIV status was determined by the following question, ‘Do you think your partner is HIV positive?’ (yes or no). The response was linked to that partner's self-reported HIV status.
Outcome variable: Respondent's misperception of his sex partner's HIV status. A two-by-three contingency table of a respondent's perception of the partner's HIV status (indexed in a row) by the partner's self-reported HIV status (indexed in a column) was created (table 1).
Table 1
Cross-tabulation of respondent's knowledge by sex partner's self-report on his HIV status (dyads = 168)
Perceptions by self-report were then coded as a risky misperception when: (1) a respondent's perception of the partner's status was his being negative while his partner's self-report was positive (n12=19) and (2) a respondent's perception of the partner's status as negative while the partner's self-report was status unknown (n13=17).
Risky sex. Risky sex was measured by the involvement of drug use before or during sex (yes or no) and/or having unprotected sex the last time the dyad had sex (yes or no).
Relationship dynamics. Respondents were asked to answer 14 questions about their relationships with their sex partners. Questions were related: (1) the connection felt with the partner (‘connect’), (2) confidence in the partner (‘confide’), (3) emotional attachment to the partner (‘emotion’), (4) partner's concern for the respondent (‘matter’), (5) respondent's concern for the partner (‘care’), (6) respondent's willingness to live with the partner (‘live’), (7) knowledge of the partner's whereabouts (‘contact’), (8) respondent's willingness to spend time with the partner (‘hang out’), (9) respondent's willingness to be seen with the partner (‘seen’), (10) respondent's willingness to lend money to the partner (‘ego-money’), (11) the partner's willingness to lend money to the respondent (‘partner-money’), (12) trust in the partner (‘ego-trust’), (13) partner's trust in the respondent (‘partner-trust’) and (14) partner's honesty with the respondent (‘partner-honest’) (see the online supplement for questionnaire items). All items were scaled from 1 (not at all) to 10 (very much/extremely) or 1 (very little) to 10 (very much).
Data
In the original study, 334 men (84%) and 62 women (16%) were interviewed. The 396 respondents were also asked about their contacts’ characteristics, HIV status, risky sexual behaviours and relationships to the respondent. Information was obtained on 4880 respondent-contact dyads. Contacts provided data on an average of 12 others (SD=10, Min=1, Max=59). Of the 4880 dyads, only 179 dyads in which both the participant and the contact were interviewed were included in the analysis. Data on 11 of these were excluded due to missing relational information. Thus, the analytical sample used for this study consisted of 168 respondent-contact dyads (including two man–woman dyads) that involved sex. These 168 dyads comprised 116 unique male respondents. Among these 116 respondents, 33 (28.5%) were focal participants, 53 (45.7%) were secondary contacts and 30 (25.9%) were tertiary contacts.
Exploratory factor analysis
Exploratory factor analysis, using the iterated principal-factor estimation method,22 was conducted to identify relationship dynamics. With the assumption that the factors were correlated, loadings were rotated (oblique rotation). Then, parallel analysis was conducted to determine the number of factors to be retained.23 In the parallel analysis, the eigenvalues obtained from the 10 correlation matrices that were generated from random datasets were averaged. The averaged eigenvalues were compared with the eigenvalues derived from the factor model. When the former was larger than the latter, factors were indicated as mostly random noise.23 Then, the scale reliability coefficient (Cronbach's α) of each extracted factor was computed for derived factors with more than two items with loadings >0.40. The final factors were calculated by computing regression coefficients based on all items included in the regression, which were included as relationship dynamics variables in the subsequent regression analysis.
Regression analysis
The unit of analysis for the study was the sexual dyad. Because 32.8% of the 116 respondents named more than one sex partner, dyadic data were treated as correlated binary data clustered on the respondent. To account for clustering, a generalised estimating equation24 with a logit link function was used to estimate the population-averaged odds of misperceiving the partner's HIV status as a function of the covariates in the model. An exchangeable correlation structure with robust empirical variance estimates was specified to address potential misspecification of the correlation structure. All analyses were conducted using Stata V.13.
Descriptive statistics
Table 2 shows descriptive statistics for HIV status, sociodemographic characteristics and risky sexual behaviour of both respondents and their sex partners as well as their dyadic characteristics and relationship dynamics (dyads=168, with 116 MSM).
Table 2
Percentages or means (SDs; min and max) for respondent and his relational characteristics (MSM=116, dyads=168)
The majority of the sample (86%) had traded sex for money and had an average of 36 partners in the 30 days before the interview. Approximately two-thirds of sex partners (66%) were reported by respondents to trade sex for money. Of the sample, 32% reported being HIV positive and 55%, negative. The remainder was of unknown status.
Exploratory factor analysis
The result of the exploratory factor analysis indicated that the first six of the eigenvalues were greater than the eigenvalues averaged over 10 replications. Therefore, the first six factors, ranging from 0.21 to 8.83, were retained. The original items were grouped into six latent factors.
The first factor comprised the highly loaded items of ‘connect’ (0.79), ‘confide’ (0.51) and ‘emotion’ (0.62) (Cronbach's α=0.91) to describe intimate relationships. The second factor comprised the highly loaded items of ‘matter’ (0.97), ‘care’ (0.73) and ‘live’ (0.49) (Cronbach's α=0.88) to describe committed relationships. The third factor comprised ‘contact’ (0.87), ‘hang out’ (0.88) and ‘seen (0.78) (Cronbach's α=0.92) to describe socialising. The fourth factor comprised ‘ego-money’ (0.42) and ‘partner-money’ (0.88) to describe a financial dimension. A fifth factor comprised ‘ego-trust’ (0.58) and ‘partner-trust’ (0.72) to describe the trust dimension. A sixth factor comprised the item of ‘partner-honest’ and was treated as a single item dimension, honesty (0.81). Eigenvalues (>0.2), uniqueness statistics and loadings (>0.4) for each factor are provided in the table in the online supplement.
Regression analysis
Table 3 shows the adjusted ORs (AOR) for the relationship dimensions.
Table 3
GEE results of adjusted ORs (AOR), standard errors, 95% CIs, and p values in parenthesis (dyads=168, N=116)
The dimension commitment was found to be associated with a decrease in the odds of misperceiving the partner's HIV status (AOR=0.45; p=0.039). Higher trust was found to increase the odds of misperceiving the partner's HIV status (AOR=2.78; p=0.026). Other relational dimensions, intimacy (connect, confide and emotion), socialising (contact, hang out and seen), financial (ego-money and partner-money) and honesty (partner-honest), were not significantly associated with misperception of the partner's status.
Risky behaviour. Having neither a drug use relationship nor unprotected sex was associated with risky misperception of the partner's HIV status.
Respondent's and partner's characteristics. HIV-negative respondents had greater odds of misperceiving the sex partner's HIV status than did HIV-positive respondents (AOR=7.69 (1/0.13) p=0.002). Almost as risky is that non-sex workers had greater odds of misperceiving the partner's status than did male sex workers (AOR=3.45 (1/0.29); p=0.039). However, a respondent's knowledge of the sex partner as a sex worker was not associated with misperception. Additionally, older sex partners were associated with decreased odds of misperception of HIV status, but the upper 95% CI was very close to 1 (AOR=0.90; p=0.042).
Dyadic characteristics. Dyads in which both partners were African American were associated with increased odds of misperceiving the partners’ HIV status; however, the effect was marginal (AOR=5.11; p=0.057). Conversely, dyads in which both partners were Caucasian were associated with decreased odds of misperception (AOR=0.16; p=0.011). Dyads in which both partners self-identified as gay were found to be associated with increased odds of misperception (AOR=3.57; p=0.048).
Discussion
Our findings indicated that six dynamic components of sexual relationships appear to associate with knowledge of HIV status among male sex workers: intimacy, commitment, socialising, financial, trust and honesty. Among these, greater trust was associated with higher odds of misperceiving a sex partner's HIV status. This result is consistent with previous studies that have found that greater trust in a sexual relationship is related to engaging in risky sexual behaviours.10 ,11 Studies have shown that homeless men have greater feelings of trust in sex partners of short duration than do non-homeless men.12 More than half the sample in this study were homeless at the time that they were interviewed. This may account for the strong association between greater trust and misperception of a partner's HIV status.
Conversely, the findings showed that stronger feelings of commitment to a sex partner (i.e., respecting what a partner thinks, caring about a partner and willingness to live with a partner) were associated with correctly perceiving a partner's HIV status. This result is consistent with other studies that report that individuals in committed partnerships are more likely to accurately perceive their sex partner's infection status.25 These findings suggest that stronger relationships are an important component in communications about HIV status in sexual encounters, which can be distinguished from intimacy and honesty. We found that financial and socialising relationships (frequency of contact, hanging out and seeing) were not associated with misperceiving the partner's HIV status. Given that these men also used drugs extensively, it is interesting that drug use was not associated with misperception of HIV status.
Interestingly, male sex workers’ perceptions of their sex partners’ HIV status were likely to be more accurate than were others in the study. This finding may be because sex workers tend to assume that all sex partners are positive, given the risk environment in which they work.3 Assuming that all partners are HIV positive may decrease the likelihood of an incorrect assumption about a partners’ status. The study also found that HIV-negative individuals and couples in which both partners were gay were more likely to misperceive their partners’ status. HIV-negative MSM who engaged in unprotected anal sex with their primary partners tend to assume that the partner was negative.26 This may be strongly related to feelings of trust in the partner. Conversely, dyads in which both partners were Caucasian or in which one's partner was older, were more likely to have correct knowledge of their partner's status.
Our study has certain limitations. First, the study defined sex partners as ‘ever having had sex’. Some forms of sex carry a lesser risk of HIV transmission. Second, the majority of the sample was MSM who had exchanged sex for money, which represents only a part of the risk-potential partnerships of HIV transmission. Therefore, our results are not generalisable to other MSM populations. Future research should test the conclusions against a different dataset to claim any generality for our findings.
Despite these limitations, our study provides a more comprehensive understanding of the relational dynamics that may enhance or curtail HIV risk in sexual relationships. Social networks have substantial effects of HIV risk perceptions among MSM, and the process through which they are formed has been underscored.9 The study addressed this issue in the context of sexual encounters that involve a sample of most at-risk MSM, i.e., men who use drugs and exchange sex for money. HIV prevention efforts that focus on sexual partnerships, such as couple-based voluntary HIV counselling and testing,27 could be more effectively delivered with a nuanced consideration of their relational constituents.
Key messages
• Sexual partnerships that involve the most at-risk MSM consist of six relational dimensions: intimacy, commitment, socialising, financial, trust and honesty.
• Strong trust with sex partners was a risk factor for incorrectly knowing the sex partner's HIV status.
• Strong feelings of commitment to the partner (i.e., respecting partner's thoughts, caring and willingness to live together) were associated with non-risk knowledge of the partner's HIV status.
• HIV prevention efforts in couple-based voluntary HIV counselling should identify and adjust for the potential impacts of the six relational dimensions of the interpersonal relationship.
Acknowledgments
We also acknowledge Ju Yeong Kim for assisting the validation of the data and results.
• Supplementary Data
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
Files in this Data Supplement:
• Abstract in Japanese
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
Files in this Data Supplement:
Footnotes
• Handling editor Stefan Baral
• Contributors KF initiated this study, formulated the conception and conducted data analysis. MLW and MWR assisted in formulating the conception, acquired data and interpreted the results. All authors were involved in drafting the article or in revising the intellectual content. All authors approved the submitted version.
• Funding This study used the dataset collected by social network project funded by the following National Institutes of Health grant: National Institutes of Health/NIDA R01DA015025. This study was, in part, supported by the National Institutes of Health/NIMH 1R01MH100021.
• Competing interests None.
• Ethics approval University of Texas Health Science Center at Houston.
• Provenance and peer review Not commissioned; externally peer reviewed.
Request Permissions
If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant permission to reuse the content in many different ways.
|
{}
|
# How do you write the equation of a line which passes through (0, -3) and has a slope of -5?
Oct 23, 2016
$y = - 5 x - 3$
#### Explanation:
When given a set of points and the slope, use the point slope formula, which is:
$y - {y}_{1} = m \left(x - {x}_{1}\right)$
Where ${y}_{1}$ is the $y$ in the set of points
${x}_{1}$ is the $x$ in the set of points
And $m$ is the slope
Plug in the numbers into the equation
$y + 3 = - 5 \left(x - 0\right)$
Distribute the $- 5$ throughout the set of parenthesis
#y+3=-5x+0
Subtract $3$ on both sides of the equation
$y = - 5 x - 3$
|
{}
|
Lemma 29.26.4. Let $X$ be a scheme. The following are equivalent
1. every finite flat quasi-coherent $\mathcal{O}_ X$-module is finite locally free, and
2. every closed subset $Z \subset X$ which is closed under generalizations is open.
Proof. In the affine case this is Algebra, Lemma 10.108.6. The scheme case does not follow directly from the affine case, so we simply repeat the arguments.
Assume (1). Consider a closed immersion $i : Z \to X$ such that $i$ is flat. Then $i_*\mathcal{O}_ Z$ is quasi-coherent and flat, hence finite locally free by (1). Thus $Z = \text{Supp}(i_*\mathcal{O}_ Z)$ is also open and we see that (2) holds. Hence the implication (1) $\Rightarrow$ (2) follows from the characterization of flat closed immersions in Lemma 29.26.1.
For the converse assume that $X$ satisfies (2). Let $\mathcal{F}$ be a finite flat quasi-coherent $\mathcal{O}_ X$-module. The support $Z = \text{Supp}(\mathcal{F})$ of $\mathcal{F}$ is closed, see Modules, Lemma 17.9.6. On the other hand, if $x \leadsto x'$ is a specialization, then by Algebra, Lemma 10.78.5 the module $\mathcal{F}_{x'}$ is free over $\mathcal{O}_{X, x'}$, and
$\mathcal{F}_ x = \mathcal{F}_{x'} \otimes _{\mathcal{O}_{X, x'}} \mathcal{O}_{X, x}.$
Hence $x' \in \text{Supp}(\mathcal{F}) \Rightarrow x \in \text{Supp}(\mathcal{F})$, in other words, the support is closed under generalization. As $X$ satisfies (2) we see that the support of $\mathcal{F}$ is open and closed. The modules $\wedge ^ i(\mathcal{F})$, $i = 1, 2, 3, \ldots$ are finite flat quasi-coherent $\mathcal{O}_ X$-modules also, see Modules, Section 17.21. Note that $\text{Supp}(\wedge ^{i + 1}(\mathcal{F})) \subset \text{Supp}(\wedge ^ i(\mathcal{F}))$. Thus we see that there exists a decomposition
$X = U_0 \amalg U_1 \amalg U_2 \amalg \ldots$
by open and closed subsets such that the support of $\wedge ^ i(\mathcal{F})$ is $U_ i \cup U_{i + 1} \cup \ldots$ for all $i$. Let $x$ be a point of $X$, and say $x \in U_ r$. Note that $\wedge ^ i(\mathcal{F})_ x \otimes \kappa (x) = \wedge ^ i(\mathcal{F}_ x \otimes \kappa (x))$. Hence, $x \in U_ r$ implies that $\mathcal{F}_ x \otimes \kappa (x)$ is a vector space of dimension $r$. By Nakayama's lemma, see Algebra, Lemma 10.20.1 we can choose an affine open neighbourhood $U \subset U_ r \subset X$ of $x$ and sections $s_1, \ldots , s_ r \in \mathcal{F}(U)$ such that the induced map
$\mathcal{O}_ U^{\oplus r} \longrightarrow \mathcal{F}|_ U, \quad (f_1, \ldots , f_ r) \longmapsto \sum f_ i s_ i$
is surjective. This means that $\wedge ^ r(\mathcal{F}|_ U)$ is a finite flat quasi-coherent $\mathcal{O}_ U$-module whose support is all of $U$. By the above it is generated by a single element, namely $s_1 \wedge \ldots \wedge s_ r$. Hence $\wedge ^ r(\mathcal{F}|_ U) \cong \mathcal{O}_ U/\mathcal{I}$ for some quasi-coherent sheaf of ideals $\mathcal{I}$ such that $\mathcal{O}_ U/\mathcal{I}$ is flat over $\mathcal{O}_ U$ and such that $V(\mathcal{I}) = U$. It follows that $\mathcal{I} = 0$ by applying Lemma 29.26.1. Thus $s_1 \wedge \ldots \wedge s_ r$ is a basis for $\wedge ^ r(\mathcal{F}|_ U)$ and it follows that the displayed map is injective as well as surjective. This proves that $\mathcal{F}$ is finite locally free as desired. $\square$
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
|
{}
|
# Sombor index of some graph transformations
Document Type : Original paper
Authors
1 Department of Mathematics, Karnatak University, Dharwad
2 University of Kragujevac
Abstract
The Sombor index of the graph $G$ is a recently introduced degree based topological index. It is defined as $SO = \sum_{uv \in E(G)} \sqrt{d(u)^2+d(v)^2}$, where $d(u)$ is the degree of the vertex u and $E(G)$ is the edge set of $G$. In this paper we calculate $SO$ of some graph transformations.
Keywords
Main Subjects
#### References
[1] R. Aguilar-Sánchez, J.A. Méndez-Bermúdez, J.M. Rodríguez, and J.M. Sigarreta, Normalized Sombor indices as complexity measures of random networks, Entropy 23 (2021), no. 8, ID: 976.
[2] S. Alikhani and N. Ghanbari, Sombor index of polymers, MATCH Commun. Math. Comput. Chem. 86 (2021), no. 3, 715–728.
[3] S. Amin, A. Ur Rehman Virk, M.A. Rehman, and N.A. Shah, Analysis of dendrimer generation by Sombor indices, J. Chem. 2021 (2021), ID: 9930645.
[4] B. Bommanahal, I. Gutman, and V.R. Desai, Zagreb indices of generalized transformation graphs and their complements, Kragujevac J. Sci. 37 (2015), 99–112.
[5] H. Chen, W. Li, and J. Wang, Extremal values on the Sombor index of trees, MATCH Commun. Math. Comput. Chem. 87 (2022), no. 1, 23–49.
[6] R. Cruz, J. Rada, and J.M. Sigarreta, Sombor index of trees with at most three branch vertices, Appl. Math. Comput. 409 (2021), ID: 126414.
[7] K.C. Das, A.S. Çevik, I.N. Cangul, and Y. Shang, On Sombor index, Symmetry 13 (2021), no. 1, ID: 140.
[8] T. Došlic, T. Réti, and A. Ali, On the structure of graphs with integer Sombor indices, Discrete Math. Lett. 7 (2021), 1–4.
[9] I. Gutman, Geometric approach to degree-based topological indices: Sombor indices, MATCH Commun. Math. Comput. Chem. 86 (2021), no. 1, 11–16.
[11] B. Horoldagva and C. Xu, On Sombor index of graphs, MATCH Commun. Math. Comput. Chem. 86 (2021), no. 3, 703–713.
[12] Y. Lihui, X. Ai, and L. Zhang, The Zagreb coindices of a type of composite graph, Hacettepe J. Math. Stat. 45 (2016), no. 4, 1135–1142.
[13] H.S. Ramane, B. Basavanagoud, and R.B. Jummannaver, Harmonic index and Randi´c index of generalized transformation graphs, J. Nigerian Math. Soc. 37 (2018), no. 2, 57–69.
[14] I. Redžepović, Chemical applicability of Sombor indices, J. Serb. Chem. Soc. 86 (2021), no. 5, 445–457.
[15] E. Sampathkumar and S.B. Chikkodimath, Semitotal graphs of a graph-I, Karnatak Univ. J. 18 (1973), 281–284.
[16] J. Wang and Q. Ma, Some results on edge cover coloring of double graphs, Appl. Math. 3 (2012), no. 3, 246–266.
17] D. West, Introduction to Graph Theory, Prentice Hall, New Delhi, 2001.
[18] W. Zhang, L. You, H. Liu, and Y. Huang, The expected values and variances for Sombor indices in a general random chain, Appl. Math. Comput. 411 (2021), ID: 126521.
|
{}
|
## Difference equations in the complex plane: quasiclassical asymptotics and Berry phase.(English)Zbl 1484.39018
Summary: We consider the equation $$\Psi (z+h) = M(z) \Psi (z)$$, where $$z \in \mathbb{C}$$, $$h>0$$ is a parameter, and $$M: \mathbb{C} \mapsto \mathrm{SL}(2,\mathbb{C}$$ is a given analytic function. We get asymptotics of its analytic solutions as $$h \rightarrow 0$$. The asymptotic formulas contain an analog of the geometric (Berry) phase well known in the quasiclassical analysis of differential equations.
### MSC:
39A45 Difference equations in the complex domain
### Keywords:
complex plane; quasiclassical asymptotics; geometric phase
Full Text:
### References:
[1] Fedoryuk, MV., Asymptotic analysis. Linear ordinary differential equations (2009), Berlin, Heidelberg GmbH: Springer-Verlag, Berlin, Heidelberg GmbH [2] Wasow, W., Asymptotic expansions for ordinary differential equations (1987), New York: Dover Publications, New York · Zbl 0169.10903 [3] Geronimo, JS; Bruno, O.; Assche, WV., WKB and turning point theory for second-order difference equations, Oper Theory Adv Appl, 69, 269-301 (1992) [4] Helffer, B.; Sjöstrand, J., Analyse semi-classique pour l’équation de Harper (avec application à l’équationde Schrödinger avec champ magnétique), Mémoires de la SMF (nouvelle série), 34, 1-113 (1988) · Zbl 0714.34130 [5] Maslov, VP; Fedoriuk, MV., Semi-classical approximation in quantum mechanics (1981), Amsterdam: Reidel, Amsterdam [6] Dobrokhotov, SY; Tsvetkova, AV., Lagrangian manifolds related to asymptotics of Hermite polynomials, Math Notes, 104, 810-822 (2018) · Zbl 1409.42019 [7] Buslaev, V.; Fedotov, A., The complex WKB method for Harper equation, St Petersburg Math J, 6, 495-517 (1995) [8] Fedotov, AA; Shchetka, EV., The complex WKB method for difference equations in bounded domains, J Math Sci, 224, 157-169 (2017) · Zbl 1476.39025 [9] Fedotov, A.; Shchetka, E., Complex WKB method for a difference Schrödinger equation with the potential being a trigonometric polynomial, St Petersburg Math J, 29, 363-381 (2018) · Zbl 1385.39001 [10] Fedotov, A.; Klopp, F., The complex WKB method for difference equations and Airy functions, SIAM J Math An, 51, 6, 4413-4447 (2019) · Zbl 1425.39011 [11] Fedotov, A.; Klopp, F., WKB asymptotics of meromorphic solutions to difference equations, Appl Anal (2019) · Zbl 1476.39023 [12] Berry, M., Quantal phase factors accompanying adiabatic changes, Proc R Soc Lond A, 392, 45-57 (1984) · Zbl 1113.81306 [13] Simon, B., Holonomy, the quantum adiabatic theorem and Berry’s phase, Phys Rev Lett, 51, 2167-2170 (1983) [14] Wilkinson, M., An exact renormalisation group for Bloch electrons in a magnetic field, J Phys A Math Gen, 20, 4337-4354 (1987) [15] Guillement, JP; Helffer, B.; Treton, P., Walk inside Hofstadter’s butterfly, J Phys France, 50, 2019-2058 (1989) [16] Avila, A.; Jitomirskaya, S., The ten martini problem, Ann Math, 170, 303-342 (2009) · Zbl 1166.47031 [17] Fedotov, AA., Monodromization method in the theory of almost-periodic equations, St Petersburg Math J, 25, 303-325 (2014) · Zbl 1326.39011 [18] Buslaev, V, Fedotov, A.The monodromization and Harper equation. Séminaire sur les Équations aux Dérivées Partielles 1993-1994, Exp. no EXXI, École Polytech Palaiseau; 1994. 23 pp. · Zbl 0880.34082 [19] Lyalinov, MA; Zhu, NY., A solution procedure for second-order difference equations and its application to electromagnetic-wave diffraction in a wedge-shaped region, Proc R Soc Lond A, 459, 3159-3180 (2003) · Zbl 1092.78008 [20] Fedotov, A, Shchetka, E.Berry phase for difference equations. Days on Diffraction. Institute of Electrical and Electronics Engineers Inc.; 2017. p. 113-115. [21] Springer, G., Introduction to Riemann surfaces (1957), New York: Addison-Wesley, New York · Zbl 0078.06602 [22] Fedotov, A.; Klopp, F., Strong resonant tunneling, level repulsion and spectral type for one-dimensional adiabatic quasi-periodic Schrödinger operators, Annales Scientifiques de l’Ecole Normale Superieure, 38, 889-950 (2005) · Zbl 1112.47038
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
{}
|
# SimObjects in the FhSimTutorialLibrary
In this tutorial the SimObjects in FhSimTutorialLibrary are elaborated with focus on the discretization of the system and the implementation as SimObjects.
## Linear spring in 3D
#### Discretization
The input to the linear spring is the position at the ends A and B, $p_A$ and $p_B$. The outputs are the forces at A and B, $F_A$ and $F_B$. $F_A[p_A(t),p_B(t)]$ and $F_B[p_A(t),p_B(t)]$ must be implemented in the SimObject.
The length of the spring is found as
The spring forces are then calculated as
where $L_{relaxed}$ is the relaxed length of the spring. If $k$ is the spring stiffness and $L_{eff}$>0, we can calculate the forces from the spring as
$F_{Ax} = -k L_{eff}{p_{Ax}-p_{Bx}\over L}\tag{3a}$ $F_{Ay} = -k L_{eff}{p_{Ay}-p_{By}\over L}\tag{3b}$ $F_{Az} = -k L_{eff}{p_{Az}-p_{Bz}\over L}\tag{3c}$
$F_{Bx} = -F_{Ax}\tag{4a}$ $F_{By} = -F_{Ay}\tag{4b}$ $F_{Bz} = -F_{Az}\tag{4c}$
#### Implementation
A thorough explaination of how to implement a SimObject in FhSim is out of scope here, but is found in the FhSim documentation. The constructor of the linear spring is shown in the following code-block.
#include "CLinearSpring.h"
#include <cmath>
//Constructor for the linear spring class
CLinearSpring::CLinearSpring(std::string sSimObjectName, ISimObjectCreator* pCreator)
: SimObject(sSimObjectName) {
#ifdef FH_VISUALIZATION
m_iNumPoints = 2; // The number of points for the visualization.
#endif
//Input ports
// Register common computation function
pCreator->RegisterCommonCalculation(COMMON_COMPUTATION_FUNCTION(CLinearSpring::CalcOutput), &m_CalcOutputs);
// Output ports
// Parameters
pCreator->GetDoubleParam("Stiffness", &m_dStiffness);
pCreator->GetDoubleParam("RelaxedLength", &m_dRelaxedLength);
}
CLinearSpring inherits the SimObject class, so the constructor must pass any arguments to the SimObject class. The constructor sets the SimObject parameters, as well as its signature in terms of states, input ports and output ports. In this case, the input ports are PosA and PosB, both of size 3. The output ports are ForceA and ForceB, both of size 3. The parameters Stiffness and RelaxedLength are set from the FhSim configuration file. Also note that the function CalcOutput is registered as a CommonComputation-function, which means that it will be calculated at most one time each time-step.
As the linear spring does not contain any states, no integration is needed, and no states are defined. The OdeFcn-function must still be defined, and it is defined as an empty function:
//Function for setting up the state space model.
void CLinearSpring::OdeFcn(const double dT, const double* const adX, double* const adXDot, const bool bIsMajorTimeStep)
{
// Does nothing, as the object contains no states.
}
The value of the output ports are calculated in the CalcOutput function. This can be implemented as:
void CLinearSpring::CalcOutput(const double dT, const double* const adX) {
double dL = 0;
for(int i = 0;i < 3;i++) {
}
if (dL <= 0.0) {
for (int i = 0; i < 3; i++) {
}
} else {
dL = sqrt(dL);
double dDeltaL = dL - m_dRelaxedLength;
for (int i = 0; i < 3; i++) {
m_adOutForceA[i] = -m_dStiffness * dDeltaL*adDeltaPos[i] / dL;
}
}
}
The constructor associates two functions with the output ports, namely ForceA and ForceB. These functions each call the common computation CalcOutput and then uses the results from this to set the output ports:
const double* CLinearSpring::ForceA( const double dT, const double* const adX ) {
}
const double* CLinearSpring::ForceB( const double dT, const double* const adX ) {
}
To include 3D-Visualization, RenderInit() and RenderUpdate must be defined. e.g. as:
#ifdef FH_VISUALIZATION
void CLinearSpring::RenderInit(Ogre::Root* const pOgreRoot, ISimObjectCreator* const pCreator) {
auto scenemgr = pOgreRoot->getSceneManager("main");
m_pLines = new C3DLine(scenemgr, Ogre::RenderOperation::OT_LINE_LIST,2);
}
void CLinearSpring::RenderUpdate(const double dT, const double *const adX) {
const double* const adPos1In = m_pInPosA->GetPortValue(dT, adX);
const double* const adPos2In = m_pInPosB->GetPortValue(dT, adX);
// Assumes just values have changed, use 'setPoint' instead of 'addPoint'
m_pLines->Update();
}
#endif
## Mass in 3D (translations only)
#### Discretization
The mass object is a point mass with translations only. This means it has 6 states; three positions ($p_x$,$p_y$,$p_z$) and three velocities ($v_x$,$v_y$,$v_z$). It will have one input port, which is the forces acting on it ($F_x$,$F_y$,$F_z$). The output will correspond to the six states. When taking gravity into account, its dynamics can be written as
where g is the acceleration of gravity.
#### Implementation
The constructor can be implemented as
#include "CMass.h"
CMass::CMass(std::string sSimObjectName, ISimObjectCreator* pCreator):SimObject(sSimObjectName) {
//Input ports.
// Output ports.
// States
m_IStatePos = pCreator->AddState("Pos", 3);
m_IStateVel = pCreator->AddState("Vel", 3);
// Parameters
pCreator->GetDoubleParam("Mass", &m_dMass);
pCreator->GetDoubleParam("g", &m_dg,0.0);
if(m_dMass <= 0)
pCreator->ReportParameterError("Mass", "Must be a real number greater than zero.");
#ifdef FH_VISUALIZATION
pCreator->GetStringParam("Material", m_sMaterial, "Simple/Black");
pCreator->GetStringParam("Mesh", m_sMeshName, "fhSphere.mesh");
pCreator->GetDoubleParam("Scale",&m_dScale, 1.0);
#endif
}
In contrast to the linear spring object, the mass object contains six states. The OdeFcn must therefore calculate the time derivatives of the states:
void CMass::OdeFcn(const double dT, const double* const adX, double* const adXDot, const bool bIsMajorTimeStep) {
for (int i = 0; i<3 ; i++) {
adXDot[m_IStatePos + i] = adX[m_IStateVel + i];
adXDot[m_IStateVel + i] = adForceIn[i] / m_dMass ;
// Adding acceleration of gravity
if (i==2)
adXDot[m_IStateVel + i] += -m_dg;
}
}
The output ports (position and velocity) are returned by the functions (Position and Velocity):
const double* CMass::Position( const double dT, const double* const adX ) {
return adX + m_IStatePos;
}
const double* CMass::Velocity( const double dT, const double* const adX ) {
return adX + m_IStateVel;
}
To add visualization, the functions RenderInit and RenderUpdate must be implemented. Note that all reference to Ogre must only be compiled when FH_VISUALIZATION is defined. This enables the same source code to be compiled completely without visualization, presumably giving a smaller memory footprint, better use of the different computer caches and a faster simulation. The rendering functions:
#ifdef FH_VISUALIZATION
void CMass::RenderInit(Ogre::Root* const pOgreRoot, ISimObjectCreator* const pCreator) {
m_pSceneMgr = pOgreRoot->getSceneManager("main");
m_pRenderNode = m_pSceneMgr->getRootSceneNode()->createChildSceneNode( m_SimObjectName + "FollowNode" );
m_pRenderEntity = m_pSceneMgr->createEntity( m_SimObjectName + "Entity", m_sMeshName);
m_pRenderEntity->setMaterialName(m_sMaterial);
m_pRenderNode->attachObject( m_pRenderEntity );
m_pRenderNode->scale(m_dScale,m_dScale,m_dScale);
}
void CMass::RenderUpdate(const double T, const double* const X) {
m_pRenderNode->setPosition(Ogre::Vector3(X[m_IStatePos + 0],X[m_IStatePos + 1],X[m_IStatePos + 2]));
}
#endif
## Defining the simulation in an input file
To simulate the two SimObjects connected to each other, the input file MassLinearSpring.xml is written:
<Contents>
<OBJECTS>
<Lib
LibName="FhsimTutorialLibrary"
SimObject="Cable/LinearSpring"
Name="S"
Stiffness="100"
RelaxedLength="10"
/>
<Lib
LibName="FhsimTutorialLibrary"
SimObject="Body/Mass"
Name="B"
Scale="1.0"
Mass="1.0"
g="-9.81"
Material="Simple/Red"
/>
</OBJECTS>
<INTERCONNECTIONS>
<Connection
S.PosB="B.Pos"
B.Force="S.ForceB"
S.PosA="0,0,-10"
/>
</INTERCONNECTIONS>
<INITIALIZATION>
<InitialCondition
B.Pos="0,0,-5"
B.Vel="0,0,0"
/>
</INITIALIZATION>
<INTEGRATION>
<Engine
IntegratorMethod="2"
NumCores="1"
TOutput="0, 0:0.1:10, 100"
LogStates ="1"
stepsize ="0"
HMax="0.002"
HMin="0.0000001"
AbsTol="1e-3" RelTol="1e-3"
When compiling the project with visualization enabled and running FhRtVis.exe with MassLinearSpring.xml as input, the visualization should look something like the figure below.
|
{}
|
## Lecture 1: Course Overview and Problem-Solving Techniques
August 22, 2017
• Study: Ex 1.1
### Objectives
• (Continuing objective throughout PHYS 211, PHYS 212, and your entire life.) Write solutions to problems with sufficient detail (i.e., show your work) so that other people can understand your approach.
• Convert from one set of units to a different set of units.
• Check answers to problems to see if they have the correct dimensions.
• Use ratios to solve quantitative problems. Specifically, use ratios and Kepler's Third Law to relate the periods and semi-major axes of objects rotating around a common planet or star.
### Homework
• Wednesday's Assigned Problems: A80, A82, A84, A85; CH 1: 27, 43
Notes: For CH 1 #43, you can assume that the surface area of the US is around 10 million square km or $1.0 \times 10^{13}\,\text{m$^2$}$. Also, FYI, there are about 300 million people in the US.
• Monday's Hand-In Problems: A4, A81, A83, A86, A87; CH 1: 26; CH 2: 14, 20, 24, 46
### Question to ponder
When you think about it, you are just borrowing those atoms for about a century or so, but they have been around for 15 billion years (and not all of that time on the planet Earth). I like to imagine a movie that could somehow follow the entire 15 billion year existence of one of these atoms -- sped up, of course. (Are there any movie majors in this course? This would be really cool to do!)
### Pre-Class Entertainment
• Ana Ng, by They Might be Giants
• Ashes to Ashes, by David Bowie
• Awkward, by San Cisco
• Ain't No Sunshine, by Bill Withers
• Ain't No Rest for the Wicked, by Cage the Elephant
• All I Wanna Do, by Sheryl Crow
|
{}
|
# Math Help - Sets and Elements
1. ## Sets and Elements
Q: Show that the set
A = {±1 ± 2 ± 3 ± ... ± 2006}
contains an even number of elements.
I have no idea where to start or how to do this
2. Originally Posted by unstopabl3
Q: Show that the set
A = {±1 ± 2 ± 3 ± ... ± 2006}
contains an even number of elements.
If one adds two even numbers does one get an even number?
Is the number of elements in $\{1,2,3,\cdots,2006\}$ even?
3. it doesn't matter whether the number of elements in (1...2006) is even (the answer is yes, because 2006 is an even number). The point is that you have the negative numbers as well. So the number of elements in the set is double the number in (1....2006), so it must be even.
Step by step
that is just the positive integers, and the negative integers (up to 2006).
ie, every positive integer and its negative partner
ie, every element can be paired with another one
so there must be an even number
Edit Posted while plato was typing and not intended to disrespect his answer
4. Thanks for the replies, but how would I show this in mathematical terms or a simple statement?
5. Originally Posted by unstopabl3
Thanks for the replies, but how would I show this in mathematical terms or a simple statement?
6. Well "Yes" the number of elements are even ...
7. Originally Posted by unstopabl3
Well "Yes" the number of elements are even ...
That is not what I meant. How many elements are there in the set?
8. 2006 x 2 = 4012 , which is an even number ???
9. Correct! Now you have done the question.
10. Thanks, both of you!
|
{}
|
<meta http-equiv="refresh" content="1; url=/nojavascript/">
You are reading an older version of this FlexBook® textbook: CK-12 Algebra - Basic Go to the latest version.
# 2.4: Multiplication of Rational Numbers
Difficulty Level: At Grade Created by: CK-12
When you began learning how to multiply whole numbers, you replaced repeated addition with the multiplication sign (×)$(\times)$. For example,
6+6+6+6+6=5×6=30
Multiplying rational numbers is performed the same way. We will start with the Multiplication Property of –1.
The Multiplication Property of –1: For any real numbers a,(1)×a=a$a, (-1) \times a = -a$.
This can be summarized by saying, "A number times a negative is the opposite of the number."
Example 1: Evaluate 19,876$-1 \cdot 9,876$.
Solution: Using the Multiplication Property of 1$-1$: 19,876=9,876$\ -1 \cdot 9,876 = -9,876$.
This property can also be used when the values are negative, as shown in Example 2.
Example 2: Evaluate 1322$-1 \cdot -322$.
Solution: Using the Multiplication Property of 1$-1$: 1322=322$\ -1 \cdot -322 = 322$.
A basic algebraic property is the Multiplicative Identity. Similar to the Additive Identity, this property states that any value multiplied by 1 will result in the original value.
The Multiplicative Identity Property: For any real numbers a, (1)×a=a$a, \ (1) \times a = a$.
A third property of multiplication is the Multiplication Property of Zero. This property states that any value multiplied by zero will result in zero.
The Zero Property of Multiplication: For any real numbers a, (0)×a=0$a, \ (0) \times a = 0$.
## Multiplying Rational Numbers
You’ve decided to make cookies for a party. The recipe you’ve chosen makes 6 dozen cookies, but you only need 2 dozen. How do you reduce the recipe?
In this case, you should not use subtraction to find the new values. Subtraction means to make less by taking away. You haven’t made any cookies; therefore, you cannot take any away. Instead, you need to make 26$\frac{2}{6}$ or 13$\frac{1}{3}$ of the original recipe. This process involves multiplying fractions.
For any real numbers a,b,c,$a, b, c,$ and d$d$, where b0$\ b \neq 0$ and d0$\ d \neq 0$,
abcd=acbd
Example 3: The original cookie recipe calls for 8 cups flour. How much is needed for the reduced recipe?
Solution: Begin by writing the multiplication situation. 813$8 \cdot \frac{1}{3}$. You need to rewrite this product in the form of the property above. In order to perform this multiplication, you need to rewrite 8 as the fraction 81$\frac{8}{1}$.
8×13=81×13=8113=83=223
You will need 2 23$2 \ \frac{2}{3}$ cups flour.
Multiplication of fractions can also be shown visually. For example, to multiply 1325$\frac{1}{3} \cdot \frac{2}{5}$, draw one model to represent the first fraction and a second model to represent the second fraction.
By placing one model (divided in thirds horizontally) on top of the other (divided in fifths vertically), you divide one whole rectangle into bd$bd$ smaller parts. Shade ac$ac$ smaller regions.
The product of the two fractions is the shaded regionstotal regions.$\frac{shaded \ regions}{total \ regions}.$
1325=215
Example 4: Simplify 3745.$\frac{3}{7} \cdot \frac{4}{5}.$
Solution: By drawing visual representations, you can see that
3745=1235
## Multiplication Properties
Properties that hold true for addition such as the Associative Property and Commutative Property also hold true for multiplication. They are summarized below.
The Associative Property of Multiplication: For any real numbers a, b,$a, \ b,$ and c,$c,$
(ab)c=a(bc)
The Commutative Property of Multiplication: For any real numbers a$a$ and b,$b,$
a(b)=b(a)
The Same Sign Multiplication Rule: The product of two positive or two negative numbers is positive.
The Different Sign Multiplication Rule: The product of a positive number and a negative number is a negative number.
## Solving Real-World Problems Using Multiplication
Example 5: Anne has a bar of chocolate and she offers Bill a piece. Bill quickly breaks off 14$\frac{1}{4}$ of the bar and eats it. Another friend, Cindy, takes 13$\frac{1}{3}$ of what was left. Anne splits the remaining candy bar into two equal pieces, which she shares with a third friend, Dora. How much of the candy bar does each person get?
Solution: Think of the bar as one whole.
114=34$1- \frac{1}{4} = \frac{3}{4}$. This is the amount remaining after Bill takes his piece.
13×34=14$\frac{1}{3} \times \frac{3}{4} = \frac{1}{4}$. This is the fraction Cindy receives.
3414=24=12$\frac{3}{4} - \frac{1}{4} = \frac{2}{4} = \frac{1}{2}$. This is the amount remaining after Cindy takes her piece.
Anne divides the remaining bar into two equal pieces. Every person receives 14$\frac{1}{4}$ of the bar.
Example 6: Doris’s truck gets 1023$10 \frac{2}{3}$ miles per gallon. Her tank is empty so she puts in 512$5 \frac{1}{2}$ gallons of gas.
How far can she travel?
Solution: Begin by writing each mixed number as an improper fraction.
10 \frac{2}{3} = \frac{32}{3} && 5 \frac{1}{2} = \frac{11}{2}
Now multiply the two values together.
323112=3526=5846=5823
Doris can travel 58 23$58 \ \frac{2}{3}$ miles on 5.5 gallons of gas.
## Practice Set
Sample explanations for some of the practice exercises below are available by viewing the following video. Note that there is not always a match between the number of the practice exercise in the video and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both. CK-12 Basic Algebra: Multiplication of Rational Numbers (8:56)
Multiply the following rational numbers.
1. 1234$\frac{1}{2} \cdot \frac{3}{4}$
2. 7.852.3$-7.85 \cdot -2.3$
3. 2559$\frac{2}{5} \cdot \frac{5}{9}$
4. 132725$\frac{1}{3} \cdot \frac{2}{7} \cdot \frac{2}{5}$
5. 4.53$4.5 \cdot -3$
6. 12233445$\frac{1}{2} \cdot \frac{2}{3} \cdot \frac{3}{4} \cdot \frac{4}{5}$
7. 512×910$\frac{5}{12} \times \frac{9}{10}$
8. 2750$\frac{27}{5} \cdot 0$
9. 23×14$\frac{2}{3} \times \frac{1}{4}$
10. 11.1(4.1)$-11.1 (4.1)$
11. 34×13$\frac{3}{4} \times \frac{1}{3}$
12. 1511×97$\frac{15}{11} \times \frac{9}{7}$
13. 273.5$\frac{2}{7} \cdot -3.5$
14. 113×111$\frac{1}{13} \times \frac{1}{11}$
15. 727×914$\frac{7}{27} \times \frac{9}{14}$
16. (35)2$\left (\frac{3}{5} \right )^2$
17. 111×2221×710$\frac{1}{11} \times \frac{22}{21} \times \frac{7}{10}$
18. 5.750$5.75 \cdot 0$
Multiply the following by negative one.
1. 79.5
2. π$\pi$
3. (x+1)$(x + 1)$
4. |x|$|x|$
5. 25
6. –105
7. x2$x^2$
8. (3+x)$(3 + x)$
9. (3x)$(3 - x)$
In 28 – 30, state the property that applies to each of the following situations.
1. A gardener is planting vegetables for the coming growing season. He wishes to plant potatoes and has a choice of a single 8 by 7 meter plot, or two smaller plots of 3 by 7 meters and 5 by 7 meters. Which option gives him the largest area for his potatoes?
2. Andrew is counting his money. He puts all his money into $10 piles. He has one pile. How much money does Andrew have? 3. Nadia and Peter are raising money by washing cars. Nadia is charging$3 per car, and she washes five cars in the first morning. Peter charges \$5 per car (including a wax). In the first morning, he washes and waxes three cars. Who has raised the most money?
Mixed Review
1. Compare these rational numbers: 1627$\frac{16}{27}$ and 23$\frac{2}{3}$.
2. Define rational numbers.
3. Give an example of a proper fraction. How is this different from an improper fraction?
4. Which property is being applied? 16(14)=16+14=30$16 - (-14) = 16 + 14 = 30$
5. Simplify 1112+29$11 \frac{1}{2} + \frac{2}{9}$.
## Quick Quiz
1. Order from least to greatest: (56, 2326, 3132, 314)$\left (\frac{5}{6}, \ \frac{23}{26}, \ \frac{31}{32}, \ \frac{3}{14} \right )$.
2. Simplify 59×274.$\frac{5}{9} \times \frac{27}{4}.$
3. Simplify |5+11||937|$|-5 + 11| - |9 - 37|$.
4. Add 215+78.$\frac{21}{5} + \frac{7}{8}.$
8 , 9
## Date Created:
Feb 22, 2012
Dec 11, 2014
Files can only be attached to the latest version of None
|
{}
|
# Cogent Engineering
Volume 7, 2020 - Issue 1
69,277
Views
27
CrossRef citations to date
0
Altmetric
CIVIL & ENVIRONMENTAL ENGINEERING
# The design for wastewater treatment plant (WWTP) with GPS X modelling
| (Reviewing editor)
Article: 1723782
Accepted 19 Jan 2020
Published online: 06 Feb 2020
Abstract
Wastewater treatment is a process, which is being done on the wastewater to change its quality for drinking or other suitable purposes. Wastewater treatment takes place in wastewater treatment plants, which should be designed under different circumstances. The criteria are being considered in this design for wastewater treatment plant (WWTP) Al-Hay. Moreover, the characteristics of physical, chemical and biological wastewater also are described. Based on the population of Al-Hay city, the project is undertaken to design a wastewater treatment plant. The girt chamber, equalization basin, oil and grease removal, aeration tank and secondary settling tank have been designed, and then the values for mean cell residence time, volume of aeration tank, hydraulic retention time, F/M ratio, return sludge flow rate, sludge production and oxygen requirement have been calculated. Modelling using GPS X also has been done on this data. It is exhibited a typical diagram of WWTP staring with influent flow, aeration tank and settling (clarifier) tank. The simulation time is also illustrated. With increasing the time, the parameters such as TSS and solids are typically enhanced. This is as indicator to improve the fit of the model and the actual data for the secondary effluent TSS. The research shows the treatment process design of Al-Hay wastewater treatment plant (WWTP). The paper also described the equations process design for WWTP. Sludge age (θc) has been calculated and associated to the observed yield (Yobs). There is correlation between sludge age and the mixed liquor suspended solid (MLSS). The value of the observed yield has been noticed, with values ranging from 0.2 to 0.6 kgVSS/kg(BOD5). The sludge retention time is equal to 27.7 day and the sludge produce is 3339.18 Kg/day. These results are related to biological tank for Al-hay WWTP is worked during with high efficiency.
## PUBLIC INTEREST STATEMENT
The aim of our paper is to verify the treatment process design of Al-Hay wastewater treatment plant (WWTP), taking into account some characteristics occur in that city. Designing a WWTP depends on the characteristics of the wastewater so the designing process should be analysed carefully because even a small mistake can be fatal. Modelling using GPS X also has been done on this data. The paper can be beneficial for public area especially in Iraq. Design wastewater treatment plant can reduce the amount of waste that is usually released into the environment. By doing so, the reduction in the health risks associated with environment pollution thus improving environment’s health. Wastewater treatment is essential to remove the suspended solids as much as before the effluent is discharged back to the environment. Design WWTP is utilised to purify contaminated substances such as solid, liquid and semi-solids. The article has the potential impact in wider level in which designing a WWTP with immunizing the mistakes in the results.
## Conflict of Interest
I certify that they have no affiliations with or involvement in any organization or entity with any financial interest, or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this article.
## 1. Introduction
The purpose of wastewater treatment is to remove or reduce contaminants in water that impose threats to human and environment if discharged to surface and/or ground waters without proper treatment. While developed countries are continuing working on setting more efficient treatment processes in the WWTP or establishing new technologies to meet the increasing demand for water, the developing countries are still straggling to establish the required infrastructure for treatment. Although the damage from lack of such infrastructure is obvious, the public concern still limited due to lack of governmental programs that explain the environmental problems for public and the influence of crisis and political conflicts in these countries (Avijit, Md, & Mhia, 2018).
Current outreach programs in these countries are still limited and not operative. Increases in water pollution, concomitant with water scarcity may limit economic development and lead to the commonness of poverty, hunger and disease (Steve, Jin, & Arnold, 2016). Problem of improper wastewater treatment is acute in countries suffering crisis like Iraq. Decades of wars and sanctions in Iraq combined with limited environmental awareness in both public and governmental representatives, have highly contributed to the destruction of Iraq’s national water system. According to the United Nations report, six million people have no access to clean water and more than 500,000 Iraqi children access their water from a river or creek and that over 200,000 access their water from open wells. In the first six months of 2010, there were over 360,000 diarrhoea cases as a result of polluted drinking water and a lack of hygiene awareness among local communities, particularly vulnerable groups such as women and children. The report showed that “Every day at least 250,000 tonnes of raw sewage is pumped into the Tigris River threatening unprotected water sources and the entire water distribution system”. Currently, the lack of permanent governmental programs for environment protection, lack of expenses, unavailability of professionals, engineers, and skilled operators, unprofessional design and treatment of most of existing plants, and lack of public awareness about the danger of direct discharge of wastewater to water courses had led to serious deficiencies in operations in the country’s wastewater treatment plants. Most of these plants were not designed based on a proper local data and were constructed by inexperienced companies. In addition to the improper design and implementation, the mechanical and electrical equipment at these plants have suffered from lack of spare parts and no preventative maintenance due to lack of expenses and trained operators. In many cases, untreated raw sewage is directly discharged into Rivers, endangering the health of residents and downstream populations.
Conventional wastewater treatment consists of a combination of physical and biological processes to remove solids, organic matter and nutrients from wastewater. General terms used to describe different degrees of treatment, in order of increasing treatment level, are preliminary, primary, secondary and tertiary or advanced wastewater treatment (Janssen, Meinema, & van der Roest, 2002). However, the key treatment process in the conventional sewage treatment is the secondary treatment process, which consists of biological treatment by utilizing mixed types of microorganisms in a controlled environment. Several aerobic and anaerobic biological processes are used for secondary treatment, including activated sludge process, total oxidation process, contact stabilization, aerated lagoons, waste stabilization ponds, trickling filters and anaerobic treatment. Activated sludge process is the most widely applied process comparable to other biological process for its facility design is well known as well as having characterized operation parameters (Culp, 1978).
This research is to explain the physical, chemical and biological properties of wastewater. Secondly, it is to design some parts of wastewater treatment plant in Al-Hay station. Therefore, the study of wastewater properties are important regarding to the biological and chemical wastewater treatment processes, which include (aerobic treatment such as (oxidation ponds and activated sludge) and anaerobic treatment. Also, the chemical wastewater treatment processes include chemical precipitation (coagulation and flocculation), ion exchange, neutralization and adsorption. Thus, studying the properties of wastewater has to be investigated because all important in terms of plant layout, plant design, plant sizing and plant location. In this study, it describes the process of preliminary design; secondary design is done based on extended aeration activated sludge system. This system is commonly used is Iraq because it is known technology with less expensive sludge treatment requirement and it can satisfy the standards criteria for effluent disposal to surface water. Finally, chlorination unit is a common method for pathogen reduction. Moreover, modelling and simulation of the process of design Al-Hay station is performed by GPS X program with some parameters.
## 2. The characteristics of wastewater
The first important information should have for the design of wastewater treatment system is the strength and characteristics of the wastewater to be treated. The strength of wastewater is normally expressed in terms of pollution load, which is determined from the concentrations of significant physical, chemical and biological contents of the wastewater (Davis and Cornwell, 2008).
The characteristics or the quality of wastewater is expressed in terms of its physical, chemical and biological characteristics on the basis of the parameters given in the table below.
Characteristics of wastewater depend on the quality of water used by the community, conservation practice and culture of population, type of industries present and treatment given by industries and their wastewater. Many of the above parameters are interrelated. For example, the concentration of dissolved gases and microbial activities in wastewater are affected by temperature.
One of the most important physical characteristics of wastewater is its content of solids, which consists of floating matter, sediment, suspended material and soluble matter. Other physical properties are temperature, colour and degree of turbidity. In addition, for chemical characteristics, it includes organic materials and inorganic substances.
Organic substances consist of a mixture of carbon, hydrogen, oxygen, and sometimes nitrogen, as well as other important elements such as sulphur, phosphorus and iron. Regarding to organic substances, many inorganic indicators of wastewater are important for the development and control of wastewater quality standards. Concentrations of inorganic compounds are increased due to the natural evaporation process, which disposes of some of the water, leaving the inorganic materials in the wastewater (Hubble, Roth and Clark [HRC] INC, 2019).
## 3. Wastewater treatment processes
Sewage is generated from multiple sources. It is a mixture of toilet water, washing water, ice, bathing, washing clothes, and all cleaning work at home, institutions, streets and rainwater. It consists of more than 95 percent water, only 5% of pollutants of different types, nature and quantity.
This diversity in the sources it generates leads to the multiplicity of types of contaminants it contains. Wastewater contains organic matter, trace microorganisms, salts, minerals, ammonia, pesticide residues, residues of pharmaceutical drugs and their metabolites, and highly toxic chemical pollutants for disruption of the endocrine system. Among all this diversity of pollutants, heavy metals and pathogens can be considered the most serious to public health and the environment. Nitrate and phosphorus are an important contaminant if the water after treatment is thrown into the surface and sea water media, leading to nutrient contamination, which in turn disrupts water quality and affects aquatic organisms by significantly reducing the dissolved oxygen content. They are acceptable substances, but nutrients if the goal is to use treated water in agricultural irrigation. The contamination of water is considered a crucial issue, thus the achievement of water purification is needed to reduce nutrient contamination.
In Karkush, Abdul Kareem, and Jasim (2018) studys’, it is shown the lateral load-bearing capacity of a single pile and piles group calculations that it was installed in contaminated claying soil. Two-line slopes intersection method and a proposed model are utilized in pile foundation calculations. The results showed that the increasing of the number of loading cycles is presented by the decreasing of the ultimate lateral capacity. In addition, the concentration of contaminant in the soil increased with the decreasing of the ultimate lateral capacity (Karkush et al., 2018).
Kim, Jung, and Han (2019) study showed that water purification can be achieved by using ABFT system (autotrophic biofloc technology). The power of ABFTsystem at the remaining stages (seedling to adult farming) was demonstrated for industrial-level implementation. An excellent water purification effect and about 97% reduction of water conservation were presented two by microalgae. The wastewater from the ABFT system can be reused by use for the growth of different plants (Kim et al., 2019).
In Feyzbakhsh, Telvari, and Lork (2017)study, it is presented the delay in some project in Tehran City regarding to the three factors which are quality, cost and time. As some circumstances happed in the last decades such as climate change, enhancement of population, decrease of raining, as well as increasing in water harvesting from groundwater, thus the importance of water is intensified as the project of water and wastewater treatment. Based on this study, it is verified that some factors contributed in the delay of wastewater treatment projects such as uncertainty and buying project site and failure in paying to contractor and employers (Feyzbakhsh et al., 2017).
In Parsa, Khajouei, Masigol, Hasheminejad, and Moheb (2018) study’, it is investigated that a new technique is used to reduce the electrical conductivity (EC) of composting leachate-polluted water using electrodialysis (ED) process. In this experimental study, the removal of COD is shown a reduction. The decreasing in COD removal is improved by the increasing of applied voltage, decreasing in feed concentration and reduction of EC. This study showed an acceptable ED method to reduce salt and organic content (Parsa et al., 2018).
Wastewater treatment is designed to improve water quality to meet the specific safety and safety requirements of the wastewater after treatment. Different treatment processes reduce the concentration of pollutants in water. It reduces the content of suspended solids, whose molecules can contaminate the rivers and impede the movement of water in the channels and pipes after deposition. It also reduces the content of biodegradable organic matter, measured by the Biological Oxygen Demand BOD (BOD) index (Ronan et al., 2019).
Treatment processes can also remove or neutralize many industrial pollutants and toxic chemicals. In principle, processes of industrial waste and toxic chemicals treatment should be carried out in the same industrial establishments, and should not be dumped in sewage sewers without treatment, and without complying with the regulations on the specifications of industrial effluents allowed in sewerage.
In the area of wastewater management and treatment, we talk about three main levels of treatment, each of which involves a range of processes and targets a specific type of contaminant present in the water. There are those who talk about two additional processes, one at the beginning and one at the end, and the number of treatments becomes five.
For the treatment processes, it starts with preliminary treatment units. This phase involves the removal of large solid objects through the use of nets to capture and remove them, as well as the deposition of sand and gravel by passing water when entering the station through a hole where heavy solids fall before proceeding to the next stage. This stage is of great importance in terms of protecting the plant’s equipment from faults, especially pipes and pumps (Rungnapha, Hardy, Huub, & Karel, 2015).
The initial and primary treatment process removes about 25% of organic matter load and theoretically removes all inorganic solids and for water containing industrial effluents. It may be necessary to balance flows and adjust pH value or add chemicals. It includes unit operations such as: screen, grit chamber and equalization basin.
Additionally, for the primary treatment, it is including primary sedimentation the purpose of this unit is to remove the settle able organic solids. Normally a primary sedimentation will remove 50–70 percent total suspended solids.
Primary sedimentation (or clarification) is achieved in large basins under relatively quiescent conditions. The settled solids are collected by mechanical scrapers into hopper, from which they are pumped to sludge—processing area oil, grease and other floating materials are skimmed from the surface. The effluent is discharged over weirs into a collection trough.
There are some types of clarifiers. Also, the common types of horizontal flow clarifiers are rectangular, square, or circular. On the other hand, the types of include surface are tube settler and parallel plate settler. In general, the design of most of the clarifiers falls into three categories: (1) Horizontal flow, (2) solids contact and (3) inclined surface.
The main objective of this phase of treatment is to obtain a homogeneous liquid that can be biologically processed at later stage, on the one hand, and one the other hand, obtainable and around to be treated separately. Sedimentation ponds are usually equipped with mechanical equipment that helps to collect the mud in the pelvic floor, and from there it is pumped to the treatment in subsequent stages. As well as to remove the floating material and dispose of the treated water stream. As well as, the mechanical equipment is to transfer homogeneous water to the next stages of treatment. As this initial stage, some chemicals are used to help the materials float on the surface of the water, as well as to help the solids settle on the bottom. The mulch is derived from primary treatment, the primary mulch. This process can reduce the BOD index, the level of contamination with biodegradable organic matter by more than 20–30 percent, and reduce the total TSS by more than 50–60%. Primary treatment is the first phase of treatment, followed by other processes. It is divided into: floating basins and deposition basins.
After that, secondary treatment is required. The purpose of secondary treatment is to remove the soluble organics that escape the primary treatment and to provide further removal of suspended solids. Secondary treatment may remove than 85% of the organics. Additionally, rand suspended solids, it does not remove significant amount of nitrogen, phosphor heavy metals, no degradable organics, bacteria and viruses. These pollutants may require further removal (advanced one) (Soomaree, 2015).
This treatment can remove more than 90% of the organic matter found in wastewater through bioremediation processes. It also removes dissolved organic matter, which evaporates from the initial treatment phase. The process of biological treatment is carried out by groups of microorganisms that consume organic matter as their food and turn it into the end products of metabolism, carbon dioxide, water and energy (Karia and Christian, 2006). This energy is necessary for germ growth and reproduction. Biological treatment is accompanied by an effective ventilation process that provides the aquarium with large quantities of air (oxygen) to facilitate the dissolution of the organic matter. After biological treatment, the water is pumped into secondary sedimentation basins, where the remaining solids and living microorganisms descend to the bottom. They are treated separately from the liquid that continue to be transferred to sterilization (Metcalf & Eddy, 2003).
This phase is divided into five stages: ventilation and mixing, sedimentation basins, activated sludge, filtration and disinfection.
Finally, advanced treatment might be used in some treatment. It is an additional treatment process, such as filtration, carbon adsorption, and chemical precipitation of phosphorus, to remove those constituents that are not adequately removed in the secondary treatment plant. These include nitrogen, phosphorus, and other soluble organic and inorganic compounds (Anjum, Al-Makishah, & Barakat, 2016).
## 4. Methodology
In the methodology part, for treatment processes, it consists of preliminary treatment units. Pre-treatment means are designed for removing large suspended solids or minimizing their size by fragmentation these solid materials may be of wood, doth, paper, plastic … .etc. For removing of heavy inorganic solids such as sand and gravel as well as metal and glass. These materials are called Grit (sand and any coarse material). Finally, it is designed to remove excess amounts of grease or oils. The pre-treatment consists of screens, grit chamber and equalization basin.
The general purpose of screens is to remove large objects such as rags, paper, plastics, metals, and the like. These units are used to store and remove large solid materials, hair, fibres, cloth, paper and coarse materials and prevent their entry with sewage to the subsequent treatment stages where they are placed at the beginning of the treatment plants at the entrance of the pumping station to protect the mechanical installations. Usually fine screens are preceded be a preliminary screening for the purpose of protection. Screens may also be classified into manually and mechanically cleaned.
For the grit chamber, it is necessary to remove the grits and other materials that are heavier than organic matter, in order to protect moving mechanical equipment and pumps from unnecessary wear and abrasion. There are different types of grit chambers, rectangular horizontal flow type, detritus tanks, aerated grit chamber, and equalization basins.
The Equalization (EQ) Basins are designed to provide consistent influent flow to downstream processes by retaining high flow fluctuations. Due to the additional retention time, aeration and mixing is required in equalization basins to prevent the raw wastewater from becoming septic and to maintain solids in suspension.
After that, biological Treatment (secondary treatment) is needed. Biological waste treatment involves bringing the active microbial growth in contact with wastewater so that they can consume the impurities as food. A great variety of microorganisms come into play that include bacteria, protozoa, rotifers, fungi, algae and so forth. These organisms in the presence of oxygen convert the biodegradable organics into carbon dioxide, water, more cell material, and other inert products. Biological treatment process can be achieved by two types of growth: suspended growth biological treatment and attached growth biological treatment.
In the methodology section, following steps are involved in the design of wastewater treatment in Al-Hay city as they are considered in the following steps. Firstly, the assessment of water quality is needed. It is important to select of treatment process, so selection is needed. Design of the treatment system is chosen. Some of water quality parameters are investigated too. This diagram expressed the methodology section for WWTP in Al-Hay city is shown in Figure 1. In addition, GPS X modelling has been an important part of carrying out the simulation and testing the design. The GPS X simulation has been done on biological treatment. For calibration of the model, it is needed to evaluate the operation of the plant; it needs to include all physical processes of the scale plant. The GPS X is motivated in a way only for secondary treatment to get results for some physical parameters such as TSS, BOD and COD because our design for Al-hay plant can be specified for a kind of wastewater treatment plant of extended aeration activated sludge system process.
Figure 1. Flow diagram for wastewater treatment process
### 4.1. Introduction
Al-Hay City lies in the south of Iraq some (39) kilometres to the south-east of Kut City being situated in the south of Iraq. The area is located between E (46° 00ʹ 13.5”—46° 04ʹ 01”) and N (32° 08ʹ 28”-32° 13ʹ 03”). The city area inside the municipality boundary is densely occupied especially the central part of the city as shown in the Figure 2.
Due to the master plan of the city the area within the municipality is about (8000), dounum. The city of Al-Hay has an estimated 2014 78,820 capita, and it is expected to grow to 169,984 capita by 2040.
Figure 2. Arial image for Al-Hay city by GoogleTM Earth
### 4.2. Climate
The climate of Iraq has been classified as continental. However, it is modified by the presence of the Arabian Gulf and especially the Mediterranean Sea, making it in the southern part of the country to resemble the Mediterranean type during the winter.
The summer months, from June to September, are completely dry with extremely high temperatures and very low relative humilities. In October and November, temperature drops and showers occur with increasing frequency. Lowest air temperatures are reached from December to February, with minimum temperatures occasionally below zero.
### 4.3. Study period and population forecast
The population figures for Al-Hay District are based on the 2014 population data obtained from the Central Organization of Statistics and Information Technology of the Ministry of Planning and Development. The planning timeframe used in this study is 22 years, through the year 2040. A growth rate of 3.0% per year has been adopted in accordance with the MMPW standard for water and wastewater facilities planning. Based on the above growth rate, an increase in Al-Hay population based on Table 2 down below over the time period of planning. Consequently, the population of Al-Hay District is projected to increase over the next 22 years. The phase I means the populations in which started from 2018 to 22 years later which is should be at 2030. That means the WWTP starts working in 2018 to 2030 with this quality and parameters. The phase II means the design for WWTP will be started from 2030 to 2040, so the plant will be worked with these parameters and the estimated populations are calculated based on the year that starts with. Indeed, the project of the WWTP is designed to work for 22 years starting from the year with is 2018 to 2030 and the second part to the project to still be working is started from 2030 to 2040. Over the time period of planning in which started with 2014 the population is provided corresponding to phase I, which is 2018 going up to phase II which is 2040. Table 2 showed the increasing in the population over the time period of planning in the design.
Table 1. Significant parameters for physical, chemical and biological characteristics
Table 2. Population projections for Al-Hay sub-district
### 4.4. Projected wastewater generation
The projected wastewater generation is determined based on the population and level of service. The per capita wastewater generation rate is presented in Table 3 shown below for Al-Hay. In Table 3, it displays the projected wastewater generation in which starts with the population of 2018 ending to 2030 as phase I and from 2030 to phase II, which is ending up to 2040 as indicated earlier based on the projected design. The gross population is calculated based on the formula Pf = Pi × (1 +0.03) ^m
Table 3. Projected wastewater generation for Al-Hay
Where Pi: the initial population (present) capita.
t: the number of year for future period = 1,2,3 … … .22 years.
Pf: the future population estimated (capita).
For example: Pf = Pi (1 +0.03) m
Pf = 78,820 (1.03)4 = 88,713 … . … . population for 2018, and the population for 2040 is calculated as Pf = 146,630 (1.03)5 = 169,984. The cross daily flow is calculated from the equation which is {Gross daily flow = population × 0.2} (m3/day).
The average flow in sanitary system is sum of gross waste water generation plus infiltration. The infiltration is water-entering sewer from ground through defective connections, pipes, pipes joints and manhole wells.
Let infiltration = 0.1 X Gross daily flow
Qav. = Gross daily flow + infiltration
### 4.5. Peak factors
In the design analysis of wastewater mains, average flows do not represent the flows that the mains must handle. Wastewater mains should be designed to carry the projected peak flows that could reach as high as five times the average daily flows, depending on the population served by the wastewater mains. For purposes of this project, peak factors are based on the Babbitt—Herman formula and have been summarized. In Table 4, it shows the computation of wastewater discharge peak factors for both possibilities when the population is larger or smaller than 80,000.
Table 4. Computation of wastewater discharge peak factors
Factors of minimum flow are calculated based on the following formula (P = population x 103): ${P}_{min}=0.2{P}^{1\phantom{\rule{negativethinmathspace}{0ex}}/\phantom{\rule{negativethinmathspace}{0ex}}6}$
### 4.6. Population forecasting
The city of Al-Hay is undergoing rapid expansion in population as shown in Table 5. The population of 2018 and the estimated population is up to 22 year by the following equation:
Table 5. Show the design flow rate for Al-Hay waste water treatment plant
Gross daily = Population × 0.2 (m3/day) = 17,742.6 m3/day
Infiltration = 0.1 X Gross daily flow = 1774.26 m3/day
Qav = Gross daily flow + infiltration = 19,516.86/24 = 813.2 m3/hour
Qpeak = 2.486 × 813.2 = 2021.62 m3/hour
Pmin = 0.2 P1/6 = 0.2 × (88,713×10−3)1/6 = 0.422 m3/hour
### 4.7. Standards and design criteria
In this section will be describe the standards and criteria adopted in this study for the purpose of developing preliminary designs of the project facilities and establishing the bases for evaluating the various project alternatives.
In this wastewater treatment plant design, the characteristics of effluent of wastewater are adopted based on Iraqi National Standards for Discharge of Treated Municipal Wastewater to surface water courses. Based on these guidelines, the treatment processes must reduce BOD5 by more than 90%, ammonia by more than 80% and total nitrogen by more than 50% (nitrification/denitrification) (Marc et al., 2018). There are three specific treatment processes have been acknowledged as being feasible options for implementation, these are the Conventional Activated Sludge Process, Extended Aeration process and the Waste Stabilization process. Only the first two can meet the established effluent discharge criteria to River Tigris. The first process, mentioned to as the activated sludge/nitrification/denitrification process, is a conventional treatment process used successfully at sites in Iraq and throughout the world for treatment of domestic wastewaters. The second process, the Extended Aeration Activated Sludge system, although, now, commonly used in Iraq, is a well-known technology with less extensive sludge treatment requirements, which makes the capital and operational costs less than those required by the conventional activated sludge process. Waste Stabilization Ponds have the lowest capital and operational costs that can treat the domestic wastewater to the level that complies with the WHO and FAO guidelines for effluent reuse. Because this process is unable to meet the standards, so the extended aeration processes will be adopted for this purpose.
Extended Aeration can satisfy the selection criteria requirements for effluent disposal to surface water courses as it has the highest BOD, TSS and Nitrogen removal efficiency and meets the National Iraqi Standards for Wastewater Discharge to surface water courses, in addition to its ability to reduce phosphates considerably with minor upgrading. In the Figure 3, it displays a flow chart of conventional treatment plant.
Figure 3. A diagrammatic representation of a flow chart of conventional treatment plant. Reversed fromhttp://autozone.2.sisamben.de/process-flow-diagram-for-wastewater-treatment-plant.html
Additionally, the effluent standards for the category—Discharge to Streams, were selected from Iraqi National Standards, Table 1 in (Appendix A) is referred to in Act 25 published in the year 1967 from Iraqi Authorities to regulate the Treated Domestic Wastewater be applied to this project. These standards are to be used to determine the performance requirements of the wastewater treatment plants proposed for the Al-Hay City.
Moreover, in Table 2 in (Appendix A), the Iraqi National Standard for Treated Domestic Wastewater is represented. The most significant parameter limits (maximum concentrations) applicable to the wastewaters from the Study Area are 20 mg/1 for BOD5, 30 mg/1 for TSS, 10 mg/1 for ammonia (NH4-N)), 50 mg/1 for total nitrogen (N), and 100 MPN/100 ml for faecal coliforms and so on. In addition, the main characteristics of wastewater from the study area are those of domestic wastewater. Those prime parameters could be achieved by the conventional method of treatment proposed. Industrial discharges to public sewers have to be controlled by regulations for discharge of industrial wastewaters into the sanitary sewer system that require pretreatment before discharge (such as phosphorus). Chlorides, sulphates and many other inorganic dissolved solids are not considered to be problems. Thus, their concentrations in the effluent depend on the drinking water source.
## 5. Design calculations and GPS × modelling
### 5.1. Design calculation
#### 5.1.1. Preliminary treatment units
Along with treatment system units, we have to design pumping stations, approach channels, the flow equalization basin, which is transporting and providing a uniform flow to succeeding treatment unit and screen units, grit chamber and oil grease removal which is treatment unit in true sense.
##### 5.1.1.1 Design of inlet pumping station
At the inlet stage of the wastewater treatment plant, screw pumps are usually utilized for this purpose. The pumping station or pump house at the treatment plant is normally of RCC and consists of a wet well and a dry well. When the raw wastewater reaches the treatment plant, it is first collected in the wet well or wet sump, and it is then lifted by pumps installed in the dry pit and conveyed to the first unit of the treatment system. Pumps are normally installed in the dry wells, or alternatively, submersible pumps are installed within the wet well itself. There are several significant parameters are considered in the design of pump station, which includes:
1. HRT (Hydraulic retention Time) of waste water in the wet well usually does not exceed 20 min.
2. Screens: These are provided before the influent enters the wet well to screen out the material that may clog the pumps.
3. Stand by pump: At least one extra pump more than the number of pumps required as per design usually used for WWPT design.
Design of pumps of wastewater treatment plant depends on the following flow rates: (in which show the average discharge, peak discharge and minimum discharge for both phases I and II in the projected area). In Table 6, it shows the estimated flowrates for phase I and phase II.
For this design, three pumps are used with one pump extra. Thus, 3 pumps used + 1 stand by pump.
So, the discharge of each pump, we utilize the peak discharge for phase#2 (2040).
Qpump = Qpeak/3 = 77,560.299/3 = 25,853.433 m3/d (1077.226 m3/hour)
The screw diameter estimated about 1300 mm with overall length about 10 m.
To summarize, Al-Hay WWTP required 3 pumps + 1 stand by pump each of Q = 1077 m3/hour delivered as below:
Phase I (2030): 2- pumps +1- stand by pump.
Phase II (future): 1 pump.
##### 5.1.1.2. Design of approach channel
An approach channel usually used in WWTP is simple rectangular open channel. Wastewater collected in the wet well of a pump station is pumped into the approach channel located at a predetermined level normally determined using the gradient of hydraulic flow diagram. The sewage from the approach channel flows by gravity to the succeeding units of treatment plant.
The main function of an approach channel is to dampen the turbulence of the incoming flow to the subsequence units and ensure somewhat a steady and uniform flow after pumping. The approach channel designed with following design criteria:
1. Velocity of flow, v ≥ 0.45 m/s (usually use 1.5 m/s)
2. Length of the channel 2.0–3.0 m
3. For a rectangular section, the depth to width ratio, D: B = 1:1.5 to 1:2
4. The number of units, N = 2 usually.
5. Slope is computed by Manning’s equation.
Providing 2 channels in one unit, so, the maximum flow in each channel is;
Qmax = Qmax @ (2040)/2 = 77,560.299/2 = 38,780.15 m3/d (0.449 m3/s)
Let the velocity of flow 0.80 m/s in the channel
So, the cross section area: A = Qmax ÷ V = 0.449 ÷ 0.75 = 0.5987 m2
Assuming the width to depth ratio, B: D = 1.5: 1
B = 1.5 D
A = 1.5 D × D = 1.5D2
D = (A ÷ 1.5)1/2 = (0.598 ÷ 1.5)1/2 = 0.6314, use D = 0.6 m
B = 1.5 × 0.6 = 0.9 m
Assuming freeboard = 0.30 m, so, the total depth of the tank is:
∴Total depth of channel; h = 0.60 + 0.30 = 0.90 m
Check the velocity of flow with Manning’s equation:
Channel wetted perimeter; P = 2D+B = 2 × 0.60 + 0.9 = 2.10 m
Channel Hydraulic mean radius; R = A/P = 0.54 +2.10 = 0.257 m
Let the slope of channel; S = 1/1000
n = 0.013 (Manning’s coefficient of roughness).
V = $\frac{1}{n}$ × R 2/3 S1/2
V = $\frac{1}{n}$× R 2/3 S 1/2
V = $\frac{1}{0.013}$ 0.2572/3 × 0.00l1/2 = 0.98 m/s > 0.8 m/s as per design requirement. So, assumed velocity of 0.75 m/s and slope of 1 in 1000 are acceptable.
Now, Check the velocity of 1/3 flow depth (one third depth of flow):
1/3 D = 1/3×0.6 = 0.20 m $\to$ A = D×B = 0.2×0.9 = 0.18 m2
P (Wetted perimeter) = 2/3 × D +B = 2/3×0.2 +0.9 = 1.03 m $\to$ R (the hydraulic mean radius) = A/P = 0.175 m
V = $\frac{1}{0.013}$ 0.1752/3 ×0.00l1/2 = 0.76 m/s > 0.45 m/s. Hence, the slope of 1 in 1000 is acceptable.
#### 5.1.1.3. Design of equalization basin
In actual practice, flow of domestic wastewater is never constant but exhibits diurnal and seasonal variations, both in volume and strength. Dampening of flow and loading normally improve the performance of reactors, more particularly the biological reactors. Therefore, when it is required to equalize the strength of wastewater and to provide a uniform flow; equalization tank is employed to the WWTP, usually after the screens and grit chambers. The capacity (volume) of an equalization tank is determined by preparing an inflow mass diagram, which is the plot of cumulative inflow volume of wastewater versus the time of the day. But, with missing of inflow data required to draw inflow mass diagram, the capacity of equalization tank will take 0.25 Qave, so:
Capacity of equalization tank = 0.25 × 37,396.48 = 9349.12 m3
Use 2-rectangular tank with B: L = 2:1
And Total Depth = Water depth+ Free board = 5 m
The surface area of each tank = (9349.12 ÷ 4.5)÷ 2 = 1038.8 m2
∴2B × B = 1038.9 $\to$ B = 22.8 m $\to$ L = 2B = 2×22.8 = 45.6 ×46 m
If rectangular tanks are provided, therefore, provide 2 rectangular tanks with L = 46m and B = 23 m and total depth, D = 5.0 m.
#### 5.1.1.4. Design of screen unit
Screens are the device with clear opening of uniform size to re used to remove floating material and coarse solids from wastewater. It may consist of parallel bars, wires or grating. Mainly solids like sticks, rags, boards and other large objects, which find their way into a wastewater is removed by screenings. The screens are frequently required to be cleaned, as the retained solids (screenings) will tend to increase the head loss across the screens by clogging the screens. In the table 7, it displays the size of clear opening based on the type of screen.
Normally, screens are classified by two methods:
1. According to the method of cleaning: screens are known as hand cleaned or mechanically cleaned.
2. According to the size of clear openings: they are known as coarse, medium or fine screens depending on the size of clear openings between the bars, as under:
Providing 2 channels in one unit, so, the maximum flow in each channel is;
Qmax = 77,560.299 ÷2 = 38,780.15 m3/d (0.449 m3/s)
1. Coarse screen:
The design of screen units is as follows: $\left(Q\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}peak\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}.K\right)/\left(\rho .h.V\right)$
Where: n = no. of opening
K = factor = 1.05
$\sigma$ = bar spacing = 0.1 m
h = depth of water at Q peak
V = velocity through screen = 0.75 m/s
Use 7.0 bars
∴ Screen channel width = B = n. $\sigma$+ (n—1) × bar width = 8 × 0.1 + (7 × 0.008) = 0.86 m $\approx$ 0.9 m
The head loss through the screen:
hL = 0.0729 × (v2-vh2)
Where;
vh = velocity before the screen =0.98 m/s
v = velocity through the screen =1.0 m/s
hL =0.0729 × (l2−0.982) =0.003 m < 0.15 m. Acceptable as head loss is less than the design criteria, 0.15 m.
Assume the screening production = 0.01 m3/1000 m3 wastewater. The criteria are (0.0015–0.015 m3/1000 m3wastewater) for screen size of 10 to 25 mm, respectively.
∴ The quantity of screening = 0.0015 × 38.780 =0.058 m3/d
So, clean can be down manually daily.
Design of perforated plate: provide the length of the plate equal to the width of the chamber which is 0.90 m. Assuming the width of the plate is equal to 0.50 m and the depth of the pocket equal to 0.25 m for collecting screenings. So, the capacity of the screening pocket is: 0.90 × 0.50 × 0.25 = 0.113 m3
The length of screen channel:
Horizontal projected length is 0.90 × cos 45° =0.65 m
Let the length of outlet zone be the length of the perforated plate +0.2 m = 1.0 m Let the length of inlet zone = 0.85 m.
The total length of screen channel—0.65 + 1.0 + 0.85 = 2.5m. Use 2 no. screen units (one unit with two channels) as well as by pass channel with No. of bars are 8.0.
Fine screen:
The design of screen units is as follows: $\left(Q\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}peak\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}.\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}K\right)/\left(\rho .h.V\right)$
Where: n = no. of opening
K = factor = 1.05
$\sigma$ = bar spacing = 0.025 m
h = depth of water at Q peak
V = velocity through screen = 1 m/s $n=\left(0.449\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\ast \phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}1.05\right)/\left(0.025\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\ast \phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}0.60\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\ast \phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}1.0\right)=31.43\approx 31$
Use 30 bars of size 6.0 × 50 mm
∴ Screen channel width = B = n. $\sigma$ + (n—1) × bar width = 30 × 0.025 4+ (30 × 0.006) = 0.93 m say 0.90 m
The head loss through the screen:
hL = 0.0729 × (v2 - vh2)
Where;
Vh = velocity before the screen, which is 0.98 m/s.
v = velocity after the screen, which is 1.0 m/s.
hL = 0.0729 × (l2–0.982) = 0.003 m < 0.15 m acceptable.
The length of screen channel:
Horizontal projected length is:
(h +1.2) × cos 60° = 2.1 × cos 60° = 1.05 m {for belt screen with flat teeth}
Let the length of outlet zone = 1.0 m
Let the length of inlet zone = 0.45 m
The total length of screen channel = 1.05 + 1.0 + 0.45 = 2.5 m
Use 2 no. Mechanical screen units (one units with two channels) as well as by pass channel.
#### 5.1.1.5. Design of grit, oil and grease removal
Grits are composed of sand, small gravel, cinders, broken glass or other heavy solid materials present in wastewater. There are some types of grits chambers; rectangular horizontal flow type and aerated grit chamber are most common and normally used in this field (Kulkarni, 2011). Using the aerated grit chamber will keep the organic solids that will settle down by gravity in suspension by rising air bubbles of aeration system. Removal of grease combined in the aerated grit chamber. Grits chamber are nothing but the sedimentation basins or settling basins, designed mainly to remove heavier or coarse inert and relatively dry suspended solids from the wastewater (Judd, 2015). The design depends on the following criteria, which are shown in Table 8.
## 6. Design criteria
The principal typical data of an aerated desanding tank for a good design, at peak flowrate, are (as per Wastewater Engineering, METCALF & EDDY, Fourth edition)(Janssen et al., 2002):
Use 2 aerated grit chamber, so, the maximum flow in each channel is;
Qmax = 77,560.299/2 = 38,780.15 m3/d =1615.84 m3/hr (26.930 m/min)
The dimension of each chamber is:
Let Depth D = 2 m and the width to the depth ratio W: D ratio of 2:1
∴Width B =2 × 2 = 4 m,
$\mathrm{\forall }$= Qpeak × t = 26.930 × 6 = 161.58 m3 = volume
∴Length L = $\mathrm{\forall }$+BD = 161.58 ÷2×4 = 20.1975 m
Check the maximum surface hydraulic load Cmax = Qmax ÷ (L×B) =1615.84÷ (20.198 × 4) = 20 m3/m2/h OK
Estimation of arriving sand quantity (m3 of dry sand) per 1000 m3 of sewage for each chamber:
Qs = 0.015 m3/103 m3 = 0.015 × 38.780 = 0.582 m3/d
Assume rate of air (m3/min) to supply for oil and grease flotation, of each tank, = 0.5 m3/m/min
∴ The requirement of air = 0.5 × length = 0.5 × 20 = 10 m3/min
### 6.1. Secondary treatment units
#### 6.1.1. Design aeration tanks
The combined process oxidizes a high proportion of the influent organics relative to the NH3-N concentration. In combined carbon oxidation-nitrification processes the ratio of BOD (biological oxygen demand) to TKN is greater than 5, whereas in separate processes the BOD to TKN ratio in the second stage is greater than 1 and less than 3. Total Kjeldahl Nitrogen (TKN) is a parameter used to measure organic nitrogen and ammonia. The TKN content of influent municipal wastewater is characteristically between 35 and 60 mg/L. TKN is a regular parameter that indicated the total concentration of organic nitrogen and ammonia. The ratio of BOD5 to TKN can be used as an indicator for efficiency of removal nitrogen and providing useful information in the rate of biodegradability in WWTP design. The term MLSS means (Mixed liquor suspended solids) is the concentration of suspended solids, in an aeration tank during the activated sludge process, which occurs during the treatment of waste water. The F/M ratio is a measure of the amount of food, or BOD, that is given to the microorganisms in the aeration tank. Additionally, MLVSS is defined as mixed liquor volatile suspended solids. MVLSS is generally defined as the microbiological suspension in the aeration tank of an activated-sludge biological wastewater treatment plant or it is represented the concentration of biomass in the activated sludge. The MLVSS is calculated as shown in Table 9. The hydraulic retention time (HRT) in wastewater treatment plant is a measure at an average length of time holding the wastewater in a tank. In addition, the biomass yield is defined as the ratio of the amount of biomass produced to the amount of substrate consumed (g biomass/g substrate). Moreover, in most aeration treatment systems, it is necessary to calculate the detention time is necessary in order for the microorganisms in the aeration system to absorb, adsorb and remove the contaminants (bacteria food) in the wastewater. Other calculations like sludge age, sludge produce and sludge produce in term of TTS are explained in Table 9 as a part of the calculations of the design of aeration tank.
Table 6. The estimated flowrates for phase I and phase II
Table 7. The size of clear opening based on the type of screen
Table 8. The design criteria for aerated grit chamber
Table 9. Design of Aeration Tank
To design an activated-sludge process for carbon oxidation-nitrification using the data in Table 9 and determine the volume of the aeration tank, daily oxygen requirement, and the mass of organisms removed daily from the system as below:
Where;
∀ = volume of biological tank (m3)
MLVSS = mixed liquor suspended solid (mg/l)
HRT = hydraulic retention time (hr)
Yobs = observed yield; Qw =sludge wasting discharge (m3/d)
θc = sludge retention time (d)
γshudge = sludge density (kg/m3)
Qave =average discharge (m3/d)
PXVSS = the sludge produce (kg/d)
PXSS = sludge produce in term of TSS (kg/d)
Q = return sludge discharge (m3/d)
MLSSR = return mixed liquor suspended solid (mg/l)
Check for anoxic zone requirement:
Total N- load (Nitrogen load) = Qave × N conc. = 27,826.48 × 60 = 1669.6 kg/day
T-N effluent (Total nitrogen effluent) = T-N effluent = Qave × N.eff = 27,826.48 × 0.050 = 1391.3 kg/day
N-waste sludge = 5% sludge produce = 5% × Pxvss = 0.05×3339.18 = 167 kg/day
Denitrified NO3-N = N load- N effluent- N waste sludge
Denitrified NO3.N = 1669.6–1391.3- 167 = 111.3 kg/d it can be removed in setting tank. Thus, anoxic zone is not required.
#### 6.1.2. Design secondary clarifier
The secondary settling tank (clarifier) is integral part of the activated sludge process. The main purpose of providing the secondary settling tank is to separate the large volume of suspended solid (MLSS) coming from the aeration tank and to obtained very clear stable effluent having low concentration of BOD and SS.
The secondary settling tank (SST) or clarifier is an integral part of the activated sludge process. As MLSS applied to tank is primarily biomass and flocculent in nature, normally type III or zone settling is assumed to take place in the tank, through discrete and flocculants settling also occur. The settled solids form the sludge blanket throughout the entire depth in the SST and may overflow the weir at peak flow rate if the size of the SST is inadequate. It is also essential that settling sludge is thickened in the tank to reduce its volume (Judd, 2015). So, while designing the SST, care should be taken for providing an extra depth for thickening or concentrating the settling sludge. In Table 10, it represents the design criteria for the settling tank, following criteria should be followed for extended aeration process (Janssen et al., 2002). In the table 11, it represents the design calculation of secondary settling tank.
Table 10. The criteria for the secondary settling tank
Table 11. The design calculation of secondary settling tank as follows
Table 12. The design criteria for the holding tank
Table 13. The design calculation of holding tank
Table 14. The design calculation for drying bed
Table 15. The design criteria for the chlorination tank
Table 16. The design calculation of chlorination tank as below
WLR = flow/π×d (m) = 27,826.48/π×33 = 269 m3/m.d. For phase#1
WLR = flow/π×d = 37,396.48/π×33 = 360 m3/m.d. For plase#2. Accepted.
To calculate the sludge hopper volume:
Mass of solid = Ms = SS × Qave × %removal = 320 × 37,396.48 × 0.90 × 10−3 = 10,770 kg/m3
Let moisture content = M = 97%
Then, mass of water, Mw given by, M = Mw/(Ms+Mw)*100 = 348,230 kg/d
sludge = ∀solid + ∀water = Mss+Mww = 10,770/1025 + 348,230/1000 = 358.74 m3.
The wasting sludge is adding to above volume because it is wasted from return, so 1122 + 359 = 1481 m3.
Thus, total ∀sludge = 1481m3.
#### 6.1.3. Design sludge treatment units
Sludge produced in secondary settling tank is large in volume due to high water content (0.2–0.12%) and contain more complex matter. The objective of sludge treatment is to reduce the water content of sludge and stabilize the organic content of sludge (Papoutsakis et al., 2015).
The sludge removed in wastewater treatment plants are mainly in the form of screenings, grit, and sludge. In the case of conventional domestic wastewater treatment plants, sludge is generated mainly from primary and secondary sedimentation tanks and its treatment normally includes:
1. Stabilization of sludge by digestion or treatment by lime or heat or chlorine oxidation.
2. Dewatering of sludge by filtration or centrifugation or drying on bed or in lagoons.
Therefore, the objective of sludge treatment is to reduce the water content of sludge and stabilize the organic content of sludge. One of the methods for reducing water content is thickening or concentrating solids content, dewatering and drying. The sources of sludge vary according to the systems adopted for wastewater treatment the sources of sludge generation are:
1. Primary settling tank (primary sludge).
2. Mixed liquor line for from the aeration tank or influent to the secondary clarifier (biological sludge).
3. Activated sludge settling tank (secondary sludge or activated sludge).
4. Trickling filter settling tank (secondary sludge or humus).
5. Chemical precipitation tank (chemical sludge).
##### 6.1.3.1. Design holding tanks
For the gravity holding tank for extended aeration process, Metcalf & Eddy (Janssen et al., 2002) had given the following criteria: as shown in table 12.
In table 12, it represents the design criteria for the holding tank.
In table 13, it shows the design calculation of holding tank.
##### 6.1.3.2. Design drying beds
The dewatering of digested sludge is normally accomplished on sludge drying beds where adequate land is economically available. The digested sludge is disposed on a well-drained sand and gravel bed. Thickness of sludge layer is usually 15 to 20 cm.
The water content of sludge reduced up to 70% and while the sludge volume can be reduced up to 60% in drying bed. The following design criteria are described for sludge drying beds: In table 14, it displays the design calculation for drying bed.
1. Bed surface area required for digested primary sludge = 0.1–0.25 m2/capita. And for digested primary and activated mixed sludge = 0.15–0.28 m2/capita.
2. Sludge drying time = 2–4 weeks.
3. Size of the bed (a) 6–30 m length (b) 3–8 m width.
#### 6.1.4. Disinfection unit
Disinfection refers to the partial destruction of disease-causing organism. In table 15, it shows the design criteria for the chlorination tank. Also, in the table 16, it represents the design calculation of chlorination tank.
## 7. Modelling and simulation using GPS X
GPS X is a modular, multi-purpose modelling environment for the simulation of municipal wastewater treatment plant. This allows examining the complex interactions between various units processes in the plant interactively and dynamically. The figure below shows the modelling for Al-Hay WWTP based on the influent flow, which is 37397 m3/d. In the figures below, they show the process of WWTP design starting with the influent flow going to aeration tank, clarifer1 and clarifier 2 ending up to effluent flow.
The results were calculated from the detailed sampling, after that GPS-X models were developed and calibrated to the plant data. This calibration effort involved detailed review and analysis of the plant data and development of influent fractions for the model. An example of one of the screen views of a model for the Al-hay WRRF is shown below. This calibrated model is referred to engineering models and is intended by the engineering department as well as engineers for design.
For better purification performance and for providing essential operating rules to get better technically and scientifically operation base of the WWTP, thus the simulation with the GPS-X software is used and the results showed significant and satisfactory control performance of the wastewater treatment plant. The results indicate a good functioning of WWTP along the studied period where almost all measured parameters were below the standards. Moreover, the GPS-X is utilized for improving capacity, operating efficiency and effluent quality by the existing facility can be got.
Figure 4. Modelling of extended aeration process
Figure 5. Modelling of Al-Hay WWTP
Based on the GPS X analysis, in Figures 4 and 5, they represent the systematic diagram of the extended aeration process. This process is started with the influent going up to the aeration tank ending with the effluent part. The program can be used to find appropriate control methods to minimize effluent concentrations from an activated sludge process. Basically, our design criteria are started with preliminary treatment through secondary treatment ending up with the effluent of the conventional wastewater treatment plant.
Figure 6. The relationship between TSS and index in final clarifier 1 solids profile
Figure 7. The relationship between TSS and index in final clarifier 2 solids profile
Figures 6 and 7 represent the relationship between the index and the total suspended solids. The results display an increasing in the amount of TSS with the increasing the index values. For example, it can predict a range of TSS concentrations given the typical range of growth rates, even though the true actual growth rate isn’t known. Those results have done for final clarifier 1 and clarifier 2 for secondary treatment process.
Figure 8. The relationship between total suspended solids and time (the time simulation is 5 days)
Figure 9. The relationship between the TSS and the time (the time simulation is 20 days)
In Figures 8 and 9, it displays the correlation between the total suspended solid and the time, but the time is various. Thus, the variation in the amount of TSS is considered with the variation of the time of the simulation. The time is specified in this analysis, but the effluent flow is considered with deliberation of the time of treatment process. The time is based on the data calculation is based on the population of Al-Hay starting from 2018 up to 2040. The project period is expected to 22 years. As an example, in Figures 8 and 9, it expresses the relationship between the times of the process with the amount of solids. Thus, the modeling results indicated that the increasing the time of simulation increased the removal of TSS, so the overall efficiency of the treatment system is enhanced too.
Figure 10. The relationship between the TSS and influent flow
Figure 11. The relationship between the TSS and influent flow
In the figures above, the correlations between TSS values and influent flow, which is clarified an increasing with the TSS values with flow values. But, the time of simulation is 20 days instead of 5 days.
Additionally, in Figures 10 and 11, they show the relationship between the amounts of total suspended solid with the influent flow. An enhancement in the amount of TSS is represented based on our analyses with the flow values at the simulation day 5. Meanwhile, an increasing in the amounts of TSS with the flow values at the 20 days simulation. After that, in Figure 10, it represents the correlation between TSS and the time. It can predict that at time 5 the TSS showed reduction while at time 20, the amount of TSS showed enhancement in it amount. The improvement of TSS increased with increasing the time of simulation. However, at time 5, the TSS is significantly reduced.
Figure 12. The COD bar diagram
Figure 13. Diagram of WWTP design data
Finally, in Figure 13, methodical diagram of the whole design of Al-hay wastewater treatment plant is represented. In Figure 12, the relationship between COD values during the process of plant. The COD value indicates the amount of oxygen, which is needed for the oxidation of all organic substances in water in mg/l. At mixed liquor suspended solid, the COD values is too high comparing to others values in the plant. Mixed liquor is the mixture of raw or settled wastewater and activated sludge with in aeration tank in the activated sludge process. Thus, the mixed liquor suspended solid (MLSS) is the concentration of SS in the mixed liquor. The mixed liquor is discharged into settlings tanks and the treated supernatant is run off to undergo further necessary before final discharge. Part of the sludge is returned to the aeration tank system to re-seed the new sewage entering the tank. If the MLSS content is too high, it indicates that the process of the treatment system works efficiently. It means that the process becomes overloaded that can cause the dissolved oxygen content to drop with the effect. Meanwhile, it also means that the organic matters are fully degraded and the biological is dying off. Measuring the MLSS is important to adjust the flowrate of return sludge from the secondary clarifier into the secondary treatment reactor, and to ensure that influent organic matter will be treated with a correct and appropriate concentration of microorganisms. The COD value is approximately 12,000 mg/L in the MLSS indicating that the process of wastewater treatment plant works efficiently.
## 8. Conclusion and recommendation
Wastewater treatment is required as a part for eliminating contaminants to a sufficient degree to protect water. Thus, Al-hay WWTP is one of the plants in Iraq that are not completed in its operation. So, the design is needed and is taken into consideration some parameters in the influent that is required to be controlled to enhance the efficiency of the plant.
Wastewater has been released to the environment that is defined as a combination of the water plus wastes added to the water from a variety of uses, such as industrial, commercial, residences, and there are two sources which release the wastewater into the environment.
First, sewage/community wastewater is the kind which has been expelled from domestic premises such as institutions, residence etc. and commercial establishments which are organic because of the consistency of carbon composites alike vegetables, human waste, paper etc (Zhou & Smith). Second, is the wastewater that has been produced by industrial procedures, which is also organic in composition ([HRC] INC, 2019). These pollutions can be dangerous for human body and environment so wastewater should be treated in order to prevent these damages to take place, the process, which purifies the wastewater in order to discharge it back into a watercourse is known as wastewater treatment. Wastewater treatment uses chemical, physical, and biological processes to cleanse wastewater to protect the environment and public health.
Wastewater treatment happens in some infra structures, which are called wastewater treatment plant (Hammer, 1986). Generally, a wastewater treatment plant consists of Mechanical treatment, Biological treatment and Sludge treatment sections.
There are different kinds of pollutants and wastes in the wastewater such as, nutrients, inorganic salts, pathogens, and coarse solids etc., which are very dangerous for ecology and human. In order to remove these pollutants, different processes have been exposed. There are specific processes and unit operations in wastewater treatment, which are chemical, physical or biological. All these processes should be considered before deigning a proper wastewater treatment plant, which depends on the characteristics of the wastewater. In this text, a wastewater treatment plant will be designed related to the characteristics of the wastewater.
This project is undertaken to design a wastewater treatment plant with some particular data for Al-Hay city. The data calculation is based on the population of Al-Hay starting from 2018 up to 2040. The project period is expected to 22 years. The grit chamber, equalization basin, screens, oil and grease removal, aeration tank, secondary settling tank, drying beds, and chlorination tank have been designed. Then the values for volume of aeration tank, hydraulic retention time (HRT), f/m ratio, return sludge flow rate, sludge production and oxygen requirement have been calculated. Some criteria have been made during designing the WWTP. Particularly, the recommendation is to reduce these assumptions as many as possible to achieve the more accurate and reliable results. In addition, this designing process is suitable for this particular situation, and it cannot be followed for every situations. Designing a wastewater treatment plant depends on the characteristics of the wastewater so the designing process should be analysed carefully because even a small mistake can be fatal. Based on the analysis, the sludge produce is from 3339.18 to to 4487.58 kg/day. The high observed yield as it is detected with values ranging from 0.2 to 0.6 kgVSS/kg(BOD5). The sludge retention time is equal to 27.7 day. These results are related to biological tank for Al-hay WWTP is worked during with high efficiency. Thus, the variation of principal parameters concentration of effluent of wastewater is given by GPS X analysis.
After design calculation, the data are analysed too by GPS X program. The design is applied and GPS-X modelling has been applied to the simulation of the WWTP scale. GPS X modelling has been an important part of carrying out the simulation and testing the design. The GPS X simulation has been done on biological treatment. For calibration of the model, it is needed to evaluate the operation of the plant; it needs to include all physical processes of the scale plant. In the analysis, sympathy was accomplished and some critical parameters that affect the performance of the treatment plant were predicated.
Modelling for Al-Hay WWTP has been done with extended aeration process, that one is started with the influent flow, aeration tank and two secondary tank (clarifier tank) ending with the effluent flow. Simulation is exhibited at 5 days and then at 20 days. The charts elucidate the correlation between total suspended solids and influent flow, COD bar chart, and the other results showing an emblematic diagram for the Al-Hay WWTP system. More analyses are needed to investigate other units in the WWTP system.
There are some recommendations for better performance regarding to the design criterion, management and operational issues. The excess flowrate must be treated by introducing a new concept that can help to improve in the removal of organic and nutrient of the plant. Besides that, at the beginning of the treatment, a flow meter should be installed to control the process. For operational denitrification, sufficient anoxic volume, proper carbon and mixed liquor recirculation are needed. Finally, monitoring and maintenance activities should be conducted and the operator maintaining the treatment plant should have aware of the unit processes in case of the failure of these units.
## Acknowledgements
I gratefully acknowledge Al-Hay Wastewater Treatment Plant for providing necessary information during the investigation of this research. I also thank Dr. Muhammad for his assistance during the GPS X modelling and for his helpful recommendations during the study.
### Funding
The author received no direct funding for this research.
### Notes on contributors
Nuralhuda Aladdin Jasim This is a short summary in which the author (Nuralhuda Aladdin Jasim) uses GPS X modelling for design wastewater treatment in Al-Hay city. The criteria also are considered in this research during the design of wastewater treatment plant. Also, the aim of this paper is to verify the treatment process design of Al-Hay wastewater treatment plant (WWTP). The author’ activities are to work on nanotechnologies technique. The papers have been published based on nanoparticles in which investigating the properties of natural organic matter affect the impact of nanoparticles on plants, the dispersion of nanoparticles and their effect on plants and other researches paper regarding to nanoparticles area. The author also is interested in water treatment using different coagulants either natural or synthetic coagulants used in water treatment. She is very interested in GIS technique by doing spatial analysis of forest biomass in some state located in different areas.
## References
• Anjum, M., Al-Makishah, N. H., & Barakat, M. A. (2016). Wastewater sludge stabilization using pre-treatment methods. ScienceDirect. doi:10.1016/j.psep.2016.05.022
• Avijit, M., Md, A., & Mhia, M. Z. (2018). Design and feasibility analysis of a low-cost water treatment plant for rural regions of Bangladesh. AIMS Agriculture and Food. 3(3), 18133. doi:10.3934/agrfood.2018.3.181
• Culp, R. L., Clup, G. L., & Wesner, G. Mack. (1978). Handbook of advanced wastewater treatment (2nd ed.
• Davis, M. L., & Cornwell, D. A. (2008). Introduction to environmental engineering. McGraw-Hill Companies, New York.
• Deborah, P., Silvia, F., Mariantonia, Z., Giuseppe, G., & Lorenza, M. (2016). Evaluation of the energy efficiency of a large wastewater treatment plant in Italy. Applied Energy. 161, 404411. doi:10.1016/j.apenergy.2015.10.027
• Eu, E. E. A. Sludge treatment and disposal: Management approaches and experiences. ISWA, Denmark.
• Feyzbakhsh, S., Telvari, A., & Lork, A. (2017). Investigating the causes of delay in construction of urban water supply and wastewater project in water and wastewater project in Tehran. 3(12), 12881300. doi:10.28991/cej-030958
• Hammer, M. J. (1986). Water and wastewater technology.
• Hubble, Roth and Clark (HRC) INC. (2019). Project plan for wastewater treatment plant upgrades (pp. 48302). Michigan: Bloomfield Hills.
• Janssen, P. M. J., Meinema, K., & van der Roest, H. F. (Eds.). (2002). Biological phosphorus removal: Manual for design and operation. London, IWA: STOWA.
• Judd, S. J. (2015). The status of industrial and municipal effluent treatment with membrane bioreactor technology. 305, 3745. doi:10.1016/j.cej.2015.08.141
• Karia, G. L. & Christian, R.A. (2006). Wastewater treatment :concepts and design approach (1st ed.). New Delhi: Prentice-Hill of India.
• Karkush, M., Abdul Kareem, M., & Jasim, M. (2018). Ultimate lateral load capacity of piles in soils contaminated with industrial wastewater. Civil Engineering Journal. 4(3), 509517. doi:10.28991/cej-0309111
• Kim, K., Jung, J., & Han, H. (2019). Utilization of microalgae in aquaculture system: Biological wastewater treatment. Emerging Science Journal. 3(4), 209221. doi:10.28991/esj-2019-01183
• Kulkarni, U. (2011). Grit removal and treatment for sustainable grit recycling. Indian Environmental Association. Retrieved from: http://ev.ldcealumni.net/papers/ATE_HUBER.pdf
• Marc, B., Birgit, B., Marc, B., Ewa, B., Julian, F., Elisabeth, S., … Christa, M. (2018). Evaluation of a full-scale wastewater treatment plant upgraded with ozonation and biological post-treatments: Abatement of micropollutants, formation of transformation products and oxidation by-products. Water Research. 129, 486498. doi:10.1016/j.watres.2017.10.036
• Metcalf & Eddy, Inc. (2003). Wastewater engineering treatment and reuse.Boston: McGraw-Hill.
• Papoutsakis, S., Miralles-Cuevas, S., Oller, I., Sanchez, J. L. G., Pulgarin, C., & Malato, S. (2015). Microcontaminant degradation in municipal wastewater treatment plant secondary effluent by EDDS assisted photo-Fenton at near-neutral pH: An experimental design approach. Catalysis Today. 252, 6169. doi:10.1016/j.cattod.2015.02.005
• Parsa, N., Khajouei, G., Masigol, M., Hasheminejad, H., & Moheb, A. (2018). Application of electrodialysis process for reduction of electrical conductivityand COD of water contaminated by composting leachate. Civil Engineering Journal. 4(5), 10341045. doi:10.28991/cej-0309154
• Ronan, G., Julien, R., Romain, M., Emmanuelle, V., Catherine, M., Fabrice, N., … Vincent, R. (2019). Organic micropollutants in a large wastewater treatment plant: What are the benefits of an advanced treatment by activated carbon adsorption in comparison to conventional treatment? Chemosphere, 218, 10501060. doi:10.1016/j.chemosphere.2018.11.182
• Rungnapha, K., Hardy, T., Huub, R., & Karel, J. K. (2015). Energy and nutrient recovery for municipal wastewater treatment: How to design a feasible plant layout? Environmental Modelling & Software. 68, 156165. doi:10.1016/j.envsoft.2015.02.011
• Soomaree, K. (2015). Detail design of wastewater treatment plant. doi:10.13140/RG.2.1.3503.4327.
• Steve, A. C., Jin, L., & Arnold, G. T. (2016). Transport and fate of microplastic particles in wastewater treatment plants. Water Research. 91, 174182. doi:10.1016/j.watres.2016.01.002
• Zhou, H., & Smith, D. W. Advanced technologies in water and wastewater treatment
Appendix A
Table A1. Influent basic design data
Table A2. The effluent standards discharge to streams
|
{}
|
Monday, March 8, 2021 12:30 PM
Jiuya Wang (Duke University)
Abstract: The \ell-torsion conjecture states that the size of the \ell-torsion subgroup Cl_K[\ell] of the class group of a number field K is bounded by Disc(K)^{\epsilon}. It follows from a classical result of Brauer-Siegel, or even earlier result of Minkowski, that the class number |Cl_K| of a number field K is always bounded by Disc(K)^{1/2+\epsilon}, therefore we obtain a trivial bound (Disc}(K)^{1/2+\epsilon} on |Cl_K[\ell]|. We will talk about results on this conjecture, and recent work on breaking the trivial bound for \ell-torsion of class groups in some cases based on the work of Ellenberg-Venkatesh.
|
{}
|
Waiting for answer This question has not been answered yet. You can hire a professional tutor to get the answer.
QUESTION
# 1) a) A thick book with 1200 pages has quite a few typographical errors. There are only 180 pages without typographical errors in the whole book.
1) a)A thick book with 1200 pages has quite a few typographical errors. There are only 180 pages without typographical errors in the whole book. If typographical errors occur randomly, about how many pages in the book have three typographical errors?What is the median number of typographical errors per page?2.A.J.Jones works at Acme Economics Think Tank. The employees’ cafeteria offers a daily special called Box Lunch Surprise for $6.50. The cook has a limited repertoire, so that the surprise lunch is always either a ploughman’s lunch (cheese and bread) or a grilled-cheese sandwich with fries. On December 19th, 2011 there are 20 unmarked box lunches, of which there are 8 ploughman’s lunches and 12 grilled-cheese sandwiches. The boxes are arranged in a random fashion so that there is no way of knowing what is in a box before it is bought. Once opened, a box lunch cannot be returned.a)Today A.J.Jones has decided to buy a Box Lunch Surprise for each of the 4 members of his team. What is the probability that there will be two ploughman’s lunches and two grilled-cheese sandwiches?b)Two members of the team will absolutely not eat grilled cheese. How many lunches will A.J. have to buy in order to have at least a 90% probability of including two ploughman’s lunches (or more)?c)A.J. buys boxes one at a time until he gets three grilled cheese. What is the probability that he will have to spend$45.50 in order to achieve his goal?3.A time-and-motion-study consultant has been hired at Eurelia Industries Limited. She has identified a certain work station as a bottleneck in production. Initial data suggest that the times required for processing pieces at this station may be treated as having an exponential distribution with a mean of three hundred seconds.a)If fifty pieces are processed, about how many of them take between two hundred forty and three hundred sixty seconds to process?b)What is the probability that more than the mean number of pieces will be processed in a ten-minute period?c)What is the median processing time at the station?d)What proportion of processing times are within two standard deviations of the mean?
|
{}
|
It is currently 23 Mar 2018, 17:45
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# If |x| = |y| and xy = 0, which of the following must be true?
Author Message
TAGS:
### Hide Tags
Manager
Joined: 18 Feb 2015
Posts: 87
If |x| = |y| and xy = 0, which of the following must be true? [#permalink]
### Show Tags
22 Jan 2016, 12:51
5
This post was
BOOKMARKED
00:00
Difficulty:
5% (low)
Question Stats:
87% (00:39) correct 13% (00:58) wrong based on 342 sessions
### HideShow timer Statistics
If |x| = |y| and xy = 0, which of the following must be true?
A $$xy^2>0$$
B. $$x^2y>0$$
C. $$x+y=0$$
D. $$\frac{x}{(y+1)}=2$$
E. $$\frac{1}{x}+\frac{1}{y}=\frac{1}{2}$$
[Reveal] Spoiler: OA
Last edited by Bunuel on 27 Nov 2017, 21:36, edited 5 times in total.
Reformatted the question
Current Student
Joined: 20 Mar 2014
Posts: 2689
Concentration: Finance, Strategy
Schools: Kellogg '18 (M)
GMAT 1: 750 Q49 V44
GPA: 3.7
WE: Engineering (Aerospace and Defense)
Re: If |x| = |y| and xy = 0, which of the following must be true? [#permalink]
### Show Tags
22 Jan 2016, 13:04
HarveyKlaus wrote:
If lXl = lYl and XY=0, which of the following must be true?
A $$xy^2>0$$
B. $$x^2y>0$$
C. $$x+y=0$$
D. $$x/(y+1)=2$$
E. $$1/x+1/y=1/2$$
Follow posting guidelines, including proper formatting of the question.
For the question, confirm that you have transcribed option E correctly.
You are given that |x| = |y| and xy = 0. For a MUST BE TRUE question, make sure to use POE for the options as the only option remaining will be true for ALL possible cases.
xy=0 ---> either x=0 and y $$\neq$$ 0 or y=0 and x $$\neq$$ 0 or both x =y=0. for the sake of simplicity, I will choose the case when x=y=0 and this also satisfies |x|=|y|.
Substitute x=y=0 in the options A-D and see which one remains true.
A $$xy^2>0$$ . Not true. Eliminate
B. $$x^2y>0$$. Not true. Eliminate
C. $$x+y=0$$. True . Keep.
D. $$x/(y+1)=2$$. x/(y+1) = 0 $$\neq$$ 2. Eliminate.
E. $$1/x+1/y=1/2$$. Not true. Eliminate. The only way you are going to get 1/x + 1/y = 1/2 is when you have x=y=1 . Although you will satisfy |x|=|y| , xy $$\neq$$ 0. Thus eliminate this option.
Hence C is the correct answer.
Hope this helps.
Manager
Joined: 21 Jun 2017
Posts: 78
Re: If |x| = |y| and xy = 0, which of the following must be true? [#permalink]
### Show Tags
13 Oct 2017, 08:22
HarveyKlaus wrote:
If |x| = |y| and xy = 0, which of the following must be true?
A $$xy^2>0$$
B. $$x^2y>0$$
C. $$x+y=0$$
D. $$x/(y+1)=2$$
E. $$1/x+1/y=1/2$$
Any number multiplied by zero, gives the product zero. Since the absolute value of 0 is 0, and |x| = |y|, x,y = 0
Therefore, x + y = 0 is the only answer that must be true.
(C)
Target Test Prep Representative
Affiliations: Target Test Prep
Joined: 04 Mar 2011
Posts: 2166
Re: If |x| = |y| and xy = 0, which of the following must be true? [#permalink]
### Show Tags
27 Nov 2017, 19:08
HarveyKlaus wrote:
If |x| = |y| and xy = 0, which of the following must be true?
A $$xy^2>0$$
B. $$x^2y>0$$
C. $$x+y=0$$
D. $$x/(y+1)=2$$
E. $$1/x+1/y=1/2$$
Since |x| = |y| and xy = 0, both x and y must be zero.
Thus, we see that A and B are not true, because both answers are equal to zero.
C, however, must be true because 0 + 0 = 0.
_________________
Jeffery Miller
GMAT Quant Self-Study Course
500+ lessons 3000+ practice problems 800+ HD solutions
Re: If |x| = |y| and xy = 0, which of the following must be true? [#permalink] 27 Nov 2017, 19:08
Display posts from previous: Sort by
|
{}
|
Marine Food Security
# Ocean acidification emerges as new climate threat
Republished From: Waves of Change
Source: The Washington Post
Topics:
Video: The Post’s Juliet Eilperin talks with Kris Holderied, director of the the National Oceanic and Atmospheric Administration’s Kasitsna Bay Laboratory, about the pH levels of Kachemak Bay in Alaska.
HOMER, Alaska — Kris Holderied, who directs the National Oceanic and Atmospheric Administration’s Kasitsna Bay Laboratory, says the ocean’s increasing acidity is “the reason fishermen stop me in the grocery store.”
“They say, ‘You’re with the NOAA lab, what are you doing on ocean acidification?’ ” Holderied said. “This is a coastal town that depends on this ocean, and this bay.”
This town in southwestern Alaska dubs itself the Halibut Fishing Capital of the World. But worries about the changing chemical balance of the ocean and its impact on the fish has made an arcane scientific buzzword common parlance here, along with the phrase “corrosive waters.”
In the past five years, the fact that human-generated carbon emissions are making the ocean more acidic has become an urgent cause of concern to the fishing industry and scientists.
The ocean absorbs about 30 percent of the carbon dioxide we put in the air through fossil fuel burning, and this triggers a chemical reaction that produces hydrogen, thereby lowering the water’s pH.
The sea today is 30 percent more acidic than pre-industrial levels, which is creating corrosive water that is washing over America’s coasts. At the current rate of global worldwide carbon emissions, the ocean’s acidity could double by 2100.
What impact it is having on marine life, how this might vary by geography and species, and what can be done about it if humans do not cut their carbon output significantly are some of the difficult questions scientists and policymakers are seeking to answer.
The decline in pH will likely disrupt the food web in many ways. It is making it harder for some animals, such as tiny pteropods and corals, to form their shells out of calcium carbonate, while other creatures whose blood chemistry is altered become disoriented and lose their ability to evade predators.
To study what is happening off the West Coast, Gretchen Hofmann, a professor of marine biology at the University of California at Santa Barbara, has recruited everyone from sea-urchin divers to Bureau of Ocean Energy Management, Regulation and Enforcement officials.
She calls it “an all-hands-on-deck moment in our country, and it’s happening before our eyes.”
The NOAA has started tracking changes in the ocean’s pH over time in eight coastal and coral reef ecosystems, ranging from the Gulf of Maine to coastal Hawaii, and is evaluating its impact on more than two dozen commercially important species, such as red king crab, summer flounder and black sea bass.
“One of the primary questions is how is the chemistry of the water changing and how variable is that change across the water we’re responsible for, which is a lot of coastline,” said Libby Jewett, director of the program.
Federal and state authorities are searching for ways to cope with a problem whose obvious solution — slashing global carbon emissions — remains elusive. A blue-ribbon panel established by outgoing Washington Gov. Chris Gregoire (D), which will issue its recommendations in November, is examining local contributors such as agricultural runoff. Federal officials and scientists, meanwhile, are trying to determine which species may be able to adapt to more acidic seas and explore what other protections could bolster fish populations under pressure.
In the 1970s, NOAA senior scientist Richard Feely and his colleagues began talking about measuring carbon concentrations in the ocean, the way Charles David Keeling had charted atmospheric carbon from a station in Hawaii’s Mauna Loa starting in 1958. Keeling pushed the oceanographer to refine his methods before taking any measurements, and Feely conducted his first transect of the Pacific Ocean in 1982.
By the late 1990s, scientists such as the National Center for Atmospheric Research’s Joan Kleypas were demonstrating that the sea’s declining pH posed a threat to marine life . At first, scientists assumed that the growing acidity of the ocean would dismantle ecosystems around the world in a uniform way, by dissolving the coral reefs that provide essential habitats and impeding the development of the smallest organisms that form the basis of the food web.
But now, scientists are beginning to tease out a more complex picture, in which some parts of the world could be more vulnerable and others may demonstrate resilience. Water from the deep ocean normally comes up and spills over the continental shelf in a process called upwelling; in the Pacific Northwest this water is increasingly acidic, killing oyster larvae that farmers are growing. Much of Alaska’s waters already have lower pH levels, because the water is colder and cold water can hold more carbon dioxide, and the water that reaches the Arctic has been circulating around the planet, absorbing CO2 along the way.
According to NOAA supervisory oceanographer Jeremy Mathis, “It doesn’t take much to push it past the thresholds we’re concerned about.”
And last year, a team of researchers led by Oregon State University professor George Waldbusser found that the pH in the lower part of the Chesapeake Bay is declining at a rate that’s three times faster than the open Pacific Ocean, partly because of increased nutrient runoff from farming and other activities. This stream of nutrients causes phytoplankton to take more carbon dioxide out of the upper Bay; as the plankton release CO2 as they move to the lower Bay, it increases carbon concentrations and lowers the overall pH.
A.J. Erskine, aquaculture manager for the Kinsale, Va.-based Bevans Oyster Co. , and Cowart Seafood Corp. in Lottsburg, Va., said they started focusing on the issue when “two years ago we were seeing production losses, and we didn’t know where it was from.”
Six shellfish hatcheries in Virginia have used state funds to conduct their first year of water chemistry monitoring and hope to do more; Erskine said they suspect nutrient runoff from the land contributes to the problem.
Oyster farmers off the coasts of Washington and Oregon were the first to see how ocean acidification threatened their business. Alan Barton, an employee at Oregon’s Whiskey Creek Shellfish Hatchery, suspected that lower pH waters were killing off oyster larvae, or spat. Working with Oregon State University and NOAA researchers, they were able to prove it was the case, and now time their intakes to ensure that their oysters are exposed to less-acidic water.
“The scientists helped provide an adaptation strategy to help that industry, and it worked,” Feely said, adding that a $500,000 investment in pH-monitoring equipment “saved that industry$34 million in one year,” in 2011.
But Feely and Jewett acknowledged that tackling the problem in the open ocean will be harder. Jewett said that if they can identify which species are most vulnerable, “we can try to be even more protective of them for the future” by limiting their catch.
The die-off of oyster larvae in the Pacific Northwest has implications for oyster growers in places as far away as Homer, Alaska, since they traditionally buy their spat from Washington and Oregon farms. Out on the Homer spit, a slim strip of land jutting out into Kachemak Bay, the Kachemak Shellfish Growers cooperative office now boasts a small hatchery where it hopes to produce 3 million spat this year.
“We just can’t rely on the Lower 48 anymore,” said co-op manager Sean Crosby, whose group recieved \$150,000 in federal funds over the past two years to start up and run the hatchery. “Even though we’re not seeing ocean acidification in Kachemak Bay, we’re feeling its effects.”
Alaska and the NOAA are jointly funding four buoys throughout the state to monitor pH levels, while other NOAA scientists are testing how species such as surf smelt would likely gain from a lower pH because they thrive under those conditions, while others, including dungeness crab, would lose.
These species interact with each other, which is why ocean acidification could have such large ripple effects. The highly vulnerable pteropods, for example, can make up as much as 40 percent of the diet of Alaska’s juvenile pink salmon.
“When you ask why does ocean acidification matter, often we’re interested because of the fish we eat and the things we make money off of,” said Shallin Busch, a research ecologist at the NOAA’s Northwest Fisheries Science Center.
Other species, such as purple sea urchins off California’s coast, have shown some genetic capacity to adapt to more acidic conditions, in part because they are periodically exposed to corrosive waters. Hofmann described her job as seeking an answer to the question, “Will there be sushi?”
“The question is, can they adapt quickly enough in this rapidly changing environment?” Hofmann asked. “And the answer, at least in the case of sea urchins, could be yes.”
Glossary
|
{}
|
# Characteristic equation (calculus)
In mathematics, the characteristic equation (or auxiliary equation[1]) is an algebraic equation of degree upon which depends the solution of a given th-order differential equation[2] or difference equation.[3][4] The characteristic equation can only be formed when the differential or difference equation is linear and homogeneous, and has constant coefficients.[1] Such a differential equation, with as the dependent variable and as constants,
will have a characteristic equation of the form
whose solutions are the roots from which the general solution can be formed.[1][5][6] Analogously, a linear difference equation of the form
has characteristic equation
discussed in more detail at Linear difference equation#Solution of homogeneous case.
The characteristic roots (roots of the characteristic equation) also provide qualitative information about the behavior of the variable whose evolution is described by the dynamic equation. For a differential equation parameterized on time, the variable's evolution is stable if and only if the real part of each root is negative. For difference equations, there is stability if and only if the modulus (absolute value) of each root is less than 1. For both types of equation, persistent fluctuations occur if there is at least one pair of complex roots.
The method of integrating linear ordinary differential equations with constant coefficients was discovered by Leonhard Euler, who found that the solutions depended on an algebraic 'characteristic' equation.[2] The qualities of the Euler's characteristic equation were later considered in greater detail by French mathematicians Augustin-Louis Cauchy and Gaspard Monge.[2][6]
## Derivation
Starting with a linear homogeneous differential equation with constant coefficients ,
it can be seen that if , each term would be a constant multiple of . This results from the fact that the derivative of the exponential function is a multiple of itself. Therefore, , , and are all multiples. This suggests that certain values of will allow multiples of to sum to zero, thus solving the homogeneous differential equation.[5] In order to solve for , one can substitute and its derivatives into the differential equation to get
Since can never equate to zero, it can be divided out, giving the characteristic equation
By solving for the roots, , in this characteristic equation, one can find the general solution to the differential equation.[1][6] For example, if is found to equal to 3, then the general solution will be , where is an arbitrary constant.
## Formation of the general solution
Solving the characteristic equation for its roots, , allows one to find the general solution of the differential equation. The roots may be real and/or complex, as well as distinct and/or repeated. If a characteristic equation has parts with distinct real roots, repeated roots, and/or complex roots corresponding to general solutions of , , and , respectively, then the general solution to the differential equation is
.
### Example
The linear homogeneous differential equation with constant coefficients
has the characteristic equation
.
By factoring the characteristic equation into
one can see that the solutions for are the distinct single root and the double complex roots . This corresponds to the real-valued general solution
with constants .
### Distinct real roots
The superposition principle for linear homogeneous differential equations with constant coefficients says that if are linearly independent solutions to a particular differential equation, then is also a solution for all values .[1][7] Therefore, if the characteristic equation has distinct real roots , then a general solution will be of the form
.
### Repeated real roots
If the characteristic equation has a root that is repeated times, then it is clear that is at least one solution.[1] However, this solution lacks linearly independent solutions from the other roots. Since has multiplicity , the differential equation can be factored into[1]
.
The fact that is one solution allows one to presume that the general solution may be of the form , where is a function to be determined. Substituting gives
when . By applying this fact times, it follows that
.
By dividing out , it can be seen that
.
However, this is the case if and only if is a polynomial of degree , so that .[6] Since , the part of the general solution corresponding to is
.
### Complex roots
If the characteristic equation has complex roots of the form and , then the general solution is accordingly . However, by Euler's formula, which states that , this solution can be rewritten as follows:
where and are constants that can be complex.[6]
Note that if , then the particular solution is formed.
Similarly, if and , then the independent solution formed is . Thus by the superposition principle for linear homogeneous differential equations with constant coefficients, the part of a differential equation having complex roots will result in the following general solution: .
## References
1. Edwards, C. Henry; Penney, David E. "3". Differential Equations: Computing and Modeling. David Calvis. Upper Saddle River, New Jersey: Pearson Education. pp. 156–170. ISBN 978-0-13-600438-7.
2. Smith, David Eugene. "History of Modern Mathematics: Differential Equations". University of South Florida.
3. Baumol, William J., Economic Dynamics, 3rd edition, 1970, p. 172.
4. Chiang, Alpha, Fundamental Methods of Mathematical,Economics, 3rd edition, 1984, p. 578, p. 600.
5. Chu, Herman; Shah, Gaurav; Macall, Tom. "Linear Homogeneous Ordinary Differential Equations with Constant Coefficients". eFunda. Retrieved 1 March 2011.
6. Cohen, Abraham (1906). An Elementary Treatise on Differential Equations. D. C. Heath and Company.
7. Dawkins, Paul. "Differential Equation Terminology". Paul's Online Math Notes. Retrieved 2 March 2011.
|
{}
|
The Nintendo stork
In January, the online magazine spiked asked me to write 200 words or less on the question, “What is the greatest innovation in your field?” I thought it was a dumb question, but I answered it anyway. The magazine still hasn’t put up the responses — I guess not enough “thinkers” got back to them yet — but today, since I feel like blogging and don’t have anything else to post, here is my response. Enjoy.
The greatest innovation in computer science was to represent machines — objects that do things, respond to their environments, surprise their creators — as nothing but strings of information. When I was a kid, my overriding ambition was to write my own Nintendo games. But while I could draw the characters and the levels, I had no idea what it would be like to breathe life into a game — to teach the game how to respond to the controller. I pictured thousands of engineers in white lab coats crafting a game cartridge using enormous factory equipment, as they would a 747.
Then a friend showed me a rudimentary spaceship game written in AppleBASIC. Look: here were the lines of code, and here was the game. Slowly it dawned on me that these screenfuls of funny-looking commands weren’t just some sort of blueprint for the game — they were the game. Change the code, and the game would do something different. Better yet, the task of writing the commands was ultimately just a big math problem. This was Alan Turing’s great insight of 1936. For me, it was a revelation comparable only to finding out where babies came from.
(Unfortunately, I’ve long since lost touch with the AppleBASIC-game-playing friend — last I heard through mutual acquaintances, he went off to fight in Afghanistan, and came back injured by a shrapnel bomb.)
48 Responses to “The Nintendo stork”
1. Ran Halprin Says:
Excellent post…
I personally don’t trust programmers or computer scientists who don’t attribute their entrance into the field to the burning desire to create their own games… 🙂
2. Geordie Says:
Thousands of engineers in white lab coats did create the hardware that AppleBASIC code ran on.
3. Osias Says:
But not the games themselves, Gerodie, that’s the point!
4. Blake Stacey Says:
But. . . but. . . I thought BASIC was “considered harmful”! How does a clever, talented, handsome, etc. physics and computer science expert handle the shame of BASIC in their past?
5. Scott Says:
I hope you’re sitting down for this, Blake, but clever, talented, and handsome as your humble blogger might be, he still prefers QBASIC for doing quick calculations.
Just as I have zero ability to learn human languages other than English (as both my Hebrew and Mandarin teachers will attest), so too am I abysmally bad at learning new programming languages. Sure I can do C/C++ when I have to (e.g., when I need more than 64K of memory), but I think in QBASIC, the first language I learned. Maybe I just have the Church-Turing Thesis in my bones?
So let the hotshot coders laugh at me. Let them jeer. But how many of them could make it through a PhD in CS at Berkeley without really learning any OS besides Windows, any text editor besides Notepad, or any programming language besides QBASIC?
6. mick Says:
Sweet mother of God. Scott, that’s a horrific revelation. YOU STILL THINK IN QBASIC! I literally just fell out of my chair.
The last time I used BASIC was in an experimental physics class where we were forced to use ancient PCs to do data analysis. I think it was meant to be like some sort of weird survival course for physicists. You know the type, just in case we ever got stuck on a desert island which only had 286 processors we could still do our analysis….
7. Scott Says:
Oh, come on. QBASIC was an enormous advance over GW-BASIC — it doesn’t even require line numbers!
8. mick Says:
Scott, seriously dude. Go find yourself an undergrad to teach you some of the fancy new-fangled languages out there (says me who has written two simple MATLAB codes in the last 3 years).
9. Walt Says:
Don’t take any lips from the young-uns. The first programming language I ever used for programs of any complexity was Pascal. I haven’t used Pascal in years, but every once in a while I’ll type “:=” instead of “=” in my C programs.
10. mick Says:
Young-uns – I’m like 3 years older than Scott.
11. Blake Stacey Says:
A few months back, SF writer David Brin had a piece in Salon called “Why Johnny Can’t Code” (14 September 2006), whose main thesis was something like the following:
In the olden days, every personal computer came with a programming language which kids could readily sink their claws into. Somehow, despite the saturation of daily life with computers and the profusion of novel programming languages, we lost the easy avenue into programming. For all its clunkiness, BASIC still had a low barrier to entry: its syntax and operation were simple, it was omnipresent, and it was often available by simply turning a computer on. (Contrast this with our modern situation: Python might be better on linguistic grounds, and it might be more suitable for “serious work”, but if you’re a kid in a Windows household, you can’t get it without finding and downloading files. And you damn sure don’t see Python scripts in math and science textbooks.)
When I put the argument that way, most everybody says, “Yeah. Huh. Maybe we could do better about that.” Or, “I work for Apple/MIT Media Lab/Google now, but I started with BASIC when I was a kid. ‘Tain’t like no original sin.”
Which might be why I found the Slashdot crowd’s response to Brin’s essay a little, um, odd. Most of it boiled down to the lament, “David Brin, that nostalgic old fool, loves BASIC,” maybe with an undercurrent of “we’ve been betrayed by an SF writer.” There was also a considerable amount of screaming, “Python! Java! Ruby! KPL!” with nobody actually providing data about whether the proportion of young’uns entering programming (normalized by total number of computer owners) has risen since the BASIC days. Offhand, I don’t know those numbers either; what do I type into Google to get them?
The essay did provoke somebody to make Quite BASIC, which you might appreciate. It’s even got the Henon Strange Attractor!
12. Ran Halprin Says:
I grew up on GW-Basic and later QBasic and Quick Basic (literally grew up – ages 10-17, for the most of them several hours daily) – yet I believe I was not, as Dijkstra put it, “mentally mutilated”… I passed via Pascal and C, but today I think in Java (although I usually still use Matlab).
I still didn’t write any games in any other language but Basic…
13. John Sidles Says:
You young ‘uns know nothing! Fresh off the farm, I learned to code by pushing binary buttons to set the bits of an instruction register, then flipping a lever to execute the instruction! Flipping another lever would output the memory onto a punched tape!
Yessir … 2K of 12 bit words — it was a Control Data 8000 series, housed in a chassis bigger than double-wide home refrigerator.
My job was to write the data-collection programs for this humming beast. Also, whenever the computer broke — which was often — to diagnose which logical gate was broken — there were two gates per circuit card — and solder in a new transistor.
Oh, the glorious feeling of satisfaction! They paid me $300 a month, and beers were a dime on Thursday afternoons. Life was good. Just to mention, that machine was light-years ahead of today’s quantum computers. 🙂 14. Scott Says: Yeah, I learned some Pascal for the AP test, assembly for architecture class, MATLAB for scientific computing class, Java because it seemed cool at the time, and Lisp because I was fired up by Paul Graham’s essays. I’ve since completely forgotten all of them. 15. Scott Says: The essay did provoke somebody to make Quite BASIC, which you might appreciate. That’s terrific — I hadn’t seen that! Except I don’t think it should require line numbers. 16. Koray Says: Scott, Haskell has some following among the pure math types. You may find sigfpe’s posts interesting (esp. this one about quantum computing and monads). 17. Greg Kuperberg Says: I have first-hand experience with this. Python is not only perfectly adequate as a programming language for children, it does come automatically with both Linux and MacOS. Unfortunately, it might not be there in Windows, and in any case the great programming subculture of the old days may have been washed out. Or maybe not? Maybe there are actually more children programming now than before, but their subculture is spread out within a much larger society of non-programming users. 18. Carl Says: I agree that Python is an excellent language for teaching programming to beginners, but I’d like to point out there’s a very common language (or perhaps “language”) which I think is introducing a lot of kids today to programming: HTML. Think about it, basically every kid today has a MySpace or Live Journal or some such. So, they all want to write blog posts and leave each other comments. And if you want what you write to look nice you learn how to use the <b> tag and the <i> tag and so on. Then you decide you don’t just want to do a little post inside a bigger template, you want to do your own template. So, you learn CSS and about how HTML is really supposed to be used. Next you start using a CMS, but you want to tweak a little part of it, so you look at the PHP behind it and change a little. Before too long, you’ve gotten sucked into writing web apps. Crappy PHP ones at first, and eventually ones that use more respectable programming languages. Yeah, I don’t see programming dying out just because BASIC is dead. If anything more kids today are in a position to work their way up the chain from super high level markup (HTML) to custom middleware (managing a CMS) to using a scripting language (PHP) all the way on until they’re doing assembly or whatever. Remember that while we all used BASIC as kids, we were also usually one of the only kids in our classroom who did. Since nowadays everyone has a need for a basic understanding of HTML whereas learning BASIC wasn’t really a necessary part of our social environment, more people will be in a position to enter CS in the years to come. 19. zevans Says: I still feel most comfortable programming with the language I ‘grew up with’ as well. But in my case it happened to be C. I suspect that it is common for people to develop a strong bias toward the first programming language that they learnt. But in my case, I just use C because it’s the most clean, powerful, well standardised, and flexible language ever made. :p 20. Nagesh Adluru Says: Hey Scott, Just some curious questions. Hope you don’t mind. Do you make money with your popularity in blogging? Do you intend to make? Did you get any such offers? I was told people make money by blogging. I was bold enough to ask this because of your nice open nature:) 21. Scott Says: Do you make money with your popularity in blogging? No, not unless this blog leads to higher-paying tenure-tracks (which seems exceedingly doubtful). Do you intend to make? No. I guess I could make a few bucks from ads, but it never seemed worth it. Did you get any such offers? Yes. My parent corporation, Shtetl-Optimized Ventures International (NASDAQ: SHTLOPT), did entertain several$multibillion buyout offers — one from a consortium of Japanese investors, one from BQP Holdings Ltd., one from the same VC’s who invested in D-Wave. In the end, though, I decided to keep controlling stake in the family.
22. tgm Says:
Scott, your last answer is really funny (or is it that I am just tired). But what shocked me reading the post and comments, is that the only text editor you ever used (at least until completing your phd) is notepad. Is that really true? Neither emacs nor vi? I cannot believe it…
23. Scott Says:
I tried emacs and vi and couldn’t stand them. No doubt if I studied them for several decades I’d think differently (“oh, of course — it’s just Ctrl-Shift-Alt-F10 to delete!”). These days I use WinEdt for text editing. I’ve always written my papers in Scientific Workplace.
24. Ryan Budney Says:
The first language I used was some reverse-polish native language to my dad’s HP calculator… that was back around 1979-1980. I typed in games from the calculator instructions.
My first real language was Basic, on a 4.77MHz PC XT, with something like 64kb of RAM and a math co-processor. My dad had the hot-rod of the neighbourhood. A friend would come over and write games, and I learned by reverse-engineering his code.
I moved to Pascal, wrote a few games myself, then started the migration to C++. But my Microsoft C++ compiler didn’t implement templates correctly and my learning was stunted until grad school, years later. I remember taking my old code that I could never get to compile as a high-school kid on my Microsoft compiler, and it compiled without error in GNU C++ without modification.
I entered mathematics with a desire to write a general-relativity compliant Space-War style multiplayer game. But after learning GR, I realized the time lag stuff would be problematic to implement.
Scott, did you take Hubbard’s multivariable calculus course as an undergrad, and if so, do you remember who your TA was?
26. Scott Says:
Scott, did you take Hubbard’s multivariable calculus course as an undergrad
No, I didn’t. I’d already taken multivar at Clarkson University when I started at Cornell.
27. anonymous Says:
Scott, why not use TeXmacs? Yes, it doesn’t support LaTeX style files, but maybe you could convince some conferences/journals to provide TeXmacs style files.
28. Scott Says:
Scott, why not use TeXmacs?
29. anonymous Says:
To see what TeXmacs typesetting looks like, see the pdf here:
http://www.texmacs.org/Samples/Galois
The main novelty with TeXmacs is that you get this high quality typesetting while editing as well.
http://www.texmacs.org/Samples/texmacs.pdf
30. Scott Says:
I’m just messing around with TeXmacs now. I can see already that I won’t be able to use it exclusively — besides the lack of support for style files, there are all sorts of LaTeX commands that don’t show up right and things I find annoying about the interface. But it should be great as an additional tool, on those occasions where either Scientific Workplace isn’t doing something right or I don’t have access to it. Thanks!!
31. anonymous Says:
Yes, the lack of support for TeX/LaTeX style files is a problem, but if enough people asked, conferences and journals may start supplying TeXmacs style files.
Another possibility it to add a feature to TeXmacs that would take some sample papers from the conference/journal (say in pdf format) and automatically generate a TeXmacs style file for that conference/journal. Yes, it’s only heuristic, but it might be good enough.
32. anonymous Says:
BTW, is scientific workplace truly wysiwyg like texmacs? Does the output look exactly like what you edit?
33. Scott Says:
No, it doesn’t even try to be wysiwyg — I’d describe it as wysic (what you see is comprehensible), as opposed to straight TeX, which is wysisopcipbyc (what you see is something other people can immediately parse but you can’t).
34. anonymous Says:
Well that’s why TeXmacs is an amazing technical achievement. It’s like TeX but with a real-time typesetter. Why are people still using TeX/LaTeX?
35. alfalfa Says:
Do you make money with your popularity in blogging?
No, not unless this blog leads to higher-paying tenure-tracks (which seems exceedingly doubtful).
It is not so unreasonable that your blog (which has most likely led to increased name recognition in the TCS community at large) will lead to 1 or 2 offers that you wouldn’t otherwise have gotten. And this, of course, leads to a better bargaining position, which leads to a better starting salary.
Look at it this way: Via the publicity of your blog, whatever school you join instantly becomes known as a quantum center. Schools like recognition.
Mick said:
The last time I used BASIC was in an experimental physics class where we were forced to use ancient PCs to do data analysis. I think it was meant to be like some sort of weird survival course for physicists. You know the type, just in case we ever got stuck on a desert island which only had 286 processors we could still do our analysis….
I think I can safely hazard a guess that this was at MIT. That school has a very peculiar and specific culture.
37. Greg Kuperberg Says:
I agree that Python is an excellent language for teaching programming to beginners, but I’d like to point out there’s a very common language (or perhaps “language”) which I think is introducing a lot of kids today to programming: HTML.
HTML has everything going for it other than, unfortunately, Turing completeness. Turing completeness is “home plate” in this discussion. Many introductions to computers cop out at second or third base.
I understand that Javascript and PHP are Turing-complete extensions to HTML. That is why they are used. I do not like the way that they blur the distinction between Turing completeness and markup. Combining them with HTML also mixes oil and water, to an extent.
I like Python.
38. Carl Says:
Obviously, HTML isn’t a “programming language” per se, but I stand by the claim that by introducing kids to the idea that “you can control what computers do by changing some psuedo-English text” it sets them at the head of the path to CS enlightenment.
39. mick Says:
Elad, sorry to dissapoint. It was at the University of Queensland, which isn’t quite up to MIT’s standard in, well, anything. We did have some well-known experimental physics profs with an evil sense of humour. I’m sure at least one of them is reading this comment as well…
40. Ryan Budney Says:
Remember the old PC game “Omega”?
http://en.wikipedia.org/wiki/Omega_(1989_computer_game)
That was a pretty addicitive game. The script that you program in was a lot like Basic, too.
41. astephens Says:
Negative, though after poking around on Wikipedia I found this. Fascinating.
42. chris Says:
BBC BASIC II is the business. If I have to write pseudocode in papers I like to number the lines 10,20,30,… Unfortunately co-authors always edit these out.
43. Blake Stacey, OM Says:
Carl said:
Obviously, HTML isn’t a “programming language” per se, but I stand by the claim that by introducing kids to the idea that “you can control what computers do by changing some psuedo-English text” it sets them at the head of the path to CS enlightenment.
Granted. However, I think many of the people around here are interested in using computers to teach sciencey things. We want those up-and-coming whippersnappers to explore what equations mean, how to put together a simulation and so forth. (Poke through the Quite BASIC examples to see what I’m getting at; compare also MIT’s StarLogo. Imagine letting middle- or high-school kids see molecules in motion, bouncing inside a cylinder, holding up the weight of a piston. So much more engaging than being handed the Ideal Gas Law on a dusty platter!) HTML might be an easy path to start upon, but considering the destination we care about, it’s a longer journey than it should be.
44. The Quantum Pontiff » Debug First? Says:
[…] After Scott confessed to still programming in BASIC, I had a good time recalling how I first learned to program. My first interaction with a computer and “programming” was through LOGO, that cute little program for programming a little turtle to draw graphics. Drawing cool and crazy pictures was fun! But I don’t really remember learning that as “programming” so much as it was learning geometry (I was in the second grade at the time) and I certainly recall sharing the computer (probably not very sharefully.) But the real way I learned to program was not through LOGO but through debuging! […]
45. Drew Arrowood Says:
When I teach intro to computers to liberal arts college students, I always show them how to use Javascript. The advantage is that hello, world requires only typing hello, world into notepad and saving a file. Typically, I also talk about the Windows Scripting Host , which uses that syntax to do interesting stuff on their machines.
My first programming (1978) was not by typing, however — though there were really neat Lanier word processors in my Dad’s chambers — it was recording 8-Track tapes that would play in a robot that would ask various questions of the user — I had to coordinate the timing just right, with a stopwatch (given to me by my Aunt Mary, who was a “Bosslady” in the Mill), yellow legal pads, and Skillcraft US Government Pens. My machine couldn’t do math, but it sure could pass the Turing Test!
46. agm Says:
Blake, I think it’s GOTO that’s considered harmful…
47. Sujit Says:
Thanks for this post, it brings back good memories. I remember playing Gorilla in sixth grade computing class and learning way more QBasic (and perhaps elementary projectile motion, too) trying to figure out the code than doing whatever it was we were supposed to be doing. These newfangled computer games just don’t cut it – Gorilla was 237 lines of code goodness…
48. jrl Says:
This reminds me of the first time I was expelled from high school. Of course the classical counterpart to QBASIC Gorilla was QBASIC Nibbles. In those days (the mid 90’s) the QBASIC games were loaded off of a Novel server; in any case, we broke into the network, and uploaded a modified Nibbles (the new title was “Nipples” and, well, you can imagine what modifications two teenage boys would come up with to go along with the name change). Innocent joke? Yeah, except we were suspended from school for the rest of the year…
|
{}
|
# How can change in entropy be the same for all processes if the entropy production $\sigma$ is present for irreversible processes?
From the definition of entropy change,
$$S_2-S_1=\left ( \int_{1}^{2} \frac{\delta Q}{T}\right )_{int.rev}$$
From the closed system entropy balance, we have
$$S_2-S_1=\left ( \int_{1}^{2} \frac{\delta Q}{T}\right )_{b}+\sigma$$ where $$\sigma$$ is the entropy produced within the system, vanishing to zero in the absence of irreversibilities. I don't quite understand how the entropy change between two states is the same for all processes. Is it the case that the entropy transfer in the case of internal irreversibilities present is lower than that of the entropy transfer in an internally reversible process between these same two states, and the entropy production makes up for this difference?
• Yes, that is exactly correct. In both cases, the heat transfer takes place at the boundary temperature. What you have stated is the essence of the Clausius inequality. Feb 15 at 4:22
• @ChetMiller Thanks! Feb 15 at 6:31
• Seems the post and commentaries could be restructured and gain a lot from a more elaborate answer. Otherwise this will just "hang" in perpetuity as an unanswered "yes/no" question. Feb 15 at 7:40
• @BuckThorn How should I reconstruct it? I would love someone to expand on it! Feb 15 at 19:34
• Well, I meant that it would be nice to have an open-shut post with a clean answer.@ChetMiller evidently knows his stuff but aimed for conciseness and answered your "yes/no" question with a "yes, you're right" comment.. I think if he could expand on the answer it would be nice, but ultimately the fate of the post is up to him, you and the wisdom of the crowds. Feb 15 at 19:58
When a system changes from state 1 to state 2, the change in entropy of the system is the same for all processes, because entropy is a state function.
The difference between a reversible and an irreversible process is that, in the latter case, there is net entropy production for the universe, i.e., for the system+surroundings.
Since the entropy change for the system is independent of the process, the difference in entropy production between rev. and irrev. processes is seen in the entropy change of the surroundings.
More broadly, since path differences (which in turn means differences in heat flow and/or work flow) have no effect on the final state of the system, they instead always manifest themselves as differences in the final state of the surroundings.
• Makes perfect sense, thank you. I also have another question. Does entropy vary within different areas of a system? Is there like an entropy gradient of some sort? Feb 15 at 20:15
• @CalebWilliamsUIC You'll want to post that as a separate question. Feb 15 at 20:20
• Alright! I will make another question at some point today. Feb 15 at 21:02
The way that you describe it is exactly correct. This is the essence of the Clausius Inequality.
|
{}
|
# The great Bantu expansion was massive
Lots of stuff at SMBE of interest to me. I went to the Evolution meeting last year, and it was a little thin on genetics for me. And I go to ASHG pretty much every year, but there’s a lot of medical stuff that is not to my taste. SMBE was really pretty much my style.
In any case one of the more interesting talks was given by Pontus Skoglund (soon of the Crick Institute). He had several novel African genomes to talk about, in particular from Malawi hunter-gatherers (I believe dated to 3,000 years before the present), and one from a pre-Bantu pastoralist.
At one point Skoglund presented a plot showing what looked like an isolation by distance dynamic between the ancient Ethiopian Mota genome and a modern day Khoisan sample, with the Malawi population about $\frac{2}{3}$ of the way toward the Khoisan from the Ethiopian sample. Some of my friends from a non-human genetics background were at the talk and were getting quite excited at this point, because there is a general feeling that the Reich lab emphasizes the stylized pulse admixture model a bit too much. Rather than expansion of proto-Ethiopian-like populations and proto-Khoisan-like populations they interpreted this as evidence of a continuum or cline across East Africa. I’m not sure if this is the right interpretation of the plot presented, but it’s a reasonable one.
Malawi is considerably to the north of modern Khoisan populations. This is not surprising. From what I have read Khoisan archaeological remains seem to be found as far north as Zimbabwe, while others have long suggested a presence as far afield as Kenya. Perhaps more curiously: the Malawi hunter-gatherers exhibit not evidence of having contributed genes to modern Bantu residents of Malawi.
Surprising, but not really. If you look at a PCA plot of Bantu genetic variation it really starts showing evidence of local substrate (Khoisan) in South Africa. From Cameroon to Mozambique it looks like the Bantu simply overwhelmed local populations, they are clustered so tight. Though it is true that African populations harbor a lot of diversity, that diversity is not necessarily partition between the populations. The Bantu expansion is why.
Of more interest from the perspective of non-African history is the Tanzanian pastoralist. This individual is about 38% West Eurasian, and that ancestry has the strongest affinities with Levantine Neolithic farmers. Specifically, the PPN, which dates to between 8500-5500 BCE. More precisely, this individual was exclusively “western farmer” in the Lazaridis et al. formulation. Additionally, Skoglund also told me that the Cushitic (and presumably Semitic) peoples to the north and east had some “eastern farmer.” I immediately thought back to Hogdson et al. Early Back-to-Africa Migration into the Horn of Africa, which suggested multiple layers. Finally, 2012 Pagani et al. suggested that admixture in the Ethiopian plateau occurred on the order of ~3,000 years ago.
Bringing all of this together it suggests to me two things
1. The migration back from Eurasia occurred multiple times, with an early wave arriving well before the Copper/Bronze Age east-west and west-east gene flow in the Near East (also, there was backflow to West Africa, but that’s a different post….).
2. The migration was patchy; the Mota sample dates to 4,500 years ago, and lacks any Eurasian ancestry, despite the likelihood that the first Eurasian backflow was already occurring.
Skoglund will soon have the preprint out.
|
{}
|
Louis A. Graham
American engineering executive
A diller, A dollar, A witless trig scholar / On a ladder against a wall. / If length over height / Gives an angle too slight / The cosecant may prove his downfall.
TOPICS: humor
|
{}
|
A fabric store sells flannel and calico fabrics. Joan pays $25 for 3 yards of flannel and 4 yards of calico. Chris pays$11 for 1 yard of flannel and 2 yards of calico. What is the price of 1 yard of calico?
Updated On: 17-04-2022
Get Answer to any question, just click a photo and upload the photo
and get the answer completely free,
Text Solution
$3$4$5$6
B
this is a question of fabric store sells flannel and casual fabrics join a $25 for three years of flannel and four years of space$11 for one year of flannel and two years of calcium what is the price of one year of calcium what we need to do is let us take the price of 1 yard of flannel as X and price of 1 yard of calcium as why so according to the question we need to find the value of Y
form some system of equations and help us form find the value of Y so it is told that three hours of planning and four years of cash flow is $25 worth so we can write 3 X + 4 Y equal to 25 and weight was that one year of flannel and two years of Casio is$11 w X + 2 Y + 2 is equal to these two equations that this is equal to 2 why does one have just taken written as X the one is retarded this this this expression from this equation to the value into the first equation
|
{}
|
1. Jun 20, 2009
### evilpostingmong
make much sense to me (I mean, even basic properties). I
<x+y, z> is <x,z>+<y,z> what is the purpose of doing this?
I'm almost completely clueless about inner products.
2. Jun 20, 2009
### Phrak
I'm not a fan of that notation. Are you familiar with upper and lower indices?
3. Jun 20, 2009
### tiny-tim
Hi evilpostingmong!
It's the same rule as for dot-products of ordinary 3D vectors …
(a + b).c = a.c + b.c
4. Jun 20, 2009
### evilpostingmong
No, but if it would help, I'm willing to hear about them, since I'm really stuck.
Don't mean to sound pushy, just in a desperate mood.
5. Jun 20, 2009
### HallsofIvy
What they are saying is that the inner product is linear in the first variable. Don't you think "linear" is an important property in Linear algebra?
6. Jun 20, 2009
### evilpostingmong
Yes, it is important.
7. Jun 20, 2009
### Hurkyl
Staff Emeritus
It's hard to give a good answer without more context (e.g. what is your background? What are you actually studying?), but I'll make a try anyways.
The big overarching incredible point about algebra is that it's not just something you do with numbers. You do also do it with vectors, matrices, sets, geometric shapes -- pretty much anything you would ever want to study can be studied with some algebraic technique.
Linear algebra has a special role, because it is the simplest kind of algebra, and something we understand really, reallly, really well -- and yet it is powerful enough to be useful in a wide variety of situations.
In order to do algebra, we need to know how to manipulate equations. The reason you learn the law $\langle \vec{x}+\vec{y}, \vec{z} \rangle = \langle \vec{x}, \vec{z}\rangle + \langle \vec{y}, \vec{z} \rangle$ for manipulating vectors is exactly the same as the reason you learn the law $a(b+c) = ab + ac$ for manipulating numbers.
8. Jun 20, 2009
### evilpostingmong
Thanks for the input. But what confuses me (I was so confused that
I couldn't even figure out what exactly was confusing..I know) is
that the inner product is supposed to be a scalar but when
computing it <x+y, z>=<x,z>+<y,z> <x,z> and <y,z> look like
bases containing vectors, not scalars. I guess that's why
Phrak doesn't like this notation. Those don't look like scalars, so I
don't really know what x or y or z are. Unless that "," means multiplication.
Oh btw to answer Hurky's question, as far as linear algebra is concerned.
I know matrix arithmetic, vector spaces, linear transformations, and "eigenstuff".
9. Jun 20, 2009
### Hurkyl
Staff Emeritus
If inner products are scalar-valued...
And <x,z> denotes an inner product of x with z...
Then <x,z> is a scalar.
I'm not sure what you mean by "bases containing vectors".
10. Jun 20, 2009
### evilpostingmong
It's alright, you know how when you want to write out a basis
for a vector space of dim n you'd put <v1...vn>
Oh hold on x+y and z are vectors but you performed the cross
product to get xy and xz, which are scalars.
But why couldn't they write it as <x*y> or <x*z> isn't that more obvious?
I'm not blaming you or anyone else on this forum, so don't worry about that.
Wait, I thought of something. Take vector u to be a row [1 2] and v to be a column [3 4]
so the result is [4 6] after multilpication so I end up with (for <u, v>) u is the scalar 4 and v
is the scalar 6.
Last edited: Jun 20, 2009
11. Jun 21, 2009
### Hurkyl
Staff Emeritus
(Convention for this post: elements of Rn are treated as if they were nx1 matrices)
There aren't that many symbols useful for a binary operation. Asterisks (i.e. '*') are annoying to draw by hand, if you have to do it a lot. The dot (i.e. '$\cdot$') is good, but people often reserve that for one specific inner product: the dot product on Rn; i.e. the one given by $\vec{v} \cdot \vec{w} = v^T w$.
It's awkward to write the dot product as if it were ordinary multiplication, because 'ordinary multiplication' notation is already overused in linear algebra. e.g. if r is a real number, $\vec{v}$ is a vector, and A,B are matrices of the right shape, and T a linear transformation of the right domain, we have
* Scalar multiplication: $r \vec{v}$
* Scalar multiplication: $r A$
* Matrix-matrix product: $A B$
* Matrix-vector product: $A v$
* Applying a transformation: $T v$
I think it should be preferable to let ordinary multiplication denote only those 'products' that are either scalar multiplication or come from matrix arithmetic (or similar).
Another advantage to some sort of bracket notation (e.g. $\{ a, b \}$ or $\langle a, b \rangle$ or $[ a, b ]$) is that it can be more easily annotated. If you're working with two different inner products, we can label one G and the other H, and write $\langle \vec{v}, \vec{w} \rangle_G$ and $\langle \vec{v}, \vec{w} \rangle_H$ to tell them apart.
How did the notation actually originate? I don't know. I can speculate, though: I bet the it was originally written as an ordinary binary function: e.g. g(x,y). People got tired of writing g all the time, because you often work with only one inner product at a time, so it got shortened to (x,y). But it can be confusing to use parentheses, so they switched to angle brackets $\langle x, y \rangle$.
12. Jun 21, 2009
### HallsofIvy
I hope you are not confusing <x, y> with {x, y}! If so, you need to have your vision checked. x, y, and z are vectors here. So is x+ y. <x+y, z>, <x, z>, and <y, z> are all inner products of vectors and so are scalars. <x, z>+ <y, z> is a sum of scalars, equal to the scalar <x+ y, z>.
I presume that the first statement in the definition of "inner product" in your textbook is that the inner product "is a function from VxV to the underlying field": that is, that the inner product, symbolized by < , >, takes two vectors, say x and y, and changes them to the scalar <x, y>. From that <x, z>, <y, z>, and <x+ y, z> certainly should "look like scalars"! Anything of the form <u, v> is a scalar.
Last edited by a moderator: Jun 21, 2009
13. Jun 21, 2009
### tiny-tim
Hi Hurkyl!
I always thought Dirac invented the angle bracket notation, so that he could put straight dividers inside, and called them bra and ket …
or did they already exist, and he only gave them the humorous name?
14. Jun 21, 2009
### Hurkyl
Staff Emeritus
This is how I thought history went. (And also other bracket operators pre-existed, like the Poisson bracket) But I don't have great confidence in my knowledge of such things. Your version is entirely plausible.
15. Jun 21, 2009
### evilpostingmong
Oh, its ok I have a book that uses <> for bases as well, but I guess it was a stupid
idea for the book to use <> for bases since it would confuse students who are
learning linear algebra. But you have shown me what they are normally used for,
thank you very much! And thanks Hurkyl for solving my , and * dilemma!
Thank you all for responding!
Oh btw Hurkyl, when you say "normal multiplication" you mean take two vectors and multiply them
like this uv as opposed to dot product, which is uTv which is what "," denotes, right?
Last edited: Jun 21, 2009
|
{}
|
## Wednesday, April 13, 2016 ... /////
### Can Milner's Starshot reach Alpha Centauri?
This blog post is all about Yuri Milner's breathtakingly audacious $100 million plan to send a probe to Alpha Centauri. Milner has presented the plan yesterday, exactly 55 years after Gagarin's pioneering flight (good morning, Major Gagarin; Milner was born 7 months later and named Yuri after Gagarin), and famous minds like Dyson and Hawking were assisting him. Mark Zuckerberg should also play a role, probably a financial one. The idea is just a little bit less breathtaking if you think that it's more like than not that the project will fail. But it could very well succeed, too! Constellation Centaurus visualized by the state-of-the-art Greek technology. On this "Greek Week" in Lidl, I bought another ouzo which has always the same fun licorice-like taste and also a very cheap bottle of wine. Even I could figure out that it tastes bad. ;-) Alpha Centauri is a triplet of stars in Constellation Centaurus. All of them are 4.2-4.4 light years away. Alpha Centauri A,B are brighter and close enough to look like one object. Alpha Centauri C, a faint red dwarf, is dimmer and slightly closer to Earth; as the second closest star after the Sun (and #1 if you fail to realize that the Sun is a star, and many people did and still do), it's known as Proxima Centauri. Milner wants to pay$100 million and get a probe there which will get there in 20 years, take 2-megapixel pictures, and send them back to Earth, ideally around 2041. Wow. ;-)
As you can see, to travel 4 light years in some 20 years, you need the speed $c/5$, and we're therefore in the realm of "relativistic science-fiction". How do you reach speeds comparable to the speed of light? We don't even have fusion reactors yet. Milner proposes to make the probe extremely light, a few grams. It should look approximately like this:
And with this small weight, you may accelerate the probe by light. Don't forget that at the speed of $c$, the momentum of a photon is $p=E/c$. When a photon gets reflected, it changes the sign and deposits $2p=2E/c$ of momentum to the kite. The momentum change per unit time is the force so the force is $2P/c$ where $P$ is the power of the laser.
The kite should be very thin – perhaps hundreds of atoms – and have electronics including lasers to beam the images etc. This gadget must be able to reflect laser light, think, take pictures, and send information to Earth through its own laser beam whose direction has to be rather accurate. I am not even sure that it's physically possible to make the beam so sharp assuming the tiny width.
Needless to say, while we have lots of worthless "professors of women's studies" and similar garbage, the civilization isn't paying any true "professors of interstellar travel". So the people doing this stuff are at most "very smart and successful readers of science-fiction with some physics and engineering background and perhaps some engineering experience", like Milner. But maybe this human capital is all that you need.
Every day, one kite should be encouraged to leave the Milner Kite Mothership somewhere on the orbit.
Picture of a group known in Czech as "Zvídálkové" which is such a cute way to construct a word meaning "Curious Kids" that only the Czech language can do such things.
Now, what happens to the kite is dramatic. A very powerful laser beam somewhere on the Southern Hemisphere (with adaptive optics etc.) is directed to the kite – no one will allow you a space-based powerful laser shooting in all directions – and accelerates the kite to $c/5$ in a few minutes. In less than an hour, the kite meets Mars, and it greets Pluto on the following day.
It gets to the Alpha Centauri region and must very quickly find out what to photograph. Also, there could be problem with blurry pictures because you know the problems if you move your camera a little bit – and imagine that the speed is 60,000 kilometers per second. OK, the probe optimally hits a blue Earth-like planet, takes pictures of their continents and the ETs' bridges, and sends the visual information by a laser to the Earth. The signal from that laser gets diluted and is barely detectable within the Cosmic Microwave Background but we succeed.
It sounds easy. Milner should have revealed the plan earlier, Zvídálkové could have completed all the details of this straightforward project. ;-)
OK, let's calculate some basic numbers. We want to accelerate a few grams to $c/5$. The kinetic energy may still be "barely" computed by the non-relativistic formula and it is $E\sim m c^2 / 50$. If $m$ were 5 grams, we get 9 trillion joules. Is that right? We need to get these trillions of watts from the "powering laser" on Earth to the kite in a few minutes. It's obviously possible to make a strong enough laser – the world's strongest laser has some 2,000 trillion watts (in Osaka, Japan), some 5 orders of magnitude stronger than what is needed.
One kilowatthour costs about \$0.20 here and is equal to 3.6 million joules everywhere. Now, $9\times 10^{12}/3.6\times 10^{6}$ is equal to 2.5 million or so. It's 2.5 million kilowatthours. Not a problem, Milner should pay half a million dollars for the electricity bill for one kite and maybe he gets a cheaper deal from ČEZ at night.
It's being said that the acceleration to $c/5$ takes several minutes, let's say 3 minutes or 180 seconds. The acceleration in the quantitative sense is therefore equal to $a=v/t=c/900$ per second. It's over 300,000 meters per squared second or 30,000 $g$. You don't want to experience it with your stomach. Maybe the kite is much more resilient.
The integrity of the kite seems like the most serious problem. The gadget must survive the interstellar travel even though there may be some dust on the way – isn't it guaranteed that some dust must break it on the trajectory? Maybe Milner wants to turn the kite in the thin direction – for it to become a knife cutting the interstellar space – during much of the 20-year-long flight to minimize the "cross section". That could be another reason why the "thin design" could be a good idea. And the kite must also survive the extreme acceleration I mentioned – and the photon pressure from the terrestrial laser is unlikely to act quite uniformly. It's another reason why the uniform, thin design may be needed. The acceleration of 30,000 $g$ would almost certainly break the gadget if the 2D mass density were highly non-uniform.
On top of that, I have some worries that wave optics prohibits the accurate laser beams that come from these unusually thin gadgets.
I am ready to bet that the technology won't be completed by the end of the year. ;-) But the broader technology is arguably physically possible and there should be people working on such extreme things. What makes me think that it will require some time is the fact that we don't actually seem to have even pieces of the project that is needed.
People haven't really used the photon pressure to accelerate reflective sheets to modest speeds, let alone relativistic speeds. People haven't sent any thin objects to space. People haven't built superthin lasers. And so on. So all these things have to materialize separately and only afterwards, they may be combined to the Starshot project. In this sense, I feel that the required research isn't pure engineering at this moment. It's a lot of possibly nontrivial and non-straightforward applied physics.
At any rate, I wish Milner lots of good luck. He – and the mankind – will need many of these lots. ;-)
|
{}
|
# Time-series compression algorithms, explained
Delta-delta encoding, Simple-8b, XOR-based compression, and more - These algorithms aren't magic, but combined they can save over 90% of storage costs and speed up queries. Here’s how they work.
Computing is based on a simple concept: the binary representation of information. And as computing infrastructure has gotten cheaper and more powerful, we have asked it to represent more and more of our information, in the form of data we collect (which is often time-series data).
But computing is not free. The more efficiently we can represent that information, the more we can save on storage, compute, and bandwidth. Enter compression: “the process of encoding information using fewer bits than the original representation.” (source)
Compression has played an important role in computing for several decades. As a concept, compression is even older: “Morse code, invented in 1838, is the earliest instance of data compression in that the most common letters in the English language such as “e” and “t” are given shorter Morse codes.” (source)
In this post, we set out to demystify compression. To do this, we explain how several lossless time-series compression algorithms work, and how you can apply them to your own projects.
We also explain how we implement them in TimescaleDB, the first open-source relational database to use these time-series compression algorithms, and achieve 90%+ storage efficiencies. [1]
[1] We use the term “we” throughout this article to represent the engineers who developed this capability: Josh Lockerman, Matvey Arye, Gayathri Ayyapan, Sven Klemm, and David Kohn.
## Why compression matters for time-series data
We all collect time-series data. Whether we are measuring our IT systems, web/mobile analytics, product usage, user behavior, sensor/device data, business revenue, etc., time-series data flows through our data pipelines and applications, and enables us to better understand our systems, applications, operations in real-time.
One of the challenges of time-series data is storage footprint. In order to analyze data over time, we insert new data (i.e., instead of updating existing data) on every measurement. Some time-series workloads also have high insert rates (e.g., IT monitoring, IoT sensor data). As a result, time-series datasets often scale well into the terabytes and more.
In order to achieve high-performance while maintaining resource efficiency, we first identified several best-in-class time-series compression algorithms, and then implemented them in TimescaleDB.
These algorithms are quite powerful. According to our users, these algorithms have helped them achieve 90%+ lossless compression rates. This translates into 90%+ storage cost savings, which can mean thousands of dollars (and in some cases, tens of thousands of dollars) of savings per year. These algorithms also lead to compute performance improvements: as more data fits in less space, fewer disk pages need to be read to answer queries. [2]
[2] A 10TB disk volume in the cloud is more than $12,000 per year itself (at$0.10/GB/month for AWS EBS storage), and additional HA replicas and backups can grow this number by another 2-3x. Achieving 95% storage can save you over $10K-$25K per year in storage costs alone ($12K/10TB * 10TB/machine * 2 machines [one master and one replica] * 95% savings =$22.8K).
## What are these magical time-series compression algorithms?
First of all, they’re not magic, but clever computer science techniques. Here are the set of compression algorithms we'll explain, grouped by data type:
Integer compression:
• Delta encoding
• Delta-of-delta encoding
• Simple-8b
• Run-length encoding
Floating point compression:
• XOR-based compression
Data-agnostic compression:
• Dictionary compression
## Integer compression
### Delta-encoding
Delta-encoding (also known as Delta compression) reduces the amount of information required to represent a data object, by only storing the difference (or delta) between that object and one or more reference objects. These algorithms work best where there is a lot of redundant information: for example, in versioned file systems (e.g., this is how Dropbox efficiently syncs your files).
Applying delta-encoding to time-series data makes a lot of sense: we can use fewer bytes to represent a data point by only storing the delta from the previous data point. (In fact, given enough coffee and time, we would argue that versioned file systems themselves are time-series datasets, but we’ll save that discussion for another time.)
For example, imagine we were collecting a dataset that collected CPU, free memory, temperature, and humidity over time (time stored as an integer value, e.g., # seconds since UNIX epoch).
Under a naive approach, we would store each data point with its raw values:
| time | cpu | mem_free_bytes | temperature | humidity |
|---------------------|-----|----------------|-------------|----------|
| 2020-04-01 10:00:00 | 82 | 1,073,741,824 | 80 | 25 |
| 2020-04-01 10:05:00 | 98 | 858,993,459 | 81 | 25 |
| 2020-04-01 10:05:00 | 98 | 858,904,583 | 81 | 25 |
With delta-encoding, we would only store how much each value changed from the previous data point, resulting in smaller values to store:
| time | cpu | mem_free_bytes | temperature | humidity |
|---------------------|-----|----------------|-------------|----------|
| 2020-04-01 10:00:00 | 82 | 1,073,741,824 | 80 | 25 |
| 5 seconds | 16 | -214,748,365 | 1 | 0 |
| 5 seconds | 0 | -88,876 | 0 | 0 |
Now, after the first row, we are able to represent subsequent rows with less information.
Applying Delta-encoding to time-series data takes advantage of the fact that most time-series datasets are not random, but instead represent something that is slowly changing over time. The storage savings over millions of rows can be pretty substantial, especially when the value doesn’t change at all.
### Delta-of-delta encoding
Delta-of-delta encoding (also known as “delta-delta encoding”), takes delta encoding one step further: it applies delta-encoding a second time over delta-encoded data. With time-series datasets where data collection happens at regular intervals, we can apply delta-of-delta encoding to the time column, effectively only needing to store a series of 0’s.
Applied to our example dataset, we now get this:
| time | cpu | mem_free_bytes | temperature | humidity |
|---------------------|-----|----------------|-------------|----------|
| 2020-04-01 10:00:00 | 82 | 1,073,741,824 | 80 | 25 |
| 5 seconds | 16 | -214,748,365 | 1 | 0 |
| 0 | 0 | -88,876 | 0 | 0 |
In this example, delta-of-delta further compresses “5 seconds” down to “0” for every entry in the time column after the second row. (Note that we need two entries in our table before we can calculate the delta-delta, because we need two delta’s to compare.)
This compresses a full timestamp (8 bytes = 64 bits) down to just a single bit (64x compression). (In practice we can do even better by also applying Simple-8b + RLE. More below.)
In other words, delta-encoding stores the first derivative of the dataset, while delta-of-delta encoding stores the second derivative of the dataset.
### Simple-8b
With Delta (and delta-of-delta) encoding, we’ve reduced the number of digits we needed to store. Yet we still need an efficient way to store these smaller integers. Here’s an example that illustrates why: Say, in our previous example, we still use a standard integer datatype (which takes 64 bits on a 64-bit computer) to represent the value of “0” when delta-delta encoded. Thus, even though we are storing “0”, we are still taking up 64 bits – we haven’t actually saved anything.
Enter Simple-8b, one of the simplest and smallest methods of storing variable length integers.
In Simple-8b, the set of integers is stored as a series of fixed-size blocks. For each block of integers, each integer is represented in the minimal bit-length needed to represent the largest integer in that block. The first bits of each block denotes that minimum bit-length for the block.
This technique has the advantage of only needing to store the length once for a given block, instead of once for each number. Also, since the blocks are of a fixed-size, we can infer the number of integers in each block from the size of the integers being stored.
As an example, say we were storing the changing temperature over time, applied delta encoding, and ended up needing to store this set of numbers:
| temperature (deltas) |
|----------------------|
| 1 |
| 10 |
| 11 |
| 13 |
| 9 |
| 100 |
| 22 |
| 11
In other words, our data looks like this:
1, 10, 11, 13, 9, 100, 22, 11
With a block size of 10 digits, we could store this set of numbers in a Simple-8b-like scheme as two blocks, one storing 5 2-digit numbers, and a second storing 3 3-digit numbers.
{2: [01, 10, 11, 13, 09]} {3: [100, 022, 011]}
As you can see, both blocks store about 10-digits worth of data, even though some of the numbers have to be padded with a leading ‘0’. (Note in our example, the second block only stores 9 digits, because 10 is not evenly divisible by 3).
Simple-8b works very similarly, except it uses binary numbers instead of decimal ones, and generally uses 64-bit blocks. In general, the larger number length, the fewer number of numbers that can be stored in each block:
Additional reading: Decoding billions of integers per second through vectorization includes an even more detailed description of how Simple-8b works.
### Run-length encoding (RLE)
Simple-8b compresses numbers very well, using approximately the minimal number of digits for each number. However, in certain cases we can do even better.
We can do better in cases where we end up with a large number of repeats of the same value. This can happen if the value does not change very often, or (if you recall from the delta and delta-delta section) if an earlier transformation removes the changes.
As one example, consider our delta transformation of the time and humidity values from earlier. Here, the time column value repeats with “5”, and the humidity column with “0”:
| time | cpu | mem_free_bytes | temperature | humidity |
|---------------------|-----|----------------|-------------|----------|
| 2020-04-01 10:00:00 | 82 | 1,073,741,824 | 80 | 25 |
| 5 seconds | 16 | -214,748,365 | 1 | 0 |
| 5 seconds | 0 | -88,876 | 0 | 0 |
To see how we can do better in representing these repeats of the same value, let’s actually use a less simplified example. Say we were still storing the changing temperature over time, we again applied delta-encoding, but now ended up with this set of numbers:
| temperature (deltas) |
|----------------------|
| 11 |
| 12 |
| 12 |
| 12 |
| 12 |
| 12 |
| 12 |
| 1 |
| 12 |
| 12 |
| 12 |
| 12 |
In other words, our data now looks like this:
11, 12, 12, 12, 12, 12, 12, 1, 12, 12, 12, 12
For values such as these, we do not need to store each instance of the value, but merely how long the run, or number of repeats, is. We could store this set of numbers as {run; value} pairs like this:
{1; 11}, {6; 12}, {1; 1}, {4; 12}
This technique only takes 11 digits of storage ([1, 1, 1, 6, 1, 2, 1, 1, 4, 1, 2]), as opposed to the approximately 23 digits that an optimal series of variable-length integers would require ([11, 12, 12, 12, 12, 12, 12, 1, 12, 12, 12, 12]).
This is Run-length encoding (RLE), which is one of the classic compression algorithms (along with Dictionary compression, discussed later). If you see compression with seemingly absurd ratios -- e.g., fewer than 1 bit per value -- run-length-encoding (or a similar technique) is probably being used. Think about a time-series with a billion contiguous 0’s, or even a document with a million identically repeated strings: both run-length-encode quite well.
RLE is used as a building block in many more advanced algorithms: e.g., Simple-8b RLE, an algorithm that combines both techniques.
In practice, in TimescaleDB we implement a variant of Simple-8b RLE, where we detect runs on-the-fly, and run-length-encode if it would be beneficial. In this variant, we use different sizes than standard Simple-8b, in order to handle 64-bit values, and RLE. More information (and the source code) for the variant we built into TimescaleDB can be found here.
## Floating point compression
### XOR-based compression
Gorilla, an in-memory time-series database developed at Facebook (and research paper published in 2015), introduced two compression techniques that improve on delta-encoding. The first is delta-of-delta encoding for timestamps, which we already covered above. Here we cover the second, XOR-based compression, which is something that typically applies to floats. (Note: Developers will often refer to “Gorilla compression”, which is generally at least one, if not both, of these techniques.)
Floating point numbers are generally more difficult to compress than integers. Unlike fixed-length integers which often have a fair number of leading 0s, floating point numbers often use all of their available bits, especially if they are converted from decimal numbers, which can’t be represented precisely in binary. [3]
[3] Decimal is in base-10, while floats are in base-2 (binary). This means that decimal numbers can be built out of a series of base-10 fractions, like a/10 + b/100 + c/1000 ... etc, while floats are built out of base-2 fractions like a/2 + b/4 + c/8 ... etc. The numbers representable as sums of these different fractions don't completely overlap, so decimal numbers are rounded to the nearest value in binary. This causes some numbers that are simple to represent in base-10, like 93.9, to get represented in a float as much more complicated numbers, which gets represented as approximately 93.90000152587890625 in binary.
Furthermore, techniques like delta-encoding don’t work well for floats, as they do not reduce the number of bits sufficiently. For example, in our example above, if we stored CPU as a double, the delta remains a number with many significant digits:
| time | cpu |
|-------|--------------|
| t1 | 0.8204859587 |
| t2 | 0.9813528043 |
| DELTA | 0.1608668456 |
Due to these challenges, most floating-point compression algorithms tend to be either complex and slow, or lossy (e.g., by truncating significant digits). (It’s important when evaluating compression algorithms to distinguish between lossless and lossy compression: for example, in the above example, if we truncate the cpu float values to two significant digits, the delta of 0.16 is already much smaller, but we have lost information.)
One of the few simple and fast lossless floating-point compression algorithms is the XOR-based one that the Gorilla paper applies, with very good results:
“We addressed this by repurposing an existing XOR based floating point compression scheme to work in a streaming manner that allows us to compress time series to an average of 1.37 bytes per point, a 12x reduction in size.” (source)
In this algorithm, successive floating point numbers are XORed together, which means that only the different bits are stored. (Quick primer on how a binary XOR operation works.)
The first data point is stored with no compression. Subsequent data points are represented using their XOR’ed values, encoded using a bit packing scheme covered in detail in the paper (and also neatly diagrammed in this blog post).
According to Facebook, with this compression algorithm, over 50% of floating point values (all doubles) were compressed to a single bit, ~30% to 26.6 bits, and the remainder to 39.6 bits. (Reminder, a double is 64 bits).
## Data-agnostic compression
### Dictionary compression
One of the earliest lossless compression algorithms, Dictionary compression (in particular, LZ-based compression) is the ancestor of many compression schemes used today, including LZW (used in GIF) and DEFLATE (used in PNG, gzip).
(As a general concept, dictionary compression can also be found in areas outside of computer science: e.g., in the field of medical coding.)
Instead of storing values directly, Dictionary compression works by making a list of the possible values that can appear, and then just storing an index into a dictionary containing the unique values. This technique is quite versatile, can be used regardless of data type, and works especially well when we have a limited set of values that repeat frequently.
For example, say we had an additional column in our time-series dataset storing city location for each measurement:
| City |
|---------------|
| New York |
| San Francisco |
| San Francisco |
| Los Angeles |
| ⋮ |
Instead of storing all the City names directly, we could instead store a dictionary, such as {0: “New York”, 1: “San Francisco”, 2: “Los Angeles”, ...} and just store the indices [0, 1, 1, 2, ...] in the column.
For a dataset with a lot of repetition, like the one above, this can offer significant savings. In the above dataset, each city name is on average 11 bytes in length, while the indices are never going to be more than 4 bytes long, reducing space usage nearly 3x. (In TimescaleDB, we compress the list of indices even further using the Simple8b+RLE scheme described earlier, making the storage cost even smaller.)
Like all compression schemes, Dictionary compression isn’t always a win: in a dataset with very few repeated values, the dictionary will be the same size as the original data, making the list of indices into the dictionary pure overhead.
However, this can be managed: in TimescaleDB, we detect this case, and then fall back to not using a dictionary in those scenarios.
## Compression in practice
TimescaleDB is an open-source time-series database, engineered on PostgreSQL, that employs all of these best-in-class compression algorithms to enable much greater storage efficiency for our users (over 90% efficiency, as mentioned earlier).
TimescaleDB deploys different compression algorithms, depending on the data type:
• Delta-of-delta + Simple-8b with run-length encoding compression for integers, timestamps, and other integer-like types
• XOR-based compression for floats
• Whole-row dictionary compression for columns with a few repeating values (plus LZ compression on top)
• LZ-based array compression for all other types
In particular, we extended classic XOR-based compression and Simple-8b so that we could decompress data in reverse order. This enables us to speed up queries that use backwards scans, which are common in time-series query workloads. (For super technical details, please see our compression PR.)
We have found this type-specific compression quite powerful: In addition to higher compressibility, some of the techniques like XOR-based compression and Delta-of-delta can be up to 40x faster than LZ-based compression during decoding, leading to faster queries.
In addition, if we have data for which none of the aforementioned schemes work, we store the data directly in a specialized array. Doing this provides two notable benefits.
• For one, we can compress both the nulls-bitmap of the array, and the sizes of the individual elements using our integer-compression schemes, getting some small size savings.
• Another benefit is that this allows us to convert many (e.g., 1000) database rows into a single row. PostgreSQL stores a non-negligible overhead for each row stored (around 48 bytes last we checked), so for narrow rows, removing that overhead can be a non-trivial savings.
If you’d like to test out TimescaleDB and see compression in action, you can get started here. And if you have any further questions, please join our community on Slack.
|
{}
|
# Tag Info
33
I will expand on DKNguyen answer, because to my knowledge also the two reasons are: reduce contact/bearing stresses (having a significant effect on thin finishes live galvanisation) change the joint tightening characteristics (see joint diagram). reduce contact stresses on surfaces. The basic idea is that since contact stress is defined as: \sigma = \...
27
It is for spreading out the stress. But it is also for giving the bolt a bearing surface to turn on. The washer always goes on the side (nut or bolt) that is being turned. It prevents it from marring up the work surface and also changes the tightening characteristics. I don't know the specifics of that though but that's what I was told by a toolmaker. Always ...
20
My first thought is that it might be intended to be a wing nut driver of some sort, but those are usually hollow cylinders with slots for the wings. Ah ... sure enough, it's described as such in this Ebay ad:
16
Except for special applications, most washers are made of dead soft steel, which deforms under the compressive load imposed by a tightened bolt head. As the washer smooshes, it minimizes stress concentrations caused by bumps under the bolt head and surface flaws in the part the bolt is running through.
15
To visualize part of Nmech's answer: in the image, the washer actually greatly increases the contact area of the bolt head. The bolt head looks pretty big: But most of that is the shaft, which obviously does not spread out load on the material. So the actual contact area looks like this: Comparatively, the bolt head on the washer looks like this: That's a ...
10
As Dave Tweed points out, the ratio of torque to tension is lower the lower the lead angle is. Since the important measure of bolt tightness is generally the tension in the bolt, we want to achieve that minimum pretension with the least effort possible. Assuming we have to maintain a certain shear area of the thread (so the the threads are stronger than the ...
9
For any given size of fastener and given thread pitch, a single-start thread gives you the greatest mechanical advantage in terms of the torque required to achieve a given tension. Aside from acme threads that are often used on leadscrews for mechanical motion (e.g., CNC machinery), the only other place I have seen multiple-start screws is on self-tapping ...
8
When dealing with pressure vessels, you should not rely on rules of thumb. You should rely instead on the ASME Boiler and Pressure Vessel Code or whatever code is required by your local enforcement or regulatory body. There are far too many variables for a one-size-fits-all answer. The design pressure/temperature, materials of construction, flange ...
7
Not all cap nuts are self locking, some are, although they can be a bit weird looking. Nylocs are generally the first choice for general use as they provide good resistance to loosening due to vibration or flexing of the joint and will stay in position on a thread even when not under tension (unlike spring washers). Unlike locking adhesives they can be ...
6
Oxidation, dried lubricant, contaminants, rust - all that acts as a layer of glue binding the elements. As you apply a firm torque, you snap that bond. It takes a considerable time to re-form.
6
Granger Supply makes two different types: The round one is called A Steel Shaft Collar, Clamp, Threaded . The second one is called a "CADDY THREADED ROD NUT". They come in a number of sizes.
6
I think you might have a misconception regarding to how far the pressure from the fasteners extends. One subject you might want to have a look into is "bolt joint stiffness". The most popular is the "Rotscher’s pressure-cone method". Essentially there is a pressure cone which radiates outwards with an pressure cone angle a. According to ...
6
Another important part of the answer is the symmetry of the stress pattern. The stress caused by a bolt head varies greatly between the points of the bolt head and the straight sides. As a result local stresses, which are what you really care about because those are what the materials have to withstand, can be much higher than the average stress. A washer's ...
5
There are a few ways to do this obviously it's easier to clean a through hole than a blind one. Run a clean second or plug tap through the hole. If you keep a tap just for this purpose it also has the benefit of keeping one 'fresh' tap which clean clean up any burrs of defects from worn taps when initially cutting the thread. Flush the hole with cutting ...
5
A Nyloc nut is a form of locknut. The purpose is to prevent unintended loosening. A cap nut is not necessarily a locknut. You can get cap nuts that have some form of locknut on the threads; but in general this is not a standard feature of all cap nuts.
5
You are correct, this connection style is called a bayonet. It is secured by pushing the two mating parts together and rotating (usually clockwise) a small amount. There is generally chamfer on the male side (shown at the tip of your arrow in the second picture) which allows the two parts to locate easily and when rotated, this chamfer pulls the two parts ...
4
I've always referred to this style of female fastener as a "T-Nut"
4
It can also be used for fastening the 'eye' of hook and eye fasteners into wooden frames.
4
Under high pressure and high heat metal bonds by a process called diffusion. Where the atoms of the two parts intersperse over time. This and oxidation, slow chemical burning of unintentional matrial and debris between the two parts, cause a strong bond that many times cuase the fastener to break before it can be removed.
4
*Figure 1. For McMaster-Carr 'Lg' is the length of the screw." Note that they specify the threaded length separately so Lg will be the distance from the bottom of the head to the tip of the screw.
4
Metric bolts use thread pitch in millimeters. Your M3 screw likely uses the standard pitch of 0.5 millimeters. One complete rotation of the screw will advance the screw into the work piece by that amount. According to the Bolt Depot chart, there is only one standard pitch for M3 screws, although other sizes will have standard, fine, super fine pitches ...
4
No, they are usually fitted once and left. If you need to remove them then you need to drill new holes. Or you should consider fitting a wooden framework to the concrete and attaching to that so it can be easily removed and refitted, without disturbing the concrete fixings.
3
In 1841 Sir Joseph Whitworth produced a paper on a universal system of screw threads. He then collected a variety of screws and proposed a universal thread using their average pitch and depth. The result was the 'Whitworth thread' with the depth and pitch of constant proportion, giving the 'V' thread a mean angle of 55 degrees and the number of threads per ...
3
I have always used the rule of thumb that the bolt has to be engaged at least one bolt diameter. This partly comes from looking at structural nuts. Structural nuts develop the full strength of the bolt in tension. These nuts typically engage about one bolt diameter.
3
A standard bolt will stretch when tightened. This stretching increases force on the threads of both the bolt and hole and thus increases the force required to overcome static friction and loosen the bolt. If we exclude other materials and chemical changes to the surfaces then that snap that you experience is the moment you overcome that static friction ...
3
You are on track with your beam calculations, but I doubt you will be happy with the results, just because a 3-D printed part is unlikely to act the way a cast or milled solid plastic part would. You would be well served to just buy aluminum bar stock and cut it with a hack saw. As to the screw force produced, I doubt you will be happy with that either. ...
3
A metric thread table is what you want. Table 1. Source: Anzor. So is there a standard rule of thumb, or equation about the distance the end of a screw will travel per rotation? It's not a rule of thumb - it's defined by the pitch. Conversely, how many turns it will take to travel a specific distance? Turns required = distance / pitch. Say I have ...
3
You could have a 8mm hole drilled and reamed slightly undersize for a press fit 8mm dowel pin and use a 8mm/8mm swivel clamp which would let you set the angle and height as desired (photo from McMaster).
2
Typing "screws" in the search field of the Bopla website finds this link. It looks like the screw you need is has a shank diameter of 1.7–1.8 mm and a thread diameter of 2.2 mm. The head is rounded.
2
I am not aware of anyone that makes such a beast nor have I seen one in the wild. What I have seen is head collars. collars are a tapered washer normally made of nylon but also available in metal that are designed to accommodate mismatched head taper. As they have no threads they can be used with multiple screws. Somewhere I have a box of them for #8 screws. ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
# Chapter 4 - Exponential and Logarithmic Functions - Exercise Set 4.4: 13
$x=4$
#### Work Step by Step
We are given the exponential equation $3^{1-x}=\frac{1}{27}$. We can express each side using a common base and then solve for $x$. $3^{1-x}=3^{-3}=\frac{1}{3^{3}}\frac{1}{27}$ Take the natural log of both sides. $ln(3^{1-x})=ln(3^{-3})$ $(1-x)ln(3)=-3ln(3)$ Divide both sides by $ln(3)$. $1-x=-3$ Subtract 1 from both sides. $-x=-4$ Divide both sides by -1. $x=4$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
{}
|
# What does this connection between Chebyshev, Ramanujan, Ihara and Riemann mean?
It all started with Chris' answer saying returning paths on cubic graphs without backtracking can be expressed by the following recursion relation:
$$p_{r+1}(a) = ap_r(a)-2p_{r-1}(a)$$
$$a$$ is an eigenvalue of the adjacency matrix $$A$$. Chris mentions Chebyshev polynomials there. It was Will who found the generating function for the given recursion to be:
$$G(x,a)=\frac{1-x^2}{1-ax+2x^2}$$
and just recently Hamed put Chebyshev back on the table:
$$\frac{1-x^2}{1-ax +2x^2} \xrightarrow{x=t/\sqrt{2}}\frac{1-\frac{t^2}2}{1-2\frac{a}{\sqrt 8} t+t^2}=\left[1-\frac{t^2}{2}\right]\sum_{r=0}^\infty U_r\left(\frac{a}{\sqrt{8}}\right)t^r\\ =\sum_{r=0}^\infty \left(U_r\left(\frac{a}{\sqrt{8}}\right)-\frac12 U_{r-2}\left(\frac{a}{\sqrt{8}}\right)\right)t^r\\$$ $$\Rightarrow p_r(t)=\begin{cases}1 & \text{if r=0,}\\ 2^{r/2}U_r(t/\sqrt{8})-2^{(r-2)/2}U_{r-2}(t/\sqrt{8}) & \text{if r\ge1.}\end{cases}$$ The final line as taken from Will's community answer...
My question how to relate Ihara's $$\zeta$$ function and Chebyshev seems therefore mostly settled, but...:
Is it just a funny coincidence that the scaling factor of $$\sqrt 8$$ coincides with $$\lambda_1\leq 2\sqrt 2$$, which is the definition of cubic Ramanujan graphs.
And, there is another interesting thing:
As observed by Sunada, a regular graph is a Ramanujan graph if and only if its Ihara zeta function satisfies an analogue of the Riemann hypothesis.
What does this connection between Chebyshev, Ramanujan, Ihara and Riemann mean?
EDIT
I thought maybe something like a corollary could be possible:
1. For Ramanujan graphs, the Ihara $$\zeta$$ function can be related to Chebyshev functions of the second kind, since the scaled eigenvalues of $$A$$ lie inside the range of convergence.
2. A Ramanujan graph $$G$$ obeys the Riemann Hypothesis.
3. Roots of the Ihara $$\zeta$$ function lie on the critical strip.
• The bunch of people above have contributed to $$1\leftarrow 2$$.
• $$2 \leftrightarrow 3$$ is proven here: Eigenvalues are of the form $$\lambda=2\sqrt 2\cos(b\log 2)$$
• $$3\overset{\rightarrow ?}{\leftarrow} 1$$ would be nice...
## migrated from math.stackexchange.comDec 24 '15 at 15:45
This question came from our site for people studying math at any level and professionals in related fields.
• I recommend to up-vote the linked answers. Thanks to all that helped so far... – draks ... Nov 25 '15 at 21:29
• Check out p. 18 of Murty's survey: mast.queensu.ca/~murty/ramanujan.pdf. Also, there's a book by Davidoff, Sarnak, and Valette is very readable. In this context, the Riemann hypothesis is equivalent to 2nd largest eigenvalue being the square root of the degree (square root corresponds to zeros on the line Re(s)=1/2; non-trivial bound on the size of the 2nd largest eigenvalue corresponds to zeros having Re(s) < 1, which I believe implies the analog of the "prime number theorem" in this context.) – Brendan Murphy Dec 24 '15 at 20:30
• @BrendanMurphy great paper, but I don't see how it helps here. Maybe I missed something? – draks ... Jan 1 '16 at 22:12
• Look at Davidoff, Sarnak, and Valette's book---the results you describe are in the first chapter. In particular, it's not a coincidence that the scaling factor is $\sqrt{8}$. I mentioned Murty's survey because it explains in more detail what the "Riemann Hypothesis" means in this context, and more generally the analogy between cycles and numbers. I'm not sure how to answer the question "what does this connection mean", since I think the connection provides different interpretations, just like Cartesian coordinates provide different interpretations for curves and equations. – Brendan Murphy Jan 2 '16 at 23:22
• To me, the mysterious part is that the Chebyshev polynomials and the matrices that count loops without backtracking satisfy a similar recurrence. Is this an accident? The last exercise in section 1.4 of DSV mentions a connection to representation theory; I think this is expanded upon in a book by Katz and Sarnak called "Random Matrices, Frobenious Eigenvalues, and Monodromy", although I've only skimmed this book and I'm not sure if the graph theory side is mentioned there. Anyway, perhaps more can be said, but I don't know any more :) – Brendan Murphy Jan 3 '16 at 18:12
Let $$G$$ be a $$d$$-regular connected graph on $$n$$ vertices. Let $$d=\lambda_0 \geq \lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_{n-1}$$ be the $$n$$ eigenvalues of the adjacency matrix $$A$$, and let $$\lambda = max( |\lambda_1|,|\lambda_{n-1}|)$$. Intuitively, the significance of the spectrum of $$A$$ is that many combinatorial quantities can be expressed linear-algebraically, and optimizations problems in this framework often involve the spectrum.
For instance, we saw that for two vertex subsets $$S,T \subseteq V$$, the cardinality of $$E(S,T)$$ is $$E(S,T)=\textbf{1}_{T}^{*} A \textbf{1}_S$$ and this in turn could be expressed as a sum over eigenvalues of $$A$$.\ In these cases, the summand corresponding to the eigenvalue $$d$$ turns out to be the "average" value that we would expect of the quantity, while the summands corresponding to the remaining eigenvalues would then be interpreted as the "error" term. In the case of $$E(S,T)$$, the summand corresponding to $$d$$ is $$d \frac{|S|}{\sqrt{n}} \frac{|T|}{\sqrt{n}} = \frac{d}{n} |S|.|T|$$ and this is the value we expect for a $$d$$-regular random graph on $$n$$ vertices. The remaining eigenvalues and their eigenvectors, through their interaction with $$S$$ and $$T$$ determine the deviation of $$|E(S,T)|$$ from this average value. So when the remaining eigenvalues are small in absolute value, this gives us a bound on the error term and allows us to conclude that for all sets $$S,T \subseteq V$$, $$|E(S,T)|$$ is close to the average value.\
This idea can be used in other enumeration problems too. All graph properties, by definition, depend on the adjacency matrix, and so can be expressed in terms of $$A$$. The challenge is then to ensure that the expression is amenable to linear-algebraic tools.\
Another simple example is that of a random walk on the graph. Suppose we are interested in the number of walks on $$G$$ of length $$k$$ that are cycles. Clearly this is $$Tr(A^k)$$ which is $$d^k + \lambda_1^k + \lambda_2^k + \dots + \lambda_{n-1}^k$$ Observe that the total number of walks of length $$k$$ on $$G$$ is $$n.d^k$$ In a random graph, we would expect a random walk to end up at its starting point with probability $$1/n$$. And so the "average" number of closed walks of length $$k$$ on a random $$d$$-regular graph is $$\frac{n.d^k}{n}=d^k$$ and this is again exactly the summand corresponding to the eigenvalue $$d$$ in the linear algebraic formulation of the expression for the fixed graph $$G$$. Furthermore the number of closed walks of length $$k$$ on $$G$$ is $$d^k \pm (n-1)\lambda^k$$ and $$(n-1)\lambda^k$$ is an upper bound on the error term.\
A more non-trivial example of this is in the case of non-backtracking closed walks of length $$k$$ on $$G$$. In this case we are interested in $$Tr(A_k)$$ Since $$A_k = (d-1)^{k/2} U_k \left( \frac{A}{2 \sqrt{d-1}} \right) - (d-1)^{k/2-1} U_{k-2} \left( \frac{A}{2 \sqrt{d-1}} \right)$$ we have $$Tr(A_k) = (d-1)^{k/2} \sum \limits_{j=0}^{n-1} U_k \left( \frac{\lambda_i}{2 \sqrt{d-1}} \right) - (d-1)^{k/2-1} \sum \limits_{i=0}^{n-1} U_{k-2} \left( \frac{\lambda_i}{2 \sqrt{d-1}} \right)$$ Since the total number of non-backtracking walks of length $$k$$ is $$n d(d-1)^{k-1}$$ we would expect a $$1/n$$ fraction of these to be closed. The summand corresponding to the eigenvalue $$d$$ is $$(d-1)^{k/2} \sum \limits_{j=0}^{n-1} U_k \left( \frac{d}{2 \sqrt{d-1}} \right) - (d-1)^{k/2-1} \sum \limits_{i=0}^{n-1} U_{k-2} \left( \frac{d}{2 \sqrt{d-1}} \right)$$ Using the closed form expression for $$U_k$$ given by $$U_k(x) = \frac{(x+\sqrt{x^2-1})^{k+1}- (x-\sqrt{x^2-1})^{k+1} }{2\sqrt{x^2-1}}$$ we get $$(d-1)^{k/2} \sum \limits_{j=0}^{n-1} U_k \left( \frac{d}{2 \sqrt{d-1}} \right) - (d-1)^{k/2-1} \sum \limits_{i=0}^{n-1} U_{k-2} \left( \frac{d}{2 \sqrt{d-1}} \right) = d(d-1)^{k-1}$$ as expected, and the summands corresponding to the remaining eigenvalues form the error term.\ It is here that the Ramanujan property has a natural interpretation!\
Consider the function $$U_k(x)$$. It is known that for $$|x| \leq 1$$, $$U_k(x)$$ has a trigonometric expression given by $$U_k(\cos{\theta}) = \frac{ \sin{(k+1)\theta}}{\sin{\theta}}$$ So whenever $$|x|\leq 1$$, $$|U_k(x)| \leq k+1$$ As for when $$|x| > 1$$, the expression $$U_k(x) = \frac{(x+\sqrt{x^2-1})^{k+1}- (x-\sqrt{x^2-1})^{k+1} }{2\sqrt{x^2-1}}$$ implies that $$U_k(x) = O(x^k)$$
So if the graph is Ramanujan, then for every $$|\lambda|\neq d$$, $$|(d-1)^{k/2}U_k\left( \frac{\lambda}{2 \sqrt{d-1}} \right) | \leq (k+1)(d-1)^{k/2} = O(k d^{k/2})$$ This means that the number of non-backtracking closed walks of length $$k$$ on $$G$$ is $$d(d-1)^{k-1} \pm O(nk d^{k/2})$$ On the other hand if the graph is not Ramanujan, then the error term could be significantly larger, and the most we can say is that the number of non-backtracking closed walks of length $$k$$ on $$G$$ is $$d(d-1)^{k-1} \pm O(n d^k)$$ which is not very useful since the error term is of the same order as the average!\ In fact this is the approach used in the original construction of Ramanujan graphs- where they forces the eigenvalues to be small by ensuring that the number of non-backtracking walks of a every given length is close to the average and the error term is small.\
Next consider the number of tailless non-backtracking cycles of length $$k$$ on $$G$$. This is given by the quantity $$N_k=\begin{cases} \sum \limits_{j=0}^{n-1} 2(d-1)^{k/2} T_k \left( \frac{\mu_j}{2\sqrt{d-1}} \right) & \text{ if k is odd}\\ \sum \limits_{j=0}^{n-1} 2(d-1)^{k/2} T_k \left( \frac{\mu_j}{2\sqrt{d-1}} \right) + d-2 & \text{ if k is even}\\ \end{cases}$$ https://arxiv.org/abs/1706.00851 The summand corresponding to the eigenvalue $$d$$ is $$(d-1)^k + 1$$ when $$k$$ is odd, and $$(d-1)^k + 1 + (d-2)$$ when $$k$$ is even. It is interesting to ask what this quantity can be interpreted as.\
For simplicity assume $$G$$ is a Cayley graph, so that the edges incident at each vertex can be represented by a symmetric generating set $$S$$ of size $$d$$. A tailless non-backtracking cycle of length $$k$$ is now a \emph{cyclically reduced word} of length $$k$$ over the alphabet $$S$$ that evaluates to $$1$$ in the group. It is known that the number of cyclically reduced words of length $$k$$ over a generating set of size $$d$$ is $$(d-1)^k + d/2 + (d/2-1)(-1)^k$$ https://math.stackexchange.com/questions/825830/reduced-words-of-length-l and so the total number of such special walks on the graph is $$n \left( (d-1)^k + d/2 + (d/2-1)(-1)^k \right)$$ and so the expected number of such walks that are closed is $$(d-1)^k + d/2 + (d/2-1)(-1)^k$$ which is precisely the expression we got earlier for the summand in $$N_k$$ corresponding to the eigenvalue $$d$$. \
For Chebyshev polynomials of the first kind, we know that $$T_k(\cos{\theta})=\cos{k \theta}$$ and so for $$|x| \leq 1$$, $$|T_k(x)| \leq 1$$ This gives us a strong expression for $$N_k$$ when $$G$$ is Ramanujan: $$N_k = (d-1)^k + d/2 + (d/2-1)(-1)^k \pm O(n d^{k/2})$$ which is slightly better than the error we got for non-backtracking but possibly tailed cycles.
• Hey @Bharat How are things? Cheers, draks... – draks ... Jan 25 '18 at 21:29
• @draks... Hello! Been meeting math researchers trying to interest them in the Bilu-Linial conjecture and Marcus-Spielman-Srivastava result. The problem is ambitious, but even special cases would be interesting. Whats up with you? – BharatRam Jan 28 '18 at 20:45
• @draks... Also, is there some direct reinterpretation of LPS construction of Ramanujan graphs in terms of the zeta function of the graph and that of curves over finite fields? That connection is hinted in some places, but never explicitly clarified. Tell me if you know anything on these lines. – BharatRam Jan 28 '18 at 20:48
|
{}
|
# Black–Scholes equation
In mathematical finance, the Black–Scholes equation is a partial differential equation (PDE) governing the price evolution of a European call or European put under the Black–Scholes model. Broadly speaking, the term may refer to a similar PDE that can be derived for a variety of options, or more generally, derivatives.
Simulated geometric Brownian motions with parameters from market data
For a European call or put on an underlying stock paying no dividends, the equation is:
${\displaystyle {\frac {\partial V}{\partial t}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}+rS{\frac {\partial V}{\partial S}}-rV=0}$
where V is the price of the option as a function of stock price S and time t, r is the risk-free interest rate, and ${\displaystyle \sigma }$ is the volatility of the stock.
The key financial insight behind the equation is that one can perfectly hedge the option by buying and selling the underlying asset in just the right way and consequently “eliminate risk". This hedge, in turn, implies that there is only one right price for the option, as returned by the Black–Scholes formula.
## Financial interpretation
The equation has a concrete interpretation that is often used by practitioners and is the basis for the common derivation given in the next subsection. The equation can be rewritten in the form:
${\displaystyle {\frac {\partial V}{\partial t}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}=rV-rS{\frac {\partial V}{\partial S}}}$
The left hand side consists of a "time decay" term, the change in derivative value due to time increasing called theta, and a term involving the second spatial derivative gamma, the convexity of the derivative value with respect to the underlying value. The right hand side is the riskless return from a long position in the derivative and a short position consisting of ${\displaystyle {\frac {\partial V}{\partial S}}}$ shares of the underlying.
Black and Scholes' insight is that the portfolio represented by the right hand side is riskless: thus the equation says that the riskless return over any infinitesimal time interval, can be expressed as the sum of theta and a term incorporating gamma. For an option, theta is typically negative, reflecting the loss in value due to having less time for exercising the option (for a European call on an underlying without dividends, it is always negative). Gamma is typically positive and so the gamma term reflects the gains in holding the option. The equation states that over any infinitesimal time interval the loss from theta and the gain from the gamma term offset each other, so that the result is a return at the riskless rate.
From the viewpoint of the option issuer, e.g. an investment bank, the gamma term is the cost of hedging the option. (Since gamma is the greatest when the spot price of the underlying is near the strike price of the option, the seller's hedging costs are the greatest in that circumstance.)
## Derivation
The following derivation is given in Hull's Options, Futures, and Other Derivatives.[1]:287–288 That, in turn, is based on the classic argument in the original Black–Scholes paper.
Per the model assumptions above, the price of the underlying asset (typically a stock) follows a geometric Brownian motion. That is
${\displaystyle {\frac {dS}{S}}=\mu \,dt+\sigma \,dW\,}$
where W is a stochastic variable (Brownian motion). Note that W, and consequently its infinitesimal increment dW, represents the only source of uncertainty in the price history of the stock. Intuitively, W(t) is a process that "wiggles up and down" in such a random way that its expected change over any time interval is 0. (In addition, its variance over time T is equal to T; see Wiener process: Basic properties); a good discrete analogue for W is a simple random walk. Thus the above equation states that the infinitesimal rate of return on the stock has an expected value of μ dt and a variance of ${\displaystyle \sigma ^{2}dt}$.
The payoff of an option ${\displaystyle V(S,T)}$ at maturity is known. To find its value at an earlier time we need to know how ${\displaystyle V}$ evolves as a function of ${\displaystyle S}$ and ${\displaystyle t}$. By Itō's lemma for two variables we have
${\displaystyle dV=\left(\mu S{\frac {\partial V}{\partial S}}+{\frac {\partial V}{\partial t}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}\right)dt+\sigma S{\frac {\partial V}{\partial S}}\,dW}$
Now consider a certain portfolio, called the delta-hedge portfolio, consisting of being short one option and long ${\displaystyle {\frac {\partial V}{\partial S}}}$ shares at time ${\displaystyle t}$. The value of these holdings is
${\displaystyle \Pi =-V+{\frac {\partial V}{\partial S}}S}$
Over the time period ${\displaystyle [t,t+\Delta t]}$, the total profit or loss from changes in the values of the holdings is:
${\displaystyle \Delta \Pi =-\Delta V+{\frac {\partial V}{\partial S}}\,\Delta S}$
Now discretize the equations for dS/S and dV by replacing differentials with deltas:
${\displaystyle \Delta S=\mu S\,\Delta t+\sigma S\,\Delta W\,}$
${\displaystyle \Delta V=\left(\mu S{\frac {\partial V}{\partial S}}+{\frac {\partial V}{\partial t}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}\right)\Delta t+\sigma S{\frac {\partial V}{\partial S}}\,\Delta W}$
and appropriately substitute them into the expression for ${\displaystyle \Delta \Pi }$:
${\displaystyle \Delta \Pi =\left(-{\frac {\partial V}{\partial t}}-{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}\right)\Delta t}$
Notice that the ${\displaystyle \Delta W}$ term has vanished. Thus uncertainty has been eliminated and the portfolio is effectively riskless. The rate of return on this portfolio must be equal to the rate of return on any other riskless instrument; otherwise, there would be opportunities for arbitrage. Now assuming the risk-free rate of return is ${\displaystyle r}$ we must have over the time period ${\displaystyle [t,t+\Delta t]}$
${\displaystyle r\Pi \,\Delta t=\Delta \Pi }$
If we now equate our two formulas for ${\displaystyle \Delta \Pi }$ we obtain:
${\displaystyle \left(-{\frac {\partial V}{\partial t}}-{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}\right)\Delta t=r\left(-V+S{\frac {\partial V}{\partial S}}\right)\Delta t}$
Simplifying, we arrive at the celebrated Black–Scholes partial differential equation:
${\displaystyle {\frac {\partial V}{\partial t}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}+rS{\frac {\partial V}{\partial S}}-rV=0}$
With the assumptions of the Black–Scholes model, this second order partial differential equation holds for any type of option as long as its price function ${\displaystyle V}$ is twice differentiable with respect to ${\displaystyle S}$ and once with respect to ${\displaystyle t}$. Different pricing formulae for various options will arise from the choice of payoff function at expiry and appropriate boundary conditions.
Technical note: A subtlety obscured by the discretization approach above is that the infinitesimal change in the portfolio value was due to only the infinitesimal changes in the values of the assets being held, not changes in the positions in the assets. In other words, the porfolio was assumed to be self-financing. This can be proven in the continuous setting and uses basic results in the theory of stochastic differential equations.
### Alternate derivation
Here is an alternate derivation that can be utilized in situations where it is initially unclear what the hedging portfolio should be. (For a reference, see 6.4 of Shreve vol II).
In the Black—Scholes model, assuming we have picked the risk-neutral probability measure, the underlying stock price S(t) is assumed to evolve as a geometric Brownian motion:
${\displaystyle {\frac {dS(t)}{S(t)}}=r\ dt+\sigma dW(t)}$
Since this stochastic differential equation (SDE) shows the stock price evolution is Markovian, any derivative on this underlying is a function of time t and the stock price at the current time, S(t). Then an application of Ito's lemma gives an SDE for the discounted derivative process exp(-rt)V(t, S(t)), which should be a martingale. In order for that to hold, the drift term must be zero, which implies the Black—Scholes PDE.
This derivation is basically an application of the Feynman-Kac formula and can be attempted whenever the underlying asset(s) evolve according to given SDE(s).
## Solving the PDE
Once the Black—Scholes PDE, with boundary and terminal conditions, is derived for a derivative, the PDE can be solved numerically using standard methods of numerical analysis, such as a type of finite difference method. In certain cases, it is possible to solve for an exact formula, such as in the case of a European call, which was done by Black and Scholes.
To do this for a call option, recall the PDE above has boundary conditions
{\displaystyle {\begin{aligned}C(0,t)&=0{\text{ for all }}t\\C(S,t)&\rightarrow S{\text{ as }}S\rightarrow \infty \\C(S,T)&=\max\{S-K,0\}\end{aligned}}}
The last condition gives the value of the option at the time that the option matures. Other conditions are possible as S goes to 0 or infinity. For example, common conditions utilized in other situations are to choose delta to vanish as S goes to 0 and gamma to vanish as S goes to infinity; these will give the same formula as the conditions above (in general, differing boundary conditions will give different solutions, so some financial insight should be utilized to pick suitable conditions for the situation at hand).
The solution of the PDE gives the value of the option at any earlier time, ${\displaystyle \mathbb {E} \left[\max\{S-K,0\}\right]}$. To solve the PDE we recognize that it is a Cauchy–Euler equation which can be transformed into a diffusion equation by introducing the change-of-variable transformation
{\displaystyle {\begin{aligned}\tau &=T-t\\u&=Ce^{r\tau }\\x&=\ln \left({\frac {S}{K}}\right)+\left(r-{\frac {1}{2}}\sigma ^{2}\right)\tau \end{aligned}}}
Then the Black–Scholes PDE becomes a diffusion equation
${\displaystyle {\frac {\partial u}{\partial \tau }}={\frac {1}{2}}\sigma ^{2}{\frac {\partial ^{2}u}{\partial x^{2}}}}$
The terminal condition ${\displaystyle C(S,T)=\max\{S-K,0\}}$ now becomes an initial condition
${\displaystyle u(x,0)=u_{0}(x)\equiv K(e^{\max\{x,0\}}-1)}$
Using the standard method for solving a diffusion equation we have
${\displaystyle u(x,\tau )={\frac {1}{\sigma {\sqrt {2\pi \tau }}}}\int _{-\infty }^{\infty }{u_{0}[y]\exp {\left[-{\frac {(x-y)^{2}}{2\sigma ^{2}\tau }}\right]}}\,dy}$
which, after some manipulations, yields
${\displaystyle u(x,\tau )=Ke^{x+{\frac {1}{2}}\sigma ^{2}\tau }N(d_{1})-KN(d_{2})}$
where
{\displaystyle {\begin{aligned}d_{1}&={\frac {1}{\sigma {\sqrt {\tau }}}}\left[\left(x+{\frac {1}{2}}\sigma ^{2}\tau \right)+{\frac {1}{2}}\sigma ^{2}\tau \right]\\d_{2}&={\frac {1}{\sigma {\sqrt {\tau }}}}\left[\left(x+{\frac {1}{2}}\sigma ^{2}\tau \right)-{\frac {1}{2}}\sigma ^{2}\tau \right]\end{aligned}}}
Reverting ${\displaystyle u,x,\tau }$ to the original set of variables yields the above stated solution to the Black–Scholes equation.
## References
1. {{#invoke:citation/CS1|citation |CitationClass=book }}
|
{}
|
# How to vertically align table column
I am struggling to align vertically the text inside the column of the table below. I would like to obtain this vertical align to improve the table readability... I tried to add a color on alternating rows but it cancel the left and right board of the row. If you know how to solve that problem please add also that.
Thanks
\documentclass[margin=5pt]{article}
\usepackage{float}
\usepackage{caption}
\usepackage{tabularx,ragged2e,booktabs}
\usepackage{array}
\usepackage[table,x11names]{xcolor}
\usepackage{amssymb}% http://ctan.org/pkg/amssymb
\usepackage{pifont}% http://ctan.org/pkg/pifont
\newcommand{\xmark}{\ding{53}}%
\begin{document}
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
\newcolumntype{f}[1]{>{\raggedright\arraybackslash}p{#1}}
\begin{table}
\caption[AM process/material matrix.]{AM processes comparison based on materials processable. }
\label{tab:AM_material_matrix}
\centering
\begin{tabular}{|f{3cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|}
\hline
& Material extrusion & Material jetting & Binder jetting & Vat photopolymerization & Sheet lamination & Powder bed fusion & Directed energy depositon \\
\hline
Polymers, polymer blends & \xmark & \xmark & \xmark & \xmark & \xmark & \xmark & \\
Composites & \xmark & \xmark & \xmark & \xmark & & \xmark & \\
Metals & & & \xmark & & \xmark & \xmark & \xmark \\
Graded/hybrid metals & & & & & \xmark & & \xmark \\
Ceramics & & & \xmark & \xmark & & \xmark & \\
Investement casting patterns & & \xmark & \xmark & \xmark & & \xmark & \\
Sand molds and cores & \xmark & & \xmark & & & \xmark & \\
Paper & & & & & \xmark & & \\
\hline
\end{tabular}
\end{table}
\end{document}
• By "board", do you mean the vertical lines? That is likely just a viewer issue, try zooming in the PDF, or printing. For the vertical centering, try m instead of p in the definitions of the P and f column types. – Torbjørn T. Oct 11 '16 at 12:30
As you said "align vertically", if you mean the column names and row names such as "Polymers, polymer blends" in your example, an alternative approach is:
\newcommand\minitab[2][c]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{|c|l|c|r|}
\hline
& \minitab[l]{Column\\ 1} & \minitab{Column\\ 2} & \minitab[r]{Column\\ 3}\\
\hline
\minitab{Multiple\\ line} & 2 & 2 & 2\\
\hline
Single & 1 & 1 & 1\\
\hline
\end{tabular}
|
{}
|
## Functiones et Approximatio Commentarii Mathematici
Functiones et Approximatio, Commentarii Mathematici is a mathematical journal edited by the Faculty of Mathematics and Computer Science of Adam Mickiewicz University since 1974. The journal publishes original papers in mathematics, with special attention to analysis (in a broad sense) and number theory.
Bilateral $q$-series identities and reciprocal formulaeVolume 42, Number 2 (2010)
Mahler and Koksma classification of points in $\mathbb{R}^n$ and $\mathbb{C}^n$.Volume 35, Number 1 (2006)
Uniform mean ergodicity of $C_0$-semigroups\newline in a class of Fréchet spacesVolume 50, Number 2 (2014)
|
{}
|
# Pointed homotopy and pointed lax homotopy of 2-crossed module maps
Research output: Contribution to journalArticle
5 Citations (Scopus)
### Abstract
We address the (pointed) homotopy theory of 2-crossed modules (of groups), which are known to faithfully represent Gray 3-groupoids, with a single object, and also connected homotopy 3-types. The homotopy relation between 2-crossed module maps will be defined in a similar way to Crans' 1-transfors between strict Gray functors, however being pointed, thus this corresponds to Baues' homotopy relation between quadratic module maps. Despite the fact that this homotopy relation between 2-crossed module morphisms is not, in general, an equivalence relation, we prove that if A and A' are 2-crossed modules, with the underlying group F of A being free (in short A is free up to order one), then homotopy between 2-crossed module maps A \to A' yields, in this case, an equivalence relation. Furthermore, if a chosen basis B is specified for F, then we can define a 2-groupoid HOM_B(A,A') of 2-crossed module maps A \to A', homotopies connecting them, and 2-fold homotopies between homotopies, where the latter correspond to (pointed) Crans' 2-transfors between 1-transfors. We define a partial resolution Q^1(A), for a 2-crossed module A, whose underlying group is free, with a canonical chosen basis, together with a projection map {\rm proj}\colon Q^1(A) \to A, defining isomorphisms at the level of 2-crossed module homotopy groups. This resolution (which is part of a comonad) leads to a weaker notion of homotopy (lax homotopy) between 2-crossed module maps, which we fully develop and describe. In particular, given 2-crossed modules A and A', there exists a 2-groupoid {HOM}_{\rm LAX}(A,A') of (strict) 2-crossed module maps A \to A', and their lax homotopies and lax 2-fold homotopies. The associated notion of a (strict) 2-crossed module map f\colon A \to A' to be a lax homotopy equivalence has the two-of-three property, and it is closed under retracts.
Original language Unknown 986-1049 Advances In Mathematics 248 NA https://doi.org/10.1016/j.aim.2013.08.020 Published - 1 Jan 2013
### Cite this
@article{57c2fa4d8d7a4423b82dcb5b871d138d,
title = "Pointed homotopy and pointed lax homotopy of 2-crossed module maps",
abstract = "We address the (pointed) homotopy theory of 2-crossed modules (of groups), which are known to faithfully represent Gray 3-groupoids, with a single object, and also connected homotopy 3-types. The homotopy relation between 2-crossed module maps will be defined in a similar way to Crans' 1-transfors between strict Gray functors, however being pointed, thus this corresponds to Baues' homotopy relation between quadratic module maps. Despite the fact that this homotopy relation between 2-crossed module morphisms is not, in general, an equivalence relation, we prove that if A and A' are 2-crossed modules, with the underlying group F of A being free (in short A is free up to order one), then homotopy between 2-crossed module maps A \to A' yields, in this case, an equivalence relation. Furthermore, if a chosen basis B is specified for F, then we can define a 2-groupoid HOM_B(A,A') of 2-crossed module maps A \to A', homotopies connecting them, and 2-fold homotopies between homotopies, where the latter correspond to (pointed) Crans' 2-transfors between 1-transfors. We define a partial resolution Q^1(A), for a 2-crossed module A, whose underlying group is free, with a canonical chosen basis, together with a projection map {\rm proj}\colon Q^1(A) \to A, defining isomorphisms at the level of 2-crossed module homotopy groups. This resolution (which is part of a comonad) leads to a weaker notion of homotopy (lax homotopy) between 2-crossed module maps, which we fully develop and describe. In particular, given 2-crossed modules A and A', there exists a 2-groupoid {HOM}_{\rm LAX}(A,A') of (strict) 2-crossed module maps A \to A', and their lax homotopies and lax 2-fold homotopies. The associated notion of a (strict) 2-crossed module map f\colon A \to A' to be a lax homotopy equivalence has the two-of-three property, and it is closed under retracts.",
keywords = "Crossed module, Gray category, Quadratic module, Homotopy 3-type, Peiffer lifting, 2-crossed module, Simplicial group, Tricategory",
author = "Martins, {Jo{\~a}o Nuno Gon{\cc}alves Faria}",
note = "Sem PDF conforme despacho.",
year = "2013",
month = "1",
day = "1",
doi = "10.1016/j.aim.2013.08.020",
language = "Unknown",
volume = "248",
pages = "986--1049",
issn = "0001-8708",
publisher = "Elsevier Science B.V., Amsterdam.",
number = "NA",
}
In: Advances In Mathematics, Vol. 248, No. NA, 01.01.2013, p. 986-1049.
Research output: Contribution to journalArticle
TY - JOUR
T1 - Pointed homotopy and pointed lax homotopy of 2-crossed module maps
AU - Martins, João Nuno Gonçalves Faria
N1 - Sem PDF conforme despacho.
PY - 2013/1/1
Y1 - 2013/1/1
N2 - We address the (pointed) homotopy theory of 2-crossed modules (of groups), which are known to faithfully represent Gray 3-groupoids, with a single object, and also connected homotopy 3-types. The homotopy relation between 2-crossed module maps will be defined in a similar way to Crans' 1-transfors between strict Gray functors, however being pointed, thus this corresponds to Baues' homotopy relation between quadratic module maps. Despite the fact that this homotopy relation between 2-crossed module morphisms is not, in general, an equivalence relation, we prove that if A and A' are 2-crossed modules, with the underlying group F of A being free (in short A is free up to order one), then homotopy between 2-crossed module maps A \to A' yields, in this case, an equivalence relation. Furthermore, if a chosen basis B is specified for F, then we can define a 2-groupoid HOM_B(A,A') of 2-crossed module maps A \to A', homotopies connecting them, and 2-fold homotopies between homotopies, where the latter correspond to (pointed) Crans' 2-transfors between 1-transfors. We define a partial resolution Q^1(A), for a 2-crossed module A, whose underlying group is free, with a canonical chosen basis, together with a projection map {\rm proj}\colon Q^1(A) \to A, defining isomorphisms at the level of 2-crossed module homotopy groups. This resolution (which is part of a comonad) leads to a weaker notion of homotopy (lax homotopy) between 2-crossed module maps, which we fully develop and describe. In particular, given 2-crossed modules A and A', there exists a 2-groupoid {HOM}_{\rm LAX}(A,A') of (strict) 2-crossed module maps A \to A', and their lax homotopies and lax 2-fold homotopies. The associated notion of a (strict) 2-crossed module map f\colon A \to A' to be a lax homotopy equivalence has the two-of-three property, and it is closed under retracts.
AB - We address the (pointed) homotopy theory of 2-crossed modules (of groups), which are known to faithfully represent Gray 3-groupoids, with a single object, and also connected homotopy 3-types. The homotopy relation between 2-crossed module maps will be defined in a similar way to Crans' 1-transfors between strict Gray functors, however being pointed, thus this corresponds to Baues' homotopy relation between quadratic module maps. Despite the fact that this homotopy relation between 2-crossed module morphisms is not, in general, an equivalence relation, we prove that if A and A' are 2-crossed modules, with the underlying group F of A being free (in short A is free up to order one), then homotopy between 2-crossed module maps A \to A' yields, in this case, an equivalence relation. Furthermore, if a chosen basis B is specified for F, then we can define a 2-groupoid HOM_B(A,A') of 2-crossed module maps A \to A', homotopies connecting them, and 2-fold homotopies between homotopies, where the latter correspond to (pointed) Crans' 2-transfors between 1-transfors. We define a partial resolution Q^1(A), for a 2-crossed module A, whose underlying group is free, with a canonical chosen basis, together with a projection map {\rm proj}\colon Q^1(A) \to A, defining isomorphisms at the level of 2-crossed module homotopy groups. This resolution (which is part of a comonad) leads to a weaker notion of homotopy (lax homotopy) between 2-crossed module maps, which we fully develop and describe. In particular, given 2-crossed modules A and A', there exists a 2-groupoid {HOM}_{\rm LAX}(A,A') of (strict) 2-crossed module maps A \to A', and their lax homotopies and lax 2-fold homotopies. The associated notion of a (strict) 2-crossed module map f\colon A \to A' to be a lax homotopy equivalence has the two-of-three property, and it is closed under retracts.
KW - Crossed module
KW - Gray category
KW - Homotopy 3-type
KW - Peiffer lifting
KW - 2-crossed module
KW - Simplicial group
KW - Tricategory
U2 - 10.1016/j.aim.2013.08.020
DO - 10.1016/j.aim.2013.08.020
M3 - Article
VL - 248
SP - 986
EP - 1049
|
{}
|
#### PhD Thesis defence
##### Venue: Microsoft Teams (online)
Non-malleable codes (NMCs) are coding schemes that help in protecting crypto-systems under tampering attacks, where the adversary tampers the device storing the secret and observes additional input-output behavior on the crypto-system. NMCs give a guarantee that such adversarial tampering of the encoding of the secret will lead to a tampered secret, which is either same as the original or completely independent of it, thus giving no additional information to the adversary. The specific tampering model that we consider in this work, called the “split-state tampering model”, allows the adversary to tamper two parts of the codeword arbitrarily, but independent of each other. Leakage resilient secret sharing schemes help a party, called a dealer, to share his secret message amongst n parties in such a way that any $t$ of these parties can combine their shares to recover the secret, but the secret remains hidden from an adversary corrupting $< t$ parties to get their complete shares and additionally getting some bounded bits of leakage from the shares of the remaining parties.
For both these primitives, whether you store the non-malleable encoding of a message on some tamper-prone system or the parties store shares of the secret on a leakage-prone system, it is important to build schemes that output codewords/shares that are of optimal length and do not introduce too much redundancy into the codewords/shares. This is, in particular, captured by the rate of the schemes, which is the ratio of the message length to the codeword length/largest share length. This thesis explores the question of building these primitives with optimal rates.
The focus of this talk will be on taking you through the journey of non-malleable codes culminating in our near-optimal NMCs with a rate of 1/3.
Contact: +91 (80) 2293 2711, +91 (80) 2293 2265 ; E-mail: chair.math[at]iisc[dot]ac[dot]in
Last updated: 09 Dec 2022
|
{}
|
# Weighted norm estimates for the Semyanistyi fractional integrals and Radon transforms
Weighted norm estimates for the Semyanistyi fractional integrals and Radon transforms - Descarga este documento en PDF. Documentación en PDF para descargar gratis. Disponible también para leer online.
Descargar gratis o leer online en formato PDF el libro: Weighted norm estimates for the Semyanistyi fractional integrals and Radon transforms
Semyanistyis fractional integrals have come to analysis from integral geometry. They take functions on \$R^n\$ to functions on hyperplanes, commute with rotations, and have a nice behavior with respect to dilations. We obtain sharp inequalities for these integrals and the corresponding Radon transforms acting on \$L^p\$ spaces with a radial power weight. The operator norms are explicitly evaluated. Similar results are obtained for fractional integrals associated to \$k\$-plane transforms for any \$1\le k
Autor: Boris Rubin
Fuente: https://archive.org/
|
{}
|
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Physical controls and ENSO event influence on weathering in the Panama Canal Watershed
## Abstract
Recent empirical studies have documented the importance of tropical mountainous rivers on global silicate weathering and suspended sediment transport. Such field studies are typically based on limited temporal data, leaving uncertainty in the strength of observed relationships with controlling parameters over the long term. A deficiency of long-term data also prevents determination of the impact that multi-year or decadal climate patterns, such as the El Niño Southern Oscillation (ENSO), might have on weathering fluxes. Here we analyze an 18-year hydrochemical dataset for eight sub-basins of the Panama Canal Watershed of high-temporal frequency collected between 1998 and 2015 to address these knowledge gaps. We identified a strongly positive covariance of both cation (Ca2+, Mg2+, K+, Na+) and suspended sediment yields with precipitation and extent of forest cover, whereas we observed negative relationships with temperature and mosaic landcover. We also confirmed a statistical relationship between seasonality, ENSO, and river discharge, with significantly higher values occurring during La Niña events. These findings emphasize the importance that long-term datasets have on identifying short-term influences on chemical and physical weathering rates, especially, in ENSO-influenced regions.
## Introduction
Empirical studies over the past two decades have documented the importance of small mountainous rivers (SMRs), particularly those in the tropics, on global silicate weathering and associated CO2 consumption budgets. In a search for controls, studies have recognized the strong positive feedback between physical and chemical weathering1,2,3 and noted the importance of underlying volcanic lithologies on maintaining elevated chemical yields4,5,6,7,8,9,10,11. Others have identified strong correlations/associations between weathering yields and precipitation/runoff2,3,9,10,12 and, more recently, land use/landcover practices10,11,13. Yet, despite their important contribution to silicate weathering, few datasets for SMRs exist at high temporal resolution10,14 and/or duration15 for such high-yield terrains. This lack of long-term records of high temporal resolution requires caution in generalizing the aforementioned statistical relationships, as years of anomalously high or low precipitation can result in misinterpretation of silicate weathering rates. This is particularly important as SMRs are presently being incorporated into newer global chemical weathering and associated CO2 drawdown models16,17,18. Furthermore, the absence of reliable long-term datasets prevents insight into the impact that multi-year or decadal climate patterns, such as the Pacific Decadal Oscillation (PDO) and El Niño Southern Oscillation (ENSO), can have on weathering fluxes.
In the tropics of the Caribbean and South America, ENSO events have been shown to have a pronounced and varied effect on rainfall19,20,21,22, streamflow22,23,24, soil moisture23, and vegetation23,25. For example, a review of hydrological data for Colombia observed that ENSO effects between 1997 and 1999 were stronger for stream flow than for precipitation, due to concomitant effects on soil moisture and evapotranspiration, with lower than normal soil moisture and stream flow occurring during El Niño conditions and the reverse situation pertaining during La Niña episodes23. Similar trends with ENSO events have been observed in Panama. For example, a previous study documented that an average of 8% less rainfall was received across almost all regions of Panama during 13 El Niño episodes between from 1920 to 198319. Such ENSO-related reductions in precipitation and consequent runoff26,27,28 cause water supply issues for operation of the Panama Canal, which requires nearly 200,000 m3 of water per vessel transit29. It follows that these pronounced hydrological anomalies should also impact weathering fluxes, but this has not been documented to date.
The Panama Canal Watershed (PCW; 2,982 km2; 9° N, 80° W), comprised predominantly of eight major sub-watersheds, offers an ideal location to evaluate the physical and climatic controls on long-term weathering rates and CO2 consumption in tropical SMRs (Fig. 1). For example, the largely forested, steeply-sloping sub-watersheds on the north side of the canal—Río Gatún, Río Boquerón, Río Pequení, Río Chagres, and Río Indio Este, differ markedly from their mostly deforested, gently-sloping counterparts to the south—Río Cano Quebrado, Río Trinidad, and Río Cirí Grande30,31. The region also exhibits a strong trans-isthmus rainfall gradient, with the amount of precipitation received on the windward Atlantic coast more than twice that received on the leeward Pacific coast (~ 4,000 mm/year versus < 1,800 mm/year)32. Lastly, pronounced differences in regional precipitation values during ENSO events permit an evaluation of the impact of short-term climate patterns on weathering fluxes21,26. Here we utilize a robust, high-temporal frequency, hydrochemical dataset collected over 18 years between 1998 and 2015 by the Panama Canal Authority (ACP) to calculate annual and long-term cation and sediment fluxes for the PCW. We compare the resulting fluxes to potential controlling variables, such as mean annual rainfall and temperature, stream gradient and land use/landcover, both to identify compositional controls on river chemistry and to determine the strength and variability of controlling relationships over annual and decadal time scales. We also used this dataset to explore the statistical relationship between weathering fluxes and ENSO events.
### Climatic, geologic, and geomorphologic influences on weathering rates
Our long-term cation yields (corrected for sea salt contribution in precipitation and non-silicate contribution of Ca and Mg) for the PCW range from 2.85 to 19.3 t/km2/year, whereas our suspended sediment yields range from 124 to 1,494 t/km2/year (Table 1; Supplementary Tables 1 and 2). Our cation yields are in the lower range of those previously determined for watersheds across the Panamanian isthmus10, whereas our suspended sediment yields are within the range of those calculated using an earlier dataset for the PCW31. When we analyzed each river individually over the length of the study period using a Pearson correlation, all but one watershed showed a statistically significant (p < 0.05) positive relationship (ravg ≥ 0.75) between cation yields and mean annual precipitation at Lake Gatun (Fig. 2; Supplementary Table 3). We observed a similar, but slightly more variable relationship (ravg = 0.50) between sediment yields and mean annual rainfall (Supplementary Table 4). Although we identified no discernable pattern between correlation strength and geographic location, we did observe regional differences in weathering rates. For example, a two-tailed t-test revealed significant differences in both cation (α = 0.05, p = 0.038) and suspended sediment (α = 0.05, p = 0.011) yields between the north and south watersheds despite no noted differences in average discharge values (α = 0.05, p = 0.28). Our observed lack of statistically significant differences in discharge points to the importance of lithology, as well as other variables, in maintaining high chemical erosion rates. For example, watersheds on the north side of the canal are largely underlain by mafic to intermediate volcanic rocks compared to the largely sedimentary cover for those on the south side11. The importance of volcanic lithology in maintaining disproportionately high cation yields was previously documented as part of an isthmus-wide study10,11. While we observed a negative, and somewhat variable statistically significant relationship between cation yields and basin-wide mean annual temperature (ravg = − 0.39), this is likely due to increased cloud cover associated with precipitation events. This idea is supported by an observed negative statistical relationship between basin wide mean annual rainfall and temperature (r = − 0.47, p < 0.05) over the study period.
Previous studies have documented a strong relationship between physical and chemical weathering rates worldwide in both large catchments33 and SMR watersheds1,2,3,10. Analyzed collectively, our watersheds confirmed this linkage, with 7 of the 11 study years exhibiting a significant positive relationship between cation and sediment fluxes (ravg = 0.62, pavg ≤ 0.17) (Supplementary Tables 5 and 6). However, we observed notable differences in correlation strength when watersheds were analyzed individually over time. For example, we observed positive, yet slightly weaker correlations between these two parameters in the Chagres (r = 0.69, p = 0.02) and Boqueron (r = 0.69, p = 0.02) watersheds despite their location on the windward side of Atlantic range where the precipitation rate is corresponding high (> 3,000 mm/year) (Supplementary Table 4) This counterintuitive result could be explained by the fact that these watersheds are characterized by heavy forest cover (> 90%), which has been previously shown to increase infiltration and soil strength and, therefore, reduce surface runoff in tropical watersheds34,35.
### Relationship of weathering rates with land use/landcover (LULC) practices
Comparisons between weathering fluxes and LULC practices in tropical SMR watersheds have been limited in scale to date10,11,13, due to low temporal resolution geospatial data. However, annual LULC data [forest and mosaic (i.e., forest plus croplands)] measured for the PCW on an annual scale by the ESA Climate Change Initiative (CCI) allowed for their direct comparison with cation values over the 17-year period (The presence of large-scale cropland in only one of the watersheds prevented a comparison with LULC practice). Forest cover was the dominant LULC (> 50%) in all but one of our watersheds and comprised upwards of 95% of LULC in three watersheds (Supplementary Table 7). Our Pearson statistical analysis revealed a positive relationship (r ≥ 0.47, ravg = 0.63) between percent forest cover and cation fluxes for the entire study period (Supplementary Table 5). Alternatively, we observed mosaic LULC practices to exhibit a strong statistically-significant negative relationship (r ≤ − 0.54, ravg = − 0.74) with cation fluxes, with 11 of 18 years exhibiting statistical significance.
Our findings are in general agreement with those from a previous isthmus-wide study for Panama based on spot sampling or rivers over a decade10,11 and suggests land use practices have fundamentally altered hydrological flow pathways and water residence times in this tropical hydrological system. Previous studies in tropical forested watersheds have observed that some ~ 80–90% of rainfall infiltrates into the soil34, where roots can retain water during periods of soil saturation and release it throughout the dry season as baseflow, thus reducing total forest catchment runoff, but increasing flow consistency34,35. The concomitant increase in water residence time in contact with fresh mineral surfaces in the soil regolith allows for increased mineral dissolution and solute export. Conversely, soil compaction in agricultural pasture and croplands increase the likelihood of runoff, which is supported by previous studies in the PCW that show mosaic catchments produce 1.8 times more runoff than their forested counterparts35,36. Interestingly, the decrease in correlation strength between amount of forest cover and cation fluxes in our study watersheds over time coincided with an increase in percent forest cover in the study catchments over the same period (Fig. 3; Supplementary Tables 5 and 8). This counterintuitive pattern may be attributed to the fact that trees in abandoned mosaic plots, reverting back to forest, have not yet established root networks capable of altering hydrological flow pathways.
### Impact of ENSO on stream discharge and weathering fluxes
We hypothesized that both seasonality and ENSO conditions would influence stream discharge (and by cation and sediment fluxes) and, furthermore, that the two factors would interact. To evaluate the effect of ENSO conditions on stream discharge and weathering fluxes, we used the Oceanic Niño Index (ONI), defined as a 3-month running mean of SST anomalies in the Niño 3.4 region (5º S–58º N, 170º W–120º W)37. We justify the use of ONI versus the Southern Oscillation Index (SOI), as previous global compilations studies have observed good agreement between these two indices28 and that ONI allows for distinct classification of ENSO events (The National Oceanic and Atmospheric Association classifies El Niño and La Niña events as 3-month ONI running means either < − 0.5 or > + 0.5 than the long-term average, respectively). We modelled the influence of seasonal periods and ONI on river-discharge and weathering fluxes for the pooled dataset using a mixed-effects model, with seasonal interval (i.e., JFM, etc.) and ONI as fixed effects, and river (i.e., watershed) as a random effect. We accounted for temporal auto-correlation inherent in time-series sampling using an autoregressive AR(1) correlation-structure for residuals. Our model identified a significant interaction of seasonality and ONI on river discharge (p < 0.0001) and cation fluxes (p < 0.0001) and near significance on sediment fluxes (p = 0.056) (Supplementary Table 9). This decoupling between river discharge and sediment flux implies that discharge is not the sole process controlling the sediment flux of PCW rivers. A positive slope-estimate for seasonality is consistent with/reflects the general increase in precipitation from the dry to wet season transition, and corresponding increase in discharge. A negative slope-estimate for ONI suggests less discharge with higher ONI values, which agrees with previous hydrological analyses for the region23,28,38,39.
Our visual inspection of the PCW dataset identified elevated mean discharge and weathering fluxes for several seasonal timesteps during La Nina conditions (Fig. 4). We then performed a series of one-way ANOVA tests to determine statistically significant differences (p < 0.05) between La Niña, El Niño and neutral discharge as well as weathering fluxes. We confirmed statistical differences only for the December-January–February (DJF) and January–February-March (JFM) periods; with La Niña events exhibiting the highest average values for 9 of the 12 tri-monthly time periods (Fig. 4). Our subsequent series of ad-hoc Tukey tests on the dataset revealed significant differences between La Niña and El Niño (p < 0.001) and La Niña and neutral (p ≤ 0.001) discharge as well as weathering flux values for the DJF period (Fig. 5). Significant differences were also observed between weathering flux values and La Niña and neutral discharge (p ≤ 0.035) and La Nina and El Niño (p ≤ 0.002) discharge values for the JFM period. Previous hydrological studies in Panama and northern South America have identified a positive correlation between monthly average and/or daily maximum precipitation23,38, and stream discharge values23,28,38,39 with DJF SOI (i.e. drier El Niño and wetter La Niña); the time period when El Niño events typically achieve their maximum. In neighboring Columbia, strong seasonal moisture advection anomalies created by the winds of the “CHOCO jet” have been offered as a possible explanation for a robust regional relationship between precipitation and the DJF SOI time interval23, while weaker correlations for March–April-May are potentially explained by the fact that ENSO is either just starting to develop or is declining at that time of the year23.
Our observed ENSO-driven deviations in stream discharge have a demonstrable impact on annual cation weathering fluxes. For example, the four highest basin-wide annual average cation yields (Supplementary Table 1; watersheds with 18 year record, only) were recorded in years dominated by strong La Niña events (1999, 2007, 2010, and 2011). This trend was also supported by the sediment yield records for overlapping years (2007, 2010, 2012; Table S2). We also observed that the timing and relative strength of the La Niña event is also an important factor as these same four years exhibited ONI values < 1 through most of the wet season. The only exception to this pattern was in 1998, which was also marked by a strong El Niño event (ONI values > 1) during the first half of the year. Furthermore, a particularly strong La Niña event that occurred during the entirety of the 2010 wet season resulted in average annual cation and sediment fluxes (15–76% and 18%–404%, respectively) that were substantially greater than their respective long-term averages. Exceptionally high rainfall during a 7–8 December 2010 storm, having a return period estimate of 2,000 year40, resulted in numerous landslides throughout the PCW40,41 and river sediment loads that overwhelmed the Panama City water treatment facilities40. High rainfall events associated with La Niña periods have also been linked to an increase in landslides in Columbia and Venezuela42,43. Landslides have been previously shown to play a critical role in maintaining the disproportionately high sediment and cation yields in SMR catchments44,45, and thus are likely playing a similar role in the PCW.
Conversely, the strong El Niño event observed throughout the 2015 calendar year coincided with our lowest basin-wide average annual cation weathering rate over the 18-year study period. This event was also marked by anomalously high ONI values (i.e. > 2) during the latter half of the year and anecdotal stories regarding water shortages across the canal zone. However, we observed a more variable hydrological response to both neutral and El Niño conditions/events throughout the remainder of the study period, which is supported by our statistical analysis. Previous studies have documented an 8% decrease in rainfall in almost all regions of Panama during 13 El Niño episodes between 1920 and 198319 and below median Lake Gatun inflow in 17 of 20 instances when SST anomalies in the NINO3 region exceed 0.6 ºC27. While we did not observe a similar overall hydrological response to El Niño events, this might be in part due to the relative lack of strong El Niño events over our 18-year study period37.
ENSO driven deviations in weathering rates are also apparent through comparisons to previous studies utilizing shorter-term data sets. For example, a recent determination of cation weathering rates for the Chagres and Pequini watersheds10,11 calculated long-term values 80% and 48% greater, than those of this study. Unsurprisingly, 63% of the spot samples and instantaneous discharge values we used to construct the associated weathering equations for the study were collected during La Niña events. With atmospheric modeling predicting an increase in the frequency of both extreme El Niño46 and La Niña events47 due to greenhouse gas warming, caution will need to be employed when evaluating future empirically-based weathering studies.
The new findings presented here not only confirm the need for long-term weathering studies in tropical regions, but also suggest that caution should be employed when incorporating data from regions influenced by multi-year or decadal climate patterns into global compilation studies. While much progress has been made over recent decades on the determination of weathering fluxes from high-yielding terrains in the Caribbean7,9,15, Central America10,11, southeast Asia8, and Oceania2, these are all regions heavily influenced by ENSO events. Interestingly, these regions have also been shown to play a disproportionate role in the delivery of dissolved48,49 and particulate organic carbon48 to the global ocean, and by extension, other nutrients such as PO4. While ENSO events have been shown to affect vertical mixing and associated upwelling of nutrients in affected ocean areas50, our data suggests nearshore locales will also be impacted by concomitant changes in nutrient delivery from the coast. For example, decreased upwelling of nutrients in the eastern Pacific during El Niño periods would be compounded with a corresponding decrease in nutrient export from land, thus exacerbating nutrient limitation in these locales.
## Methods
### Watershed flux calculations
We obtained annual hydrochemical data (1998–2015) for the eight sub-basins of GPCW included in this study from Panama Canal Authority (Atoridad del Canal de Panamá) annual hydrologic reports. A specific breakdown of data availability by sub-basin is provided in Supplementary Table 10.
Prior to its use in denudation calculations, streamwater cation concentrations were corrected for sea-salt contribution as follows: non–sea-salt concentration = measured concentration – (sea-salt correction ratio to Cl)*(Cl). Following Murphy and Stallard (2012), we used the following species-to-chloride ratios for this adjustment: Na+/Cl = 0.85251, K+/Cl = 0.01790, Mg2+/Cl = 0.09689, and Ca2+/Cl = 0.01879. We further adjusted for non-silicate contributions of Ca and Mg using volcanic end-member ratios of 0.5 for Ca/Na and 0.5 for Mg/Na, previously established33. Our use of these values instead of a continental granitic counterpart is justified by the high Ca/Na and Mg/Na concentrations in both waters and the mafic to andesitic character of igneous rocks across central Panama10,11.
Using previously established methodology10, we employed a multistep process whereby individual cation concentrations in the data set were first multiplied by the corresponding average daily discharge value in order to produce an instantaneous chemical denudation value. We then prepared x–y plots of instantaneous denudation values with respective discharge to produce specific elemental yield determination equations (Supplementary Table 10). Our approach is supported by high average correlation (r2) values observed for the instantaneous denudation value-discharge comparisons (Ca = 0.80; Mg = 0.85; Na = 0.82; and K = 0.89). Finally, we substituted daily discharge values over each of the 18 years of record into the equations, and our calculated denudation values were subsequently divided by watershed area to produce annual and long-term estimates of cation weathering rates. For the suspended sediment calculations, we multiplied average daily discharge values by the corresponding average daily suspended sediment concentrations to produce a daily sediment load. Our daily suspended sediment loads were then compiled and subsequently divided by watershed area to produce annual and long-term estimates of suspended sediment yields.
### Hydrological modeling and morphometric analysis of the Greater Panama Canal Watershed in GRASS GIS
The Geographic Resources Analysis Support System (GRASS) GIS dataset used in this study contains topographic, hydrologic, landcover, temperature, and precipitation data and is available on the Open Science Framework at https://osf.io/d5h7s under the CC0 1.0 Universal license. Our topographic data was derived from Japan Aerospace Exploration Agency (JAXA)'s 30 m resolution Advanced Land Observation Satellite (ALOS) Global Digital Surface Model51,52. We derived temperature data from the Global Historical Climatology Network (GHCN) and Climate Anomaly Monitoring System (CAMS) global monthly land surface temperature data from January 1948 to April 2018 gridded at 0.5 × 0.5 degree resolution53. We derived precipitation data from the Climate Prediction Centers (CPC) Merged Analysis of Precipitation (CMAP) global monthly precipitation data from January 1979 to April 2018 gridded at 2.5 × 2.5 degree resolution54. Our landcover data was derived from the ESA Climate Change Initiative (CCI) Landcover Dataset55.
GRASS GIS—a free and open source GIS—was used for hydrological modeling and morphometric analysis. For the sake of reproducibility in open science, our geospatial computations in GRASS GIS were automated with Python. The open source code is available under the GNU General Public License (GPL) 2.0 on the Open Science Framework at https://osf.io/bx5y6/ and on GiHub at https://github.com/baharmon/panama_hydrological_modeling.
The ALOS Global Digital Surface Model for the Greater Panama Canal Watershed study area was hydrologically conditioned to reduce noise56. Multiple flow direction (MFD) flow accumulation was computed over the hydrologically conditioned digital surface model (DSM) using an AT least-cost search algorithm to traverse depressions and obtacles57. We then extracted the stream network from the DSM and flow accumulation, and the stream gage stations were snapped onto the stream network. We derived watershed basin outlets at the stream gages from the flow direction of the stream network. Landcover data from the ESA CCI Landcover Dataset for 1998 through 2015 were reclassified and re-categorized as shown in Supplementary Tables 11 and 12.
We computed topographic, hydrological, and landcover analyses for each basin. The topographic parameters analyzed included elevation, slope, and aspect. Our hydrological parameters included flow accumulation, stream geometry, stream distance, stream order, and stream statistics. Our landcover parameter was the percentage of landcover in each class over the study period. Our topographic, hydrological, and landcover maps were visualized with shaded relief based on the composite of direct illumination derived from topographic relief and diffuse illumination derived from the sky-view factor58.
### Statistical analysis
We performed all statistical tests using JMP (Pro version 14.0.0; SAS Institute, Cary, NC). Data which did not meet conditions for normality using a Shapiro–Wilk check test were log transformed prior to analysis. Sample size (n) associated with our statistical analyses are provided in the respective supplementary data tables, unless otherwise noted.
We modelled the influence of seasonal period (average tri-monthly interval) and ONI37 on the river-discharge (pooled tri-monthly measures n = 1,284), cation flux (n = 1,284), and sediment flux (n = 780) datasets using a linear mixed-effects model (Eqs. 1, 2 and 3):
$${\text{Q}}_{{{\text{tr}}}} = \, \alpha + \beta_{{1}} {\text{ONI }} + \, \beta_{{2}} {\text{TMI }} + \, \beta_{{3}} {\text{ONI}}*{\text{TMI }} + \, \left( {\mu_{{\text{r}}} + {\text{ error}}} \right),$$
(1)
$$Cat_{tr} = \, \alpha + \beta_{{1}} {\text{ONI }} + \, \beta_{{2}} {\text{TMI }} + \, \beta_{{3}} {\text{ONI}}*{\text{TMI }} + \, \left( {\mu_{{\text{r}}} + {\text{ error}}} \right),$$
(2)
$$Sed_{tr} = \, \alpha + \beta_{{1}} {\text{ONI }} + \, \beta_{{2}} {\text{TMI }} + \, \beta_{{3}} {\text{ONI}}*{\text{TMI }} + \, \left( {\mu_{{\text{r}}} + {\text{ error}}} \right),$$
(3)
where Qtr,, Cattr , and Sedtr, represents the estimated discharge, cation flux, and sediment flux, respectively, for a particular tri-monthly period over the 18-year dataset; t, for a particular river; r (Only streams with a full 18 years of data were included in our statistical model for discharge and cation fluxes, which resulted in a total of six river basis. The included sediment record spans eleven years for the same basins). These discharge and weathering flux values were modelled as a function of TMI, a nominal variable indicating the tri-monthly seasonal interval within a year, ONI, a continuous variable, and ONI*TMI representing their interaction. Both visual inspection (graph boxplots of ONI as y-axis and DJF, etc. as x-axis, for the 18-year dataset) and Kendall’s tau suggested little association between ONI and trimonthly interval (Tb = − 0.0385, p = 0.43), therefore we used both as explanatory variables in our statistical analyses. In addition to these fixed effects, µr represents the random effect due to discharge values being associated with the rth river and ε is the random error. River was designated as a random factor, as they are a small sample of the total number found in Panama and were not chosen with some explicit comparison in mind.
We accounted for temporal auto-correlation inherent in time-series sampling using an autoregressive AR(1) correlation-structure for residuals. This auto-correlation structure dictates that the smaller the interval between two measures in a time-series, the greater the dependence between them. Multiple model runs were computed evaluating the impact of both different auto-correlation structures, and different random effects, including ARMA(0,0) error structure which does not incorporate any auto-correlation. We used the Akaike information criterion (AIC) to determine the best model, and we report slope estimates for the fixed-effect parameters from this model, calculated using the restricted maximum likelihood method (REML) (Supplementary Table 9).
## References
1. Jacobson, A. D. & Blum, J. D. Relationship between mechanical erosion and atmospheric CO2 consumption in the New Zealand Southern Alps. Geology 31, 865 (2003).
2. Lyons WB, Carey AE, Hicks DM, Nezat CE. Chemical weathering in high-sediment-yielding watersheds, New Zealand. J. Geophys. Res. 110 (2005).
3. Gaillardet, J. et al. Orography-driven chemical denudation in the Lesser Antilles: Evidence for a new feed-back mechanism stabilizing atmospheric CO2. Am. J. Sci. 311, 851–894 (2011).
4. Louvat, P. & Allègre, C. J. Present denudation rates on the island of Réunion determined by river geochemistry: Basalt weathering and mass budget between chemical and mechanical erosions. Geochim. Cosmochim. Acta 61, 3645–3669 (1997).
5. Dessert, C. et al. Erosion of Deccan Traps determined by river geochemistry: Impact on the global climate and the 87Sr/86Sr ratio of seawater. Earth Planet. Sci. Lett. 188, 459–474 (2001).
6. Dessert, C., Dupré, B., Gaillardet, J., François, L. M. & Allègre, C. J. Basalt weathering laws and the impact of basalt weathering on the global carbon cycle. Chem. Geol. 202, 257–273 (2003).
7. Rad, S., Louvat, P., Gorge, C., Gaillardet, J. & Allègre, C. J. River dissolved and solid loads in the Lesser Antilles: New insight into basalt weathering processes. J. Geochem. Explor. 88, 308–312 (2006).
8. Schopka, H. H., Derry, L. A. & Arcilla, C. A. Chemical weathering, river geochemistry and atmospheric carbon fluxes from volcanic and ultramafic regions on Luzon Island, the Philippines. Geochim. Cosmochim. Acta 75, 978–1002 (2011).
9. Goldsmith, S. T. et al. Stream geochemistry, chemical weathering and CO2 consumption potential of andesitic terrains, Dominica, Lesser Antilles. Geochim. Cosmochim. Acta 74, 85–103 (2010).
10. Goldsmith, S. T. et al. Evaluation of controls on silicate weathering in tropical mountainous rivers: Insights from the Isthmus of Panama. Geology 43, 563–566 (2015).
11. Harmon, R. S. et al. Linking silicate weathering to riverine geochemistry—A case study from a mountainous tropical setting in west-central Panama. Geol. Soc. Am. Bull. 128, 1780–1812 (2016).
12. McAdams, B. C., Trierweiler, A. M., Welch, S. A., Restrepo, C. & Carey, A. E. Two sides to every range: Orographic influences on CO2 consumption by silicate weathering. Appl. Geochem. 63, 472–483 (2015).
13. Rad, S., Cerdan, O., Rivé, K. & Grandjean, G. Age of river basins in Guadeloupe impacting chemical weathering rates and land use. Appl. Geochem. 26, S123–S126 (2011).
14. Lloret, E. et al. Comparison of dissolved inorganic and organic carbon yields and fluxes in the watersheds of tropical volcanic islands, examples from Guadeloupe (French West Indies). Chem. Geol. 280, 65–78 (2011).
15. Murphy, S.F. & Stallard, R.F. Water Quality and Landscape Processes of Four Watersheds in Eastern Puerto Rico. U.S. Geological Survey Professional Paper (2012).
16. Hilley, G. E. & Porder, S. A framework for predicting global silicate weathering and CO2 drawdown rates over geologic time-scales. Proc. Natl. Acad. Sci. USA. 105, 16855–16859 (2008).
17. Hartmann, J., Jansen, N., Dürr, H. H., Kempe, S. & Köhler, P. Global CO2-consumption by chemical weathering: What is the contribution of highly active weathering regions?. Glob. Planet. Change 69, 185–194 (2009).
18. Ibarra, D. E. et al. Differential weathering of basaltic and granitic catchments from concentration–discharge relationships. Geochim. Cosmochim. Acta 190, 265–293 (2016).
19. Estoque, M. A., Luque, J., Chandeck Monteza, M. & Garcia, J. Effects of El Niño on Panama rainfall. Geofisica Int. 24, 355–381 (1985).
20. Waylen, P. R., Caviedes, C. N. & Quesada, M. E. Interannual variability of monthly precipitation in Costa Rica. J. Clim. 9, 2606–2613 (1996).
21. Poveda, G., Waylen, P. R. & Pulwarty, R. S. Annual and inter-annual variability of the present climate in northern South America and southern Mesoamerica. Palaeogeogr. Palaeoclimatol. Palaeoecol. 234, 3–27 (2006).
22. Castillo, M. & Muñoz-Salinas, E. Controls on peak discharge at the lower course of Ameca River (Puerto Vallarta graben, west-central Mexico) and its relation to flooding. CATENA 151, 191–201 (2017).
23. Poveda, G., Jaramillo, A., Gil, M. M., Quiceno, N. & Mantilla, R. I. Seasonally in ENSO-related precipitation, river discharges, soil moisture, and vegetation index in Colombia. Water Resour. Res. 37, 2169–2178 (2001).
24. Poveda, G. et al. Linking long-term water balances and statistical scaling to estimate river flows along the drainage network of Colombia. J. Hydrol. Eng. 12, 4–13 (2007).
25. Condit, R. et al. Tropical forest dynamics across a rainfall gradient and the impact of an El Niño dry season. J. Trop. Ecol. 20, 51–72 (2004).
26. Lachniet, M. S. A 1500-year El Niño/southern oscillation and rainfall history for the Isthmus of Panama from speleothem calcite. J. Geophys. Res. 109 (2004).
27. Graham, N. E., Georgakakos, K. P., Vargas, C. & Echevers, M. Simulating the value of El Niño forecasts for the Panama Canal. Adv. Water Resour. 29, 1665–1677 (2006).
28. Ward, P. J., Beets, W., Bouwer, L. M., Jeroen C. J. & Renssen, H. Sensitivity of river discharge to ENSO. Geophys. Res. Lett. 37 (2010).
29. Condit, R. et al. The status of the Panama Canal Watershed and its biodiversity at the beginning of the 21st century. Bioscience 51, 389 (2001).
30. Harmon, R.S. The Río Chagres, Panama—Multidisciplinary Profile of a Tropical Watershed (ed. Harmon, R.S.), Chap. 2 (Springer, 2005).
31. Stallard, R.S. & Kinner, D.A. The Río Chagres, Panama—Multidisciplinary Profile of a Tropical Watershed (ed. Harmon, R.S.), Chap. 15 (Springer, 2005).
32. Instituto Geográfico Nacional ‘Tommy Guardia’ (Panama-Stad). Atlas Nacional de la Republica de Panama. (1988).
33. Gaillardet, J., Dupré, B., Louvat, P. & Allègre, C. J. Global silicate weathering and CO2 consumption rates deduced from the chemistry of large rivers. Chem. Geol. 159, 3–30 (1999).
34. Bruijnzeel, L. A. Hydrological functions of tropical forests: Not seeing the soil for the trees?. Agric. Ecosyst. Environ. 104, 185–228 (2004).
35. Ogden, F. L., Crouch, T. D., Stallard, R. F. & Hall, J. S. Effect of land cover and use on dry season river runoff, runoff efficiency, and peak storm runoff in the seasonal tropics of Central Panama. Water Resour. Res. 49, 8443–8462 (2013).
36. Litt, G. F., Gardner, C. B., Ogden, F. L. & Berry Lyons, W. Hydrologic tracers and thresholds: A comparison of geochemical techniques for event-based stream hydrograph separation and flowpath interpretation across multiple land covers in the Panama Canal Watershed. Appl. Geochem. 63, 507–518 (2015).
37. National Oceanic Atmospheric Administration National Weather Service. Center for Climatic Prediction (Monthly Oceanic Nino Index, Camp Springs, 2020).
38. Dettinger, M. D., Battisti, D. S., Garreaud, R. D., McCabe, G. J. & Bitz, C. M. Interhemispheric effects of interannual and decadal ENSO-like climate variations on the Americas. In Interhemispheric Climate Linkages 1–16 (2001).
39. Misir, V., Arya, D. S. & Murumkar, A. R. Impact of ENSO on river flows in Guyana. Water Resour. Manag. 27, 4611–4621 (2013).
40. Shamir, E., Georgakakos, K. P. & Murphy, M. J. Frequency analysis of the 7–8 December 2010 extreme precipitation in the Panama Canal Watershed. J. Hydrol. 480, 136–148 (2013).
41. Wohl, E. & Ogden, F. L. Organic carbon export in the form of wood during an extreme tropical storm, Upper Rio Chagres, Panama. Earth Surf. Process. Landforms https://doi.org/10.1002/esp.3389 (2013).
42. Klimeš, J. & Rios Escobar, V. A landslide susceptibility assessment in urban areas based on existing data: An example from the Iguaná Valley, Medellín City, Colombia. Nat. Hazards Earth Syst. Sci. 10, 2067–2079 (2010).
43. Sepúlveda, S. A. & Petley, D. N. Regional trends and controlling factors of fatal landslides in Latin America and the Caribbean. Nat. Hazards Earth Syst. Sci. 15, 1821–1833 (2015).
44. Riebe, C. S. Tectonic and Climatic Control of Physical Erosion Rates and Chemical Weathering Rates in the Sierra Nevada (Inferred from Cosmogenic Nuclides and Geochemical Mass Balance, California, 2000).
45. Dadson, S. et al. Hyperpycnal river flows from an active mountain belt. J. Geophys. Res. Earth Surf. 110 (2005).
46. Cai, W. et al. Increasing frequency of extreme El Niño events due to greenhouse warming. Nat. Clim. Change 4, 111–116 (2014).
47. Cai, W. et al. Increased frequency of extreme La Niña events under greenhouse warming. Nat. Clim. Change 5, 132–137 (2015).
48. Lyons, W. B., Berry Lyons, W., Nezat, C. A., Carey, A. E. & Murray Hicks, D. Organic carbon fluxes to the ocean from high-standing islands. Geology 30, 443 (2002).
49. Carey, A. E. Organic carbon yields from small, mountainous rivers, New Zealand. Geophys. Res. Lett. 32 (2005).
50. Barber, R. T. & Chavez, F. P. Biological consequences of El Nino. Science 222, 1203–1210 (1983).
51. Tadono, T. et al. Precise global DEM generation by ALOS PRISM. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. II-4, 71–76 (2014).
52. Takaku, J., Tadono, T. & Tsutsui, K. Generation of high resolution global DSM from ALOS PRISM. In ISPRS—International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4 243–248 (2014).
53. Fan, Y. & van den Dool, H. A global monthly land surface air temperature analysis for 1948–present. J. Geophys. Res. 113 (2008).
54. Xie, P. & Arkin, P. A. Global precipitation: A 17-year monthly analysis based on gauge observations, satellite estimates, and numerical model outputs. Bull. Am. Meteorol. Soc. 78, 2539–2558 (1997).
55. Hollmann, R. et al. The ESA climate change initiative: Satellite data records for essential climate variables. Bull. Am. Meteorol. Soc. 94, 1541–1552 (2013).
56. Lindsay, J. B. & Creed, I. F. Removal of artifact depressions from digital elevation models: Towards a minimum impact approach. Hydrol. Process. 19, 3113–3126 (2005).
57. Ehlschlaeger, C. et al. Conflating survey data into sociocultural indicator maps. Army Eng. Rsch. Dev. Center Champaign U. S. https://doi.org/10.21079/11681/29921 (2018).
58. Zakšek, K., Oštir, K. & Kokalj, Ž. Sky-view factor as a relief visualization technique. Remote Sens. 3, 398–415 (2011).
## Author information
Authors
### Contributions
S.G. and R.H. conceived the project. The text was written by D.S., S.G., and R.H. with contributions from B.H. and J.E., and feedback from all authors. All figures were assembled by either D.S., S.G. or B.H. with input from all authors. D.S, S.G. and B.H. are responsible for the data analysis.
### Corresponding author
Correspondence to Steven T. Goldsmith.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Smith, D.F., Goldsmith, S.T., Harmon, B.A. et al. Physical controls and ENSO event influence on weathering in the Panama Canal Watershed. Sci Rep 10, 10861 (2020). https://doi.org/10.1038/s41598-020-67797-7
• Accepted:
• Published:
• DOI: https://doi.org/10.1038/s41598-020-67797-7
|
{}
|
# Partial Fraction Integration
1. May 6, 2007
### tanky322
1. The problem statement, all variables and given/known data
integral((3X+4)/((x^2+4)(3-x))
2. Relevant equations
3. The attempt at a solution
I know I should be using partial fractions for this problem but the x^2 in the denom. screws me up. I think the partial fraction should be:
(Ax+B)/(x^2+4) + C/(3-x)
Then:
(Ax+B)(3-x)+C(x^2+4)
Then:
(x^2+4)(3-x)=-Bx-Ax^2+3B+3Ax+Cx^2+C4
So now Im completely lost assuming im even on the right track.
Thanks Alot!!!
2. May 6, 2007
### Office_Shredder
Staff Emeritus
This makes no sense. Your first steps aren't actual equations, which is confusing you.
(Ax+B)/(x^2+4) + C/(3-x) = (3x+4)/(x^2+4)(3-x)
Multiplying by the two denomiator parts gives
(Ax+B)(3-x) + c(x^2+4) = 3x+4
This is for all x, so plug in clever values (like x=3), to solve for a, b and c
3. May 6, 2007
### cristo
Staff Emeritus
I'm not sure what you're doing! You have $$\frac{3x+4}{(x^2+4)(3-x)}=\frac{Ax+B}{x^2+4}+\frac{C}{3-x}$$. Now multiply through by the denominator of the LHS to yield $$3x+4=(Ax+B)(3-x)+C(x^2+4)$$. Now you need to find three equations to obtain A B and C.
4. May 6, 2007
### tanky322
Ok it makes more sense now, this partial fraction stuff drives me nuts!:yuck:
Thanks guys!
5. May 7, 2007
### tanky322
Ok so I just worked on this problem again.... It was driving me nuts! This is what I came out with,
A=1
B=0
C=1
So for a final answer I came up with,
(Ln(x^2+4))/2+Ln(3-x)+ C
Is that right??
Thanks,
Andy
6. May 7, 2007
### Curious3141
Partial fraction decomposition right. Integration wrong. The second term should have a negative sign. The answer should be $$\frac{1}{2}\ln{(x^2+4)} - \ln{|3-x|} + C$$. Don't forget the modulus sign on the second term (it's optional for the first since the square expression is nonnegative for real x).
|
{}
|
In a variety of graph embedding methods, given association strength $w_{ij} \geq 0$ of given data pair $(\boldsymbol{x}_i,\boldsymbol{x}_j) \in \mathbb{R}^p \times \mathbb{R}^p$ is predicted by the kernel function $\mu_{\boldsymbol{\theta}}(\boldsymbol{x}_i,\boldsymbol{x}_j)$. Whereas the kernel function $\mu_{\boldsymbol{\theta}}$ can be determined by optimizing log-likelihood of some probabilistic models, the optimization relies on the quality of given association strength $\{w_{ij}\}$; their result strongly affected by noises in $\{w_{ij}\}$. To relieve the adverse effect of the noises, Okuno and Shimodaira (AISTATS2019, to appear) proposes β-graph embedding (β-GE) that employs a newly proposed robust moment matching β-score function (MMBSF). MMBSF includes negative log-likelihood of Poisson-based probabilistic model as β=0.
In the following, we plot feature vectors with noisy $\{w_{ij}\}$, obtained by (i) existing likelihood-based GE that corresponds to β=0, (ii) proposed GE with β=0.5, and (iii) proposed GE with β=1. Each point represents each vector whose color shows its cluster, and grey lines show associated data pairs $\{(i,j) \mid w_{ij}>0\}$. Proposed robust GE is able to distinguish colored clusters even if $\{w_{ij}\}$ is noisy, whereas the existing GE cannot.
MMBSF is related to unnormalized β-score function (UBSF; see Kanamori and Fujisawa, 2015) that robustly measures the discrepancy between two non-negative functions. Technically speaking, MMBSF has two advantages (i) also robust against distributional misspecification of $w_{ij} \mid \boldsymbol{x}_i,\boldsymbol{x}_j$ and (ii) computationally feasible, in contrast to naively applying UBSF to the probabilistic models for graph embedding.
|
{}
|
# RS-485 using USART or UART port on STM32
On STM32F405 MCUs there are USART ports as well as UART ports available. If I need to implement RS-485 communication then which one of these should be used, USART or UART? Or are both of them equally good for RS-485 communication?
I have searched through the datasheet as well as reference manual for the MCU, but it does not provide additional information regarding the RS-485 implementation.
• Why is this question tagged with "3.3v"? How is this relevant? – Peter Mortensen Feb 25 at 12:27
|
{}
|
# Regularity lemmas for clustering graphs
-
Fan Chung, University of California, San Diego
Fine Hall 224
A fundamental tool in graph theory is Szemeredi's regularity lemma which asserts that any dense graph can be partitioned into finitely many parts so that almost all edges are contained in the union of bipartite subgraphs between pairs of the parts and these bipartite subgraphs are random-like under the notion of epsilon-regularity. Here, we consider a variation of the regularity lemma for graphs with a nontrivial clustering coefficient. The clustering coefficient is the ratio of the number of triangles and the number of paths of length 2 in a graph. Note many real-world graphs have large clustering coefficients and such a clustering effect is one of the main characteristics of the so-called small world phenomenon''. In this talk, we give a regularity lemma for clustering graphs without any restriction on edge density. We also discuss several generalizations of the regularity lemma and mention some related problems.
|
{}
|
# Particle Physics Planet
## October 03, 2015
### Georg von Hippel - Life on the lattice
Fundamental Parameters from Lattice QCD, Last Days
The last few days of our scientific programme were quite busy for me, since I had agreed to give the summary talk on the final day. I therefore did not get around to blogging, and will keep this much-delayed summary rather short.
On Wednesday, we had a talk by Michele Della Morte on non-perturbatively matched HQET on the lattice and its use to extract the b quark mass, and a talk by Jeremy Green on the lattice measurement of the nucleon strange electromagnetic form factors (which are purely disconnected quantities).
On Thursday, Sara Collins gave a review of heavy-light hadron spectra and decays, and Mike Creutz presented arguments for why the question of whether the up-quark is massless is scheme dependent (because the sum and difference of the light quark masses are protected by symmetries, but will in general renormalize differently).
On Friday, I gave the summary of the programme. The main themes that I identified were the question of how to estimate systematic errors, and how to treat them in averaging procedures, the issues of isospin breaking and scale setting ambiguities as major obstacles on the way to sub-percent overall precision, and the need for improved communication between the "producers" and "consumers" of lattice results. In the closing discussion, the point was raised that for groups like CKMfitter and UTfit the correlations between different lattice quantities are very important, and that lattice collaborations should provide the covariance matrices of the final results for different observables that they publish wherever possible.
### Clifford V. Johnson - Asymptotia
Benedict
I call this part of the garden Benedict, for obvious reasons... right?
The post Benedict appeared first on Asymptotia.
## October 02, 2015
### Emily Lakdawalla - The Planetary Society Blog
New Horizons releases new color pictures of Charon, high-resolution lookback photo of Pluto
Now that New Horizons is regularly sending back data, the mission is settling into a routine of releasing a set of captioned images on Thursdays, followed by raw LORRI images on Friday. The Thursday releases give us the opportunity to see lovely color data from the spacecraft's Ralph MVIC instrument. This week, the newly available color data set covered Charon.
### Christian P. Robert - xi'an's og
moral [dis]order
“For example, a religiously affiliated college that receives federal grants could fire a professor simply for being gay and still receive those grants. Or federal workers could refuse to process the tax returns of same-sex couples simply because of bigotry against their marriages. It doesn’t stop there. As critics of the bill quickly pointed out, the measure’s broad language — which also protects those who believe that “sexual relations are properly reserved to” heterosexual marriages alone — would permit discrimination against anyone who has sexual relations outside such a marriage. That would appear to include women who have children outside of marriage, a class generally protected by federal law.” The New York Time
An excerpt from this week New York Time Sunday Review editorial about what it qualifies as “a nasty bit of business congressional Republicans call the First Amendment Defense Act.” A bill which first line states to be intended to “prevent discriminatory treatment of any person on the basis of views held with respect to marriage” and which in essence would allow for discriminatory treatment of homosexual and unmarried couples not to be prosecuted. A fine example of Newspeak if any! (Maybe they could also borrow Orwell‘s notion of a Ministry of Love.) Another excerpt of the bill that similarly competes for Newspeak of the Year:
(5) Laws that protect the free exercise of religious beliefs and moral convictions about marriage will encourage private citizens and institutions to demonstrate tolerance for those beliefs and convictions and therefore contribute to a more respectful, diverse, and peaceful society.
This reminded me of a story I was recently told me about a friend of a friend who is currently employed by a Catholic school in Australia and is afraid of being fired if found being pregnant outside of marriage. Which kind of “freedom” is to be defended in such “tolerant” behaviours?!
Filed under: Kids, Travel Tagged: 1984, bigotry, discrimination, George Orwell, Newspeak, same-sex marriage, The New York Times
### Tommaso Dorigo - Scientificblogging
Thank You Guido
It is with great sadness that I heard (reading it here first) about the passing away of Guido Altarelli, a very distinguished Italian theoretical physicist. Altarelli is best known for the formulas that bear his name, the Altarelli-Parisi equations (also known as DGLAP since it was realized that due recognition for the equations had to be given also to Dokshitzer, Gribov, and Lipatov). But Altarelli was a star physicist who gave important contributions to Standard Model physics in a number of ways.
### Christian P. Robert - xi'an's og
Argentan, 30th and 17th and 7th edition(s)
When I started the ‘Og, in 2008, I was about to run the 23rd edition of the Argentan half-marathon… Seven years later, I am once again getting ready for the race, after a rather good training season, between the mountains of the North Cascade and the track of Malakoff. with the last week in England, Holland, and Canada having seen close to two trainings a day. (Borderline stress injury, maybe!) Weather does not look too bad this year, so we’ll see tomorrow how I fare against myself (and the other V2 runners, incidentally!).
Filed under: Running, Travel Tagged: Argentan, France, half-marathon, Malakoff, Normandy, North Cascades National Park, veteran (V2)
### Symmetrybreaking - Fermilab/SLAC
The burgeoning field of neutrino astronomy
A new breed of experiments seeks sources of cosmic rays and other astrophysics phenomena.
Ghostlike subatomic particles called neutrinos could hold clues to some of the greatest scientific questions about our universe: What extragalactic events create ultra-high-energy cosmic rays? What happened in the first seconds following the big bang? What is dark matter made of?
Scientists are asking these questions in a new and fast-developing field called neutrino astronomy, says JoAnne Hewett, director of Elementary Particle Physics at SLAC National Accelerator Laboratory.
“When I was a graduate student I never thought we’d be thinking about neutrino astronomy,” she says. “Now not only are we thinking about it, we’re already doing it. At some point it will be a standard technique.”
Neutrinos, the most abundant massive particles in the universe, are produced in a multitude of different processes. The new neutrino astronomers go after several types of neutrinos: ultra-high-energy neutrinos and neutrinos from supernovae, which they can already detect, and low-energy ones they have only measured indirectly so far.
“Every time we look for these astrophysical neutrinos, we’re hoping to learn two things,” says André de Gouvêa, a theoretical physicist at Northwestern University: what high-energy neutrinos can tell us about the processes that produced them, and what low-energy neutrinos can tell us about the conditions of the early universe.
#### Ultra-high-energy neutrinos
At the ultra-high-energy end of the spectrum, researchers hope to follow cosmic neutrinos like a trail of bread crumbs back to their sources. They are thought to originate in the universe’s most powerful, natural particle accelerators, such as supermassive black holes.
“We’re confident we’ve seen neutrinos coming from outside (our galaxy)—astrophysical sources,” says Kara Hoffman, a physics professor at the University of Maryland. She is a member of the international collaboration for IceCube, the largest neutrino telescope on the planet, which uses a cubic kilometer of South Pole ice as a massive, ultrasensitive detector.
Scientists have been tracking high-energy particles from space for decades. But cosmic neutrinos are different: Because they are neutral particles, they travel in a straight line, unaffected by the magnetic fields of space.
IceCube collaborators are exploring whether there is a correlation between ultra-high-energy neutrino events and observations of incredibly intense releases of energy known as gamma-ray bursts. Scientists also hope to learn whether there is a correlation between these neutrino events and with theorized phenomena known as gravitational waves.
Alexander Friedland, a theorist at SLAC, says high-energy neutrinos (which are less energetic than ultra-high-energy neutrinos) can provide a useful window into physics at the earliest stages of supernovae explosions.
“Neutrinos tell you about the explosion engine, and what happens later when the shock goes through,” Friedland says. “These are very rich conditions that you can never make on Earth. This is an amazing experiment that nature made for us.”
With modern detectors it may be possible to detect thousands of neutrinos and to reconstruct their energy on a second-by-second basis.
“Neutrinos basically give you a different eye to look at the universe and a unique probe of new physics,” Friedland says.
#### Low-energy neutrinos
At the low-energy end of the spectrum, researchers hope to find “relic” neutrinos produced at the start of the universe, leftovers from the big bang. Their energy is expected to be more than a quadrillion times lower than the highest-energy neutrinos.
The lower the energy of the neutrino, however, the harder it is to detect. So for now, the cosmic neutrino background remains somewhat out of reach.
“We already know a lot about it, even though we’ve never seen it directly,” de Gouvêa says. “If we look at the universe at very large scales, we can only explain things if this background exists. We can safely say: ‘Either this cosmic neutrino background exists, or there is something out there that behaves exactly like neutrinos do.’”
The European Space Agency’s Planck satellite has helped to shape our understanding of this relic neutrino background, and the planned ground-based Large Synoptic Survey Telescope will provide new data. These surveys provide bounds on the quantity and interaction of these relic neutrinos, and can give us information about neutrino mass.
As detectors become more sensitive, researchers may also learn whether a theorized particle called a “sterile neutrino” may be a component in dark matter, the invisible stuff we know accounts for most of the mass of the universe.
Some proposed experiments, such as PTOLEMY at Princeton Plasma Physics Laboratory and the Project 8 collaboration, led by scientists at the Massachusetts Institute of Technology and University of California, Santa Barbara, are working to establish properties of these neutrinos by watching for evidence of their production in a radioactive form of hydrogen called tritium.
There are several upgrades and new projects in the works in the fledgling field of neutrino astronomy.
A proposal called PINGU would extend the sensitivity of the IceCube array to a broader range of neutrino energies. It could look for neutrinos coming from the center of the sun, a possible sign of dark matter interactions, and could also look for neutrinos produced in Earth’s atmosphere.
Another project would greatly expand an underwater neutrino observatory in the Mediterranean called Antares. A third project would build a large-scale observatory in a lake in Siberia.
Scientists also hope to eventually establish the Askaryan Radio Array, a 100-cubic-kilometer neutrino detector in Antarctica.
The field of neutrino astronomy is young, but it’s constantly growing and improving, Hoffman says.
“It’s kind of like having a Polaroid that you’re waiting to develop, and you just start to see the shadow of something,” she says. “What the picture’s going to look like we don’t really know.”
Like what you see? Sign up for a free subscription to symmetry!
### Emily Lakdawalla - The Planetary Society Blog
Thousands of Photos by Apollo Astronauts now on Flickr
A cache of more than 8,400 unedited, high-resolution photos taken by Apollo astronauts during trips to the moon is now available for viewing and download on Flickr.
### Emily Lakdawalla - The Planetary Society Blog
Mars Exploration Rovers Update: Opportunity Rocks on Ancient Water During Walkabout
Opportunity continued her walkabout around Marathon Valley in September and sent home more evidence of significant water alteration and, perhaps, an ancient environment inviting enough for the emergence of life.
### ZapperZ - Physics and Physicists
25% Of Physics Nobel Laureates Are Immigrants
The people at Physics World have done an interesting but not surprising study on the number of Physics Nobel laureates who are/were immigrants. They found that this number is more than 1/4 of all Physics Nobel winners.
They discussed what they used as a criteria of an "immigrant", and the chart they showed certainly is very clear that there is a huge influx of these talents into the US.
Still, it would be nice to see how many of these immigrants did their Nobel Prize winning work before they migrated. And I definitely want to see this statistics for the next 10-20 years, especially now that they US is severely cutting budgets into basic physics research, the effects of which will not be felt immediately.
In any case, it is that time of the year again where we all make our predictions or guesses on who will win this prize this year. I am still pinning hopes that a woman will win this, considering that we have been having very strong candidates for several years.
Zz.
### Peter Coles - In the Dark
The 9 kinds of physics seminar
I just couldn’t resist reblogging this!! :-)
Originally posted on Many Worlds Theory:
As a public service, I hereby present my findings on physics seminars in convenient graph form. In each case, you will see the Understanding of an Audience Member (assumed to be a run-of-the-mill PhD physicist) graphed as a function of Time Elapsed during the seminar. All talks are normalized to be of length 1 hour, although this might not be the case in reality.
The “Typical” starts innocently enough: there are a few slides introducing the topic, and the speaker will talk clearly and generally about a field of physics you’re not really familiar with. Somewhere around the 15 minute mark, though, the wheels will come off the bus. Without you realizing it, the speaker will have crossed an invisible threshold and you will lose the thread entirely. Your understanding by the end of the talk will rarely ever recover past 10%.
The “Ideal” is what physicists strive for in…
View original 763 more words
### CERN Bulletin
Guido Altarelli (1941 - 2015)
The CERN community was deeply saddened to learn that Guido Altarelli had passed away on 30 September.
He was a true giant of particle physics and of CERN. His contributions to physics span all subjects, from strong to electroweak interactions, from neutrinos to theories beyond the Standard Model, and from the study of precision measurements to the analysis of apparent anomalies, whose interpretation in terms of new physics he often exposed as naïve and unjustified. He left milestones in the progress of our field wherever he went. The awards of the Sakurai Prize in 2012 and of the EPS Prize in 2015 rank him among the greats, and reflect only in part the wealth of knowledge he gave to high-energy physics. Guido Altarelli was not only a great scientist, but also a person of great integrity.
He was always available to make the bridge between experiment and theory and to share his time and wisdom with the experiments and the wider laboratory. The scientific community has lost a great scientist and a great friend.
The Director-General has sent a letter of condolence to his family and a full obituary will follow in the CERN Courier.
### CERN Bulletin
Researchers' Night: POPScience in Balexert
European Researchers’ Night was held on 25 September 2015. With POP Science - the Researchers’ Night event for the Geneva region - CERN met the public at the Balexert shopping centre.
### CERN Bulletin
Collection for Refugee and Migration Crisis
Dear Colleagues, In response to the current refugee and migration crisis, we are starting a collection today and we are calling on your generosity. The funds will be forwarded to the International Federation of Red Cross and Red Crescent Societies to respond to the humanitarian needs of the refugees and migrants, providing immediate and longer-term relief, including emergency medical care and basic health services, psychological support, temporary shelter, distribution of food & water and other urgently needed items. We hope that your contributions to the above-mentioned appeal will not prevent you from sparing a thought for them and doing whatever you can to help them. Bank account details for donations: Bank account holder: Association du personnel CERN - 1211 GENEVE 23 Account number: 279-HU106832.1 IBAN: CH85 0027 9279 HU10 6832 1 BIC: UBSWCHZH80A Please mention: Refugee and Migration Crisis
### CERN Bulletin
Cine club
Wednesday 7 October 2015 at 20:00 CERN Council Chamber The Day of the Beast (El día de la bestia) Directed by Álex de la Iglesia Spain, 1995, 100 minutes A Basque priest finds by means of a cabalistic study of the Bible that the Antichrist is going to be born on Christmas Day in Madrid. Assisted by a heavy-metal fan and the host of a TV show on the occult, he will try to summon the Devil to find out the place of birth and kill the baby. Original version spanish; english subtitles Wednesday 14 October 2015 at 20:00 CERN Council Chamber Tesis Directed by Alejandro Amenábar Spain, 1996, 125 minutes Why is death and violence so fascinating? Is it morally correct to show violence in movies? If so, is there a limit to what we should show? That's the subject of Ángela's examination paper. She is a young student at a film school in Madrid. Together with the student Chema (who is totally obsessed with violent movies) they find a snuff movie in which a young girl is tortured and killed. Soon they discover that the girl was a former student at their school... Original version spanish; english subtitles
### astrobites - astro-ph reader's digest
Cosmic Rays Make for Windy Galaxies
Title: Launching Cosmic Ray-Driven Outflows from the Magnetized Interstellar Medium
Authors: Philipp Girichidis et. al.
First author’s institution: Max-Planck-Institut für Astrophysik, Garching, Germany
Status: Submitted to ApJ
Galaxy evolution is a game of balance. Galaxies grow as they accumulate gas from what is called the intergalactic medium and as they merge with smaller galaxies. Over time, galaxies convert their gas into stars. This inflow of gas and the subsequent star formation is balanced out by the outflow of gas from galactic winds, through a process known as feedback. As suggested by observations and seen in simulations, these winds are driven by supernovae explosions that occur as stars die. Eventually, gas driven out may eventually fall back into the galaxy, continuing a stellar circle of life.
Although simulations have done a good job reproducing galaxy properties worrying about feedback from supernovae alone, this is far from the complete picture. Cosmic rays, or high energy protons, electrons, and nuclei moving near the speed of light, can create a significant pressure in galaxies through collisions with the gas in galaxies. Supernovae explosions are important sources of cosmic rays in galaxies. Simulating cosmic rays is computationally challenging, yet they may be very important for understanding the life cycle and structure of galaxies. In addition, since cosmic rays are charged particles, worrying about how they interact with magnetic fields in galaxies may be very important. The authors of today’s astrobite use hydrodynamic simulations with magnetic fields (or magnetohydrodynamics, MHD), cosmic rays, and supernovae to try and better understand their roles in driving galactic winds.
## Testing Feedback
FIgure 1: Density slices through the gas disk in each of three simulations. The vertical axis gives height above the disk. From left to right, the simulations include only thermal energy from supernovae explosions, only cosmic rays from supernovae, and both. (Source: Figure 1 of Girchidis et. al. 2015)
The authors aim to understand how supernovae, magnetic fields, and cosmic rays affect the evolution of gas contained within the disk of a galaxy. In particular, their test is to see what, if any, of the three best reproduces the density and temperature distribution of gas as a function of height above the gas disk. They perform three simulations of a galaxy disk, all three of which include magnetic fields. One simulation includes the thermal energy injected by supernovae explosions only, one includes cosmic rays generated by supernovae only, and the third includes both. The gas density of these three simulations are shown (left to right) in Figure 1, 250 million years after the start of the simulation.
Figure 2: This is a more quantitative view of what is shown in Figure 1. Shown is the gas density as a function of height above the disk (z) for the run with thermal supernovae energy only (black), cosmic rays only (blue), and both (red). These are compared against observations of the Milky Way (yellow). (Source: Figure 2 of Girchidis et. al. 2015)
Putting numbers to Figure 1, Figure 2 shows the gas density as a function of height above the disk for all three simulations: thermal supernovae energy only (black), cosmic rays only (blue), and both (red). The vertical lines show the position within which each simulation contains 90% of the gas mass. As shown, including only thermal supernovae energy produces a dense disk, with little gas above the disk. Adding in cosmic rays changes this significantly, driving out quite a lot of gas mass to large heights above the disk. This is in part because cosmic rays are able to quickly diffuse to large distances above the disk. The gas from the disk then flows out from the disk to large heights, following the large pressure gradient established by the cosmic rays. The cosmic ray simulations do a much better job of matching the yellow line, which gives observational estimates of the gas density above the disk of our Milky Way.
In addition, the authors go on to show that, over time, including cosmic rays serves to slowly grow the thickness of the gas disk, and quickly dumps gas at large heights above the disk. They also show that the mass flow rate of galactic winds generated by cosmic rays is nearly an order of magnitude greater than those generated by thermal energy injection alone.
## Developing an Accurate Model of Galaxy Formation
This research aims to better describe the evolution of galaxies by including the effects of supernovae feedback as well as the not-well-understood effects of cosmic rays and magnetic fields in their simulations. Their work shows that cosmic rays are able to dive out a significant amount of gas from the disks of galaxies, potentially tilting the balance between gas inflow and star formation, and gas outflow. Understanding this process better with further work will bring the properties of simulated galaxies in better agreement with observed galaxies in our Universe.
## October 01, 2015
### Emily Lakdawalla - The Planetary Society Blog
Cargo Craft Completes Six-Hour Schlep to ISS
A Russian cargo craft laden with more than three tons of food, fuel and supplies arrived at the International Space Station today.
### Christian P. Robert - xi'an's og
importance sampling with multiple MCMC sequences
Vivek Roy, Aixian Tan and James Flegal arXived a new paper, Estimating standard errors for importance sampling estimators with multiple Markov chains, where they obtain a central limit theorem and hence standard error estimates when using several MCMC chains to simulate from a mixture distribution as an importance sampling function. Just before I boarded my plane from Amsterdam to Calgary, which gave me the opportunity to read it completely (along with half a dozen other papers, since it is a long flight!) I first thought it was connecting to our AMIS algorithm (on which convergence Vivek spent a few frustrating weeks when he visited me at the end of his PhD), because of the mixture structure. This is actually altogether different, in that a mixture is made of unnormalised complex enough densities, to act as an importance sampler, and that, due to this complexity, the components can only be simulated via separate MCMC algorithms. Behind this characterisation lurks the challenging problem of estimating multiple normalising constants. The paper adopts the resolution by reverse logistic regression advocated in Charlie Geyer’s famous 1994 unpublished technical report. Beside the technical difficulties in establishing a CLT in this convoluted setup, the notion of mixing importance sampling and different Markov chains is quite appealing, especially in the domain of “tall” data and of splitting the likelihood in several or even many bits, since the mixture contains most of the information provided by the true posterior and can be corrected by an importance sampling step. In this very setting, I also think more adaptive schemes could be found to determine (estimate?!) the optimal weights of the mixture components.
Filed under: Mountains, pictures, Statistics, Travel, University life Tagged: adaptive MCMC, Ames, AMIS, Amsterdam, Charlie Geyer, importance sampling, Iowa, MCMC, Monte Carlo Statistical Methods, normalising constant, splitting data
### Emily Lakdawalla - The Planetary Society Blog
Favorite Astro Plots #1: Asteroid orbital parameters
This is the first in a series of posts in which scientists share favorite planetary science plots. For my #FaveAstroPlot, I explain what you can see when you look at how asteroid orbit eccentricity and inclination vary with distance from the Sun.
### arXiv blog
IQ Test Result: Advanced AI Machine Matches Four-Year-Old Child's Score
Artificial intelligence machines are rapidly gaining on humans, but they have some way to go on IQ tests.
### Tommaso Dorigo - Scientificblogging
Researchers' Night 2015
Last Friday I was invited by the University of Padova to talk about particle physics to the general public, in occasion of the "Researchers Night", a yearly event organized by the European Commission which takes place throughout Europe - in 280 cities this year. Of course I gladly accepted the invitation, although it caused some trouble to my travel schedule (I was in Austria for lectures until Friday morning, and you don't want to see me driving when I am in a hurry, especially on a 500km route).
### astrobites - astro-ph reader's digest
The first detection of an inverse Rossiter-McLaughlin effect
How do you observe an Earth transit, from Earth? You look at reflected sunlight from large highly reflective surfaces. Good candidates include planets, and their moons. They are the largest mirrors in the solar system.
On Jan 5 2014 Earth transited the Sun, as seen from Jupiter and its moons. Jupiter itself is not a good sunlight reflector, due to its high rotational velocity and its turbulent atmosphere. Its major solid satellites are better mirrors. Therefore, the authors observed Jupiter’s moons Europa and Ganymede during the transit (see Figure 2), and took spectra of the Sun from the reflected sunlight with HARPS, and HARPS-N, two very precise radial velocity spectrographs. The former spectrograph is located in La Silla in Chile, and the latter in La Palma in the Canary Islands. The authors’ goal was to measure the Rossiter-McLaughlin effect.
Fig 1: The Earth, and the Moon as they would appear to an observer on Jupiter on 5 January 2014, transiting the Sun. Figure 1 from the paper.
Fig 2: The geometric configuration of Jupiter and its moons, as seen from the Sun. Figure 2 from the paper.
Transits: sequential blocking of a star
The Rossiter-McLaughlin effect is a spectroscopic effect observed during transit events (see Figure 3). As a star rotates on its axis, half of the visible stellar photosphere moves towards the observer (blueshifted), while the other visible half the star moves away from the observer (redshifted). As a transiting objectin our case a planetmoves across the star, the planet will block out one quadrant of the star first, and then the other. This sequential blocking of blue-and redshifted regions on the star causes the observed stellar spectrum to vary. More specifically, the uneven contribution from the two stellar quadrants distorts the spectral line profiles, causing the apparent radial velocity of the star to change, when in fact it does not. The effect can give information on a) the planet radius, and b) the angle between the sky projections of the planet’s orbital axis, and the stellar rotational axis.
Fig 3: The Rossiter-McLaughlin effect: as a planet passes in front of a star it sequentially blocks blue-and redshifted regions of the star causing the star’s apparent radial velocity to change, when in fact it does not. The viewer is at the bottom. Figure from Wikipedia.
Observations of the transit
Figure 4 shows the whole set of corrected radial velocities taken of the transit, including observations of Jupiter’s moons the nights before and after. The transit, as seen from Jupiter’s moons, took about 9 hours and 40 minutes. The best available coverage of the event was for 6 hours from HARPS-N at La Palma Observatory. HARPS at La Silla Observatory was able to observe the transit for about an hour.
Fig 4: Corrected radial velocities measured on Jan 4-6, 2015. Vertical dashed lines denote the start, middle, and end of the transit. Observations of Europa from La Palma cover about 6 hours of the transit (black circles). Color observations are of Ganymede (cyan) and Europa (red) from La Silla. Figure 4 from the paper.
An anomalous drift
The expected modulation in the solar radial velocities due to the transit was on the order of 20cm/s. The Moon, which also partook in the transit (see Figure 1 again), added a few cm/s to this number.
Instead of detecting the expected 20cm/s modulation, the authors detected a much larger signal, on the order of -38m/sa modulation about 400 times higher and opposite in sign than expected (see peak in Figure 4): an inverse Rossiter-McLaughlin effect.
The authors ruled out that the observed modulation could be caused by instrumental effects, as the two spectrographs showed consistent results. Additionally, the authors rule out the possible dependence of the anomalous signal with magnetic activity of the Sun, from observations conducted simultaneously with The Birmingham Solar Oscillation Network (BiSON). They had another idea.
The culprit: Europa’s opposition surge
The authors suggest that the anomaly is produced by Europa’s opposition surge.
The opposition surge is a brightening of a rocky celestial surface when it is observed at opposition. An example of an object at opposition is the full moon. The “surge” part has to do with the increase, or “surge”, in reflected solar radiation observed at opposition. This is due to a combination of two effects. First, at opposition the reflective surface has almost no shadows. Second, at opposition photons can constructively interfere with dust particles close to the reflective surface, increasing its reflectivity. The latter effect is called coherent backscatter.
The authors created a simple model for Europa’s opposition surge, and compared it to their observations (see Figure 5). It works. As the Earth moves across the face of the Sun, rather than blocking the light (like in the Rossiter-McLaughlin effect shown in Figure 3), the net effect is that the light grazing the Earth is amplified. The Earth thus acts as a lens, compensating not only for the lost light during the eclipse—but makes the Sun appear much brighter! This explains the opposite sign, and the amplitude of the effect. Additionally, the amplification of reflected light is not fixed only to the transit, but happens gradually as Earth gets closer to transiting, and as Europa gets closer to being at opposition. The effect is symmetric, and is analogously observed as Earth moves out of transit.
Fig 5: The model of the opposition surge (thick black line) compared to observations from HARPS-N at La Palma (red), and HARPS at La Silla (blue). The dotted blue line shows the originally expected Rossiter-McLaughlin effect, amplified 50-fold for visibility. It is much smaller than the observed signal. Figure 7 from the paper.
Conclusion
This is the first time an inverse Rossiter-McLaughlin effect, caused by a moon’s opposition surge, has been detected. The authors predict the effect can be observed again during the next conjunction of Earth and Jupiter in 2016. Although, this will be a grazing transit with a smaller amplitude than the transit studied in this paper, the authors can now predict with confidence the extent of the newly discovered effect in the upcoming event.
## September 30, 2015
### Christian P. Robert - xi'an's og
a simulated annealing approach to Bayesian inference
A misleading title if any! Carlos Albert arXived a paper with this title this morning and I rushed to read it. Because it sounded like Bayesian analysis could be expressed as a special form of simulated annealing. But it happens to be a rather technical sequel [“that complies with physics standards”] to another paper I had missed, A simulated annealing approach to ABC, by Carlos Albert, Hans Künsch, and Andreas Scheidegger. Paper that appeared in Statistics and Computing last year, and which is most interesting!
“These update steps are associated with a flow of entropy from the system (the ensemble of particles in the product space of parameters and outputs) to the environment. Part of this flow is due to the decrease of entropy in the system when it transforms from the prior to the posterior state and constitutes the well-invested part of computation. Since the process happens in finite time, inevitably, additional entropy is produced. This entropy production is used as a measure of the wasted computation and minimized, as previously suggested for adaptive simulated annealing” (p.3)
The notion behind this simulated annealing intrusion into the ABC world is that the choice of the tolerance can be adapted along iterations according to a simulated annealing schedule. Both papers make use of thermodynamics notions that are completely foreign to me, like endoreversibility, but aim at minimising the “entropy production of the system, which is a measure for the waste of computation”. The central innovation is to introduce an augmented target on (θ,x) that is
f(x|θ)π(θ)exp{-ρ(x,y)/ε},
where ε is the tolerance, while ρ(x,y) is a measure of distance to the actual observations, and to treat ε as an annealing temperature. In an ABC-MCMC implementation, the acceptance probability of a random walk proposal (θ’,x’) is then
exp{ρ(x,y)/ε-ρ(x’,y)/ε}∧1.
Under some regularity constraints, the sequence of targets converges to
π(θ|y)exp{-ρ(x,y)},
if ε decreases slowly enough to zero. While the representation of ABC-MCMC through kernels other than the Heaviside function can be found in the earlier ABC literature, the embedding of tolerance updating within the modern theory of simulated annealing is rather exciting.
Furthermore, we will present an adaptive schedule that attempts convergence to the correct posterior while minimizing the required simulations from the likelihood. Both the jump distribution in parameter space and the tolerance are adapted using mean fields of the ensemble.” (p.2)
What I cannot infer from a rather quick perusal of the papers is whether or not the implementation gets into the way of the all-inclusive theory. For instance, how can the Markov chain keep moving as the tolerance gets to zero? Even with a particle population and a sequential Monte Carlo implementation, it is unclear why the proposal scale factor [as in equation (34)] does not collapse to zero in order to ensure a non-zero acceptance rate. In the published paper, the authors used the same toy mixture example as ours [from Sisson et al., 2007], where we earned the award of the “incredibly ugly squalid picture”, with improvements in the effective sample size, but this remains a toy example. (Hopefully a post to be continued in more depth…)
Filed under: Books, pictures, Statistics, University life Tagged: ABC, ABC-MCMC, ABC-SMC, Bayesian Analysis, endoreversibility, mixture, Monte Carlo Statistical Methods, particle system, sequential Monte Carlo, simulated annealing, Switzerland
### Symmetrybreaking - Fermilab/SLAC
Q&A with Fermilab’s first artist-in-residence
Symmetry sits down with Lindsay Olson as she wraps up a year of creating art inspired by particle physics.
S: How did you end up at Fermilab?
LO: In March 2014 I had an exhibition of my work at North Park College. Several members of the Fermilab art committee attended my talk. Hearing me speak about one of my residencies, Georgia Schwender, curator of Fermilab’s art gallery, invited me to help her establish a pilot residency that would continue Fermilab’s tradition of nurturing both art and science.
S: What did you do during your residency?
LO: During a residency, I want to have a full immersion experience. I worked closely with passionate scientists, including Don Lincoln, Sam Zeller and Debbie Harris. I read books and popular science journalism, attended public lectures, and watched videos. This immersive learning is the scaffolding from which I create my art.
S: What’s your artistic process like?
LO: I want to make engaging, accessible art about real, complicated science: art that will connect with the public and inspire them to ask their own questions about the nature of reality and the origin of the cosmos. When I converse with a scientist, I glean the key points and translate them in an artistic way. Many artists use oil paint, watercolor and other traditional materials. But when I work, I want to use media to reinforce the message in the art. Everyone uses textiles in their daily lives, so creating work in them felt like a natural choice.
S: What inspired you at Fermilab?
LO: The Standard Model was the first piece of physics I learned. This conceptual tool was not only an appropriate beginning for the project, but a door into a fascinating way to understand reality. Passionate scientists of the present and science heroes of the past, especially Ray Davis, Richard Feynman and Robert Wilson, also inspired me.
S: What is one of your most memorable experiences at Fermilab?
LO: I took several training courses, including radiation safety training. This allowed me to shadow operators into the guts of several experiments during a recent shutdown. It was thrilling. Accelerator science is about riding a bucking bronco of energetic particles. Understanding how the messy beam behaves showed me that nature is not just about forests, creatures and rocks. At the subatomic level, nature is wild, energetic and mysterious. I plan to make large-scale drawings based on what I have learned in the Accelerator Division.
S: Did anything surprise you?
LO: I’ve been surprised at every turn. As an artist, I’ve been trained to observe the surface of reality. Everything looks solid and unmoving. But the subatomic realm is far more spacious and energetic than I could have imagined.
S: How did you become interested in expressing science in your art?
LO: Before I created art about science, I painted landscapes. I created portraits of area waterways. I was editing out all the manmade features and creating idealized images of streams and rivers. One day I was canoeing past an aeration station on the Chicago Canal and became curious about the real story of water in a dense urban area. I approached the District about beginning an art project that would tell this story. I started a residency at the Metropolitan Water Reclamation District of Greater Chicago. Strange as it may sound, I fell in love with science in the middle of a wastewater treatment plant.
S: How did your residency at Fermilab differ from past residencies?
LO: The most striking difference is the amount of resources available at Fermilab. It’s hard to imagine any other government agency where you will find not only cutting-edge science, but also a buffalo herd, a beautiful art gallery, a concert hall, a restored prairie and a graveyard.
S: What will you take with you when you leave Fermilab?
LO: One of the most powerful lessons I learned with this residency is that I am not afraid to learn any kind of science. I have limits because I lack the background in math. Despite this, I feel confident about learning enough science to make meaningful art. If I can learn science, others can too.
S: What’s next?
LO: Once I’ve finished the art, the project is far from over. Finding places to show the work I made while at Fermilab will be the next challenge. I want to use the work to inspire viewers to take a closer look at science in general and particle physics in particular. I hope the project helps people with no technical training, like me, to appreciate the beauty and elegance of our universe.
I have no set plans for my next residency, but I have a few ideas simmering on the back burner. Perhaps I will be surprised by another opportunity. My residency with Fermilab has changed my view of reality enough for me to know that there are surprises out in the universe for any of us who take the time to discover what science can teach us.
### The n-Category Cafe
An exact square from a Reedy category
I first learned about exact squares from a blog post written by Mike Shulman on the $nn$-Category Café.
Today I want to describe a family of exact squares, which are also homotopy exact, that I had not encountered previously. These make a brief appearance in a new preprint, A necessary and sufficient condition for induced model structures, by Kathryn Hess, Magdalena Kedziorek, Brooke Shipley, and myself.
Proposition. If $RR$ is any (generalized) Reedy category, with ${R}^{+}\subset RR^+ \subset R$ the direct subcategory of degree-increasing morphisms and ${R}^{-}\subset RR^- \subset R$ the inverse subcategory of degree-decreasing morphisms, then the pullback square: $\begin{array}{ccc}\mathrm{iso}\left(R\right)& \to & {R}^{-}\\ ↓& ⇙\mathrm{id}& ↓\\ {R}^{+}& \to & R\end{array} \array\left\{ iso\left(R\right) & \to & R^- \\ \downarrow & \swArrow id & \downarrow \\ R^+ & \to & R\right\} $ is (homotopy) exact.
In summary, a Reedy category $\left(R,{R}^{+},{R}^{-}\right)\left(R,R^+,R^-\right)$ gives rise to a canonical exact square, which I’ll call the Reedy exact square.
## Exact squares and Kan extensions
Let’s recall the definition. Consider a square of functors inhabited by a natural transformation $\begin{array}{ccc}A& \stackrel{f}{\to }& B\\ {}^{u}↓& ⇙\alpha & {↓}^{v}\\ C& \underset{g}{\to }& D\end{array}\array\left\{A & \overset\left\{f\right\}\left\{\to\right\} & B\\ ^u\downarrow & \swArrow\alpha & \downarrow^v\\ C& \underset\left\{g\right\}\left\{\to\right\} & D\right\}$ For any category $MM$, precomposition defines a square $\begin{array}{ccc}{M}^{A}& \stackrel{{f}^{*}}{←}& {M}^{B}\\ {}^{{u}^{*}}↑& ⇙{\alpha }^{*}& {↑}^{{v}^{*}}\\ {M}^{C}& \underset{{g}^{*}}{←}& {M}^{D}\end{array}\array\left\{M^A & \overset\left\{f^\ast\right\}\left\{\leftarrow\right\} & M^B\\ ^\left\{u^\ast\right\}\uparrow & \swArrow \alpha^\ast & \uparrow^\left\{v^\ast\right\}\\ M^C& \underset\left\{g^\ast\right\}\left\{\leftarrow\right\} & M^D\right\}$ Supposing there exist left Kan extensions ${u}_{!}⊣{u}^{*}u_! \dashv u^\ast$ and ${v}_{!}⊣{v}^{*}v_! \dashv v^\ast$ and right Kan extensions ${f}^{*}⊣{f}_{*}f^\ast \dashv f_\ast$ and ${g}^{*}⊣{g}_{*}g^\ast \dashv g_\ast$, the mates of ${\alpha }^{*}\alpha^*$ define canonical Beck-Chevalley transformations: ${u}_{!}{f}^{*}⇒{g}^{*}{v}_{!}\phantom{\rule{1em}{0ex}}\mathrm{and}\phantom{\rule{1em}{0ex}}{v}^{*}{g}_{*}⇒{f}_{*}{u}^{*}. u_! f^\ast \Rightarrow g^\ast v_!\quad and \quad v^\ast g_\ast \Rightarrow f_\ast u^\ast. $ Note if either of the Beck-Chevalley transformations is an isomorphism, the other one is too by the (contravariant) correspondence between natural transformations between a pair of left adjoints and natural transformations between the corresponding right adjoints.
Definition. $\begin{array}{ccc}A& \stackrel{f}{\to }& B\\ {}^{u}↓& ⇙\alpha & {↓}^{v}\\ C& \underset{g}{\to }& D\end{array}\array\left\{A & \overset\left\{f\right\}\left\{\to\right\} & B\\ ^u\downarrow & \swArrow\alpha & \downarrow^v\\ C& \underset\left\{g\right\}\left\{\to\right\} & D\right\}$ is an exact square if, for any $MM$ admitting pointwise Kan extensions, the Beck-Chevalley transformations are isomorphisms.
Comma squares provide key examples, in which case the Beck-Chevalley isomorphisms recover the limit and colimit formulas for pointwise Kan extensions.
The notion of homotopy exact square is obtained by replacing $MM$ by some sort of homotopical category, the adjoints by derived functors, and “isomorphism” by “equivalence.”
## The proof
In the preprint we give a direct proof that these Reedy squares are exact by computing the Kan extensions, but exactness follows more immediately from the following characterization theorem, stated using comma categories. The natural transformation $\alpha :vf⇒gu\alpha \colon v f \Rightarrow g u$ induces a functor $B↓f{×}_{A}u↓C\to v↓g B \downarrow f \times_A u \downarrow C \to v \downarrow g$ over $C×BC \times B$ defined on objects by sending a pair $b\to f\left(a\right),u\left(a\right)\to cb \to f\left(a\right), u\left(a\right) \to c$ to the composite morphism $v\left(b\right)\to vf\left(a\right)\to gu\left(a\right)\to g\left(c\right)v\left(b\right) \to v f\left(a\right) \to g u\left(a\right) \to g\left(c\right)$. Fixing a pair of objects $bb$ in $BB$ and $cc$ in $CC$, this pulls back to define a functor $b↓f{×}_{A}u↓c\to \mathrm{vb}↓\mathrm{gc}. b \downarrow f \times_A u \downarrow c \to vb \downarrow gc.$
Theorem. A square $\begin{array}{ccc}A& \stackrel{f}{\to }& B\\ {}^{u}↓& ⇙\alpha & {↓}^{v}\\ C& \underset{g}{\to }& D\end{array}\array\left\{A & \overset\left\{f\right\}\left\{\to\right\} & B\\ ^u\downarrow & \swArrow\alpha & \downarrow^v\\ C& \underset\left\{g\right\}\left\{\to\right\} & D\right\}$ is exact if and only if each fiber of $b↓f{×}_{A}u↓c\to vb↓gc b \downarrow f \times_A u \downarrow c \to v b \downarrow g c$ is non-empty and connected.
See the nLab for a proof. Similarly, the square is homotopy exact if and only if each fiber of this functor has a contractible nerve.
In the case of a Reedy square $\begin{array}{ccc}\mathrm{iso}\left(R\right)& \to & {R}^{-}\\ ↓& ⇙\mathrm{id}& ↓\\ {R}^{+}& \to & R\end{array} \array\left\{ iso\left(R\right) & \to & R^- \\ \downarrow & \swArrow id & \downarrow \\ R^+ & \to & R\right\} $ these fibers are precisely the categories of Reedy factorizations of a fixed morphism. For an ordinary Reedy category $RR$, Reedy factorizations are unique, and so the fibers are terminal categories. For a generalized Reedy category, Reedy factorizations are unique up to unique isomorphism, so the fibers are contractible groupoids.
## Reedy diagrams as bialgebras
For any category $MM$, the objects in the lower right-hand square $\begin{array}{ccc}{M}^{\mathrm{iso}\left(R\right)}& ←& {M}^{{R}^{-}}\\ ↑& ⇙\mathrm{id}& ↑\\ {M}^{{R}^{+}}& ←& {M}^{R}\end{array} \array\left\{ M^\left\{iso\left(R\right)\right\} & \leftarrow & M^\left\{R^-\right\} \\ \uparrow & \swArrow id & \uparrow \\ M^\left\{R^+\right\} & \leftarrow & M^R\right\} $ are Reedy diagrams in $MM$, and the functors restrict to various subdiagrams. Because the indexing categories all have the same objects, if $MM$ is bicomplete each of these restriction functors is both monadic and comonadic. If we think of the ${M}^{{R}^{-}}M^\left\{R^-\right\}$ as being comonadic over ${M}^{\mathrm{iso}\left(R\right)}M^\left\{iso\left(R\right)\right\}$ and ${M}^{{R}^{+}}M^\left\{R^+\right\}$ as being monadic over ${M}^{\mathrm{iso}\left(R\right)}M^\left\{iso\left(R\right)\right\}$, then the Beck-Chevalley isomorphism exhibits ${M}^{R}M^R$ as the category of bialgebras for the monad induced by the direct subcategory ${R}^{+}R^+$ and the comonad induced by the inverse subcategory ${R}^{-}R^-$.
There is a homotopy-theoretic interpretation of this, which I’ll describe in the case where $RR$ is a strict Reedy category (so that $\mathrm{iso}\left(R\right)=\mathrm{ob}\left(R\right)iso\left(R\right)=ob\left(R\right)$), though it works in the generalized context as well. If $MM$ is a model category, then ${M}^{\mathrm{iso}\left(R\right)}M^\left\{iso\left(R\right)\right\}$ inherits a model structure, with everything defined objectwise. The Reedy model structure on ${M}^{{R}^{-}}M^\left\{R^-\right\}$ coincides with the injective model structure, which has cofibrations and weak equivalences created by the restriction functor ${M}^{{R}^{-}}\to {M}^{\mathrm{iso}\left(R\right)}M^\left\{R^-\right\} \to M^\left\{iso\left(R\right)\right\}$; we might say this model structure is “left-induced”. Dually, the Reedy model structure on ${M}^{{R}^{+}}M^\left\{R^+\right\}$ coincides with the projective model structure, which has fibrations and weak equivalences created by ${M}^{{R}^{+}}\to {M}^{\mathrm{iso}\left(R\right)}M^\left\{R^+\right\} \to M^\left\{iso\left(R\right)\right\}$; this is “right-induced”.
The Reedy model structure on ${M}^{R}M^R$ then has two interpretations: it is right-induced along the monadic restriction functor ${M}^{R}\to {M}^{{R}^{-}}M^R \to M^\left\{R^-\right\}$ and it is left-induced along the comonadic restriction functor ${M}^{R}\to {M}^{{R}^{+}}M^R \to M^\left\{R^+\right\}$. The paper A necessary and sufficient condition for induced model structures describes a general technique for inducing model structures on categories of bialgebras, which reproduces the Reedy model structure in this special case.
### Jester - Resonaances
Weekend plot: minimum BS conjecture
This weekend plot completes my last week's post:
It shows the phase diagram for models of natural electroweak symmetry breaking. These models can be characterized by 2 quantum numbers:
• B [Baroqueness], describing how complicated is the model relative to the standard model;
• S [Specialness], describing the fine-tuning needed to achieve electroweak symmetry breaking with the observed Higgs boson mass.
To allow for a fair comparison, in all models the cut-off scale is fixed to Λ=10 TeV. The standard model (SM) has, by definition, B=1, while S≈(Λ/mZ)^2≈10^4. The principle of naturalness postulates that S should be much smaller, S ≲ 10. This requires introducing new hypothetical particles and interactions, therefore inevitably increasing B.
The most popular approach to reducing S is by introducing supersymmetry. The minimal supersymmetric standard model (MSSM) does not make fine-tuning better than 10^3 in the bulk of its parameter space. To improve on that, one needs to introduce large A-terms (aMSSM), or R-parity breaking interactions (RPV), or an additional scalar (NMSSM). Another way to decrease S is achieved in models the Higgs arises as a composite Goldstone boson of new strong interactions. Unfortunately, in all of those models, S cannot be smaller than 10^2 due to phenomenological constraints from colliders. To suppress S even further, one has to resort to the so-called neutral naturalness, where new particles beyond the standard model are not charged under the SU(3) color group. The twin Higgs - the simplest model of neutral naturalness - can achieve S10 at the cost of introducing a whole parallel mirror world.
The parametrization proposed here leads to a striking observation. While one can increase B indefinitely (many examples have been proposed the literature), for a given S there seems to be a minimum value of B below which no models exist. In fact, the conjecture is that the product B*S is bounded from below:
BS ≳ 10^4.
One robust prediction of the minimum BS conjecture is the existence of a very complicated (B=10^4) yet to be discovered model with no fine-tuning at all. The take-home message is that one should always try to minimize BS, even if for fundamental reasons it cannot be avoided completely ;)
## September 29, 2015
### Symmetrybreaking - Fermilab/SLAC
New discovery? Or just another bump?
For physicists, seeing is not always believing.
In the 1960s physicists at the University of California, Berkeley saw evidence of new, unexpected particles popping up in data from their bubble chamber experiments.
But before throwing a party, the scientists did another experiment. They repeated their analysis, but instead of using the real data from the bubble chamber, they used fake data generated by a computer program, which assumed there were no new particles.
The scientists performed a statistical analysis on both sets of data, printed the histograms, pinned them to the wall of the physics lounge, and asked visitors to identify which plots showed the new particles and which plots were fakes.
No one could tell the difference. The fake plots had just as many impressive deviations from the theoretical predictions as the real plots.
Eventually, the scientists determined that some of the unexpected bumps in the real data were the fingerprints of new composite particles. But the bumps in the fake data remained the result of random statistical fluctuations.
So how do scientists differentiate between random statistical fluctuations and real discoveries?
Just like a baseball analyst can’t judge if a rookie is the next Babe Ruth after nine innings of play, physicists won’t claim a discovery until they know that their little bump-on-a-graph is the real deal.
After the histogram “social experiment” at Berkeley, scientists developed a one-size-fits-all rule to separate the “Hall of Fame” discoveries from the “few good games” anomalies: the five-sigma threshold.
“Five sigma is a measure of probability,” says Kyle Cranmer, a physicist from New York University working on the ATLAS experiment. “It means that if a bump in the data is the result of random statistical fluctuation and not the consequence of some new property of nature, then we could expect to see a bump at least this big again only if we repeated our experiment a few million more times.”
To put it another way, five sigma means that there is only a 0.00003 percent chance scientists would see this result due to statistical fluctuations alone—a good indication that there’s probably something hiding under that bump.
But the five-sigma threshold is more of a guideline than a golden rule, and it does not tell physicists whether they have made a discovery, according to Bob Cousins, a physicist at the University of California, Los Angeles working on the CMS experiment.
“A few years ago scientists posted a paper claiming that they had seen faster-than-light neutrinos,” Cousins says. But few people seemed to believe it—even though the result was six sigma. (A six-sigma result is a couple of hundred times stronger than a five-sigma result.)
The five-sigma rule is typically used as the standard for discovery in high-energy physics, but it does not incorporate another equally important scientific mantra: The more extraordinary the claim, the more evidence you need to convince the community.
“No one was arguing about the statistics behind the faster-than-light neutrinos observation,” Cranmer says. “But hardly anyone believed they got that result because the neutrinos were actually going faster than light.”
Within minutes of the announcement, physicists started dissecting every detail of the experiment to unearth an explanation. Anticlimactically, it turned out to be a loose fiber optic cable.
The “extraordinary claims, extraordinary evidence” philosophy also holds true for the inverse of the statement: If you see something you expected, then you don’t need as much evidence to claim a discovery. Physicists will sometime relax their stringent statistical standards if they are verifying processes predicted by the Standard Model of particle physics—a thoroughly vetted description of the microscopic world.
“But if you don’t have a well-defined hypothesis that you are testing, you increase your chances of finding something that looks impressive just because you are looking everywhere,” Cousins says. “If you perform 800 broad searches across huge mass ranges for new particles, you’re likely to see at least one impressive three-sigma bump that isn’t anything at all.”
In the end, there is no one-size-fits-all rule that separates discoveries from fluctuations. Two scientists could look at the same data, make the same histograms and still come to completely different conclusions.
So which results windup in textbooks and which results are buried in the archive?
“This decision comes down to two personal questions: What was your prior belief, and what is the cost of making an error?” Cousins says. “With the Higgs discovery, we waited until we had overwhelming evidence of a Higgs-like particle before announcing the discovery, because if we made an error it could weaken people’s confidence in the LHC research program.”
Experimental physicists have another way of verifying their results before making a discovery claim: comparable studies from independent experiments.
“If one experiment sees something but another experiment with similar capabilities doesn’t, the first thing we would do is find out why,” Cranmer says. “People won’t fully believe a discovery claim without a solid cross check.”
Like what you see? Sign up for a free subscription to symmetry!
### Lubos Motl - string vacua and pheno
CMS: a $$2.9\TeV$$ electron-positron pair resonance
Bonus: An ATLAS $$\mu\mu j$$ event with $$m=2.9\TeV$$ will be discussed at the end of this blog post.
A model with exactly this prediction was published in June
Two days ago, I discussed four LHC collisions suggesting a particle of mass $$5.2\TeV$$. Today, just two days later, Tommaso Dorigo described a spectacular dielectron event seen by CMS on August 22nd. See also the CERN document server; CERN graduate students have to prepare a PDF file for each of the several quadrillion collisions. ;-)
On that Tuesday, the world stock markets were just recovering from the two previous cataclysmic days while the CMS detector enjoyed a more pleasing day with one of the $$13\TeV$$ collisions that have turned the LHC into a rather new kind of a toy.
This is how the outcome of the collision looked from the direction of the beam. The electron and positron were flying almost exactly in the opposite direction, each having about $$1.25\TeV$$ of transverse energy. A perfectly balanced picture.
You may see the collision from another angle, too:
The electron-positron pair is the only notable thing that is going on.
The fun is that no such high-energy collision has been seen at the $$8\TeV$$ run – even though it has performed more than 100 times greater a number of collisions than the ongoing $$13\TeV$$ run in 2015. When you demand truly highly energetic particles in the final state, the weakness of the $$8\TeV$$ run in 2012 becomes self-evident.
The expected number of similar collisions with the invariant mass$M_{e^+e^-}\gt 2.5\TeV$ seen in the CMS dataset of 2015 (so far) has been estimated as $$\langle N \rangle =0.002$$. Clearly, this number – because it is so small that we may neglect the possibility that more than 1 such event arises – may be interpreted as the probability that one event (and not zero events) take place. For the mass above $$2.85\TeV$$, you would almost certainly get a probability $$0.001$$ or less.
If you take the estimate $$p=0.002$$ seriously, it means that either the CMS detector has been 1:500 "lucky" to see a high-energy event that is actually noise; or it is seeing a new particle that may decay to the electron-positron pair.
Such a new particle would probably be neutral from all points of view. It could be a heavier cousin of the $$Z$$-boson, a $$Z'$$-boson. That would be the gauge boson associated with a new $$U(1)_{\rm new}$$ gauge symmetry. Most types of vacua in string theory tend to predict lots of these additional $$U(1)$$ groups.
And your humble correspondent can even offer you a paper that predicts a $$Z'$$-boson of mass $$2.9\TeV$$. See the bottom of page 10 here. (Sadly, they made the prediction less accurate in v2 of their preprint.) The left-right-symmetric model in the paper also intends to explain the excesses near $$2\TeV$$ – as a $$W'$$-boson. The model is lepto-phobic (LP) which means that only right-handed quarks are arranged to doublets of $$SU(2)_R$$ while the right-handed leptons remain $$SU(2)_R$$ singlets. It's the model with the Higgs triplet (LPT) that gives the right $$Z'$$-boson mass.
Just for fun, let me show you the calculation of the invariant mass. The coordinates of the two electron-like particles are written as$\eq{ p_T &= 1.27863\TeV\\ \eta &= - 1.312\\ \phi &= 0.420 }$ and $\eq{ p_T &= 1.25620\TeV\\ \eta &= - 0.239\\ \phi &= -2.741 }$ One may convert these coordinates to the Cartesian coordinates$\eq{ p_x &= p_T\cos \phi\\ p_y &= p_T\sin \phi\\ p_z &= p_T \sinh \eta \\ E &= p_T \cosh \eta }$ in the approximation $$m_e\ll E$$ i.e. $$m_e\sim 0$$: feel free to check that the 4-vector above is identically light-like. The two 4-vectors (in the order I chose above) are therefore$\eq{ \frac{p_A^\mu }{ {\rm TeV}}&= (1.16750, 0.521375, -2.20200, 2.54631) \\ \frac{p_B^\mu }{ {\rm TeV}}&= (-1.15675, -0.48987, -0.30310, 1.29225) }$ where the last coordinate is the energy. Now, because these 4-vectors are null, $(p_A^\mu+p_B^\mu)^2 = 2p_A^\mu p_{B,\mu} = (2.908\TeV)^2$ in the West Coast metric convention. You're invited to check it. Thanks to the Higgs Kaggle contest, I gained some intuition for the $$(p_T,\eta,\phi)$$ coordinates. ;-)
In a few more weeks, we should see whether this highly energetic electron-positron event was a fluke or something much more interesting... You know, the progress on the energy frontier has been rather substantial. Note that $$13/8=1.625$$, an increase by 62.5%.
Lots of particles – the $$W$$-bosons, the $$Z$$-boson, the Higgs boson, and the top quark – are confined in the interval $$70\GeV,210\GeV$$ – safely four types of particles in an interval whose upper bound is thrice the lower bound. Now, we can produce particles with masses up to $$5\TeV$$ or so. Why shouldn't we find any new particles with masses between $$175\GeV$$ and $$4,900\GeV$$ – an interval whose ratio of limiting energies is twenty-eight?
It's quite some jump, isn't it? ;-) It could harbor lots of so far secret and elusive animals.
Next Monday, the full-fledged physics collisions should resume and continue through the early November.
### astrobites - astro-ph reader's digest
The APOGEE Treasure Trove
• Title: The Apache Point Observatory Galactic Evolution Experiment (APOGEE)
• Authors: Steven R. Majewski, Ricardo P. Schiavon, Peter M. Frinchaboy, et al. (there are more than 70 co-authors)
• First Author’s Institution: Dept. of Astronomy, University of Virginia, Charlottesville, VA (there are 50 institutions represented among the authors)
• Paper Status: Submitted to The Astronomical Journal
Apache Point Observatory in Sunspot, NM. The SDSS 2.5-m telescope is to the right, pointing toward the center of the Milky Way. The full moon and light pollution from nearby El Paso don’t stop APOGEE! Figure 15 in the paper.
What’s black, white, and re(a)d all over… and spent three years looking all around the sky? It’s APOGEE (the Apache Point Observatory Galactic Evolution Experiment), a three-year campaign that used a single 2.5-m telescope in New Mexico to collect half a million near-infrared spectra for 146,000 stars!
Black and white? As shown below, the raw spectra from APOGEE look black and white, but appearances can be deceiving. Each horizontal stripe is the spectrum of one star, spanning a range of colors redder than your eye can see. To get the spectra nicely stacked in an image like this, fiber-optic cables are plugged into metal plates which are specially drilled to let in slits of light from individual stars in different regions of the sky. Each star gets one fiber, which corresponds to one row on the detector. An image like this allows APOGEE to gather data for a multitude of stars quickly.
Part of a raw 2D APOGEE image from observations near the bulge of the Milky Way. Each horizontal stripe is a portion of one star’s near-infrared spectrum. The x-axis will correspond to wavelength once the spectra are processed. Bright vertical lines are from airglow, dark vertical lines common to all stars are from molecules in Earth’s atmosphere, and the dark vertical lines that vary from star to star are scientifically interesting stellar absorption lines that correspond to various elements. Fainter and brighter stars are intentionally interspersed to reduce contamination between stars. Figure 14 in the paper.
Re(a)d all over? Today’s paper accompanies the latest public data release, DR12, of the Sloan Digital Sky Survey (SDSS), a large collaboration which includes APOGEE. So in addition to focusing on red giant stars viewed in near-infrared light, all the APOGEE data are now freely available and may be read by anyone. Even so, the APOGEE team has been hard at work.
Probing to new galactic depths with the near-infrared
APOGEE is designed to primarily observe evolved red giant stars in the Milky Way using near-infrared light. What’s so special about this setup? First, red giants are some of the brightest stars, so it’s possible to see them farther away than Sun-like stars. Second, near-infrared light doesn’t get blocked by dust like visible light does, so it lets APOGEE observe stars toward the center of the Milky Way, which is otherwise obscured with thick dust lanes. This is really important if you want to understand how different stellar populations in the galaxy behave.
Mapping velocities and composition
Because APOGEE collects spectra of stars, not images, each observation contains lots of information. Spectra tell us how fast a star is moving towards or away from us (its radial velocity), how hot a star is, what its surface gravity is like, and what elements it is made of. Lots of work has gone into developing a pipeline to process the spectra and return this information reliably, because it’s not practical to look at hundreds of thousands of observations by hand.
APOGEE visits each star at least three times to check if it is varying for any reason. (For example, binary stars will have different radial velocities at different times, and the APOGEE team wants to exclude binaries when they use star velocities to measure the overall motion of the galaxy.) The figures below show how a subset of stars mapped by APOGEE vary in radial velocity (top) and chemical composition (i.e., metallicity, bottom). The stars in both figures lie within two kiloparsecs above or below the disk of the Milky Way, so we are essentially seeing a slice of the middle of the galaxy. Observations don’t exist for the lower right quadrant of either figure, because that region is only visible from Earth’s southern hemisphere.
A map of stars observed by APOGEE, color-coded by radial velocity. The Sun is located at the center of the “spoke” of observations, and is defined as having zero radial velocity (greenish). An artist impression of the Milky Way is superimposed for context. This figure illustrates how the galaxy as a whole rotates. The Sun moves with the galaxy, and other stars’ relative motions depend on how far in front or behind of us they are. This astrobite has more details. Figure 24 from the paper.
A map of stars observed by APOGEE, color-coded by metallicity. As above, the Sun is in the center of the observation “spokes” and an artist impression of the Milky Way is superimposed for context. The Sun is defined to have 0 metallicity (greenish). Stars that are more chemically enriched than the Sun are red, and stars that are have fewer metals than the Sun are blue. This figure illuminates an overall galactic metallicity gradient. Figure 25 from the paper.
Together, maps like these provide an unprecedented look into our galaxy’s past, present, and future by combining kinematics and the locations of stars with different chemistry. Thanks to APOGEE’s success, plans are now underway for APOGEE-2 in the southern hemisphere using a telescope in Chile. This treasure trove of data will undoubtedly be put to good use for years to come.
### Sean Carroll - Preposterous Universe
Core Theory T-Shirts
Way back when, for purposes of giving a talk, I made a figure that displayed the world of everyday experience in one equation. The label reflects the fact that the laws of physics underlying everyday life are completely understood.
So now there are T-shirts. (See below to purchase your own.)
It’s a good equation, representing the Feynman path-integral formulation of an amplitude for going from one field configuration to another one, in the effective field theory consisting of Einstein’s general theory of relativity plus the Standard Model of particle physics. It even made it onto an extremely cool guitar.
I’m not quite up to doing a comprehensive post explaining every term in detail, but here’s the general idea. Our everyday world is well-described by an effective field theory. So the fundamental stuff of the world is a set of quantum fields that interact with each other. Feynman figured out that you could calculate the transition between two configurations of such fields by integrating over every possible trajectory between them — that’s what this equation represents. The thing being integrated is the exponential of the action for this theory — as mentioned, general relativity plus the Standard Model. The GR part integrates over the metric, which characterizes the geometry of spacetime; the matter fields are a bunch of fermions, the quarks and leptons; the non-gravitational forces are gauge fields (photon, gluons, W and Z bosons); and of course the Higgs field breaks symmetry and gives mass to those fermions that deserve it. If none of that makes sense — maybe I’ll do it more carefully some other time.
Gravity is usually thought to be the odd force out when it comes to quantum mechanics, but that’s only if you really want a description of gravity that is valid everywhere, even at (for example) the Big Bang. But if you only want a theory that makes sense when gravity is weak, like here on Earth, there’s no problem at all. The little notation k < Λ at the bottom of the integral indicates that we only integrate over low-frequency (long-wavelength, low-energy) vibrations in the relevant fields. (That's what gives away that this is an "effective" theory.) In that case there's no trouble including gravity. The fact that gravity is readily included in the EFT of everyday life has long been emphasized by Frank Wilczek. As discussed in his latest book, A Beautiful Question, he therefore advocates lumping GR together with the Standard Model and calling it The Core Theory.
I couldn’t agree more, so I adopted the same nomenclature for my own upcoming book, The Big Picture. There’s a whole chapter (more, really) in there about the Core Theory. After finishing those chapters, I rewarded myself by doing something I’ve been meaning to do for a long time — put the equation on a T-shirt, which you see above.
I’ve had T-shirts made before, with pretty grim results as far as quality is concerned. I knew this one would be especially tricky, what with all those tiny symbols. But I tried out Design-A-Shirt, and the result seems pretty impressively good.
So I’m happy to let anyone who might be interested go ahead and purchase shirts for themselves and their loved ones. Here are the links for light/dark and men’s/women’s versions. I don’t actually make any money off of this — you’re just buying a T-shirt from Design-A-Shirt. They’re a little pricey, but that’s what you get for the quality. I believe you can even edit colors and all that — feel free to give it a whirl and report back with your experiences.
### ZapperZ - Physics and Physicists
Football Physics and Deflategate
This issue doesn't seem to want to go away.
Still, anyone who has been following this (at least here in the US) have heard of the "Deflategate" controversy from last year's NFL Football playoffs.
Chad Orzel has another look at this based on a recent paper out of The Physics Teacher, this time, from the physics involved with the football receivers.
Most of the coverage of “Deflategate” has focused on Patriots quarterback Tom Brady, and speculation that he arranged for the balls to be deflated so as to provide a better grip. The authors of the Physics Teacher paper, Gregory DiLisi and Richard Rarick look at the other end of the problem, where the ball is caught by the receiver, thinking about it in terms of energy, an issue with major implications for the existence of atomic matter.
It certainly is another angle to the issue. I hope to get a copy of the paper soon and see what it says.
Zz.
### Peter Coles - In the Dark
Little Sun Charge by Olafur Eliasson
You might remember a piece I did a while ago about Little Sun by the artist Olafur Eliasson. This is a solar-powered lamp that charges up during the day and provides night-time illumination for those, e.g. in sub-Saharan Africa, without access to an electricity grid. I supported this project myself, including writing a piece here as part of the Little Charter for Light and Energy.
Well, it seems that in his travels around the world promoting Little Sun, Olafur received a lot of comments about how great it would be if the same principle could be used to provide a solar-powered mobile phone charger. So now – lo and behold! – there is a new product called Little Sun Charge. Here’s a little video about it:
I’m mentioning this here because Olafur is attempting to crowdfund this project via a kickstarter campaign. The campaign has already exceeded its initial target, but there are five days still remaining and every penny raised will used to reduce the price of the charger so that it can be sold to off-grid customers for even less than originally planned.
So please visit the link and pledge some dosh! There are treats in store for those who do!
## September 28, 2015
### Peter Coles - In the Dark
Evidence for Liquid Water on Mars?
There’s been a lot of excitement this afternoon about possible evidence for water on Mars from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) on board the Mars Reconaissance Orbiter (MRO). Unfortunately, but I suppose inevitably, some of the media coverage has been a bit over the top, presenting the results as if they were proof of liquid water flowing on the Red Planet’s surface; NASA itself has pushed this interpretation. I think the results are indeed very interesting – but not altogether surprising, and by no means proof of the existence of flows of liquid water. And although they may indeed provide evidence confirming that there is water on Mars, we knew that already (at least in the form of ice and water vapour).
The full results are reported in a paper in Nature Geoscience. The abstract reads:
Determining whether liquid water exists on the Martian surface is central to understanding the hydrologic cycle and potential for extant life on Mars. Recurring slope lineae, narrow streaks of low reflectance compared to the surrounding terrain, appear and grow incrementally in the downslope direction during warm seasons when temperatures reach about 250–300K, a pattern consistent with the transient flow of a volatile species1, 2, 3. Brine flows (or seeps) have been proposed to explain the formation of recurring slope lineae1, 2, 3, yet no direct evidence for either liquid water or hydrated salts has been found4. Here we analyse spectral data from the Compact Reconnaissance Imaging Spectrometer for Mars instrument onboard the Mars Reconnaissance Orbiter from four different locations where recurring slope lineae are present. We find evidence for hydrated salts at all four locations in the seasons when recurring slope lineae are most extensive, which suggests that the source of hydration is recurring slope lineae activity. The hydrated salts most consistent with the spectral absorption features we detect are magnesium perchlorate, magnesium chlorate and sodium perchlorate. Our findings strongly support the hypothesis that recurring slope lineae form as a result of contemporary water activity on Mars.
Here’s a picture taken with the High Resolution Imaging Science Experiment (HIRISE) on MRO showing some of the recurring slope lineae (RSL):
You can see a wonderful gallery of other HIRISE images of other such features here.
The dark streaky stains in this and other examples are visually very suggestive of the possibility they were produced by flowing liquid. They also come and go with the Martian seasons, which suggests that they might involve something that melts in the summer and freezes in the winter. Putting these two facts together raises the quite reasonable question of whether, if that is indeed how they’re made, that liquid might be water.
What is new about the latest results that adds to the superb detail revealed by the HIRISE images – is that there is spectroscopic information that yields clues about the chemical composition of the stuff in the RSLs:
The black lines denote spectra that are taken at two different locations; the upper one has been interpreted as indicating the presence of some mixture of hydrated Calcium, Magnesium and Sodium Perchlorates (i.e. salts). I’m not a chemical spectroscopist so I don’t know whether other interpretations are possible, though I can’t say that I’m overwhelmingly convinced by the match between the data from laboratory specimens and that from Mars…
Anyway, if that is indeed what the spectroscopy indicates then the obvious conclusion is that there is water present, for without water there can be no hydrated salts. This water could have been absorbed from the atmospheric vapour or from the ice below the surface. The presence of salts would lowers the melting point of water ice, so this could explain how there could be some form of liquid flow at the sub-zero temperatures prevalent even in a Martian summer. It would not be pure running water, however, but an extremely concentrated salt solution, much saltier than sea water, probably in the form of a rather sticky brine. This brine might flow – or perhaps creep – down the sloping terrain (briefly) in the summer and then freeze. But nothing has actually been observed to flow in such a way. It seems to me – as a non-expert – that the features could be caused not by a flow of liquid, but by the disruption of the Martian surface, caused by melting and freezing, involving movement of solid material, or perhaps localized seeping. I’m not saying that it’s impossible that a flow of briny liquid is responsible for the features, just that I think it’s far from proven. But there’s no doubt that whatever is going on is fascinatingly complicated!
The last sentence of the abstract quoted above reads:
Our findings strongly support the hypothesis that recurring slope lineae form as a result of contemporary water activity on Mars.
I’m not sure about the “strongly support” but “contemporary water activity” is probably fair as it includes the possibilities I discussed above, but it does seem to have led quite a few people to jump to the conclusion that it means “flowing water”, which I don’t think it does. Am I wrong to be so sceptical? Let me know through the comments box!
### Sean Carroll - Preposterous Universe
The Big Picture
Once again I have not really been the world’s most conscientious blogger, have I? Sometimes other responsibilities have to take precedence — such as looming book deadlines. And I’m working on a new book, and that deadline is definitely looming!
And here it is. The title is The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. It’s scheduled to be published on May 17, 2016; you can pre-order it at Amazon and elsewhere right now.
An alternative subtitle was What Is, and What Matters. It’s a cheerfully grandiose (I’m supposed to say “ambitious”) attempt to connect our everyday lives to the underlying laws of nature. That’s a lot of ground to cover: I need to explain (what I take to be) the right way to think about the fundamental nature of reality, what the laws of physics actually are, sketch some cosmology and connect to the arrow of time, explore why there is something rather than nothing, show how interesting complex structures can arise in an undirected universe, talk about the meaning of consciousness and how it can be purely physical, and finally trying to understand meaning and morality in a universe devoid of transcendent purpose. I’m getting tired just thinking about it.
From another perspective, the book is an explication of, and argument for, naturalism — and in particular, a flavor I label Poetic Naturalism. The “Poetic” simply means that there are many ways of talking about the world, and any one that is both (1) useful, and (2) compatible with the underlying fundamental reality, deserves a place at the table. Some of those ways of talking will simply be emergent descriptions of physics and higher levels, but some will also be matters of judgment and meaning.
As of right now the book is organized into seven parts, each with several short chapters. All that is subject to change, of course. But this will give you the general idea.
* Part One: Being and Stories
How we think about the fundamental nature of reality. Poetic Naturalism: there is only one world, but there are many ways of talking about it. Suggestions of naturalism: the world moves by itself, time progresses by moments rather than toward a goal. What really exists.
* Part Two: Knowledge and Belief
Telling different stories about the same underlying truth. Acquiring and updating reliable beliefs. Knowledge of our actual world is never perfect. Constructing consistent planets of belief, guarding against our biases.
* Part Three: Time and Cosmos
The structure and development of our universe. Time’s arrow and cosmic history. The emergence of memories, causes, and reasons. Why is there a universe at all, and is it best explained by something outside itself?
* Part Four: Essence and Possibility
Drawing the boundary between known and unknown. The quantum nature of deep reality: observation, entanglement, uncertainty. Vibrating fields and the Core Theory underlying everyday life. What we can say with confidence about life and the soul.
* Part Five: Complexity and Evolution
Why complex structures naturally arise as the universe moves from order to disorder. Self-organization and incremental progress. The origin of life, and its physical purpose. The anthropic principle, environmental selection, and our role in the universe.
* Part Six: Thinking and Feeling
The mind, the brain, and the body. What consciousness is, and how it might have come to be. Contemplating other times and possible worlds. The emergence of inner experiences from non-conscious matter. How free will is compatible with physics.
* Part Seven: Caring and Mattering
Why we can’t derive ought from is, even if “is” is all there is. And why we nevertheless care about ourselves and others, and why that matters. Constructing meaning and morality in our universe. Confronting the finitude of life, deciding what stories we want to tell along the way.
Hope that whets the appetite a bit. Now back to work with me.
### astrobites - astro-ph reader's digest
Missing: Several Large Planets
Title: Hunting for planets in the HL Tau disk
Authors: L. Testi, A. Skemer, Th. Henning et al.
First author’s institution: ESO, Karl Schwarzschild str. 2, D-85748 Garching bei Muenchen, Germany
Status: Accepted for publication in ApJ Letters
ALMA image of a disc of gas and dust around the young star HL Tau. The dark rings in the disc are thought to be gaps, carved out by giant planets. Image Credit: ALMA (ESO/NAOJ/NRAO)
Nearly a year ago, the ALMA collaboration released this stunning image of the young star HL Tau. The sub-millimeter wavelengths of light that ALMA detects revealed a vast disc of gas and dust, several times larger than Neptune’s orbit. Intriguingly, the disc was divided up into a series of well-defined, concentric rings.
The cause of the rings seemed clear: There must be planets around HL Tau, their gravity sculpting the gas and sweeping out the dark gaps in the disc.
But there was an issue with this hypothesis. HL Tau is a very young star, less than a million years old. Many planetary formation models assume that planets take much longer to grow to the kind of sizes needed to shape the disc like that. If the gaps are being made by planets, then those models will need a serious rethink.
The authors of today’s paper decided to take a closer look, and see if they could spot the hypothetical planets. This isn’t as easy as it may appear. In the part of the electromagnetic spectrum probed by ALMA, the star is relatively dim, allowing the light from the disc to be discerned. However, any planets present would shine in infrared light, with a much shorter wavelength. In infrared, the blinding light from HL Tau would easily outshine that from a planet.
Two techniques were used to overcome this problem. The first was simple: Use the biggest telescope that they could get their hands on, in this case the unique Large Binocular Telescope Interferometer (LBTI).
The second trick was to use adaptive optics. This technique uses a light source, such as a laser or, in the case of the LBTI, a well-known star, to correct for the distortions in light caused by the Earth’s atmosphere. As the telescope’s computers know what the guide star “should” look like, it can continuously flex a small mirror to counteract the effects of the atmosphere. This makes the images much clearer, enough to directly image planets around some stars.
But even adaptive optics wasn’t enough to show up planets around HL Tau. The last obstacle was the disc itself. The very reason for looking for planets had become a hindrance to spotting them, scattering the light from the star out to much greater distances than usual.
To remove this scattered light, the authors made two infrared observations of HL Tau, one at a slightly redder wavelength than the other. In both, any signal from planets was drowned out by the scattered light.
But the exoplanets were predicted to be much redder than the scattered light. This meant that they wouldn’t show up at all in the less-red image, regardless of the scattered light. However, they should have been somewhere in the second image, with the scattered light roughly the same in both. Subtract the first image from the second, and the scattered light would disappear, leaving just the planets.
Left: K-band infrared image, with scattered light only. Right: Slightly redder L’ band image, showing both scattered light and (if they are there) planets. Subtract one from the other and…
…No planets. Oh well. The subtracted image, with blue lines showing the most prominent gaps in the disc, the red star the position of HL Tau, and the green circle the position of a candidate planet from an older observation. Planets should show up as white dots near the rings, of which there are none to be seen.
When the authors did this, they spotted…nothing. Based on the precision of their data, they conclude that there are no planets larger than 10-15 times the mass of Jupiter near the gaps in HL Tau’s disc.
At first glace that doesn’t seem to be a problem. Planets that large aren’t all that common, and there could easily be planets too small for the LBTI to detect hiding in the gaps.
But planets any smaller than 10 Jupiter masses wouldn’t have enough gravity to shape the disc in the way seen in the ALMA image. Planets or no planets, a new explanation for the complex structure of HL Tau’s disc may be needed.
The authors point out one possible way to solve this problem. ALMA is most sensitive to dust grains around a millimeter across, whilst the disc is probably made of a range of particle sizes. Smaller planets may have just enough gravity to move only the millimeter-scale particles into the observed rings, leaving the rest of the disc relatively untouched.
So are the gaps in the ring really caused by planets, or something else that we haven’t thought of yet? The paper ends by charting out the ways that astronomers can explore this system in the future. Longer observations by ALMA could broaden the range of dust sizes seen, allowing a more complete image of the disc structure to be made. And searches for smaller planets could be carried out, although such precise measurements will probably need to wait for the next generation of truly giant telescopes.
### Peter Coles - In the Dark
September’s Baccalaureate
September’s Baccalaureate
A combination is
Of Crickets – Crows – and Retrospects
And a dissembling Breeze
That hints without assuming –
An Innuendo sear
That makes the Heart put up its Fun
And turn Philosopher.
by Emily Dickinson (1830-1886)
### arXiv blog
Moon-Landing Equivalent for Robots: Assembling an IKEA Chair
Robots are poor at many activities that humans find simple. Now roboticists are making progress on a task that exemplifies them all: the automated assembly of an IKEA chair.
Humans have long feared that robots are taking over the world. The truth, however, is more prosaic. It’s certainly the case that robots have revolutionized certain tasks such as car manufacturing, for example.
### Clifford V. Johnson - Asymptotia
Moon Line
(Click for larger view.) This was a heartening reminder that people still care about what's going on in the sky far above. This is a snap I took of a very long line of people (along the block and then around the corner and then some more) waiting for the shuttle bus to the Griffith Observatory to take part in the moon viewing activities up there tonight. (I took it at about 6:00pm, so I hope they all made it up in time!) The full moon is at close approach, and there was a total lunar eclipse as well. Knowing the people at the Observatory, I imagine they had arranged for lots of telescopes to be out on the lawn in front of the Observatory itself, as well as plenty of people on hand to explain things to curious visitors.
I hope you got to see some of the eclipse! (It is just coming off peak now as I type...)
The post Moon Line appeared first on Asymptotia.
## September 27, 2015
### Peter Coles - In the Dark
The Meaning of Cosmology
I know it’s Sunday, and it’s also sunny, but I’m in the office catching up with my ever-increasing backlog of work so I hope you’ll forgive me for posting one from the vaults, a rehash of an old piece that dates from 2008..
–o–
When asked what I do for a living, I’ve always avoided describing myself as an astronomer, because most people seem to think that involves star signs and horoscopes. Anyone can tell I’m not an astrologer anyway, because I’m not rich. Astrophysicist sounds more impressive, but perhaps a little scary. That’s why I usually settle on the “Cosmologist”. Grandiose, but at the same time somehow cuddly.
I had an inkling that this choice was going to be a mistake at the start of my first ever visit to the United States, which was to attend a conference in memory of the great physicist Yacov Borisovich Zel’dovich, who died in 1989. The meeting was held in Lawrence, Kansas, home of the University of Kansas, in May 1990. This event was notable for many reasons, including the fact that the effective ban on Russian physicists visiting the USA had been lifted after the arrival of glasnost to the Soviet Union. Many prominent scientists from there were going to be attending. I had also been invited to give a talk, the only connection with Zel’dovich that I could figure out was that the very first paper I wrote was cited in the very last paper to be written by the great man.
I think I flew in to Detroit from London and had to clear customs there in order to transfer to an internal flight to Kansas. On arriving at the customs area in the airport, the guy at the desk peered at my passport and asked me what was the purpose of my visit. I said “I’m attending a Conference”. He eyed me suspiciously and asked me my line of work. “Cosmologist,” I proudly announced. He frowned and asked me to open my bags. He looked in my suitcase, and his frown deepened. He looked at me accusingly and said “Where are your samples?”
I thought about pointing out that there was indeed a sample of the Universe in my bag but that it was way too small to be regarded as representative. Fortunately, I thought better of it. Eventually I realised he thought cosmologist was something to do with cosmetics, and was expecting me to be carrying little bottles of shampoo or make-up to a sales conference or something like that. I explained that I was a scientist, and showed him the poster for the conference I was going to attend. He seemed satisfied. As I gathered up my possessions thinking the formalities were over, he carried on looking through my passport. As I moved off he suddenly spoke again. “Is this your first visit to the States, son?”. My passport had no other entry stamps to the USA in it. “Yes,” I said. He was incredulous. “And you’re going to Kansas?”
This little confrontation turned out to be a forerunner of a more dramatic incident involving the same lexicographical confusion. One evening during the Zel’dovich meeting there was a reception held by the University of Kansas, to which the conference participants, local celebrities (including the famous writer William Burroughs, who lived nearby) and various (small) TV companies were invited. Clearly this meeting was big news for Lawrence. It was all organized by the University of Kansas and there was a charming lady called Eunice who was largely running the show. I got talking to her near the end of the party. As we chatted, the proceedings were clearly winding down and she suggested we go into Kansas City to go dancing. I’ve always been up for a boogie, Lawrence didn’t seem to be offering much in the way of nightlife, and my attempts to talk to William Burroughs were repelled by the bevy of handsome young men who formed his entourage, so off we went in her car.
Before I go on I’ll just point out that Eunice – full name Eunice H. Stallworth – passed away suddenly in 2009. I spent quite a lot of time with her during this and other trips to Lawrence, including a memorable day out at a pow wow at Haskell Indian Nations University where there was some amazing dancing.
Anyway, back to the story. It takes over an hour to drive into Kansas City from Lawrence but we got there safely enough. We went to several fun places and had a good time until well after midnight. We were about to drive back when Eunice suddenly remembered there was another nightclub she had heard of that had just opened. However, she didn’t really know where it was and we spent quite a while looking for it. We ended up on the State Line, a freeway that separates Kansas City Kansas from Kansas City Missouri, the main downtown area of Kansas City actually being for some reason in the state of Missouri. After only a few moments on the freeway a police car appeared behind us with its lights blazing and siren screeching, and ushered us off the road into a kind of parking lot.
Eunice stopped the car and we waited while a young cop got out of his car and approached us. I was surprised to see he was on his own. I always thought the police always went around in pairs, like low comedians. He asked for Eunice’s driver’s license, which she gave him. He then asked for mine. I don’t drive and don’t have a driver’s license, and explained this to the policeman. He found it difficult to comprehend. I then realised I hadn’t brought my passport along, so I had no ID at all.
I forgot to mention that Eunice was black and that her car had Alabama license plates.
I don’t know what particular thing caused this young cop to panic, but he dashed back to his car and got onto his radio to call for backup. Soon, another squad car arrived, drove part way into the entrance of the parking lot and stopped there, presumably so as to block any attempted escape. The doors of the second car opened and two policemen got out, kneeled down and and aimed pump-action shotguns at us as they hid behind the car doors which partly shielded them from view and presumably from gunfire. The rookie who had stopped us did the same thing from his car, but he only had a handgun.
“Put your hands on your heads. Get out of the car. Slowly. No sudden movements.” This was just like the movies.
We did as we were told. Eventually we both ended up with our hands on the roof of Eunice’s car being frisked by a large cop sporting an impressive walrus moustache. He reminded me of one of the Village People, although his uniform was not made of leather. I thought it unwise to point out the resemblance to him. Declaring us “clean”, he signalled to the other policemen to put their guns away. They had been covering him as he searched us.
I suddenly realised how terrified I was. It’s not nice having guns pointed at you.
Mr Walrus had found a packet of French cigarettes (Gauloises) in my coat pocket. I clearly looked scared so he handed them to me and suggested I have a smoke. I lit up, and offered him one (which he declined). Meanwhile the first cop was running the details of Eunice’s car through the vehicle check system, clearly thinking it must have been stolen. As he did this, the moustachioed policeman, who was by now very relaxed about the situation, started a conversation which I’ll never forget.
Policeman: “You’re not from around these parts, are you?” (Honestly, that’s exactly what he said.)
Me: “No, I’m from England.”
Policeman: “I see. What are you doing in Kansas?”
Me: “I’m attending a conference, in Lawrence..”
Policeman: “Oh yes? What kind of Conference?”
At this point, Mr Walrus nodded and walked slowly to the first car where the much younger cop was still fiddling with the computer.
“Son,” he said, “there’s no need to call for backup when all you got to deal with is a Limey hairdresser…”.
### Tommaso Dorigo - Scientificblogging
One Dollar On 5.3 TeV
This is just a short post to mention one thing I recently learned from a colleague - the ATLAS experiment also seems to have collected a 5.3 TeV dijet event, as CMS recently did (the way the communication took place indicates that this is a public information; if it is not, might you ATLAS folks let me know, so that I'll remove this short posting?). If any reader here from ATLAS can point me to the event display I would be grateful. These events are spectacular to look at: the CMS 5 TeV dijet event display was posted here a month ago if you like to have a look.
## September 26, 2015
### Jester - Resonaances
Weekend Plot: celebration of a femtobarn
The LHC run-2 has reached the psychologically important point where the amount the integrated luminosity exceeds one inverse femtobarn. To celebrate this event, here is a plot showing the ratio of the number of hypothetical resonances produced so far in run-2 and in run-1 collisions as a function of the resonance mass:
In the run-1 at 8 TeV, ATLAS and CMS collected around 20 fb-1. For 13 TeV collisions the amount of data is currently 1/20 of that, however the hypothetical cross section for producing hypothetical TeV scale particles is much larger. For heavy enough particles the gain in cross section is larger than 1/20, which means that run-2 now probes a previously unexplored parameter space (this simplistic argument ignores the fact that backgrounds are also larger at 13 TeV, but it's approximately correct at very high masses where backgrounds are small). Currently, the turning point is about 2.7 TeV for resonances produced, at the fundamental level, in quark-antiquark collisions, and even below that for those produced in gluon-gluon collisions. The current plan is to continue the physics run till early November which, at this pace, should give us around 3 fb-1 to brood upon during the winter break. This means that the 2015 run will stop short before sorting out the existence of the 2 TeV di-boson resonance indicated by run-1 data. Unless, of course, the physics run is extended at the expense of heavy-ion collisions scheduled for November ;)
## September 25, 2015
### arXiv blog
How Good Are You at Detecting Digital Forgeries?
Image-based forgery is becoming more common not least because humans seem to be particularly vulnerable even to obvious fakes.
Back in 2010, the Australian public was enthralled by a case of fraud in which the fraudster convinced people of his credentials by producing pictures of himself with Pope John Paul II, Bill Clinton, Bill Gates, and others. In this way, the fraudster raised £7 million from investors who were taken in. The pictures, of course, were forgeries.
### ZapperZ - Physics and Physicists
Why Do We Put Telescope In Space?
Here's the Minute Physics explanation:
Zz.
## September 24, 2015
### Symmetrybreaking - Fermilab/SLAC
Citizen scientists published
Amateurs and professionals share the credit in the newest publications from the Space Warps project.
When amateur astronomer Julianne Wilcox first moved and traded the star-covered firmament of Petervale, South Africa, for the light-cluttered sky of London, she feared that she would no longer be able to indulge in her passion for astronomy.
Then she discovered a new way of doing what she loves: online citizen science projects that engage amateurs like her in the analysis of real astronomical data.
Wilcox is one of 37,000 citizen scientists involved in two papers accepted for publication in the journal Monthly Notices of the Royal Astronomical Society. The papers report the discovery of 29 potential new gravitational lenses—objects such as massive galaxies and galaxy clusters that distort light from faraway galaxies behind them. An additional 30 promising objects may turn out to be lenses, too.
Amateur scientists from all walks of life identified the new objects using Space Warps, a web-based gravitational lens discovery platform. They did so by marking lens-like features in some 430,000 images of the Canada-France-Hawaii Telescope Legacy Survey.
Since gravitational lenses act like cosmic magnifying glasses, they help researchers look at very distant light sources. They also provide information about invisible dark matter, because dark matter affects the way gravitational lenses bend light.
Researchers can now point their telescopes at the newly identified objects and study them in more detail.
“In addition to its immediate scientific output, Space Warps is also a great platform to figure out how to get citizen scientists involved in future large-scale astronomical surveys,” says Phil Marshall, Space Warps principle investigator for the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of SLAC National Accelerator Laboratory and Stanford University.
The Large Synoptic Survey Telescope, for instance, will begin in the early 2020s to capture images of the entire southern night sky in unprecedented detail. In the process, it’ll generate about 6 million gigabytes of data per year. Researchers hope that the public can help with processing these gigantic streams of information.
Apart from distributing a lot of work among a large number of people, crowdsourcing also appears to be well suited for the analysis of complex data.
“In our experience, humans are doing much better than computer algorithms in identifying faint and complex objects such as gravitational lenses that are not that obvious,” says Anupreeta More, Space Warps principle investigator for the Kavli Institute for the Physics and Mathematics of the Universe in Tokyo. “We can use what we’ve learned about how volunteers identify new objects to develop smarter algorithms.”
Citizen scientists also excel at spotting unexpected things. For example, when asked to look for typically bluish lens-like features in images of another survey, Space Warps users spotted an object with strong red-colored arcs—a gravitational lens bending light from a particularly interesting star-forming galaxy behind it.
“Our users have identified several stunning objects like this,” says Aprajita Verma, Space Warps principal investigator for the University of Oxford. “It shows that citizen scientists are very flexible and understand the larger context of the images they’re shown.”
But crowdsourced science benefits more than just the researchers, says Wilcox, who avidly participates in a variety of astronomy-focused projects.
“Citizen science is a two-way process,” she says. “Getting astronomical objects classified is one aspect, but it also sparks off an interest in research in people without a science background.”
As one of Space Warps’ expert users, Wilcox not only looks for gravitational lenses but also moderates the project’s community discussions and helps further analyze identified objects—contributions that have earned her and her fellow moderators Elisabeth Baeten, Claude Cornen and Christine Macmillan a spot on the author lists of the two Space Warps papers.
“It’s great to be on the papers,” she says. “It really shows the amazing opportunities that are available to citizen scientists.” Wilcox hopes that her example could help getting even more volunteers interested in people-powered research.
The sky’s the limit; try it yourself at spacewarps.org, or get involved in the Zooniverse, a citizen science platform of currently 33 projects covering various scientific disciplines.
Like what you see? Sign up for a free subscription to symmetry!
### ATLAS Experiment
Top 2015 – Mass, Momentum, and the Conga
The top quark conference normally follows the same basic structure. The first few days are devoted to reports on the general status of the field and inclusive measurements; non-objectionable stuff that doesn’t cause controversy. The final few days are given over to more focused analyses; the sort of results that professors really enjoy arguing about. We got a taste of this earlier than usual this year as discussion on top transverse momenta (pT) broke out at least three times before we even managed to get to the session on Thursday! As a postdoc, I do love this sort of debate at a workshop, almost as much as I enjoy watching the students arrive at 9am, desperately hungover and probably assuming they were quiet as they crept back into the Hotel at 3am (no Joffrey, we definitely didn’t hear you knock over that sun lounger).
The CMS combination of measurements of top-quark mass, currently the most sensitive in the world.
DAY 3:
Top Mass is always a great topic at this conference. This year the theorists started by reminding us, for what feels like the millionth time, of the difference between various interpretations of “mass” in perturbative QCD, telling us which are well-defined and safe to use. The LHC and Tevatron experiments then showed staggeringly precise measurements using our ill-defined definition of “Monte Carlo mass” that theorists have been complaining about for decades. This year we’ve really outdone ourselves and CMS have combined their results to produce a measurement with an uncertainty of less than 0.5 GeV! Fine, we’re not sure ‘exactly’ what the Monte Carlo mass really is theoretically, but we did also provide well-interpreted pole-mass results (at the cost of having larger uncertainties), so let’s hope that’s enough to keep the theorists happy.
CONFERENCE DINNER:
While it cannot yet be said that starting a conga line qualifies as a tradition at the Top conference, it does seem to occur with increasing frequency. I have my own theories about how and why this occurs (and evidence of a certain ATLAS top convenor who seems to be close to the front of the line each time it happens…) and I find that there are few things as surreal as your bosses and ex-bosses dancing around in a semi-orderly line with their hands on your hips screaming “go faster” in your ear. Though this has little to nothing to do with top physics, I enjoy mentioning it.
Predictions at leading order (LO), next-to leading order (NLO), and next-to-next-to leading order (NNLO) of the top quark transverse momentum.
DAY 4:
Once upon a time, ATLAS and CMS measured the top quark’s pT distribution in data. At first, ATLAS and CMS simulations appeared to disagree with each other, and neither agreed well with the observed data. Though most of the differences between ATLAS and CMS were eventually explained (…sort of) the data itself remained stubbornly different from the simulation. Czakon et al. and their STRIPPER program to the rescue! David Haymes presented a differential top pT distribution at full next-to-next-to leading order (NNLO), calculated using STRIPPER, that agrees nicely with all of the data, proving that next-to-leading-order doesn’t go nearly far enough when it comes to the top quark.
You’ll notice that I didn’t explain what STRIPPER actually is. In short, it is a combination of an NNLO computational algorithm, capable of providing predictions of the top quarks kinematics, and a touch of theorist humour, in the form of an extremely contrived acronym. One can only hope that STRIPPER is meant to describe the stripping away of the complexities of NNLO calculations, but I suspect that would be generous to the point of naivety. At least the speaker wasn’t wearing a horrendous anime shirt. The result itself, however, is very impressive and desperately needed in order to understand the LHC data.
DAY 5:
Well, it’s been a very successful conference. We’ve seen the first 13 TeV results, some of the most precise results to come out of LHC Run1, and even a few Tevatron highlights! Next year we’ll be near Prague, in keeping with the tradition of the conference being held in places famous for either alcohol or beaches. See you in the conga line!
James Howarth is a postdoctoral research fellow at DESY, working on top quark cross-sections and properties for ATLAS. He joined the ATLAS experiment in 2009 as a PhD student with the University of Manchester, before moving to DESY, Hamburg in 2013. In his spare time he enjoys drinking, arguing, and generally being difficult.
### Lubos Motl - string vacua and pheno
Naturalness is fuzzy, subjective, model-dependent, and uncertain, too
In an ordinary non-supersymmetric model of particle physics such as the Standard Model, the masses of (especially) scalar particles are "unprotected" which is why they "love" to be corrected by pretty much any corrections that offer their services.
For example, if you interpret the Standard Model as an effective theory approximating a better but non-supersymmetric theory that works up to the GUT scale or Planck scale, fifteen orders of magnitude above the Higgs mass, there will be assorted loop diagrams that contribute to the observable mass of the Higgs boson.$m_h^2 = \dots + 3.5 m_{Pl}^2 - 2.7 m_{Pl}^2 + 1.9 m_{GUT}^2 - \dots$ and when you add the terms up, you should better obtain the observed value$m_h^2 = [(125.1\pm 0.3)\GeV]^2$ or so. It seems that we have been insanely lucky to get this small result. Note that the lightness of all other known massive elementary particles is derived from the lightness of the Higgs. Terms that were $$10^{30}$$ times larger than the final observed Higgs mass came with both signs and (almost) cancelled themselves with a huge relative accuracy.
A curious scientist simply has to ask: Why? Why does he have to ask? Because the high-energy parameters that the individual terms depended upon had to be carefully adjusted, or fine-tuned, to obtain the very tiny final result. This means that the "qualitative hypothesis", the Standard Model (or its completion relevant near the GUT scale or the Planck scale) with arbitrary values of the parameters only predicts the outcome qualitatively similar to the observed one – a world with a light Higgs – with a very low probability.
If you assume that the probabilistic distribution on the parameter space is "reasonably quasi-uniform" in some sense, most of the points of the parameter space predict totally wrong outcomes. So the conditional probability $$P(LHC|SM)$$ where LHC indicates the masses approximately observed at the LHC and SM is the Standard Model with arbitrary parameters distributed according to some sensible, quasi-uniform distribution, is tiny, perhaps of order $$10^{-30}$$, because only a very tiny portion of the parameter space gives reasonable results.
By Bayes' theorem, we may also argue that $$P(SM|LHC)$$, the probability of the Standard Model given the qualitative observations at the LHC, is extremely tiny, perhaps $$10^{-30}$$, as well. Because the probability of the Standard Model is so small, we may say that in some sense, the Standard Model – with the extra statistical assumptions above – has been falsified. It's falsified just like any theory that predicts than an actually observed effect should be extremely unlikely. For example, a theory claiming that there is no Sun – the dot on the sky is just composed of photons that randomly arrive from that direction – becomes increasingly indefensible as you see additional photons coming from the same direction. ;-)
Before you throw the Lagrangian of the Standard Model to the trash bin, you should realize that what is actually wrong isn't the Lagrangian of the Standard Model, an effective field theory, itself. What's wrong are the statistical assumptions about the values of the parameters. Aside from the Standard Model Lagrangian, there exist additional laws in a more complete theory that actually guarantee that the value of the parameters is such that the terms contributing to the squared Higgs mass simply have to cancel each other almost exactly.
Supersymmetry is the main system of ideas that is able to achieve such a thing. The contribution of a particle, like the top quark, and its superpartner, the stop, to $$m_h^2$$ are exactly the same, up to the sign, so they cancel. More precisely, this cancellation holds for unbroken supersymmetry in which the top and stop are equally heavy. We know this not to be the case. The top and the stop have different masses – or at least, we know this for other particle species than the stop.
But even when the top and the stop have different masses and supersymmetry is spontaneously broken, it makes the fine-tuning problem much less severe. You may clump the contributions from the particles with the contributions from their superpartners into "packages". And these "couples" almost exactly cancel, up to terms comparable to the "superpartner scale". This may be around $$(1\TeV)^2$$, about 100 times higher than $$m_h^2$$. So as long as the superpartner masses are close enough to the Higgs mass, the fine-tuning problem of the Standard Model becomes much less severe once supersymmetry is added.
Realistically, $$m_h^2 \approx (125\GeV)^2$$ is obtained from the sum of terms that are about 100 times higher than the final result. About 99% of the largest term is cancelled by the remaining terms and 1% of it survives. Such a cancellation may still be viewed as "somewhat unlikely" but should you lose sleep over it? Or should you discard the supersymmetric model? I don't think so. You have still improved the problem with the probability from $$10^{-30}$$ in the non-supersymmetric model to something like $$10^{-2}$$ here. After all, supersymmetry is not the last insight about Nature that we will make and the following insights may reduce the degree of fine-tuning so that the number $$10^{-2}$$ will be raised to something even closer to one. When and if the complete theory of everything is understood and all the parameters are calculated, the probability that the right theory predicts the observed values of the parameters will reach 100%, of course.
I think that the text above makes it pretty clear that the "naturalness", the absence of unexplained excessively accurate cancellations, has some logic behind it. But at the same moment, it is a heuristic rule that depends on many ill-defined and fuzzy words such as "sensible", "tolerable", and many others. When you try to quantify the degree of fine-tuning, you may write the expressions in many different ways. They will yield slightly different results.
At the end, all these measures that quantify "how unnatural" a model is may turn out to be completely wrong, just like the non-supersymmetric result $$10^{-30}$$ was shown to be wrong once SUSY was added. When you add new principles or extract the effective field theory from a more constrained ultraviolet starting point, you are effectively choosing an extremely special subclass of the effective field theories that were possible before you embraced the new principle (such as SUSY, but it may also be grand unification, various integer-valued relationships that may follow from string theory and tons of related things).
So one can simply never assume that a calculation of the "degree of naturalness" is the final answer that may falsify a theory. It's fuzzy because none of the expressions are clearly better than others. It's subjective because people will disagree what "feels good". And it's model-dependent because qualitatively new models produce totally different probability distibutions on the parameter spaces.
Moreover, the naturalness as a principle – even when we admit it is fuzzy, subjective, and model-dependent – is still uncertain. It may actually contradict principles such as the Weak Gravity Conjecture – which is arguably supported by a more nontrivial, non-prejudiced body of evidence. And the smallness of the Higgs boson mass or the cosmological constant may be viewed as indications that something is wrong with the naturalness assumption, if not disproofs of it.
Today, rather well-known model builders Baer, Barger, and Savoy published a preprint that totally and entirely disagrees with the basic lore I wrote above. The paper is called
Upper bounds on sparticle masses from naturalness or how to disprove weak scale supersymmetry
They say that the "principle of naturalness" isn't fuzzy, subjective, model-dependent, and uncertain. Instead, it is objective, model-independent, demanding clear values of the bounds, and predictive. Wow. I honestly spent some time by reading more or less the whole paper. Can there be some actual evidence supporting these self-evidently wrong claims?
Unfortunately, what I got from the paper was just laughter, not insights. These folks can't possibly be serious!
They want to conclude that a measure of fine-tuning $$\Delta$$ has to obey $$\Delta\lt 30$$ and derive upper limits on the SUSY Higgs mixing parameter $$\mu$$ (it shall be below $$350\GeV$$) or the gluino mass (they raise the limit to $$4\TeV$$). But what are the exact equations or inequalities from which they deduce such conclusions and, especially, what is their evidence in favor of these inequalities?
I was searching hard and the only thing I found was a comment that some figure is "visually striking" (meaning that it wants you to say Someone had to fudge it). Are you serious? Will particle physicists calculate particle masses by measuring how much twisted their stomach becomes when they look at a picture?
"The visually striking" pictures obviously show columns that are not of the same order. But does it mean that the model is wrong? When stocks fell almost by 5% a day, the graphs of the stock prices were surely "visually striking" and many people couldn't believe and many of those couldn't sleep, either. But the drop was real. And it wasn't the only one. Moreover, it's obvious that different investors have different tolerance levels. In the same way, particle physicists have different tolerance levels when it comes to the degree of acceptable fine-tuning.
They want to impose $$\Delta\lt 30$$ on everyone, as a "principle", but it's silly. No finite number on the right hand side would define a good "robust law of physics" because there are no numbers that are so high that they couldn't occur naturally. :-) But if you want me to become sure about the falsification of a model – with some ideas about the distribution of parameters – you would need $$\Delta\geq 10^6$$ for me to feel the same "certainty" as I feel when a particle is discovered at 5 sigma, or $$10^3$$ to feel the 3-sigma-like certainty.
Their proposal to view $$\Delta\gt 30$$ as a nearly rigorous argument falsifying a theory is totally equivalent to using 2-sigma bumps to claim discoveries. The isomorphism is self-evident. When you add many terms of both signs, the probability that the absolute value of the result is less than 3 percent of the absolute value of the largest term is comparable to 3 percent. It's close to 5 percent, the probability that you get a 2-sigma or greater bump by chance!
And be sure that because we have only measured one Higgs mass, it's one bump. To say that $$m_h^2$$ isn't allowed to be more than 30 times smaller than the absolute value of the largest contribution is just like saying a 2-sigma bump is enough to settle any big Yes/No question in physics. Even more precisely, it's like saying that there won't ever be more than 2-sigma deviations from a correct theory. Sorry, it's simply not true. You may prefer a world in which the naturalness could be used to make similarly sharp conclusions and falsify theories. But it is not our world. In our world with the real laws of mathematics, this is simply not possible. Even $$\Delta\approx 300$$ is as possible as the emergence of a 3-sigma bump anywhere by chance. Such things simply may happen.
Obviously, once we start to embrace the anthropic reasoning or multiverse bias, much higher values of $$\Delta$$ may become totally tolerable. I don't want to slide into the anthropic wars here, however. My point is that even if we reject all forms of anthropic reasoning, much higher values of $$\Delta$$ than thirty may be OK.
But what I also find incredible is their degree of worshiping of random, arbitrary formulae. For example, Barbieri and Giudice introduced this naturalness measure$\Delta_{BG}=\max_i \abs{ \frac{ \partial\log m_Z^2 }{ \partial \log p_i } }$ You calculate the squared Z-boson mass out of many parameters $$p_i$$ of the theory at the high scale. The mass depends on each of them, you measure the slope of the dependence in the logarithmic fashion, and pick the parameter which leads to the steepest dependence. This steepest slope is then interpreted as the degree of fine-tuning.
Now, this BG formula – built from previous observations by Ellis and others – is more quantitative than the "almost purely aesthetic" appraisals of naturalness and fine-tuning that dominated the literature before that paper. But while this expression may be said to be well-defined, it's extremely arbitrary. The degree of arbitrariness almost exactly mimics the degree of vagueness that existed before. So even though we have a formula, it's not a real quantitative progress.
When I say that the formula is arbitrary, I have dozens of particular complaints in mind. And there surely exist hundreds of other complaints you could invent.
First, for example, the formula depends on a particular parameterization of the parameter space in terms of $$p_i$$. Any coordinate redefinition – a diffeomorphism on the parameter space – should be allowed, shouldn't it? But a coordinate transformation will yield a different value of $$\Delta$$. Why did you parameterize the space in one way and not another way? Even if you banned general diffeomorphisms, there surely exist transformations that are more innocent, right? Like the replacement of $$p_i$$ by their linear combinations. Or products and ratios, and so on.
Second, and it is related, why are there really the logarithms? Shouldn't the expression depend on the parameters themselves, if they are small? Shouldn't one define a natural metric on the parameter space and use this metric to define the naturalness measure?
Third, why are we picking the maximum?$\max K_i$ may be fine to pick a representative value of many values $$K_i$$. But we may also pick$\sum_i |K_i|$ or, perhaps more naturally,$\sqrt{ \sum_i |K_i|^2 }.$ For a "rough understanding", such changes usually don't change the picture dramatically. But if you wanted exact bounds, it's clearly important which of those expressions is picked and why. You would need some evidence that favors one formula or another. There is no theoretical evidence and there is no empirical evidence, either.
The bounds defined by the three alternative expressions above may be called a "cube", a "diamond", and a "ball". The corresponding limits on the superpartner masses may have similar shapes. The authors end up claiming things like $$\mu\lt 350\GeV$$ out of random assumptions of the form $$\Delta\lt 30$$ – even though the $$\Delta$$ could be replaced by a different formula and $$30$$ could be replaced by a different number. Why is their starting point better than the "product"? Why don't they directly postulate an inequality on $$\mu$$?
Fourth, shouldn't the formula for the naturalness measure get an explicit dependence on the number of parameters? If your theory has many soft parameters, you may view it as an unnatural theory regardless of the degree of fine-tuning in each parameter because it becomes easier to make things look natural if there are many moving parts that may conspire (or because there are many theories with many parameters which should make you reduce their priors). However, you could also present the arguments going exactly in the opposite direction. When there are many parameters $$p_i$$ leading to many slopes $$K_i$$, it's statistically guaranteed that at least one of them will turn out to be large, by chance, right? So perhaps, for a large number of parameters, you should tolerate higher values of $$\Delta$$, too.
One can invent infinitely many arguments and counter-arguments that will elevate or reduce the tolerable values of $$\Delta$$ for one class of theory differently than for another class of theories, arguments and counter-arguments that may take not just the number of parameters but any qualitative (and quantitative) property of the class of theories into account! The uncertainty and flexibility has virtually no limits, and for a simple reason: we are basically talking about the "right priors" for all hypotheses in physics. Well, quite generally in Bayesian inference, there can't be any universally "right priors". Priors are unavoidably subjective and fuzzy. Only the posterior or "final" answers or probabilities (close to zero or one) after a sufficient body of relevant scientific evidence is collected may be objective.
There are lots of technical complications like that. And even if you forgot about those and treated the formula $$\Delta_{BG}$$ as a canonical gift from the Heaven, which it's obviously not, what should be the upper bound that you find acceptable? Arguing whether it's $$\Delta\lt 20$$ or $$\Delta \lt 300$$ is exactly equivalent to arguments whether 2-sigma or 3-sigma bumps are enough to settle a qualitative question about the validity of a theory. You know, none of them is enough. But even if you ask "which of them is a really strong hint", the answer can't be sharp. The bound is unavoidably fuzzy.
They also discuss a similar naturalness measure$m_Z^2 = \sum_i K_i m_Z^2, \quad \Delta_{EW}=\max |K_i|$ You write the squared Z-boson mass as the sum of many terms. I wrote them as $$K_i$$ times $$m_Z^2$$ so that the sum $$\sum_i K_i = 1$$, and the degree of naturalness is the greatest absolute value among the values $$|K_i|$$. If there is a cancellation of large numbers, the theory is said to be highly EW-fine-tuned.
Again, when you write the squared Z-boson mass according to a particular template, the naturalness measure above becomes completely well-defined. But this well-definedness doesn't help you at all to answer the question Why. Why is it exactly this formula and not another one? Why are you told to group the terms in one way and not another way?
The grouping of terms is an extremely subtle thing. An important fact about physics is that only the total result for the mass, or a scattering amplitude, is physically meaningful. The way how you calculate it – how you organize the calculation or group the contributions – is clearly unphysical. There are many ways and none of them is "more correct" than all others.
At the beginning, I told you that the contributions of the top and the stop to the squared Higgs mass may be huge but when you combine them into the top-stop contribution, this contribution is much smaller than the GUT or Planck scale: it is comparable to the much lower superpartner scale. So the "impression about fine-tuning" clearly depends on how you write the thing which is unphysical.
There are lots of numbers in physics that are much smaller than the most naive order-of-magnitude estimates. The cosmological constant is the top example and its tiny size continues to be largely unexplained. The small Higgs boson mass would be unexplained without SUSY etc. But there are many more mundane examples. In those cases, there is no contradiction because we can explain why the numbers are surprisingly small.
The neutron's decay rate is very low – the lifetime is anomalously long, some 15 minutes, vastly longer than other lifetimes of things decaying by a similar beta-decay. It's because the phase space of the final 3 products to which neutron decays is tiny. It's because the neutron mass is so close to the sum of the proton mass and the electron mass (and the neutrino mass, if you don't neglect it). The suppression is by a power law.
But take an even more mundane example. The strongest spectral line emitted by the Hydrogen atom. I mean the line between $$n=1$$ and $$n=2$$. Its energy is $$13.6\eV(1-1/4)$$, some ten electronvolts. You could say that it is insanely fine-tuned because it's obtained as the difference between two energies/masses of the Hydrogen atom in two different states. The hydrogen atom's mass is almost $$1\GeV$$, mostly composed of the proton mass.
Why does the photon mass end up being $$10\eV$$, 100 million times lower than the latent energy of the Hydrogen atom? Well, we have lots of mundane explanations. First, we know why the two masses are almost the same because the proton is doing almost nothing during the transition. (This argument is totally analogous to the aforementioned claim in SUSY that the top and the stop may be assumed to be similar.) The complicated motion in the Hydrogen atom is only due to a part of the atom, the electron, whose rest mass is just $$511\keV$$, almost 2,000 times lighter than the proton. This rest mass of the electron is still 50,000 times larger than the energy of the photon. Why?
Well, it's because the binding energy of the electron in the hydrogen atom is comparable to the kinetic energy. And the kinetic energy is much lower than the rest mass because the speed of the electron is much smaller than the speed of light. It's basically the fine-structure constant times the speed of light, as one may derive easily.
Now, why is the fine-structure constant, the dimensionless strength of electromagnetism, so small? It is $$1/137.036$$ or so. Well, it is just what it is. We may find excuses as well as formulae deriving the value from constants considered more fundamental these days. First, one could argue that $$4\pi / 137$$ and not $$1/137$$ is the more natural constant to consider. So a part of the smallness of $$1/137$$ is because it implicitly contains some factor of $$1/4\pi$$, about one twelfth, you could be more careful about.
The remaining constant may be derived from the electroweak theory. The fine-structure constant ends up smaller than you could expect because 1) its smallness depends on the smallness of two electroweak coupling constants, 2) it's mostly a $$U(1)$$ coupling and such couplings are getting weaker at lower energies. So a decent value of the coupling at the GUT scale simply produces a rather small value of the fine-structure constant at low energies (below the electron mass).
We don't say that the fine-structure constant is unnaturally small because the GUT-like theories or, which is better, stringy vacua that we have in mind that may produce electromagnetism including predictions of parameters may produce values like $$1/137$$ easily. But before we knew these calculations, we could have considered the smallness of the fine-structure constant to be a fine-tuning problem.
My broader point is that there are ways to explain the surprise away. More objectively, we can derive the energy of the photon emitted by the Hydrogen atom from a more complete theory, the Standard Model or a GUT theory, and a part of the surprise about the smallness of the photon energy goes away. We would still need some explanation why the electron Yukawa coupling is so tiny and why the electron mass ends up being beneath the proton mass, and lots of other things. But there will always be a part of the explanation (of the low photon energy) of the kind "bound states where objects move much slower than the speed of light" produce small changes of the energy in the spectrum, and similar things. And there will be wisdoms such as "it's normal to get bound states with low speeds, relatively to the speed of light, because couplings often want to be orders of magnitude lower than the maximum values".
The attempts to sell naturalness as some strict, sharp, and objective law are nothing else than the denial of similar explanations in physics – in the future physics but maybe even in the past and established physics. Every explanation like that – a deeper theory from which we derive the current approximate theories; but even a method to organize the concepts and find previously overlooked patterns – change the game. They change our perception of what is natural. To say that one already has the right and final formulae deciding on how much a theory is natural – or right – is equivalent to saying that we won't learn anything important in physics in the future, and I think that it's obviously ludicrous.
The naturalness reasoning is a special example of Bayesian inference applied to probability distributions on the parameter spaces. So we need to emphasize that the conclusions depend on the Bayesian probability distributions. But a defining feature of Bayesian probabilities is that they change – or should be changed, by Bayes' theorem – whenever we get new evidence. It follows that in the future, after new papers, our perception of naturalness of one model or another will unavoidably change and attempts to codify a "current" formula for the naturalness are attempts to sell self-evidently incomplete knowledge as the complete one. More complete theories will tell us more about the values of parameters in the current approximate theories – and they will be able to say whether our probability distributions on the parameter spaces were successful bookmaker's guesses. The answer may be Yes, No, or something in between. In some cases, the guess will be right. In others, it will be wrong but it will look like the bookmaker's bad luck. But there will also be cases in which the bookmakers will be seen to have missed something – making it obvious that in general, the bookmakers' odds are something else than the actual results of the matches! It's the true results of the matches, and not some bookmakers' guesses at some point, that define the truth that physics wants to find.
At the end, I believe that physicists such as the authors of the paper I criticized above are motivated by some kind of "falsifiability wishful thinking". They would like if physics became an Olympic discipline where you may organize a straightforward race and you may declare the winners and losers. Pay \$10 billion for the LHC and it will tell you whether SUSY is relevant for the weak scale physics. But physics is not an Olympic discipline. Physics is the search for the laws of Nature. It is the search for the truth. And a part of the truth is that there are no extremely simple and universal solutions to problems or methods to answer difficult questions.
If a model can describe the observed physics – plus some bumps – with $$\Delta=10$$ instead of $$\Delta=1,000$$ of its similar competitor, I may prefer the former even though the value of $$\Delta$$ obviously won't be the only thing that matters.
But when you compare two extremely different theories or classes of theories – and supersymmetric vs non-supersymmetric models are a rather extreme example – it becomes virtually impossible to define a "calibration" of the naturalness measure that would be good for both different beasts. The more qualitatively the two theories or classes of theories differ, the more different their prior probabilities may be, and the larger is the possible multiplicative factor that you have to add to $$\Delta$$ of one theory to make the two values of $$\Delta$$ comparable.
And suggesting that people should embrace things like $$\Delta\lt 30$$ with some particular definition of $$\Delta$$ is just utterly ludicrous. It's a completely arbitrary bureaucratic restriction that someone extracted out of thin air. No scientist can take it seriously because there is zero evidence that there should be something right about such a particular choice.
If the LHC doesn't find supersymmetry during the $$13\TeV$$ run or the $$14\TeV$$ run, it won't mean that SUSY can't be hiding around the corner that would be accessible by a $$42\TeV$$ or $$100\TeV$$ collider. It's spectacularly obvious that no trustworthy argument that would imply such a thing may exist. If nothing will qualitatively change about the available theoretical paradigms, I would even say that most of the sensible phenomenologists will keep on saying that SUSY is most likely around the corner, despite the fact that similar predictions will have failed.
At least the phenomenologists who tend to pay attention to naturalness will say so. By my assumptions, SUSY will remain the most natural explanation of the weak scale on the market. At the same moment, the naturalness-avoiding research – I also mean the anthropic considerations but not only anthropic considerations – will or should strengthen. But up to abrupt flukes, all these developments will be gradual. It can't be otherwise. When experiments aren't creating radical, game-changing new data, there can't be any game-changing abrupt shift in the theoretical opinions, either.
## September 23, 2015
### astrobites - astro-ph reader's digest
When Stars Align
Title: Lens Masses and Distances from Microlens Parallax and Flux
Authors: J. C. Yee
First Author’s Institution: Harvard-Smithsonian Center for Astrophysics, Cambridge, MA
Status: Submitted to The Astrophysical Journal Letters
When stars were divine, and their journeys across the heavens foretold the events to unfold on our humble terrestrial sphere, the alignments of stars were studiously marked. They signaled the rise of new dynasties, a lucky windfall, ill-fated love. With the passage of a few millennia (and the realization that the wandering stars were planets), our modern sensibilities have been honed to instinctually interpret the apparent crossing of stellar paths as just a happy but natural coincidence with no deeper significance. But perhaps mistakenly so. For when the stars align, you just might catch a glimpse of things otherwise invisible: binary stars in wide orbits, isolated black hole hermits, or the abandoned (or unruly) free-floating planet far from the star around which it was born.
These invisible wanderers are quite literally brought to light through a unique sequence of events that occurs when two celestial bodies align. The force that orchestrates the event? Gravity. Black holes have gotten much fame for their gravitational brawn, which grants them the abilities to warp space and time and to bend light. Such powers, however, are actually not limited to black holes alone—they’re bequested on anything with mass. Your everyday celestial body—say, a star or planet—can do precisely the same. The share of the limelight that black holes have been given in this regard is fairly earned, however, as the ease with which you can see the distortions caused by a massive object scales with its mass. Such objects can act as a lens by virtue of their spheroidal shapes, focusing the stray light beams passing from behind into a distorted image. This process is known as gravitational lensing.
For objects as small as stars, you can’t see these images. The images would be tiny—about a million times smaller than the angular diameter of the Moon—hence the name of this class of gravitational lensing events, microlensing. But these minuscule images can have a disproportionately large detectable effect. When a star crosses paths with another, you’d observe the background star to brighten drastically—as much as a factor of a 1000!—then dim. The two stars don’t have to pass directly in front of each other, but the closer they do, the more the light from the star behind (the “source” star) will be focused. Maximal brightening occurs when the source and the lens are at exactly the same position, a special point called the caustic. Over weeks to months, a distant observer would note a single brightening, then dimming of the star. If the lensed star had any companions—a fairly likely scenario, as stars often come in pairs, and most (if not all!) are believed to have planets—its caustic can morph from a point into a series of closed curves (see Figure 1). If the source star approached or crossed these curves, we’d observe additional brief spikes in light. Depending on how the mass of the companion compares to the lensed star, these spikes can be as short as a day (for low mass companions such as planets)—a real challenge for planet hunters searching for microlensed systems.
Figure 1. The geometry of a a microlensing event. The path of the background star (the “source” of light) is shown by thin gray curve; the arrow shows the direction it moved along this line. The foreground lensing object is a binary system, likely a brown dwarf (M1) with a planet (M2). The background is shaded in different shades of gray to show how much the binary could cause the background star to brighten (see Figure 2 for what was observed). The dark black curves denote the “caustics” of the binary lens: when the background star crosses a caustic, it momentarily becomes infinitely bright if the background star was a point (which is unrealistic—we know stars have finite sizes!). Figure from Han et al. 2013.
Figure 2. A microlensing lightcurve. The lightcurve (brightness over time, here in days) observed for the system shown in Figure 1. The two peaks occur when the background star crosses the caustic (which, as you can see from Figure 1, occurs twice). Figure from Han et al. 2013.
Microlensing events, however, lack one piece of information prized by astronomers—masses. The lens mass affects how long a microlensing event lasts. The duration of a microlensing event is easy to measure, but it also depends on three other things: how far away the lens is, how far the source is, and how fast they’re moving relative to each other. Thus in order to derive the mass of the lens, we need to determine the other three. The distance of the source is easy—typical sources are in the Galactic bulge, the concentration of stars at the center of our galaxy, which is a well known distance away (about 8 kpc).
The author of today’s paper suggest a two-step process to determine the remaining three unknowns. The first is to make two microlensing observations of the same event, but at different places (and thus different angles). Such a “microlens parallax” measurement—which has just recently become possible to obtain for many microlens events due to a new campaign to search for space-based microlensing events with the Spitzer Space Telescope—allows you to reduce the unknowns to the lens mass and distance. This leaves you with the classic problem of having a single equation with two unknowns, for which there are an infinity of permitted combinations.
Figure 3. Disentangling the mass of the lensing star. If you can observe a microlensing event from two different angles, you can derive a mass/magnitude-distance relation (black; the dashed line denotes the uncertainty). Measuring the flux from the lensing star also produces a magnitude-distance relation (magenta). The place where the two lines cross gives the mass and distance of the lens. In the special case in which you have a binary lens, the size of the source star affects how bright it gets, which allows you to derive another mass-distance relation. Figure from today’s paper.
The final key to the puzzle? The flux from the lens, if it’s bright enough. If the lens is a star, its flux allows us to measure how far away it is, given that we know how luminous it is. Since we don’t often know how luminous a given object is, this yields a magnitude-distance relationship. Based on our understanding of stars, the mass-distance relationship obtained from a microlens parallax measurement can be converted into a second (and very different!) magnitude-distance relationship (see Figure 3). Whatever magnitude-distance combination that’s permissible by both relationships gives you the distance to the lens—which finally allows you to solve for the mass of the lens.
And there you have it. It may seem like a long and difficult process to obtain the mass of a fleetingly visible object, but these mass measurements will help us to understand planets and stars of our galaxy that are currently unreachable by another means. With additional Spitzer microlensing campaigns—the first of which is already returning a treasure trove of results—as well as the revamped Kepler mission, K2, and the upcoming mission WFIRST, space-based microlening surveys may become routine. It’s an exciting time for microlensing—many new discoveries await!
Cover image: A map of the amount of brightening you’d see if a distant star passed behind one of the stars in a equal-mass wide binary. The black curves denote the lens’s caustic—if the background star crossed this curve, would momentarily become infinitely bright if it was a point source. The path of the distant background (source) star would appear as a line across the image. You can predict the lightcurve of the microlensing event by plotting the brightening along the source star’s path. Figure from Han & Gaudi 2008.
### Symmetrybreaking - Fermilab/SLAC
Muon g-2 magnet successfully cooled down and powered up
It survived a month-long journey over 3200 miles, and now the delicate and complex electromagnet is well on its way to exploring the unknown.
Two years ago, scientists on the Muon g-2 experiment successfully brought a fragile, expensive and complex 17-ton electromagnet on a 3200-mile land and sea trek from Brookhaven National Laboratory in New York to Fermilab in Illinois. But that was just the start of its journey.
Now, the magnet is one step closer to serving its purpose as the centerpiece of an experiment to probe the mysteries of the universe with subatomic particles called muons. This week, the ring—now installed in a new, specially designed building at Fermilab—was successfully cooled down to operating temperature (minus 450 degrees Fahrenheit) and powered up, proving that even after a decade of inactivity, it remains a vital and viable scientific instrument.
Getting the electromagnet to this point took a team of dedicated people more than a year, and the results have that team breathing a collective sigh of relief. The magnet was built at Brookhaven in the 1990s for a similar muon experiment, and before the move to Fermilab, it spent more than 10 years sitting in a building, inactive.
“There were some questions about whether it would still work,” says Kelly Hardin, lead technician on the Muon g-2 experiment. “We didn’t know what to expect, so to see that it actually does work is very rewarding.”
Moving the ring from New York to Illinois cost roughly 10 times less than building a new one. But it was a tricky proposition—the 52-foot-wide, 17-ton magnet, essentially three rings of aluminum with superconducting coils inside, could not be taken apart, nor twisted more than a few degrees without causing irreparable damage.
Scientists sent the ring on a fantastic voyage, using a barge to bring it south around Florida and up a series of rivers to Illinois. A specially designed truck gently drove it the rest of the way to Fermilab.
The Muon g-2 experiment plans to use the magnet to build on the Brookhaven experiment but with a much more powerful particle beam. The experiment will trap muons in the magnetic field and use them to detect theoretical phantom particles that might be present, impacting the properties of the muons. But to do that, the team had to find out whether the machine could generate the needed magnetic field.
The magnet was moved into its own building on the Fermilab site. Over the past year, workers took on the painstaking task of reassembling the steel base. Two dozen 26-ton pieces of steel—and a dozen 11-ton pieces—had to be maneuvered into place with tremendous precision.
“It was like building a 750-ton Swiss watch,” says Chris Polly, project manager for the experiment.
While that assembly was taking place, other members of the team had to completely replace the control system for the magnet, redesigning it from scratch. Del Allspach, the project engineer, and Hogan Nguyen, one of the primary managers of the ring, oversaw much of this effort, as well as the construction of the infrastructure (helium lines, power conduits) needed before the ring could be cooled and powered.
“That work was very challenging,” Nguyen says. “We had to stay within very strict tolerances for the alignment of the equipment.”
The tightest of those tolerances was 10 microns. For comparison, the width of a human hair is 75 microns. A red blood cell is about 5 microns across.
While assembling the components around the ring, the team also tracked down and sealed a significant helium leak, one that had been previously documented at Brookhaven. Hardin says that the team was relieved to discover that the leak was in an area that could be accessed and fixed. The successful cool-down proved that the leak had been plugged.
“That’s where the big relief comes in,” says Hardin. “We had a good team, and we worked together well.”
Bringing the ring down to its operating temperature of minus 450 degrees Fahrenheit required cooling it with a helium refrigeration system and liquid nitrogen for more than two weeks. Polly noted that this was a tricky process, since the magnet as a whole shrank by at least an inch as it cooled down. This could have damaged the delicate coils inside if it was not done slowly.
Once cooling was complete, the ring had to be powered with 5300 amps of current to produce the magnetic field. This was another slow process, with technicians easing the ring up by less than 2 amps per second and stopping every 1000 amps to check the system.
“It proves we started with a good magnet,” Allspach says. “It had been off for more than a decade, then moved across the country, installed, cooled and powered. I’m very happy to be at this point. It’s a big success for all of us.”
The next step for the magnet is a long process of “shimming,” or adjusting the magnetic field to within extraordinarily small tolerances. Fermilab is in the process of constructing a beamline that will provide muons to the magnet, and scientists expect to start measuring those muons in 2017.
For Nguyen, that step—handing the magnet off to early-career scientists, who will help carry out the experiment—is exciting. One of the thrills of the process, he says, was watching these younger members of the team learn and grow as the experiment took shape.
“I can’t wait to see these younger people get to control this beautiful magnet,” he says.
Like what you see? Sign up for a free subscription to symmetry!
## September 22, 2015
### Lubos Motl - string vacua and pheno
A story on Nima Arkani-Hamed
LHC, vaguely related: ALICE confirms the CPT symmetry
Natalie Wolchover wrote a rather long article
Visions of Future Physics
about Nima Arkani-Hamed for the Quanta Magazine. You may read lots of stuff about Nima's life and career, his personality, what he considers to be his weaknesses etc.
There is also a big section about his plans to lead the Chinese nation – that has hired him – to build a new collider that is about as big as the Milky Way. ;-)
Some thoughts about the future of physics may be seen everywhere in the article.
TRF blog posts rarely follow the template of "linker not a thinker" but this one is surely one of those rare exceptions! ;-) I could write lots of things about Nima but the insane migration vote and other things have exhausted me for today.
### Symmetrybreaking - Fermilab/SLAC
Do protons decay?
Is it possible that these fundamental building blocks of atoms have a finite lifetime?
The stuff of daily existence is made of atoms, and all those atoms are made of the same three things: electrons, protons and neutrons.
Protons and neutrons are very similar particles in most respects. They’re made of the same quarks, which are even smaller particles, and they have almost exactly the same mass.
Yet neutrons appear to be different from protons in an important way: They aren’t stable. A neutron outside of an atomic nucleus decays in a matter of minutes into other particles.
A free proton is a pretty common sight in the cosmos. Much of the ordinary matter (as opposed to dark matter) in galaxies and beyond comes in the form of hydrogen plasma, a hot gas made of unattached protons and electrons. If protons were as unstable as neutrons, that plasma would eventually vanish.
But that isn’t happening. Protons—whether inside atoms or drifting free in space—appear to be remarkably stable. We’ve never seen one decay.
However, nothing essential in physics forbids a proton from decaying. In fact, a stable proton would be exceptional in the world of particle physics, and several theories demand that protons decay.
If protons are not immortal, what happens to them when they die, and what does that mean for the stability of atoms?
#### Following the rules
Fundamental physics relies on conservation laws: certain quantities that are preserved, such as energy, momentum and electric charge. The conservation of energy—combined with the famous equation E=mc2—means that lower-mass particles can’t change into higher-mass ones without an infusion of energy. Combining conservation of energy with conservation of electric charge tells us that electrons are probably stable forever: No lower-mass particle with a negative electric charge exists, to the best of our knowledge.
Protons aren’t constrained the same way: They are more massive than a number of other particles, and the fact that they are made of quarks allows for several possible ways for them to die.
For comparison, a neutron decays into a proton, an electron and a neutrino. Both energy and electric charge are preserved in the decay: A neutron is a wee bit heftier than a proton and electron combined, and the positively-charged proton balances out the negatively-charged electron to make sure the total electric charge is zero both before and after the decay. (The neutrino—or technically an antineutrino, the antimatter version—is necessary to balance other things, but that’s a story for another day.)
Because atoms are stable and we’ve never seen a proton die, perhaps protons are intrinsically stable. However, as Kaladi Babu of Oklahoma State University points out, there’s no “proton conservation law" like charge conservation to preserve a proton.
“You ask this question: What if the proton decays?” he says. “Does it violate any fundamental principle of physics? And the answer is no.”
#### No GUTs, no glory
So if there’s no rule against proton decay, is there a reason scientists expect to see it? Yes. Proton decay is the strongest testable prediction of several grand unified theories, or GUTs.
GUTs unify three of the four fundamental forces of nature: electromagnetism, the weak force and the strong force. (Gravity isn’t included because we don’t have a quantum theory for it yet.)
The first GUT, proposed in the 1970s, failed. Among other things, it predicted a proton lifetime short enough that experiments should have seen decays when they didn’t. However, the idea of grand unification was still valuable enough that particle physicists kept looking for it. (You might say they had a GUT feeling. Or you might not.)
“The idea of grand unification is really beautiful and explains many things that seem like bizarre coincidences,” says theorist Jonathan Feng, a physicist at the University of California, Irvine.
Feng is particularly interested in a GUT that involves Supersymmetry, a brand of particle physics that potentially could explain a wide variety of phenomena, including the invisible dark matter that binds galaxies together. Supersymmetric GUTs predict some new interactions that, as a pleasant side effect, result in a longer lifetime for protons, yet still leave proton decay within the realm of experimental detection. Because of the differences between supersymmetric and non-supersymmetric GUTs, Feng says the proton decay rate could be the first real sign of Supersymmetry in the lab.
However, Supersymmetry is not necessary for GUTs. Babu is fond of a GUT that shares many of the advantages of the supersymmetric versions. This GUT’s technical name is SO(10), named because its mathematical structure involves rotations in 10 imaginary dimensions. The theory includes important features absent from the Standard Model such as neutrino masses, and might explain why there is more matter than antimatter in the cosmos. Naturally, it predicts proton decay.
#### The search for proton decay
Much rests on the existence of proton decay, and yet we’ve never seen a proton die. The reason may simply be that protons rarely decay, a hypothesis borne out by both experiment and theory. Experiments say the proton lifetime has to be greater than about 1034 years: That’s a 1 followed by 34 zeroes.
For reference, the universe is only 13.8 billion years old, which is roughly a 1 followed by 10 zeros. Protons on average will outlast every star, galaxy and planet, even the ones not yet born.
The key phrase in that last sentence is “on average.” As Feng says, it’s not like “every single proton will last for 1034 years and then at 1034 years they all boom! poof! in a puff of smoke, they all disappear.”
Because of quantum physics, the time any given proton decays is random, so a tiny fraction will decay long before that 1034-year lifetime. So, “what you need to do is to get a whole bunch of protons together,” he says. Increasing the number of protons increases the chance that one of them will decay while you’re watching.
The second essential step is to isolate the experiment from particles that could mimic proton decay, so any realistic proton decay experiment must be located deep underground to isolate it from random particle passers-by. That’s the strategy pursued by the currently operating Super-Kamiokande experiment in Japan, which consists of a huge tank with 50,000 tons of water in a mine. The upcoming Deep Underground Neutrino Experiment, to be located in a former gold mine in South Dakota, will consist of 40,000 tons of liquid argon.
Because the two experiments are based on different types of atoms, they are sensitive to different ways protons might decay, which will reveal which GUT is correct … if any of the current models is right. Both Super-Kamiokande and DUNE are neutrino experiments first, Feng says, “but we're just as interested in the proton decay possibilities of these experiments as in the neutrino aspects.”
After all, proton decay follows from profound concepts of how the cosmos fundamentally operates. If protons do decay, it’s so rare that human bodies would be unaffected, but not our understanding. The impact of that knowledge would be immense, and worth a tiny bit of instability.
Like what you see? Sign up for a free subscription to symmetry!
### Quantum Diaries
Neutrinoless Double Beta Decay and the Quest for Majorana Neutrinos
Neutrinos have mass but are they their own antimatter partner?
The fortunate thing about international flights in and out of the US is that, likely, it is long enough for me to slip in a quick post. Today’s article is about the search for Majorana neutrinos.
Mexico City Airport. Credit: R. Ruiz
Neutrinos are a class of elementary particles that do not carry a color charge or electric charge, meaning that they do not interact with the strong nuclear force or electromagnetism. Though they are known to possess mass, their masses are so small experimentalists have not yet measured them. We are certain that they have mass because of neutrino oscillation data.
Neutrinos in their mass eigenstates, which are a combination of their flavor (orange, yellow, red) eigenstates. Credit: Particle Zoo
This history of neutrinos is rich. They were first proposed as a solution to the mystery of nuclear beta (β)-decay, a type of radioactive decay. Radioactive decay is the spontaneous and random disintegration of an unstable nucleus in an atom into two or more longer-lived, or more stable, nuclei. A free neutron (which is made up of two down-type quarks, one up-type quark, and lots of gluons holding everything together) is unstable and will eventually undergo radioactive decay. Its half-life is about 15 minutes, meaning that given a pile of free neutrons, roughly half will decay by the end of those 15 minutes. A neutron in a bound system, for example in a nucleus, is much more stable. When a neutron decays, a down quark will become an up-type quark by radiating a (virtual) W- boson. Two up-type quarks and a down-type quark are what make a proton, so when a neutron decays, it turns into a proton and a (virtual) W- boson. Due to conservation of energy, the boson is very restricted into what it can decay; the only choice is an electron and an antineutrino (the antiparticle partner of a neutrino). The image below represents how neutrons decay.
Since neutrinos are so light, and interact very weakly with other matter, when neutron decay was first observed, only the outgoing electron and proton (trapped inside of a nucleus) were ever observed. As electrons were historically called β-rays (β as in the Greek letter beta), this type of process is known as nuclear beta-decay (or β-decay). Observing only the outgoing electron and transmuted atom but not the neutrino caused much confusion at first. The process
Nucleus A → Nucleus B + electron
predicts, by conservation of energy and linear momentum, that the electron carries the same fixed amount of energy in each and every decay. However, outgoing electrons in β-decay do not always have the same energy: very often they come out with little energy, but other times they come out with a lot of energy. The plot below is an example distribution of how often (vertical axis) an electron in β-decay will be emitted carrying away a particular amount of energy (horizontal axis).
Electron spectrum in beta decay: Number of electrons/beta-particles (vertical axis) versus energy/kinetic energy (KE) or electrons (horizontal axis). Credit: R. Church
Scientists at the time, including Wolfgang Pauli, noted that the distribution was similar to the decay process where a nucleus decays into three particles instead of two:
Nucleus A → Nucleus B + electron + a third particle.
Furthermore, if the third particle had no mass, or at least an immeasurably small mass, then the energy spectrum of nuclear β-decay could be explained. This mysterious third particle is what we now call the neutrino.
One reason for neutrinos being so interesting is that they are chargeless. This is partially why neutrinos interact very weakly with other matter. However, since they carry no charge, they are actually nearly indistinguishable from their antiparticle partners. Antiparticles carry equal but opposite charges of their partners. For example: Antielectrons (or positrons) carry a +1 electric charge whereas the electron carries a -1 electric charge. Antiprotons carry a -1 electric charge were as protons carry a +1 electric charge. Etc. Neutrinos carry zero charge, so the charges of antineutrinos are still zero. Neutrinos and antineutrinos may in fact differ thanks to some charge that they both possess, but this has not been verified experimentally. Hence, it is possible that neutrinos and antineutrinos are actually the same particle. Such particles are called Majorana particles, named after the physicist Ettore Majorana, who first studied the possibility of neutrinos being their own antiparticles.
The Majorana nature of neutrinos is an open question in particle physics. We do not yet know the answer, but this possibility is actively being studied. One consequence of light Majorana neutrinos is the phenomenon called neutrinoless double β-decay (or 0νββ-decay). In the same spirit as nuclear β-decay (discussed above), double β-decay is when two β-decays occur simultaneously, releasing two electrons and two antineutrinos. Double β-decay proceeds through the following diagram (left):
Double beta decay (L) and neutrinoless double beta decay (R). Credit: CANDLES experiment
Neutrinoless double β-decay is a special process that can only occur if neutrinos are Majorana. In this case, neutrinos and antineutrinos are the same and we can connect the two outgoing neutrino lines in the double β-decay diagram, as shown above. In 0νββ-decay, a neutrino/antineutrino is exchanged between the two decaying neutrons instead of escaping like the electrons.
Having only four particles in the final state for 0νββ-decay (two protons and two electrons) instead of six in double β-decay (two protons, two electrons, and two neutrinos) has an important effect on the kinematics, or motion, of the electrons, i.e., the energy and momentum distributions. In double β-decay:
Nucleus A → Nucleus B + electron + electrons + neutrino + neutrino
the two protons are so heavy compared to the energy released by the decaying neutrons that there is hardly any energy to give them a kick. So for the most part, the protons remain at rest. The neutrinos and electrons then shoot off in various directions and various energies. In neutrinoless double β-decay:
Nucleus A → Nucleus B + electron + electrons
since the remnant nucleus are still roughly at rest, the electron pair take away all the remaining energy allowed by energy conservation. There are no neutrinos to take energy away from the electrons and broaden their distribution. This difference between ββ-decay and 0νββ-decay is stark, particularly in the likelihood of how often (vertical axis) the electrons in β-decay will be emitted carrying away a particular amount of energy (horizontal axis). As seen below, the electron energy distribution in double β-decay is very wide and is centered around smaller energies, whereas the 0νββ-decay is very narrow and is peaked at the maximum of the 2νββ-decay curve.
For double beta decay (blue) and neutrinoless double beta decay (red peak), the electron spectrum in beta decay: Number of electrons/beta-particles (vertical axis) versus energy/kinetic energy (KE) or electrons (horizontal axis). Credit: COBRA experiment
Unfortunately, searches for 0νββ-decay have not yielded any evidence for Majorana neutrinos. This could be because neutrinos are not their own antiparticle, in which case we will never observe the decay. Alternatively, it could be the case that current experiments are simply not yet sensitive to how rarely 0νββ-decay occurs. The rate at which the decay occurs is proportional to the mass of the intermediate neutrino: a zero neutrino mass implies a zero 0νββ-decay rate.
Experiments such as KATRIN hope to measure the mass of neutrinos in the next coming years. If a mass measurement is obtained, it would be a very impressive and impacting result. Furthermore, definitive predictions for 0νββ-decay can be made, at which point the current generation of experiments, such as MAJORANA, COURE, and EXO will be in a mad dash for testing whether or not neutrinos are indeed their own antiparticle.
Lower view of CUORE Cryostat. Credit: CUORE Experiment
Inside view of CUORE Cryostat. Credit: CUORE Experiment
Happy Hunting and Happy Colliding,
Richard Ruiz (@BraveLittleMuon)
PS Much gratitude to Yury Malyshkin, Susanne Mertens, Gastón Moreno, and Martti Nirkko for discussions and inspiration for this post. Cheers!
Update 2015 September 25: Photos of the Cryogenic Underground Observatory for Rare Events (CUORE) experiment have been added. Much appreciate to QD-er Laura Gladstone.
## September 21, 2015
### Tommaso Dorigo - Scientificblogging
Statistics Lectures For Physicists In Traunkirchen
The challenge of providing Ph.D. students in Physics with an overview of statistical methods and concepts useful for data analysis in just three hours of lectures is definitely a serious one, so I decided to take it as I got invited to the "Indian Summer School" in the pleasant lakeside town of Traunkirchen, Austria.
## September 20, 2015
### The n-Category Cafe
The Free Modular Lattice on 3 Generators
The set of subspaces of a vector space, or submodules of some module of a ring, is a lattice. It’s typically not a distributive lattice. But it’s always modular, meaning that the distributive law
$a\vee \left(b\wedge c\right)=\left(a\vee b\right)\wedge \left(a\vee c\right) a \vee \left(b \wedge c\right) = \left(a \vee b\right) \wedge \left(a \vee c\right) $
holds when $a\le ba \le b$ or $a\le ca \le c$. Another way to say it is that a lattice is modular iff whenever you’ve got $a\le a\prime a \le a\text{'}$, then the existence of an element $xx$ with
$a\wedge x=a\prime \wedge x\phantom{\rule{thickmathspace}{0ex}}\mathrm{and}\phantom{\rule{thickmathspace}{0ex}}a\vee x=a\prime \vee x a \wedge x = a\text{'} \wedge x \; \mathrm\left\{and\right\} \; a \vee x = a\text{'} \vee x $
is enough to imply $a=a\prime a = a\text{'}$. Yet another way to say it is that there’s an order-preserving map from the interval $\left[a\wedge b,b\right]\left[a \wedge b,b\right]$ to the interval $\left[a,a\vee b\right]\left[a,a \vee b\right]$ that sends any element $xx$ to $x\vee ax \vee a$, with an order-preserving inverse that sends $yy$ to $y\wedge by \wedge b$:
Dedekind studied modular lattices near the end of the nineteenth century, and in 1900 he published a paper showing that the free modular lattice on 3 generators has 28 elements.
One reason this is interesting is that the free modular lattice on 4 or more generators is infinite. But the other interesting thing is that the free modular lattice on 3 generators has intimate relations with 8-dimensional space. I have some questions about this stuff.
One thing Dedekind did is concretely exhibit the free modular lattice on 3 generators as a sublattice of the lattice of subspaces of ${ℝ}^{8}\mathbb\left\{R\right\}^8$. If we pick a basis of this vector space and call it ${e}_{1},\dots ,{e}_{8}e_1, \dots, e_8$, he looked at the subspaces
$X=⟨{e}_{2},{e}_{4},{e}_{5},{e}_{8}⟩,\phantom{\rule{1em}{0ex}}Y=⟨{e}_{2},{e}_{3},{e}_{6},{e}_{7}⟩,\phantom{\rule{1em}{0ex}}Z=⟨{e}_{1},{e}_{4},{e}_{6},{e}_{7}+{e}_{8}⟩ X = \langle e_2, e_4, e_5, e_8 \rangle , \quad Y = \langle e_2, e_3, e_6, e_7 \rangle, \quad Z = \langle e_1, e_4, e_6, e_7 + e_8 \rangle $
By repeatedly taking intersections and unions, he built 28 subspaces starting from these three.
This proves the free modular lattice on 3 generators has at least 28 elements. In fact it has exactly 28 elements. I think Dedekind showed this by working out the free modular lattice ‘by hand’ and noting that it, too, has 28 elements. It looks like this:
This picture makes it a bit hard to see the ${S}_{3}S_3$ symmetry of the lattice, but if you look you can see it. (Can someone please draw a nice 3d picture that makes the symmetry manifest?)
If you look carefully here, as Hugh Thomas did, you will see 30 elements! That’s because the person who drew this picture, like me, defines a lattice to be a poset with upper bounds and lower bounds for all finite subsets. Dedekind defined it to be a poset with upper bounds and lower bounds for all nonempty finite subsets. In other words, Dedekind’s kind of lattice has operations $\vee \vee$ and $\wedge \wedge$, while mine also has a top and bottom element. So, Dedekind’s ‘free lattice on 3 generators’ did not include the top and bottom element of the picture here. So, it had just 28 elements.
Now, there’s something funny about how 8-dimensional space and the number 28 are showing up here. After all, the dimension of $\mathrm{SO}\left(8\right)\mathrm\left\{SO\right\}\left(8\right)$ is 28. This could be just a coincidence, but maybe not. Let me explain why.
The 3 subspace problem asks us to classify triples of subspaces of a finite-dimensional vector space $VV$, up to invertible linear transformations of $VV$. There are finitely many possibilities, unlike the situation for the 4 subspace problem. One way to see this is to note that 3 subspaces $X,Y,Z\subseteq VX, Y, Z \subseteq V$ give a representation of the ${D}_{4}D_4$ quiver, which is this little category here:
This fact is trivial: a representation of the ${D}_{4}D_4$ quiver is just 3 linear maps $X\to VX \to V$, $Y\to VY \to V$, $Z\to VZ \to V$, and here we are taking those to be inclusions. The nontrivial part is that indecomposable representations of any Dynkin quiver correspond in a natural one-to-one way with positive roots of the corresponding Lie algebra. The Lie algebra corresponding to ${D}_{4}D_4$ is $\mathrm{𝔰𝔬}\left(8\right)\mathfrak\left\{so\right\}\left(8\right)$, the Lie algebra of the group of rotations in 8 dimensions. This Lie algebra has 12 positive roots. So, the ${D}_{4}D_4$ quiver has 12 indecomposable representations. The representation coming from any triple of subspaces $X,Y,Z\subseteq VX, Y, Z \subseteq V$ must be a direct sum of these indecomposable representations, so we can classify the possibilities and solve the 3 subspace problem!
What’s going on here? On the one hand, Dedekind the free modular lattice on 3 generators shows up as a lattice of subspaces generated by 3 subspaces of ${ℝ}^{8}\mathbb\left\{R\right\}^8$. On the other hand, the 3 subspace problem is closely connected to classifying representations of the ${D}_{4}D_4$ quiver, whose corresponding Lie algebra happens to be $\mathrm{𝔰𝔬}\left(8\right)\mathfrak\left\{so\right\}\left(8\right)$. But what’s the relation between these two facts, if any?
Another way to put the question is this: what’s the relation between the 12 indecomposable representations of the ${D}_{4}D_4$ quiver and the 28 elements of the free modular lattice on 3 generators? Or, more numerogically speaking: what relationship between the numbers 12 and 28 is at work in this business?
Here’s one somewhat wacky guess. The Lie algebra of $\mathrm{𝔰𝔬}\left(8\right)\mathfrak\left\{so\right\}\left(8\right)$ has 12 positive roots, and its Cartan algebra has dimension 4. As usual, the Lie algebra is spanned by positive roots, an equal number of negative roots, and the Cartan subalgebra, so we get
$28=12+12+4 28 = 12 + 12 + 4 $
But I don’t really see how this is connected to anything I’d said previously. In particular, I don’t see why 24 of the 28 elements of the lattice of subspaces generated by
$X=⟨{e}_{2},{e}_{4},{e}_{5},{e}_{8}⟩,\phantom{\rule{thickmathspace}{0ex}}Y=⟨{e}_{2},{e}_{3},{e}_{6},{e}_{7}⟩,\phantom{\rule{thickmathspace}{0ex}}Z=⟨{e}_{1},{e}_{4},{e}_{6},{e}_{7}+{e}_{8}⟩ X = \langle e_2, e_4, e_5, e_8 \rangle , \; Y = \langle e_2, e_3, e_6, e_7 \rangle, \; Z = \langle e_1, e_4, e_6, e_7 + e_8 \rangle $
should be related to roots of ${D}_{4}D_4$.
I think a more sane, non-numerological approach to this network of issues is to take the ${D}_{4}D_4$ quiver representation corresponding to Dedekind’s choice of $X,Y,Z\subseteq {ℝ}^{8}X , Y, Z \subseteq \mathbb\left\{R\right\}^8$, decompose it into indecomposables, and see which positive roots those correspond to. I may try my hand at that in the comments, but I’m really looking for some help here.
## September 18, 2015
### Clifford V. Johnson - Asymptotia
Rearrangements
Just thought I'd share with you a snapshot (click for larger view) of my thinking process concerning my office move. I've been in the same tiny box of an office for 12 years, and quite happy too. For various reasons (mostly to do with one large window with lots of light), over the years I've turned down offers to move to nicer digs... but recently I've decided to make a change (giving up some of the light) and so after much to-ing and fro-ing, it seems that we've settled on where I'm going to go.
Part of the process involved me walking over there (it's an old lab space from several decades ago, hence the sink, which I want to stay) with a tape measure one day and making some notes in my notebook about the basic dimensions of some of the key things, including some of the existing [...] Click to continue reading this post
The post Rearrangements appeared first on Asymptotia.
### ZapperZ - Physics and Physicists
Quantum Cognition?
A lot of researchers and experts in other fields have tried to use various principles in physics in their own field. Economics have tried to invent something called Econophysics, to varying degree of success. And certainly many aspects of biology are starting to incorporate quantum effects.
Quantum mechanics has been used notoriously in many areas, including crackpottish application by the likes of Deepak Chopra etc. without really understanding the underlying physics. I don't know if this falls under the same category, but the news report out of The Atlantic doesn't do it any favor. I'm reading this article on quantum cognition, in which human behavior, and certain unpredictability and irrationality of human behavior, may be attributed to quantum effects!
Take, for example, the classic prisoner’s dilemma. Two criminals are offered the opportunity to rat each other out. If one rats, and the other doesn’t, the snitch goes free while the other serves a three-year sentence. If they both rat, they each get two years. If neither rats, they each get one year. If players always behaved in their own self-interest, they’d always rat. But research has shown that people often choose to cooperate.
Classical probability can’t explain this. If the first player knew for sure that the second was cooperating, it would make most sense to defect. If the first knew for sure that the second was defecting, it would also make most sense to defect. Since no matter what the other player is doing, it’s best to defect, then the first player should logically defect no matter what.
A quantum explanation for why player one might cooperate anyway would be that when one player is uncertain about what the other is doing, it’s like a Schrödinger’s cat situation. The other player has the potential to be cooperating and the potential to be defecting, at the same time, in the first player’s mind. Each of these possibilities is like a thought wave, Wang says. And as waves of all kinds (light, sound, water) are wont to do, they can interfere with each other. Depending on how they line up, the can cancel each other out to make a smaller wave, or build on each other to make a bigger one. If “the other guy’s going to cooperate” thought wave gets strengthened in a player’s mind, he might choose to cooperate too.
So you tell me if that made any sense or if this person has actually understood QM beyond what he read in a pop-science book. First of all, when wave cancellation occurs, it doesn't "make a smaller wave". It makes NO wave at that instant and time. Secondly, this person is espousing the existence of some kind of a "thought wave" that hasn't been verified, and somehow, the thought waves from the two different prisoners overlap each other (this, BTW, can be described via classical wave pictures, so why quantum picture in invoked here?).
But the fallacy comes in the claim that there is no other way to explain why different people act differently here without invoking quantum effects. Unlike physics systems where we can prepare two systems identically, we can find no such thing in human beings (even with twins!). Two different people have different backgrounds and "baggage". We have different ethics, moral standards, etc. You'll never find two identical systems to test this out. That's why we have 9 judges on the US Supreme Court, and they can have wildly differing opinions on the identical issue! So why can't they use this to explain why people react differently under this same situation? Why can't they find the answer via the human psychology rather than invoking QM?
But it gets worse...
The act of answering a question can move people from wave to particle, from uncertainty to certainty. In quantum physics, the “observer effect” refers to how measuring the state of a particle can change the very state you’re trying to measure. In a similar way, asking someone a question about the state of her mind could very well change it. For example, if I’m telling a friend about a performance review I have coming up, and I’m not sure how I feel about it, if she asks me “Are you nervous?” that might get me thinking about all the reasons I should be nervous. I might not have been nervous before she asked me, but after the question, my answer might become, “Well, I am now!”
Of course, this smacks of the crackpottery done in "The Secret". Let's get this straight first of all, especially those who do not have a formal education in QM. There is no such thing as "wave-particle duality" in QM! QM/QFT etc. describe the system via a single, consistent formulation. We don't switch gears going from "wave" to "particle" and back to "wave" to describe things things. So the system doesn't move "from wave to particle", etc. It is the nature of the outcome that most people consider to be "wave-like" or "particle-like", but these are ALL produced by the same, single, consistent description!
The problem I have with this, and many other areas that tried to incorporate QM, is that they often start with the effects, and then say something like "Oh, it looks very much like a quantum effect". This is fine if there is an underlying, rigorous mathematical description, but often, there isn't! You cannot says that an idea is "complimentary" to another idea the same way position and momentum observables are non-commuting. The latter has a very set of rigorous mathematical rules and description. To argue that "... quantum models were able to predict order effects shown in 70 different national surveys... " is not very convincing because in physics, this would be quite unconvincing. It means that there are other factors that come in that are not predictable and can't be accounted for. What is there to argue that these other factors are also responsible for the outcome?
Again, the inability to test this out using identical systems makes it very difficult to be convincing. Human behavior can be irrational and unpredictable. That is know. Rather than considering this to be the result of quantum effects, why not consider this to be the result of a chaotic behavior over time, i.e. all of the various life experiences that an individual had all conspire to trigger the decision that he/she makes at a particular time. The "butterfly effect" in an individual's time line can easily cause a particular behavior at another time. To me, this is as valid of an explanation as any.
And that explanation is purely classical!
Zz.
### Axel Maas - Looking Inside the Standard Model
Something dark on the move
If you browse either through popular science physics or through the most recent publications on the particle physics' preprint server arxiv.org then there is one topic which you cannot miss: Dark matter.
What is dark matter? Well, we do not know. So why do we care? Because we know something is out there, something dark, and its moving. Or, more precisely, it moves stuff around. When we look to the skies and measure how stars and galaxies move, then we find something interesting. We think we know how these objects interact, and how they therefore influence each other's movement. But what we observe does not agree with our expectations. We think we have excluded any possibility that we are overlooking something known, like that there are many more black holes, intergalactic gas and dust, or any of the other particles we know filling up the cosmos. No, it seems there is more out there than we can detect right now directly and have a theory for.
Of course, it can be that we miss something about how stars and galaxies influence each other, and this possibility is also pursued. But actually the simplest explanation is that out there is a new type of matter. A type of matter which does not interact either by electromagnetism or the strong force, because otherwise we would have seen it in experiment. Since there is no interaction with electromagnetism, it does not reflect or emit light, and therefore we cannot see it using optics. Hence the name dark matter. Because it is dark.
It certainly acts gravitationally, since this is how stars and galaxies are influenced. It may still be that it either interacts by the weak interaction or with the Higgs. That is something which is currently investigated in many experiments around the world. Of course, it could also interact with the standard model particles by some unknown force we have yet to discover. This would make it even more mysterious.
Because it is so popular there are many resources on the web which discuss what we already know (or do not know) about dark matter. Rather than repeating that, I will here write why I start to be interested in it. Or at least in some possible types of it. Because dark matter which only interacts by gravitation is not particularly interesting right know, as we will likely not learn much about in the foreseeable future. So I am more interested in such types of dark matter which couple by some other means to the standard model. Until they are excluded by experiments.
If it should interact with the standard model by some new force then this new force will look likely at first just like a modification of the weak interactions and/or of the Higgs. This would be an effective description of it. Given time, we would also figure out the details, but we have not yet.
Thus, as a first shot, I will concentrate on the cases where it it could interact with the weak force or just with the Higgs. Interacting with the weak force is actually quite complicated if it should fulfill all experimental constraints we have. Modifications there, though possible, are thus unlikely. Leaves the Higgs.
Therefore, I would like to see how dark matter could interact with the Higgs. Such models are called Higgs portal models, because the Higgs act as the portal through which we see dark matter. So far, this is also pretty standard.
Now comes the new thing. I have written several times that I work on questions what the Higgs really is. That it could have an interesting self-similar structure. And here is the big deal for me: The presence of dark matter interacting with the Higgs could influence actually this structure. This is similar to what happens with other bound states: The constituents can change their identity, as we investigate in another project.
My aim is now to bring all these three things together: Dark matter, Higgs, and the structure of the Higgs. I want to know whether such a type of dark matter influences the structure of the Higgs, and if yes how. And whether this could have a measurable influence. The other way around is that I would like to know whether the Higgs influences the dark matter in some way. Combining these three things is a rather new idea, and it will be very fascinating to explore it. The best course of action will be to do this by simulating the Higgs together with dark matter. This will be neither simple nor cheap, so this may take a lot of time. I will keep you posted.
## September 17, 2015
### Lubos Motl - string vacua and pheno
What confirms a physical theory?
Guest blog by Richard Dawid, LMU Munich,
Munich Center for Mathematical Philosophy
Thanks, Lubos, for your kind invitation to write a guest blog on non-empirical theory confirmation (which I recently discussed in the book "String Theory and the Scientific Method", CUP 2013). As a long-time follower of this blog – who, I may add, fervently disagrees with much of its non-physical content – I am very glad to do so.
Fundamental physics today faces an unusual situation. Virtually all fundamental theories that have been developed during the last four decades still lack conclusive empirical confirmation. While the details with respect to empirical support and prospects for conclusive empirical testing vary from case to case, this general verdict applies to theories like low energy supersymmetry, grand unified theories, cosmic inflation or string theory.
The fact that physics is characterised by decades of continuous work on empirically unconfirmed theories turns the non-empirical assessment of those theories' chances of being viable into an important element of the scientific process. Despite the scarcity of empirical support, many physicists working on the above-mentioned theories have developed substantial trust in their theories' viability based on an overall assessment of the physical context and the theories' qualities.
In particular in the cases of string theory and cosmic inflation, that trust has been harshly criticised by others as unjustified and incompatible with basic principles of scientific reasoning. The critics argue that empirical confirmation is the only possible scientific basis for holding a theory viable. Relying on other considerations in their eyes amounts to abandoning necessary scientific restraint and leads to a relapse into pre-scientific modes of reasoning.
The critics' wholesale condemnation of non-empirical reasons for having trust in a theory's viability is based on an understanding of scientific confirmation that has dominated the philosophy of science throughout the 20th century. It can be found for example in classical hypothetico-deductivism and in most presentations of Bayesian confirmation theory. It consists of two basic ideas. First, theory confirmation is taken to be the only scientific method of generating trust in a theory's viability. Second, it is assumed that a theory can be confirmed only by empirical data that is predicted by that theory.
In my recent book, I argue that this understanding is inadequate. Not only does it prevent an adequate understanding of theory assessment in contemporary high energy physics and cosmology. It does not give an accurate understanding of the research process in 20th century physics either.
I propose an understanding of theory confirmation that is broader than the canonical understanding. My position is in agreement with the canonical understanding in assuming that our concept of confirmation should cover all observation-based scientifically supported reasons for believing in a theory's viability. I argue, however, that it is misguided and overly restrictive to assume that observations that instil trust in a theory must always be predicted by that theory. In fact, we can find cases of scientific reasoning where this assumption is quite obviously false.
A striking example is the Higgs hypothesis. High energy physicists were highly confident that some kind of Higgs particle (be it SM, SUSY, constituent or else) existed long before a Higgs particle was discovered in 2012. Their confidence was based on an assessment of the scientific context and their overall experience with predictive success in physics. Even before 2012, it would have been difficult to deny the scientific legitimacy of their assessment. It would be even more implausible today, after their assessment has been vindicated at the LHC.
Clearly, there is an important difference between the status of the Higgs hypothesis before and after its successful empirical testing in 2011/2012. That difference can be upheld by distinguishing two different kinds of confirmation. Empirical confirmation is based on the empirical testing of the theory's predictions. Non-empirical confirmation is based on observations that are not of the kind that can be predicted by the confirmed theory. Conclusive empirical confirmation is more powerful than non-empirical confirmation. But non-empirical confirmation can on its own provide fairly strong reasons for believing in a theory's viability.
At this point, I should explain why I use the term viability rather than truth and what I mean by it. The truth of a theory is a difficult concept. Often, physicist know that a given theory is not strictly speaking true (for example because it is not consistent beyond a certain regime) but that does not take anything from the theory's value within the regime where it works. What is more important that truth is a theory's capability of making correct predictions in a given regime. Roughly speaking, I call a theory viable at a given scale if can reproduce the empirical data at that scale.
What are the observations that generate non-empirical confirmation in physics today? Three main kinds of argument, each relying on one type of observation can be found when looking at the research process. They don't work in isolation but only acquire strength in conjunction.
The first and most straightforward argument is the no alternatives argument (NAA). Physicists have trust in the viability of a theory that solves a specific physical problem based on the observation that, despite extensive efforts to do so, no alternative theory that solves this problem has been found.
Trust in the Higgs hypothesis before empirical confirmation was crucially based on the fact that the Higgs hypothesis was the only known convincing theory for generating the observed mass spectrum of elementary particles within the empirically well-confirmed framework of gauge field theory. In the same vein, trust in string theory is based on the understanding that there is no other known approach for a coherent theory of all fundamental interactions.
On its own, NAA has one obvious weakness: scientists might just have not been clever enough to find the alternatives that do exist. In order to take NAA seriously, one therefore needs a method of assessing whether or not scientists in the field typically are capable of finding the viable theories. The argument of meta-inductive inference from predictive success in the research field (MIA) can provide that assessment. Scientists observe that, in similar contexts, theories without known alternatives turned out to be successful once empirically tested.
Both, the pre-discovery trust in the Higgs hypothesis and today's trust in string theory gain strength from the observation that standard model predictions were empirically highly successful. One important caveat remains, however. It often seems questionable whether previous examples of predictive success and the new theory under scrutiny are sufficiently similar to justify the use of MIA. In some cases, for example in the Higgs case, the concept under scrutiny and previous examples of predictive success are so closely related to each other that the deployment of MIA looks fairly unproblematic. NAA and MIA in conjunction thus were sufficient in the Higgs case for generating a high degree of trust in the theory. In other cases, like string theory, the comparison with earlier cases of predictive success is more contentious.
In many respects, string theory does constitute a direct continuation of the high energy physics research program that was so successful in the case of the standard model. But its evolution differs substantially from that of its predecessors. The far higher level of complexity of the mathematical problems involved makes it far more difficult to approach a complete theory. This higher level of complexity can throw the justification for a deployment of MIA into doubt. Therefore, it is important to provide a third argument indicating that, despite the high complexity of the theory in question, scientists are still capable of finding their way through the 'conceptual labyrinth' they face. The argument that can be used to that end is the argument from unexpected explanatory interconnections (UEA).
The observation on which UEA is based is the following: scientists develop a theory in order to solve a specific theory. Later it turns out that this theory also solves other conceptual problems it was not developed to solve. This is taken as an indicator of the theory's viability. UEA is the theory-based 'cousin' of the well known data-based argument of novel predictive success. The latter relies on the observation that a theory that was developed based on a given set of empirical data correctly predicts new data that had not entered the process of theory construction. UEA replaces novel empirical prediction by unexpected explanation.
The most well-known example of UEA in the context of string theory is based on the theory's role in understanding black hole entropy. String theory was proposed as a universal theory of all interactions because it was understood to imply the existence of a graviton and suspected to be capable of avoiding the problem of non-renormalizability faced by field theoretical approaches to quantum gravity. Closer investigations of the theory's structure later revealed that - at least in special cases - it allowed for the full derivation of the macro-physical black hole entropy law from micro-physical stringy structure. Considerations about black hole entropy, however, had not entered the construction of string theory. String physics offers a considerable number of unexpected explanatory interconnections that allow for the deployment of UEA. Arguably, many string physicists consider UEA type arguments the most important reason for having trust in their theory.
NAA, MIA and UEA are applicable in a wide range of cases in physics. Their deployment is by no means confined to empirically unconfirmed theories. NAA and MIA play a very important role in understanding the significance of empirical theory confirmation. The continuity between non-empirical confirmation and the assessment of empirical confirmation based on NAA and MIA can be seen nicely by having another look at the example of the Higgs discovery.
As argued above, the Higgs hypothesis was believed before 2012 based on NAA and MIA. But only the empirical discovery of a Higgs particle implied that all calculations of the background for future scattering experiments had to account for Higgs contributions. That implication is based on the fact that the discovery of a particle in a specific experimental context is taken to be a reliable basis for having trust in that particle's further empirical implications. But why is that so? It relies on the very same types of consideration that had generated trust in the Higgs hypothesis already prior to discovery. First, no alternative theoretical conception is available that can account for the measured signal without having those further empirical implications (NAA). And second, in comparable cases of particle discoveries in the past trust in the particle's further empirical implications was mostly vindicated by further experimentation (MIA).
Non-empirical confirmation in this light is no new mode of reasoning in physics. Very similar lines of reasoning have played a perfectly respectable role in the assessment of the conceptual significance of empirical confirmation throughout the 20th century. What has changed is the perceived power of non-empirical considerations already prior to empirical testing of the theory.
While NAA, MIA and UEA are firmly rooted in the history of physical reasoning, string theory does add one entirely new kind of argument that can contribute to the strength of non-empirical confirmation. String theory contains a final theory claim, i.e. the claim that, if string theory is a viable theory at its own characteristic scale, it won't ever have to be superseded by an empirically distinguishable new theory. Future theoretical conceptualization in that case would be devoted to fully developing the theory from the basic posits that are already known rather than to searching for new basic posits that are emprically more adequate. Though the character of string theory's final theory claim is not easy to understand from a philosophical perspective, it constitutes an interesting new twist to the question of non-empirical confirmation and may shed new light on the epistemic status of string theory.
For the remainder of this text, though, I want to confine my analysis to the role of the three 'classical' arguments NAA, MIA and UEA. Let me first address an important general point. In order to be convincing, theory confirmation must not be a one way street. If a certain type of observation has the potential to confirm a theory, it must also have the potential to dis-confirm it. Empirical confirmation trivially fulfils that condition: for any set of empirical data that agrees with a theory's prediction, there are others that disagree with it and therefore, if actually measured, would dis-confirm the theory.
NAA, MIA and UEA fulfil the stated condition as well. The observation that no alternatives to a theory have been found might be overridden by future observations that scientists do find alternatives. That later observation would reduce the trust in the initial theory and therefore amount to that theory's non-empirical dis-confirmation. Likewise, an observed trend of predictive success in a research field could later be overridden by a series of instances where a theory that was well trusted on non-empirical grounds turned out to disagree with empirical tests once they became possible. In the case of UEA, the observation that no unexpected explanatory interconnections show up would be taken to speak against a theory's viability. And once unexpected interconnections have been found, it could still happen that a more careful conceptual analysis reveals them to be the result of elementary structural characteristics of theory building in the given context that are not confined to the specific theory in question. To conclude, the three non-empirical arguments are not structurally biased in favour of confirmation but may just as well provide indications against a theory's viability.
Next, I briefly want to touch a more philosophical level of analysis. Empirical confirmation is based on a prediction of the confirmed theory that agrees with an observation. In the case of non-empirical confirmation, to the contrary, the confirming observations are not predicted by the theory. How can one understand the mechanism that makes those observations confirm the theory?
It turns out that an element of successful prediction is involved in non-empirical confirmation as well. That element, however, is placed at the meta-level of understanding the context of theory building. More specifically, the claim that is tested at the meta-level is a claim on the spectrum of possible scientific alternatives to the known theory. NAA, MIA and UEA all support the meta-level hypothesis that the spectrum of unconceived scientific alternatives to the theory in question is very limited. That implication can indeed be directly inferred from the fact that the metalevel hypothesis increases the probability of the observations on which NAA, MIA and UEA are based. So, at the metalevel we do find the same argumentative structure that can be found at the ground level in the case of empirical confirmation.
Let us, for the sake of simplicity, just consider the most extreme form of the meta-level hypothesis, namely the hypothesis that, in all research contexts in the scientific field, there are no possible alternatives to the viable theory at all. This radical hypothesis predicts 1: that no alternatives will be found because there aren't any (NAA); 2: that, given that there exists a predictively successful theory at all, a theory that has been developed in agreement with the available data will always be predictively successful (MIA); and 3: that that a theory that has been developed for one specific reason will explain all other aspects of the given research context as well, because there are no alternatives that could do so (UEA).
A more careful formulation of non-empirical confirmation based on the concept of limitations to the spectrum of possible alternative theories would need to say more on the criteria for accepting a theory as scientific, on how to individuate theories, etc. In this short presentation, it shall suffice to give the general flavour of the line of reasoning: non-empirical confirmation is a natural extension of empirical confirmation that places the agreement between observation and the prediction of a hypothesis at the meta-level of theory dynamics rather than at the ground level of the theory's predictions.
An instructive way of clarifying the mechanism of non-empirical confirmation and its close relation to empirical confirmation consists in formalizing the arguments within the framework of Bayesian confirmation theory. An analysis of this kind has been carried out for NAA (which is the simplest case) in "The No Alternatives Argument", Dawid, Hartmann and Sprenger BJPS 66(1), 213-34, 2015.
A number of worries have been raised with respect to the concept of non-empirical confirmation. Let me, in the last part of this text, address a few of them.
It has been argued (e.g. by Sabine Hossenfelder) that arguments of non-empirical confirmation are sociological and therefore don't constitute proper scientific reasoning. This claim may be read in two different ways. In its radical form, it would amount to the statement that there is no factual scientific basis to non-empirical confirmation at all. Confidence in a theory on that account would be driven entirely by sociological mechanisms in the physics community and only be camouflaged ex post by fake rational reasoning. The present text in its entirety aims at demonstrating that such an understanding of non-empirical confirmation is highly inadequate.
A more moderate reading of the sociology claim is the following: there may be a factual core to non-empirical confirmation, but it is so difficult to disentangle from sociological factors that science is better off if non-empirical confirmation is discarded. I concede that the role of sociology is trickier with respect to deployments of non-empirical confirmation than in cases where conclusive empirical confirmation is to be had. But I would argue that it must always be the aim of good science to extract all factual information that is provided by an investigation. If the existence of a sociological element in scientific analysis would justify discarding that analysis, quite some empirical data analysis had to be discarded as well.
To give a recent example, the year 2015 witnessed considerable differences of opinion among physicists interpreting the empirical data collected by BICEP2. Those differences of opinion may be explained to a certain degree by sociological factors involved. No-one would have suggested to discard the debate on the interpretation of the BICEP2 data as scientifically worthless on those grounds. I suggest that the very same point of view should also be taken with respect to non-empirical confirmation.
It has also been suggested (e.g. by George Ellis and Joseph Silk) that non-empirical confirmation may lead to a disregard for empirical data and therefore to the abandonment of a pivotal principle of scientific reasoning.
This worry is based on a misreading of non-empirical confirmation. Accepting the importance of non-empirical confirmation by no means devaluates the search for empirical confirmation. To the contrary, empirical confirmation is crucial for the functioning of non-empirical confirmation in two ways. Firstly, non-empirical confirmation indicates the viability of a theory. But, as I said earlier, a theory's viability is defined as: the theory's empirical predictions would turn out correct if they could be specified and empirically tested. Conclusive empirical confirmation therefore remains the ultimate judge of a theory's viability - and thus the ultimate goal of science.
Secondly MIA, which is one cornerstone of non-empirical confirmation, relies on empirical confirmation elsewhere in the research field. Therefore, if empirical confirmation was terminated in the entire research field, that would remove the possibility of testing non-empirical confirmation strategies and, in the long run, make them dysfunctional. Non-empirical confirmation itself thus highlights the importance of testing theories empirically whenever possible. It implies, though, that the absence of empirical confirmation must not be equated with knowing nothing about the theory's chances of being viable.
Finally, it has been argued (e.g. by Lee Smolin) that non-empirical confirmation further strengthens the dominant research programs and therefore in an unhealthy way contributes to thinning out the search for alternative perspectives that may turn out productive later on.
To a given extent, that is correct. Taking non-empirical confirmation seriously does support the focus on those research strategies that generate theories with a considerable degree of non-empirical confirmation. I would argue, however, that this is, by and large, a positive effect. It is an important element of successful science to understand which approaches merit further investigations and which don't.
But a very important second point must be added. As discussed above, non-empirical confirmation is a technique for understanding the spectrum of possible alternatives to the theory one knows. One crucial test in that respect is to check whether serious and extensive searches for alternatives have produced any coherent alternative theories (This is the basis for NAA). Therefore, the search for alternatives is a crucial element of non-empirical confirmation. Far from denying the value of the search for alternatives, non-empirical confirmation adds a new reason why it is important: even if the alternative strands of research fail to produce coherent theories, the observation that none of those approaches has succeeded makes an important contribution to the non-empirical confirmation of the theory that is available.
So what is the status of non-empirical confirmation? The arguments I present support the general relevance of non-empirical confirmation in physics. In the absence of empirical confirmation, non-empirical confirmation can provide a strong case for taking a theory to be viable. This does by no means render empirical confirmation obsolete. Conclusive empirical testing will always trump non-empirical confirmation and therefore remains the ultimate goal in science. Arguments of non-empirical confirmation can in some cases lead to a nearly consensual assessment in the physics community (see the trust in the Higgs particle before 2012). In other cases, they can be more controversial.
As in all contexts of scientific inquiry, argumentation stressing non-empirical confirmation can be balanced and well founded but may, in some cases, also be exaggerated and unsound. The actual strength of each specific case of non-empirical confirmation has to be assessed and discussed by the physicists concerned with the given theory based on a careful scientific analysis of the particular case. Criticism of cases of non-empirical confirmation at that level constitutes an integral and important part of theory assessment. I suggest, however, that the whole-sale verdict that non-empirical theory confirmation is unscientific and should not be taken seriously does not do justice to the research process in physics and obscures the actual state of contemporary physics by disregarding an important element of scientific analysis.
Richard Dawid, LMU Munich
### Symmetrybreaking - Fermilab/SLAC
Hitting the neutrino floor
Dark matter experiments are becoming so sensitive, even the ghostliest of particles will soon get in the way.
The scientist who first detected the neutrino called the strange new particle “the most tiny quantity of reality ever imagined by a human being.” They are so absurdly small and interact with other matter so weakly that about 100 trillion of them pass unnoticed through your body every second, most of them streaming down on us from the sun.
And yet, new experiments to hunt for dark matter are becoming so sensitive that these ephemeral particles will soon show up as background. It’s a phenomenon some physicists are calling the “neutrino floor,” and we may reach it in as little as five years.
The neutrino floor applies only to direct detection experiments, which search for the scattering of a dark matter particle off of a nucleus. Many of these experiments look for WIMPs, or weakly interacting massive particles. If dark matter is indeed made of WIMPs, it will interact in the detector in nearly the same way as solar neutrinos.
We don’t know what dark matter is made of. Experiments around the world are working toward detecting a wide range of particles.
“What’s amazing is now the experimenters are trying to measure dark matter interactions that are at the same strength or even smaller than the strength of neutrino interactions,” says Thomas Rizzo, a theoretical physicist at SLAC National Accelerator Laboratory. “Neutrinos hardly interact at all, and yet we’re trying to measure something even weaker than that in the hunt for dark matter.”
This isn’t the first time the hunt for dark matter has been linked to the detection of solar neutrinos. In the 1980s, physicists stumped by what appeared to be missing solar neutrinos envisioned massive detectors that could fix the discrepancy. They eventually solved the solar neutrino problem using different methods (discovering that the neutrinos weren’t missing; they were just changing as they traveled to the Earth), and instead put the technology to work hunting dark matter.
In recent years, as the dark matter program has grown in size and scope, scientists realized the neutrino floor was no longer an abstract problem for future researchers to handle. In 2009, Louis Strigari, an astrophysicist at Texas A&M University, published the first specific predictions of when detectors would reach the floor. His work was widely discussed at a 2013 planning meeting for the US particle physics community, turning the neutrino floor into an active dilemma for dark matter physicists.
“At some point these things are going to appear,” Strigari says, “and the question is, how big do these detectors have to be in order for the solar neutrinos to show up?”
Strigari predicts that the first experiment to hit the floor will be the SuperCDMS experiment, which will hunt for WIMPs from SNOLAB in the Vale Inco Mine in Canada.
While hitting the floor complicates some aspects of the dark matter hunt, Rupak Mahapatra, a principal investigator for SuperCDMS at Texas A&M, says he hopes they reach it sooner rather than later—a know-thy-enemy kind of thing.
“It is extremely important to know the neutrino floor very precisely,” Mahapatra says. “Once you hit it first, that’s a benchmark. You understand what exactly that number should be, and it helps you build a next-generation experiment.”
Much of the work of untangling a dark matter signal from neutrino background will come during data analysis. One strategy involves taking advantage of the natural ebbs and flows in the amount of dark matter and neutrinos hitting Earth. Dark matter’s natural flux, which arises from the motion of the sun through the Milky Way, peaks in June and reaches its lowest point in December. Solar neutrinos, on the other hand, peak in January, when the Earth is closest to the sun.
“That could help you disentangle how much is signal and how much is background,” Rizzo says.
There’s also the possibility that dark matter is not, in fact, a WIMP. Another potentially viable candidate is the axion, a hypothetical particle that solves a lingering mystery of the strong nuclear force. While WIMP and neutrino interactions look very similar, axion interactions would appear differently in a detector, making the neutrino floor a non-issue.
But that doesn’t mean physicists can abandon the WIMP search in favor of axions, says JoAnne Hewett, a theoretical physicist at SLAC. “WIMPs are still favored for many reasons. The neutrino floor just makes it more difficult to detect. It doesn’t make it less likely to exist.”
Physicists are confident that they’ll eventually be able to separate a dark matter signal from neutrino noise. Next-generation experiments might even be able to distinguish the direction a particle is coming from when it hits the detector, something the detectors being built today just can’t do. If an interaction seemed to come from the direction of the sun, that would be a clear indication that it was likely a solar neutrino.
“There’s certainly avenues to go here,” Strigari says. “It’s not game over, we don’t think, for dark matter direct detection.”
Like what you see? Sign up for a free subscription to symmetry!
## September 16, 2015
### Jester - Resonaances
What can we learn from LHC Higgs combination
Recently, ATLAS and CMS released the first combination of their Higgs results. Of course, one should not expect any big news here: combination of two datasets that agree very well with the Standard Model predictions has to agree very well with the Standard Model predictions... However, it is interesting to ask what the new results change at the quantitative level concerning our constraints on Higgs boson couplings to matter.
First, experiments quote the overall signal strength μ, which measures how many Higgs events were detected at the LHC in all possible production and decay channels compared to the expectations in the Standard Model. The latter, by definition, is μ=1. Now, if you had been impatient to wait for the official combination, you could have made a naive one using the previous ATLAS (μ=1.18±0.14) and CMS (μ=1±0.14) results. Assuming the errors are Gaussian and uncorrelated, one would obtains this way the combined μ=1.09±0.10. Instead, the true number is (drum roll)
So, the official and naive numbers are practically the same. This result puts important constraints on certain models of new physics. One important corollary is that the Higgs boson branching fraction to invisible (or any undetected exotic) decays is limited as Br(h → invisible) ≤ 13% at 95% confidence level, assuming the Higgs production is not affected by new physics.
From the fact that, for the overall signal strength, the naive and official combinations coincide one should not conclude that the work ATLAS and CMS has done together is useless. As one can see above, the statistical and systematic errors are comparable for that measurement, therefore a naive combination is not guaranteed to work. It happens in this particular case that the multiple nuisance parameters considered in the analysis pull essentially in random directions. But it could well have been different. Indeed, the more one enters into details, the more the impact of the official combination becomes relevant. For the signal strength measured in particular final states of the Higgs decay the differences are more pronounced:
One can see that the naive combination somewhat underestimates the errors. Moreover, for the WW final state the central value is shifted by half a sigma (this is mainly because, in this channel, the individual ATLAS and CMS measurements that go into the combination seem to be different than the previously published ones). The difference is even more clearly visible for 2-dimensional fits, where the Higgs production cross section via the gluon fusion (ggf) and vector boson fusion (vbf) are treated as free parameters. This plot compares the regions preferred at 68% confidence level by the official and naive combinations:
There is a significant shift of the WW and also of the ττ ellipse. All in all, the LHC Higgs combination brings no revolution, but it allows one to obtain more precise and more reliable constraints on some new physics models. The more detailed information is released, the more useful the combined results become.
### Symmetrybreaking - Fermilab/SLAC
A light in the dark
The MiniCLEAN dark matter experiment prepares for its debut.
Getting to an experimental cavern 6800 feet below the surface in Sudbury, Ontario, requires an unusual commute. The Cage, an elevator that takes people into the SNOLAB facility, descends twice every morning at 6 a.m. and 8 a.m. Before entering the lab, individuals shower and change so they don’t contaminate the experimental areas.
A thick layer of natural rock shields the clean laboratory where air quality, humidity and temperature are highly regulated. These conditions allow scientists to carry out extremely sensitive searches for elusive particles such as dark matter and neutrinos.
The Cage returns to the surface at 3:45 p.m. each day. During the winter months, researchers go underground before the sun rises and emerge as it sets. Steve Linden, a postdoctoral researcher from Boston University, makes the trek every morning to work on MiniCLEAN, which scientists will use to test a novel technique for directly detecting dark matter.
“It’s a long day,” Linden says.
Scientists and engineers have spent the past eight years designing and building the MiniCLEAN detector. Today that task is complete; they have begun commissioning and cooling the detector to fill it with liquid argon to start its search for dark matter.
Though dark matter is much more abundant than the visible matter that makes up planets, stars and everything we can see, no one has ever identified it. Dark matter particles are chargeless, don’t absorb or emit light, and interact very weakly with matter, making them incredibly difficult to detect.
#### Spotting the WIMPs
MiniCLEAN (CLEAN stands for Cryogenic Low-Energy Astrophysics with Nobles) aims to detect weakly interacting massive particles, or WIMPs, the current favorite dark matter candidate. Scientists will search for these rare particles by observing their interactions with atoms in the detector.
To make this possible, the detector will be filled with over 500 kilograms of very cold, dense, ultra-pure materials—argon at first, and later neon. If a WIMP passes through and collides with an atom’s nucleus, it will produce a pulse of light with a unique signature. Scientists can collect and analyze this light to determine whether what they saw was a dark matter particle or some other background event.
The use of both argon and neon will allow MiniCLEAN to double-check any possible signals. Argon is more sensitive than neon, so a true dark matter signal would disappear when liquid argon is replaced with liquid neon. Only an intrinsic background signal from the detector would persist. Scientists would like to eventually scale this experiment up to a larger version called CLEAN.
#### Overcoming obstacles
MiniCLEAN is a small experiment, with about 15 members in the collaboration and the project lead at Pacific Northwest National Laboratory. While working on this experiment underground with few hands to spare, the team has run into some unexpected roadblocks.
One such obstacle appeared while transporting the inner vessel, a detector component that will contain the liquid argon or neon.
“Last November, as we finished assembling the inner vessel and were getting ready to move it to where it needed to end up, we realized it wouldn’t fit between the doors into the hallway we had to wheel it down,” Linden explains.
When this happened, the team was faced with two options: somehow reduce the size of the vessel, or cut away a part of the door—not a simple thing to do in a clean lab. Fortunately, temporarily replacing some of the vessel’s parts reduced the size enough to make it fit. They got it through the doorway with about an eighth of an inch clearance on each side.
“What gives me the energy to persist on this project is that the CLEAN approach is unique, and there isn’t another approach to dark matter that is like it,” says Pacific Northwest National Laboratory scientist Andrew Hime, MiniCLEAN spokesperson and principal investigator. “It’s been eight years since we starting pushing hard on this program, and finally getting real data from the detector will be a breath of fresh air.”
Like what you see? Sign up for a free subscription to symmetry!
### ATLAS Experiment
TOP 2015 – Top quarks come to Italy!
The annual top conference! This year we’re in Ischia, Italy. The hotel is nice, the pool is tropical and heated, but you don’t want to hear about that, you want to hear about the latest news in the Standard Model’s heaviest and coolest particle, the top quark! You won’t be disappointed.
DAY 1:
Our keynote speaker is Michael Peskin. For those of you who have a PhD in particle physics, you already know Peskin. He wrote that textbook you fear. His talk is very good and accessible, even for an experimentalist like myself, and he gives us a very nice overview of the status of theory calculations in top physics, highlighting a few areas he’d like to see more work on.
The highlights of my day though are the ATLAS and CMS physics objects talks. Normally, these can be a little dull. However this year we have performance plots for the first time at 13 TeV, and most people are closely scrutinising the performance of both experiments. All except a guy who looks suspiciously like Game of Thrones character Joffrey Baratheon, who is sitting completely upright, eyes closed and snoring lightly.
POSTER SESSION:
The poster session, two hours in (photo from @JoshMcfayden)
If you’ve never been to a poster session then this is how they work: a group of students and young postdocs, eager to present their own work (a rare treat in collaborations as large as ATLAS and CMS) stand around, proudly showcasing how they managed to make powerpoint do something that it really wasn’t designed to do.
My poster (approved only hours before) gets a fair bit of attention, but not as much as I expected. Suddenly I regret not slapping a huge “New 13 TeV Results!” banner on the top of it.
After 3 hours (yes, 3 hours!) of standing by my poster I decide that everyone who wants to see it will have done by now, grab 3 (or 10) canapés and head to the laptop in the corner to cast my vote. For a brief moment I consider not voting for myself, but the moment passes and I type in my own name.
DAY 2:
I sit down next to Joffrey Baratheon and smile at him politely. It’s not his fault he’s an evil king after all. We start the morning with some theory, because we’re mostly experimentalists and everyone knows our attention spans are limited if they give us wine with lunch. As with last year, the hot topic is ever more precise calculations.
Next we have a very professional talk from a very professional-looking CMS experimentalist. People who wear shirts and sensible shoes to give a plenary talk either means serious business or a terrified student giving their first conference talk. From the polished introduction on top cross-section, you can tell it’s the former.
CMS have clearly put a lot of effort in to these results (and I’m secretly relieved that I already know our results are equally impressive), and despite a spine-chillingly large luminosity uncertainty of 12%, they have achieved remarkable precision.
Finally, we’ve arrived at the talk that I’ve been waiting for; The ATLAS Run2 cross-section results.
A summary of the latest top anti-top cross-section measurements from ATLAS.
The speaker starts by flashing our already released cross-section in the eµ channel at 13 TeV. Even with an integrated luminosity uncertainty of 9%, it’s still a fantastic early result. We show an updated eµ result in which we measure the ratio with the Z-boson cross-section (effectively cancelling the luminosity uncertainty). People seem pretty impressed by that, as they should. Getting the top group to release results this early is hard enough, getting the standard model group to release an inclusive Z cross section is nothing short of a miracle.
Now the speaker moves on to the precision 8 TeV results. Wait a minute? What’s going on? There are other 13 TeV results to show? What is he DOING?! Months of working on the ee and µµ cross section results and we’ve skipped past them? I turn to my colleague, who led the also-skipped lepton+jets cross section analysis. His face is stoic, as is his way, but inside I know he’s ready to storm the stage with me. I begin to whisper to my boss, sat one seat ahead of me, about the injustice of it all. Somehow it’s coming out as a childish tantrum, despite sounding perfectly reasonable in my head.
… and then the speaker shows the result. My boss rolls her eyes at me and returns to her laptop, possibly rethinking my contract extension. Joffrey Baratheon scowls at the disturbance I’ve caused and I consider strangling him with his pullover.
Stay tuned for part 2! Where we learn about new single-top results, new mass measurements, and ttH!
James Howarth is a postdoctoral research fellow at DESY, working on top quark cross-sections and properties for ATLAS. He joined the ATLAS experiment in 2009 as a PhD student with the University of Manchester, before moving to DESY, Hamburg in 2013. In his spare time he enjoys drinking, arguing, and generally being difficult.
## September 15, 2015
### Symmetrybreaking - Fermilab/SLAC
Where the Higgs belongs
The Higgs doesn’t quite fit in with the other particles of the Standard Model of particle physics.
If you were Luke Skywalker in Star Wars, and you carried a tiny green Jedi master on your back through the jungles of Dagobah for long enough, you could eventually raise your submerged X-wing out of the swamp just by using the Force.
But if you were a boson in the Standard Model of particle physics, you could skip the training—you would be the force.
Bosons are particles that carry the four fundamental forces. These forces push and pull what would otherwise have been an unwieldy soup of particles into the beautiful mosaic of stars and galaxies that permeate the visible universe.
The fundamental forces keep protons incredibly stable (the strong force holds them together), cause compasses to point north (the electromagnetic force attracts the needle), make apples fall off trees (gravity attracts the fruit to the ground), and keep the sun shining (the weak force allows nuclear fusion to occur).
In 2012, the Higgs boson became an officially recognized member of this family of fundamental bosons.
The Higgs is called a boson because of a quantum mechanical property called spin—which represents a particle’s intrinsic angular momentum and characterizes how a particle plays with its Standard Model friends.
Bosons have an integer spin (0, 1, 2) which makes them the touchy-feely types. They have no need for personal space. Fermions, on the other hand, have a non-integer spin (1/2, 3/2, etc.), which makes them a bit more isolated; they prefer to keep their distance from other particles.
The Higgs has a spin of 0, making it officially a boson.
“Every boson is associated with one of the four fundamental forces,” says Kyle Cranmer, an associate professor of physics at New York University. “So if we discover a new boson, it seems natural that we should find a new force.”
Scientists think that a Higgs force does exist. But it’s the Higgs boson’s relationship to that force that makes it a bit of a black sheep. It’s the reason that, when the Higgs is added to the Standard Model of particle physics, it’s often pictured apart from the rest of the boson family.
#### What the Higgs is for
The Higgs boson is an excitation of the Higgs field, which interacts with some of the fundamental particles to give them mass.
“The way the Higgs field gives masses to particles is its own unique feature, which is different from all other known fields in the universe,” says Matt Strassler, a Harvard University theoretical physicist. “When the Higgs field turns on, it changes the environment for all particles; it changes the nature of empty space itself. The way particles interact with this field is based on their intrinsic properties.”
There are three inherent qualifications required for a field to generate a force: The field must be able to switch on and off. It must have a preferred direction. And it must be able to attract or repel.
Normally the Higgs field fails the first two requirements—it’s always on, with no preferred direction. But in the presence of a Higgs boson, the field is distorted, theoretically allowing it to generate a force.
“We think that two particles can pull on each other using the Higgs field,” Strassler says. “The same equations we used to predict that the Higgs particle should exist, and how it should decay to other particles, also predict this force will exist.”
Just what role that force might play in our greater understanding of the universe is still a mystery.
“We know the Higgs field is essential in the formation of stable matter,” Strassler says. “But the Higgs force—as far as we know—is not.”
The Higgs force could be important in some other way, Strassler says. It could be related to how much dark matter exists in the universe or the huge imbalance between matter and antimatter. “It’s too early to write it off,” he says.
During this run of the Large Hadron Collider, physicists expect to produce roughly 10 times as many Higgs bosons as they did during the first run. This will enable scientists to examine the properties of this peculiar particle more deeply.
Like what you see? Sign up for a free subscription to symmetry!
### Georg von Hippel - Life on the lattice
Fundamental Parameters from Lattice QCD, Day Seven
Today's programme featured two talks about the interplay between the strong and the electroweak interactions. The first speaker was Gregorio Herdoíza, who reviewed the determination of hadronic corrections to electroweak observables. In essence these determinations are all very similar to the determination of the leading hadronic correction to (g-2)μ since they involve the lattice calculation of the hadronic vacuum polarisation. In the case of the electromagnetic coupling α, its low-energy value is known to a precision of 0.3 ppb, but the value of α(mZ2) is known only to 0.1 ‰, and a larger portion of the difference in uncertainty is due to the hadronic contribution to the running of α, i.e. the hadronic vacuum polarization. Phenomenologically this can be estimated through the R-ratio, but this results in relatively large errors at low Q2. On the lattice, the hadronic vacuum polarization can be measured through the correlator of vector currents, and currently a determination of the running of α in agreement with phenomenology and with similar errors can be achieved, so that in the future lattice results are likely to take the lead here. In the case of the electroweak mixing angle, sin2θw is known well at the Z pole, but only poorly at low energy, although a number of experiments (including the P2 experiment at Mainz) are aiming to reduce the uncertainty at lower energies. Again, the running can be determined from the Z-γ mixing through the associated current-current correlator, and current efforts are under way, including an estimation of the systematic error caused by the omission of quark-disconnected diagrams.
The second speaker was Vittorio Lubicz, who looked at the opposite problem, i.e. the electroweak corrections to hadronic observables. Since approximately α=1/137, electromagnetic corrections at the one-loop level will become important once the 1% level of precision is being aimed for, and since the up and down quarks have different electrical charges, this is an isospin-breaking effect which also necessitates at the same time considering the strong isospin breaking caused by the difference in the up and down quark masses. There are two main methods to include QED effects into lattice simulations; the first is direct simulations of QCD+QED, and the second is the method of incorporating isospin-breaking effects in a systematic expansion pioneered by Vittorio and colleagues in Rome. Either method requires a systematic treatment of the IR divergences arising from the lack of a mass gap in QED. In the Rome approach this is done through splitting the Bloch-Nordsieck treatment of IR divergences and soft bremsstrahlung into two pieces, whose large-volume limits can be taken separately. There are many other technical issues to be dealt with, but first physical results from this method should be forthcoming soon.
In the afternoon there was a discussion about QED effects and the range of approaches used to treat them.
## September 14, 2015
### The n-Category Cafe
Where Does The Spectrum Come From?
Perhaps you, like me, are going to spend some of this semester teaching students about eigenvalues. At some point in our lives, we absorbed the lesson that eigenvalues are important, and we came to appreciate that the invariant par excellence of a linear operator on a finite-dimensional vector space is its spectrum: the set-with-multiplicities of eigenvalues. We duly transmit this to our students.
There are lots of good ways to motivate the concept of eigenvalue, from lots of points of view (geometric, algebraic, etc). But one might also seek a categorical explanation. In this post, I’ll address the following two related questions:
1. If you’d never heard of eigenvalues and knew no linear algebra, and someone handed you the category $\mathrm{FDVect}\mathbf\left\{FDVect\right\}$ of finite-dimensional vector spaces, what would lead you to identify the spectrum as an interesting invariant of endomorphisms in $\mathrm{FDVect}\mathbf\left\{FDVect\right\}$?
2. What is the analogue of the spectrum in other categories?
I’ll give a fairly complete answer to question 1, and, with the help of that answer, speculate on question 2.
(New, simplified version posted at 22:55 UTC, 2015-09-14.)
Famously, trace has a kind of cyclicity property: given maps
$X\stackrel{f}{\to }Y\stackrel{g}{\to }X X \stackrel\left\{f\right\}\left\{\to\right\} Y \stackrel\left\{g\right\}\left\{\to\right\} X $
in $\mathrm{FDVect}\mathbf\left\{FDVect\right\}$, we have
$\mathrm{tr}\left(g\circ f\right)=\mathrm{tr}\left(f\circ g\right). tr\left(g \circ f\right) = tr\left(f \circ g\right). $
I call this “cyclicity” because it implies the more general property that for any cycle
${X}_{0}\stackrel{{f}_{1}}{\to }{X}_{1}\stackrel{{f}_{2}}{\to }\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\cdots \phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\stackrel{{f}_{n-1}}{\to }{X}_{n-1}\stackrel{{f}_{n}}{\to }{X}_{0} X_0 \stackrel\left\{f_1\right\}\left\{\to\right\} X_1 \stackrel\left\{f_2\right\}\left\{\to\right\} \,\, \cdots\,\, \stackrel\left\{f_\left\{n-1\right\}\right\}\left\{\to\right\} X_\left\{n - 1\right\} \stackrel\left\{f_n\right\}\left\{\to\right\} X_0 $
of linear maps, the scalar
$\mathrm{tr}\left({f}_{i}\circ \cdots \circ {f}_{1}\circ {f}_{n}\circ \cdots \circ {f}_{i+1}\right) tr\left(f_i \circ \cdots \circ f_1 \circ f_n \circ \cdots \circ f_\left\{i + 1\right\}\right) $
is independent of $ii$.
A slightly less famous fact is that the same cyclicity property is enjoyed by a finer invariant than trace: the set-with-multiplicities of nonzero eigenvalues. In other words, the operators $g\circ fg\circ f$ and $f\circ gf\circ g$ have the same nonzero eigenvalues, with the same (algebraic) multiplicities. Zero has to be excluded to make this true: for instance, if we take $ff$ and $gg$ to be the projection and inclusion associated with a direct sum decomposition, then one composite operator has $00$ as an eigenvalue and the other does not.
I’ll write $\mathrm{Spec}\left(T\right)Spec\left(T\right)$ for the set-with-multiplicities of eigenvalues of a linear operator $TT$, and $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ for the set-with-multiplicities of nonzero eigenvalues. Everything we’ll do is on finite-dimensional vector spaces over an algebraically closed field $kk$. Thus, $\mathrm{Spec}\left(T\right)Spec\left(T\right)$ is a finite subset-with-multiplicity of $kk$ and $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ is a finite subset-with-multiplicity of ${k}^{×}=k\setminus \left\{0\right\}k^\times = k \setminus \\left\{0\\right\}$.
I’ll call $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ the invertible spectrum of $TT$. Why? Because every operator $TT$ decomposes uniquely as a direct sum of operators ${T}_{\mathrm{nil}}\oplus {T}_{\mathrm{inv}}T_\left\{nil\right\} \oplus T_\left\{inv\right\}$, where every eigenvalue of ${T}_{\mathrm{nil}}T_\left\{nil\right\}$ is $00$ (or equivalently, ${T}_{\mathrm{nil}}T_\left\{nil\right\}$ is nilpotent) and no eigenvalue of ${T}_{\mathrm{inv}}T_\left\{inv\right\}$ is $00$ (or equivalently, ${T}_{\mathrm{inv}}T_\left\{inv\right\}$ is invertible). Then the invertible spectrum of $TT$ is the spectrum of its invertible part ${T}_{\mathrm{inv}}T_\left\{inv\right\}$.
If excluding zero seems forced or unnatural, perhaps it helps to consider the “reciprocal spectrum”
$\mathrm{RecSpec}\left(T\right)=\left\{\lambda \in k:\mathrm{ker}\left(\lambda T-I\right)\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\text{is nontrivial}\right\}. RecSpec\left(T\right) = \\left\{\lambda \in k : ker\left(\lambda T - I\right) \,\,\text\left\{ is nontrivial\right\} \\right\}. $
There’s a canonical bijection between $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ and $\mathrm{RecSpec}\left(T\right)RecSpec\left(T\right)$ given by $\lambda ↔1/\lambda \lambda \leftrightarrow 1/\lambda$. So the invariants $\mathrm{Spec}\prime Spec\text{'}$ and $\mathrm{RecSpec}RecSpec$ carry the same information, and if $\mathrm{RecSpec}RecSpec$ seems natural to you then $\mathrm{Spec}\prime Spec\text{'}$ should too.
Moreover, if you know the space $XX$ that your operator $TT$ is acting on, then to know the invertible spectrum $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ is to know the full spectrum $\mathrm{Spec}\left(T\right)Spec\left(T\right)$. That’s because the multiplicities of the eigenvalues of $TT$ sum to $\mathrm{dim}\left(X\right)dim\left(X\right)$, and so the multiplicity of $00$ in $\mathrm{Spec}\left(T\right)Spec\left(T\right)$ is $\mathrm{dim}\left(X\right)dim\left(X\right)$ minus the sum of the multiplicities of the nonzero eigenvalues.
The cyclicity equation
$\mathrm{Spec}\prime \left(g\circ f\right)=\mathrm{Spec}\prime \left(f\circ g\right) Spec\text{'}\left(g\circ f\right) = Spec\text{'}\left(f\circ g\right) $
is a very strong property of $\mathrm{Spec}\prime Spec\text{'}$. A second, seemingly more mundane, property is that for any operators ${T}_{1}T_1$ and ${T}_{2}T_2$ on the same space, and any scalar $\lambda \lambda$,
$\mathrm{Spec}\prime \left({T}_{1}\right)=\mathrm{Spec}\prime \left({T}_{2}\right)⇒\mathrm{Spec}\prime \left({T}_{1}+\lambda I\right)=\mathrm{Spec}\prime \left({T}_{2}+\lambda I\right). Spec\text{'}\left(T_1\right) = Spec\text{'}\left(T_2\right) \implies Spec\text{'}\left(T_1 + \lambda I\right) = Spec\text{'}\left(T_2 + \lambda I\right). $
In other words, for an operator $TT$, if you know $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ and you know the space that $TT$ acts on, then you know $\mathrm{Spec}\prime \left(T+\lambda I\right)Spec\text{'}\left(T + \lambda I\right)$ for each scalar $\lambda \lambda$. Why? Well, we noted above that if you know the invertible spectrum of an operator and you know the space it acts on, then you know the full spectrum. So $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ determines $\mathrm{Spec}\left(T\right)Spec\left(T\right)$, which determines $\mathrm{Spec}\left(T+\lambda I\right)Spec\left(T + \lambda I\right)$ (as $\mathrm{Spec}\left(T\right)+\lambda Spec\left(T\right) + \lambda$), which in turn determines $\mathrm{Spec}\prime \left(T+\lambda I\right)Spec\text{'}\left(T + \lambda I\right)$.
I claim that the invariant $\mathrm{Spec}\prime Spec\text{'}$ is universal with these two properties, in the following sense.
Theorem Let $\Omega \Omega$ be a set and let $\Phi :\left\{\text{linear operators}\right\}\to \Omega \Phi : \\left\{ \text\left\{linear operators\right\} \\right\} \to \Omega$ be a function satisfying:
1. $\Phi \left(g\circ f\right)=\Phi \left(f\circ g\right)\Phi\left(g \circ f\right) = \Phi\left(f \circ g\right)$ for all $X\stackrel{f}{\to }Y\stackrel{g}{\to }XX \stackrel\left\{f\right\}\left\{\to\right\} Y \stackrel\left\{g\right\}\left\{\to\right\} X$
2. $\Phi \left({T}_{1}\right)=\Phi \left({T}_{2}\right)\Phi\left(T_1\right) = \Phi\left(T_2\right)$ $\implies$ $\Phi \left({T}_{1}+\lambda I\right)=\Phi \left({T}_{2}+\lambda I\right)\Phi\left(T_1 + \lambda I\right) = \Phi\left(T_2 + \lambda I\right)$ for all operators ${T}_{1},{T}_{2}T_1, T_2$ on the same space, and all scalars $\lambda \lambda$.
Then $\Phi \Phi$ is a specialization of $\mathrm{Spec}\prime Spec\text{'}$, that is, $\mathrm{Spec}\prime \left({T}_{1}\right)=\mathrm{Spec}\prime \left({T}_{2}\right)⇒\Phi \left({T}_{1}\right)=\Phi \left({T}_{2}\right) Spec\text{'}\left(T_1\right) = Spec\text{'}\left(T_2\right) \implies \Phi\left(T_1\right) = \Phi\left(T_2\right) $ for all ${T}_{1},{T}_{2}T_1, T_2$. Equivalently, there is a unique function $\overline{\Phi }:\left\{\text{finite subsets-with-multiplicity of}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}{k}^{×}\right\}\to \Omega \bar\left\{\Phi\right\} : \\left\{ \text\left\{finite subsets-with-multiplicity of \right\}\,\, k^\times\\right\} \to \Omega $ such that $\Phi \left(T\right)=\overline{\Phi }\left(\mathrm{Spec}\prime \left(T\right)\right)\Phi\left(T\right) = \bar\left\{\Phi\right\}\left(Spec\text{'}\left(T\right)\right)$ for all operators $TT$.
For example, take $\Phi \Phi$ to be trace. Then conditions 1 and 2 are satisfied, so the theorem implies that trace is a specialization of $\mathrm{Spec}\prime Spec\text{'}$. That’s clear anyway, since the trace of an operator is the sum-with-multiplicities of the nonzero eigenvalues.
I’ll say just a little about the proof.
The invertible spectrum of a nilpotent operator is empty. Now, the Jordan normal form theorem invites us to pay special attention to the special nilpotent operators ${P}_{n}P_n$ on ${k}^{n}k^n$ defined as follows: writing ${e}_{1},\dots ,{e}_{n}e_1, \ldots, e_n$ for the standard basis of ${k}^{n}k^n$, the operator ${P}_{n}P_n$ is given by
${e}_{n}↦{e}_{n-1}↦\cdots ↦{e}_{1}↦0. e_n \mapsto e_\left\{n - 1\right\} \mapsto \cdots \mapsto e_1 \mapsto 0. $
So if the theorem is to be true then, in particular, $\Phi \left({P}_{n}\right)\Phi\left(P_n\right)$ must be independent of $nn$.
But it’s not hard to cook up maps $f:{k}^{n}\to {k}^{n-1}f: k^n \to k^\left\{n - 1\right\}$ and $g:{k}^{n-1}\to {k}^{n}g: k^\left\{n - 1\right\} \to k^n$ such that $g\circ f={P}_{n}g\circ f = P_n$ and $f\circ g={P}_{n-1}f \circ g = P_\left\{n - 1\right\}$. Thus, condition 1 implies that $\Phi \left({P}_{n}\right)=\Phi \left({P}_{n-1}\right)\Phi\left(P_n\right) = \Phi\left(P_\left\{n - 1\right\}\right)$. It follows that $\Phi \left({P}_{n}\right)\Phi\left(P_n\right)$ is independent of $nn$, as claimed.
Of course, that doesn’t prove the theorem. But the rest of the proof is straightforward, given the Jordan normal form theorem and condition 2, and in this way, we arrive at the conclusion of the theorem:
$\mathrm{Spec}\prime \left({T}_{1}\right)=\mathrm{Spec}\prime \left({T}_{2}\right)⇒\Phi \left({T}_{1}\right)=\Phi \left({T}_{2}\right) Spec\text{'}\left(T_1\right) = Spec\text{'}\left(T_2\right) \implies \Phi\left(T_1\right) = \Phi\left(T_2\right) $
for any operators ${T}_{1}T_1$ and ${T}_{2}T_2$.
One way to interpret the theorem is as follows. Let $\sim \sim$ be the smallest equivalence relation on $\left\{\text{linear operators}\right\}\\left\{\text\left\{linear operators\right\}\\right\}$ such that:
1. $g\circ f\sim f\circ gg\circ f \sim f \circ g$
2. ${T}_{1}\sim {T}_{2}T_1 \sim T_2$ $\implies$ ${T}_{1}+\lambda I\sim {T}_{2}+\lambda IT_1 + \lambda I \sim T_2 + \lambda I$
(where $ff$, $gg$, etc. are quantified as in the theorem). Then the natural surjection
$\left\{\text{linear operators}\right\}⟶\left\{\text{linear operators}\right\}/\sim \\left\{ \text\left\{linear operators\right\} \\right\} \longrightarrow \\left\{ \text\left\{linear operators\right\} \\right\}/\sim $
is isomorphic to
$\mathrm{Spec}\prime :\left\{\text{linear operators}\right\}⟶\left\{\text{finite subsets-with-multiplicity of}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}{k}^{×}\right\}. Spec\text{'}: \\left\{ \text\left\{linear operators\right\} \\right\} \longrightarrow \\left\{ \text\left\{finite subsets-with-multiplicity of \right\}\,\, k^\times\\right\}. $
That is, there is a bijection between $\left\{\text{linear operators}\right\}/\sim \\left\{ \text\left\{linear operators\right\} \\right\}/\sim$ and $\left\{\text{finite subsets-with-multiplicity of}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}{k}^{×}\right\}\\left\{ \text\left\{finite subsets-with-multiplicity of \right\}\,\, k^\times\\right\}$ making the evident triangle commute.
So, we’ve characterized the invariant $\mathrm{Spec}\prime Spec\text{'}$ in terms of conditions 1 and 2. These conditions seem reasonably natural, and don’t depend on any prior concepts such as “eigenvalue”.
Condition 2 does appear to refer to some special features of the category $\mathrm{FDVect}\mathbf\left\{FDVect\right\}$ of finite-dimensional vector spaces. But let’s now think about how it could be interpreted in other categories. That is, for a category $\mathcal\left\{E\right\}$ (in place of $\mathrm{FDVect}\mathbf\left\{FDVect\right\}$) and a function
$\Phi :\left\{\text{endomorphisms in}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}ℰ\right\}\to \Omega \Phi: \\left\{ \text\left\{endomorphisms in \right\}\,\, \mathcal\left\{E\right\} \\right\} \to \Omega $
into some set $\Omega \Omega$, how can we make sense of condition 2?
Write $\mathrm{Endo}\left(ℰ\right)\mathbf\left\{Endo\right\}\left(\mathcal\left\{E\right\}\right)$ for the category of endomorphisms in $\mathcal\left\{E\right\}$, with maps preserving those endomorphisms in the sense that the evident square commutes. (It’s the category of functors from the additive monoid $\mathbb\left\{N\right\}$, seen as a one-object category, into $\mathcal\left\{E\right\}$.)
For any scalars $\kappa \ne 0\kappa \neq 0$ and $\lambda \lambda$, there’s an automorphism ${F}_{\kappa ,\lambda }F_\left\{\kappa, \lambda\right\}$ of the category $\mathrm{Endo}\left(\mathrm{FDVect}\right)\mathbf\left\{Endo\right\}\left(\mathbf\left\{FDVect\right\}\right)$ given by
${F}_{\kappa ,\lambda }\left(T\right)=\kappa T+\lambda I. F_\left\{\kappa, \lambda\right\}\left(T\right) = \kappa T + \lambda I. $
I guess, but haven’t proved, that these are the only automorphisms of $\mathrm{Endo}\left(\mathrm{FDVect}\right)\mathbf\left\{Endo\right\}\left(\mathbf\left\{FDVect\right\}\right)$ that leave the underlying vector space unchanged. In what follows, I’ll assume this guess is right.
Now, condition 2 says that $\Phi \left(T\right)\Phi\left(T\right)$ determines $\Phi \left(T+\lambda I\right)\Phi\left(T + \lambda I\right)$ for each $\lambda \lambda$, for operators $TT$ on a known space. That’s weaker than the statement that $\Phi \left(T\right)\Phi\left(T\right)$ determines $\Phi \left(\kappa T+\lambda I\right)\Phi\left(\kappa T + \lambda I\right)$ for each $\kappa \ne 0\kappa \neq 0$ and $\lambda \lambda$ — but $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ does determine $\mathrm{Spec}\prime \left(\kappa T+\lambda I\right)Spec\text{'}\left(\kappa T + \lambda I\right)$. So the theorem remains true if we replace condition 2 with the statement that $\Phi \left(T\right)\Phi\left(T\right)$ determines $\Phi \left(F\left(T\right)\right)\Phi\left(F\left(T\right)\right)$ for each automorphism $FF$ of $\mathrm{Endo}\left(\mathrm{FDVect}\right)\mathbf\left\{Endo\right\}\left(\mathbf\left\{FDVect\right\}\right)$ “over $\mathrm{FDVect}\mathbf\left\{FDVect\right\}$” (that is, leaving the underlying vector space unchanged).
This suggests the following definition:
Definition Let $\mathcal\left\{E\right\}$ be a category. Let $\sim \sim$ be the equivalence relation on $\left\{\text{endomorphisms in}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}ℰ\right\}\\left\{ \text\left\{endomorphisms in \right\}\,\, \mathcal\left\{E\right\} \\right\}$ generated by:
1. $g\circ f\sim f\circ gg\circ f \sim f \circ g$ for all $X\stackrel{f}{\to }Y\stackrel{g}{\to }XX \stackrel\left\{f\right\}\left\{\to\right\} Y \stackrel\left\{g\right\}\left\{\to\right\} X$ in $\mathcal\left\{E\right\}$
2. ${T}_{1}\sim {T}_{2}T_1 \sim T_2$ $\implies$ $F\left({T}_{1}\right)\sim F\left({T}_{2}\right)F\left(T_1\right) \sim F\left(T_2\right)$ for all endomorphisms ${T}_{1},{T}_{2}T_1, T_2$ on the same object of $\mathcal\left\{E\right\}$ and all automorphisms $FF$ of $\mathrm{Endo}\left(ℰ\right)\mathbf\left\{Endo\right\}\left(\mathcal\left\{E\right\}\right)$ over $\mathcal\left\{E\right\}$.
Call $\left\{\text{endomorphisms in}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}ℰ\right\}/\sim \\left\{ \text\left\{endomorphisms in \right\}\,\, \mathcal\left\{E\right\}\\right\}/\sim$ the set of invertible spectral values of $\mathcal\left\{E\right\}$. Write $\mathrm{Spec}\prime :\left\{\text{endomorphisms in}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}ℰ\right\}\to \left\{\text{invertible spectral values of}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}ℰ\right\} Spec\text{'}: \\left\{ \text\left\{endomorphisms in \right\}\,\, \mathcal\left\{E\right\} \\right\} \to \\left\{ \text\left\{invertible spectral values of\right\}\,\, \mathcal\left\{E\right\} \\right\} $ for the natural surjection. The invertible spectrum of an endomorphism $TT$ in $\mathcal\left\{E\right\}$ is $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$.
In the case $ℰ=\mathrm{FDVect}\mathcal\left\{E\right\} = \mathbf\left\{FDVect\right\}$, the invertible spectral values are the finite subsets-with-multiplicity of ${k}^{×}k^\times$, and the invertible spectrum $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ is as defined at the start of this post — namely, the set of nonzero eigenvalues with their algebraic multiplicities.
Aside At least, that’s the case up to isomorphism. You might feel that we’ve lost something, though. After all, the spectrum of a linear operator is a subset-with-multiplicities of the base field, not just an element of some abstract set.
But the theorem does give us some structure on the set of invertible spectral values. This remark of mine below (written after I wrote a first version of this post, but before I wrote the revised version you’re now reading) shows that if $\mathcal\left\{E\right\}$ has finite coproducts then $\sim \sim$ is a congruence for them; that is, if ${S}_{1}\sim {S}_{2}S_1 \sim S_2$ and ${T}_{1}\sim {T}_{2}T_1 \sim T_2$ then ${S}_{1}+{T}_{1}\sim {S}_{2}+{T}_{2}S_1 + T_1 \sim S_2 + T_2$. (Here $++$ is the coproduct in $\mathrm{Endo}\left(ℰ\right)\mathbf\left\{Endo\right\}\left(\mathcal\left\{E\right\}\right)$, which comes from the coproduct in $\mathcal\left\{E\right\}$ in the obvious way.) So the coproduct structure on endomorphisms induces a binary operation $\vee \vee$ on the set of invertible spectral values, satisfying
$\mathrm{Spec}\prime \left(S\oplus T\right)=\mathrm{Spec}\prime \left(S\right)\vee \mathrm{Spec}\prime \left(T\right). Spec\text{'}\left(S \oplus T\right) = Spec\text{'}\left(S\right) \vee Spec\text{'}\left(T\right). $
In the case $ℰ=\mathrm{FDVect}\mathcal\left\{E\right\} = \mathbf\left\{FDVect\right\}$, this is the union of finite subsets-with-multiplicity of ${k}^{×}k^\times$ (adding multiplicities). And in general, the algebraic properties of coproduct imply that $\vee \vee$ gives the set of invertible spectral values the structure of a commutative monoid.
Similarly, condition 2 implies that the automorphism group of $\mathrm{Endo}\left(ℰ\right)\mathbf\left\{Endo\right\}\left(\mathcal\left\{E\right\}\right)$ acts on the set of invertible spectral values; and since automorphisms preserve coproducts (if they exist), it acts by monoid homomorphisms.
We can now ask what this general definition produces for other categories. I’ve only just begun to think about this, and only in one particular case: when $\mathcal\left\{E\right\}$ is $\mathrm{FinSet}\mathbf\left\{FinSet\right\}$, the category of finite sets.
I believe the category of endomorphisms in $\mathrm{FinSet}\mathbf\left\{FinSet\right\}$ has no nontrivial automorphisms over $\mathrm{FinSet}\mathbf\left\{FinSet\right\}$. After all, given an endomorphism $TT$ of a finite set $XX$, what natural ways are there of producing another endomorphism of $XX$? There are only the powers ${T}^{n}T^n$, I think, and the process $T↦{T}^{n}T \mapsto T^n$ is only invertible when $n=1n = 1$.
So, condition 2 is trivial. We’re therefore looking for the smallest equivalence relation on $\left\{\text{endomorphisms of finite sets}\right\}\\left\{ \text\left\{endomorphisms of finite sets\right\} \\right\}$ such that $g\circ f\sim f\circ gg \circ f \sim f \circ g$ for all maps $ff$ and $gg$ pointing in opposite directions. I believe, but haven’t proved, that ${T}_{1}\sim {T}_{2}T_1 \sim T_2$ if and only if ${T}_{1}T_1$ and ${T}_{2}T_2$ have the same number of cycles
${x}_{1}↦{x}_{2}↦\cdots ↦{x}_{p}↦{x}_{1} x_1 \mapsto x_2 \mapsto \cdots \mapsto x_p \mapsto x_1 $
of each period $pp$. Thus, the invertible spectral values of $\mathrm{FinSet}\mathbf\left\{FinSet\right\}$ are the finite sets-with-multiplicity of positive integers, and if $TT$ is an endomorphism of a finite set then $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ is the set-with-multiplicities of periods of cycles of $TT$.
All of the above is a record of thoughts I had in spare moments at this workshop I just attended in Louvain-la-Neuve, so I haven’t had much time to reflect. I’ve noted where I’m not sure of the facts, but I’m also not sure of the aesthetics:
In other words, do the theorem and definition above represent the best approach? Here are three quite specific reservations:
1. I’m not altogether satisfied with the fact that it’s the invertible spectrum, rather than the full spectrum, that comes out. Perhaps there’s something to be done with the observation that if you know the invertible spectrum, then knowing the full spectrum is equivalent to knowing (the dimension of) the space that your operator acts on.
2. Condition 2 of the theorem states that $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ determines $\mathrm{Spec}\prime \left(T+\lambda I\right)Spec\text{'}\left(T + \lambda I\right)$ for an operator $TT$ on a known space (and, of course, for known $\lambda \right)\lambda\right)$. That was enough to prove the theorem. But there’s also a much stronger true statement: $\mathrm{Spec}\prime \left(T\right)Spec\text{'}\left(T\right)$ determines $\mathrm{Spec}\prime \left(p\left(T\right)\right)Spec\text{'}\left(p\left(T\right)\right)$ for any polynomial $pp$ over $kk$ (again, for an operator $TT$ on a known space). Any polynomial $pp$ gives an endomorphism $T↦p\left(T\right)T \mapsto p\left(T\right)$ of $\mathrm{Endo}\left(\mathrm{FDVect}\right)\mathbf\left\{Endo\right\}\left(\mathbf\left\{FDVect\right\}\right)$ over $\mathrm{FDVect}\mathbf\left\{FDVect\right\}$, and I guess these are the only endomorphisms. So, we could generalize condition 2 by using endomorphisms rather than automorphisms of $\mathrm{Endo}\left(ℰ\right)\mathbf\left\{Endo\right\}\left(\mathcal\left\{E\right\}\right)$. Should we?
### Tommaso Dorigo - Scientificblogging
Kick-Off Meeting Of AMVA4NewPhysics
The European network I am coordinating will have its kick-off meeting at CERN on September 16th. This will be a short event where we give a sort of "orientation" to the participants, in terms of who we are, what we have to deliver, how we plan to do it. It is not a redundant proposition, as the AMVA4NewPhysics programme is quite varied: it includes two big experiments (ATLAS and CMS), plus two Statistics institutes, and several industrial partners; it will organize workshops in statistics, outreach, soft skills, and software tools such as MatLab, RooStats, Madgraph; and it will send our 10 students flying around like businessmen.
### ZapperZ - Physics and Physicists
A Physics App To Teach Physics
A group of educational researcher has created an app for iOS, Android, PCs, and Macs, that teaches physics to 9-graders.
The app, Exploring Physics, is meant to take particular physics curriculum already being taught in a number of public school districts, including Columbia's, and make it available digitally. The Exploring Physics curriculum app is designed to replace traditional lecture-based learning with discussions and hands-on experiments.
“The idea in the app is to have students learn by doing stuff,” said Meera Chandrasekhar, the co-creator of the app and a curators' teaching professor in the MU Department of Physics and Astronomy. “Even though it’s a digital app, it actually involves using quite a lot of hands-on materials.”
I haven't look at it. If any of you have, and better still, is using it, I very much like to hear your opinion.
Zz.
## September 13, 2015
### John Baez - Azimuth
Biology, Networks and Control Theory
The Institute for Mathematics and its Applications (or IMA, in Minneapolis, Minnesota), is teaming up with the Mathematical Biosciences Institute (or MBI, in Columbus, Ohio). They’re having a big program on control theory and networks:
### At the IMA
Here’s what’s happening at the Institute for Mathematics and its Applications:
Concepts and techniques from control theory are becoming increasingly interdisciplinary. At the same time, trends in modern control theory are influenced and inspired by other disciplines. As a result, the systems and control community is rapidly broadening its scope in a variety of directions. The IMA program is designed to encourage true interdisciplinary research and the cross fertilization of ideas. An important element for success is that ideas flow across disciplines in a timely manner and that the cross-fertilization takes place in unison.
Due to the usefulness of control, talent from control theory is drawn and often migrates to other important areas, such as biology, computer science, and biomedical research, to apply its mathematical tools and expertise. It is vital that while the links are strong, we bring together researchers who have successfully bridged into other disciplines to promote the role of control theory and to focus on the efforts of the controls community. An IMA investment in this area will be a catalyst for many advances and will provide the controls community with a cohesive research agenda.
In all topics of the program the need for research is pressing. For instance, viable implementations of control algorithms for smart grids are an urgent and clearly recognized need with considerable implications for the environment and quality of life. The mathematics of control will undoubtedly influence technology and vice-versa. The urgency for these new technologies suggests that the greatest impact of the program is to have it sooner rather than later.
First trimester (Fall 2015): Networks, whether social, biological, swarms of animals or vehicles, the Internet, etc., constitute an increasingly important subject in science and engineering. Their connectivity and feedback pathways affect robustness and functionality. Such concepts are at the core of a new and rapidly evolving frontier in the theory of dynamical systems and control. Embedded systems and networks are already pervasive in automotive, biological, aerospace, and telecommunications technologies and soon are expected to impact the power infrastructure (smart grids). In this new technological and scientific realm, the modeling and representation of systems, the role of feedback, and the value and cost of information need to be re-evaluated and understood. Traditional thinking that is relevant to a limited number of feedback loops with practically unlimited bandwidth is no longer applicable. Feedback control and stability of network dynamics is a relatively new endeavor. Analysis and control of network dynamics will occupy mostly the first trimester while applications to power networks will be a separate theme during the third trimester. The first trimester will be divided into three workshops on the topics of analysis of network dynamics and regulation, communication and cooperative control over networks, and a separate one on biological systems and networks.
The second trimester (Winter 2016) will have two workshops. The first will be on modeling and estimation (Workshop 4) and the second one on distributed parameter systems and partial differential equations (Workshop 5). The theme of Workshop 4 will be on structure and parsimony in models. The goal is to explore recent relevant theories and techniques that allow sparse representations, rank constrained optimization, and structural constraints in models and control designs. Our intent is to blend a group of researchers in the aforementioned topics with a select group of researchers with interests in a statistical viewpoint. Workshop 5 will focus on distributed systems and related computational issues. One of our aims is to bring control theorists with an interest in optimal control of distributed parameter systems together with mathematicians working on optimal transport theory (in essence an optimal control problem). The subject of optimal transport is rapidly developing with ramifications in probability and statistics (of essence in system modeling and hence of interest to participants in Workshop 4 as well) and in distributed control of PDE’s. Emphasis will also be placed on new tools and new mathematical developments (in PDE’s, computational methods, optimization). The workshops will be closely spaced to facilitate participation in more than one.
The third trimester (Spring 2016) will target applications where the mathematics of systems and control may soon prove to have a timely impact. From the invention of atomic force microscopy at the nanoscale to micro-mirror arrays for a next generation of telescopes, control has played a critical role in sensing and imaging of challenging new realms. At present, thanks to recent technological advances of AFM and optical tweezers, great strides are taking place making it possible to manipulate the biological transport of protein molecules as well as the control of individual atoms. Two intertwined subject areas, quantum and nano control and scientific instrumentation, are seen to blend together (Workshop 6) with partial focus on the role of feedback control and optimal filtering in achieving resolution and performance at such scales. A second theme (Workshop 7) will aim at control issues in distributed hybrid systems, at a macro scale, with a specific focus the “smart grid” and energy applications.
• Workshop 1, Distributed Control and Decision Making Over Networks, 28 September – 2 October 2015.
• Workshop 2, Analysis and Control of Network Dynamics, 19-23 October 2015.
• Workshop 3, Biological Systems and Networks, 11-16 November 2015.
• Workshop 4, Optimization and Parsimonious Modeling, 25-29 January 2016.
• Workshop 5, Computational Methods for Control of Infinite-dimensional Systems, 14-18 March 2016.
• Workshop 6, Quantum and Nano Control, 11-15 April 2016.
• Workshop 7, Control at Large Scales: Energy Markets and Responsive Grids, 9-13 March 2016.
### At the MBI
Here’s what’s going on at the Mathematical Biology Institute:
The MBI network program is part of a yearlong cooperative program with IMA.
Networks and deterministic and stochastic dynamical systems on networks are used as models in many areas of biology. This underscores the importance of developing tools to understand the interplay between network structures and dynamical processes, as well as how network dynamics can be controlled. The dynamics associated with such models are often different from what one might traditionally expect from a large system of equations, and these differences present the opportunity to develop exciting new theories and methods that should facilitate the analysis of specific models. Moreover, a nascent area of research is the dynamics of networks in which the networks themselves change in time, which occurs, for example, in plasticity in neuroscience and in up regulation and down regulation of enzymes in biochemical systems.
There are many areas in biology (including neuroscience, gene networks, and epidemiology) in which network analysis is now standard. Techniques from network science have yielded many biological insights in these fields and their study has yielded many theorems. Moreover, these areas continue to be exciting areas that contain both concrete and general mathematical problems. Workshop 1 explores the mathematics behind the applications in which restrictions on general coupled systems are important. Examples of such restrictions include symmetry, Boolean dynamics, and mass-action kinetics; and each of these special properties permits the proof of theorems about dynamics on these special networks.
Workshop 2 focuses on the interplay between stochastic and deterministic behavior in biological networks. An important related problem is to understand how stochasticity affects parameter estimation. Analyzing the relationship between stochastic changes, network structure, and network dynamics poses mathematical questions that are new, difficult, and fascinating.
In recent years, an increasing number of biological systems have been modeled using networks whose structure changes in time or which use multiple kinds of couplings between the same nodes or couplings that are not just pairwise. General theories such as groupoids and hypergraphs have been developed to handle the structure in some of these more general coupled systems, and specific application models have been studied by simulation. Workshop 3 will bring together theorists, modelers, and experimentalists to address the modeling of biological systems using new network structures and the analysis of such structures.
Biological systems use control to achieve desired dynamics and prevent undesirable behaviors. Consequently, the study of network control is important both to reveal naturally evolved control mechanisms that underlie the functioning of biological systems and to develop human-designed control interventions to recover lost function, mitigate failures, or repurpose biological networks. Workshop 4 will address the challenging subjects of control and observability of network dynamics.
#### Events
Workshop 1: Dynamics in Networks with Special Properties, 25-29 January, 2016.
Workshop 2: The Interplay of Stochastic and Deterministic Dynamics in Networks, 22-26 February, 2016.
Workshop 3: Generalized Network Structures and Dynamics, 21-15 March, 2016.
Workshop 4: Control and Observability of Network Dynamics, 11-15 April, 2016.
You can get more schedule information on these posters:
### Clifford V. Johnson - Asymptotia
Face the Morning…
With the new semester and a return to the routine of campus life comes taking the subway train regularly in the morning again, which I'm pleased to return to. It means odd characters, snippets of all sort of conversations, and - if I get a seat and a good look - the opportunity to practice a bit of quick sketching of faces. I'm slow and rusty from no recent regular practice, so I imagine that it was mostly luck that helped me get a reasonable likeness [...] Click to continue reading this post
The post Face the Morning… appeared first on Asymptotia.
|
{}
|
# Constant Circular Motion Not Really Constant
John Mohr
I was pondering my practice of talking about circular motion in the horizontal direction and the vertical direction. But I'd often would see in books, notes, and the internet that we assume constant velocity for the vertical case. However, when one thinks about the forces at different points in the circular path we can see from the free body diagrams a net force that may either 'assist' or 'resist' the motion of the object circulating. This would mean that the object would speed up slightly as it descends with the assistance of gravity and slow down slightly with the ascending against gravity. I suppose my thought is that the books, notes, and internet might mean the average speed remains constant as it continues to circulate.
Any thoughts and/or confirmation?
Mentor
How would you analyze it for the more accurate case you have identified?
andresB
The terminology used is weird to me. I would not use the word constant but uniform. And the velocity is not constant, the speed is.
Now, it is true that in a vertical circular motion, like the one of a mass attached to a string, if the only forces are the weight and the tension then the speed will not be constant.
That does not mean you can't have a uniform vertical circular motion, it would just require some compensation forces in order to to have a constant speed/centripetal force.
PeroK
Homework Helper
Gold Member
To add to what @andresB said, for a mass going around a vertical circle in uniform circular motion, the tension in the string must vary both in magnitude and in direction. Think about it this way.
1. A mass going around a circle at constant speed has constant angular momentum.
2. If the circle is vertical, there is the varying external torque of gravity acting on it.
3. Therefore to keep the angular momentum and hence the speed constant, another external torque must compensate for the gravity torque. This torque can only come from the string.
4. However, a string attached to a fixed point can exert only a radial force which means zero torque. So ##\dots## how does one get out of this?
Answer: Swing a mass at the end of string in a vertical circle and see what your hand holding the string does.
Gold Member
But I'd often would see in books, notes, and the internet that we assume constant velocity for the vertical case.
That is not consistent with my experience. In most of the treatments and examples I see, the motion in vertical circles is not uniform circular motion.
nasu
John Mohr
How would you analyze it for the more accurate case you have identified?
Thank you to everyone for your comments. I’m sorry I didn’t include a diagram earlier. I got home and finally had a chance to draw a diagram of what I mean.
#### Attachments
• Note Feb 26, 2021(1).pdf
731.5 KB · Views: 71
Last edited:
But I'd often would see in books, notes, and the internet that we assume constant velocity for the vertical case.
Can you give an example? It depends on the scenario. For a spinning wheel the speed on the circumference can be constant in the vertical case.
PeroK
Mentor
Thank you to everyone for your comments. I’m sorry I didn’t include a diagram earlier. I got home and finally had a chance to draw a diagram of what I mean. View attachment 278771
Let's see your Newton's 2nd Law equations for this.
Homework Helper
Gold Member
Thank you to everyone for your comments. I’m sorry I didn’t include a diagram earlier. I got home and finally had a chance to draw a diagram of what I mean. View attachment 278771
Your diagram is faulty. You show two external forces, tension T (blue) and weight W (green). OK. Then you show net force FNet in red. Not OK. The net force must be the vector sum of the blue and green arrows, reasonably drawn to scale. It is not. Examples
At the due East position there should be a read arrow pointing down and to the left. There is none. Why not? Similarly at the due West position.
At the due North position again you show no red arrow. Why not? Both the tension and the weight point down, therefore their red arrow sum must also point down. Similarly at the due South position except that the blue and green arrows point in opposite directions. However, they cannot cancel each other out because there must be net centripetal force to make the mass follow the circular path.
Having said all that, I am not sure under what conditions you drew your diagram. One possibility is that the mass is given a good kick at the due South position and more than enough speed to go around the loop. The other possibility is that the string is replaced by a light rod with some kind of driving mechanism at the center that causes the mass to go around the circle at constant speed. That's the Ferris wheel analogue. The third possibility of a mass at the end of a string going around the circle of fixed center at constant speed is not a possibility for reasons that I explained in post #4.
Gold Member
2022 Award
I think with "vertical circular motion" you mean a pendulum. Its equation of motion is
$$m a \ddot{\theta}=-m g \sin \theta \; \Rightarrow \; \ddot{\theta}=-\frac{g}{a} \sin \theta.$$
The solution is an elliptic function and the angular velocity, ##\dot{\theta}## is for sure not constant.
You can see this of course already in the small-angle approximation, i.e., for ##|\theta|\ll 1##, where you can approximate ##\sin \theta \simeq \theta##, from which you get
$$\ddot{\theta}=-\omega^2 \theta, \quad \omega=\frac{g}{a}.$$
The solution is
$$\theta(t)=A \cos(\omega t + \varphi),$$
and the angular velocity is
$$\dot{\theta}(t)=-A \omega \sin(\omega t+\varphi).$$
John Mohr
Ok, so I will do my best to address each person's comment the best I can.
But, perhaps to start, I thought a visual example might be helpful to explain what I've seen before. I know there is a whole lot more going on with the physics on this example but I'm hoping the general idea will be seen. When we see gymnasts doing 'giants' on a high bar...
...it can be seen that they slow down at the top of their swing. In a thought experiment too - if one was to pump your legs hard enough on a playground swing and you could avoid having the chain becoming slack and falling down, one could swing so hard that you'd go right around. But, just beyond the minimum speed you would likely be slower at the top then speed up at the bottom. Of course, in this scenario you would need a different mechanism where the chains attach so they wouldn't wrap around.
Your diagram is faulty. You show two external forces, tension T (blue) and weight W (green). OK. Then you show net force FNet in red. Not OK. The net force must be the vector sum of the blue and green arrows, reasonably drawn to scale. It is not.
For this diagram, I tried to think about the 2D vector addition and how long the Fnet would be. Though my scales are approximate I know it's not perfect. But the Fnet is supposed to be the sum of T and W is either is translated to the tip of the other.
At the due North position again you show no red arrow. Why not?
I know also that I didn't draw in the Fnet for all positions of the rotating object (N, S, E, and W) But certainly these could be shown too. The only two which I'd think would not assist or resist would be the N and S.
I think with "vertical circular motion" you mean a pendulum.
I think vanhees71 may be onto something here, but the math is something I'd need to brush up on or learn. Apologies, I teach at the high school level and it's been awhile. :)
Can you give an example?
On this document practice problem #1talks about a ball swung in a vertical circle at 5 m/s. And on this page if one searches for the word 'constant' it appears four times with the context being vertical circular motion and objects going at a constant speed.
Let's see your Newton's 2nd Law equations for this.
Sorry Chestermiller, I wasn't sure what you were wanting for this. Maybe I could analyze one or two of the positions on the path to show this. I'll try to put this together.
On this document practice problem #1talks about a ball swung in a vertical circle at 5 m/s.
It is "swung", so it doesn't just move unter gravity.
And on this page if one searches for the word 'constant' it appears four times with the context being vertical circular motion and objects going at a constant speed.
It is "whirled", so it doesn't just move unter gravity. It also states that the speed changes.
John Mohr
View attachment 278860
...It also states that the speed changes.
Hey that's right A.T., your right! I didn't see that before. Very cool - so it does say something about this. And from my own reference too.
PeroK
Gold Member
2022 Award
I think vanhees71 may be onto something here, but the math is something I'd need to brush up on or learn. Apologies, I teach at the high school level and it's been awhile. :)
Are you implying that at high schools nowadays you don't treat the mathematical pendulum anymore? It's sooooo sad!
Mentor
Sorry Chestermiller, I wasn't sure what you were wanting for this. Maybe I could analyze one or two of the positions on the path to show this. I'll try to put this together.
You don't need to analyze one or two positions. I'm asking you to write down the force balances in the radial and tangential directions (or, equally acceptable, the vector force balance in terms of unit vectors in the radial and tangential directions). Or, equally acceptable, the conservation of energy equation involving the kinetic energy and potential energy of the object. I would like to see some equations and a token effort to model this problem.
John Mohr
You don't need to analyze one or two positions. I'm asking you to write down the force balances in the radial and tangential directions (or, equally acceptable, the vector force balance in terms of unit vectors in the radial and tangential directions). Or, equally acceptable, the conservation of energy equation involving the kinetic energy and potential energy of the object. I would like to see some equations and a token effort to model this problem.
Hello Chestermiller, I've given the problem some thought and here's what I've come up with.
How does this seem to you?
Homework Helper
Gold Member
2022 Award
Hello Chestermiller, I've given the problem some thought and here's what I've come up with.
View attachment 278904
How does this seem to you?
What goes up must slow down!
Mentor
Hello Chestermiller, I've given the problem some thought and here's what I've come up with.
View attachment 278904
How does this seem to you?
Very good. So we have$$mr\left(\frac{d\theta}{dt}\right)^2=T+mg\sin{\theta}$$and
$$mr\frac{d^2\theta}{dt^2}=-mg\cos{\theta}$$
Do you agree?
vanhees71
John Mohr
Yes, I think I see it. Thank you Chestermiller, this has certainly helped me see the vertical case of circular motion with better clarity. It's been 25 years since I've done any calculus so I had to watch a video relating to the proof, but I think I got it. Thanks again for getting me to think deeper.
Mentor
Yes, I think I see it. Thank you Chestermiller, this has certainly helped me see the vertical case of circular motion with better clarity. It's been 25 years since I've done any calculus so I had to watch a video relating to the proof, but I think I got it. Thanks again for getting me to think deeper.
So you're not interested in seeing what the solution to these equations looks like?
John Mohr
So you're not interested in seeing what the solution to these equations looks like?
Sorry Chestermiller, I didn't mean to stop our conversation prematurely. Please, yes, I would be interested in the solution you have proposed.
|
{}
|
Insecurities in robotics are not just in the robots themselves, they are also in the whole supply chain. The tremendous growth and popularity of collaborative robots have over the past years introduced flaws in the –already complicated– supply chain, difficulting serving safe and secure robotics solutions.
This article builds upon a previous essay [1] and presents a series of thoughts and questions (most left unanswered and for future research). The aim is to question whether the current supply chain favors overall the end-user's security and safety.
The robotics supply chain
The robotics supply chain is generally organized as follows:
graph LR; M[Manufacturer] --> D[Distributor] D --> S[System Integrator] S --> U[End User]
Traditionally, Manufacturer, Distributor and System Integrator stakeholders were all into one single entity that served End users directly. This is the case of some of the biggest and oldest robot manufacturers including ABB or KUKA, among others.
Most recently, and specially with the advent of collaborative robots [2] and their insecurities [3], each one of these stakeholders acts independently, often with a blurred line between Distributor and Integrator. This brings additional complexity when it comes to responding to End User demands, or solving legal conflicts.
Companies like Universal Robots (UR) or Mobile Industrial Robots (MiR) represent best this fragmentation of the supply chain. When analyzed from a cybersecurity angle, one wonders: which of these approaches is more responsive and responsible when applying security mitigations? Does fragmentation difficult responsive reaction against cyber-threats? Are Manufacturers like Universal Robots pushing the responsibility and liabilities to their Distributors and the subsequent Integrators by fragmenting the supply chain? What are the exact legal implications of such fragmentation?
Stakeholders of the robotics supply chain
Some of the stakeholders of both the new and the old robotics supply chains are captured and defined in the figure below:
Not much to add. The diagram above is far from complete. There're indeed more players but these few allow one to already reason about the present issues that exist in the robotics supply chain.
The 'new' supply chain in robotics
It really isn't new. The supply chain (and GTM straregy) presented by vendors like UR or MiR (both owned by Teradyne) was actually inspired by many others, across industries, yet, it's certainly been growing in popularity over the last years in robotics. In fact, one could argue that the popularity of collaborative robots is related to this change in the supply chain, where many stakeholders contributed to the spread of these new technologies.
This supply chain is depicted below, where a series of security-related interactions are captured:
The diagram presents several sub-cases, each deals with scenarios that may happen when robots present cybersecurity flaws. Beyond the interactions, what's outstanding is the more than 20 legal questions related to liabilities and responsibility that came up. This, in my opinion, reflects clearly the complexity of the current supply chain in robotics, and the many compromises one needs to assume when serving, distributing, integrating, or operating a robot.
What's more scary, is that most of the stakeholders involved in the supply chain I interact with ignore their responsibilities (different reasons, from what I can see). The security angle in here is critical. Security mitigations need to be supplied all the way down to the end-user products, otherwise, it'll lead to hazards.
While I am not a laywer, my discussions with lawyers on this topic made me believe that there's lack of legal frameworks and/or clear answers in Europe for most of these questions. Morever, the lack of security awareness from many of the stakeholders involved [2:1] is not only compromising intermediaries (e.g. Distributors and System Integrators), but ultimately exposing end-users to risks.
Altogether, I strongly believe this 'new' supply chain and the clear lack of security awareness and reactions leads to a compromised supply chain in robotics. I'm listing below a few of the most relevant (refer to the diagram above for all of them) cybersecurity-related questions raised while building the figure above reasoning on the supply chain:
• Who is responsible (across the supply chain) and what are the liabilities if as a result of a cyber-attack there is human harm for a previously not known (or reported) flaw for a particular manufacturers's technology?[4]
• Who is responsible (across the supply chain) and what are the liabilities if as a result of a cyber-attack there is a human harm for a known and disclosed but not mitigated flaw for a particular manufacturers's technology?
• Who is responsible (across the supply chain) and what are the liabilities if as a result of a cyber-attack there is a human harm for a known, disclosed and mitigated flaw, yet not patched?
• What happens if the harm is environmental?
• And if there is no harm? Is there any liability for the lack of responsible behavior in the supply chain?
• What about researchers? are they allowed to freely incentivate security awareness by ethically disclosing their results? (which you'd expect when one discovers something)
• Can researchers collect insecurity evidence to demonstrate non-responsible behavior without liabilities?
While I can't answer most of this now, I hope I will in the short future.
So, what's better, fragmentation or the lack of it?
I see a huge growth through fragmentation yet, still, reckon that the biggest and most successful robotics companies out there tend to integrate it all.
What's clear to me is that fragmentation of the supply chain (or the 'new' supply chain) presents clear challenges for cybersecurity. Maintaining security in a fragmented scenario is more challenging, requires more resources and a well coordinated and often distributed series of actions (which by reason is tougher).
fragmentation of the supply chain (or the 'new' supply chain) presents clear challenges from a security perspective.
So what's better from a security angle? I don't know. I really don't. My team and I at Alias Robotics are still collecting data and slowly disclosing while cooperating with vendors. What's clear is that much needs to be done to improve the current robotics supply chain and prepare it for the upcoming cyber-threats.
Investing in robot cybersecurity by either building your own security team or relying on external support is a must.
References
1. Mayoral-Vilches, V. Vulnerability coordination and disclosure in robotics. Cybersecurity and Robotics. Retrieved from /vulnerability-coordination-and-disclosure-in-robotics/ ↩︎
2. Mayoral-Vilches, V. Universal Robots cobots are not secure. Cybersecurity and Robotics. Retrieved from /security-universal-robots/ ↩︎ ↩︎
3. Mayoral-Vilches, V. More than 100 companies use vulnerable collaborative robots. Cybersecurity and Robotics. Retrieved from /companies-use-vulnerable-collaborative-robots/ ↩︎
4. Note this questions covers both, 0-days and known flaws that weren't previously reported. ↩︎
|
{}
|
# If the square of a time series is stationary, is the original time series stationary?
I found a solution that stated that if the square of a time series is stationary, so is the original time series, and vice-versa. However I don't seem able to prove it, anyone has an idea if this is true, and if it is how to derive it?
That conjecture is false. A simple counter-example is the deterministic time-series $$X_t = (-1)^t$$ over times $$t \in \mathbb{Z}$$. This time series is not even mean stationary, but its square is strictly stationary.
• @Firebug The mean isn't zero. The mean is $-1$ for odd $t$ and $1$ for even. – Acccumulation May 13 at 17:10
|
{}
|
Batch Files
LinkList 2023 does not support wildcards or any other method of converting multiple files at a time. However, it is easy to automate the conversion process using batch files on both Windows and Linux that do support multiple files and processing subdirectories.
Windows
The following batch file will procesws all DWG files in the current directory and any subdirectories and create an XML file with the extracted content. The batch process calls processFile to convert each dwg file found in the current directory. It then loops through each subdirectory, recursively calling treeProcess to convert the files in the subdirectories.
set exepath=c:\path\to\exe\linklist2020
call :treeProcess
exit /b
:treeProcess
for %%f in (*.dwg) do call :processFile "%%f"
for /D %%d in (*) do (
cd %%d
call :treeProcess
cd ..
)
exit /b
:processFile
set fName=%1:.dwg=.xml%
del fName
%exepath% %1 -xml
exit /b
Linux
Linux batch file operation is the same as on Windows, just with a different syntax. The following batch file will convert all DWG files in the current directory and any subdirectories and create an XML file with the extracted content. The batch process calls processFile to convert each dwg file found in the current directory. It then loops through each subdirectory, recursively calling treeProcess to convert the files in the subdirectories.
#!/bin/bash
processFile(){
$exepath "$1" -xml
}
treeProcess(){
for file in *.dwg; do
if [[ -f $file ]]; then processFile "$file"
fi
done
for dir in find * -type d; do
if [[ $dir != "." ]]; then if [[$dir != ".." ]]; then
cd \$dir
treeProcess
cd ..
fi
fi
done
}
exepath=/home/company/path/to/linklist2020
treeProcess
Last updated on 16 Jul 2020
Published on 22 Mar 2020
|
{}
|
## Seminars and Colloquia by Series
Wednesday, September 19, 2012 - 14:00 , Location: Skiles 005 , Selcuk Koyuncu , Drexel University , Organizer: Jeff Geronimo
Wednesday, September 12, 2012 - 14:00 , Location: Skiles 005 , Antonio Duran , University of Seville , Organizer: Jeff Geronimo
In this talk we discuss some nonlinear transformations between moment sequences. One of these transformations is the following: if (a_n)_n is a non-vanishing Hausdorff moment sequence then the sequence defined by 1/(a_0 ... a_n) is a Stieltjes moment sequence. Our approach is constructive and use Euler's idea of developing q-infinite products in power series. Some others transformations will be considered as well as some relevant moment sequences and analytic functions related to them. We will also propose some conjectures about moment transformations defined by means of continuous fractions.
Wednesday, September 5, 2012 - 14:00 , Location: Skiles 005 , Raphael Clouatre , Indiana University , Organizer:
The classification theorem for a C_0 operator describes its quasisimilarity class by means of its Jordan model. The purpose of this talk will be to investigate when the relation between the operator and its model can be improved to similarity. More precisely, when the minimal function of the operator T can be written as a product of inner functions satisfying the so-called (generalized) Carleson condition, we give some natural operator theoretic assumptions on T that guarantee similarity.
Wednesday, August 29, 2012 - 14:00 , Location: Skiles 005 , Greg Knese , University of Alabama , Organizer: Jeff Geronimo
Using integral formulas based on Green's theorem and in particular a lemma of Uchiyama, we give simple proofs of comparisons of different BMO norms without using the John-Nirenberg inequality while we also give a simple proof of the strong John-Nirenberg inequality. Along the way we prove the inclusions of BMOA in the dual of H^1 and BMO in the dual of real H^1. Some difficulties of the method and possible future directions to take it will be suggested at the end.
Friday, May 4, 2012 - 11:00 , Location: Skiles 006 , Professor Bernard Chevreau , University of Bordeaux 1 , Organizer:
In the first part of the talk we will give a brief survey of significant results going from S. Brown pioneering work showing the existence of invariant subspaces for subnormal operators (1978) to Ambrozie-Muller breakthrough asserting the same conclusion for the adjoint of a polynomially bounded operator (on any Banach space) whose spectrum contains the unit circle (2003). The second part will try to give some insight of the different techniques involved in this series of results, culminating with a brilliant use of Carleson interpolation theory for the last one. In the last part of the talk we will discuss additional open questions which might be investigated by these techniques.
Wednesday, April 25, 2012 - 15:30 , Location: Skiles 005 , Konstantin Oskolkov , University of South Carolina , Organizer: Michael Lacey
Wednesday, April 25, 2012 - 15:30 , Location: Skiles 005 , Kabe Moen , University of Alabama , Organizer: Michael Lacey
Motivated by mappings of finite distortion, we consider degenerate p-Laplacian equations whose ellipticity condition is satisfied by thedistortion tensor and the inner distortion function of such a mapping. Assuming a certain Muckenhoupt type condition on the weightinvolved in the ellipticity condition, we describe the set of continuity of solutions.
Wednesday, April 18, 2012 - 14:00 , Location: Skiles 005 , Kelly Bickel , Washington University - St. Louis , Organizer:
It is well-known that every Schur function on the bidisk can be written as a sum involving two positive semidefinite kernels. Such decompositions, called Agler decompositions, have been used to answer interpolation questions on the bidisk as well as to derive the transfer function realization of Schur functions used in systems theory. The original arguments for the existence of such Agler decompositions were nonconstructive and the structure of these decompositions has remained quite mysterious. In this talk, we will discuss an elementary proof of the existence of Agler decompositions on the bidisk, which is constructive for inner functions. We will use this proof as a springboard to examine the structure of such decompositions and properties of their associated reproducing kernel Hilbert spaces.
Wednesday, April 11, 2012 - 14:00 , Location: Skiles 005 , Vladimir Eiderman , University of Wisconsin , Organizer: Michael Lacey
This is a joint work with F.~Nazarov and A.~Volberg.Let $s\in(1,2)$, and let $\mu$ be a finite positive Borel measure in $\mathbb R^2$ with $\mathcal H^s(\supp\mu)<+\infty$. We prove that if the lower $s$-density of $\mu$ is+equal to zero $\mu$-a.~e. in $\mathbb R^2$, then$\|R\mu\|_{L^\infty(m_2)}=\infty$, where $R\mu=\mu\ast\frac{x}{|x|^{s+1}}$ and $m_2$ is the Lebesque measure in $\mathbb R^2$. Combined with known results of Prat and+Vihtil\"a, this shows that for any noninteger $s\in(0,2)$ and any finite positive Borel measure in $\mathbb R^2$ with $\mathcal H^s(\supp\mu)<+\infty$, we have+$\|R\mu\|_{L^\infty(m_2)}=\infty$.Also I will tell about the resent result of Ben Jaye, as well as about open problems.
Monday, March 26, 2012 - 14:00 , Location: Skiles 114 , Dan Timotin , Indiana University and Mathematical Institute of Romania , Organizer:
Truncated Toeplitz operators, introduced in full generality by Sarason a few years ago, are compressions of multiplication operators on H^2 to subspaces invariant to the adjoint of the shift. The talk will survey this newly developing area, presenting several of the basic results and highlighting some intriguing open questions.
|
{}
|
# Dennis and Carmen problem with my trial solution , is it right?
## Homework Statement
Dennis and Carmen are standing on the edge of a cliff. Dennis throws a basketball vertically upward, and at the same time Carmen throws a basketball vertically downward with the same initial speed. You are standing below the cliff observing this strange behavior. Whose ball is moving fastest when it hits the ground?
v^2 = v0^2 -2gΔy
## The Attempt at a Solution
in the same speed cause when Dennis basketball back to the initial position "the throwing position" it will back with the same initial speed with different direction "the same direction as Carmen" so it's the same but it'll take Dennis basketball more time
Dennis
v0 is +
v^2 =+v0^2 -2gΔy
Carmen
v0 is -
v^2 =-v0^2 -2gΔy
-----
g =-9.8 for both
Δy is the same for both
so
Dennis v^2 =+v0^2
Carmen v^2 =-v0^2
v^2+v^2 =0
v^2 =-v^2
please tell me if that's right if not how it can be solve
thank you
## Answers and Replies
Related Introductory Physics Homework Help News on Phys.org
gneill
Mentor
## Homework Statement
Dennis and Carmen are standing on the edge of a cliff. Dennis throws a basketball vertically upward, and at the same time Carmen throws a basketball vertically downward with the same initial speed. You are standing below the cliff observing this strange behavior. Whose ball is moving fastest when it hits the ground?
v^2 = v0^2 -2gΔy
## The Attempt at a Solution
in the same speed cause when Dennis basketball back to the initial position "the throwing position" it will back with the same initial speed with different direction "the same direction as Carmen" so it's the same but it'll take Dennis basketball more time
Your logic is correct
Dennis
v0 is +
v^2 =+v0^2 -2gΔy
Carmen
v0 is -
v^2 =-v0^2 -2gΔy
Careful, the formula wants you to square the initial velocity. Carmen's initial velocity is -v0, and squared is (-v0)(-v0) = +v02.
As you can see, the initial velocity being negative does not change the result.
-----
g =-9.8 for both
Δy is the same for both
so
Dennis v^2 =+v0^2
Carmen v^2 =-v0^2
v^2+v^2 =0
v^2 =-v^2
If you consider that last line of math it represents an impossible situation, since you can't have any real number squared that turns out negative. The glitch can be traced back to squaring the initial velocity as I pointed out above.
You could also have used a conservation of energy approach (if you've covered that yet in your course).
1 person
thank you ,we have not cover it yet
so if I'm going to answer for that question in the exam i just write this formula "v^2 = v0^2 -2gΔy" and say
everything is the same for both cases " v0^2 -2gΔy"
then the final speed is the same
only or should i add something else ?
phinds
Gold Member
2019 Award
thank you ,we have not cover it yet
so if I'm going to answer for that question in the exam i just write this formula "v^2 = v0^2 -2gΔy" and say
everything is the same for both cases " v0^2 -2gΔy"
then the final speed is the same
only or should i add something else ?
Forget the math for a minute and just think about it this way. If the ball thrown upward has an initial velocity X, then when it gets back to the same point on its way down, what is its velocity at that point?
gneill
Mentor
thank you ,we have not cover it yet
so if I'm going to answer for that question in the exam i just write this formula "v^2 = v0^2 -2gΔy" and say
everything is the same for both cases " v0^2 -2gΔy"
then the final speed is the same
only or should i add something else ?
That's all you really need
Forget the math for a minute and just think about it this way. If the ball thrown upward has an initial velocity X, then when it gets back to the same point on its way down, what is its velocity at that point?
the same initial velocity that was it thrown with
|
{}
|
Search this author in Google Scholar
Articles: 1
### GC-Fusion frames
Methods Funct. Anal. Topology 16 (2010), no. 2, 112-119
In this paper we introduce the generalized continuous version of fusion frame, namely $gc$-fusion frame. Also we get some new results about Bessel mappings and perturbation in this case.
|
{}
|
# Homework Help: Integrationg over exp with two variables
1. Nov 11, 2013
### cutesteph
1. The problem statement, all variables and given/known data
f(x,y) = exp(-x^2 +xy -y^2)
transform with
x =(1/sqrt(2)) *(u – v), y = (1/sqrt(2))* (u + v) .
2. Relevant equations
Jacobian
3. The attempt at a solution
Jacobian = 1
f(u,v) = exp(-(u^2)/2 -(3v^2/2)
double integral f(u,v) du dv
the bounds would be x > 0 => ( u-v) >0 => u > v
and x < ∞ => u < ∞
v > 0 to v < ∞
I am lost on what to do next. If anyone can be as kind as to help, I would greatly appreciate it!
2. Nov 12, 2013
### tiny-tim
hi there cutesteph!
(try using the X2 button just above the Reply box )
what are your limits for x and y ?
i'll assume they're both from 0 to ∞
draw the region (in x,y), and mark a grid of lines of equal u and v
u goes from 0 to ∞
for each value of u, where does v go from and to?
3. Nov 12, 2013
### cutesteph
So the limits v from 0 to infinity and u from -v to v.
∫0 to∞ exp(-u2/2)∫-u to u exp(-3v2/2) dv du
4. Nov 12, 2013
### tiny-tim
isn't it the other way round?
5. Nov 12, 2013
### Ray Vickson
You never actually answered the question about the limits on x and y, and without your answer I cannot possibly tell what are the limits on u and v. However, you can determine the latter for yourself by noting that
$$u = \frac{x+y}{\sqrt{2}}, \; v = \frac{y-x}{\sqrt{2}}$$
If you know the ranges of x and y you can figure out the ranges on u and v.
|
{}
|
# intrinsic
1. ${\displaystyle x}$ is intrinsic to ${\displaystyle y}$ if ${\displaystyle x}$ is an important part of ${\displaystyle y}$. ${\displaystyle y}$ must always have ${\displaystyle x}$ as a part.
|
{}
|
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
Quine is a self-reproducing function or a computer program that will output its source source or itself. Terms used are for these programs are also self-replicating programs, self-reproducing programs or self-copying programs (courtesy of wikipedia).
Many programming languages have been known to do this, for example in Python, a simple Quine would look like this:
a= 'a=%b;print(a%%a)';print(a%a)
Similar to Python, quine can be written in R as well. Simple example would be:
a<-"a<-0;cat(sub(0,deparse(a),a))";cat(sub(0,deparse(a),a))
Example consists of two blocks; the first block contains the function that will perform the process of replication
"a<-0;cat(sub(0,deparse(a),a))"
And the second part contains the code that will be outputted.
cat(sub(0,deparse(a),a))
When we run the command, the script will return itself, revealing the complete input command.
Word quine primarily comes from the biology, precisely from the self-replication, and it consists of two parts, first part is the code that performs the replication and the second is the data that contains all the code, script, instructions to perform the replication process.
The value of variable a can hold basically any text or any additional information, since the function in R is using string manipulation functions sub and deparse.
Deparse is used to preserve the quotations in original input command and sub is used to get the the first and the second block of code.
Happy R-coding!!!
|
{}
|
# Force with zero acceleration [duplicate]
This question already has an answer here:
If I apply a force on a body which is kept against a wall, then the body will not move. The body is not moving means that its velocity is zero, and hence its acceleration is also zero. According to Newton's second law of motion, $$\ F = ma$$ If the acceleration is $0$, then $F = 0\ \text N$. It means that I'm not applying any force on the body, but how can it be if I'm pushing my hands against the body?
-
## marked as duplicate by Waffle's Crazy Peanut, dmckee♦Mar 25 '13 at 19:03
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
Possibly, You don't need an object between you and the wall. But simply, the same thing happens if you push the wall. – Waffle's Crazy Peanut Mar 25 '13 at 14:38
## 3 Answers
It means that the ball is pushing to the wall. And, due to Newton's third law, the wall also on the ball. The forces on the ball are then equal and opposite.
-
According to Newton's third law of motion, to every action there is an equal and opposite reaction. When I'm applying the force on the body which is kept against the wall, so I'm not just applying that force over the body but also on the wall, as a result it reacts by applying opposite force on me which is equal in magnitude to that of my force. Therefore, both the forces cancel each other out and the net force is 0. However, if I apply a force on the wall directly then the wall will exert an equal amount of force over me. These forces are not cancelling each other out. The reaction force is acting on me, whereas the action force is acting on the wall. Inertia is the property of an object which tends to keep the state unchanged, and mass is the measure of inertia of an object. Therefore, I'm more likely to be pushed by the wall backward and accelerate because my mass is very much less than the wall, whereas the wall wouldn't move from its place because it has a greater mass and hence greater inertia.
-
Acceleration appears if the resulting force is not zero. Your probe body experiences two forces: yours and the wall's that are in the opposite directions. They in sum give zero.
-
|
{}
|
# Two-way mixed effects ANOVA model
Consider the two-way ANOVA model with mixed effects : $$Y_{i,j,k} = \underset{M_{i,j}}{\underbrace{\mu + \alpha_i + B_j + C_{i,j}}} + \epsilon_{i,j,k},$$ with $\textbf{(1)}$ : $\sum \alpha_i = 0$, the random terms $B_j$, $C_{i,j}$ and $\epsilon_{i,j,k}$ are independent, $B_j \sim_{\text{iid}} {\cal N}(0, \sigma_\beta^2)$, $\epsilon_{i,j,k} \sim_{\text{iid}} {\cal N}(0, \sigma^2)$ ; and there are two possibilities for the random interactions : $\textbf{(2a)}$ : $C_{i,j} \sim_{\text{iid}} {\cal N}(0, \sigma_\gamma^2)$ or $\textbf{(2b)}$ : $C_{i,j} \sim {\cal N}(0, \sigma_\gamma^2)$ for all $i,j$, the random vectors $C_{(1:I), 1}$, $C_{(1:I), 2}$, $\ldots$, $C_{(1:I), J}$ are independent, and $C_{\bullet j}=0$ for all $j$ (which means that mean of each random vector $C_{(1:I), j}$ is zero).
Model $\textbf{(1)}$ + $\textbf{(2a)}$ is the one which is treated by the nlme/lme4 package in R or the PROC MIXED statement in SAS. Model $\textbf{(1)}$ + $\textbf{(2b)}$ is called the "restricted model", it satisfies in particular $M_{\bullet j} = \mu + B_j$. Do you think one of these two models is "better" (in which sense) or more appropriate than the other one ? Do you know whether it is possible to perform the fitting of the restricted model in R or SAS ? Thanks.
I will try to give an answer, but I am not sure if I understood your question correctly. Hence, first some clarification on what I tried to answer (as you will see, I am not mathematician/statistician).
We are talking about a classical split-plot design with the following factors: experimental unit $B$, repeated-measures factor $C$ (each experimental unit is observed under all levels of $C$), and fixed-effect factor $\alpha$ (each experimental unit is observed under only one level of $\alpha$; I am not sure why $\sum \alpha_i = 0$, but as there needs to be a fixed factor, it seems to be $\alpha$).
Model $\textbf{(1)}$ + $\textbf{(2a)}$ is the standard mixed-model with crossed-random effects of $B$ and $C$ and fixed effect $\alpha$.
Model $\textbf{(1)}$ + $\textbf{(2b)}$ is the standard split-plot ANOVA with a random effects for $B$, the repeated-measures factor $C$ and fixed effect $\alpha$.
That is, $\textbf{(1)}$ + $\textbf{(2a)}$ does not enforce/assumes a specific error strata, whereas $\textbf{(1)}$ + $\textbf{(2b)}$ enforces/assumes variance homogeneity and sphericity.
You could fit $\textbf{(1)}$ + $\textbf{(2a)}$ using lme4:
m1 <- lmer(y ~ alpha + (1|B) + (1|C))
You could fit $\textbf{(1)}$ + $\textbf{(2b)}$ using nlme:
m2 <- lmer(y ~ alpha * C, random = ~1|C, correlation = corCompSymm(form = ~1|C))
Notes:
• I don't believe your proposal for (1)+(2b) is correct. Nothing looks like the constraint $C_{\bullet j}=0$ in your model m2. Apr 24, 2012 at 5:00
• Hmm, I have to say that then, I dont get it. Can you clarify in a little less mathematical terms, what this constraint means? Apr 24, 2012 at 6:31
|
{}
|
# What's my number? [duplicate]
I am thinking of an integer $1,2$ or $3$. You can ask me only a single question to which I can reply "Yes", "No" or "I don't know". I will be completely honest. What will you ask me to figure out what number I am thinking about?
Note: Since there are an infinite number of solutions to this puzzle, I'll select the wittiest one / the one with most upvotes. (Because the one I came up with is kind of overly mathematical and a little too out of the box.)
• The possible duplicate, though very similar, has 4 possible answers ("maybe"), which in essence changes the answers. – AvZ Feb 8 '15 at 14:58
• "Maybe" can be changed to "I don't know" in almost all circumstances. Besides, there are enough answers on the other question that don't use "maybe" already. – mdc32 Feb 8 '15 at 15:03
• @AvZ I have a feeling this question is gonna be closed, perhaps you should accept my answer, and tag this as open-ended. – warspyking Feb 8 '15 at 15:06
• @warspyking Like I have said before, comments like that make me think you answer for the reputation only. – mdc32 Feb 8 '15 at 23:41
• @mdc32 My answer was the only understandable one here. Why just leave it unanswered? – warspyking Feb 9 '15 at 0:53
I'm going to think of a random number too, 1 or 2. If we multiply our numbers, will the product be greater than or equal to 3?
• No -> 1
• Yes -> 3
• I don't know -> 2
• Nice, this works. Let's see what others think. I am going to acxept this within a day or 2 if there aren't any other better answers. – AvZ Feb 8 '15 at 14:55
• @AvZ If there is a better answer I'll just have to get cleverer. – warspyking Feb 8 '15 at 14:56
• Is there any particular reason why people downvote "duplicatish" posts. – AvZ Feb 8 '15 at 17:44
• @AvZ Yes, because you didn't research enough. – warspyking Feb 8 '15 at 19:08
I'm thinking of either 1.5 or 2.5 . Is your number greater than mine?
• Yes - 3
• No - 1
• I don't know - 2
Is either of the following true:
• Your number is $1$.
• Your number is $2$, and the Riemann hypothesis is true?
If you say "yes", it's 1, if you say "I don't know", it's $2$, and if you say "no", it's $3$. And, if you say "yes" and I find out your number isn't $1$, I'm gonna want a proof.
(You could replace "Riemann hypothesis" by anything else the answerer would not know. For instance, "Is a random number I just thought of equal to $1$?")
• Glad you asked. Here – AvZ Feb 8 '15 at 17:43
Is the reciprocal of 2 minus your number greater than zero?
• If the number is 1, the answer is YES (as $\frac{1}{2-1}>0$ )
• If the number is 2, the answer is I DON'T KNOW (as $\frac{1}{2-2}$ is undefined )
• If the number is 3, the answer is NO (as $\frac{1}{2-3}<0$ )
• You generally consider the form $\frac{1}{0}=+\infty$ which is greater than $0$ – AvZ Feb 8 '15 at 14:34
• @AvZ I would consider $\frac{1}{0} = \pm\infty$, but the other answers are better anyway. – frodoskywalker Feb 8 '15 at 17:37
|
{}
|
# Thread: Ring problem with zero divisors
1. ## Ring problem with zero divisors
Does every ring with 1001 elements contain zero divisors?
2. Originally Posted by ashamrock415
Does every ring with 1001 elements contain zero divisors?
Yes, otherwise it'd be a field - a finite ring without zero divisors is a division ring and thus a field (this is not trivial!) - , but $\displaystyle 1001=7\cdot 11\cdot 13$ is not the power of a prime...
Tonio
|
{}
|
# How do I draw the following triangles in latex?
I would like to draw in tikz the following image:
I am not sure where to start. I thought of maybe having a node which is of triangle shape, and then positioning it in different places.
I started with this:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{shapes}
\begin{document}
\begin{tikzpicture}
\node[fill=white,shape=circle,draw=black, minimum size=4cm] (A) at (0,4.3) {A};
\node[fill=white,shape=circle,draw=black, minimum size=3cm] (A) at (0,3.2) {A};
\node[fill=white,shape=circle,draw=black, minimum size=2cm] (A) at (0,2.1) {A};
\node[fill=white,shape=circle,draw=black, minimum size=1cm] (A) at (0,1.1) {A};
\end{tikzpicture}
\end{document}
There are several things:
1. I thought of changing the circle shapes to triangles, but when I put "triangle" instead of "circle" it doesn't work -- even though I use the shapes library.
2. I want the letter A to appear above the shape, not in the middle.
3. I want a squiggly line to connect the top of all shapes. Any squiggly line would do fine.
I would define a triangle "node" using a pic (see section 18.2 of the tikz manual, version 3.0.1a). For the MWE, we need to specify the height of the triangle and the label, so the pic needs to accept two arguments. One way to do this is to define a "triangle" pic as:
\tikzset{
pics/triangle/.style args ={#1,#2}{% pic=triangle{label, height}
code = {
\draw(0,0)node[left]{$#1$} -- ++(#2/2,-#2) -- ++(-#2,0) -- cycle;
}
}
}
This is just "normal" tikz code that gets placed whenever you "call" the pic. So, for example, with this in place you could draw a triangle of height 3 and label S with:
\begin{tikzpicture}
\pic at (0,3) {triangle={S,3}};
\end{tikzpicture}
There are several other ways to "draw" pics. For example, you can also draw this triangle with \draw(0,3)pic{triangle={S,3}};.
The easiest way to draw your "squiggly lines" is probably using a "snake decoration" -- see section 48.3 of the manual.
Putting this all together you can produce the diagram
using the code
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{arrows.meta, decorations.pathmorphing}
\begin{document}
\tikzset{
triangle/.style = {draw=black, fill=brown!5, thick},
squiggle/.style = {decoration={snake, segment length=5mm}, decorate},
pics/triangle/.style args ={#1,#2}{% pic=triangle{label, height}
code = {
\draw[triangle] (0,0)node[left]{$#1$} -- ++(#2/2,-#2) -- ++(-#2,0) -- cycle;
}
}
}
\begin{tikzpicture}
\pic at (0,3) {triangle={S,3}};
\pic at (0,2) {triangle={P,2}};
\pic at (0,1) {triangle={P,1}};
\draw[squiggle](0,2) -- (0,3);
\draw[squiggle](0,1) -- (0,2);
\draw[-{Latex[open]}] (2,1.5) -- ++(2,0);% using arrows.meta
\pic at (6,3) {triangle={S,3}};
\pic at (6,2) {triangle={P,2}};
\pic at (6,1) {triangle={P,2}};
\pic at (6,0) {triangle={P,2}};
\pic at (6,-1.5) {triangle={P,0.5}};
\foreach \bot/\y in {-1.5/1.5, 0/1, 1/1, 2/1} {
\draw[squiggle](6,\bot) -- ++(0,\y);
}
\end{tikzpicture}
\end{document}
Partly to show how to do it, and partly to fine-tune the diagram (particularly the segment length for the snake), I have added some styling.
You called all the nodes A, which is probably not what you want, and you may use polygon instead of triangle. Putting a label on top of a node can be achieved by label=above:.
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{shapes}
\begin{document}
\begin{tikzpicture}
\node[fill=white,shape=regular polygon,regular polygon sides=3,draw=black,
minimum size=4cm,label=above:A,anchor=south] (A1) at (0,4.3) {};
\node[fill=white,shape=regular polygon,regular polygon sides=3,draw=black,
minimum size=3cm,label=above:A,anchor=south] (A2) at (0,3.2) [above]{};
\node[fill=white,shape=regular polygon,regular polygon sides=3,draw=black,
minimum size=2cm,label=above:A,anchor=south] (A3) at (0,2.1) [above]{};
\node[fill=white,shape=regular polygon,regular polygon sides=3,draw=black,
minimum size=1cm,label=above:A,anchor=south] (A4) at (0,1.1) [above]{};
\end{tikzpicture}
\end{document}
Now you can play with the coordinates. If you are always drawing the same thing just with different scales, you may consider using scope.
|
{}
|
Leaderboard submission due 11:59 pm, Thursday, February 19th. Code and report due 11:59 pm, Friday, February 20th.
# Decoding: Homework 2
Decoding is the problem of taking an input sentence in a foreign language:
honorables sénateurs , que se est - il passé ici , mardi dernier ?
and finding the best translation into a target language, according to a model:
honourable senators , what happened here on Tuesday ?
Your task is to find the most probable translation, given the foreign input, the translation likelihood model, and the English language model. We assume the traditional noisy channel decomposition:
We also assume that the distribution over all segmentations and all alignments is uniform. This means that there is no distortion model or segmentation model.
## Getting Started
To begin, download the Homework 2 starter kit. You may either choose to develop locally or on Penn’s servers. For the latter, we recommend using the Biglab machines, whose memory and runtime restrictions are much less stringent than those on Eniac. The Biglab servers can be accessed directly using the command ssh PENNKEY@biglab.seas.upenn.edu, or from Eniac using the command ssh biglab.
In the downloaded directory you will find a Python program called decode, which is an implementation of a simple stack decoder. The decoder translates monotonically — that is, without reordering the English phrases — and by default it also uses very strict pruning limits, which you can vary on the command line.
The provided decoder solves the search problem, but makes two assumptions, the first of which is that
This approximation means that you can use the dynamic programming Viterbi algorithm to find the best translation.
The second assumption is that there is no reordering of phrases during translation, therefore the reordering of source words can only be accomplished if the reordering is part of a memorized phrase pair. If word-for-word translations are all that are available for some sentence, the translations will always be in the source language order.
In the starter kit there is also the data directory which contains a translation model, a language model, and a set of input sentences to translate. Run the decoder using this command:
./decode > output
This loads the models and decodes the input sentences, storing the result in output. You can see the translations simply by looking at the file. To calculate their true model score, run the command:
./grade < output
This command computes the probability of the output sentences according to the model. It works by summing over all possible ways that the model could have generated the English from the French. In general this is intractable, but because the phrase dictionary is fixed and sparse, the specific instances here can be computed in a few minutes. It is still easier to do this exactly than it is to find the optimal translation. In fact, if you look at the grade script you may get some hints about how to do the assignment!
Improving the search algorithm in the decoder — for instance by enabling it to search over permutations of English phrases — should permit you to find more probable translations of the input French sentences than the ones found by the default system. This assignment differs from Homework 1, in that there is no hidden evaluation measure. The grade program will tell you the probability of your output, and whoever finds the most probable output will receive the most points.
## The Challenge
Your task for this assignment is to find the English sentence with the highest possible probability. Formally, this means your goal is to solve the problem:
where $f$ is a French sentence and $e$ is an English sentence. In the model we have provided you,
and
We will make the simplifying assumption that segmentation and reordering probabilities are uniform across all sentences, and hence constant. This results in a model whose probability density function does not sum to one. But from a practical perspective, it slightly simplifies the implementation without substantially harming empirical accuracy. This means that you only need consider the product of the phrase translation probabilities
where $\langle i,i' \rangle$ and $\langle j,j' \rangle$ index phrases in $\mathbf{f}$ and $\mathbf{e}$, respectively.
Unfortunately, even with all of these simplifications, finding the most probable English sentence is completely intractable! To compute it exactly, for each English sentence you would need to compute $p(\mathbf{f} \mid \mathbf{e})$ as a sum over all possible alignments with the French sentence: $p(\mathbf{f} \mid \mathbf{e}) = \sum_\mathbf{a} p(\mathbf{f},\mathbf{a} \mid \mathbf{e})$. A nearly universal approximation is to instead search for the English string together with a single alignment, $\mathop{\arg\,\max}\limits_{\mathbf{e},\mathbf{a}}~ p(\mathbf{f},\mathbf{a} \mid \mathbf{e}) \times p(\mathbf{e})$. This is the approach taken by the monotone default decoder.
Since this involves multiplying together many small probabilities, it is helpful to work in logspace to avoid numerical underflow. We instead solve for $\mathbf{e},\mathbf{a}$ that maximizes:
The default decoder already works with log probabilities, so it is not necessary for you to perform any additional conversion; you can simply work with the sum of the scores that the model provides for you. Note that since probabilities are always less than or equal to one, their equivalent values in logspace will always be negative or zero, respectively (you may notice that grade sometimes reports positive translation model scores; this is because the sum it computes does not include the large negative constant associated with the log probabilities of segmentation and reordering). Hence your translations will always have negative scores, and you will be looking for the one with the smallest absolute value. In other words, we have transformed the problem of finding the most probable translation into a problem of finding the shortest path through a large graph of possible outputs.
Under the phrase-based model we’ve given you, the goal is to find a phrase segmentation, translation of each resulting phrase, and permutation of those phrases such that the product of the phrase translation probabilities and the language model score of the resulting sentence is as high as possible. Arbitrary permutations of the English phrases are allowed, provided that the phrase translations are one-to-one and exactly cover the input sentence. Even with all of the simplifications we have made, this problem is still NP-Complete, so we recommend that you solve it using an approximate method like stack decoding, discussed in Chapter 6 of the textbook. In fact, the baseline score you must beat was achieved using stack decoding with reordering. You can trade efficiency for search effectiveness by implementing histogram pruning or threshold pruning, or by playing around with reordering limits as described in the textbook. Or, you might consider implementing other approaches to solving the search problem:
Several techniques used for the IBM Models (which have very similar search problems as phrase-based models) could also be adapted:
Also consider marginalizing over the different alignments:
But the sky’s the limit! There are many, many ways to try to solve the decoding problem, and you can try anything you want as long as you follow the ground rules:
## Ground Rules
• You must work independently on this assignment.
• You should submit each of the following:
1. Your translations of the entire dataset, uploaded from any Eniac or Biglab machine using the command turnin -c cis526 -p hw2 hw2.txt. You may submit new results as often as you like, up until the assignment deadline. The output will be evaluated using grade program. The top few positions on the leaderboard will receive bonus points on this assignment.
2. Your code, uploaded using the command turnin -c cis526 -p hw2-code file1 file2 .... This is due 24 hours after the leaderboard closes. You are free to extend the code we provide or write your own in whatever langugage you like, but the code should be self-contained, self-documenting, and easy to use.
3. A report describing the models you designed and experimented with, uploaded using the command turnin -c cis526 -p hw2-report hw2-report.pdf. This is due 24 hours after the leaderboard closes. Your report does not need to be long, but it should at minimum address the following points:
• Motivation: Why did you choose the models you experimented with?
• Description of models or algorithms: Describe mathematically or algorithmically what you did. Your descriptions should be clear enough that someone else in the class could implement them.
• Results: You most likely experimented with various settings of any models you implemented. We want to know how you decided on the final model that you submitted for us to grade. What parameters did you try, and what were the results? Most importantly: what did you learn?
Since we have already given you a concrete problem and dataset, you do not need describe these as if you were writing a full scientific paper. Instead, you should focus on an accurate technical description of the above items.
Note: These reports will be made available via hyperlinks on the leaderboard. Therefore, you are not required to include your real name if you would prefer not to do so.
• You do not need any other data than what is provided. You should feel free to use additional codebases and libraries except for those expressly intended to decode machine translation models. You must write your own decoder. If you would like to base your solution on finite-state toolkits or generic solvers for traveling salesman problems or integer linear programming, that is fine. But machine translation software including (but not limited to) Moses, cdec, or Joshua is off-limits. You may of course inspect these systems if you want to understand how they work. But be warned: they are generally quite complicated because they provide a great deal of other functionality that is not the focus of this assignment. It is possible to complete the assignment with a quite modest amount of python code.
Any questions should be be posted on the course Piazza page.
Credits: This assignment is adapted from one originally developed by Adam Lopez. It incorporates some ideas from Chris Dyer.
|
{}
|
# Timer555 frequency issues
#### MishaH
Joined Feb 14, 2016
9
Hi all
Background:
- I recently built the Sunfounder advised Timer555 circuit (I have attached the circuit diagram)
- I am inputting the output pulse from the Timer Circuit into my Raspberry Pi 2B
- The Timer and corresponding code works in terms of: 1. The timer outputs the pulses and the code reads them and I can manipulate the pulses into real time data.
- I have soldered (very amateur like) the circuit onto a breadboard. to bridge some of the connections i used short, straight pieces of solid core wires.
Problem:
1. The frequency of the pulses seems to change constantly by up to 0.05Hz. Now this is sufficient to induce a massive time change.
2. I try to calibrate the Timer555 circuit against several stopwatches and over an hour the time difference can vary up to 2 minutes.
Can anyone advise me how to eliminate changes in frequency? I have tried insulating the circuit with little success (perhaps temperature is the problem)(perhaps my insulating was not good enough)
Thank you
Misha
#### DickCappels
Joined Aug 21, 2008
6,533
Interesting -that is 3%, a huge amount of drift unless you are taking it from the oven to the refrigerator, and it sounds like you are taking care of temperature changes.
The NE55's output frequency 5 has low sensitivity to power supply voltage. That brings us to the resistors and the capacitor.
Some capacitors make very good thermal sensors, and that's ok for some applications. What kind of capacitor are you using?
#### MishaH
Joined Feb 14, 2016
9
Interesting -that is 0.3% -a huge amount of drift unless you are taking it from the oven to the refrigerator, and it sounds like you are taking care of temperature changes.
The NE55's output frequency 5 has low sensitivity to power supply voltage. That brings us to the resistors and the capacitor.
Some capacitors make very good thermal sensors, and that's ok for some applications. What kind of capacitor are you using?
Two 100nF (common ceramic capacitors - the 104 that is)
#### DickCappels
Joined Aug 21, 2008
6,533
The capacitor connected to pins 2 and 6 is suspect - do you have any film capacitors near .1 uf?
Keep in mind the cause of the apparent drift might be something else; this is just the first thing to check.
#### MishaH
Joined Feb 14, 2016
9
I have insulated the circuit now completely. It has dramatically improved sensitivity, but not eliminated it. I can at least now iterate to some point whereas before it jumped around so much iterating to some point within 5% accuracy. now i have gotten it down to 0.2-0.5% accuracy, which is still really not great.
I am really new to electronics, picked up the project 3 weeks ago, so any help is appreciated. Why is the capacitor suspect?
#### dannyf
Joined Sep 13, 2015
2,197
An RC oscillator is inherently unstable, frequency and phase wise. People are known to use them as random number generators.
Your set up makes little sense to me, however. You have a mcu that can time far more accurately and stably than your 555 can ever do. Unless you are doing something else, what you are trying to do with the 555 can be done entirely in software.
#### MishaH
Joined Feb 14, 2016
9
An RC oscillator is inherently unstable, frequency and phase wise. People are known to use them as random number generators.
Your set up makes little sense to me, however. You have a mcu that can time far more accurately and stably than your 555 can ever do. Unless you are doing something else, what you are trying to do with the 555 can be done entirely in software.
The set up (circuit) I have built is according to the Sunfounder Manual that they sent together with the kit I bought.
#### GopherT
Joined Nov 23, 2012
8,012
1. The frequency of the pulses seems to change constantly by up to 0.05Hz. Now this is sufficient to induce a massive time change.
2. I try to calibrate the Timer555 circuit against several stopwatches and over an hour the time difference can vary up to 2 minutes.
(A) concerned about 0.05Hz variation in the output and,
(B) if using it as a time base, you get zero to 2 minutes variation over a one-hour window.
For (A) what accuracy do you expect from an RC timer?
For (B) what precision do you expect from same circuit?
What does the datasheet say?
The TI datasheet for NE555 says interval error is 2.25% with error define as...
2) Timing interval error is defined as the difference between the measured value and the average value of a random sample from each process run
#### MishaH
Joined Feb 14, 2016
9
(A) concerned about 0.05Hz variation in the output and,
(B) if using it as a time base, you get zero to 2 minutes variation over a one-hour window.
For (A) what accuracy do you expect from an RC timer?
For (B) what precision do you expect from same circuit?
What does the datasheet say?
The TI datasheet for NE555 says interval error is 2.25% with error define as...
2) Timing interval error is defined as the difference between the measured value and the average value of a random sample from each process run
hmmm ok. To be embarrassingly honest I haven't checked the DataSheet. I was kind of expecting a few seconds lost a day, not several minutes an hour.
Secondly, for some unknown reason to me, the system pulses at around 730Hz. Now I made sure that the resistors and circuit are correct. I checked it several times.
Do you mean to say then that the accuracy I am seeking is unrealistic? Do I have the wrong chip/circuit for that?
I am looking for seriously accurate data, I need it for the application I intend. As said, a few seconds a day would be acceptable.
#### GopherT
Joined Nov 23, 2012
8,012
hmmm ok. To be embarrassingly honest I haven't checked the DataSheet. I was kind of expecting a few seconds lost a day, not several minutes an hour.
Secondly, for some unknown reason to me, the system pulses at around 730Hz. Now I made sure that the resistors and circuit are correct. I checked it several times.
Do you mean to say then that the accuracy I am seeking is unrealistic? Do I have the wrong chip/circuit for that?
I am looking for seriously accurate data, I need it for the application I intend. As said, a few seconds a day would be acceptable.
You do have the wrong approach. A 555 is never used for an accurate real time clock for a watch or time of day clock. You need a crystal oscillator. A few seconds per day is easily achieved.
The problem with crystal oscillators, they are not available in an infinite adjustable range of frequencies. And, they are usually available only above 250k Hz.
If you need slower pulse trains that the oscillator can provide, add a counter chip to divide down - binary counters like CD4060, cd4040 or a decade counter. You will have to add a flip flop to make your decade counter's output into a square wave if that is needed (which divides by 2 itself). A CD4060 is an interesting solution because you can directly add a 32768Hz clock crystal and it will oscillate - AND it can divide that crystal oscillator frequency by 2 many times over for you. Just select a pin with a frequency you like.
Last edited:
#### GopherT
Joined Nov 23, 2012
8,012
If you go the route of a bare crystal on the CD4060, then you should order a crystal with a specific model number and accessible datasheet so you know what sized capacitors are needed. Don't order some random ebay crystal - Eventhough it will be listed as 32768 Hz, the capacitor sizes are specified for the crystal model and it is a pain to find the right ones without the datasheet.
#### MishaH
Joined Feb 14, 2016
9
You do have the wrong approach. A 555 is never used for an accurate real time clock for a watch or time of day clock. You need a crystal oscillator. A few seconds per day is easily achieved.
The problem with crystal oscillators, they are not available in an infinite adjustable range of frequencies. And, they are usually available only above 250k Hz.
If you need slower pulse trains that the oscillator can provide, add a counter chip to divide down - binary counters like CD4060, cd4040 or a decade counter. You will have to add a flip flop to make your decade counter's output into a square wave if that is needed (which divides by 2 itself). A CD4060 is an interesting solution because you can directly add a 32768Hz clock crystal and it will oscillate - AND it can divide that crystal oscillator frequency by 2 many times over for you. Just select a pin with a frequency you like.
Cool, thank you. I have narrowed down the timing error on the 555 to between 0.2 to 0.5%. You recon that is as good as it gets?
I am working with a Raspberry Pi 2B, which if I am not mistaking cannot read on frequency levels that high, so I will have to do some research and build a new RTC.
Thank you everyone!
#### GopherT
Joined Nov 23, 2012
8,012
Cool, thank you. I have narrowed down the timing error on the 555 to between 0.2 to 0.5%. You recon that is as good as it gets?
I am working with a Raspberry Pi 2B, which if I am not mistaking cannot read on frequency levels that high, so I will have to do some research and build a new RTC.
Thank you everyone!
As mentioned earlier, why not use the real time clock in the rasberry pi? Already a crystal on oard and there are standards commands that let you access the current time - in several formats.
#### MishaH
Joined Feb 14, 2016
9
As mentioned earlier, why not use the real time clock in the rasberry pi? Already a crystal on oard and there are standards commands that let you access the current time - in several formats.
Well I tried to do that, but all research I did said that the Pi does not have an on board RTC. It said something about cost implications and permanent battery requirements. I found this weird since the one can import "time" into any script.
All I need at the end of the day is a timer that can keep track of time passed since the program was executed. I dont need day/month/year RTC.
Is the time.sleep(...) what I am looking for? how much time will be lost in processing between each time.sleep()? I guess that depends on program complexity, but is it something to worry about?
My Pi will also not be connected to a network all the time and will operate autonomously. Please excuse the ignorance, Im seriously new to all this.
#### dannyf
Joined Sep 13, 2015
2,197
You don't need an onboard rtc - all you need is a crystal and a counter - which your board has.
#### dannyf
Joined Sep 13, 2015
2,197
An RC oscillator will never come close to the accuracy or stability of a crystal.
#### MishaH
Joined Feb 14, 2016
9
I have googled now for the last hour, and google is being frustratingly useless and all searched lead to building RTC's.
Can anyone help me with the code that would allow me to use the on board crystal on the pi? the time.slee(...) does not work. how would I go about it?
#### GopherT
Joined Nov 23, 2012
8,012
I have googled now for the last hour, and google is being frustratingly useless and all searched lead to building RTC's.
Can anyone help me with the code that would allow me to use the on board crystal on the pi? the time.slee(...) does not work. how would I go about it?
From a command line you can use command.
>> date
To see all options, use
>> date --help
You should be able to get a format with sub-microsecond accuracy output (if I remember correctly - if not with date, there is another command but I think it is date).
If using JAVA
You can try a time increment calculation using nanotime
Code:
long startTime = System.nanoTime();// ... the code being measured ...
long estimatedTime = System.nanoTime() - startTime;
#### MishaH
Joined Feb 14, 2016
9
From a command line you can use command.
>> date
To see all options, use
>> date --help
You should be able to get a format with sub-microsecond accuracy output (if I remember correctly - if not with date, there is another command but I think it is date).
If using JAVA
You can try a time increment calculation using nanotime
Code:
long startTime = System.nanoTime();// ... the code being measured ...
long estimatedTime = System.nanoTime() - startTime;
Im using Python2.7 on Raspbian. But I will try Date.
Thanks mills again for all your help, honestly much appretiated
#### hrs
Joined Jun 13, 2014
244
time.time() gives the number of seconds that passed since 1970. If you store it at the beginning of your program and subtract it from time.time() at any moment you will have the wall clock time in seconds since the program started to execute.
|
{}
|
2019
Том 71
№ 11
# Distribution of eigenvalues of the Sturm-Liouville problem with slowly increasing potential
Palyutkin V. G.
Abstract
We establish an asymptotic representation of the function $\tilde n(R) = \int\limits_0^R {\frac{{n(r) - n(0)}}{r}dr, R \in \Re } \subseteq [0, \infty ), R \to \infty ,$ where n(r) is the number of eigenvalues of the Sturm-Liouville problem on [0,∞) in (λ:¦λ¦≤r) (counting multiplicities). This result is obtained under assumption that q(x) slowly (not faster than In x) increases to infinity as x→∞ and satisfies additional requirements on some intervals $[x_ - (R), x_ + (R)],R \in \Re$ .
English version (Springer): Ukrainian Mathematical Journal 48 (1996), no. 6, pp 914-927.
Citation Example: Palyutkin V. G. Distribution of eigenvalues of the Sturm-Liouville problem with slowly increasing potential // Ukr. Mat. Zh. - 1996. - 48, № 6. - pp. 813-825.
Full text
|
{}
|
Changeset 10843 for branches/DataPreprocessing/HeuristicLab.DataPreprocessing/3.3/Implementations/FilterLogic.cs
Ignore:
Timestamp:
05/14/14 11:36:10 (6 years ago)
Message:
• Fixed or combination filter, default value was set to true and thus all rows where filtered (regardless of the actual filter)
File:
1 edited
Unmodified
|
{}
|
# Isaac Held's Blog
## 41. The hiatus and drought in the U.S.
Correlation between seasonal mean precipitation (Dec-Jan-Feb) and sea surface temperatures in the eastern equatorial Pacific (Niño 3.4: 120W-170W and 5S-5N) in observations (GPCP) and in a free-running coupled atmosphere-ocean model (GFDL’s CM2.1), from Wittenberg et al 2006. Green areas are wetter in El Niño and drier in La Niña winters; red areas are drier in El Niño and wetter in La Niña.
(Sept 30: I have moved a few sentences around to make this read better, without changing anything of substance.)
It is old news to farmers and water resource managers in the southern tier of the continental US that La Niña is associated with drought, especially with rainfall deficits in the winter months. Since the major El Niño event of 1997-8, our climate system has been reluctant to generate El Niño at the expected frequency and instead the Pacific has seen several substantial La Niña events with mostly near neutral conditions in between. This La Niña flavor to the past 15 years has been identified as causing at least part of the hiatus in global warming over this same period by simple empirical fitting and more recently by Kosaka and Xie 2013, in which a climate model is manipulated by restoring temperatures to observations in the eastern equatorial Pacific. I find the excellent fit obtained in that paper compelling, having no free parameters in the sense that this computation was not contemplated while the model, GFDL’s CM2.1, was under development, and the model was not modified from the form in which it was frozen back in 2005. The explanation for the hiatus must, in appears, flow through the the equatorial Pacific. (I have commented on this paper further here.) These authors mention briefly an important implication of this connection – the extended drought in the Southern US and the hiatus in global mean warming are related.
## 40. Playing with a diffusive energy balance model
Latitude of ice margin as a function of a non-dimensional total solar irradiance $q$ in the diffusive energy balance climate model described by North 1975, for different values of the non-dimensional diffusion $d$. Stable states are indicated by a thicker line.
When we were first starting out as graduate students, Max Suarez and I became interested in ice age theories and found it very helpful as a starting point to think about energy balance models for the latitudinal structure of the surface temperature. At about the same time, Jerry North had simplified this kind of model to its bare essence: linear diffusion on the sphere with constant diffusivity, outgoing infrared flux that is a linear function of surface temperature, and absorbed solar flux equal to a specified function of latitude multiplied by a co-albedo that is itself a function of temperature to capture the different planetary albedos for ice-free and ice-covered areas. Playing with this kind of “toy” model is valuable pedagogically – I certainly learned a lot by building and elaborating this kind of model — and can even lead to some nuggets of insight about the climate system.
## 39. FAT
The response of a 1km non-rotating doubly periodic model of radiative-convective equilibrium to an increase in surface temperature, in increments of 2K. Left: temperature, showing a moist-adiabatic response; Right: fraction of area with cloud at each height, showing an upward displacement of upper tropospheric clouds. From Kuang and Hartmann 2007.
The presence of cirrus clouds in the tropics warms the troposphere because infrared radiation is emitted to space from their relatively cold surfaces rather than the warmer temperatures below the clouds. The response of these clouds can be important as feedbacks to climate change. A reduction in the area covered by these high clouds would be a negative feedback to warming. [7/25/13: Several readers have pointed out that a reduction in the areas of high cloud cover would be a negative infrared feedback but a positive shortwave feedback and that the net effect could go either way.] An increase in the average height of these clouds with warming, resulting in a colder surface than would be the case if this height did not increase, would be a positive feedback. It is the latter that I want to discuss here. GCMs have shown a positive feedback due to increasing height of tropical cirrus since the inception of global modeling (e.g., Wetherald and Manabe, 1980). This is probably the most robust cloud feedback in GCMs over the years and is one reason that the total cloud feedbacks in GCMs tend to be positive. This increase in cloud top height has, in addition, a clear theoretical foundation, formulated as the FAT (Fixed Anvil Temperature) hypothesis by Hartmann and Larson, 2002.
## 38. NH-SH differential warming and TCR
Rough estimates of the WMGG (well-mixed greenhouse gas — red) and non-WMGG (blue) components of the global mean temperature time series obtained from observed (HADCRUT4) Northern and Southern Hemisphere mean temperatures and different assumptions about the ratio of the Northern to Southern Hemisphere responses in these two components. Black lines are estimates of the response to WMGG forcing for 6 different values of the transient climate response TCR (1.0, 1.2, 1.4, 1.6, 1.8, 2.0C).
How can we use the spatial pattern of the surface temperature evolution to help determine how much of the warming over the past century was forced by increases in the well-mixed greenhouse gases (WMGGs: CO2, CH4, N2O, CFCs), assuming as little as possible about the non-WMGG forcing and internal variability. Here is a very simple approach using only two functions of time, the mean Northern and Southern Hemisphere temperatures. (See #7, #27, #35 for related posts.)
## 37. Tropical rainfall and inter-hemispheric energy transport
Schematic of the response of tropical rainfall to high latitude warming in one hemisphere and cooling in the other or, equivalently, to a cross-equatorial heat flux in the ocean. From Kang et al 2009.
When discussing the response of the distribution of precipitation around the world to increasing CO2 or other forcing agents, I think you can make the case for the following three basic ingredients:
1. the tendency for regions in which there is moisture convergence to get wetter and regions in which there is moisture divergence to get drier (“wet get wetter and dry get drier”) in response to warming (due to increases in water vapor in the lower troposphere — post #13);
2. the tendency for the subtropical dry zones and the mid-latitude storm tracks to move polewards with warming;
3. the tendency for the tropical rainbelts to move towards the hemisphere that warms more.
There are other important elements we could add to this set, especially if one focuses on particular regions — for example, changes in ENSO variability would affect rainfall in the tropics and over North America in important ways . But I think a subset of these three basic ingredients, in some combination, are important nearly everywhere. I want to focus here on 3) the effect on tropical rain belts of changing interhemispheric gradients.
## 36. A diffusive model of atmospheric heat transport
Lower panel: the observed (irrotational) component of the horizontal eddy sensible heat flux at 850mb in Northern Hemisphere in January along with the mean temperature field at this level. Middle panel: a diffusive approximation to that flux. Upper panel: the spatially varying kinematic diffusivity (in units of ${\bf 10^6 m^2/s}$) used to generate the middle panel. From Held (1999) based on Kushner and Held (1998).
Let’s consider the simplest atmospheric model with diffusive horizontal transport on a sphere:
$C \partial T/\partial t = \nabla \cdot C\mathcal{D} \nabla T - (A + B (T-T_0)) + \mathcal{S}(\theta)$.
Here $\mathcal{S}(\theta)$ is the energy input into the atmosphere as a function of latitude $\theta$, $A + B(T-T_0)$ is the outgoing infrared flux linearized about some reference temperature $T_0$, $C$ is the heat capacity of a tropospheric column per unit horizontal area $\approx 8 \times 10^6 J/( m^2 K)$, and $\mathcal{D}$ is a kinematic diffusivity with units of (length)2/time. Think of the energy input as independent of time and, for the moment, think of $\mathcal{D}$ as just a constant.
## 35. Atlantic multi-decadal variability and aerosols
(Left) Sea surface temperature averaged over the North Atlantic (75-7.5W, 0-60N), in the HADGEM2-ES model (ensemble mean red; standard deviation yellow) compared with observations (black), as discussed in Booth et al 2012. (Right) Upper ocean (< 700m) heat content in this model averaged over the same area, from Zhang et al 2013 ( green = simulation with no anthropogenic aerosol forcing, kindly provided by Ben Booth.)
A paper by Booth et al 2012 has attracted a lot of attention because of the claim it makes that the interdecadal variability in the North Atlantic is in large part the response to external forcing agents, aerosols in particular, rather than internal variability. This has implications for estimates of (transient) climate sensitivity but it also has very direct implications for our understanding of important climate variations such as the recent upward trend in Atlantic hurricane activity (linked to the recent rapid increase in N.Atlantic sea surface temperatures) and drought in the Sahel in the 1970′s (linked to the cool N. Atlantic in that decade). I am a co-author of a recent paper by Rong Zhang and others (Zhang et al 2013) in which we argue that the Booth et al paper and the model on which it is based do not make a compelling case for this claim.
## 34. Summer temperature trends over Asia
Anomalies in near surface air temperature over land (1979-2008) averaged over Asia and the months of June-July-August from CRUTEM4 (green) — and as simulated by atmosphere/land models in which oceanic boundary conditions are prescribed to follow observations (gray shading). See text and Post #32 for details.
This is a follow up to Post #32 on Northern Hemisphere land temperatures as simulated in models in which sea surface temperatures (SSTs) and sea ice extent are prescribed to follow observations. I am interested in whether we can use simulations of this “AMIP” type to learn something about how well a climate model is handling the response of land temperatures to different forcing agents such as aerosols and well-mixed greenhouse gases. If a model forced with prescribed SST/ice boundary conditions and prescribed variations in the forcing agents does a reasonably good job of simulating observations, we can then ask how much of this response is due to the SST variations and how much is due to the forcing agents (assuming linearity). If the response to SST variations is robust enough, we have a chance to subtract it off and see if different assumptions about aerosol forcing, in particular, improve or degrade the fit to observations.
## 33. Can we trust simulations of TC statistics in global models?
Globally integrated, annual mean tropical cyclone (TC) and hurricane frequency simulated in the global model described in Post #2, as a function of a parameter in the model’s sub-grid moist convection closure scheme, from Zhao etal 2012.
It is difficult to convey to non-specialists the degree to which climate models are based on firm physical theory on the one hand, or tuned (I actually prefer optimized) to fit observations on the other. Rather than try to provide a general overview, it is easier to provide examples. Here is one related to post #2 in which I described the simulation of hurricanes in an atmospheric model.
## 32. Modeling land warming given oceanic warming
Anomalies in annual mean near surface air temperature over land (1979-2008), averaged over the Northern Hemisphere, from CRUTEM4 (green) and as simulated by an ensemble of atmosphere/land models in which oceanic boundary conditions are prescribed to follow observations.
As discussed in previous posts, it is interesting to take the atmosphere and land surface components of a climate model and run it over sea surface temperatures (SSTs) and sea ice extents that, in turn, are prescribed to evolve according to observations. In Post #2 I discussed simulations of trend and variability in hurricane frequency in such a model, and Post #21 focused on the vertical structure of temperature trends in the tropical troposphere. A basic feature worth looking at in this kind of model is simply the land temperature – or, more precisely, the near-surface air temperature over land. How well do models simulate temperature variations and trends over land when SSTs and ice are specified? These simulations are referred to as AMIP simulations, and there are quite a few of these in the CMIP5 archive, covering the period 1979-2008.
|
{}
|
# Helium Balloon
1. Aug 3, 2007
### the keck
As a helium balloon increases in altitude, the density of the air inside decreases. However, the density of the air surrounding it (The atmosphere) also decreases
with altitude p=p0*exp(-z/z0), where z0 is the scale height of the atmosphere. So does this mean the lift force acting on the balloon decrease?
p=p0#exp(-z/z0)
P=P0#exp(-z/z0) (Pressure decreases with altitude)
PV=nRT
The puzzling thing I find with this problem is this, if both the density of the helium inside the balloon and the air surrounding it decreases with increasing altitude, would this mean that the balloon could rise forever, since these effects cancel each other out i.e. the life force will stay the same as the initial lift force when it left the Earth's surface? (Assuming the atmosphere goes on forever, and the balloon is made of material that is infinitely stretchable i.e. can expand forever and not break)
Thanks...I hope the problem is clear enough for you all to understand
Regards,
The Keck
2. Aug 4, 2007
### mgb_phys
In theory yes, actually as the balloon ascends it reaches colder upper layers the helium contract and the balloon descends.
It is possible for balloons to cross intercontnaental distance before all the helium leaks away.
|
{}
|
# I Proving Fermat's last theorem with easy math
1. Mar 7, 2017
It says that there is no value of a,b and c, with n>2 and all integer numbers that satisfies this:
a^n=b^n+c^n
I'm only going to use the cosine theorem.
Let's consider three points A, B and C. They form the three sides of a triangle: a, b and c.
The sides forms three angles, which can go from 0 to 180 degrees.
If one angle, say α , is 180, then the other two are 0, but that doesn't affect the results. Then:
a^2=b^2+c^2-2b*c*cosα => a^2=b^2+c^2-2*b*c*cos180 => a^2=b^2+c^2+2*b*c => a^2=(b+c)^2
a=b+c or a=b-c
These are two basic principles of geometry, if one point is alligned with two others, their distance is the sum or difference between that point with each of the point.
We have just proven the case n=1 for Fermat's last theorem.
With the angle equal to 0 we have the same result.
Let's consider now that the triangle has one right angle. Then:
a^2=b^2+c^2-2*b*c*cos90 => a^2=b^2+c^2
I have just proven the Pythagorean theorem , and the case n=2 for Fermat's last theorem.
Cosα is only and integer number if the angles are 0, 90, 180... , and we have just seen that, if the angle is 180, n=1; and if the angle is 90, n=2. For any other values of n, the angle will be between 90 and 0, so cosα will not be an integer number.
Fermat's last theorem says that there is no value for a, b and c, with n>2 and all of them being integer numbers that makes this possible:
a^n=b^n+c^n
If we consider a, b and c the sides of a triangle, then the cosine theorem must apply. If the cosine isn't an integer number, then you aren't going to end up with a,b and c integer numbers.
I hope this is well explained and that i have not made mistakes(and sorry if i have not written something right, because i'm spanish).
Please say if this could be correct.
2. Mar 7, 2017
### Staff: Mentor
I'm afraid you haven't proven anything. Any proof that involves $n=2$ has to explain Pythagorean triples, cannot be done by the law of cosine, because this only applies to $n=2$ and last but not least, your "proof" would already had fit on the margin of Fermat's book. Furthermore the proof for relatively small primes had been given soon after Fermat's death. The general problem remained unsolved until 1995 and is by far not anywhere near of easy to solve.
|
{}
|
Project data
This project examines two hot water heating systems that are commonly used on residential buildings; Solar Thermal Collectors and Air source Heat Pumps. A spreadsheet comparing factors such as system cost, efficiency, operating costs, maintenance and pollution has been created. Values in the spreadsheet can be changed to produce a comparison answer that is more closely based on any user's needs. Preselected options in the spreadsheet are based on stated assumptions and are the baseline for comparison. The conclusion of this project has determined that the optimum hot water heating system varies depending on water use needs. For low water consumption, heat pumps are more cost effective. For higher water consumption, solar thermal panels are the more economical choice. Both systems are effective at reducing the consumer's carbon footprint.
Group members: Adam Channal, Devan Hemmings, Ryan Kriken, Robert Duncan
## Background
A homeowner in coastal Humboldt County, CA has an aging domestic hot water system, and would like to replace it with a more environmentally responsible system. Basing his decision on a set of criteria, the homeowner will choose to install either a solar thermal water heating system or an air-source heat pump. Adam Channel, Robert Duncan, Devan Hemmings and Ryan Kriken have been hired as consultants to research different hot water systems and provide a comparison of these two technologies. The client has expressed a desire to know the prices of different systems, their efficiency, lifespan, buyback time, maintenance needs, and energy costs for using the system. To simplify the client's decision, a spreadsheet allowing for changes to various inputs has been created. This spreadsheet allows for changes to variables such as system manufacturer, climate data, household water use, and several other variables to tailor the system to the client's needs.
### What is a Heat Pump?
A heat pump is any device which moves heat from one location to another. Heat pumps operate much like a refrigerator, only in reverse. A refrigerator pumps a refrigerant through a compressor and transfers heat from inside the cooled space to the outside ambient air thus cooling the inside of the fridge. A heat pump on the other hand takes heat from the outside and uses it to heat the inside air or takes inside or outside air to heat water.
There are two types of heat pumps: ground source heat pumps, and air source heat pumps. Either of these types can be used to heat a home or to heat water or both. We chose to do our analysis on air source heat pumps.
Ground source heat pumps basically work by circulating refrigerant through underground pipes and back through a compressor where it is vaporized, and condensed causing it heat up and then the heat is transferred to a coil where it can be utilized for home heating or water heating purposes. The reason the pipes are run underground is to utilize the consistent 58 degree Fahrenheit ground temperature which in cold climates may be higher than the average air temperature. Also, many air source heat pumps become very inefficient below 40-50 degrees F and in very cold climates will stop working. Ground source heat pumps can also be used as air conditioners in the summer months and are more cost effective when used to heat and cool a home. An attachment to this type of system, called a desuperheater, can be attached to also provide the home with hot water. We did not do an analysis of the desuperheater due to the very high installation costs of ground source heat pumps.
Air source heat pumps can be used either to heat the air or water. For home heating uses there is generally an outside unit with a fan and condenser drawing heat from the outside air, and inside unit with a fan, condenser, compressor, and heat expansion valve, and pipes running in to circulate the heat to and from the inside environment.
For air source heat pump water heaters, the heat from the ambient air is pulled with a fan into the condenser and heat is transferred into the water tank and heats the water through a coiled heat exchanger. Because heat is being moved instead of generated, a heat pump water heater can be up to about 60% more efficient than traditional electric water heaters.
Air source heat pump water heaters come as either a prepackaged unit which includes a tank, and an electric component heating unit, or as a stand alone unit that can be hooked into a preexisting water heater tank. For our project we chose the GE hybrid heat pump water heater because it had the highest energy factor efficiency rating and the installation and unit cost was fairly low compared to units that don't come as a package.
Because a heat pump is taking heat from the air around it, Heat pump water heaters require installation in locations that remain in the 40º–90ºF (4.4º–32.2ºC) range year-round. Garages, cellars, laundry rooms, or basements are ideal locations for installation. It is not recommended, however for a unit to be installed in a heated room because they tend to cool the air around them. It is also best not to have a unit installed outside due to problems with weather corrosion, and temperatures below 40 F. It is recommended that at least 1,000 cubic feet (28.3 cubic meters) of air space is in the room around the water heater.
### What is Solar Thermal?
Solar thermal water systems use energy gathered from sunlight to heat water. Sunlight falls on collector panels which is then moved to a hot water tank similar to conventional tanks. Many types of solar thermal systems exist some of which are more adapted to small scale and therefore residential use. The systems examined below can be adapted to fit either small scale or large scale water needs. Other systems such as parabolic solar collectors tend to be used for large scale collection and are not examined in this project.
Almost all residential Solar Thermal systems fall in to one of the following two types of systems:
Open Loop systems take water from local water pipes, pass it through the solar thermal panels to then be used directly for hot water. A schematic of such a system can be found to the right.
Closed Loop systems circulate a coolant/water through the solar thermal panels and transfer the heat using a heat exchanger to your hot water tank. In this system, the water from the tap stays in your water heater while it is being heated and does not travel through the solar thermal panels. A schematic of this system can be found to the right.
The two collector panel technologies examined in this project are:
Evacuated Tube Collector are layered glass cylinders that absorb sunlight and heat water in the center tube which is in a vacuum. The lack of pressure inside to tubes allows the boiling of water at a much lower temperature. Heat from the boiling water in the tubes is transferred to another liquid. This heated liquid travels through a house to a heat exchanger near/in the water tank. The heat exchanger again moves the heat out of the liquid and in to the water in the tank.
Flat Plate Collectors are sealed rectangular boxes with heat absorbing metal sheets next to small tubes of water or coolant. This liquid is moved through the house to the heat exchanger near/in the water tank. A schematic of this system can be found here.
How is a Solar Thermal System set up?
1.Size a system for house (depends on number of people and hot water needs) Generally, 80 gall tanks are sufficient for households with 3-4 people. Areas with less sunlight will need greater collector size. Sizing varies based on climate and hot water needs but a general rule of thumb according to Vermont's; Renewable Energy Resource Center is about 0.7ft2 and 0.85 ft2 of collector area per gallon of storage. Vermont RERC
2.Determine type of solar thermal system (flat plate vs. solar thermal...or another system!)
3. Determine whether you want an open loop system or a closed loop system. Open loop system have the benefit of less heat lost in transferring. Open loop systems can only use water as the liquid passing through the panels is the water coming out of your hot tap. Closed loop systems are not subject to mineral deposits from municipal water sources. Closed loop systems can use water or coolant.
3. Determine if you want to keep current water heater and purchase external heat exchanger or replace with tank that has an internal heat exchanger. If you are replacing your tank, it is often cheaper to by the internal heat exchanger (tank and exchanger are one unit) instead of purchasing two separate units that then have to be connected.
4. Choose to either install the system yourself or hire a contractor. Unless you have plumbing experience and are able to build these systems in accordance with local building codes, it would be advisable to hire a contractor to continue the job from here. The rest of these instructions will help you be an educated consumer...
5. Determine the optimum placement for the panels. This can be on your roof (ususally the best spot) or any other convenient location. In the northern hemisphere, south facing panels maximize sunlight exposure even during winter months when the sun is low in the sky. Be sure to maximize your solar window (the angular range of direct sunlight the panels receive during the day) and angle the panels equal to your latitude (40 degrees latitude-- 40 degrees angle towards south). Panels can be mounted on angled rooftops or on mounting racks for flat rooftops.Several appropedia pages such as Solar Radiation Maps and Solar Water Heating further discuss methods of determining the amount of light available for collection.
6. Select and purchase a hot water tank/heat exchanger system.
a. If you are keeping your old hot water tank you must purchase an external heat exchanger. The Piggyback system used in the spreadsheet is an external heat exchanger.
b. If you are purchasing a new hot water tank you can decide to have an internal or external heat exchanger. Internal heat exchangers come as a part of the water tank ( a single unit) external heat exchangers are sold separately from hot water tanks(exchanger and tank two separate units). Depending on your needs, you may choose one system over the other.
7.Determine if you require a freeze protection system such as a drain-back tank. If temperatures in a clients region drop below 40 degrees F, there is a potential for water to freeze in the system; This can cause extreme damage to a solar thermal system and create maintenance nightmares. To prevent water from freezing in the panels drain-back systems such as the one described below must be installed.
During night time when the solar panels are not receiving any light (and therefore heat), the water from the panels drains down in to a small tank inside the protection of the house. This prevents water from freezing in the pipes on your roof and potentially bursting any connections or cylinders. If you live in a climate that experiences any had frosts, a drain-back system is required. Several options for powering the drainback system are avaialable. A simple low flow water pump can be plugged in to the socket that turns off at night using a simple timer; or a small photovoltaic panel can power a pump(this system is nifty because during the nighttime when there is no sun hitting pv or thermal panel, pump shuts off and let water fall back in to drain-back tank-fully contained!!!).
8.Set up the piping. Try to find the shortest and easiest route through your house. A longer distance means more heat lost in transit and higher material costs.
9. Determine the heat transferring liquid.
a. If an open loop system has been selected, the heat transfer liquid must be municipal water.
b.If a closed loop system has been selected, the heat transfer liquid can be water or coolant. Coolant costs more than water but will not leave any mineral deposits in the system. Distilled water can actually leech minerals from the system and is not preferred.
10.Connect the pieces and let'er rip! The panels should start working when they first receive light.
## Problem statement
The final goal is to compare these two technologies for their ability to heat water for residential needs. The main questions we hoped to answer are;
-Can these systems supply all hot water needs? If not what percentage can they supply? Embedded in this question is, how much energy will a consumer save by installing these systems?
-Which system will cost the consumer less upfront and over time?
-Which system will last longer?
-Most importantly, what are the variables that make each of these systems an economical and environmentally friendly choice for consumers?
This spreadsheet will compare variable types of solar thermal collectors (flat plate and evacuated tubes) to air source heat pumps for heating domestic water needs. The client will enter in the following information:
• Latitude of Clients location, # of sunny days, # of partly cloudy days, # of cloudy days
• Household data
o Number of people per house
o Appliances; dishwasher and laundry machine use frequency
o Showering/Bathing time
o Desired temperature for water tank
o Current hot water tank; cost, efficiency, life cycle, energy factor (for comparison to new systems)
• A solar heat pump model to compare
o Solar collector model/manufacturer
o Heat Transfer/water tank model/manufacturer
• A Heat Pump model to compare
• Desired Life cycle analysis length in years
Upon entering these specific variables for the client, information regarding each system, assumed water use, climate data and system components will appear in the spreadsheet. The assumptions and conversions used can be found to the right of the spreadsheet on the main page. Several tables with the data exist on subsequent sheets and can be used to add more options to the spreadsheet.
Based on data we found in our research and the user entered data the spreadsheet will return results comparing the following points:
The Bottom Line output
Annual energy costs to consumer after installing a solar thermal system or a heat pump system
Amount of money saved per year compared to using a conventional waterheating system
Payback time in years for a solar thermal system or a heat pump system
CO2 emissions in tons per year for water heating needs with a new system installed
Mercury emissions in grams per year for water heating needs with a new system installed
Life cycle cost over a yearly length
Additional outputs will show up on the spreadsheet. All variable outputs have been embedded in to the Bottom Line Output.
## Instructions
These are the instructions for using the spreadsheet. **It may be helpful to print these instructions to view next to the spreadsheet.**
1.Select the latitude of clients location.(-90 to 90) 2.Enter the number of clear days per year, Enter the number of partially cloudy days per year, and enter the number of cloudy days per year. These three numbers should total 365 days. A note beneath the boxes will note when the total is 365. 3. Select the number of people in the household.(1-10) 4. Select yes/no for a dishwasher. 5. Select yes/no for a laundry machine. If yes, go to 6, If no go to 7. 6. Enter number of laundry loads per person per week. 7. Select yes/no for taking baths. 8. Select yes/no for taking showers shorter than 5 minutes. If yes, skip to 8, if no go to 7. 9. Enter average time of shower. 10. Enter the average water temperature(F) coming out of municipal water pipes. 11. Enter preferred temperature(F) for water heater. 12. Select the desired type of new water heater.** 13. Select a type of solar thermal system to compare. 14. Select a model of solar thermal system to compare. 15. Select yes/no for purchasing a new water heater tank. 16. Select a life cycle length to analyze the systems. 17. Compare the annual price per system, annual emissions, and payback time for a solar thermal system or an air source heat pump!
• The trendsetter contender is a heat exchanger and water tank combo. It is an automatic yes to be purchasing a new water tank. The cost of heat exchanger is calculated in to the cost of tredsetter contender.
## Justification of assumptions
This section is a reference, thorough justification of your assumptions and values. Use references you gained during your literature review to back these up.
Assumptions Values Source Average milligrams of Hg/kWh 0.012 Energy star pdf CFL's and mercury table 1Energy Star Average lbs of CO2/ kWh(coal) 2.117 Dept. of Energy 1999 CO2 emissions by energy type EIA CO2 report Average lbs of CO2/ kWh(petroleum) 1.915 Dept. of Energy 1999 CO2 emissions by energy type EIA CO2 Report Average lbs of CO2/ kWh(natural gas) 1.314 Dept. of Energy 1999 CO2 emissions by energy type EIA CO2 Report U.S. dollars/ kWh 0.12 U.S Energy Information Administration State by State Breakdown U.S. dollars/therm 0.98 Adams PG&E bill from Arcata California (two person home) Maximum solar Thermal efficiency 70% natural gas water heater efficiency 65% ACEEE] gallons used per faucet minute 3 common faucet maximum flow rate gallons used per dishwasher load 8 Average for old dishwashers pre1994,non-energy star Energy Star Q&A gallons used for bath 40 volume of 1/2 full short length bath or 1/4 of full length bath (L*W**H*7.5= ft3 to volume in gallons) gallons used per shower minute 2.5 Energy Star Q&A] gallons used per laundry load 40 Low efficiency:40 high efficiency:28 EPA Water Challenge
## Results
### Life Cycle Cost
A standard water heater lasts about 10-15 years. A solar thermal system however can last up to 30 years. We used a 15 year total life cycle cost of unit cost, installation, maintenance costs, and annual energy costs to find these results. These results are based on household size and how much water the people in that house are using. To attain this data we preset the water usage to what we considered a "moderate water usage" and tracked that data for each household size and then to a "conservative water usage" to track that to different household size. "Moderate water usage" mode assumes that the household has a dishwasher, a laundry machine, and that each family member does 1 load of laundry per week. It also assumes that each person takes 10 minute showers. Conserving mode assumes that the household does not have or does not use a dishwasher or laundry machine, and that each person in the household takes 5 minute showers. The different levels of cost for each system are then determined by how many people are in the household. This graph is also preset to some other variables. For example, it is assumed that a high efficiency gas water heater is used for a backup on the solar thermal. It is also assumed that the user lives in the Humboldt area as the graph was set to this areas climate specifications.
### Annual Energy Savings and Buy Back Period
This Chart shows us, based again on a moderate water usage, how long it will take for a system to buy itself back and contrasts that with the annual energy savings. The Annual energy savings assume that the water heater currently installed is a standard electric water heater.
### CO2 Emissions
The CO2 analysis is based on the conserving mode of hot water usage. We chose to set the table to this mode because we assumed that if a household is concerned with CO2 that they will also try to conserve energy as much as possible. Also, at any mode it was determined that solar thermal will use less CO2 because most of it's energy is solar and thus carbon neutral.
### 30 year Life-cycle cost
A solar thermal system typically lasts about 30 years. This chart shows what each system will cost over that period of time depending on daily hot water usage. To find your daily hot water usage fill out the spreadsheet provided under references. As you can see at about 25 gallons the lines cross and solar thermal becomes more cost effective over its 30 year life-cycle. In the 30 year life of a solar thermal system, because water heaters typically only last 10-15 years, the heat pump water heater or the water heater you had originally installed for your solar thermal system would need to be replaced once or twice. The cost of this replacement is all factored into the life-cycle cost of this chart.
### Average Total Cost Over Time
The average household uses about 60 gallons of water per day. This Chart is based off a 60 gallon a day hot water usage with 60 gallon peak hour demand. Like the other charts in our results it also assumes a 58 degree ground water temperature and all of the other climate specifications of Arcata, CA. As you can see, the cost at first may be cheaper on average with a heat pump water heater, but after about 20 years, after the heat pump has been replaced once, the total amount spent on the solar thermal drops below the total cost of the heat pump.
## Conclusion
The answer to our question, Which type of system should we suggest to a client? Our favorite answer: It depends! Certain conclusion can be found below; These are based on Humboldt County sunlight values and all of our assumptions. Please see the assumptions table earlier in this page and the assumptions table in the excel spreadsheet.
If you use less than 25 gallons of hot water per day, it is cheaper over a 30 year period to use air source heat pumps. This means you will need to replace the heat pump once (lifespan off 15 years so 30 years needs two heat pumps).If you use more than 25 gallons of hot water per day, it is cheaper over a 30 year period to purchase and install a solar thermal system. The lifespan of this system is about 30 years but costs about twice as much upfront.
This is a very broad based conclusion but through our data it seems like air source water heaters would be more applicable in the settings of a single apartment with limited space and smaller water use. The solar thermal system would be more applicable in a four or more person house that uses more water and has more space for the system. It seems that for households in-between these two extremes it would be best to fill out the excel spread sheet we created so that the needs of the client can be better quantified.
Broad conclusion: For low water use households such as single person dwellings or water conscious users, heat pumps appear to be a very smart decision. However, high occupancy homes and high water use needs would be most positively impacted by solar thermal systems.
## Discussion
Heat Pump
The GE air source heat pump was chosen for this project because it had a high energy factor and for its easy installation. The installation of the GE air source heat pump is exactly like a regular water heater and costs about $400 dollars for professional installation. The GE unit switches automatically from four different settings. For most of the time it will be in "eheat" setting which means that only the air source heat pump is working. When a the water usage goes above the peak demand of 63 gallons it goes into hybrid mode which is the air source heat pump and the standard hot water heating component working as one to keep up with demand. There is a standard electric mode in which it works like a regular electric hot water heater and a high demand mode for households with higher than average water use. In this mode the electric hot water heater is working very hard. We decided to stay away from air source hot water heater add on units. These units add on to existing electric water heaters. We decided to stay away from the add on units after a talk with Maples Plumbing in Eureka. According to Maples Plumbing it doesn't make sense to buy these units because although upfront they are about half the price of the combined air source hot water heater unit, they have a high cost in installation. These attachable air source heat pumps also have a lifespan of 20 years, and the tanks on the electric water heaters have a life span of about 10 to 15 years. So the limiting factor is the electric hot water heater tank. Unless a person just installed a new electric hot water heater this technology wouldn't be practical according to Maples Plumbing. It is important to note in the final cost of the air source heat pump we did include a tax rebate from the Federal government. this rebate comes back to the buyer when they fill out their federal a tax form. Under the current tax rebate a person can obtain 30% of the cost of the unit and installation up to$1500 dollars.
Benefits of 'Air Source Water Heater' Downsides to 'Air Source Water Heater' Lifespan as long as regular water heater (15 yrs) Air temp must be above 40 degrees F for the heat pump unit to work Installation is like regular electric hot water heater Less efficient under high constant demand for water Little maintenance (filters changed once a month) Needs to be in an enclosed area (cant be outside) Federal tax rebate of 30% of cost of unit and installation up to $1500 The room must be at least 10' x 10' x 7' or larger (because it draws in surrounding air) Reduces carbon footprint Solar Thermal Solar thermal panels work extremely well in very sunny climates and less so in cloudy climates. Systems capturing lots of light can supply most if not all of your hot water needs. Continuous cloudy days will cause your back up heater to be working more. There is a point where the solar thermal system doesn't get enough sunlight every day to heat very much of your water. This means that the cost of heating your water (using electric, gas or other source) will be inversely proportionate to the amount of sunlight you get. More sunlight= back up system working less, less sunlight=back up system working more. In the fog belt region of Humboldt County, CA, solar thermal panels have a long buyback time (years it takes for water heating savings to accumulate to pay off entire cost of system). For other areas in Humboldt county such as the more sunny mountainous regions (especially during summer months, winter months not so much) solar thermal system could play a larger role in water heating needs. Solar Thermal systems are ideal for heavy hot water loads. Multiple person houses would do well to choose solar thermal systems over air source heat pumps. Federal tax rebates are currently available for solar thermal systems. There are several qualifications that must be met in order to get the rebate. The requirements are as follows; at least half of all energy used in the dwelling must come from solar, the system installed must be SRCC certified and only costs on the solar thermal system are eligible for rebates. If your residence meets these requirements, 30% of the entire cost of the system can be refunded on your next federal tax bill. If the 30% rebate is greater than what you pay in federal taxes in the year of installation, 100% of your federal taxes are refunded to the purchaser. Benefits of Solar Thermal System Downsides to Solar Thermal System Long lifespan of system-30years Expensive start up costs-approx.$4000-\$5500 Easily expandable-more panels are easy to place in series Long buyback time in cloudier climates- about 20 years Efficient for high water use Difficult to install without prior experience No maintenance-system is self-sufficient (assuming no damage) Much more complicated system than conventional water heaters Energy self-reliance- reduces costs of heating Doesn't provide 100% of water needs at high demand volumes or low sunlight
This section is a list of several future steps to improve the accuracy and adaptability of the spreadsheet. Some of these suggestions are slight alternations to variables currently in the spreadsheet while some ideas are completely new.
-The values for sunlight per latitude do not take in to account the seasonal changes. In areas that get very little sunlight during winter months (winter in northern hemisphere) the back up system will be working the majority of the time. During summer months in high latitude areas, continuous sunlight for many hours every day will heat water needs well above what the household will use. The total amount of sunlight hitting the panels does not directly equate to how much water heating costs will be taking in to account seasonal variation.There needs to be a way to equate the seasonal differences. In extremely cold climates, these systems become inappropriate due to low collection rates and excessive costs using a back up system. There is a threshold around 60 degrees latitude where the winter months have too little sunlight for thermal systems to provide any input of heat. Any system not being used for portions of the year might not be the most economical solution.
-Pollution from system manufacture: We did not examine the materials and production externalities from these two systems. The heat pump requires replacement every 15 years where as solar thermal replaced every 30 years. Don't know pollution externalities, recycling potential of materials
-Ground source heat pumps were not examined in this project. They are another type of heat pump that is commonly used for hot water heating. High installation costs with this system
-Climate variation could be a little more concrete- it was difficult to find consistent sunlight values for regions, we just used NOAA national weather service values, they seemed most consistent
-Other types of hot water heaters could be included: On-Demand water heaters could be coupled with either of the examined systems
-Cost outputs could incorporate federal and state tax rebates.
## References
sizing for a solar system http://www.fsec.ucf.edu/en/consumer/solar_hot_water/pools/sizing.htm
California renewables portfolio http://web.archive.org/web/20160102050109/http://www.cpuc.ca.gov:80/PUC/energy/Renewables/index.htm
EIA energy outlook for 2009 http://www.eia.doe.gov/oiaf/ieo/world.html
Solar thermal energy applications https://www.appropedia.org/Solar_thermal_energy_%28original%29
Heat-Transfer Fluids for Solar Water Heating Systems http://web.archive.org/web/20120702213242/http://www.energysavers.gov:80/your_home/water_heating/index.cfm/mytopic=12940
Solar Water Heater Energy Efficiency http://web.archive.org/web/20120814093604/http://www.energysavers.gov:80/your_home/water_heating/index.cfm/mytopic=12900
Evaluating Your Site's Solar Resource for Solar Water Heating http://web.archive.org/web/20120822023447/http://www.energysavers.gov:80/your_home/water_heating/index.cfm/mytopic=12870
Siting Your Solar Water Heating System's Collector http://web.archive.org/web/20120823032408/http://www.energysavers.gov:80/your_home/water_heating/index.cfm/mytopic=12890
Notes from class presentation on solar thermal heating https://www.appropedia.org/Solar_Thermal_Panels
Solar Water Heating System Freeze Protection http://web.archive.org/web/20120815110611/http://www.energysavers.gov:80/your_home/water_heating/index.cfm/mytopic=12960
southface.org
Heat pumps: Michael Winkler, here at RCEA: 7 >>zero << 7-269-1>>seven<<00 Solar thermal: Ben Scurfield, Scurfield Solar: 4>>four<<3-07>>five<<9 http://web.archive.org/web/20190119081032/https://www.scurfieldsolar.com/ Alchemy Construction: http://www.alchemyinc.com/ Solar H2OT: http://web.archive.org/web/20191002001339/http://www.solarhotwaterplus.com:80/index.htm Newsflash: we're hosting a workshop tomorrow night by Solar H2OT on solar hot water systems, where you can get an introduction to one approach by a local installer. Here's a link to the announcement: http://www.redwoodenergy.org/EventsDetail.asp?EventDate=10/14/2009&EventID=222&Rec=1&DateID=294 Dana Boudreau Operations Manager Redwood Coast Energy Authority 707.2>>six<<9.17>>zero<<0 www.redwoodenergy.org
Page data
Type Project heat pump, solar, solar thermal, solar heating SDG07 Affordable and clean energy, SDG11 Sustainable cities and communities adam, Devan Hemmings, Robert Duncan, Ryan 2009 CC-BY-SA-3.0 Cal Poly Humboldt English (en) 7,277 adam, Devan Hemmings, Robert Duncan, Ryan (2009). "RCEA Solar thermal vs Heat pump". Appropedia. Retrieved August 9, 2022.
Cookies help us deliver our services. By using our services, you agree to our use of cookies.
|
{}
|
## Craps Variants
Craps is a suprisingly fair game. I remember calculating the probability of winning craps for the first time in an undergraduate discrete math class: I went back through my calculations several times, certain there was a mistake somewhere. How could it be closer than $\frac{1}{36}$? (Spoiler Warning If you haven’t calculated these odds for yourself then you may want to do so before reading further. I’m about to spoil it for you rather thoroughly in the name of exploring a more general case.
|
{}
|
# Local fullwidth environment in symmetric document with margin
In a symmetric twoside document with 5cm margin on the outer side of each page, I want to create an enviroment where I can place locally a piece of text of full page width. Any solution that I have tried so far does not create the desired effect. For example the custom narrow environment shown in the MWE adds margin on the current page, but when the text goes to the next page, it vanishes at the edge.
\documentclass[a4paper,twoside,11pt,symmetric]{book}
\usepackage[no-math,cm-default]{fontspec}
\usepackage{xunicode}
\defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase}
\setmainfont[Mapping=tex-text,Numbers=Lining,Scale=1.0,BoldFont={Times New Roman Bold}]{Times New Roman}
\defaultfontfeatures{Ligatures=TeX}
\usepackage{lipsum}
\usepackage[inner=2.00cm, top=3.00cm, bottom=2.00cm]{geometry}
\geometry{textwidth=12cm,marginparsep=5mm,marginparwidth=5cm}
\newenvironment{narrow}[2]{%
\begin{list}{}{%
\setlength{\leftmargin}{#1}%
\setlength{\rightmargin}{#2}}%
\item[]}{\end{list}}
\begin{document}
\chapter{Chapter 1}
\section{Section 1}
\lipsum\lipsum
\begin{narrow}{0cm}{-5cm}
\lipsum
\lipsum
\lipsum
\end{narrow}
\end{document}
Is it possible to adjust the textwidth no matter if the page is even or odd?
• The changepage package defines an adjustwidth environment. – Bernard Nov 2 '18 at 22:24
• I used adjustwidth but the problem with the next page remains. It's because it changes the left and right margins. I want to be able to change locally the inner and outer margins of the even and odd pages. – mac Nov 2 '18 at 22:29
• I didn't understand what a symmetric document means. You do want a piece of text which extends on the full paper width, or do I misunderstand? – Bernard Nov 2 '18 at 22:45
• Yes that's right. Without getting the result as shown in the pictures. – mac Nov 2 '18 at 23:02
• You didn't explain what ‘symmetric’ docume nt means. – Bernard Nov 2 '18 at 23:16
|
{}
|
Informative line
### Properties Of Definite Integral
Learn definition properties of definite integrals. Practice example using the properties of definite integrals.
# Reversing the Limits and Zero Integral Property of Definite Integrals
(1) $$\displaystyle\int\limits^b_af(x)dx=\,–\displaystyle\int\limits^a_bf(x)dx$$
If we interchange the limits the integral becomes negative of itself.
The value of $$\Delta x=\dfrac{b–a}{n}$$ changes sign as $$\dfrac{b–a}{n}$$ becomes $$\dfrac{a–b}{n}$$.
e.g. $$\displaystyle\int\limits^3_2\dfrac{1}{x^2}dx=\,–\displaystyle\int\limits^2_3\dfrac{1}{x^2}dx$$
(2) $$\displaystyle\int\limits^a_af(x)dx=0\to$$ If upper and lower limits are same the value is 0.
#### For some function $$'f'$$ if $$\displaystyle\int\limits^{7}_{–5}f(x)dx=\dfrac{2}{3}$$ then the value of $$\displaystyle\int\limits^{–5}_{7}f(x)dx$$ is
A 8
B $$\dfrac{7}{2}$$
C $$\dfrac{5}{2}$$
D $$\dfrac{–2}{3}$$
×
$$\displaystyle\int\limits^b_af(x)dx=\,–\displaystyle\int\limits^a_bf(x)dx$$
$$\Rightarrow\,\displaystyle\int\limits^{–5}_{7}f(x)dx=\,–\displaystyle\int\limits^{7}_{–5}f(x)dx=\dfrac{–2}{3}$$
### For some function $$'f'$$ if $$\displaystyle\int\limits^{7}_{–5}f(x)dx=\dfrac{2}{3}$$ then the value of $$\displaystyle\int\limits^{–5}_{7}f(x)dx$$ is
A
8
.
B
$$\dfrac{7}{2}$$
C
$$\dfrac{5}{2}$$
D
$$\dfrac{–2}{3}$$
Option D is Correct
# Property of Definite Integral
$$\displaystyle\int\limits^{b}_{a}c\,dx=c\,(b–a)$$ = area of rectangle whose height is $$'c'$$ and width is (b – a).
#### The value of $$\displaystyle\int\limits^{7}_{–5}3\,dx$$ is
A 28
B 36
C –18
D $$\dfrac{1}{6}$$
×
$$\displaystyle\int\limits^{b}_{a}c\,dx=c\,(b–a)$$
$$\displaystyle\int\limits^{7}_{–5}3\,dx=3\,\left(7–(–5)\right)$$
$$=3(7+5)$$
$$=36$$
### The value of $$\displaystyle\int\limits^{7}_{–5}3\,dx$$ is
A
28
.
B
36
C
–18
D
$$\dfrac{1}{6}$$
Option B is Correct
# The Constant Multiple Property of Definite Integral(Linearity of Definite Integrals)
$$\displaystyle\int\limits^{b}_{a}c\,f(x)dx=c\int\limits^{b}_{a}f(x)dx$$
Where $$c$$ is constant and does not depend on $$x$$.
#### If $$\displaystyle\int\limits^{8}_{2}f(x)dx=18$$ and $$\displaystyle\int\limits^{8}_{2}g(x)dx=–2$$ then find value of $$\displaystyle\int\limits^{8}_{2}\left(5g(x)–3f(x)\right)dx$$.
A –64
B 72
C 1
D –5
×
$$\displaystyle\int\limits^{8}_{2}\left(5g(x)–3f(x)\right)dx$$ $$=\displaystyle\int\limits^{8}_{2}5g(x)dx\,–\int\limits^{8}_{2}3f(x)dx$$
$$=\displaystyle5\int\limits^{8}_{2}g(x)dx\,–3\int\limits^{8}_{2}f(x)dx$$
$$=5×(–2)\,–3×18$$
$$=–10–54$$
$$=–64$$
### If $$\displaystyle\int\limits^{8}_{2}f(x)dx=18$$ and $$\displaystyle\int\limits^{8}_{2}g(x)dx=–2$$ then find value of $$\displaystyle\int\limits^{8}_{2}\left(5g(x)–3f(x)\right)dx$$.
A
–64
.
B
72
C
1
D
–5
Option A is Correct
# Additive Interval Property of Definite Integrals
$$\displaystyle\underbrace{\int\limits^{b}_{a}f(x)dx}_{\text{Area (1)}}+\underbrace{\int\limits^{c}_{b}f(x)dx}_{\text{Area (2)}}\,=\displaystyle\underbrace{\int\limits^{c}_{a}f(x)dx}_{\text{Area (1)+(2)}}$$ ...(1)
• Can also be written as $$\displaystyle\int\limits^{b}_{a}f(x)dx=\int\limits^{c}_{a}f(x)dx\,–\displaystyle\int\limits^{c}_{b}f(x)dx$$
#### If $$\displaystyle\int\limits^{4}_{2}f(x)dx=–11$$ and $$\displaystyle\int\limits^{6}_{2}f(x)dx=5$$ then the value of $$\displaystyle\int\limits^{6}_{4}f(x)dx$$ equals
A 16
B 82
C –4
D $$\dfrac{1}{5}$$
×
$$\displaystyle\int\limits^{4}_{2}f(x)dx+\int\limits^{6}_{4}f(x)dx\,=\displaystyle\int\limits^{6}_{2}f(x)dx$$
$$\Rightarrow\,\displaystyle–11+\int\limits^{6}_{4}f(x)dx=5$$
$$\Rightarrow\displaystyle\int\limits^{6}_{4}f(x)dx=5+11$$
$$=16$$
### If $$\displaystyle\int\limits^{4}_{2}f(x)dx=–11$$ and $$\displaystyle\int\limits^{6}_{2}f(x)dx=5$$ then the value of $$\displaystyle\int\limits^{6}_{4}f(x)dx$$ equals
A
16
.
B
82
C
–4
D
$$\dfrac{1}{5}$$
Option A is Correct
# The Sum Property of Definite Integral
$$\displaystyle\int\limits^{b}_{a}\left(f(x)+g(x)\right)dx$$$$=\displaystyle\left(\int\limits^{b}_{a}f(x)dx\right)+\left(\int\limits^{b}_{a}g(x)dx\right)$$
(The integral of a sum is the sum of integrals.)
#### If $$\displaystyle\int\limits^{5}_{2}f(x)dx=7$$ then the value of $$\displaystyle\int\limits^{5}_{2}\left(3+f(x)\right)dx$$ is
A –18
B 16
C 24
D $$\dfrac{1}{4}$$
×
$$\displaystyle\int\limits^{5}_{2}\left(3+f(x)\right)dx$$$$=\displaystyle\int\limits^{5}_{2}3\,dx+\int\limits^{5}_{2}f(x)dx$$
$$\displaystyle\left(\int\limits^{b}_{a}(f(x)+g(x))dx=\displaystyle\int\limits^{b}_{a}f(x)dx+\int\limits^{b}_{a}g(x)dx\right)$$
$$=3(5–2)+7$$
$$=9+7$$
$$=16$$
### If $$\displaystyle\int\limits^{5}_{2}f(x)dx=7$$ then the value of $$\displaystyle\int\limits^{5}_{2}\left(3+f(x)\right)dx$$ is
A
–18
.
B
16
C
24
D
$$\dfrac{1}{4}$$
Option B is Correct
# The Difference Property of Definite Integrals
$$\displaystyle\int\limits^{b}_{a}\left(f(x)–g(x)\right)dx$$ $$=\displaystyle\int\limits^{b}_{a}f(x)dx\,–\int\limits^{b}_{a}g(x)dx$$
(The integral of difference is equal to the difference of integrals.)
#### If $$\displaystyle\int\limits^{5}_{1}f(x)dx=17$$ and $$\displaystyle\int\limits^{5}_{1}g(x)dx=7$$ then the value of $$\displaystyle\int\limits^{5}_{1}\left(f(x)–g(x)\right)dx$$ is
A 72
B 10
C –81
D 4.2
×
$$\displaystyle\int\limits^{5}_{1}(f(x)–g(x))dx=\displaystyle\int\limits^{5}_{1}f(x)dx\,–\int\limits^{5}_{1}g(x)dx$$
$$\left(\displaystyle\int\limits^{b}_{a}(f(x)–g(x))dx=\displaystyle\int\limits^{b}_{a}f(x)dx\,–\int\limits^{b}_{a}g(x)dx\right)$$
$$=17–7$$
$$=10$$
### If $$\displaystyle\int\limits^{5}_{1}f(x)dx=17$$ and $$\displaystyle\int\limits^{5}_{1}g(x)dx=7$$ then the value of $$\displaystyle\int\limits^{5}_{1}\left(f(x)–g(x)\right)dx$$ is
A
72
.
B
10
C
–81
D
4.2
Option B is Correct
|
{}
|
# Slash Fonts
0 Icon, CSS Class, & Unicode. Be there for the moments that matter with superfast fibre broadband, TV packages, BT Sport & mobile deals from BT. Is there a Chinese computer font that shows the difference between the dot and slash more clearly? I had heard that there was one that was considered more standard, but I haven't been able to find it. Download the font below and install it on your computer. you can use this "Slash Font" with Software Image Editor like Adobe Photoshop, Adobe Illustrator, and More. Based on font metric, EpicSlash Regular has usweight 400, width 5, and italic angle 0. I need a default monitor font with a slashed zero but all font maps only show abc. 1 font problems. Identify fonts by appearance, find fonts by name, find picture or symbol fonts, find fonts by designer or publisher. The particular fonts used in these charts were provided to the Unicode Consortium by a number of different font designers, who own the rights to the fonts. One font may only have 1/2-3/4 but be missing the other fractions. Slash is an experimental, all-caps, no-nonsense display typeface designed by Superfried. 11 at a home at 72 Davenport St. The latest Tweets from Google Fonts (@googlefonts). Free script fonts - also known as cursive fonts - are a popular style of typography, and are especially prevalent on the web. We present our fonts based on the Medieval age, prepared for your best text. /*! Theme Name: Indiewire Author: Voce Platforms Author URI: http://voceplatforms. Use fonts that are available in both the Mac and Windows operating environments. DejaVu Fonts is an attempt to include all Unicode glyphs in a Vera compatible font. We've rounded up the best. I need a default monitor font with a slashed zero but all font maps only show abc. Download 2 Slash Fonts. The "File name" is the name that you want to call just simple one word and follow at the end with ". Have a customer requesting zeros with slashes in the human readable. Fonts are used to print text on various output devices and to display text on the screen. 1 ELFRING FONTS INC. Kruti Dev 010 Regular. Slash Font is a fun bold handwriting font, fresh & modern style. EverQuest II has a number of commands and personalization options that can make every player's experience unique and in some ways tailored specifically to their liking. Looking for Monster Slash font? Download it free at FontRiver. This font format is supported on Android and Linux, but doesn't work on macOS or Windows. Additionally, the operating systems change on occasions the default fonts they provide, so the character might not look the same on your operating system. ENTER FONT SIZE. Free and cool fonts for you. For me however, I'm trying to do a large PUT req. Download the Code 39 font package. Download slash regular font with regular style. TeX uses math fonts in math. File name Font Format Version Glyphs Size; slashhmkbold. To change the size of a font use a new font size parameter. If you are seeing this message, you probably have an ad blocker turned on. Download Hindi Fonts (Devanagari, Nepali, Sanskrit and Marathi Font) like Devlys and Kruti Dev For Free. Personalize hundreds of music stations, as well as news, sports and comedy options. By default the slash (/) key acts as a shortcut to the Excel menu bar. This section compares math font selection in LaTeX and Unicode. In English, we write from left to right. Over the years many people have created these various slashed zero fonts referred to "zero fonts, windows slashed zero fonts, or slashed zero fonts, and I often see messages show up on a reflector as to where to find them. Type a forward slash (/). It depends on the fonts you are using. ttf font by typing your own text. :: Font Preview Use the text generator tool below to preview E-Slash font, and create awesome text-based images or logos with different colors and hundreds of text effects. Click on the font you want to add it to the "My Fonts" list at the right. In addition, Python’s built-in string classes support the sequence type methods described in the Sequence Types — str, unicode, list, tuple, bytearray, buffer, xrange section, and also the string-specific methods described in the. login or sign up for a free account. online shopping has now gone a long method; it has changed the way consumers and entrepreneurs do business today. Easy to setup just make 4 folders in main directory in this order Data > Local > Font > Latin and place the downloaded font files in the latin folder. View Fontmaker Slash font information, font preview, character sets. Download slash font with badass black style. One font may only have 1/2-3/4 but be missing the other fractions. Nordic (Free Font) Nordic is a unique modern font-family based on scandinavian runes and elegant geometric forms that can fit everyday design needs. Next to each glyph name the Unicode, ASCII and GID number are listed. Every font is free to download, and 2 are 100% free for commercial-use! login or sign up for a free account. (fwd/rev), clear body. otf) format. Press Ctrl and the space bar to clear the formatting for the next step. ASCII Table and Description. With over 8,000 freeware fonts, you've come to the best place to download fonts! Most fonts on this site are freeware, some are shareware or linkware. Best Answer: This is called a "slashed zero", and while not every font has one, many of the fonts installed in your system do, and you can access the slashed zero in Windows in the way AnalProgrammer mentions. Slash font is like the name sugests rock & roll to the fullest. 10074 users have given the font a rating of 4. Tattoo Font Generator allows you to generate tattoo lettering designs by selecting the font that you want to use, the size, color and the text. FF Unit Rounded. Since the slashes should only appear inside the respective slashed zeros, it would not make sense for them to appear as separate glyphs in the exported font. com Slash is an experimental, all-caps, no-nonsense display typeface designed by Superfried. net offering 1000's of FREE fonts to download to help the millions of designers across the globe expressing their creativity with much more diversity. Did you know you can use slash commands in Discord text chat to search XIVDB and Gamer Escape? Try it out. The latest Tweets from Google Fonts (@googlefonts). Bold text is sometimes used for list headers such as in this list of font styles. Hey, get with it, and get SLASH some more! He doesn't want just a couple - he wants them ALL. Rockstar designers who look at every detail and want to create something special. If you omit line-height, you must also omit the slash, otherwise the entire line will be ignored. AU Alumni Shares Success Stories and Career Strategies in Open Forum. The word originates from the Tahitian tattau. Please consider disabling it to see content from our partners. The particular fonts used in these charts were provided to the Unicode Consortium by a number of different font designers, who own the rights to the fonts. The Ultimate Font Download is the largest and best selling font collection online. Other varieties of soft sans serifs can be found at the bottom of this list. Font Squirrel relies on advertising in order to keep bringing you great new free fonts and to keep making improvements to the web font generator. File name Font Format Version Glyphs Size; slashhmkbold. Be there for the moments that matter with superfast fibre broadband, TV packages, BT Sport & mobile deals from BT. The font's design is simple, clean, and geometric, with strokes that all have rounded ends. It depends on the fonts you are using. Is there a Chinese computer font that shows the difference between the dot and slash more clearly? I had heard that there was one that was considered more standard, but I haven't been able to find it. Aqua | Free Font Download. Note Although "slash" is most often used to describe the forward slash , it could be used to describe either a forward slash or backslash. These include punctuation marks and other symbols used in typography. The division sign or is written as a horizontal line with dot above and dot below (obelus), or a slash or horizontal line: ÷ / — The division sign indicates division operation of 2 numbers or expressions. However current use of network laser printers that use PC style fonts caused the demise of the slashed zero in most companies - only a few configured laser printers to use Ø. Imagine the cursor walking across the screen as a person. Formerly resident in Colombia. Rockstar designers who look at every detail and want to create something special. Steps to Resolve. For example, below, the stems of the "N" are debossed, while the slash itself is die-cut. The font embodies the rassion for trav- elling, surfing, nature, and everything else made with love. Over the years many people have created these various slashed zero fonts referred to "zero fonts, windows slashed zero fonts, or slashed zero fonts, and I often see messages show up on a reflector as to where to find them. Tattoo Font Generator allows you to generate tattoo lettering designs by selecting the font that you want to use, the size, color and the text. Hi!! I just installed the last version of Moodle (I already had one some motnhs ago) but now when a select the Splash Theme everything appears on bold and italic fonts and it wasnt the way it was display on my computer somo weaks ago. As David M said in the comments, it mirrors the typesetting tradition of specifying typeface sizes as " x pt on y pt" to denote the glyph size on line height. For example, on the character 下 the third stroke looks like a slash down and to the right, but it is actually a dot when handwritten. Our marketplace allows artists to register and list their fonts for sale and include them in font bundles and deals. Tap on Display. Outline fonts. Fonts, Scope, and Symbols For the most part, a control sequence is a back slash followed by the desired symbol or its designator. When you're finished, scroll to the end of the songs, and you can check out more of the. I put it in the WL program directory and in the Windows "Font" directory. Click the blue OK button to close. Slash Font Family - Fonts. Font Awesome version 4. 0 Icon Search Tool. Have Heart is a set of 2 hand-made marker pen fonts, designed to combine perfectly and allow you to create stunning hand-lettering quickly and easily. Download the Code 39 font package. Sculptured from solid blocks, it features distinct incisions and intricate curves to articulate the separate glyphs, resulting in a high impact face. you can use this “Slash Font” with Software Image Editor like Adobe Photoshop, Adobe Illustrator, and More youcan see the tutorial how to use this font in. You need to use one that does. Rounded typefaces go in and out of style. It like to party hard and it might get load when its in the room. what font(s) has slashed zero in it most fonst the zero and "o" are hard for me tell apart. This tool is available for free on Microsoft’s official website. Permission. Save over $45 on 1/16 Slash 4x4! Shop Now > FREE standard shipping on all orders over$99. Font Meme is a fonts & typography resource. If you share the workbook with others and you want others to see that slashed 0, you'll have to share the font, too. The cross out font Question Posted Tuesday January 16 2007, 6:45 pm hello fello advicers! what font is the cross out font that i can use on myspace? could someone give me the code? thanks :]. Just type / at the begging of a message and post information and links directly into your text channel. Slash is a free font designed to be unique and stand out of the crowd. you can use this “Slash Font” with Software Image Editor like Adobe Photoshop, Adobe Illustrator, and More youcan see the tutorial how to use this font in. Does anyone know if a user font exists that contains the grace note symbol with a slash such that it could be manually inserted as a text item? (Or any other workarounds for that matter?) Thanks! P. Comes with capital letters, numbers and common punctuation symbols, this font is suitable for typography-centric designs where you need to express the message loudly. These are used to embed seldom-used characters into documents such as this character, which is used in the Spanish, language to "frame" a question. TeX uses math fonts in math. This font available for Windows, Linux and MacOS. I put it in the WL program directory and in the Windows "Font" directory. You can find more information about Kruti Dev 010 Regular and it's character map in the sections below. Re: Font Needed - "Zero" with slash not sure if this is what you are looking for - but if it is something similar to the diameter symbol, then type in "%%c"after where you want that. How to Change Fonts and Icons on Samsung Galaxy S9 and S9+ 1. Progress on Hot Springs Island continues. ) > >Bubey wrote: >> >> How do I make a zero appear with a slash ? >> Thanks ! Gord Dibben MS Excel MVP. Download 10,000 fonts today. Over the years many people have created these various slashed zero fonts referred to "zero fonts, windows slashed zero fonts, or slashed zero fonts, and I often see messages show up on a reflector as to where to find them. html, index. Download Slash freebie-archive free. From these fonts we selected 20 which showcase the variety and diversity of Thai fonts found in Thailand. Instant and unlimited access to 10,000 fonts. The code is given in the lower left corner of the character map when a character is selected. Ready to personalize and share in Facebook and Twitter. Fonts have font styles such as italic, bold, and bold italic. net offering 1000's of FREE fonts to download to help the millions of designers across the globe expressing their creativity with much more diversity. The value x-large/110% refers to font-size and line-height. This Sliding Bar can be switched on or off in theme options, and can take any widget you throw at it or even fill it with your custom HTML Code. Font Squirrel relies on advertising in order to keep bringing you great new free fonts and to keep making improvements to the web font generator. Sculptured from solid blocks, it features distinct incisions & intricate curves to articulate the separate glyphs, resulting in a high impact typeface. Learn more about popular topics and find resources that will help you with all of your Apple products. Slash is a sans serif display font that includes all caps, numbers, special characters and ligatures. The division sign or is written as a horizontal line with dot above and dot below (obelus), or a slash or horizontal line: ÷ / — The division sign indicates division operation of 2 numbers or expressions. in Fancy > Horror 21,575 downloads (2 yesterday) Donationware. Increasing or decreasing the font size will enhance readability. AUTHORIZED USER AND USE You alone, as the purchaser ofthe fonts, are authorised to use the fonts under the licensefor personal, business or distributed projects. This problem is likely caused by accidentally switching your system's active keyboard layout or input language. The use of TrueType fonts allows the delivery of "Symbol Favorites" for most of the special. It has been downloaded 260244 times. These include punctuation marks and other symbols used in typography. Page 3 of 65 - SkBBP - Skyrim Blacksmith and Balance Project - posted in File topics: My carry weight seems to be messed upaccording to my inventory i'm carrying 168 weight unites, but that doesn't seem to add up when I look at the stuff in my inventory. Instructions on how to type Nordic o slash lowercase for Windows, Mac, and in HTML. That means not all fonts support all fractions equally. Based on font metric, EpicSlash Regular has usweight 400, width 5, and italic angle 0. quilt_patches' --- debian/. You need to use one that does. Slash is a display font family. Review time! You use HTML tags in PHP by using the print tags and normal, everyday HTML. If you share the workbook with others and you want others to see that slashed 0, you'll have to share the font, too. Slash is an experimental, all-caps, no-nonsense display typeface designed by Superfried. Kaleidoscope Fall Edition. Download slash font with badass black style. Slash 4X4 Platinum: 1/10 Scale 4WD Electric Short Course Truck. Every font is free to download, and 2 are 100% free for commercial-use! login or sign up for a free account. [Saturdays NYC Slash Graphic T-Shirt] ♦♦Discount Online♦♦ Saturdays NYC Slash Graphic T-Shirt [Best Buy]. Basic message formatting is easy, but there are a few quirks you'll want to understand before making your messages more complex. Download Free Fonts. , display of 11⁄12 as 11 ⁄ 12), although this may not yet be supported in certain environments and fonts. The variety of fonts provided with the Code39 font package and IDAutomation's Code39Mod43() function in IDAutomation's font encoder tools allows for implementation of this standard. Fonts must be best. quilt_patches 1970-01-01 00. With the customizer improvements added in WordPress 4. Fonts Slash font download for free, in ttf for windows and mac! Fonts Slash in Gothic category. Re: Font Needed - "Zero" with slash not sure if this is what you are looking for - but if it is something similar to the diameter symbol, then type in "%%c"after where you want that. :: Font Preview Use the text generator tool below to preview E-Slash font, and create awesome text-based images or logos with different colors and hundreds of text effects. 6 / 2 = 3. Slash is an experimental, all-caps, no-nonsense display typeface designed by Superfried. com Slash is an experimental, all-caps, no-nonsense display typeface designed by Superfried. com is a great collection of free fonts. Slash Outline Font - What Font Is - Download Slash Outline font. At NYC Music Services, we've released seven new free chord symbol fonts for Sibelius. A font is a graphic design that is applied to a collection of numbers, symbols, and characters. The MTV Slash 4X was the next logical step after the brilliant performance of MTV Slash. Convert your texts to cool and weird styles, with different alphabets, quickly and completely free. I also like the squared dots he’s used in the semi-colon to clearly differentiate it from the comma. Sculptured from solid blocks, it features distinct incisions and intricate curves to articulate the separate glyphs, resulting in a high impact face. Premium Font Deals. The Microsoft Typography group researches and develops font technologies and supports the development of OpenType fonts by independent type vendors. SHARE IT This script custom font is from Ronald Vermeijs. Rock On Font. Isonorm is a font whose forms are very legible by both the human eye and machine readers. By default, there are only very limited number of fonts in your Instagram Stories, Bio, Captions and Comments. otf) format. Epic Slash by Xerographer Fonts. Technically these character shapes are called glyphs. com is a great collection of free fonts. Convert your texts to cool and weird styles, with different alphabets, quickly and completely free. This fonts available in two different style, Medium and Slanted. In addition, Python’s built-in string classes support the sequence type methods described in the Sequence Types — str, unicode, list, tuple, bytearray, buffer, xrange section, and also the string-specific methods described in the. The latest Tweets from Google Fonts (@googlefonts). The font is bold and have a bit more. Here you can find all the tags in this proposal, including tips on usage and limitations. Does anyone know if a user font exists that contains the grace note symbol with a slash such that it could be manually inserted as a text item? (Or any other workarounds for that matter?) Thanks! P. Division Slash on various operating systems Please note that the image above is computer generated and not all images are curated, so certain errors might occur. After the Pori Chords font for Sibelius was introduced in December 2018, and especially since the major release of the complete Pori suite and full support for angled slash chords (ASC) in Pori, Norfolk, and Norfolk Sans in April 2019, the fonts have been downloaded thousands of times and are already finding a home in charts around the globe. In English, we write from left to right. Slash Font suitable for nameplate, logo, branding, greeting card, poster, book cover, and any design that you create. With over 130,000 fonts available to license for any project, MyFonts is the largest font marketplace around. Under Language for non-Unicode programs, check if the System locale is set to Arabic (Saudi Arabia). Hence the plan to slash the premium rate charged to builders for commercial properties. Hello, I downloaded the Slash 0 Font file (HAM). It is designed for type lovers, rockstar designers who look at every detail and want to create something special. Be aware that there need not be a harmonic relationship between the chord above the slash and the note below it. 11 at a home at 72 Davenport St. Slash Forward (Some URLs are Better Than Others) Sadly enough, a frequently overlooked step in this process is the structure of your links—the actual URLs you’ll be using to point to items on your site. Font Awesome version 4. On my system, DIVISION SLASH and FRACTION SLASH render identically, although differently from SOLIDUS. Also a morse code fonts that could be handy. Type a forward slash (/). Most Ham Radio users want fonts that have a slash zero so I have put together a bunch of ham fonts with slash zero. The Ultimate Font Download is the largest and best selling font collection online. See also Edit. /*! Theme Name: Indiewire Author: Voce Platforms Author URI: http://voceplatforms. It depends on the fonts you are using. Review time! You use HTML tags in PHP by using the print tags and normal, everyday HTML. Click on the small picture of SLASH to jump directly to a song. For windows users place it in C:\WINDOWS\FONTS. Slash, as its name suggests, is bold and makes no apologies for it. Every font is free to download, and 19 are. With over 130,000 fonts available to license for any project, MyFonts is the largest font marketplace around. The text wrapped in the \emph{} tag will be printed in normal font to make it stand out. In this collection we've gathered together 101 of the best free logo font options out there. font will see that slashed 0. net offering 1000's of FREE fonts to download to help the millions of designers across the globe expressing their creativity with much more diversity. Slash is an experimental, all-caps, no-nonsense display typeface designed by Superfried. These tags can be nested (ie: \textbf{\emph{bold and italicized text here}} will produce bold and italic font. – Heiko Oberdiek Sep 21 '12 at 23:16 |. With over 8,000 freeware fonts, you've come to the best place to download fonts! Most fonts on this site are freeware, some are shareware or linkware. Considerable variation is to be expected in actual fonts. These include punctuation marks and other symbols used in typography. com are either GNU/GPL, Freeware, free for Personal use, Donationware, Shareware or Demo. Simply type your normal Tweet text in the first box and the generator will convert it into a bunch of different fonts which you can copy and paste into your Tweets, or in your Twitter bio, and just about anywhere else on the. View Fontmaker Slash font information, font preview, character sets. All you do is use the exact HTML tags you would normally use in an HTML document, but the difference is that you place those tags in between print(" and "); The only time you deviate from that format is when your HTML contains quotes. Thus, AC/DC iconic logo with a lightning instead of the slash sign inspired the Typodermic studio for creating the Squeeler font named after a famous song of the Australian rock band. Note Although "slash" is most often used to describe the forward slash , it could be used to describe either a forward slash or backslash. Slacker Radio is a free internet radio service, light years away from the one-dimensional playlists that you're used to. ttf: TTF - TrueType — 191. Review time! You use HTML tags in PHP by using the print tags and normal, everyday HTML. PLACE YOUR BETS, WINNER KILLS ALL! From the author of Phantasm Exhumed comes Slash of the Titans, a revealing look at why it took New Line Cinema nearly ten years and four-million-dollars to find the right screenplay for Freddy vs Jason. We use cookies to enable an improved browsing experience, and to share information with our marketing partners. Click on the small picture of SLASH to jump directly to a song. The font is bold and have a bit more. HTML codes to put ASCII special characters on your Web page. Each of the Fonts Unleashed Font Packs contains more than 100 individual fonts. Epic Slash by Xerographer Fonts. We present our fonts based on the Medieval age, prepared for your best text. Org Here's a list of songs for which we have guitar tabs. They are often used for user interfaces, or for back-lit signage. Advanced Concepts is a benchtop manufacturer in Lonsdale, Adelaide. ) > >Bubey wrote: >> >> How do I make a zero appear with a slash ? >> Thanks ! Gord Dibben MS Excel MVP. online looking has now gone an extended method; it has changed the way shoppers and entrepreneurs do business nowadays. Division Sign. Easy to setup just make 4 folders in main directory in this order Data > Local > Font > Latin and place the downloaded font files in the latin folder. Please consider disabling it to see content from our partners. The LH font set contains letters necessary to typeset documents in languages using Cyrillic script. Instructions on how to type Nordic o slash lowercase for Windows, Mac, and in HTML. Independent solo artists can’t do it alone, and I’ve been blessed to have had incredible artistic and technical support. Free Korean Fonts. Performance-focused developers already optimize their images and CSS; it makes sense that we should also optimize web fonts. Other varieties of soft sans serifs can be found at the bottom of this list. This fonts available in two different style, Medium and Slanted. Download slash regular font with regular style. Slash Script Font. This free fonts collection also offers useful content and a huge collection of TrueType face and OpenType font families categorized in alphabetical order. Download Donate to author. We offer savings of up to 96% off fonts on a regular basis. AUTHORIZED USER AND USE You alone, as the purchaser ofthe fonts, are authorised to use the fonts under the licensefor personal, business or distributed projects. Download Fontmaker Slash font for Windows, Mac, Android. Enter some text in the box below, then click the preview button. There are a huge variety of fonts that you can use on twitter - these are only the beginning of what Unicode grants us. Slash Paradise - Photo, picture and image galleries: live and promo photos with Guns N' Roses, Slash's Snakepit, Velvet Revolver and Myles Kennedy. In this collection we've gathered together 101 of the best free logo font options out there. The characters within this story are copyrighted by Tollin/Robbins and the CW network. First, you can find a font that actually uses the slashed zero in it. Also I need the underscore. Are you sure that you haven't defined the slash as the split character in your PROC REPORT statement? I frequently use SPLIT='00'x to avoid having proc report eat any of the real characters in my data or labels. Every Font Awesome 4. Typefaces (often called type families or font families) are collections of fonts that share an overall appearance, and are designed to be used together, such as Adobe Garamond. Need some racing fonts for your wraps or motorsports themed design project? Download these 10 fonts to help get your creativity rolling! Fire Eye. quilt_patches 1970-01-01 00. Make sure "Show all fonts" is selected at the top. The cross out font Question Posted Tuesday January 16 2007, 6:45 pm hello fello advicers! what font is the cross out font that i can use on myspace? could someone give me the code? thanks :]. can download the full version of that font pack by clicking here. Font 1 is easy to read at both small and large scales and has no confusing differences between O’s and zeros, L’s and ones, etc. The "File name" is the name that you want to call just simple one word and follow at the end with ". How to Change Fonts and Icons on Samsung Galaxy S9 and S9+ 1. The font is also a good choice for drafting and architectural purposes, as well as for technical charts and graphics. The text wrapped in the \emph{} tag will be printed in normal font to make it stand out. Easy to setup just make 4 folders in main directory in this order Data > Local > Font > Latin and place the downloaded font files in the latin folder. However, an 'ex' is defined even for fonts that do not contain an "x". Available only in a badass black color, this font has many distinguishing features. In some fonts, you may encounter characters such as:. ENTER FONT SIZE. Tattoos: Designs Pictures & Galleries where you can vote/rate real tribal tattoos or free fonts or get a custom design. Decorative Fonts, Font, Grotesque font, Roman, Sans serif suited to your project, and it does not matter whether it is a printed poster or picture monitor. Slashed Zero Fonts. Fully assembled with low CG chassis, servos, Velineon VXL-3s brushless E. The scaling up does not happen for radio buttons because those rely on pixel sizes and CSS to render the dot at the center of the circle. Changing the font in your terminal is done differently depending on your system and the terminal in use. The OpenType font format is an extension of the TrueType® font format, adding support for PostScript font data. This font come in ttf format and support 70 glyphs. You can find more information about Kruti Dev 010 Regular and it's character map in the sections below. Easy to setup just make 4 folders in main directory in this order Data > Local > Font > Latin and place the downloaded font files in the latin folder. These lead to more effective, predictable, understandable results than font-feature-settings, which is a low-level feature designed to handle special cases where no other way exists to enable or access an OpenType font feature. Did you know you can use slash commands in Discord text chat to search XIVDB and Gamer Escape? Try it out. Download Sword Slash A Sound Effects by Ghetty. For example, below, the stems of the “N” are debossed, while the slash itself is die-cut. The reason you may want to do this is that the slash used in the single-character fonts built into Word (you remember—those created when you type the characters "1/2") uses a slash that is at a different angle than the slash shown when you simply type a slash. You need to use one that does. The code is given in the lower left corner of the character map when a character is selected. As far as I can tell, it's Apple's font and not sold anywhere; you get it when you g. These are free TTF fonts made to work with any PC or Windows Based Operating System. This font come in ttf format and support 70 glyphs. The radio buttons do call for a bullet symbol from the Dashicons font, but move it out of view using a negative text indent. Font Sizes \tiny \scriptsize \footnotesize \small \normalsize \large \Large \LARGE \huge \Huge All of these fonts are listed from smallest to largest. Comment On Bloody Font Generator Categories Most Popular Animated Black Blue Brown Burning Button Casual Chrome Distressed Elegant Embossed Fire Fun Girly Glossy Glowing Gold Gradient Gray Green Heavy Holiday Ice Medieval Orange Outline Pink Plain Purple Red Rounded Science-Fiction Script Shadow Shiny Small Space Sparkle Stencil Stone Trippy. ☀Cheap Reviews☀ Low Prices Chaise Lounge Side Chair End Table ★★On Sale Online★★ Saturdays NYC Embroidered Slash Hoodie ☀☀Cheap Reviews☀☀ ☀☀For Sale Good Price☀☀ If you want to buy Saturdays NYC Embroidered Slash Hoodie Ok you want deals and save. Some fonts provided are trial versions of full versions and may not allow embedding unless a commercial license is purchased or may contain a limited character set. Slash definition is - to lash out, cut, or thrash about with or as if with an edged blade. Rockstar designers who look at every detail and want to create something special. Slash Font by Superfried. Download 10,000 fonts with one click for just \$19. 6 / 2 = 3. Below is an overview of 75 frequently used characters, punctuation symbols or signs that are included in most fonts. Find the same inventory offered here (and more!) over at our partner storefront, MyFonts.
|
{}
|
# One world IAMP mathematical physics seminar
This online seminar takes place on Tuesdays, starting at UTC 14.
Current organisers are Jan Dereziński (Warsaw) and Daniel Ueltschi (Warwick).
Scientific committee: Nalini Anantharaman (Strassbourg), Mihalis Dafermos (Cambridge), Stephan De Bièvre (Lille), Krzysztof Gawedzki (ENS Lyon), Bernard Helffer (Nantes), Vojkan Jaksic (McGill), Flora Koukiou (Cergy), Antti Kupiainen (Helsinki), Mathieu Lewin (Paris Dauphine), Bruno Nachtergaele (UC Davis), Claude-Alain Pillet (Toulon), Robert Seiringer (IST Austria), Jan Philip Solovej (Copenhagen), Hal Tasaki (Gakushuin).
September 29, 2020 Alessandro Giuliani (University Roma Tre) Non-renormalization of the chiral anomaly' in interacting lattice Weyl semimetals Weyl semimetals are 3D condensed matter systems characterized by a degenerate Fermi surface, consisting of a pair of Weyl nodes'. Correspondingly, in the infrared limit, these systems behave effectively as Weyl fermions in 3+1 dimensions. We consider a class of interacting 3D lattice models for Weyl semimetals and prove that the quadratic response of the quasi-particle flow between the Weyl nodes, which is the condensed matter analogue of the chiral anomaly in QED4, is universal, that is, independent of the interaction strength and form. Universality, which is the counterpart of the Adler-Bardeen non-renormalization property of the chiral anomaly for the infrared emergent description, is proved to hold at a non-perturbative level, notwithstanding the presence of a lattice (in contrast with the original Adler-Bardeen theorem, which is perturbative and requires relativistic invariance to hold). The proof relies on constructive bounds for the Euclidean ground state correlations combined with lattice Ward Identities, and it is valid arbitrarily close to the critical point where the Weyl points merge and the relativistic description breaks down. Joint work with V. Mastropietro and M. Porta. Video link: https://zoom.us/j/97884134503?pwd=cmdGTmpZbE9LQjNnWUVjdFdTQ21wUT09 October 6, 2020 October 13, 2020 Svitlana Mayboroda (University of Minnesota) Title TBA Video link: TBA October 20, 2020 Jeremy Quastel (University of Toronto) Title TBA Video link: TBA October 27, 2020 Bruno Nachtergaele (UC Davis) Title TBA Video link: TBA November 3, 2020 Nalini Anantharaman (Strassbourg) Title TBA Video link: TBA November 10, 2020 Peter Hintz (MIT) Title TBA Video link: TBA November 17, 2020 Stefan Hollands (University of Leipzig) Title TBA Video link: TBA November 24, 2020 Roland Bauerschmidt (University of Cambridge) Title TBA Video link: TBA December 1, 2020 Alessandro Pizzo (University of Rome Tor Vergata) Title TBA Video link: TBA December 8, 2020 Katrin Wendland (Albert-Ludwigs-Universität Freiburg) Title TBA Video link: TBA December 15, 2020 Yoshiko Ogata (University of Tokyo) Title TBA Video link: TBA
September 22, 2020 Ian Jauslin (Princeton University) An effective equation to study Bose gasses at both low and high densities I will discuss an effective equation, which is used to study the ground state of the interacting Bose gas. The interactions induce many-body correlations in the system, which makes it very difficult to study, be it analytically or numerically. A very successful approach to solving this problem is Bogolubov theory, in which a series of approximations are made, after which the analysis reduces to a one-particle problem, which incorporates the many-body correlations. The effective equation I will discuss is arrived at by making a very different set of approximations, and, like Bogolubov theory, ultimately reduces to a one-particle problem. But, whereas Bogolubov theory is accurate only for very small densities, the effective equation coincides with the many-body Bose gas at both low and at high densities. I will show some theorems which make this statement more precise, and present numerical evidence that this effective equation is remarkably accurate for all densities, small, intermediate, and large. That is, the analytical and numerical evidence suggest that this effective equation can capture many-body correlations in a one-particle picture beyond what Bogolubov can accomplish. Thus, this effective equation gives an alternative approach to study the low density behavior of the Bose gas (about which there still are many important open questions). In addition, it opens an avenue to understand the physics of the Bose gas at intermediate densities, which, until now, were only accessible to Monte Carlo simulations. Video link: youtu.be/HyRG-PzvpyY September 15, 2020 Victor Ivrii (University of Toronto) Scott and Thomas-Fermi approximations to electronic density In heavy atoms and molecules, on the distances $a \ll Z^{-1/2}$ from one of the nuclei (with a charge $Z_m$), we prove that the ground state electronic density $\rho_\Psi (x)$ is approximated in $\sL^p$-norm by the ground state electronic density for a single atom in the model with no interactions between electrons. Further, on the distances $a \gg Z^{-1}$ from all of the nuclei (with a charge $Z_1,\ldots, Z_m$) we prove that $\rho_\Psi (x)$ is approximated in $\sL^p$-norm, by the Thomas-Fermi density. We cover both non-relativistic and relativistic cases. Video link: youtu.be/O25BT_-XNNE September 8, 2020 Antti Kupiainen (University of Helsinki) Integrability of Liouville Conformal Field Theory A. Polyakov introduced Liouville Conformal Field theory (LCFT) in 1981 as a way to put a natural measure on the set of Riemannian metrics over a two dimensional manifold. Ever since, the work of Polyakov has echoed in various branches of physics and mathematics, ranging from string theory to probability theory and geometry. In the context of 2D quantum gravity models, Polyakov’s approach is conjecturally equivalent to the scaling limit of Random Planar Maps and through the Alday-Gaiotto- Tachikava correspondence LCFT is conjecturally related to certain 4D Yang-Mills theories. Through the work of Dorn,Otto, Zamolodchikov and Zamolodchikov and Teschner LCFT is believed to be to a certain extent integrable. I will review a probabilistic construction of LCFT developed together with David, Rhodes and Vargas and recent proofs concerning the integrability of LCFT: -The proof in a joint work with Rhodes and Vargas of the DOZZ formula (Annals of Mathematics, 81-166,191 (2020) -The proof in a joint work with Guillarmou, Rhodes and Vargas of the bootstrap conjecture for LCFT (arXiv:2005.11530). Video link: youtu.be/0ms4gEUT2Nw July 28, 2020 Nicolas Rougerie (University of Grenoble Alpes) Two modes approximation for bosons in a double well potential We study the mean-field limit for the ground state of bosonic particles in a double-well potential, jointly with the limit of large inter-well separation/large potential energy barrier. Two one-body wave-functions are then macroscopially occupied, one for each well. The physics in this two-modes subspace is usually described by a Bose-Hubbard Hamiltonian, yielding in particular the transition from an uncorrelated "superfluid" state (each particle lives in both potential wells) to a correlated "insulating" state (half of the particles live in each potential well). Through precise energy expansions we prove that the variance of the number of particles within each well is suppressed (violation of the central limit theorem), a signature of a correlated ground state. Quantum fluctuations around the two-modes description are particularly relevant, for they give energy contributions of the same order as the energy difference due to suppressed variances in the two-modes subspace. We describe them in terms of two independent Bogoliubov Hamiltonians, one for each potential well. Joint work with Alessandro Olgiati and Dominique Spehner Video link: youtu.be/ylb6BWewlpI July 21, 2020 Hugo Duminil-Copin (IHES / University of Geneva) Marginal triviality of the scaling limits of critical 4D Ising and φ_4^4 models In this talk, we will discuss the scaling limits of spin fluctuations in four-dimensional Ising-type models with nearest-neighbor ferromagnetic interaction at or near the critical point are Gaussian and its implications from the point of view of Euclidean Field Theory. Similar statements will be proven for the λφ4 fields over R^4 with a lattice ultraviolet cutoff, in the limit of infinite volume and vanishing lattice spacing. The proofs are enabled by the models' random current representation, in which the correlation functions' deviation from Wick's law is expressed in terms of intersection probabilities of random currents with sources at distances which are large on the model's lattice scale. Guided by the analogy with random walk intersection amplitudes, the analysis focuses on the improvement of the so-called tree diagram bound by a logarithmic correction term, which is derived here through multi-scale analysis. Video link: youtu.be/DtLKEQran_Y July 14, 2020 Hal Tasaki (Gakushuin University) 'Topological' index and general Lieb-Schultz-Mattis theorems for quantum spin chains A Lieb-Schultz-Mattis (LSM) type theorem states that a quantum many-body system with certain symmetry cannot have a unique ground state accompanied by a nonzero energy gap. While the original theorem treats models with continuous U(1) symmetry, new LSM-type statements that only assume discrete symmetry have been proposed recently in close connection with topological condensed matter physics. Here we shall prove such general LSM-type theorems by using the "topological" index intensively studied in the context of symmetry protected topological phase. Operator algebraic formulation of quantum spin chains plays an essential role in our approach. Here I do not assume any advanced knowledge in quantum spin systems or operator algebra, and illustrate the ideas of the proof (which I believe to be interesting). The talk is based on a joint work with Yoshiko Ogata and Yuji Tachikawa in arXiv:2004.06458. Video link: youtu.be/q0k1sch56Dk July 7, 2020 Bruno Després (Sorbonne University) Spectral-scattering theory and fusion plasmas Motivated by fusion plasmas and Tokamaks (ITER project), I will describe recent efforts on adapting the mathematical theory of linear unbounded self-adjoint operators (Kato, Lax, Reed-Simon, ....) to problems governed by kinetic equations coupled with Maxwell equations. Firstly it will be shown that Vlasov-Poisson-Ampere equations, linearized around non homogeneous Maxwellians, can be written in the framework of abstract scattering theory (linear Landau damping is a consequence). Secondly the absorption principle applied to the hybrid resonance will be discussed. All results come from long term discussions and collaborations with many colleagues (Campos-Pinto, Charles, Colas, Heuraux, Imbert-Gérard, Lafitte, Nicolopoulos, Rege, Weder, and many others). Video link: youtu.be/lmnm1D3NFp8 June 30, 2020 Laure Saint-Raymond (ENS Lyon) Fluctuation theory in the Boltzmann-Grad limit In this talk, I will discuss a long term project with T. Bodineau, I. Gallagher and S. Simonella on hard-sphere dynamics in the kinetic regime, away from thermal equilibrium. In the low density limit, the empirical density obeys a law of large numbers and the dynamics is governed by the Boltzmann equation. Deviations from this behavior are described by dynamical correlations, which can be fully characterized for short times. This provides both a fluctuating Boltzmann equation and large deviation asymptotics. Video link: youtu.be/fLDFA7ZCagA June 23, 2020 Nilanjana Datta (University of Cambridge) Discriminating between unitary quantum processes Discriminating between unknown objects in a given set is a fundamental task in experimental science. Suppose you are given a quantum system which is in one of two given states with equal probability. Determining the actual state of the system amounts to doing a measurement on it which would allow you to discriminate between the two possible states. It is known that unless the two states are mutually orthogonal, perfect discrimination is possible only if you are given arbitrarily many identical copies of the state. In this talk we consider the task of discriminating between quantum processes, instead of quantum states. In particular, we discriminate between a pair of unitary operators acting on a quantum system whose underlying Hilbert space is possibly infinite-dimensional. We prove that in contrast to state discrimination, one needs only a finite number of copies to discriminate perfectly between the two unitaries. Furthermore, no entanglement is needed in the discrimination task. The measure of discrimination is given in terms of the energy-constrained diamond norm and one of the key ingredients of the proof is a generalization of the Toeplitz-Hausdorff Theorem in convex analysis. Moreover, we employ our results to study a novel type of quantum speed limits which apply to pairs of quantum evolutions.This work was done jointly with Simon Becker (Cambridge), Ludovico Lami (Ulm) and Cambyse Rouze (Munich) Video link: youtu.be/gHEjszXSjMQ June 16, 2020 Nicola Pinamonti (University of Genova) Equilibrium states for interacting quantum field theories and their relative entropy During this talk we will review the construction of equilibrium states for interacting scalar quantum field theories, treated with perturbation theory, recently proposed by Fredenhagen and Lindner. We shall in particular see that this construction is a generalization of known results valid in the case of C*-dynamical systems. We shall furthermore discuss some properties of these states and we compare them with known results in the physical literature. In the last part of the talk, we shall show that notions like relative entropy or entropy production can be given for states which are of the form discussed in the first part of talk. We shall thus provide an extension to quantum field theory of similar concepts available in the case of C*-dynamical systems. Video link: youtu.be/excgcO7loj0 June 9, 2020 Andreas Winter (Universitat Autònoma de Barcelona) Energy-constrained diamond norms and the continuity of channel capacities and of open-system dynamics The channels, and more generally superoperators acting on the trace class operators of a quantum system naturally form a Banach space under the completely bounded trace norm (aka diamond norm). However, it is well-known that in infinite dimension, the norm topology is often "too strong" for reasonable applications. Here, we explore a recently introduced energy-constrained diamond norm on superoperators (subject to an energy bound on the input states). Our main motivation is the continuity of capacities and other entropic quantities of quantum channels, but we also present an application to the continuity of one-parameter unitary groups and certain one-parameter semigroups of quantum channels. Video link: youtu.be/05ZQPFB0aAc June 2, 2020 Mihalis Dafermos (Cambridge University) The nonlinear stability of the Schwarzschild metric without symmetry I will discuss an upcoming result proving the full finite-codimension non-linear asymptotic stability of the Schwarzschild family as solutions to the Einstein vacuum equations in the exterior of the black hole region. No symmetry is assumed. The work is based on our previous understanding of linear stability of Schwarzschild in double null gauge. Joint work with G. Holzegel, I. Rodnianski and M. Taylor. Video link: youtu.be/6Vh62H0rPiA May 26, 2020 Sven Bachmann (University of British Columbia) Adiabatic quantum transport In the presence of a spectral gap above the ground state energy, slowly driven condensed matter systems may exhibit quantized transport of charge. One of the earliest instances of this fact is the Laughlin argument explaining the integrality of the Hall conductance. In this talk, I will discuss transport by adiabatic processes in the presence of interactions between the charge carriers. I will explain the central role played by the locality of the quantum dynamics in two instances: the adiabatic theorem and an index theorem for quantized charge transport. I will also relate fractional transport to the anyonic nature of elementary excitations. Video link: youtu.be/ErgMuxMR_1A May 19, 2020 Pierre Clavier (University of Potsdam) Borel-Ecalle resummation for a Quantum Field Theory Borel-Ecalle resummation of resurgent functions is a vast generalisation of the well-known Borel-Laplace resummation method. It can be decomposed into three steps: Borel transform, averaging and Laplace transform. I will start by a pedagogical introduction of each of these steps. To illustrate the feasability of the Borel-Ecalle resummation method I then use it to resum the solution of a (truncated) Schwinger-Dyson equation of a Wess-Zumino model. This will be done using known results about this Wess-Zumino model as well as Sauzin's analytical bounds on convolution of resurgent functions. Video link: youtu.be/EzRoLEZhono May 12, 2020 Jan Philip Solovej (University of Copenhagen) Universality in the structure of Atoms and Molecules Abstract: The simplest approximate model of atoms and molecules is the celebrated Thomas-Fermi model. It is known to give a good approximation to the ground state energy of heavy atoms. The understanding of this approximation relies on a beautiful and very accurate application of semi-classical analysis. Although the energy approximation is good, it is, unfortunately, far from being accurate enough to predict quantities relevant to chemistry. Thomas-Fermi theory may nevertheless tell us something surprisingly accurate about the structure of atoms and molecules. I will discuss how a certain universality in the Thomas-Fermi model, indeed, holds approximately in much more complicated models, such as the Hartree-Fock model. I will also show numerical and experimental evidence that the approximate universality may hold even for real atoms and molecules. Video link: youtu.be/FCxkP7CqtQQ May 5, 2020 Martin Hairer (Imperial College London) The Brownian Castle Video link: youtu.be/Ve_EFZDbXTU
|
{}
|
## Rocky Mountain Journal of Mathematics
### Strongly copure projective, injective and flat complexes
#### Abstract
In this paper, we extend the notions of strongly copure projective, injective and flat modules to that of complexes and characterize these complexes. We show that the strongly copure projective precover of any finitely presented complex exists over $n$-FC rings, and a strongly copure injective envelope exists over left Noetherian rings. We prove that strongly copure flat covers exist over arbitrary rings and that $(\mathcal {SCF},\mathcal {SCF}^\bot )$ is a perfect hereditary cotorsion theory where $\mathcal {SCF}$ is the class of strongly copure flat complexes.
#### Article information
Source
Rocky Mountain J. Math., Volume 46, Number 6 (2016), 2017-2042.
Dates
First available in Project Euclid: 4 January 2017
https://projecteuclid.org/euclid.rmjm/1483520436
Digital Object Identifier
doi:10.1216/RMJ-2016-46-6-2017
Mathematical Reviews number (MathSciNet)
MR3591270
Zentralblatt MATH identifier
1378.16004
#### Citation
Ma, Xin; Liu, Zhongkui. Strongly copure projective, injective and flat complexes. Rocky Mountain J. Math. 46 (2016), no. 6, 2017--2042. doi:10.1216/RMJ-2016-46-6-2017. https://projecteuclid.org/euclid.rmjm/1483520436
#### References
• M. Auslander and M. Bridger, Stable module theory, Mem. Amer. Math. Soc. 94 (1969), 92–105.
• R.EI. Bashir, Covers and directed colimits, Alg. Rep. Theor. 9 (2006), 423–430.
• N.Q. Ding and J.L. Cheng, The flat dimensions of injective modules, Manuscr. Math. 78 (1993), 165–177.
• P.C. Eklof and J. Trlifaj, How to make Ext vanish, Bull. Lond. Math. Soc. 138 (2010), 461–465.
• E.E. Enochs, S. Estrada and A. Iacob, Gorenstein projective and flat complexes over Noetherian rings, Math. Nachr. 285 (2012), 834–851.
• E.E. Enochs and O.M.G. Jenda, Relative homological algebra, Walter de Gruyter, Berlin, 2000.
• ––––, Copure injective resolutions, flat resolvents and dimensions, Comm. Math. Univ. Carol. 189 (1993), 167–193.
• P. Gabriel, Objects injectifs dans les Catégories abéliennes, Sem. Dubr. 59 (1958), 9–18.
• J.R. Garcia Rozas, Covers and envelopes in the category of complexes of modules, CRC Press, Boca Raton, 1999.
• M.A. Goddard, Projective covers of complexes, Collection: Ring theory, World Science Publishing (1993), 172–181.
• Z.K. Liu and C.X. Zhang, Gorenstein projective dimensions of complexes, Acta. Math. Sinica 27 (2011), 1395–1404.
• ––––, Gorenstein injective complexes of modules over Noetherian rings, J. Algebra 321 (2009), 1546–1554.
• L.X. Mao, Some aspects of strongly P-projective modules and homological dimensions, Comm. Algebra 41 (2013), 19–33.
• L.X. Mao and N.Q. Ding, $FI$-injective and $FI$-flat modules, J. Algebra 309 (2007), 367–385.
• ––––, Global dimension and left derived functors of Hom, Sci. China 50 (2007), 887–898.
• ––––, On divisible and torsionfree modules, Comm. Algebra 36 (2008), 708–731.
• P. Roberts, Homological invariants of modules over a commutative ring, Lespresses University, Montreal, 1980.
|
{}
|
# Database of adjacency matrices on cospectral non-isomorphic graph pairs
Is there a repository of cospectral non-isomorphic graphs available somewhere?
I am looking for list of $0/1$ adjacency matrix pairs that can be input data in tools such as MATLAB.
• Sage can generate all graphs on a given set of vertices which are cospectral with a given adjacency matrix. See mvngu.googlecode.com/hg/onepage/sage/graphs/graph_generators/… – Tony Huynh Dec 19 '15 at 8:54
• @TonyHuynh is there matlab code? – 1.. Dec 19 '15 at 8:56
• Sorry, I don't use Matlab, so I don't know. – Tony Huynh Dec 19 '15 at 9:07
• @TonyHuynh Cospectral to given graph looks interesting. From the documentation it is not clear to me how to do this. Would you please give example? Is it efficient or just enumerates graphs?. – joro Dec 19 '15 at 9:32
The simplest source of cospectral graphs is lists of strongly regular graphs, lots of which are easily available from Ted Spence's web page at http://www.maths.gla.ac.uk/~es/srgraphs.php.
Otherwise you can use Sage to generate small graphs (up to 10 or so vertices) and then filter out cospectral pairs or groups. I expect the built in Sage function for cospectral pairs just wraps this up.
I don't know what you are doing with them, but I'd probably recommend choosing the computational tool based on what you need, rather than specifying Matlab in advance. If you're working with 64 vertex graphs you'll need full symbolic computation with arbitrary length integers and you'll want to avoid, or be very very careful, in finding eigenvalues numerically.
• I cant make out from page whether cospectral lists are non-isomorphic (I am assuming they are) can i take them to be non-isomorphic cospectral sets? – 1.. Dec 19 '15 at 10:45
• Ted Spence's lists are of pairwise non-isomorphic graphs. – Gordon Royle Dec 19 '15 at 12:31
• every strongly regular graph of same size is cospectral? – 1.. Dec 19 '15 at 22:28
• @Turbo - every strongly regular graph with the same parameters is cospectral. See en.m.wikipedia.org/wiki/Strongly_regular_graph for precise definition of parameters. – Gordon Royle Dec 20 '15 at 0:28
• are these the hardest examples to test for isomorphism and non-isomorphism? – 1.. Dec 20 '15 at 1:02
You can do this in sage for small order and then export the adjacency matrices to say text file friendly to Matlab and then parse in Matlab.
Tony Huynh suggests one approach. Another approach is to enumerate with McKay's nauty in sage in keep track of cospectral.
Such database will be large:
https://oeis.org/A082104
A082104 Number of distinct characteristic polynomials among all simple undirected graphs on n nodes. 1, 2, 4, 11, 33, 151, 988, 11453, 247357, 10608128, 901029366, 148187993520
Check the references in OIES.
From Brouwer's reference:
https://www.win.tue.nl/~aeb/graphs/cospectral/cospectralA.html Numbers of characteristic polynomials and cospectral graphs
Consider contacting Brouwer, though the full database will take a lot of space AFAICT.
|
{}
|
# trig!!
posted by .
sin^4 x (cos^2 x)=(1/16)(1-cos 2x)(1-cos4x)
work one 1 side only!
## Similar Questions
these must be written as a single trig expression, in the form sin ax or cos bx. a)2 sin 4x cos4x b)2 cos^2 3x-1 c)1-2 sin^2 4x I need to learn this!! if you can show me the steps and solve it so I can learn I'd be grateful!!! 1) apply …
2. ### trig
Reduce the following to the sine or cosine of one angle: (i) sin145*cos75 - cos145*sin75 (ii) cos35*cos15 - sin35*sin15 Use the formulae: sin(a+b)= sin(a) cos(b) + cos(a)sin(b) and cos(a+b)= cos(a)cos(b) - sin(a)sin)(b) (1)The quantity …
3. ### tigonometry
expres the following as sums and differences of sines or cosines cos8t * sin2t sin(a+b) = sin(a)cos(b) + cos(a)sin(b) replacing by by -b and using that cos(-b)= cos(b) sin(-b)= -sin(b) gives: sin(a-b) = sin(a)cos(b) - cos(a)sin(b) …
4. ### Trig
Given: cos u = 3/5; 0 < u < pi/2 cos v = 5/13; 3pi/2 < v < 2pi Find: sin (v + u) cos (v - u) tan (v + u) First compute or list the cosine and sine of both u and v. Then use the combination rules sin (v + u) = sin u cos …
5. ### trig
it says to verify the following identity, working only on one side: cotx+tanx=cscx*secx Work the left side. cot x + tan x = cos x/sin x + sin x/cos x = (cos^2 x +sin^2x)/(sin x cos x) = 1/(sin x cos x) = 1/sin x * 1/cos x You're almost …
6. ### pre-cal
Simplify the given expression........? (2sin2x)(cos6x) sin 2x and cos 6x can be expressed as a series of terms that involve sin x or cos x only, but the end result is not a simplification. sin 2x = 2 sinx cosx cos 6x = 32 cos^6 x -48
7. ### Math - Solving Trig Equations
What am I doing wrong? Equation: sin2x = 2cos2x Answers: 90 and 270 .... My Work: 2sin(x)cos(x) = 2cos(2x) sin(x) cos(x) = cos(2x) sin(x) cos(x) = 2cos^2(x) - 1 cos(x) (+/-)\sqrt{1 - cos^2(x)} = 2cos^2(x) - 1 cos^2(x)(1 - cos^2(x))
8. ### trig help again i cant get any of these
sin^4cos^2x=(1/16)(1-cos 2x)(1-cos4x) work one 1 side only!
9. ### TRIG!
Posted by hayden on Monday, February 23, 2009 at 4:05pm. sin^6 x + cos^6 x=1 - (3/4)sin^2 2x work on one side only! Responses Trig please help! - Reiny, Monday, February 23, 2009 at 4:27pm LS looks like the sum of cubes sin^6 x + cos^6 …
10. ### pre calc trig check my work please
sin x + cos x -------------- = ? sin x sin x cos x ----- + ----- = sin x sin x cos x/sin x = cot x this is what i got, the problem is we have a match the expression to the equation work sheet and this is not one of the answers. need
More Similar Questions
|
{}
|
# How do you solve \frac{y + 4}{7} = 3?
$y + 4 = 3 \cdot 7$
$y + 4 = 21$
$y = 21 - 4$
$y = 17$
$\frac{17 + 4}{7} = 3$
$\frac{21}{7} = 3$
$3 = 3$
|
{}
|
Developing a project in Javascript
kuniga.me > NP-Incompleteness > Developing a project in Javascript
# Developing a project in Javascript
15 Feb 2016
I’ve worked with several small Javascript side-projects. The amount of Javascript libraries and frameworks is overwhelming, especially in recent times.
In the past, I would write a couple of stand-alone Javascript files from scratch. As applications get bigger and more complex, new libraries for improving project development have been created.
I decided to look around for best practices to develop open source JavaScript applications these days. This post is a set of notes describing my findings.
We’ll discuss libraries that solves different needs for software projects including libraries, modularization, automated building, linter and finally testing frameworks.
### Packages/Libraries
Javascript doesn’t have an official package management. There has been an effort to standartize how Javascript libraries are distributed. With Node.js, came its package manager that npm (node package manager), that was initially indented for Node.js packages, but can also be used for general libraries, independent of Node.js itself.
To work with npm, we need to write a configuration file, called package.json. In this file, which is a JSON, we can define metadata when building a library, including title, version and the dependencies of other libraries. A sample configuration looks like this:
Dependencies
In the dependencies, we have to specify the versions. A version (more specifically semantic version or semver) consists of three parts numbers separated by '.'. The last number should be bumped on small changes, like bug fixes, without change on functionality. The middle number, aka minor version, should be bumped whenever new features are added, but that are back-compatible. Finally, the first number, aka major version, should be bumped whenever back-incompatible changes are made [1].
In package.json, you can specify a hard-coded version number or be more relaxed. If we use the '~' in front of the version, for example ~5.11.0, it means we accept the most recent version of form 5.11.x. On the other hand, if we use the '^', for example ^2.5.0, we accept the most recent version of the form 2.x.x.
The dependencies of a package can be either production or development dependencies. In our case, browserify and uglify are only used for building our package and not a dependency our code has, so it doesn’t make sense to ship those to the user of the library.
To parse the configuration in package.json, we can run:
This will download the dependencies listed under devDependencies locally in the directory node_modules (created in the same directory the package.json is). To run the production dependencies, we can do:
### Modules
Modules are useful for splitting code in related units and enables reuse. JavaScript doesn’t have a native module system, so some libraries were built to address the modularization problem. There are three main types of module systems around: AMD (Asynchronous Module Definition), CommonJS and the ES6 loader. Addy Osmani discusses the differences between those in [2].
There are several implementations for modules, including RequireJS (AMD), browserify (uses the node.js module system, which uses CommonJS). SystemJS is able to work with all these different types.
I had been working with browserify, but it seems better to adopt the ES6 standards, so I’ve switched to SystemJS. Another advantage of SystemJS is that is also allows ES6 syntax by transpiling the code using BabelJS.
To use SystemJS we need to define a configuration file (analogous to package.json), named config.js (don’t worry about it for now).
Exporting
Named exports. We can have multiple export statements within a single file or provide all exports within a single statement [3]. Example:
Default exports. We can export default items in a module (the reason will be clear when we talk about importing next). We show the syntax for both the inline and the named exports:
Importing
We have 3 basic ways to import from a module.
1. Name all items we want to pick from the module.
1. Do not provide any specific item, in which case we’ll import the default export:
1. Import all item from the module under a ‘namespace’, basically
NPM Packages
To be able to import NPM packages, we have to download them first and for that we can use the jspm.io tool. For example, I was interested in the point-in-polygon package. Instead of running the npm command, we can use jspm:
Running jspm will write to the config.js file (it creates one if it doesn’t exist). This will write a map from where the module got installed and the name you can use in code to import it. Since npm packages use the CommonJS syntax and SystemJS understands it, in code we can simply do:
### Building
The process of running commands like SystemJS can be automated. One idea is writing Makefiles to run command line. Another option is to use JavaScript frameworka, such as Grunt and Gulp. In this post we’ll stick to Grunt.
To configure a build, we need to provide another configuration file, called Gruntfile.js (should live in the same directory as the package.json). You provide an object to grunt.initConfig(), which contains tasks configurations.
With grunt.registerTask('default', ['systemjs']) we’re telling grunt to run the systemjs task whenever we run grunt from the command line.
It’s possible to run grunt automatically upon changes to JS files via the watch task. First, we need to install the plugin:
Then we configure it in Gruntfile.js:
Here taskList is an array of task names. It can be the same one provided to the default task. Make sure to blacklist some directories like dist, which is the output directory of the systemjs task (otherwise we’ll get an infinite loop). Finally we run:
Now, whenever we perform a change to any JS file it will run the task.
### Minification
Since Javascript code is interpreted on the client (browser), the source code must be downloaded from the server. Having a large source code is not efficient from a network perspective, so often these libraries are available as a minified file (often with extension min.js to differentiate from the unminified version).
The source code can be compressed by removing extra spaces, renaming variables, etc, without changing the program. One popular tool to achieve this is UglifyJS.
To use it with Grunt, we can install the grunt-contrib-uglify module:
And in our Gruntfile.js:
### Linting
Lint tools help us avoiding bugs, sticking to code conventions and improving code quality. One popular tool for linting is jshint. Other alternatives include jslint. JSHint has a Grunt plugin:
The basic configuration here makes sure to blacklist “production” directories like node_module and dist. Also, since we’ve been adopting ES6, we can set the esnext flag to tell jshint to account for the new syntax.
We probably don’t want to run the lint every time we update the JS file. We can run it less often, for example before sending code for review. Thus, we can create a separate registry for it using grunt.registerTask('lint', ['jshint']). We can now run jshint via the command line:
### Testing
Another practice to avoid bugs is testing, including unit tests. Again, there are several libraries and frameworks that makes the job of unit testing less painful, for example easy ways to mock dependencies so we can test isolated functionality. In this case, I’ve picked Jest, which has a grunt task available in npm, which we can install via:
(NOTE: this will also install the jest-cli binary which depends on a Node.js version >= 4, so you might need to update your Node.js).
We can configure the grunt task with default configs in the following way:
With this setup we can run the following command to run jest tests:
Unfortunately, jest uses the CommonJS require syntax. It used to be possible to use babel-jest but after version 5.0 this setup doesn’t work anymore.
### Conclusion
The JavaScript environment changes extremely fast and it’s very hard to keep on top of the latest frameworks/practices, etc.
To make things worse, for every task like module system, linting, testing, there are many alternatives and none of them is a clear best choice.
I’m happy that there’s an effort of standardization with ES6. I think the more we stick to one convention the more we reduce re-inventing the wheel, the less syntax differences to learn, etc.
### References
• [1] Semantic versioning and npm
• [2] Writing Modular JavaScript With AMD, CommonJS & ES Harmony
• [3] ECMAScript 6 modules: the final syntax
|
{}
|
# New Perspectives
US/Central
One West (Fermilab)
### One West
#### Fermilab
, , , , ,
Description
New Perspectives is a conference for, and by, young researchers in the Fermilab community. It provides a forum for graduate students, postdocs, visiting researchers, and all other young persons that contribute to the scientific program at Fermilab to present their work to an audience of peers.
New Perspectives has a rich history of providing the Fermilab community with a venue for young researchers to present their work. Oftentimes, the content of these talks wouldn’t appear at typical HEP conferences, because of its work-in-progress status or because its part of work that will not be published. However, it is exactly this type of work, frequently performed by the youngest members of our community, that forms the backbone of the research program at Fermilab. The New Perspectives Organizing Committee is deeply committed to presenting to the community a program that accurately reflects the breadth and depth of research being done by young researchers at Fermilab.
To accommodate all types of participants, this year New Perspectives will be hybrid:
• 16. and 17. June -- hybrid (in-person at Fermilab and on Zoom)
• 21. and 22. June -- virtual on Zoom
New Perspectives is organized by the Fermilab Student and Postdoc Association and along with the Fermilab Users Annual Meeting.
Please reach out to us at fspa_officers@fnal.gov if you have any questions.
Participants
• Abinash Pun
• Aidan Cloonan
• Aleksandra Ciprijanovic
• Alessio D'Agliano
• Alexandra Moor
• Alexx Perloff
• Aman Desai
• Anastasia Sokolenko
• Andres Alba Hernandez
• Andrew Mastbaum
• Anežka Klustová
• Anna Hall
• Anna Heggestuen
• Antonio Gioiosa
• Ariana Hackenburg
• Arthur Conover
• Ashia Lewis
• Ashley Back
• Barbara Yaeggy
• Barnali Brahma
• Beth Powers
• Bhumika Mehta
• Biswaranjan Behera
• Brenda Cervantes
• Bruce Howard
• Byungchul Yu
• Carrie McGivern
• Chatura Kuruppu
• Christian Herwig
• Daisy Kalra
• Dan Hooper
• Daniel Carber
• David Kessler
• Dinesh Kumar Singha
• Dinupa Nawarathne
• Dominika Vasilkova
• Dylan Temples
• Ed Tatar
• Egor Danilov
• Emily Richards
• Eric Deck
• Filippo Varanini
• Franklin Lemmons
• Gabriela Lima Lichtenstein
• Gourav Khullar
• Hannah Magoon
• Hanzhi Tan
• Henry Lay
• Heriques Frandini
• Hilary Utaegbulam
• Huanbo Sun
• Ishwar Singh
• Ivan Lepetic
• Jacob Boza
• Jacob Larkin
• Jamie Dyer
• Jason Poh
• Jason St. John
• Josh Barrow
• Jozef Trokan-Tenorio
• Karem Penalo Castillo
• Katie Harrington
• Kaushik Borah
• Kirit Karkare
• Komninos-John Plows
• Lauren Yates
• Livio Calivers
• Lynn Tung
• Mackenzie Devilbiss
• Mackenzie Devilbiss
• Margaret Voetberg
• Maria Manrique Plata
• Maria Martinez Casales
• Marina Dunn
• Marvin Ascencio Sosa
• Masato Kimura
• Matthew Green
• Matthew Judah
• Matthew Solt
• Meghna Bhattacharya
• Melissa Quinnan
• Michael Hedges
• Michael Tessel
• Michelle Dolinski
• Monica Nunes
• Moonzarin Reza
• Mun Jung Jung
• Namratha Urs
• Nick Kamp
• Nilay Bostan
• Noah Weaverdyck
• Nupur Oza
• Ohana Beenevides Rodrigues
• Olivia Bitter
• On Kim
• Orgho Anoronyo Neogi
• Paul Hackspacher
• Pierce Weatherly
• Polina Abratenko
• Priyanka Dilip
• Rebecca Hicks
• Rob Fine
• Rose Branson
• Ryan Kim
• Ryan LaZur
• Sajid Ali Syed
• Sam McDermott
• Samantha Lewis
• Santiago Perez
• Sayeed Akhter
• Sebastian Sanchez Falero
• Shivaraj Mulleria Babu
• Sophia Zhou
• Sowjanya Gollapinni
• Stefan Knirck
• Stefano Tognini
• Sudeshna Ganguly
• Susanna Stevenson
• Tanvi Wamorkar
• Tausif Hossain
• Teresa Lackey
• Thomas Murphy
• Tyler Boone
• Tyler Stokes
• Umut Demirbozan
• Valentina Novati
• Vaniya Ansari
• Vincent Basque
• Vinicius do Lago Pimentel
• Zhilei Xu
• Zijie Wan
• Zubair Dar
• Thursday, June 16
• Fixed Target: SeaQuest/SpinQuest
Convener: Joshua Barrow (MIT, TAU, FNAL)
• 1
SpinQuest in 10 Minutes
The SpinQuest experiment (E1039) will measure the azimuthal asymmetry of dimuon pair production via scattering of unpolarized protons from transversely polarized NH3 and ND3 targets. The asymmetry will be measured for both Drell-Yan scattering and J/psi production. By measuring the asymmetry for the Drell-Yan process, it is possible to extract the Sivers Function for the light anti-quarks in the nucleon. A non-zero asymmetry would be “smoking gun” evidence for orbital angular momentum of the light sea-quarks: a possible contributor to the proton’s spin. The status and plans for the experiment will also be discussed.
Speaker: Arthur Conover (University of Virginia)
• 2
Extraction of Transverse Single Spin Asymmetry in $J/\psi$ Production in $p\vec{p}$ Interactions at 120 GeV Beam Energy
Estimates are presented for the SpinQuest experiment to extract the Transverse Single Spin Asymmetry (TSSA) in $J/\psi$ production as a function of the $J/\psi$ transverse momentum ($p_{T}$) and Feynman-$x$ ($x_{F}$). SpinQuest is a fixed-target Drell-Yan experiment at Fermilab, using an unpolarized 120 GeV proton beam incident on a polarized solid ammonia target. Such measurements will allow us to test models for the internal transverse momentum and angular momentum structure of the nucleon. $J/\psi$ is predominantly produced by strong interaction via quark-antiquark annihilation and gluon fusion. A non-zero asymmetry provides information on the orbital angular momentum contribution of “sea-quarks” to the spin of the nucleon. Simulated data were generated using the SpinQuest/E1039 simulation framework. Gaussian Process Regression (GPR), which is a powerful technique used in machine learning, was used to predict the background under the $J/\psi$ invariant mass peak by fitting the Radial-basis function (RBF) kernel in side-band regions on either side of the $J/\psi$ peak. We used this trained kernel to predict the background in the $J/\psi$ peak region. After subtracting the background, we used iterative Bayesian unfolding to make corrections for the detector inefficiencies and smearing effects. In this presentation, we discuss results on predictions for the expected absolute error of the asymmetry ($A_{N}$) for a few $p_{T}$ and $x_{F}$ bins for 10 weeks of running.
Speaker: Dinupa Nawarathne (New Mexico State University)
• 3
Searching for dark sector particles in the SpinQuest experiment
Searching for light and weakly-coupled dark sector particles is of vital importance in worldwide dark matter searches. Long-lived dark mediators can be generated through interactions between proton beam and fixed target at the SpinQuest experiment (E1039) at Fermilab. These hypothetical long-lived particles will travel several meters before decaying into SM particles and can be tracked by the dedicated spectrometer. A new dimuon trigger system is under development to improve the efficiency for displaced signals. We also propose a further upgrade by adding an electromagnetic calorimeter to the current detector to extend the detection capability to electron, photon, and hadronic final states. With these dedicated effort, we can perform new world-leading searches within the next few years.
Speaker: Zijie Wan (boston university)
• 4
Spin Alignment of $J/\Psi$ Production in 120 GeV $p$-Fe Interactions
Various models based on quantum chromodynamics (QCD) have not yet been able to fully explain the production mechanism of heavy quark bound states. Most recent models such as the Color Evaporation Model (CEM) and Non-Relativistic QCD (NRQCD) successfully explain the higher transverse momentum spectra while none of them is able properly explain the spin alignment measured by various experiments. The $J/\Psi$ is a charmonium bound state of charm and anti-charm quark with spin 1. SeaQuest, a fixed target experiment at Fermilab, has completed its data taking. The spectrometer of the experiment was designed to measure high energy muons, and it uses a 500 cm long Iron (Fe) block as beam dump. While interactions in the target served the primary goal of probing the flavor structure of the nucleon, a wealth of data from interaction with the iron beam dump provides ample opportunity to study charmonium production as well. In this talk, we report our progress on the measurement of the spin alignment of $J/\Psi$ produced in 120 GeV $p$-Fe interactions at the SeaQuest experiment.
Speaker: Abinash Pun (New Mexico State University)
• 5
Measurement of the Angular Distribution of Drell-Yan Production in $p$+Fe Interactions at 120 GeV Beam Energy
We report on progress towards a measurement of the angular distributions of Drell-Yan dimuons produced at the SeaQuest/E906 Fermilab experiment, using the 120 GeV proton beam on a Fe target. The beam dump upstream of the dimuon spectrometer, which serves as the iron target, is expected to provide a very large statistical significance for this measurement. To extract the Drell-Yan signal, a combinatorial background subtraction method was developed. After this subtraction, the detector, trigger, and reconstruction efficiency is corrected using a Bayesian unfolding method that takes into account acceptance, efficiency, and bin migration. The result from this analysis will provide a test of the validity of the Lam-Tung relation. In this presentation, we will demonstrate the validity of these analysis techniques.
Speaker: Md Forhad Hossain (New Mexico State University)
• 2:15 PM
Break
• Cryogenic/Electronics
Convener: Joshua Barrow (MIT, TAU, FNAL)
• 6
A Cryogenic Readout IC with 100 KSPS in-Pixel ADC for Skipper CCD-in-CMOS Sensors
The Skipper CCD-in-CMOS Parallel Read-Out Circuit (SPROCKET) is a mixed-signal front end design for the readout of Skipper CCD-in-CMOS image sensors. SPROCKET is fabricated in a 65 nm CMOS process and each pixel occupies a $45\mu m \times 45 \mu m$ footprint. SPROCKET is intended to be heterogeneously integrated with a Skipper-in-CMOS sensor array, such that one readout pixel is connected to a multiplexed array of nine Skipper-in-CMOS pixels to enable massively parallel readout. The front end includes a variable gain preamplifier, a correlated double sampling circuit, and a 10-bit serial successive approximation register (SAR) ADC. The circuit achieves a sample rate of 100 ksps with 0.48 $\mathrm{e^-_{rms}}$ equivalent noise at the input to the ADC. SPROCKET achieves a maximum dynamic range of 9,000 $e^-$ at the lowest gain setting (or 900 $e^-$ at the lowest noise setting). The circuit operates at 100 Kelvin with a power consumption of 40 $\mu W$ per pixel. A SPROCKET test chip will be submitted for manufacture in June 2022.
• 7
High-Energy Physics (HEP) experiments heavily rely on computational power to be able to conduct simulations and perform analyses. Computing infrastructure for HEP involves computational needs that cannot be met in a reasonable time by a single computer. To complete a computational task with a short turnaround, the computations are split into smaller parts which are then executed in parallel on multiple, geographically distributed, computing resources. These resources include local clusters, computing grids where universities and laboratories share their clusters, supercomputers, and commercial clouds like AWS and GCE. This approach is known as the High Throughput Computing (HTC) paradigm and is highly complex due to the heterogeneity of resources and its distributed nature. A workload manager, called GlideinWMS, is used by CMS, DUNE, OSG, and most Fermilab experiments. GlideinWMS provides elastic virtual clusters, customized to the needs of the experiments so that scientists can worry less about the computing aspects, while having the need for hundreds of thousands of computers working in parallel satisifed. Recently, GlideinWMS has been upgraded to support the provisioning of CVMFS on demand. CVMFS is a distributed file system used by many experiments to globally distribute their data and software. Providing CVMFS without the need for a local installation will allow more experiments to use CVMFS and to run more resources for the ones that use it already.
Speaker: Namratha Urs (Fermilab)
• 3:00 PM
Coffee Break
• Neutrinos: DUNE
Convener: Polina Abratenko
• 8
Modeling and Analysis of Ionization Laser Calibration for the DUNE Time Projection Chamber
The Deep Underground Neutrino Experiment (DUNE) is a next-generation neutrino oscillation experiment consisting of a near detector at Fermilab and a far detector located 1,480 meters underground and 1285 km away in Lead, South Dakota. The far detector will consist of four modules, at least three of which will be Liquid Argon Time Projection Chambers (TPC), intersecting the neutrino beam produced at Fermilab. Among other physics goals, DUNE will measure charge-parity violation in neutrinos, a possible mechanism allowing for matter-antimatter asymmetry to arise in the early universe. At 17 kilotonnes per module, DUNE’s TPCs will be the largest of their kind, resulting in new instrumentation challenges. As TPCs grow in size, improved calibration techniques are required to ensure accurate position and energy reconstruction. DUNE will require fine-grained measurement of detector response parameters such as electric field distortions, electron drift velocity, and defects such as cathode-anode misalignment. DUNE’s Ionization Laser (IoLaser) system will enable these measurements by generating tracks of known origin and direction throughout the active volume. In this talk, I will explain how the signals introduced by this calibration hardware can be converted to a robust measurement of electric field uniformity in the DUNE TPC, with a focus on the analysis and data science methods used.
Speaker: Eric Deck (Los Alamos National Lab)
• 9
Prototyping for the DUNE ND-LAr Light Detection System
The DUNE ND-LAr consortium is conducting an extensive prototyping campaign for the Liquid Argon
TPC for the DUNE Near Detector. The DUNE ND-LAr detector consists of 35 individual modules with
a total fiducial mass of 50 tons. As part of the prototyping campaign a demonstrator detector holding
2x2 modules is placed in the NuMI beam at Fermi National Accelerator Laboratory (Fermilab). Each 2x2
module is tested individually at the University of Bern, recording > 5 million cosmic ray interactions. Using
these data different detector performance studies could be performed. This talk will discuss the performance
of the light readout system with a focus on the spatial and temporal resolution as well as on the photon
detection efficiency.
Speaker: Mr Livio Calivers (University of Bern)
• 10
Muon Momentum Estimation in ProtoDUNE using Multiple Coulomb Scattering
The Deep Underground Neutrino Experiment (DUNE) is a long baseline neutrino experiment using liquid argon detectors to study neutrino oscillations, proton decay, and other phenomena. The single-phase ProtoDUNE detector is a prototype of the DUNE far detector and is located in a charged particle test beam at CERN. It is critical to have accurate momentum estimation of charged particles for calibration and testing of the ProtoDUNE detector performance, as well for proper analysis of DUNE data. Charged particles passing through matter undergo multiple Coulomb scattering (MCS). MCS is momentum-dependent, allowing it to be used in muon momentum estimation while allowing for momentum estimation of muons exiting the detector, a key benefit of MCS over various other methods. We will present the status of the MCS analysis which was developed and evaluated using Monte Carlo simulations and discuss the bias and resolution of our momentum estimation method, as well as its dependencies on the detector resolution.
Speaker: Dr Siva Prasad Kasetti (Louisiana State University)
• 11
Baryon Number Violation Searches in DUNE
The Deep Underground Neutrino Experiment (DUNE) is an international project that will study neutrinos and search for phenomena predicted by theories Beyond the Standard Model (BSM). DUNE will use a 70-kton liquid argon time projection chamber (LArTPC) located more than a kilometer underground. The excellent imaging capabilities of the LArTPC technology, in addition to the large size and underground location, allow the experiment to probe many types of rare processes. This talk will summarize DUNE’s sensitivity to baryon number violating processes and discuss ongoing efforts to improve DUNE's sensitivity to them.
Speaker: Tyler Stokes (Louisiana State University)
• 12
Development of Ionization Laser Calibration System for DUNE
The Deep Underground Neutrino Experiment (DUNE) is a forthcoming neutrino oscillation experiment that will be the largest of its kind. Utilizing liquid argon time projection chamber (LArTPC) technology, DUNE’s far detector will consist of four 17 kiloton modules and be located approximately 1,500 meters underground at Sanford Underground Research Facility (SURF). Due to its large size, improved calibration techniques are required to ensure accurate particle trajectory reconstruction. Small defects in anode-cathode alignment, electric field distortions, and wire response uniformity can negatively affect reconstruction. As DUNE is still under construction, prototype technologies for DUNE are developed and tested at ProtoDUNE, a 700 ton LArTPC located at CERN in Switzerland. At Los Alamos National Laboratory (LANL), prototype ionization laser systems are being developed for implementation in the second run cycle of ProtoDUNE. The ionization laser system (IoLaser) will allow for detector calibration by generating tracks with a known direction and energy throughout the detector volume. In this talk, I will discuss calibration challenges for DUNE and present an overview of the IoLaser system, including progress on current prototyping efforts for deployment in ProtoDUNE.
Speaker: Rebecca Hicks
• 4:30 PM
Break
• Neutrinos: ANNIE
Convener: Matthew Judah (University of Pittsburgh)
• 13
ANNIE in 10 minutes
Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is a 26-ton Gd-doped water Cherenkov detector located on the Booster Neutrino Beam (BNB) at Fermilab and designed to measure the neutron multiplicity of neutrino-nucleus interactions in their final state. In long-baseline oscillation experiments, signal-background separation and a better understanding of cross-section uncertainty are in high demand. With its next-generation neutrino detector with advanced photosensors (LAPPD) and gadolinium-enhanced water, ANNIE makes possible. This talk will go over physics goals and the ANNIE status.
Speaker: Marvin Ascencio Sosa (Iowa State University)
• 14
First LAPPD Deployment in ANNIE
The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is the first high energy physics experiment to use LAPPDs. The experiment uses Gd-loaded water to study for neutrino interactions and produce a measurement of the neutron yield out of neutrino-nucleus interactions. LAPPDs allow us to better localize the interaction point of the neutrinos. But what exactly are LAPPDs, besides a challenge to say it three times fast? As their name implies, these Large Area Picosecond Photo-Detectors are a novel type of light sensor with a large sensitive area and enhanced time resolution. In this talk I will explain how LAPPDs work and how they enhance the physics of ANNIE.
Speaker: Paul Hackspacher (UC Davis)
• 15
Reconstruction Techniques in ANNIE
The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is a 26-ton Gd-doped water Cherenkov neutrino detector. It aims both to determine the neutron multiplicity from neutrino-nucleus interactions in water and provide a staging ground for new technologies relevant to the field. To this end, several analysis methods have been developed. Interaction position and subsequent track direction is determined by a maximum likelihood fit. Machine and deep learning techniques are used to reconstruct interaction energy and perform particle identification. Beam data is being analyzed and Large Area Picosecond Photo-Detectors (LAPPDs) are being deployed and commissioned, which are expected to enhance event reconstruction capabilities. This talk will cover these analysis techniques being used and their status.
Speaker: Franklin Lemmons (South Dakota School of Mines and Technology)
• Friday, June 17
• Neutrinos: SBND
• 16
SBND in 10 Minutes
The Short-Baseline Near Detector (SBND) will be one of three Liquid Argon Time Projection Chamber (LArTPC) neutrino detectors positioned along the axis of the Booster Neutrino Beam (BNB) at Fermilab, as part of the Short-Baseline Neutrino (SBN) Program. The detector is currently in the construction phase and is anticipated to begin operation in 2023. SBND is characterized by superb imaging capabilities and will record over a million neutrino interactions per year. Thanks to its unique combination of measurement resolution and statistics, SBND will carry out a rich program of neutrino interaction measurements and novel searches for physics beyond the Standard Model (BSM). It will enable the potential of the overall SBN sterile neutrino program by performing a precise characterization of the unoscillated event rate, and constraining BNB flux and neutrino-argon cross-section systematic uncertainties. In this talk, the physics reach, current status, and future prospects of SBND are discussed.
Speaker: Heriques Frandini (UNIFAL)
• 17
Data Acquisition & Reconstruction Efficiency with the SBND PDS
The Short-Baseline Near Detector (SBND), a 112 ton active volume liquid argon time projection chamber, is one of three detectors in Fermilab's Short-Baseline Neutrino program. SBND's proximity to the target will allow for high statistics of neutrino events, but as a surface detector, it will also see a high background rate of cosmic rays. To extract the full physics potential of SBND, the data acquisition and reconstruction algorithms must be optimized across the experiment's sub-systems. SBND's photon detection system, a best-in-class light detection system for collecting scintillation photons produced by particle interactions in liquid argon, plays a crucial role in SBND's trigger and event reconstruction chain. In this talk, we give an overview of the essential steps of data acquisition and reconstruction that ultimately drives SBND's precision measurements of neutrino physics.
Speaker: Lynn Tung
• 18
The UV Laser Calibration System for measuring the electric field in the SBND Liquid Argon Time Projection system
The Short-Baseline Near Detector (SBND) is a LArTPC located approximately 110 meters from the target in Fermilab’s Booster Neutrino Beam (BNB). It will measure neutrino cross sections and the un-oscillated neutrino flux to reduce uncertainties in the aid searches for anomalous oscillations.
The electric field inside the SBND TPC may have distortions for several reasons, such as the space charge effect. The space charge effect comes from the abundant cosmic rays that ionize the argon, producing copious positive argon ions. A precise determination of the electric field distortion inside the TPC volume is required along a procedure to compensate for the distortion in the spatial coordinate. These spatial distortions, if not understood, would affect both the topological and calorimetric reconstruction of events in the detector. The UV calibration system is the detector system that will perform this measurement. In this talk, I will briefly overview the UV laser calibration system for SBND, the progress, the methodology for deriving spatial distortion and electric field, and how to correct them in data analysis.
Speaker: Shivaraj Mulleria Babu
• 19
Study of the QE-like Exclusive Channel at SBND
The upcoming Short-Baseline Near Detector (SBND) experiment will play a crucial role in the Short-Baseline Neutrino (SBN) Program’s sterile neutrino search as the near detector, as well as contribute significantly to the understanding of neutrino-nucleus interactions. The high event statistics of over a million neutrino events per year, together with the reconstruction capabilities of liquid argon time projection chamber detectors will allow precision measurements on various exclusive channels, including the quasielastic-like (QE-like) channel. As this channel is the dominant interaction channel for SBND, and since it has a simple working event topology definition of one muon, one proton and nothing else, it is an appealing channel for various physics analyses. In this talk I will outline the selection process for a high purity QE-like sample. Furthermore, I will discuss how the analysis on this channel ties to understanding neutrino-nucleus interactions and to better neutrino energy reconstruction.
Speaker: Mun Jung Jung (the University of Chicago)
• 9:15 AM
Break
• Neutrinos: ICARUS/ LArTPCs
Convener: Rob Fine
• 20
ICARUS in 10 minutes
The ICARUS experiment is now commissioned and taking physics data. ICARUS employs a 760-ton (T600) LArTPC detector. In this talk, I will summarize the status and plans of the ICARUS experiment. At this time neutrino events from both the Booster Neutrino Beam (BNB) and the NuMI off-axis beam have been observed and recorded. ICARUS is positioned to search for evidence of sterile neutrinos as part of the Short Baseline Neutrino (SBN) program at FNAL and should clarify open questions of presently observed neutrino anomalies. In addition a program of neutrino cross-sections measurements on LAr will be pursued.
Speakers: Tyler Boone, Tyler Boone (Colorado State University)
• 21
Hit Reconstruction in the ICARUS (SBN FD) Cosmic Ray Tagging system
The ICARUS neutrino detector is a 760 ton Liquid Argon Time Projection Chamber (LArTPC) operating as the far detector in the Short Baseline Neutrino (SBN) Program based at Fermilab. As this detector will operate at shallow depth, it is exposed to a high flux of cosmic rays that could fake a neutrino interaction. The installation of a 3-meter-thick concrete overburden and a Cosmic Ray Tagging (CRT) system that surrounds the LArTPC and tag incoming particles mitigate this cosmogenic background source. I will discuss a preliminary analysis using data from the now fully commissioned CRT system.
Speaker: Anna Heggestuen (Colorado State University)
• 22
Muon-neutrino selection and reconstruction in ICARUS
The ICARUS detector will search for neutrino oscillations involving eV-scale sterile neutrinos using the Booster Neutrino Beam at Fermilab. These oscillations may be observed as muon-neutrino ($\nu_\mu$) disappearance, which will require a high purity sample of $\nu_\mu$ events in the detector with sufficient statistics to maintain sensitivity to $\nu_\mu$ disappearance. Additionally, the energy of neutrino events must be reconstructed in order to perform fits of neutrino oscillations. A preliminary study of selection cuts and reconstructed neutrino energy, using simulated data, will be shown to demonstrate the impact of these factors on the sensitivity of ICARUS to $\nu_\mu$ disappearance.
Speaker: Jacob Larkin (Brookhaven National Lab)
• 23
Phenomenological Demonstration of Deep Neural Networks in the search for BSM Physics with LArTPCs
The high intensity of POT and excellent particle identification and reconstruction capabilities of LArTPCs make experiments within the SBN program sensitive to a multitude of BSM models. One such example is the demonstrated sensitivity of the program’s detectors to dilepton pairs originating from exotic Higgs Portal Scalar decays. Columnated showers that come from scalar decays to electron/positron pairs have topologies similar to those of photon pair production or single showers, making them difficult to distinguish from background. In this work, $\texttt{Geant4}$ is used to generate the distribution of charge deposited by Higgs Portal Scalar events within a box of $\hspace{1 pt} ^{40}\hspace{-1 pt }$Ar. This configuration of $\texttt{Geant4}$ provides theorists and phenomenologists a fast and accessible way to simulate LArTPC data. We then apply projections to create two dimensional images of each simulated event, similar to those captured by wire planes in operating detectors. Finally we harness the power of deep neural networks to distinguish images of signal and background events for the Higgs Portal Scalar model at the SBN program, improving upon the projected sensitivity from cut-and-count techniques by 30% in $\sin\theta$ for the benchmark scalar mass of 10 MeV.
Speaker: Jamie Dyer (Colorado State University)
• 10:20 AM
Coffee Break
• Muon Physics: g-2/ Mu2e
Convener: Alexx Perloff (University of Colorado Boulder)
• 24
Muon g-2: An Overview
The Muon g-2 experiment at Fermilab measures the magnetic moment of the muon by studying the behavior of muons as they orbit in a magnetic storage ring. Measuring muon precession frequencies relative to magnetic field strength and correcting for a wide array of factors lets us determine the magnetic moment anomaly a_μ = (g-2)/2 to very high precision. The motivation behind this effort is to investigate a possible discrepancy between the real muon magnetic moment anomaly and its value predicted by the standard model. This discrepancy was first identified twenty years ago in an experiment at Brookhaven National Laboratory, but the uncertainty at the time was too high for a conclusive discovery. Now, g-2 aims to reduce this uncertainty by a factor of four, determining at long last whether the standard model prediction is wrong. Such a discovery could revolutionize the field, opening the door to new initiatives delving for the first time into experimentally-observable physics beyond the standard model.
Speaker: David Kessler (University of Massachusetts Amherst)
• 25
Muon EDM searches at the new g-2 experiment at Fermilab
The new g-2 experiment at Fermilab is expected to improve the limit on the muon electric dipole moment (EDM) by two orders of magnitude compared to the world’s best limit previously set by the Brookhaven experiment. The Standard Model predicts a muon EDM far below the reach of current experiments, so any observation at Fermilab would be evidence for new physics, as well as a new source of CP violation in the lepton sector. Even if no EDM is observed, setting a stronger limit constrains BSM theories, making the muon EDM an excellent tool for new physics searches.
In this talk, I will review the various strategies being used to search for a muon EDM, with a focus on the analysis using the straw tracker detectors, which give the largest improvement compared to the previous measurement. I will also discuss the main systematics associated with the analysis, in particular the radial field and how it is measured with the precision required to not constrain the final result.
Speaker: Dominika Vasilkova (UCL)
• 26
The Mu2e Experiment --- Searching for Charged Lepton Flavor Violation
The Mu2e experiment will search for a Standard Model violating rate of neutrinoless conversion of a muon into an electron in the presence of an aluminum nucleus. Observation of this charged-lepton flavor-violating process would be an unambiguous sign of New Physics. Mu2e aims to improve upon previous searches by four orders of magnitude. This requires the world's highest-intensity muon beam, a detector system capable of efficiently reconstructing the 105 MeV/c conversion electrons, and minimizing sensitivity to background events. A pulsed 8 GeV proton beam strikes a target, producing pions that decay into muons. The muon beam is guided from the production target along the transport system and onto the aluminum stopping target. Conversion electrons leave the stopping target and propagate through a solenoidal magnetic field and are detected by the tracker and electromagnetic calorimeter. Here, I will introduce and outline the physics, goals, and expected performance of the Mu2e experiment, which is currently on schedule to report its search for New Physics this decade.
Speaker: Michael Hedges (Purdue University)
• 27
Mu2e Event Visualisation Development
The Mu2e experiment will search for the CLFV neutrinoless coherent conversion of muon to electron, in the field of a nucleus. A custom Event Display has been developed using TEve, a ROOT based 3-D event visualisation framework. Event displays are crucial for monitoring and debugging during live data taking as well as for public outreach. A custom GUI allows event selection and navigation. Reconstructed data like the tracks, hits and clusters can be displayed within the detector geometries upon GUI request. True Monte Carlo trajectory of the particles traversing the muon beam line, obtained directly from Geant4, can also be displayed. Tracks are coloured according to their particle identification and users get to select which trajectories to be displayed. Reconstructed tracks are refined using a Kalman filter. The resulting tracks can be displayed alongside truth information, allowing visualisation of the track resolution. The user can remove/add data based on energy deposited in a detector or arrival time. This is a prototype and an online event display, is currently under-development using Eve-7 which allows remote access for live data taking.
• 28
Design and Fabrication of the Cosmic Ray Veto for the Mu2e Experiment
The Muon-to-Electron Conversion Experiment (Mu2e) at Fermilab will search for the charged-lepton flavor-violating process of a neutrino-less conversion of a muon to electron in the presence of a nucleus. It will do so with an expected sensitivity that improves upon current limits of four orders of magnitude. Such sensitivity will require less than one expected background event over the lifetime of the experiment. The largest background are cosmic rays entering the experimental hall and producing an electron at the expected signal energy. To mitigate this otherwise indistinguishable process, the Mu2e Cosmic Ray Veto (CRV) is designed to veto cosmic rays with 99.99% efficiency while having low dead time in a high intensity environment. The Mu2e CRV is currently being fabricated at the University of Virginia and this talk will discuss the design and fabrication process.
Speaker: Matthew Solt (University of Virginia)
• 11:45 AM
Break
• Career Event: Workshop
Conveners: Beth Powers (UChicago), Mike Tessel (UChicago)
• 29
Professional Career Guide Workshop
Speakers: Beth Powers (UChicago), Mike Tessel (UChicago)
• 1:00 PM
Lunch Break
• Neutrinos: MiniBooNE/ MicroBooNE/ Neutrino beams
Convener: Stefano Tognini (Federal University of Goias)
• 30
MiniBooNE in 10 Minutes
In this talk, I will give an overview of the MiniBooNE experiment. MiniBooNE's 818-tonne mineral oil Cherenkov detector took data at Fermilab's Booster Neutrino Beam from 2002 to 2019 in both neutrino and antineutrino mode. The most notable result from this 17-year run is an as-yet unexplained $4.8\sigma$ excess of electron-like events. This excess has historically been interpreted under the hypothesis of short-baseline $\nu_\mu (\bar{\nu}_\mu) \to \nu_e (\bar{\nu}_e)$ oscillations involving a fourth sterile neutrino state; however, tension in the global sterile neutrino picture has led the community to consider alternative explanations, typically involving photon or $e^+ e^-$ final states. I will discuss the present status of the MiniBooNE anomaly. I will also cover other important results from the MiniBooNE experiment, including neutrino cross section measurements and sub-GeV dark matter constraints.
Speaker: Nick Kamp (MIT)
• 31
MicroBooNE in 10 Minutes
MicroBooNE is an 85 tonne liquid argon time projection chamber (LArTPC) detector situated at Fermilab which receives both an on-axis beam from the Booster Neutrino Beam and an off-axis beam component from the Neutrinos at the Main Injector (NuMI) beam. It collected data from 2015 until 2021 in order to acquire a high statistics sample of neutrino interactions on which its state of the art abilities of wire readout and particle identification can be utilized for fundamental physics searches. MicroBooNE’s signature analysis is to determine the source of the low-energy excess previously reported by MiniBooNE and LSND, and there is also a variety of other excellent physics taking place on topics ranging from low-to-medium-energy neutrino cross sections to detector simulation and physics reconstruction, useful to the broader short- and long-baseline oscillation programs. This talk will give a brief overview of the current status of MicroBooNE’s physics program, a summary of the latest major results, and a few future prospects.
Speaker: Alexandra Moor
• 32
Measuring the Neutral Current Neutral Pion Cross Section on Argon in MicroBooNE
MicroBooNE, a short-baseline neutrino experiment, sits on-axis in the Booster Neutrino Beamline at Fermilab where it is exposed to neutrinos with $\langle E_\nu \rangle$ ~ 0.8 GeV. Since this energy range is highly relevant to the Short Baseline Neutrino and Deep Underground Neutrino Experiment programs, cross sections measured by MicroBooNE will have implications on their searches for neutrino oscillation and charge-parity violation measurements. Additionally, MicroBooNE’s use of liquid argon time projection chamber technology makes it well-suited to precisely measure a wide range of final states, including those produced by neutral current (NC) interactions. NC $\pi^0$ interactions in particular are a significant background in searches for Beyond the Standard Model (BSM) $e^+e^-$ production and are an irreducible background to rare neutrino scattering processes such as NC $\Delta$ radiative decay and NC coherent single-photon production at low energies. Therefore, understanding the rate of NC $\pi^0$ production will improve the modeling of this background channel, reducing uncertainties in measuring BSM signatures and single-photon production processes. In this talk, I will report the highest-statistics measurement to date of the neutral current (NC) $\pi^0$ production cross section for neutrino-argon interactions.
Speaker: Nupur Oza (Los Alamos National Lab)
• 33
Application of hadron production data to Fermilab neutrino beam simulations
An accurate determination of the neutrino flux produced by the Neutrinos at the Main Injector (NuMI) and the Long-Baseline Neutrino Facility (LBNF) beamlines is essential to the neutrino oscillation and neutrino interaction measurements for the Fermilab neutrino experiments, such as MINERvA, NOvA, and the upcoming DUNE. In the current flux predictions, we use the Package to Predict the FluX (PPFX) to constrain the hadron production model using measurements of particle production off of thin targets mainly from the NA49 (CERN) experiment. Currently, the NA61/SHINE (CERN) and EMPHATIC (Fermilab) experiments are actively working to provide new hadron production measurements at different energies, nuclear targets, and particle projectiles for the accelerator-based neutrino experiments.
In this talk, we will present the status of the flux predictions and the effort to improve them by incorporating recent data from NA61/SHINE and EMPHATIC in the context of the PPFX-DUNE working group.
Speaker: Nilay Bostan (University of Notre Dame)
• Tuesday, June 21
• Neutrinos: LArIAT/ MINERvA
Convener: Ivan Lepetic (Rutgers University)
• 34
LArIAT in 10 minutes One West
### One West
#### Fermilab
Speaker: Gabriela Lima Lichtenstein (Universidade de Campinas)
• 35
MINERvA in 10 Minutes One West
### One West
#### Fermilab
The MINERvA (Main INjector ExpeRiment for v-A scattering) experiment was designed to perform high-statistics precision studies of neutrino-nucleus scattering in the GeV regime on various nuclear targets using the high-intensity NuMI beam at Fermilab. The experiment recorded neutrino and antineutrino scattering data from 2009 to 2019 using the Low-Energy and Medium-Energy beams that peak at 3.5 GeV and 6 GeV, respectively. MINERvA's results are being used as inputs to current and future experiments seeking to study neutrino oscillations, or the ability of neutrinos to change their type. The neutrino interaction measurements also provide information about the structure of protons and neutrons and the strong force dynamics that affect neutrino-nucleon interactions. A brief description of the MINERvA experiment, the highlights of past accomplishments, and recent results will be presented.
Speaker: Anezka Klustova (Imperial College London)
• 37
Nuclear medium effects in the antineutrino induced deep inelastic scattering for $<E_{\bar \nu_\mu}>\sim$ 6GeV at MINER$\nu$A
For a better understanding of neutrino properties, we require precision measurements of the oscillation parameters. Presently the systematic uncertainty on these parameters can be as large 25-30% because of the lack of understanding of neutrino-nucleon and neutrino-nucleus cross sections. For future high precision measurements we will need to reduce this uncertainty down to 2-3%. MINER𝜈A is a dedicated (anti)neutrino scattering experiment located in the NuMI beamline at Fermilab. Currently the results for the medium energy run of MINER𝜈A are being analyzed for inclusive as well as exclusive channels. We will present the preliminary results for charged current antineutrino deep inelastic scattering (DIS) observed at MINER𝜈A. For this study we used a sample of antineutrino interactions on several nuclear targets including iron, lead, carbon and hydrocarbon using the high intensity NuMI antineutrino beam with $\sim$ 6 GeV. We will discuss the sample selection and the background estimation in the passive nuclear targets as well as in the active tracker region. The ultimate goal is to extract the cross section ratios and perform an expanded partonic nuclear effects study in the weak sector for the first time.
Speaker: Vaniya Ansari (Aligarh Muslim University)
• 38
CC numu 1 pi+ production in the MINERvA tracker One West
### One West
#### Fermilab
• 9:15 AM
Break
• Neutrinos: NOvA
Convener: Lauren Yates (Fermilab)
• 39
NOvA in 10 minutes
NOvA, the NuMI Off-Axis $\nu_e$ Appearance experiment, uses a predominantly muon neutrino or anti-neutrino beam to study neutrino oscillations. NOvA is composed of two functionally equivalent, liquid scintillator detectors. A 300 ton near detector is located at Fermilab 1 km away from the beam target. A 14 kt far detector is located in Ash River, Minnesota, separated from the near detector by 809 km. By measuring and comparing neutrino and anti-neutrino rates at both detectors, we can measure the mass hierarchy, CP phase, and $\theta_{23}$. Outside the 3-flavor oscillation analyses, NOvA is also able to measure neutrino cross-sections, and search for sterile neutrinos and other signatures of new physics. In this talk I will give an overview of NOvA and discuss some of the most recent results.
Speaker: Maria Manrique Plata (NOvA)
• 40
Status of the measurement of the muon neutrino charged-current coherent pion production in the NOvA near detector
Charged Current coherent neutrino-nucleus pion production is characterized by small momentum transferred to the nucleus, which is left in its ground state. In spite of the relatively large uncertainties on the production cross-section, coherent production of mesons by neutrinos represents an important process, as it can shed light on the structure of the weak current and can also constitute a potential source of background for modern neutrino oscillation experiments and searches for Beyond Standard Model (BSM) physics. We will present the status of a new measurement of CC coherent pion production in the NOvA near detector at the Fermi National laboratory (Fermilab). The analysis is based on the use of both particle identification and kinematic selection criteria based on Convolutional Neural Networks (CNN). Given the energy range 1-5 GeV accessible with the available NOvA exposure in the NuMI beam, the results will also be relevant for future neutrino experiments like the Deep Underground Neutrino Experiment (DUNE).
Speaker: Chatura Kuruppu (University Of South Carolina)
• 41
Evaluating a novel, HEP distributed data service for NOvA neutrino candidate selection
In this work we evaluate the performance of the High-Energy Physics's new Object Store (hereafter referred to as HEPnOS) based on the mochi microservices architecture, that was designed specifically for HEP experiments and workflows. The use case we employ for the performance study is the task of NOvA neutrino candidate selection. This experimental setup consists of a HEPnOS server that holds the experimental data in an in-memory database and a set of client nodes that run the analysis by fetching the data from the server. While traditional analysis maps CPU cores to files (i.e. each core handles all events/slices within the file), the use of HEPnOS allows us to harness finer grained parallelism at the event level rather than at the file level. We show that this allows us improve strong scaling for this task, thereby allowing us to effectively harness available computational resources. Moreover, once the data is loaded into the server, the analysis can be run iteratively which can lead to speedups in higher level analysis routines like parameter fits.
Speaker: Dr Sajid Ali Syed (SCD)
• 42
Status of MRE Study for Neutrino-Electron Elastic Scattering in the NOvA Near Detector
NO$\nu$A is a long-baseline accelerator neutrino experiment at Fermilab that aims at precision neutrino oscillation analyses and cross-section measurements. Large uncertainties on the absolute neutrino flux affect both of these measurements. Measuring neutrino-electron elastic scattering provides an in-situ constraint on the absolute neutrino flux. In this analysis the signal is a single, very forward-going electron shower with $E_{e}{\theta_{e}}^{2}$ peaking around zero. After the electron selection, the primary background for this analysis is the beam $\nu_{e}$ charged current events ($\nu_{e}$ CC). Muon removed electron-added (MRE), events are constructed from $\nu_{\mu}$ CC interactions by removing the primary muon track and simulating an electron in its place. It helps us to understand the consequence of hadronic shower mismodelling on $\nu_{e}$ selection. This talk presents an overview of on-going MRE studies and a plan for how this sample can be used to provide a data-driven constrain on the $\nu_{e}$ CC backgrounds present in the $\nu$-e analysis.
• 10:30 AM
Break
Convener: Sudeshna Ganguly (Fermilab)
• 43
CMS in 10 minutes
Forty million times per second, the Large Hadron Collider (LHC) produces the highest energy collisions ever created in a laboratory. The Compact Muon Solenoid (CMS) experiment is located at one of four collision points on the LHC ring, using concentric sub-detectors to measure outgoing particles across a wide range of energies and species. The resulting data can be used to study Standard Model particles with unprecedented precision as well as to search for completely new physics phenomena. In this talk I will highlight some of the recent work by CMS physicists, and future prospects for the experiment.
Speaker: Christian Herwig (FNAL)
• 44
Standard Model four-top quark production at 13 TeV in the all-hadronic final state with CMS Run II data
Standard model four top quark production is a rare process with great potential to reveal new physics. Measurement of the cross section is not only a direct probe of the top quark Yukawa coupling with the Higgs, but an enhancement of this cross section is predicted by several beyond the standard model (BSM) theories. This process is studied in fully-hadronic proton-proton collision events collected during Run II of the CERN LHC by the CMS detector, which corresponded to an integrated luminosity of 137fb−1 and a center of mass energy of 13TeV. In order to optimize signal sensitivity with respect to significant and challenging backgrounds, several novel machine-learning based tools are applied in a multi-step and data-driven approach.
Speaker: Melissa Quinnan
• 45
BTL Cooling Plate Studies for CMS and MTD Upgrade
The Barrel Timing Layer (BTL) is a central component of the MIP Timing Detector (MTD) of the Compact Muon Solenoid (CMS). Precision timing information from this detector is necessary for the challenges of High-Luminosity LHC operations. These upgrades require an increase in the cryogenic capacity provided to the BTL system. Prototype cooling plates have been in development and have been tested in liquid CO2 at Fermilab under heating and cooling cycles. Results will be used for further development of the cooling system for the BTL detector.
Speaker: Mr Orgho Neogi (University of Iowa)
• 11:30 AM
Lunch Break
• Career Event: Panel
Conveners: Andres Alba Hernandez (intern), Ariana Hackenburg, Ryan Lazur
• 46
Annual Career Panel
Speakers: Andres Alba Hernandez (intern), Ariana Hackenburg, Ryan Lazur
• Wednesday, June 22
• Cosmic Physics: SuperCDMS/NEXUS
Convener: Samuel McDermott
• 47
SuperCDMS in 10 minutes
SuperCDMS is a dark matter (DM) search experiment under construction inside the SNOLAB facility (Lively, Canada). The experiment will employ two types of germanium- and silicon-based cryogenic calorimetric detectors to detect ionization and phonon signals from DM particle direct interactions. The detectors will be operated in a new radiopure cryostat and shield. In this talk, I will present the overview and the current status of the experiment.
Speaker: Dr Valentina Novati (Northwestern University)
• 48
NEXUS: A low-background, cryogenic facility for detector development and calibrations
The Northwestern Experimental Underground Site (NEXUS), located in the MINOS cavern at Fermilab, is a user facility for development and calibration of cryogenic detectors. The heart of NEXUS is a dilution refrigerator with a 10 mK base temperature, protected from radiogenic backgrounds by a moveable lead shield and 100 meters of rock overburden. The fridge is outfitted with cabling to support multiple detector payloads, with both RF and DC input and readout. Currently, NEXUS houses three experiments: a superconducting qubit array, SuperCDMS HVeV detectors, and a microwave resonator array. The facility is in the process of being upgraded with a DD neutron generator, an ideal source for calibrating low-energy nuclear recoils and processes like the Migdal effect. In this talk, I will provide an overview of the utilities available at NEXUS and discuss future opportunities.
Speaker: Dylan Temples (Northwestern University)
• 49
SuperCDMS HVeV program at the NEXUS facility at Fermilab
The Super Cryogenic Dark Matter Search (SuperCDMS) employs silicon and germanium calorimeters equipped with transition edge sensors to directly search for interactions from dark matter (DM). New 1-gram SuperCDMS HVeV (high-voltage with eV resolution) devices exhibit single-charge sensitivity, making it possible to search for sub-GeV-mass DM candidates such as electron-recoiling DM, dark photons and axion-like particles. These detectors are currently operated in the NEXUS facility at Fermilab. In this talk, I will present the status of the SuperCDMS HVeV program at NEXUS.
Speaker: Huanbo Sun (University of Florida)
• 50
Superconducting qubit studies at NEXUS
Superconducting qubits are of interest for the development of quantum computers and for quantum sensing in experiments such as dark matter searches. For both applications, it is crucial to understand qubit errors and the resulting performance limitations. Recent studies of charge noise and relaxation errors in a multiqubit device found significant spatial correlation of errors across the device. Such correlations are not compatible with current error-correcting algorithms for large arrays of qubits. The suspected cause of these errors is energy deposition from ionizing radiation. To test this hypothesis, we are studying the correlated charge noise of a multiqubit device in the NEXUS (Northwestern Experimental Underground Site) dilution fridge at Fermilab. The fridge is located underground in the MINOS tunnel and is equipped with lead shielding, reducing the backgrounds from both cosmic and lab-based sources of environmental radiation. This talk will provide a summary of the current status of our underground qubit experiments.
Speaker: Samantha Lewis (Fermi National Accelerator Laboratory)
• 9:15 AM
Break
• Cosmic Physics: Dark Matter
Convener: Dylan Temples
• 51
ADMX (Axion Dark Matter eXperiment) in 10 minutes
The axion is a very well-motivated Dark Matter candidate in the $\mu$eV mass range. Its discovery would also solve the longstanding question why the electric dipole moment of the neutron is vanishingly small, $< 10^{-26} e$cm, so far consistent with zero. ADMX searches for axion dark matter via its resonant conversion to photons inside a strong (7.6T) magnetic field using RF cavities. In this talk we will review the physics behind the experimental setup, recent results, and future runs.
Speaker: Stefan Knirck (Fermi National Accelerator Laboratory)
• 52
DarkSide Program
The DarkSide program is a direct WIMP dark matter search experiment using liquid argon time projection chamber (LAr-TPC). Its primary detector, DarkSide-50, run since 2015 a 50-kg-active-mass LAr-TPC filled with low radioactivity argon from underground source and produced world-class results for both the low mass (M_WIMP < 10 GeV/c2) and high mass (> 100 GeV/c2) WIMP search. The next stage of the program will be the DarkSide-20k, a 20-tonne fiducial mass LAr-TPC with SiPM based cryogenic photosensors, expected to be free of any background for exposure of 100 tonne x year. DarkSide-LM is another future experiment focusing on the low mass WIMP with an expected sensitivity down to the "solar-neutrino floor". This talk will give the latest updates and prospect on these experiments.
Speaker: Masato Kimura (AstroCeNT/CAMK, PAN)
• 53
Dark Matter Detection with the Light Dark Matter eXperiment
The constituents of dark matter are still unknown, and the viable possibilities span a very large mass range. Specific scenarios for the origin of dark matter sharpen the focus to within about an MeV to 100 TeV. Most of the stable constituents of known matter have masses in the lower range, and a thermal origin for dark matter works in a simple and predictive manner in this mass range as well. The Light Dark Matter eXperiment (LDMX) is a planned electron beam fixed-target experiment at SLAC that will probe a variety of dark matter models in the sub-GeV mass range using a missing momentum technique. Although optimized for this technique, LDMX is effectively a fully instrumented beam dump experiment, making it possible to search for visibly decaying signatures. This would provide another outlet for LDMX to probe complementary regions of dark matter phase space for a variety of models, provided that the additional technical challenges can be met. This contribution will give an overview of the motivations for LDMX and focus on the technical challenges of searches for visible signatures at LDMX.
• 54
The search for low mass Dark Matter with CCDs
In recent years, the demand for experimental data in cosmology, direct searches for dark matter and neutrino physics has highlighted the need to explore very low energy interactions. While Charge-Coupled Devices have proven their worth in a wide variety of fields, its readout noise has been the main limitation when using these detectors to measure small signals. The R&D done at Fermilab allowed the creation of a non-destructive readout system that uses a floating-gate amplifier on a thick, fully depleted charge coupled device to achieve ultra-low readout noise. While these detectors have already made a significant impact in the search for rare events and direct dark matter detection (SENSEI), its uses are being expanded to quantum optics, neutrino physics and astronomy. In this short talk I will go over the main principles behind the Skipper-CCD, its novel uses as particle detectors, and the current efforts at Fermilab and around the U.S. for the construction of a large multi-kg experiment for probing electron recoils from sub-GeV DM (OSCURA).
Speaker: Santiago Perez (Universidad de Buenos Aires)
• 55
Low energy calibration and characterization of novel dark matter detectors with a scanning laser device
The search for sub-Gev particle-like dark matter has developed rapidly in recent years. A major hurdle in such searches is in demonstrating sufficiently low energy detection thresholds to detect recoils from light dark matter particles. Many detector concepts have been proposed to achieve this goal, which often include novel detector target media or sensor technology. A universal challenge in understanding the signals from these new detectors and enabling discovery potential is characterization of detector response near threshold, as the calibration methods available at low energies are very limited. We have developed a cryogenic device for robust calibration of any photon-sensitive detector over the energy range of 0.62 - 6.89eV, which can be used explore a variety of critical detector effect such as position sensitivity of detector configurations, phonon transport in materials, and the effect of quasiparticle poisoning. In this talk, I will present the design overview and specifications, along with current status of the testing program.
Speaker: Hannah Magoon
• 10:45 AM
Break
• Cosmic Physics: Theory/CMB
Convener: Kirit Karkare
• 56
The Cosmic-Ray Positron Excess and the Constraints on Milky Way Pulsars
Pulsars - spinning neutron stars that are magnetized – are likely the leading source which could explain the large excess in the observed positron flux present in data measurements from the AMS-01, HEAT, and PAMELA collaborations. While first thought to be from a source of annihilating dark matter, there have since been more compelling observations - via experiments such as HAWC - of TeV halos associated with pulsars that are especially young and within a few kiloparsecs of Earth. These halos indicate that such pulsars inject significant fluxes of very high-energy electron-positrons pairs into the interstellar medium (ISM), thereby likely providing the dominant contribution to the cosmic-ray positron flux. This talk highlights the important updates on the constraints of local pulsar populations which further support the pulsar explanation to resolving the positron excess, through building upon previous work done by Hooper, Linden, and collaborators. Using the cosmic-ray positron fraction as measured by the AMS-02 Collaboration and applying reasonable model parameters, a good agreement can be obtained with the measured positron fraction up to energies of roughly ∼ 300 GeV. At higher energies, the positron fraction is dominated by a small number of pulsars, making it difficult to reliably predict the shape of the expected positron fraction. The low-energy positron spectrum supports the conclusion that pulsars typically transfer approximately ∼ 5 − 20% of their total spindown power in efficiency into the production of very high-energy electron-positron pairs, producing a spectrum of such particles with a hard spectral index of ∼ 1.5 − 1.7. Such pulsars typically spindown on a timescale on the order of 104 years. The best fits were obtained for models in which the radio and gamma-ray beams from pulsars are detectable to 28% and 62% of surrounding observers, respectively.
Speaker: Olivia Bitter (Fermilab/UChicago)
• 57
Constraining New Physics with the Cosmic Microwave Background
Observations of the Cosmic Microwave Background have revolutionized cosmology and established ΛCDM as the standard model describing the contents and evolution of the universe. Higher precision measurements of the CMB temperature and polarization anisotropy will continue to probe high energy physics on scales inaccessible in laboratories. These include the effective number of relativistic species, sum of the neutrino masses, and the energy scales of inflation. I will discuss how CMB measurements can constrain these parameters and the future experiments, such as CMB-S4, that are being developed for this purpose.
Speaker: Katie Harrington (University of Chicago)
• 58
On-sky Optical Calibration for CMB Experiments
The goal of Cosmic Microwave Background (CMB) observations is to study cosmology and astrophysics via increasingly high precision measurements. To achieve that, we must first understand the instruments to high precision, primarily via on-sky optical calibrations.
In this talk, I will first describe the on-sky optical calibration of the Cosmology Large Angular Sky Surveyor (CLASS), describing how we calibrate the intensity beam to 90-deg radius, how we constrained the temperature-to-polarization leakage to 10e-5, and how we calibrate the polarization angle to sub-deg levels. Then I will discuss the ongoing effort to develop the calibration pipeline within the Simons Observatory. I will also discuss using drone-carrying RF sources for calibration, and the current development along this approach.
Speaker: Zhilei Xu
• 11:45 AM
Lunch Break
• Cosmic Physics: Dark Energy
Convener: Dan Hooper (Fermilab)
• 59
Rubin Observatory Legacy Survey of Space and Time (LSST) - a multi-faceted game-changer (LSST in 10 minutes)
Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) is a game-changer -- with unprecedented data on billions of galaxies, we are looking at an exciting era of discovery and precision cosmology. I will talk about various goals of LSST in general and then specifically focus on constraining dark energy, highlighting some of the work happening in the LSST Dark Energy Science Collaboration (DESC). I will also talk about what doing science with such a large instrument entails in terms of collaboration, service, intellectual growth, and skill development.
Speaker: Humna Awan
• 60
The Dark Energy Survey in 10 minutes
Using hundreds of millions of galaxies in the largest galaxy catalog ever produced, the Dark Energy Survey (DES) has placed stringent constraints on the composition of the universe and the growth of large-scale structure. I will give an overview of the experiment and how we use the images we capture to further our understanding of cosmology, with an emphasis on the recent results from the first three years of observations.
Speaker: Noah Weaverdyck (Lawrence Berkeley National Lab)
• 61
Finding the selection function for DES galaxy-galaxy strong lenses
Strong lensing is a powerful probe into the mass distributions—and the evolutionary histories—of galaxies and galaxy clusters. However, in studies using strong lenses to probe galaxy structure, we need to assess whether strong lenses are representative of the general galaxy population or they form a biased subsample. We carry out an investigation into selection biases potentially present in a sample of 98 galaxy-galaxy strong lens candidates, identified in Dark Energy Survey (DES) Year 3 imaging. We model the surface brightness profile for all galaxies in this sample and in a sample of 3990 non-lensing luminous red galaxies (LRGs) from the DES Year 3 red-sequence Matched-filter Galaxy Catalog (redMaGiC). Statistical comparisons between the two populations through Kolmogorov-Smirnov (K-S) testing are then performed using a set of photometric observables from our model posteriors. In early results, we report statistically significant differences between the two populations in several observables. Most notably, the lensing galaxies may be larger in projected size and slightly brighter than non-lensing LRGs on average. This result is congruent with simple predictions of how strong lensing occurs. The brighter and more massive galaxies will provide more lensing cross-section and thus more opportunities for strong lensing to occur. We are working to improve our techniques for lens-source deblending, in order to include more strong lensing candidates in our sample of lensing galaxies.
Speaker: Aidan Cloonan (University of Chicago)
• 62
Automated Lens Parameter Estimation using Simulation-Based Inference Methods
We present ongoing work to automate and accelerate parameter estimation of galaxy-galaxy lenses using simulation-based inference (SBI) and machine learning methods.
Current cosmological galaxy surveys, like the Dark Energy Survey (DES), are predicted to discover thousands of galaxy-scale strong lenses, while future surveys, like the Legacy Survey of Space and Time (LSST) will find hundreds of thousands. These large numbers will make strong lensing a highly competitive and complementary cosmic probe of dark energy and dark matter. Unfortunately, the traditional analysis of a single lens is highly computationally expensive, requiring up to a day of human-intensive work. To leverage the increased statistical power from these surveys, we will need highly automated lens analysis techniques.
We present an approach based on Simulation-Based Inference for lens parameter estimation of galaxy-galaxy lenses. In particular, we demonstrate the successful application of Sequential Neural Density Estimators (SNPE) to efficiently infer a 5-parameter lens mass model. We compare our SBI constraints to a Bayesian Neural Network (BNN) and find that it outperforms the BNN, often producing posteriors distributions that are both more accurate and more precise, in some cases predicting constraints on lens parameters that are several times smaller than that from the BNN. Being able to accurately estimate the lens parameters of a large sample of lenses will enable us to study the dark matter distribution across populations of lenses, as well as potentially constrain dark energy models.
Speaker: Jason Poh (University of Chicago)
• 1:30 PM
Break
• Cosmic Physics: Computational/Simulation Methods in Astrophysics and Cosmology
Convener: Anastasia Sokolenko (HEPHY)
• 63
Estimating Parameters of Gravitationally Lensed Quasars with Simulation-Based Inference and SplineCNNs
The Hubble Tension is considered a crisis for the LCDM model in modern cosmology. Addressing this problem presents opportunities for identifying issues in data acquisition and processing pipelines or discovering new physics related to dark matter and dark energy. Time delays in the time-varying flux of gravitationally lensed quasars can be used to precisely measure the Hubble constant ($H_0$) and potentially address the aforementioned crisis. Gaussian Processes (GPs) are typically used to model and infer quasar light curves; unfortunately, the optimization of GPs incurs a bias in the time-evolution parameters. In this work, we introduce a machine learning approach for fast, unbiased inference of quasar light curve parameters. Our method is amortized, which makes it applicable to very large datasets from next-generation surveys, like LSST. Additionally, since it is unbiased, it will enable improved constraints on $H_0$. Our model uses Spline Convolutional VAE (SplineCVAE) to extract descriptive statistics from quasar light curves and a Sequential Neural Posterior Estimator (SNPE) to predict posteriors of Gaussian process parameters from these statistics. Our SplineCVAE reaches reconstruction loss RMSE=0.04 for data normalized in the range $[0,1]$. SNPE predicts the order of magnitude of time-evolution parameters with an absolute error of less than 0.2.
Speaker: Egor Danilov (EPFL)
• 64
Estimating Cosmological Constraints from Galaxy Cluster Abundance using Simulation-Based Inference
Modern and next-generation cosmic surveys will collect data on billions of galaxies. To derive constraints on dark matter and dark energy, we will require more efficient data analysis methods that can handle unprecedentedly large amounts of data and address multiple systematics and unknowns in galaxy cluster modeling. In this work, we use simulation-based inference (SBI; aka likelihood-free inference) to estimate five fundamental cosmological parameters (e.g., Ωm, h, ns) from the observable abundance of optical galaxy clusters. We use and compare two very different simulations – the N-body-based Quijote simulation suite and the analytical forward models from Cosmosis. We train a neural network on these simulations to predict the posterior probability of cosmological parameters, conditional on the observable galaxy cluster abundance. This amortized posterior calculation permits fast calculations on large data sets. Additionally, the resulting posterior is not constrained to limited analytic (e.g., Gaussian forms). Our results show that the SBI method can successfully recover the true values of the cosmological parameters within 2σ, which is comparable to state-of-the-art MCMC-based inference methods.
Speaker: Moonzarin Reza
• 65
DeepBench: A simulation library for cosmology focused dataset generation
The physics community lacks user-friendly computational tools for constructing simple simulated datasets for benchmarking and education in machine learning and computer vision. We introduce the python library DeepBench, which generates highly reproducible datasets at varying levels of complexity, size, and content focused on a cosmological context. DeepBench produces both highly simplified and more complex models of astronomical objects. For instance, basic geometric shapes, such as a disc and multiple arcs, could be used to simulate a strong gravitational lens. For more realistic models of astronomical objects, such as stars or elliptical galaxies, DeepBench simulates each of their well-recorded profile distribution functions. Beyond 2D images, we can also produce 1D representations of quasar light curves and galaxy spectra. We also include tools to collect and store the dataset for consumption by a machine learning algorithm. Finally, we present a trained ResNet50 model as an illustration of the expected use of the software as a benchmarking tool for testing the suitability of various architectures for a scientifically motivated problem.
We envision this tool to be useful in a suite of contexts at the intersection of cosmology and machine learning. The simplistic nature of the simulated data permits us to rapidly generate arbitrarily large data sets, from single-object fields to multi-object fields. The data can have both categorical and floating point labels so that a variety of tasks can be tested simultaneously or in a progression on the same data set – e.g., both classification and regression. We expect the tool to be of significant interest and utility both for a wide range of users. For those new to machine learning, it can produce toy-model datasets that behave similarly to astronomical data. For ML experts, it can be used to carefully and systematically test models.
Speakers: Ashia Lewis (Fermilab), Margaret Voetberg (Fermi National Accelerator Laboratory)
|
{}
|
# Cover Letter with Style - Part Three
5 min read · tagged xelatex
## Contents
This is the third part of the tutorial Cover letter with style. You can find the second part here.
It’s time to set up a custom header for our cover letter. The default one renders the sender address on the top left part of the letter, but we can change that. I usually put my name and my current job title in the header, accordingly to my big ego. In KOMA-Script there are several ways to define your own header, each one offering an increasing degree of freedom but also an increasing complication in use. Luckily enough, for our purposes, we can use the easiest.
The simplest way to change your header is by setting firsthead variable. That will set the custom header just for the first page of the document. But we’re talking about a cover letter: don’t even think to make it longer than a page. Enough said, I will modify the template now:
\ProvidesFile{standard.lco}[%
2002/07/09 v0.9a LaTeX2e unsupported letter-class-option]
\usepackage[english]{babel}
\usepackage{fontspec}
% ==============================================
% PERSONAL DATA
% ==============================================
\setkomavar{fromname}{Ambroos Janssen}
\setkomavar{fromphone}{+31 (0)22 7394203}
\setkomavar{fromemail}{a.janssen@gmail.com}
\setkomavar{fromfax}{+31 (0)71 5144543}
\setkomavar{fromurl}{http://www.kindoblue.nl}
\setkomavar{frombank}{Postbank 9307157}
\setkomavar{place}{Amsterdam}
\setkomavar{signature}{Ambroos Janssen}
% ==============================================
% FORMATTING STUFF
% ==============================================
% === font settings
\defaultfontfeatures{Mapping=tex-text}
\setmainfont {Cormorant}[]
\setsansfont [Scale=MatchLowercase]{Fira Sans Book}
\setkomavar{firsthead}{ \centering \usekomavar{fromname}\\ Software Architect and Developer}
\endinput
So, I have specified the following: the header shall be centered (line 33) and will be composed of two lines by using \\, which is the latex command to break the line. On top there will be our name. I didn’t write it directly because it is already set at line 11. So I used \usekomavar to lookup the fromname variable. The second line is the job title. You can choose to define a variable for it as well, if you plan to use the job title more than once in your letter.
Now, render the letter again, to see the result:
Well, it sucks. But we can improve things, no worries.
### Using macros
In Latex we can use macros to make code reusable. This will help also with the terseness of code that will use those macros. I will define a macro to give a name to a particular font family that will be used exclusively for the header title. We will be using Cormorant SC, which is the Small Caps version of the main font family. Also, I will define a couple of macros to represent the title and subtitle in the header. Like this:
\ProvidesFile{standard.lco}[%
2002/07/09 v0.9a LaTeX2e unsupported letter-class-option]
.
.
.
\centering
\begin{tabular}{c} \mytitle\\ \subtitle \end{tabular}
}
.
.
.
At line 8 I use the \newfontfamily to define a new handle representing a font family. In case you want to use the fontspec customization capabilities, we can use them here, in one place only. After line 8 the name titlefont will represent whatever font family and its parameters we chose. At line 9 we define the macro mytitle. It will be our name, this time rendered with the font titlefont. At line 10 we define subtitle in the same manner. The last thing to do is to update the definition of our header by using those macros. The code is more clear now, I hope.
Let’s take a looks a the result:
Much better. But I’m still not quite happy. So, I will increase the font size and, more importantly, I will increase the letter spacing, just for the first line.
### Exploiting font features
In Latex you have commands to alter the font size:
normal size ->10pt11pt12pt
\tiny5pt6pt6pt
\scriptsize7pt8pt8pt
\footnotesize8pt9pt10pt
\small9pt10pt11pt
\normalsize10pt11pt12pt
\large12pt12pt14pt
\Large14pt14pt17pt
\LARGE17pt17pt20pt
\huge20pt20pt25pt
\Huge25pt25pt25pt
So for example if you use the command Large and the normal font size is 10pt, you will get the size of 14pt applied in whatever context the command is applied. So let’s proceed:
\ProvidesFile{standard.lco}[%
2002/07/09 v0.9a LaTeX2e unsupported letter-class-option]
.
.
.
\centering
\begin{tabular}{c}
\mytitle\\[5mm] \subtitle
\end{tabular}
}
.
.
.
At line 9 and 10 I used the command large and Huge. Since Huge is still not huge enough for the first line of the header I also changed the scale of the entire titlefont macro by using the Scale modifier.
For the titlefont I also used the command \addfontfeature to set the letterspace factor equal to 15.0. To find the right number, I just did some experiments. I also set up half centimeter of space between the lines, with the [5mm] after the \\
So, let’s render the letter again:
Very nice!
In the fourth part I will show how to setup up the footer and then we will be ready for the watermarks, logo and barcode.
Stefano software developer
...
|
{}
|
# First Order Differential Equation
A first-order differential equation is defined by an equation
dy/dx =f (x,y)
of two variables x and y with its function f(x,y) defined on a region in the xy-plane. It has only the first derivative dy/dxso that the equation is of the first order and not higher-order derivatives. The above differential equation in first-order can also be written as;
y’ = f (x,y) or
(d/dx) y = f (x,y)
The differential equation is generally used to express a relation between the function and its derivatives. In Physics and chemistry, it is used as a technique for determining the functions over its domain if we know the functions and some of the derivatives.
## First Order Linear Differential Equation
If the function f is a linear expression in y, then the first-order differential equation y’ = f (x,y) is a linear equation. That is, the equation is linear and the function f takes the form
f(x,y) = p(x)y + q(x)
Since the linear function is y = mx+b
where p and q are continuous functions on some interval I. Differential equations that are not linear are called nonlinear equations.
Consider the first order differential equation y’ = f (x,y) is a linear equation and it can be written in the form
• y’ + a(x)y = f(x)
where a(x) and f(x) are continuous functions of x
The alternate method to represent the first order linear equation in a reduced form is
(dy/dx) + P(x)y = Q (x)
Where P(x) and Q(x) are the functions of x which are the continuous functions. If P(x) or Q(x) is equal to zero, the differential equation is reduced to the variable separable form. It is easy to solve when the differential equations are in variable separable form.
### Types of First Order Differential Equations
There are basically five types of differential equations in the first order. They are:
1. Linear Equations
2. Homogeneous Equations
3. Exact Equations
4. Separable Equations
5. Integrating Factor
## First Order Differential Equations Solutions
Usually, there are two methods considered to solve the linear differential equation of first order.
1. Using Integrating Factor
2. Method of variation of constant
Let us discuss each method one by one to get the solutions for differential equations of the first-order.
### Using an Integrating Factor
If a linear differential equation is written in the standard form:
y’ + a(x)y = 0
Then, the integrating factor is defined by the formula
u(x) = exp (∫a(x)dx)
Multiplying the integrating factor u(x) on the left side of the equation that converts the left side into the derivative of the product y(x)u(x).
The general solution of the differential equation is expressed as follows:
$y=\frac{\int u(x)f(x)dx+C}{u(x)}$
where C is an arbitrary constant.
### Method of Variation of a Constant
This method is similar to the integrating factor method. Finding the general solution of the homogeneous equation is the first necessary step.
y’ + a(x)y = 0
The general solution of the homogeneous equation always contains a constant of integration C. We can replace the constant C with a certain unknown function C(x). When substituting this solution into the non-homogeneous differential equation, we can be able to determine the function C(x). This approach of the algorithm is called the method of variation of a constant. Both methods lead to the same solution.
### Properties of First-order Differential Equations
The Linear first order differential equation possesses the following properties
• It does not have any transcendental functions like trigonometric functions and logarithmic functions
• The products of y and any of its derivatives are not present
### Applications of First-order Differential Equation
Some of the applications which use the first-order differential equation are as follows:
• Newton’s law of cooling
• Growth and decay
• Orthogonal trajectories
• Electrical circuits
• Falling Body Problems
• Dilution Problems
### Examples of First Order Differential Equation
Question 1 : Solve the equation y′−y−xex = 0
Solution : Given, y′−y−xe= 0
Rewrite the given equation and the equation becomes,
y′−y xex
Using the integrating factor, it becomes;
$u(x)=e^{\int (-1)dx}=e^{-\int dx}=e^{-x}$
Therefore, the general solution of the linear equation is
$y(x)=\frac{\int u(x)f(x)dx+C}{u(x)}=\frac{\int e^{-x}xe^{x}dx+C}{e^{-x}}$ $y(x)=\frac{\int xdx+C}{e^{-x}}=e^{x}\left ( \frac{x^{2}}{2}+C \right )$.
Question 2: Solve the differential equation y’+2xy = x.
Solution: The given equation is already in a standard form, y’ + P(x)y = Q(x)
Therefore, P(x) = 2x and Q(x) = x
Register with BYJU’S learning app to get more information about the maths-related articles and start practice with the problems.
|
{}
|
×
# Natural logarithm of natural logarithm
Can you find the integral of $$\int { \ln { \left( \ln { x } \right) } dx }$$
Note by Fredirick Estrella
11 months ago
Sort by:
In terms of elementary functions? No.
But you can include the logarithmic integral $$\displaystyle \text{li}(x) = \int_0^x \dfrac{dt}{\ln t}$$, to get $$\displaystyle \int \ln(\ln x) \, dx = x \ln(\ln x) - \text{li}(x) + C$$.
Your first step is to use the substitution $$y = \ln x$$. · 11 months ago
|
{}
|
# Pablo Management has five part-time employees, each of whom earns $100 per day. They are normally... 1 answer below » Pablo Management has five part-time employees, each of whom earns$100 per day. They are normally paid on Fridays for work completed Monday through Friday of the same week. They were paid in full on Friday, December 28, 2011. The next week, the five employees worked only four days because New Year’s Day was an unpaid holiday. Show (a) the adjusting entry that would be recorded on Monday, December 31, 2011, and (b) the journal entry that would be made to record payment of the employees’ wages on Friday, January 4, 2012.
Date Particulars Amount($) Amount($)
|
{}
|
# FDR and Benjamini-Hochberg
We wish to test ${N}$ null hypotheses ${H_{01}, \dots, H_{0N}}$ indexed by the set ${\{1, \dots, N\}}$. The hypotheses indexed by ${I_0 \subseteq \{1, \dots, N\}}$ are truly null with ${|I_0| = N_0}$ and the remaining hypotheses are non-null. A test in this setting looks at the data and decides to accept or reject each ${H_{0i}}$. While devising such a test, one obviously wants to guard against rejecting too many true null hypotheses. A classical way of ensuring this is to allow only those tests whose Family Wise Error Rate (FWER) is controlled at a predetermined small level. The FWER is defined as
$\displaystyle FWER := \mathop{\mathbb P} \left(\cup_{i \in I_0} \left\{\text{Reject } H_{0i} \right\} \right). \ \ \ \ \ (1)$
It is very easy to design a test whose FWER is controlled by a predetermined level ${\alpha}$: reject or accept each hypothesis ${H_{0i}}$ according to a test whose type I error is atmost ${\alpha/N}$. By the union bound, one then has
$\displaystyle FWER = \mathop{\mathbb P} \left(\cup_{i \in I_0} \left\{\text{Reject } H_{0i} \right\} \right) \leq \sum_{i \in I_0} \mathop{\mathbb P} \left\{\text{Reject } H_{0i} \right\} \leq \frac{\alpha N_0}{N} \leq \alpha.$
The above procedure is sometimes called the Bonferroni method. In modern theory of hypothesis testing, control of the FWER is considered too stringent mainly because it leads to tests that fail to reject many non-null hypotheses as well. The modern method is to insist on control of FDR (False Discovery Rate) as opposed to FWER. The FDR of a test is defined as
$\displaystyle FDR = \mathop{\mathbb E} \left( \frac{V}{R \vee 1} \right)$
where
$\displaystyle R := \sum_{i=1}^N I \left\{\text{Reject } H_{0i} \right\} \text{ and } V := \sum_{i \in I_0} I \left\{\text{Reject } H_{0i} \right\}$
and ${R \vee 1 := \max(R, 1)}$. The quantity ${V/(R \vee 1)}$ is often called the FDP (False Discovery Proportion). FDR is therefore the expectation of FDP.
How does one design a test whose FDR is controlled at a predetermined level ${\alpha}$ (e.g., ${\alpha = 0.1}$) and which rejects more often that the Bonferroni procedure? This was answered by Benjamini and Hochberg in a famous paper in 1995. Their procedure is described below. For each hypothesis ${H_{0i}}$, obtain a ${p}$-value ${p_i}$. For ${i \in I_0}$, the ${p}$-value ${p_i}$ has the uniform distribution on ${[0, 1]}$. For ${i \notin I_0}$, the ${p}$-value ${p_i}$ has some other distribution probably more concentrated near 0. Let the ordered ${p}$-values be ${p_{(1)} < \dots < p_{(N)}}$. The BH procedure is the following:
$\displaystyle \text{Reject } H_{0i} \text{ if and only if } p_{i} \leq \frac{i_{\max} \alpha}{N}$
where
$\displaystyle i_{\max} := \max \left\{1 \leq i \leq N : p_{(i)} \leq \frac{i \alpha}{N} \right\}.$
In the event that ${p_{(i)} > i \alpha/N}$ for all ${i}$, we take ${i_{\max} = 0}$. The BH procedure is probably easier to understand via the following sequential description. Start with ${i = N}$ and keep accepting the hypothesis corresponding to ${p_{(i)}}$ as long as ${p_{(i)} > \alpha i/N}$. As soon as ${p_{(i)} \leq i \alpha/N}$, stop and reject all the hypotheses corresponding to ${p_{(j)}}$ for ${j \leq i}$.
It should be clear that the BH procedure rejects hypotheses much more liberally compared to the Bonferroni method (which rejects when ${p_i \leq \alpha/N}$). Indeed any hypothesis rejected by the Bonferroni method will also be rejected by the BH procedure. The famous Benjamini-Hochberg theorem states that the FDR of the BH procedure is exactly equal to ${N_0 \alpha/N}$ under the assumption that the ${p}$-values ${p_1, \dots, p_N}$ are independent:
Theorem 1 (Benjamini-Hochberg) The FDR of the BH procedure is exactly equal to ${N_0 \alpha/N}$ under the assumption that the ${p}$-values ${p_1, \dots, p_N}$ are independent.
There probably exist many proofs of this by-now classical inequality. Based on an understated google search, I was able to find two extremely slick and short proofs which I describe below. Prior to that, let me provide some rudimentary intution for the specific form of the BH procedure. Based on the ${p}$-values ${p_1,\dots, p_N}$, our goal is to reject or accept each hypothesis ${H_{0i}}$. It is obvious that we will have to reject those for which ${p_i}$ is small but how small is the question. Suppose we decide to reject all hypotheses for which the ${p}$-value is less than or equal to ${t}$. For this procedure, the number of rejections and the number of false rejections are given by
$\displaystyle R_t := \sum_{i=1}^N I \{p_i \leq t\} ~~ \text{ and } ~~ V_t := \sum_{i \in I_0} I \{p_i \leq t\} \ \ \ \ \ (2)$
respectively. Consequently the FDR of this procedure is ${FDR_t := \mathop{\mathbb E} V_t/(R_t \vee 1)}$. We would ideally like to choose ${t}$ to be the largest subject to the constraint that ${FDR_t \leq \alpha}$ (largest because larger values of ${t}$ lead to more rejections or discoveries). Unfortunately, we do not quite know what ${\mathop{\mathbb E} V_t/(R_t \vee 1)}$ is; we do not even know what ${V_t/(R_t \vee 1)}$ is (if we did we could have used it as a proxy for ${FDR_t}$). We do however know what ${R_t}$ is but ${V_t}$ requires knowledge of ${I_0}$ which we do not have. However, the expectation of ${V_t}$ equals ${N_0 t}$ which we know cannot be larger than the known quantity ${Nt}$. It is therefore reasonable to choose ${t}$ as
$\displaystyle \tau := \sup \left\{t \in [0, 1] : \frac{Nt}{R_t \vee 1} \leq \alpha \right\} \ \ \ \ \ (3)$
and reject all ${p}$-values which are less than or equal to ${\tau}$. This intuitive procedure is actually exactly the same as the BH procedure and this fact is not very hard to see.
Proof One: This proof uses martingales and is due to Storey, Taylor and Siegmund in a paper published in 2004. The explanation above about an alternative formulation of the BH procedure implies that we only need to prove
$\displaystyle \mathop{\mathbb E} \frac{V_{\tau}}{R_{\tau} \vee 1} = \frac{N_0 \alpha}{N}. \ \ \ \ \ (4)$
where ${V_t}$ and ${R_t}$ are defined as in (2). The important observation now is that the process ${\{V_t/t : 0 \leq t \leq 1\}}$ is a backward martingale i.e.,
$\displaystyle \mathop{\mathbb E} \left( \frac{V_s}{s} \bigg| \frac{V_{t'}}{t'} , t' \geq t \right) = \frac{V_t}{t}$
for all ${0 \leq s < t \leq 1}$. This fact involves only independent uniform random variables and is easy. With ${\tau}$ defined as in (3), one of Doob’s martingale theorems gives
$\displaystyle \mathop{\mathbb E} \left(\frac{V_{\tau}}{\tau} \right) = \mathop{\mathbb E} \left(\frac{V_1}{1} \right) = N_0.$
Now the definition (3) of ${\tau}$ implies that ${N \tau/(R_{\tau} \vee 1) = \alpha}$ (this requires an argument!). As a result, we can replace ${\tau}$ by ${\alpha (R_{\tau} \vee 1)/N}$ to obtain (4). This completes the proof.
Proof Two: This proof works directly with the original formulation of the BH procedure. I have found this proof in a recent paper by Heesen and Janssen (see page 25 in arxiv:1410.8290). We may assume that ${I_0}$ is nonempty for otherwise ${V \equiv 0}$ and there will be nothing to prove. Let ${p := (p_1, \dots, p_N)}$ and let ${R(p)}$ denote the number of rejections made by the BH procedure. From the description, it should be clear that ${R(p)}$ is exactly equal to ${i_{\max}}$. We can therefore write the FDP as
$\displaystyle FDP = \frac{V}{R(p) \vee 1} = \sum_{j \in I_0} \frac{I \left\{p_j \leq \alpha R(p)/N \right\} }{R(p) \vee 1}.$
We now fix ${j \in I_0}$ and let ${\tilde{p} := (p_1, \dots, p_{j-1}, 0, p_{j+1}, \dots, p_n)}$ i.e., the ${j}$th ${p}$-value is replaced by ${0}$ and the rest of the ${p}$-values are unchanged. Let ${R(\tilde{p})}$ denote the number of rejections of the BH procedure for ${\tilde{p}}$. It should be noted that ${R(\tilde{p}) \geq 1}$ because of the presence of a zero ${p}$-value in ${\tilde{p}}$. The key observation now is
$\displaystyle \frac{I \left\{p_j \leq \alpha R(p)/N \right\} }{R(p) \vee 1} = \frac{I \left\{p_j \leq \alpha R(\tilde{p})/N \right\} }{R(\tilde{p})} \ \ \ \ \ (5)$
To see this, it is enough to note that ${I \left\{p_j \leq \alpha R(p)/N \right\} = I \left\{p_j \leq \alpha R(\tilde{p})/N \right\}}$ and that ${R(p) = R(\tilde{p})}$ when ${p_j \leq \alpha R(p)/N}$. It is straightforward to verify these facts from the definition of the BH procedure. Using (5), we can write
$\displaystyle FDR = \sum_{j \in I_0} \mathop{\mathbb E} \frac{I \left\{p_j \leq \alpha R(p)/N \right\} }{R(p) \vee 1} = \sum_{j \in I_0} \mathop{\mathbb E} \frac{I \left\{p_j \leq \alpha R(\tilde{p})/N \right\} }{R(\tilde{p})}$
The independence assumption of ${p_1, \dots, p_N}$ now implies that ${p_j}$ and ${R(\tilde{p})}$ are independent. Also because ${p_j}$ is uniformly distributed on ${[0, 1]}$ as ${j \in I_0}$, we deduce that ${FDR = \alpha N_0/N}$ and this completes the proof.
|
{}
|
# Could quantum computers break any cipher? [closed]
I've been told that physicists and computer scientists are working on computers that could use quantum physics to increase significantly computation capabilities and break any cipher so cryptography becomes meaningless.
Is it true?
## closed as off-topic by Kyle Kanos, ACuriousMind♦, David Z♦Jul 18 '15 at 21:34
• This question does not appear to be about physics within the scope defined in the help center.
If this question can be reworded to fit the rules in the help center, please edit the question.
• Related reading: How will Cryptography be changed by Quantum Computing? (and probably a fair bit in Cryptography's post-quantum-cryptography tag and Information Security's quantum-computing tag). – a CVn Jul 16 '15 at 10:19
• I'm voting to close this question as off-topic because the question is asking about verifying the claim of the use of a quantum computer and not at all about physics. Perhaps Cryptography or Skeptics might be better suited for this question. – Kyle Kanos Jul 16 '15 at 15:15
• I actually don't think this is on topic for us. It's really a question about cryptography - the only connection to physics is knowing that a quantum computer can effectively solve certain problems in less-than-exponential time. – David Z Jul 16 '15 at 15:16
• I don't think it's even that big of a deal. I answered a similar question a while ago, How will Cryptography be changed by Quantum Computing?, and the tdlr of it was we know how to deal with computers getting faster: bigger key-spaces. – Nathan Cooper Jul 16 '15 at 16:03
• @NathanCooper If I can build a machine whose ability to factor grows faster than your machine's ability to encrypt/decrypt, then bigger key spaces don't help. Or am I missing something? – DanielSank Jul 16 '15 at 17:42
No, it is not.
Quantum computers can factor large numbers efficiently, which would allow to break many of the commonly used public key cryptosystems such as RSA, which are based on the hardness of factoring.
However, there are other cryptosystems such as lattice-based cryptography which are not based on the hardness of factoring, and which (to our current knowledge) would not be vulnerable to attack by a quantum computer.
Quantum computing holds lots of promise, but it is not infinitely powerful.
The (exaggerated) claims you've heard are probably based on the most famous quantum computing algorithm, Shor's algorithm. This is a method for using a quantum computer to factor integers into prime numbers. As it turns out, many encryption schemes rely on the fact that factoring large numbers is very hard. Messages can be encrypted fairly easily in such a way that only someone who knows the prime factorization of a particular number can decrypt them with any reasonable amount of effort. If you could quickly factor large numbers, you would break many present-day encryption schemes.
However, there are other techniques that are not immediately threatened by quantum computers. If nothing else, you can always use a one-time pad as long as the message itself. This is mathematically unbreakable, since any message can be "decrypted" from the encrypted one with the appropriate guess at the key, so there is no way for an eavesdropper to know the real message.
Quantum computation may also open the doors to next-generation ways of securely transmitting information. For example, most encryption today is just that -- scrambling the message so that only the intended recipient can make sense of it. But there may be good quantum ways to physically ensure eavesdroppers cannot access the transmission in the first place.
• so would quantum computers help eavesdroppers get the edge over encrypters or the other way around? sounds like decryption gets easier in some cases, but good encryption gets easier too. – innisfree Jul 16 '15 at 9:41
• For one-time pads, encryption gets both more expensive and less secure, since the pad has to be physically sent to the encryptor, who than has to ensure that it does not get read by The Bad Guys. For very large traffic streams, the pads get big, too, so there's a cost there. As long as the key remains secure, though, eavesdroppers are helpless, and quantum computers are useless. – WhatRoughBeast Jul 16 '15 at 15:15
• But in many cases eavesdropping is not the problem. E.g. say I have a file on my hard disk which I don't want anyone to read, even if they steal the machine. – jamesqf Jul 16 '15 at 21:58
• @innisfree Quantum mechanics helps encrypters more than eavesdroppers, since Quantum Key Distribution makes one-time pads viable over an insecure channel. Current systems are designed in such a way that any eavesdropper would cause a wavefunction collapse, destroying the OTP in the process. Also note that Shor's algorithm is largely a theoretical vulnerability (at time of writing, the biggest number that has been factored is 56153), whereas these QKD systems are in use today. – James_pic Jul 17 '15 at 15:24
There is actually an entire complexity class devoted to the answer, which is "no, it cannot break any code." The class is known as BQP, or "bounded error quantum polynomial time." It is the class of decision problems which can be solved by a quantum computer in polynomial time, with no more than a 1/3 error margin (this error term is accounted for in a classical computation step which occurs after most quantum algorithms to verify that results are correct).
BQP is believed to have the following relations with other complexities:
• Contains P (Polynomial Time)
• Intersects, but probably does not fully contain NP (Nondeterministic Polynomial time)
• Probably does not contain NP-complete (as a corollary)
• Subset of PSPACE (Problems that are solvable with polynomial space requirements)
(The major unknown in that list is that it is not yet known if P=NP. The list assumes P!=NP, but if P=NP, clearly NP and NP-complete would also be part of BQP. We also don't know if NP=BQP or not. so much left to discover!)
RSA is crackable using quantum computers because the task of factoring large composite numbers is in BQP, as demonstrated by Shor's algorithm. Shor's algorithm is NP (but not NP-complete). There are other NP algorithms which are believed to be outside of BQP which can be used for encryption (The accepted answer links to lattice based cryptography, which is one such class of algorithms).
• "The only unknown in that list is that it is not yet known if P=NP." -- The exact relation of NP and BQP is also unknown. – Norbert Schuch Jul 16 '15 at 14:20
The answers so far have focused on public-key encryption, in which someone publishes a public key which can be used to encrypt messages to them, and which is not secret. Quantum computers are known to be efficient at breaking several of the problems most commonly used as the basis of public-key cryptography. It does not affect all public-key cryptography, just the most popular schemes; it does affect the most popular schemes.
However, there is more to encryption than public-key. Symmetric encryption schemes, where the two parties share a secret key, is believed to be subject to no more than a quadratic speedup with quantum computers (quantum computers can achieve a quadratic speedup for general search problems, but no more). This corresponds to effectively halving the key length. Unlike the common public-key systems, effectively halving the key length is extremely easy to respond to: you can just double your key lengths and carry on. Symmetric encryption is extremely common; even where public-key encryption is used, it's most often just used to exchange a key for symmetric encryption.
The most common symmetric system, AES, has a 256-bit key variant that provides 128 bits of security against quantum computers. Other schemes in development support 512-bit keys, which would provide 256 bits of effective security. Both 128 and 256 bits are believed to be secure for the forseeable future.
Likewise, cryptographic hash functions are believed to hold up very well against quantum computers. There's the same Grover's algorithm-based attack, but like with encryption functions it is easy to counter.
So, any claims that cryptography become meaningless are totally off-base, because the only thing that is seriously affected are public-key systems. Public-key systems are important, but cryptography is a much broader field.
• And this answer is precisely why I feel this question is off-topic here: there's not a drop of physics here (something we expect is in every answer). – Kyle Kanos Jul 16 '15 at 15:51
• @KyleKanos: There is a drop of physics here, in Grover‘s algorithm, which had enough physics in 1997 to be published in Phys. Rev. Lett. () (arXiv version) . I admit, it’s just a drop of physics, but knowing whether it’s physics or computer science is a common (and frustrating) problem with quantum information. – Frédéric Grosshans Dec 2 '16 at 16:28
No. There can exist no X computer that can break any cipher because the one time pad is a cipher and one time pad cannot be broken by algorithm (trivial proof in information theory).
I would like to add that quantum computers can not break any existing code because their logical gates can perform the very same operations as classical logical gates can. They add new possibilities while keeping those formerly possible in classical computers.
Since programms, at the core, work on logical gates, it is reasonable to assume that any existing code for classical computers can work on a quantum computer.
• This makes no sense. How does saying that a quantum computer can do everything a classical computer can, rule out the possibility that quantum computers could crack codes? Especially given that classical computers can crack codes, just not necessarily in a feasible amount of time. Your argument is like saying, "Forklifts can't lift 20kg because they can lift anything a human being can." It's wrong twice: humans can lift 20kg and, even if they couldn't, the forklift can do more. – David Richerby Jul 16 '15 at 20:06
## Ok - theoretically a quantum-computer could work like this:
You can start a normal computation and calculate it in parallel for all possible input keys. Which means decrypting an encrypted text with the right key takes just as long as decrypting it with every possible key (with a fixed length). This would mean that all traditional encryption methods like AES etc. could be cracked as fast as they could be decrypted by the holder of the legal key.
The tricky part (where the one time pad excels) is how to know if the resulting message you got from decrypting is actually the right text. For example I send the Message OK to you encrypted with AES 256bit. Now there are 2^256 possible keys to decrypt this message with and all of them will result in some result. Many maybe in something like #§ or other cryptic byte symbols, but some keys might lead to two letters "WB" and some combination might even lead to "NO".
So the difficult part is then to find out, which is the correct message! Because the (theoretical) quantum computer will in the end only output a few results with high propability - so you have to code a check, which will discern if the output is acutally a valid text. If the text is a lot bigger than the key and something like plain english, or better a standard-format which can be checked for integrity this could be possible. But if there are several possible outcomes which look valid, a human will have to sort them through so in the case of a onetime-pad cracking the code is just as good as simply guessing out of the blue. Other encryption-schemes might have to be adapted to produce valid-looking messages for false keys, but this seems possible...
--
This would only work if an actual quantum computer could work like this. As far as I know we have no hard evidence for a qc actually working like this. So maybe it simply can't be done and we don't even have a problem ;-)
• This isn't how quantum computers work. The belief that they can is so common that Dr. Aaronson has an entire section of his blog devoted to this misconception: Speaking Truth to Parallelism. They do give speedups for some problems, but not nearly as many as that would suggest. (Basically, we don't think that BQP = PSPACE.) – Charles Jul 16 '15 at 13:21
• There are a lot of articles in that category, while it is true that memresistors or similar technologies suffer from several problems (like encoding the output) a quantum computer could fundamentally resolve a complex problem with a high probability. And if you can tweak the probability high enough, so you are 99.99% accurate with a few runs that is practically good enough and could solve the mentioned problems, if someone could construct such a QC – Falco Jul 16 '15 at 13:39
• The fundamental issue is that quantum computers don't let you compute all possible inputs in parallel: quantum computing isn't nondetermanism. – Charles Jul 16 '15 at 14:11
• What you're describing is a nondeterministic computer (the "N" in "NP"). While we don't know for sure that quantum computers aren't equivalent to nondeterministic ones (we don't even know that $P\ne NP$), we're pretty much certain that they are not. – cpast Jul 16 '15 at 15:49
• -1 This answer is stating that BQP = NP, which is widely believed to be false. Using Grover's Algorthm, quantum computers can speed up brute-force searches by a square-root factor, which means you could brute-force a 256-bit AES key in only 2^128 operations. But that is still exponential complexity. – BlueRaja - Danny Pflughoeft Jul 16 '15 at 15:52
|
{}
|
# Explain how sociology addresses the tension between personal responsibility and the influence that society exerts on people’s
###### Question:
Explain how sociology addresses the tension between personal responsibility and the influence that society exerts on people’s decision.
### How to create radical equations when all you have is the vertical and horizontal asymptotes
How to create radical equations when all you have is the vertical and horizontal asymptotes...
### ELLEILLETTERITLLC1THUHUHUILLIETELEEULI LE111BEHALLEDFELSEFI ELE1111111U 11111111ETHLEHELELIETTELTITILIRTICELLIELLETIETEELLITETTESFILLESHELLTIETEELLIETELLER
ELLEILLETTERITLLC1THUHUHUILLIETELEEULI LE111BEHALLEDFELSEFI ELE1111111U 11111111ETHLEHELELIETTELTITILIRTICELLIELLETIETEELLITETTESFILLESHELLTIETEELLIETELLER ITLILLETTERETETTEL LEHETELLIETTITREISETELLELEBE1111111112137 PLNEUTELSTELLI11TTERIELLEILLUDELLERTELLUNTELESTETICULETREE111111T LILLELELBITNIFIL...
### Which of the following statements most accurately describes both 'to my dear loving husband' and 'to
Which of the following statements most accurately describes both "to my dear loving husband" and "to the king's most excellent majesty"?...
### Select all expressions that are equivalent to0.75x + 0.25(x + 12.4) + (x – 2.1) A. 2x + 1 B. x +
Select all expressions that are equivalent to 0.75x + 0.25(x + 12.4) + (x – 2.1) A. 2x + 1 B. x + 1 C. x + 3.1 + x + 2.1 D. x + 3.1 + x – 2.1Please tell me what to do. I am in need of help. Thank you!!...
### PLEASE HELP!! Gas cloud 1 is likely to form a star. Gas cloud 2 is not. Based on this information, match
PLEASE HELP!! Gas cloud 1 is likely to form a star. Gas cloud 2 is not. Based on this information, match the given conditions with each cloud. *options and pictures attached* $PLEASE HELP!! Gas cloud 1 is likely to form a star. Gas cloud 2 is not. Based on this information,$...
### Production possibilities graphs can us understand scarcity true false
Production possibilities graphs can us understand scarcity true false...
### The earth's surface is approximately 71% water and 29% land a. following the approach shown in class
The earth's surface is approximately 71% water and 29% land a. following the approach shown in class from the energy balance over land and oceans: an assessment based on direct observations and cmip5 climate models," estimate the fraction of the water vapor in the atmosphere (land +ocean) that orig...
### In a story why did the Jaguar spots blur after sun god painted him
In a story why did the Jaguar spots blur after sun god painted him...
### Hai everyone quick question, 您最喜歡哪種狗?;)
Hai everyone quick question, 您最喜歡哪種狗?;)...
### Points! aplenty! how many boxes 12cm by 8 by 9cm will fit into a container measuring
Points! aplenty! how many boxes 12cm by 8 by 9cm will fit into a container measuring 2.1m by 1.6m by 1m explain! you for your !...
### Help plz.
Help plz. $Help plz.............................................$...
### What is the slope of the line whose equation is 8x-4y=8?
What is the slope of the line whose equation is 8x-4y=8?...
### What is oxygen used for? oxygen is released into the air for us to breath
What is oxygen used for? oxygen is released into the air for us to breath...
### Physical features of the united states quiz do u guys have all the answers of the quiz? theres 9 questions
Physical features of the united states quiz do u guys have all the answers of the quiz? theres 9 questions thx please helpp...
### You and your younger sibling are at a state park. You come across a doe (a female deer) in a grassy
You and your younger sibling are at a state park. You come across a doe (a female deer) in a grassy clearing. Your sibling asked, "How does the deer survive on just grass?" You begin to explain to your younger sibling how all livings obtain and use energy. It proves to be a difficult task, so, you ...
### What’s the difference between a chart and a table?
What’s the difference between a chart and a table?...
### How does the human body build the complexmolecules it needs?
How does the human body build the complex molecules it needs?...
|
{}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.