(Click to enlarge) This image is of the solar eclipse earlier this week. Solar eclipses occur when the moon comes between the Earth and Sun. However, there's more to it than just that; otherwise we'd have a solar eclipse every ~28 days (one full lunar cycle). When viewed edge on, the plane in which the moon orbits is slightly tilted relative to the plane the Earth and Sun lie on (hence the reason the shadow moves along a different line in the sky than the sun, intersecting only at the one point). Because of this, most of the time, when the moon is on the line between the Earth and the Sun, it is simply too high or too low to cause an eclipse. Sometimes it's between the point where it's too high or low and the point where it will completely come in front of the Sun. In this case, the moon will only cover part of the Sun and the result will be a partial eclipse, such as this one I photographed in Spring 2005. Additionally, the moon's orbit around the Earth is not perfectly circular; it is slightly elliptical. This means that at some points in its orbit, it is further away than at others. As common experience should tell you, the further away an object is, the smaller it will look (which is why the sun appears the same size in the sky as the moon despite being millions of times bigger). Therefore, when the moon is further away, it will appear smaller, and may not cover the sun entirely. This is known as an annular eclipse, in which the moon is silhouetted on the sun leaving a ring (such as in this picture). Thus this image is an extremely rare "total solar eclipse" in which the moon completely covers the full disk of the sun. But what's all that fuzzy stuff around it in the center one? That's called the corona and is essentially the sun's extended atmosphere, which is shaped by the Sun's immense magnetic field. 
It's actually always there, but it's extremely faint in comparison to the sun, so we can't see it unless the sun is somehow blocked out, as in the case of a total solar eclipse. It is primarily composed of the nuclei of ionized hydrogen atoms. You may also be wondering why you didn't happen to catch this eclipse given that it only happened a few days ago. The reason is that this one only happened to be visible from regions of northern Africa and the Middle East. You should now be asking yourself, "why only such a small location given that half the Earth can see the sun at any time?" The reason for this is something called parallax. In the scenario of a total eclipse, only the locations directly below the center of the moon will see the eclipse. Locations slightly further away will be viewing the event from a slightly different angle. While this wouldn't seem like it would make much of a difference, try a quick experiment. Imagine your left eye is someone standing in southern Africa and that your right eye is someone standing in England. Close one eye and hold your fist out in front of you and cause it to eclipse something on the other side of the room (or outside if possible, the further away the better). Make sure the object you choose is just barely covered by your fist. Now without moving your arm, change eyes. You'll notice that your fist is no longer covering the object at all. This effect that you have just observed is precisely what happens in the case of an eclipse for different observers and is what astronomers call parallax (parallax also has many other applications in astronomy, such as directly measuring the distance to a great number of stars to extremely high precision thanks to the HIPPARCOS satellite). This quick experiment is also reasonably close to actual scale in terms of angular sizes and relation between sizes for the earth and moon. The distances between objects and true sizes aren't even close, but those don't matter in this case. 
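The parallax experiment above can also be put in rough numbers. A minimal sketch (the distances here are approximate reference values I'm supplying, not from the post): two observers a few thousand kilometers apart see the moon shifted by more than its own half-degree width, which is why only a narrow strip sees it centered on the sun.

```python
import math

MOON_DISTANCE_KM = 384_400   # approximate mean Earth-moon distance
BASELINE_KM = 8_000          # e.g., southern Africa to England, very roughly

# small-angle parallax: apparent shift of the moon between the two observers
parallax_deg = math.degrees(BASELINE_KM / MOON_DISTANCE_KM)
print(round(parallax_deg, 2))  # ~1.19 degrees, over twice the moon's 1/2 degree size
```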
So you're probably wondering why there's the strange disjointed path. After all, we never see that. There's only one sun in the sky. This image is actually a compilation of 18 images taken ~3 minutes apart (and presumably one more to use as the beautiful background). I can say that these were taken ~3 minutes apart because of the spacing of the suns. In 24 hours, the sun makes a full 360º path around the sky. Thus, converting hours to minutes and dividing, we find that the sun moves 1º every 4 minutes. Although it doesn't seem that there's any scale marked on this image to permit me to figure out how many degrees there are between each image from which to figure out the time between images, there actually is a very easy one: the sun itself. Both the sun and the moon have an angular size of 1/2º. That means that if the little suns were butted right up against one another, the sun would have traveled 1/2º between images, which in turn implies that it would have been 2 minutes (4/2) between each image. Since there's a little more space, roughly 1/2 of a sun width (i.e., 1/4º), I can estimate there was approximately another minute between pictures. Thus 2 + 1 = 3. So ultimately 18 images of the sun were taken and then reassembled to produce this dramatic image. While in and of itself it is quite stunning, a closer look reveals more information than meets the eye. This concept is one I feel is important to keep in mind in the sciences. Things are not always what they seem to be at a first glance. If this wasn't the driving concept behind science, we would still hold with many ridiculous ideas such as the Earth being flat, or alchemy, or perhaps more relevant today, intelligent design. Image copyright: Stefan Seip Found via: NASA Astronomy Picture of the Day Update: The original version of this post contained erroneous math which was noted by reader Benjamin Franz in the comments. I have corrected my math here, but wanted to make sure he was given due credit.
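The spacing argument above works out numerically as:

```python
SUN_DEG_PER_MIN = 360 / (24 * 60)   # sun's apparent motion: 0.25 degrees per minute
SUN_ANGULAR_SIZE = 0.5              # both sun and moon span about 1/2 degree
GAP_BETWEEN_SUNS = 0.25             # roughly half a sun-width of sky between images

# center-to-center spacing between consecutive suns in the composite
separation_deg = SUN_ANGULAR_SIZE + GAP_BETWEEN_SUNS
minutes_between_images = separation_deg / SUN_DEG_PER_MIN
print(minutes_between_images)  # 3.0
```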
Macroscopic Properties and Microscopic Models

As a simple example of how the macroscopic properties of a substance can be explained on a microscopic level, consider the liquid mercury. Macroscopically, mercury at ordinary temperatures is a silvery liquid which can be poured much like water—rather unusual for a metal. Mercury is also the heaviest known liquid. Its density is 13.6 g cm⁻³, as compared with only 1.0 g cm⁻³ for water. When cooled below –38.9°C mercury solidifies and behaves very much like more familiar solid metals such as copper and iron. Mercury frozen around the end of a wooden stick can be used to hammer nails, as long as it is kept sufficiently cold. Solid mercury has a density of 14.1 g cm⁻³, slightly greater than that of the liquid. When mercury is heated, it remains a liquid until quite a high temperature, finally boiling at 356.6°C to give an invisible vapor. Even at low concentrations gaseous mercury is extremely toxic if breathed into the lungs, and it has been responsible for many cases of human poisoning. In other respects mercury vapor behaves much like any other gas. It is easily compressible. 
Even when quite modest pressures are applied, the volume decreases noticeably. Mercury vapor is also much less dense than the liquid or the solid. At 400°C and ordinary pressures, its density is 3.6 × 10⁻³ g cm⁻³, about one four-thousandth that of solid or liquid mercury. A modern chemist would interpret these macroscopic properties in terms of a sub-microscopic model involving atoms of mercury. As shown in the following figure, the atoms may be thought of as small, hard spheres. Like billiard balls they can move around and bounce off one another. In solid mercury the centers of adjacent atoms are separated by only 300 pm (300 × 10⁻¹² m or 3.00 Å). Although each atom can move around a little, the others surround it so closely that it cannot escape its allotted position. Hence the solid is rigid. Very few atoms move out of position even when the solid strikes a nail. As temperature increases, the atoms vibrate more violently, and eventually the solid melts. In liquid mercury, the regular, geometrically rigid structure is gone and the atoms are free to move about, but they are still rather close together and difficult to separate. This ability of the atoms to move past each other accounts for the fact that liquid mercury can flow and take the shape of its container. Note that the structure of the liquid is not as compact as that of the solid; a few gaps are present. These gaps explain why liquid mercury is less dense than the solid. In gaseous mercury, also called mercury vapor, the atoms are very much farther apart than in the liquid and they move around quite freely and rapidly. Since there are very few atoms per unit volume, the density is considerably lower than for the liquid and solid. 
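The quoted vapor density is consistent with treating mercury vapor as an ideal gas. A quick check (the molar mass and gas constant are standard reference values, not from the text):

```python
# ideal-gas density of mercury vapor at 400 °C and 1 atm: rho = P*M / (R*T)
P = 101_325        # Pa, ordinary (atmospheric) pressure
M = 0.2006         # kg/mol, molar mass of mercury
R = 8.314          # J/(mol K), gas constant
T = 400 + 273.15   # K

rho = P * M / (R * T)               # kg/m^3
print(round(rho * 1e-3, 5))         # in g/cm^3: about 3.6e-3, as stated
print(round(13.6 / (rho * 1e-3)))   # liquid is ~3700-3800 times denser
```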
By moving rapidly in all directions, the atoms of mercury (or any other gas for that matter) are able to fill any container in which they are placed. When the atoms hit a wall of the container, they bounce off. This constant bombardment by atoms on the sub-microscopic level accounts for the pressure exerted by the gas on the macroscopic level. The gas can be easily compressed because there is plenty of open space between the atoms. Reducing the volume merely reduces that empty space. The liquid and the solid are not nearly so easy to compress because there is little or no empty space between the atoms. You may have noticed that although our sub-microscopic model can explain many of the properties of solid, liquid, and gaseous mercury, it cannot explain all of them. Mercury's silvery color and why the vapor is poisonous remain a mystery, for example. There are two approaches to such a situation. We might discard the idea of atoms in favor of a different theory that can explain more macroscopic properties. On the other hand, it may be reasonable to extend the atomic theory so that it can account for more facts. The second approach has been followed by chemists. In the current section on Atoms, Molecules and Chemical Reactions, as well as Using Chemical Equations in Calculations, we shall discuss in more detail those facts that require only a simple atomic theory for their interpretation. Many of the subsequent sections will describe extensions of the atomic theory that allow interpretations of far more observations.
The power operator binds more tightly than unary operators on its left; it binds less tightly than unary operators on its right. The syntax is:

power ::= primary ["**" u_expr]

Thus, in an unparenthesized sequence of power and unary operators, the operators are evaluated from right to left (this does not constrain the evaluation order for the operands). The power operator has the same semantics as the built-in pow() function, when called with two arguments: it yields its left argument raised to the power of its right argument. The numeric arguments are first converted to a common type. The result type is that of the arguments after coercion. With mixed operand types, the coercion rules for binary arithmetic operators apply. For int and long int operands, the result has the same type as the operands (after coercion) unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. For example, 10**2 returns 100, but 10**-2 returns 0.01. (This last feature was added in Python 2.2. In Python 2.1 and before, if both arguments were of integer types and the second argument was negative, an exception was raised.) Raising 0.0 to a negative power results in a ZeroDivisionError. Raising a negative number to a fractional power results in a ValueError. See About this document... for information on suggesting changes.
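The precedence and associativity rules above can be checked directly (the behavior shown here is the same in modern Python):

```python
# right-to-left evaluation: 2**3**2 is 2**(3**2), not (2**3)**2
assert 2 ** 3 ** 2 == 512

# unary minus on the left binds less tightly: -2**2 is -(2**2)
assert -2 ** 2 == -4

# unary minus on the right binds more tightly: 2**-2 is 2**(-2)
assert 2 ** -2 == 0.25   # a negative exponent yields a float result

# same semantics as the two-argument built-in pow()
assert pow(2, 10) == 2 ** 10 == 1024
```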
TRectangle = class(TShape)  (Delphi)
class PASCALIMPLEMENTATION TRectangle : public TShape  (C++)

TRectangle defines 2D rectangles with customized corners. It descends from TShape and can be used in styles to construct controls. The rectangle size and position are defined by the following properties of the TRectangle object:
- The shape rectangle ShapeRect defines the initial size and position of the rectangle.
- You can use the scaling factors of the TRectangle object to proportionally scale rectangle coordinates along local coordinate axes. Scaling moves the rectangle and changes its size. Note: Scaling not only scales the shape of an object proportionally to the scaling factors, but also changes the StrokeThickness of the contour proportionally to the scaling factor for each axis.
- You can use the rotation axis RotationCenter and rotation angle RotationAngle of the TRectangle object to rotate and move the rectangle.
- The Corners, CornerType, XRadius, and YRadius properties customize the shape of the rectangle corners.

TRectangle draws the contour and fills the background with the Paint method. Paint draws the contour and fills the background using the drawing pen and brush with the properties, color, and opacity defined by the Stroke, StrokeThickness, StrokeCap, StrokeDash, StrokeJoin, and Fill properties of the TRectangle object.
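A minimal usage sketch in Delphi, assuming a form context (`Self` as owner/parent is an assumption; only properties named above are used, and the radius values are illustrative):

```pascal
// create a rectangle with rounded corners on the current form
Rect := TRectangle.Create(Self);
Rect.Parent := Self;
Rect.SetBounds(10, 10, 200, 100);
Rect.XRadius := 12;   // horizontal corner radius
Rect.YRadius := 12;   // vertical corner radius
// Stroke and Fill control the contour pen and background brush used by Paint
Rect.StrokeThickness := 2;
```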
A Geometric Proof

See Also: Problem Solving with Heron's Formula

1. The incircle and its properties.
2. An excircle and its properties.
3. The area of the triangle is rs, where r is the inradius and s the semiperimeter.
4. The points of tangency of a circle inscribed in an angle are equidistant from the vertex.

Return to the EMAT 4400/6600 Page
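Item 3 can be checked numerically with a concrete triangle; a minimal sketch using the 3-4-5 right triangle (my own example, not part of the outline):

```python
import math

def area_heron(a, b, c):
    """Heron's formula: triangle area from the three side lengths."""
    s = (a + b + c) / 2          # semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

a, b, c = 3, 4, 5
s = (a + b + c) / 2              # semiperimeter s = 6
area = area_heron(a, b, c)       # area = 6.0
r = area / s                     # inradius from A = r*s, so r = 1.0
assert math.isclose(area, r * s)
print(area, r)
```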
See also the Dr. Math FAQ: Browse High School Sequences, Series. Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Strategies for finding sequences.
- Divergent Infinite Series [05/30/2003] I bought the book _The Mathematical Universe_, by William Dunham, with a chapter on Euler's infinite series. A proof he outlined that I could not follow is this...
- Dividing a Circle using Six Lines [08/29/2001] What is the largest number of regions into which you can divide a circle using six lines?
- Does the Series cos(n)/n^(3/4) Converge or Diverge? [11/10/2009] Doctor Jordan invokes the Euler equation to bound a doozy of a series.
- Do I Use n Or n-1 to Find the nth Term in a Geometric Sequence? [12/01/2009] A look at how the formula for the nth term in a geometric sequence, a*r^(n-1), sometimes needs to be a*r^n to fit the problem context.
- Doubling Pennies [11/26/1996] If I start with a penny and double it daily for 30 days, how many pennies do I have at the end?
- e as a Series and a Limit [03/30/1998] Why does e = 1 + 1/2! + 1/3! + 1/4! + ... and lim (1 + 1/n)^n, as n -> infinity?
- Equation of a Sequence with Constant Third Differences [05/26/1998] Using the method of difference or the Gregory-Newton formula.
- Euler's summation of 1/n^2 [03/15/2000] Prove that pi^2/6 = the summation of 1/n^2 from 1 to infinity.
- Evaluating Indefinite Sums [12/07/2003] How can I evaluate the sum of the terms 1/(3n+1)(4n+2), where n ranges from -infinity to +infinity?
- Evaluating the Series n^2/2^n a Differential Way [11/05/2010] A student knows that the series n^2/2^n converges as n goes from zero to infinity. Doctor Ali offers one approach for determining its sum, based on differentiating the geometric series and its closed form solution.
- Expansion of (x+y)^(1/2) [06/07/1999] Is there a way to expand (x+y)^(1/2)? If so, how is it derived? 
- Expected Tosses for Consecutive Heads with a Fair Coin [06/29/2004] What is the expected number of times a person must toss a fair coin to get 2 consecutive heads?
- Exponential Generating Function [05/06/2000] How can I prove that the exponential generating function of the series 1, 1*3, 1*3*5, 1*3*5*7, ... is 1/sqrt(1-2*x)?
- Exponential Series Proof [05/05/2001] Given e^x greater than or equal to 1 + x for all real values of x, and that (1+1)(1+(1/2))(1+(1/3))...(1+(1/n)) = n+1, prove that e^(1+(1/2)+(1/3)+...+(1/n)) is greater than n. Also, find a value of n for which 1+(1/2)+(1/3)+...+(1/n) is greater than 100.
- Factors and Multiples - Hamiltonian Path [11/02/1998] We have to make a sequence of numbers, all different, each of which is a factor or a multiple of the one preceding it.
- Feeding Chickens - Arithmetical Progression [7/6/1996] A farmer has 3000 hens. Each week he sells 20... what is the total cost of feeding the hens...?
- A Fibonacci Proof by Induction [06/05/1998] Let u_1, u_2, ... be the Fibonacci sequence. Prove by induction...
- Fibonacci Sequence - An Example [05/12/1999] Glass plates and reflections.
- Figurate and Polygonal Numbers [11/21/1998] I need to know everything about figurate numbers.
- Finding a Formula for a Number Pattern [09/30/2004] We are learning about sequences and how to find the patterns in numbers. Our teacher gave us the sequence 0, 3, 8, 15, 24, 35 and told us that we had to use factoring to find the answer. I know the answer is (n + 1)(n - 1), but I can't see how to get that.
- Finding a Function to Generate a Particular Output [09/21/2004] Dr. Vogler presents several possible functions f(n) that will generate the output 0,0,1,1,0,0,1,1,0,0... for n = 1 to infinity.
- Finding an Explicit Formula for a Recursive Series [05/17/2000] How far will a man end up from his home if he walks a mile west, then walks east one half that distance, then walks west half of the distance he has just walked, and so on? 
- Finding a Non-Recursive Formula [06/10/1999] How can I find a non-recursive formula for the recurrence relation s_n = -[s_(n-1)] - n^2 with the initial condition s_0 = 3?
- Finding an Unknown Sequence [3/31/1996] I can't figure out where to start with this Series and Sequences question: 1+3x+6(x)(x)+10(x)(x)(x)+15(x)(x)(x)(x)+. . .
- Finding a Series Given the Sum [09/27/1999] How can I find all series of consecutive integers whose sum is a given number?
- Finding a Term of an Arithmetic Series [12/13/1995] The fifth term of an arithmetic series is 16 and the sum of the first 10 terms is 145. Write the first three terms.
- Finding Catalan Numbers [12/15/1999] What are Catalan numbers and what applications do we have for them?
- Finding Common Numbers in Two Sequences [09/21/2006] I'm working with sequences that start with an initial value and an initial amount to add to get the next term. The amount added then increases by 2 as you move from term to term. If I have two such sequences, is there a way to calculate what numbers they will have in common based on the two initial values and amounts to add?
- Finding Number Patterns [05/29/1999] I am trying to find the pattern of the numbers
- Finding Rules for Number Patterns [06/05/2009] I'm having trouble finding an algebraic expression that generates the pattern 3, 5, 8, 12, 17, 23, 30. Can you help?
- Finding Sums of Sines and Series [03/10/2004] I am trying to find the sum of sin1 + sin2 + sin3 + ... + sin90. I'm also trying to find the sum of 1^n + 2^n + 3^n + 4^n + ... + n^n. Can you help me?
- Finding the 1000th Term in a Sequence [1/19/1996] Two kids on a car trip decide to count telephone poles. One kid counts normally, 1,2,3,4,5...25,26,27...31,32,33, etc. The other kid counts them a different way: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 9, 8, 7, 6, 5, 4, 3, 2,
- Finding the Digit of a Decimal Expansion [11/14/1998] What digit will appear in the 534th place after the decimal point in the decimal representation of 5/13? 
- Finding the Missing Numbers in a Sequence [11/30/1995] Fill in the blanks for this series of numbers based on its underlying pattern: 3, 4, 6, 8, 12, (), 18, 20, (), 30, 32
- Finding the Next Number in a Sequence Given Its Geometric Mean ... Which Is a Square Root [09/24/2009] A student who knows how to calculate geometric means gets rattled when trying to determine a sequence from its square root geometric mean.
- Finding the Next Number in a Series [07/22/2002] Are there any formal or systematic methods for solving problems that ask you to find the next number in a series?
- Finding the Rule for a Given Sequence [09/11/2008] If the first six terms of a sequence are -4, 0, 6, 14, 24, 36, what is the rule? Find the 20th and 200th terms. This answer discusses finite differences and other handy techniques for solving this sort of problem.
- Finding the Sum of an Infinite Series [03/05/2006] Find the sum of the series 1 + 1/2 + 1/3 + 1/4 + 1/6 + 1/8 + 1/9 + 1/12 + ... which are the reciprocals of the positive integers whose only prime factors are 2's and 3's.
- Finding the Sum of Arithmetico-Geometric Series [09/13/2004] Find the sum of the infinite series 1/7 + 4/(7^2) + 9/(7^3) + 16/(7^4) + ... I would also like to know if there is a general rule to find the sum of (n^2/p^n) for n = 1 to infinity.
- Finding the Sum of Arithmetic Series [06/12/2006] Find the sum of the arithmetic series 4 + 10 + 16 + ... + 58.
A 136-m-wide river flows due east at a uniform speed of 9 m/s. A boat with a speed of 1 m/s relative to the water leaves the south bank pointed in a direction 16° west of north. How long does the boat take to cross the river? Assume an xy coordinate system with the positive direction of the x axis due east and the positive direction of the y axis due north. Thanks in advance.
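A quick sketch of the standard approach (my own working, not a posted answer in the thread): since the current flows purely east, only the northward component of the boat's velocity relative to the water matters for the crossing time.

```python
import math

width = 136.0    # m, river width (south bank to north bank)
v_boat = 1.0     # m/s, boat speed relative to the water
heading = 16.0   # degrees west of north

# northward component of the boat's velocity; the eastward current
# has no northward component, so it does not affect the crossing time
v_north = v_boat * math.cos(math.radians(heading))
t = width / v_north
print(round(t, 1))  # ~141.5 s
```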
A triangle is inscribed in a circle. The vertices of the triangle divide the circle into three arcs of lengths 3, 4, and 5. What is the area of the triangle? Not sure how to start the problem... Could I get some hints please?

You must draw a diagram. Inscribe a triangle in a circle. Draw three radial segments, one from each vertex to the center. Now you have three sub-triangles. The sum of their areas is the area you want. The radius of the circle is 6/π. How and why? The angle subtending the arc of length 5 measures 5π/6. Again, how and why? If a and b are the lengths of two sides of a triangle and θ is the measure of the angle between them, then the area of that triangle is (1/2)ab·sin(θ).

I think I got it. It took me a while though.... I understand the radius is 6/π because 2πr = 12, so r = 6/π. Then I understood the angle subtending the arc of length 5 is 5π/6 because 2π × 5/12 = 5π/6; for the arc of length 4, 2π × 4/12 = 2π/3; and for the arc of length 3, 2π × 3/12 = π/2. I fully understand why (1/2)ab·sin(θ) gets the area. Therefore, (1/2) × (6/π) × (6/π) × sin(5π/6) + (1/2) × (6/π) × (6/π) × sin(2π/3) + (1/2) × (6/π) × (6/π) × sin(π/2) = (9/π²) × (3 + √3).
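The final expression can be verified numerically (a sanity check of the arithmetic above, not part of the original thread):

```python
import math

r = 6 / math.pi   # radius from circumference 3 + 4 + 5 = 12 = 2*pi*r
# central angles corresponding to the arcs of length 5, 4, and 3
angles = [5 * math.pi / 6, 2 * math.pi / 3, math.pi / 2]

# sum the three sub-triangle areas, each (1/2) r^2 sin(theta)
area = sum(0.5 * r * r * math.sin(t) for t in angles)
closed_form = (9 / math.pi**2) * (3 + math.sqrt(3))
assert math.isclose(area, closed_form)
print(round(area, 4))
```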
A quadrilateral changes shape with the edge lengths constant. Show the scalar product of the diagonals is constant. If the diagonals are perpendicular in one position are they always perpendicular? NRICH has always had good solutions from Madras College in St Andrews, Scotland, but the solutions to this problem were truly excellent. As a quadrilateral Q is deformed (keeping the edge lengths constant) the diagonals and the angle X between them change. Prove that the area of Q is proportional to tan X.
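A numerical check of the first claim (my own sketch, not from NRICH): for a quadrilateral ABCD, the identity AC·BD = (BC^2 + DA^2 - AB^2 - CD^2)/2 depends only on the edge lengths, so the scalar product is unchanged as the quadrilateral flexes.

```python
import math

def circle_intersect(p0, r0, p1, r1):
    """One intersection point of two circles (assumes they intersect)."""
    d = math.dist(p0, p1)
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = math.sqrt(r0**2 - a**2)
    mx = p0[0] + a * (p1[0] - p0[0]) / d
    my = p0[1] + a * (p1[1] - p0[1]) / d
    return (mx + h * (p1[1] - p0[1]) / d, my - h * (p1[0] - p0[0]) / d)

def diag_dot(theta):
    """AC.BD for quadrilateral ABCD with AB=1, BC=2, CD=1, DA=2, hinged at A."""
    A = (0.0, 0.0)
    B = (1.0, 0.0)
    D = (2 * math.cos(theta), 2 * math.sin(theta))
    C = circle_intersect(B, 2.0, D, 1.0)   # C lies at distance 2 from B, 1 from D
    AC = (C[0] - A[0], C[1] - A[1])
    BD = (D[0] - B[0], D[1] - B[1])
    return AC[0] * BD[0] + AC[1] * BD[1]

# identity predicts (4 + 4 - 1 - 1)/2 = 3 for every hinge angle
print(diag_dot(math.pi / 2), diag_dot(1.2))  # both ~3.0
```

A corollary of the invariance: if the scalar product is zero (perpendicular diagonals) in one position, it is zero in every position.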
Greenland Ice Sheet Melt Characteristics Derived from Passive Microwave Data The Greenland ice sheet melt extent data, acquired as part of the NASA Program for Arctic Regional Climate Assessment (PARCA), is a daily (or every other day, prior to August 1987) estimate of the spatial extent of wet snow on the Greenland ice sheet since 1979. It is derived from passive microwave satellite brightness temperature characteristics using the Cross-Polarized Gradient Ratio (XPGR) of Abdalati and Steffen (1997). It is physically based on the changes in microwave emission characteristics observable in data from the Scanning Multi-channel Microwave Radiometer (SMMR) and the Special Sensor Microwave/Imager (SSM/I) instruments when surface snow melts. It is not a direct measure of the snow wetness but rather is a binary indicator of the state of melt of each SMMR and SSM/I pixel on the ice sheet for each day of observation. It is, however, a useful proxy for the amount of melt that occurs on the Greenland ice sheet. The data are provided in a variety of formats including raw data in ASCII format, gridded daily data in binary format, and annual and complete time series climatologies in gridded binary and GeoTIFF format. All data are in a 60 x 109 pixel subset of the standard Northern Hemisphere polar stereographic grid with a 25 km resolution and are available via FTP. The following example shows how to cite the use of this data set in a publication. For more information, see our Use and Copyright Web page. Waleed Abdalati. 2008. Greenland Ice Sheet Melt Characteristics Derived from Passive Microwave Data. [indicate subset used]. Boulder, Colorado USA: National Snow and Ice Data Center.
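The XPGR itself is a simple ratio of two brightness temperatures. A sketch (the 19 GHz horizontal / 37 GHz vertical channel combination follows Abdalati and Steffen's formulation; the melt threshold value below is an assumption for illustration and should be checked against the source paper):

```python
def xpgr(t19h, t37v):
    """Cross-Polarized Gradient Ratio from brightness temperatures (kelvin)."""
    return (t19h - t37v) / (t19h + t37v)

# assumed threshold: pixels whose XPGR exceeds it are flagged as melting,
# since wet snow raises 19 GHz horizontal emission relative to 37 GHz vertical
MELT_THRESHOLD = -0.0158

def is_melting(t19h, t37v):
    """Binary melt indicator for one pixel, as in the data set description."""
    return xpgr(t19h, t37v) > MELT_THRESHOLD

print(is_melting(250.0, 240.0))  # True: wet-snow-like signature
print(is_melting(200.0, 240.0))  # False: dry-snow-like signature
```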
The gem command is one of the most used Ruby-related commands, but most users don't take the time to learn anything past gem install and gem search. Learning the gem command well is an essential Ruby skill. The gem command-line utility is split into a number of commands. Among these are the familiar install and search, but other useful commands exist such as spec and sources. However, you should start with the help command. The gem command has integrated help. By running the command gem help, you can get a basic help screen. To get a list of commands available, run gem help commands. To get further help about a specific command (for example, the purge command), run gem help purge. Another useful help screen is the examples screen, accessible via gem help examples. Most commands work on a gem repository, either local (the gems you have installed) or remote; by default, it's the local repository. To specify the repository you intend to use, add either --remote or --local to the end of the command. For example, to search the remote repository for gems with the word "twitter" in them, you would run gem search twitter --remote. Specify both remote and local repositories by using the --both switch. When running any gem command, the name can be shortened as long as it doesn't become ambiguous. To run the gem dependency command, you can simply run gem dep. Below is a list of the commands and an explanation of their function.
build - Given the source code for a gem and a .gemspec file, this will build a .gem file suitable for uploading to a gem repository or installing on another computer with the gem command. A .gemspec file holds information about a gem including name, author, version and dependencies.
cert - Manages certificates for cryptographically signed gems. If you're worried that a malicious user is going to compromise the gems you install, you can cryptographically sign them to prevent this. 
Keys may be added or deleted from your list of acceptable keys, as well as a few other crypto key related functions.
check - Performs a number of actions, including running any unit tests, checking the checksum of installed gems and looking for unmanaged files in the gem repository. The type of check you wish to run must be added to the end of the gem command.
cleanup - Removes old versions of installed gems from your local repository. If you frequently upgrade gems, you can have old versions hanging around that you don't need anymore.
contents - Shows the contents of an installed gem. This is a list of files the gem installed and where they are on the filesystem.
dependency - Shows all the gems the listed gem depends upon, as well as the required versions. For example, running gem dep twitter tells me the twitter gem relies on hpricot, activesupport, httparty and echoe. This is useful when packaging your applications for deployment.
environment - Displays various information about the RubyGems environment, including the version installed, where it's installed, where the gem repository is, etc.
fetch - Fetches a gem and saves the .gem file in the current directory. This is useful for transferring gems to be deployed on other servers, without them needing to download the gem themselves.
generate_index - Generates an index for a gem server. This is only useful if you're running a gem repository.
install - Downloads a gem from the specified repository (--local or --remote) and installs it. It also downloads and installs any dependencies. To install a specific version of a gem, use the --version switch.
list - Displays a list of gems in the repository. Note that doing this with --remote will generate quite a large list. Save this list to a file for fast searching.
lock - Generates a Ruby script that requires the exact version of all dependencies of a certain gem. 
This ensures that the gem versions tested during development will be installed, not future or past versions which may have bugs the developers cannot account for.
mirror - Mirrors an entire gem repository. Note that trying to mirror the RubyGems repository is a huge task. Do not do so unless you need to run a local mirror for other clients.
outdated - Displays a list of installed gems that have newer versions on the remote repository.
pristine - Returns gems to their original state. This means unpacking all gems from the local cache, overwriting any changes made to the gems in the local gem repository. This can be used to repair a broken gem.
rdoc - Generates rdoc documentation for an installed gem. This rdoc documentation can then be viewed with a web browser.
search - Searches the names of all gems and returns a list of gems whose name contains a string. For example, to search for all gems containing the word twitter in the name, run gem search twitter.
server - Starts a web server that will act as a gem repository and serves RDoc documentation for all installed gems. This is most useful for the documentation feature.
sources - Manages the list of sources for remote repositories. By default, only http://gems.rubyforge.org is in the list, but more can be added or removed.
specification - Displays the gemspec of a gem. This will tell you all the information about a gem, including author, dependencies, etc.
stale - Displays a list of installed gems, as well as the access times (the last time the gem was included). This can help you weed out gems you no longer use so you can uninstall them.
uninstall - Uninstalls a gem. If there are any installed gems that depend on this gem, you will be prompted whether you want to uninstall this gem. If you do, any gems that depended on this gem will be broken until it is reinstalled.
unpack - Unpacks an installed gem into the current directory. This can be used to "freeze" gems to your project directory. 
update - Checks if there are new versions of the specified gem in the remote repository. If there are, download and install the newest version. which - Finds the exact location of the .rb file to include. This can be useful for getting a path for requiring a gem without requiring the rubygems library.
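As a quick illustration of a few of the commands above, a typical session might look like this (the rails version number is only an example, and output is elided):

```
$ gem list --local                    # gems installed on this machine
$ gem dep twitter                     # what the twitter gem depends on
$ gem install rails --version 2.1.0   # pin a specific version
$ gem outdated                        # installed gems with newer releases
$ gem cleanup                         # remove superseded versions
```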
Pressure and Buoyancy Problems

Let's state the two working equations we have so far.

Pressure and Depth: P = P0 + ρgh
Buoyancy: F_buoyancy = Weight of fluid displaced = ρgV_displaced

The solutions to the problems below can be found at the end of this page. As always, try all the problems before looking at the solutions. It's much easier to understand a solution put before you than to come up with the solution yourself. To develop the skills necessary to solve the problems yourself, you must spend the time doing it.

- What is the absolute pressure at the bottom of the Virgin Islands Basin (located between St. Thomas and St. Croix), at a depth of 4000 meters? Express your answer in atmospheres of pressure. What is the gauge pressure? If there are fish at this depth, how would they deal with this pressure? The density of sea water is 1.03 × 10³ kg/m³.
- A water hose is connected to a spigot located at the bottom of a cistern. The cistern is half full with 5 ft of water. The nozzle at the other end of the hose is turned off but is left down by the papaya tree, which is 20 ft below the bottom of the cistern. If the spigot is left open, what is the pressure at the nozzle? Why would it be a good idea to turn off the spigot when you are finished watering the tree?
- A large part of Holland is below sea level. Earthen dikes keep the sea at bay. There's a Dutch legend of a boy who uses his finger to plug a hole in a dike and saves the countryside. Assume the hole is located 3.0 meters below sea level and is the same size as the child's finger, a diameter of about 1 cm. How much force would the child have to exert against the sea pressure in order to keep the sea at bay? Do you think a child could do this?
- A 10 lb box falls overboard and is floating. The box has the shape of a cube, 1 ft on a side. What is the buoyancy force on the box?
- The float in a toilet tank is a sphere of diameter 10 cm.
1) What is the buoyancy force on the float when it is completely submerged? You might need a reminder that the volume of a sphere is V = (4/3)πr³. 2) Here's a slightly tougher one. If the float must have an upward buoyancy force of 3.0 N to shut off the ballcock valve, what percentage of the float will be submerged?
- Here's an interesting puzzle to see if you really understand buoyancy and displacement. You are floating in a small dinghy in your pool. There's a brick in the boat. You toss the brick out of the boat and into the pool. The brick sinks to the bottom of the pool. Does the water level at the side of the pool rise, stay the same, or decrease?

Don't look at the answers until you've tried the problems on your own!!

- Using the SI system of units, P = P0 + ρgh = 1.01 × 10⁵ + 1.03 × 10³ × 9.8 × 4000 = 4.0 × 10⁷ Pa. In terms of atmospheres, that would be 4.0 × 10⁷ Pa / 1.01 × 10⁵ Pa/atm ≈ 400 atmospheres! The gauge pressure is P − P0, which is just the ρgh term. That would be about 399 atm. If fish lived at that depth, they would not notice the pressure any more than we notice the 15 psi pressure pushing on us. Organisms generally adapt to the pressure around them. The fish take water into their bodies at the ambient pressure, so there is no net or gauge pressure difference. However, changing depth can present problems. Many sea mammals, such as sea lions, have developed systems that allow them to dive to extraordinary depths.
- The nozzle end is 5 + 20 = 25 ft below the water level. We can convert this to meters and apply the static pressure equation in SI units. But we could also use the fact that 34 ft of fresh water produces a pressure of 1 atmosphere = 14.7 psi. So 25 ft corresponds to 14.7 × 25/34 = 11 psi. Note that this is the gauge pressure, which is appropriate since atmospheric pressure acts both on the surface of the water and on the hose. This means there will be a net force of 11 lb pushing outward on every square inch of the hose.
It's probably best to turn the spigot off.
- The gauge pressure would be 1.03 × 10³ × 9.8 × 3.0 = 3.1 × 10⁴ Pa. The force exerted against his "round" finger would be F = PA = 3.1 × 10⁴ × π(0.01/2)² = 2.4 N. This is about 0.53 lb ... no problem!
- The info on the size of the box is not relevant. If the box is floating, then the buoyancy force must be equal to the weight of the box ... = 10 lb! Here's another problem to try: a cubic foot of water weighs about 64 lb. Can you see why the box would float with 10/64ths of its volume submerged? This would mean about 1.9 inches below the water.
- The volume of the float is V = (4/3)π(0.05)³ = 5.2 × 10⁻⁴ m³. Assuming there is fresh water in your toilet tank, F_buoyancy = 10³ × 9.8 × 5.2 × 10⁻⁴ = 5.1 N. If you need 3.0 N of upward force to shut off the valve and there's 5.1 N of buoyancy force when completely submerged, then you would need 3.0 / 5.1 × 100% = 59% of the float to be submerged.
- Did you figure this one out? The water level in the pool goes down! Some of our physics majors get fooled by this one. While in the boat, the entire weight of the brick is being supported ... ultimately by water displaced by the dinghy. Since the brick sinks when out of the boat, it must be denser than water, so while it sits in the boat, the volume of water displaced to support its weight is greater than the volume of the brick. But when the brick is tossed into the pool, it displaces only its own volume. OK, try again. What if the object tossed overboard floated?
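The arithmetic in these solutions is easy to check. Here is a short Python sketch (constants as given in the problems) that reproduces the numbers for problems 1 and 5:

```python
import math

# Constants used in the solutions above
RHO_SEA = 1.03e3    # density of sea water, kg/m^3
RHO_FRESH = 1.0e3   # density of fresh water, kg/m^3
G = 9.8             # gravitational acceleration, m/s^2
P_ATM = 1.01e5      # atmospheric pressure, Pa

# Problem 1: absolute pressure 4000 m down, in atmospheres
p_abs = P_ATM + RHO_SEA * G * 4000
print(p_abs / P_ATM)            # ~400 atm

# Problem 5: buoyancy on the fully submerged 10-cm-diameter float
v_float = (4 / 3) * math.pi * 0.05**3
f_buoy = RHO_FRESH * G * v_float
print(f_buoy)                   # ~5.1 N
print(3.0 / f_buoy * 100)       # ~59% submerged to supply 3.0 N
```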
Science Fair Project Encyclopedia The trade winds are a pattern of wind found in bands around the Earth's equatorial region. The trade winds are the prevailing winds in the tropics, blowing from the high-pressure area in the horse latitudes towards the low-pressure area around the equator. The trade winds blow predominantly from the northeast in the northern hemisphere and from the southeast in the southern hemisphere. Their name comes from the fact that these winds enabled trading ships to sail in two directions between Europe and the Americas: the ships could sail a southern route with the trade winds westward from Europe to the Americas, then head north to the middle latitudes and sail with the westerlies eastward from the Americas back to Europe. In the zone between about 30° N. and 30° S., the surface air flows toward the equator and the flow aloft is poleward. A low-pressure area of calm, light variable winds near the equator is known to mariners as the doldrums. Around 30° N. and S., the poleward flowing air begins to descend toward the surface in subtropical high-pressure belts. The sinking air is relatively dry because its moisture has already been released near the Equator above the tropical rain forests. Near the center of this high-pressure zone of descending air, called the "Horse Latitudes," the winds at the surface are weak and variable. The name for this area is believed to have been given by colonial sailors, who, becalmed sometimes at these latitudes while crossing the oceans with horses as cargo, were forced to throw a few horses overboard to conserve water. The surface air that flows from these subtropical high-pressure belts toward the Equator is deflected toward the west in both hemispheres by the Coriolis effect. Because winds are named for the direction from which the wind is blowing, these winds are called the northeast trade winds in the Northern Hemisphere and the southeast trade winds in the Southern Hemisphere. 
The trade winds meet at the doldrums. Surface winds known as "westerlies" flow from the Horse Latitudes toward the poles. The "westerlies" meet "easterlies" from the polar highs at about 50-60° N. and S. Near the ground, wind direction is affected by friction and by changes in topography. Winds may be seasonal, sporadic, or daily. They range from gentle breezes to violent gusts at speeds greater than 300 km/h (~200 mph). The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Little was known about this hydrogen-breathing organism before its genome sequence was determined. By utilizing computational analyses and comparison with the genomes of other organisms, the researchers have discovered several remarkable features. For example, the genome encodes a full suite of genes for making spores, a previously unknown talent of the microbe. Organisms that make spores have attracted great interest recently because this is a process found in the bacterium that causes anthrax. Sporulation allows anthrax to be used as a bioweapon because the spores are resistant to heat, radiation, and other treatments. By comparing this genome to those of other spore-making species, including the anthrax pathogen, Eisen and colleagues identified what may be the minimal biochemical machinery necessary for any microbe to sporulate. Thus studies of this poison-eating microbe may help us better understand the biology of the bacterium that causes anthrax. Building off this work, TIGR scientists are leveraging the information from the genome of this organism to study the ecology of microbes living in diverse hot springs, such as those in Yellowstone National Park. They want to know what types of microbes are found in different hot springs--and why. To find out, the researchers are dipping into the hot springs of Yellowstone, Russia, and other far-flung locales, to isolate and decipher the genomes of microbes found there. "What we want to have is a field guide for these microbes, like those available for birds and mammals," Eisen says. "Right now, we can't even answer simple questions." Source: The Institute for Genomic Research
Classification & Distribution
- incomplete development (egg, nymph, adult)
- closely related to Thysanoptera and Psocoptera

Distribution: Abundant worldwide. Found in most terrestrial and freshwater habitats.
- Number of Families: 40 (North America), 73 (Worldwide)
- Number of Species: 3,587 (North America), >50,000 (Worldwide)

Life History & Ecology
Members of the suborder Heteroptera are known as "true bugs". They have very distinctive front wings, called hemelytra, in which the basal half is leathery and the apical half is membranous. At rest, these wings cross over one another to lie flat along the insect's back. These insects also have elongate, piercing-sucking mouthparts which arise from the ventral (hypognathous) or anterior (prognathous) part of the head capsule. The mandibles and maxillae are long and thread-like, interlocking with one another to form a flexible feeding tube (proboscis) that is no more than 0.1 mm in diameter yet contains both a food channel and a salivary channel. These stylets are enclosed within a protective sheath (the labium) that shortens or retracts during feeding. The Heteroptera include a diverse assemblage of insects that have become adapted to a broad range of habitats -- terrestrial, aquatic and semi-aquatic. Terrestrial species are often associated with plants. They feed in vascular tissues or on the nutrients stored within seeds. Other species live as scavengers in the soil or underground in caves or ant nests. Still others are predators on a variety of small arthropods. A few species even feed on the blood of vertebrates. Bed bugs, and other members of the family Cimicidae, live exclusively as ectoparasites on birds and mammals (including humans). Aquatic Heteroptera can be found on the surface of both fresh and salt water, near shorelines, or beneath the water surface in nearly all freshwater habitats. With only a few exceptions, these insects are predators of other aquatic organisms.
- Antennae slender with 4-5 segments
- Proboscis 3-4 segmented, arising from front of head and curving below body when not in use
- Pronotum usually large, trapezoidal or rounded
- Triangular scutellum present behind pronotum
- Front wings with basal half leathery and apical half membranous (hemelytra). Wings lie flat on the back at rest, forming an "X".
- Tarsi 2- or 3-segmented
- Structurally similar to adults
- Always lacking wings

Plant feeding bugs are important pests of many crop plants. They may cause localized injury to plant tissues, they may weaken plants by removing sap, and they may also transmit plant pathogens. Predatory species of Heteroptera are generally regarded as beneficial insects, but those that feed on blood may transmit human diseases. Chagas disease, for example, is transmitted to humans by conenose bugs (genus Triatoma, family Reduviidae). Although bed bugs (family Cimicidae) can inflict annoying bites, there is little evidence that they regularly transmit any human or animal pathogen. The three largest families of Heteroptera are:
- Miridae (Plant Bugs) -- Most species feed on plants, but some are predaceous. This family includes numerous pests such as the tarnished plant bug (Lygus lineolaris).
- Lygaeidae (Seed Bugs) -- Most species are seed feeders, a few are predatory. This family includes the chinch bug, Blissus leucopterus, a pest of small grains, and the bigeyed bug, Geocoris bullatis, a beneficial predator.
- Pentatomidae (Stink Bugs) -- Shield-shaped body with large, triangular scutellum. Most species are herbivores, some are predators. All have scent glands which can produce an unpleasant odor.
Other families of terrestrial herbivores include:
- Tingidae (lace bugs)
- Coreidae (squash bugs and leaffooted bugs)
- Alydidae (broadheaded bugs)
- Rhopalidae (scentless plant bugs)
- Berytidae (stilt bugs)

Other families of terrestrial predators include:
- Reduviidae (assassin bugs)
- Phymatidae (ambush bugs)
- Nabidae (damsel bugs)
- Anthocoridae (minute pirate bugs)

The major families of aquatic predators include:
- Two families of Heteroptera are ectoparasites. The Cimicidae (bed bugs) live on birds and mammals (including humans). The Polyctenidae (bat bugs) live on bats.
- Water striders in the genus Halobates (family Gerridae) are the only insects that are truly marine. They live on the surface of the Pacific Ocean.
- Unlike other insects, male bedbugs do not place their sperm directly in the female's reproductive tract. Instead, they puncture her abdomen and inject the sperm into her body cavity. The sperm swim to the ovaries where they fertilize the eggs. This unusual type of reproductive behavior is appropriately known as "traumatic insemination".
- Some members of the family Largidae resemble ants. They live as social parasites in ant nests, mimicking the ants' behavior to get food.
Moons are shaped by the same surface processes that shape the planets. Several of the moons of the outer planets are large enough to be thought of as planets themselves. In fact, Jupiter's largest moon, Ganymede, is larger than the planet Mercury. (Image credits: Moon: NASA/USGS; Io: NASA/JPL/University of Arizona.) Click each thumbnail image to see a larger version. Examine each image to look for evidence of the surface processes at work on these moons. 7. What processes do you see evidence of on these moons? Identify the moon and the process you observe. 8. What is the most common surface process you observed in the solar system? Why do you think this process is so universal?
A group of rows and columns. The x-axis is the horizontal row, and the y-axis is the vertical column. An x-y matrix is the reference framework for two-dimensional structures, such as mathematical tables, display screens, digitizer tablets, dot matrix printers and 2D graphics images.
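As an illustrative sketch (the names here are invented, not from the source): in code, an x-y matrix is commonly stored as a nested list where the y-axis selects the row and the x-axis selects the column, so a cell is addressed as grid[y][x].

```python
# A minimal x-y matrix sketch: rows form the y-axis, columns the x-axis.
rows, cols = 3, 4                                      # y runs 0..2, x runs 0..3
grid = [[0 for _x in range(cols)] for _y in range(rows)]

def set_cell(grid, x, y, value):
    """Address a cell by (x, y): x selects the column, y selects the row."""
    grid[y][x] = value

set_cell(grid, 2, 1, 7)   # column 2, row 1
print(grid[1][2])         # 7
```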
Sizing up Earthquake Damage: Differing Points of View When a catastrophic event strikes an urban area, many different professionals hit the ground running. Emergency responders respond, reporters report, and scientists and engineers collect and analyze data. Journalists and scientists may share interest in these events, but they have very different missions. To a journalist, earthquake damage is news. To a scientist or engineer, earthquake damage represents a valuable source of data that can help us understand how strongly the ground shook as well as how particular structures responded to the shaking. Media reports and private accounts can provide important information about an earthquake’s impact. But a recent study co-authored by Prabhas Pande, director of the Earthquake Geology Division of the Geological Survey of India, and Susan Hough, published in the April issue of the Bulletin of the Seismological Society of America, illustrates how scientists can potentially be led astray by failing to recognize that written accounts tend to emphasize especially dramatic events rather than representative, overall effects. For a journalist, the news is what happened. When mid-rise buildings collapsed in Mexico City in 1985, that was big news. When the Nimitz Freeway collapsed in Oakland in 1989, that was big news. In any earthquake, the most dramatic damage is the biggest story. If nothing, or not much, happens, that isn’t news. Modest earthquake effects will merit a much smaller story, if one at all. Those who experience an earthquake use the same sort of selection process in relaying what happened, either orally or in correspondence. When people experience something dramatic, they are apt to write letters — or, these days, e-mail messages. As a rule, people don’t tend to write to say, “We didn’t feel the earthquake that happened last Tuesday.” For the scientist or engineer, however, the damage that didn’t happen can be every bit as important as the damage that did happen. 
An engineer knows, for example, that isolated damage might not reflect how hard the ground was shaking because in any area, buildings that are relatively poorly built are especially susceptible to damage. The issue of bias in media reports looms especially large for those earthquakes historically important for their impact on society or their physical devastation or both, such as the one that took place in Charleston, S.C., in 1886. As no modern instruments were available to estimate earthquake magnitudes before the late 19th century, scientists measure this by quantifying the distribution of damage and other effects, such as the area over which the shaking was felt, and then comparing the results to the effects of modern earthquakes for which a magnitude can be determined. However, the older the earthquake, the more sparse the written record. After the Charleston earthquake struck, Clarence Dutton — an Army captain working for the U.S. Geological Survey — set out to systematically compile every available account. Thanks largely to his efforts, we have accounts of this earthquake from almost a thousand locations. No similar compilation was made after the so-called New Madrid sequence of large earthquakes struck the mid-continent during the winter of 1811 to 1812. For these earthquakes, seismologists have turned to extensive archival searches to unearth written records squirreled away in old newspapers, diaries and letters. This sleuthing has turned up accounts from only about a hundred locations for each of the three largest New Madrid earthquakes. Written accounts of an earthquake’s effects are obviously quite different from a modern seismogram, but both types of observations represent data. To analyze written accounts of earthquake effects, the seismologist first assigns an intensity value based on the severity of documented effects. 
Intensity values differ from magnitude values in that the latter reflect the size of the earthquake itself, whereas intensity reflects the severity of shaking at a particular location. Confusions sometimes arise because intensity and magnitude values span a similar range. In fact, the magnitude scale is open-ended, and tiny earthquakes can have negative magnitudes. Intensity values, usually denoted by Roman numerals, are defined to span a range of I to X. Intensity I corresponds to shaking that is not felt while intensity X corresponds to shaking that is strong enough to cause significant damage to even well-built modern structures. As intensity values are assigned to old, often brief, archival accounts, one question often rattles in the back of seismologists’ minds: Do available accounts provide a good overview of an earthquake’s effects? The nagging voice suspects the answer is no. But how does one evaluate information that isn’t there? If the only available account of an earthquake just describes damage done to adobe houses in a certain town, it is hard to know if they were the especially poorly built structures in that area or not. Sometimes more recent newspaper articles are helpful in this regard, for example, noting that adobe buildings collapsed while wood-frame houses were only lightly damaged. But often older newspapers are less helpful and the seismologist is left guessing. The 2001 Bhuj, India, earthquake provided a unique opportunity to quantify the media bias. This magnitude-7.6 earthquake struck western India on Jan. 26, 2001, claiming nearly 20,000 lives and causing extensive damage throughout the state of Gujarat. Immediately after the earthquake, seismologists realized that damage surveys would be invaluable because the earthquake was only recorded on a handful of instruments within India, none of them very close to Bhuj. 
An early study analyzed media accounts of the earthquake published on the Web and in local newspapers to assign intensity values for more than 200 locations in India and Pakistan. In the meantime, the Geological Survey of India sent out teams to survey the damage directly. These teams were charged specifically with assessing the overall severity of earthquake effects in towns throughout India. When their map was complete, it could be compared in detail with the media's map. The comparison revealed that the two approaches yielded similar results for low intensities. When media accounts report that an earthquake was lightly felt in a certain town, it appears that such accounts tend to be representative. But in regions where damage occurred, the suspicious nagging voice proves to be correct: Intensity values based on media accounts were systematically biased toward higher values than those based on direct surveys. The availability of two independent intensity surveys for the Bhuj earthquake allowed media bias to be analyzed in some detail — but only for this particular earthquake. It is not clear that reporting in modern newspapers and Web sites is comparable to that in newspapers and private letters from 100 or 200 years ago. However, the results of the Bhuj comparison provide at least a preliminary quantification of how older archival accounts might be biased. On an encouraging note, the results suggest that archival accounts of historical earthquakes can provide a good indication of the area over which shaking was felt. Comparing the extent that historical and modern earthquakes were felt therefore may yield more reliable results than comparing the extent of damage. This study also serves as a reminder that the news media past and present can help seismologists do their job, but it remains part of the seismologist's job to understand the nature — and the limitations — of the information that journalists provide.
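The intensity-assignment step described above can be sketched as a toy function. To be clear, the keyword rules below are invented for illustration only; real assignments follow the full Modified Mercalli intensity descriptions, not simple keyword matching:

```python
# Illustrative only: map phrases in a written account to a rough intensity.
# These keyword rules are invented; they are NOT the actual Modified
# Mercalli criteria seismologists use.
RULES = [
    ("collapsed", 9),
    ("heavy damage", 8),
    ("chimneys fell", 7),
    ("windows broke", 6),
    ("felt by all", 5),
    ("felt", 3),
]

def assign_intensity(account: str) -> int:
    """Return the intensity of the first (most severe) matching phrase."""
    text = account.lower()
    for phrase, intensity in RULES:
        if phrase in text:
            return intensity
    return 1  # intensity I: not felt

print(assign_intensity("Several chimneys fell in the old quarter"))  # 7
print(assign_intensity("Lightly felt by a few residents"))           # 3
```

A scheme like this makes the article's bias problem concrete: if only the most dramatic accounts survive, the highest-matching phrases dominate and the assigned intensities skew upward.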
Greater DOF with secondary electron imaging is largely a matter of working distance--defined as the distance (in mm) from the objective lens to the top of the sample being imaged. Of course, the lenses in this type of instrument are electromagnetic (not glass) lenses, and can effect different crossover (focus) points based on the current supplied to the lens coils. The longer the WD, the greater the DOF (but this entails other tradeoffs, as with every operating parameter). Of course, this is a familiar principle to any photographer; the closer you move to an object, the shallower the DOF is. The WD I used for this shot was 28mm, which is considered very long. I also use a tilt of around 30 degrees. This adds an additional sense of depth. If you were trying to convey the three dimensionality of a sphere, or ping pong ball for example, the worst way to photograph it would be from directly above. Better to come in obliquely from the side. The protozoa (protists is a better word) that live in the guts of lower termites are often very large, and this presents a challenge for DOF. The one in question is about 40 microns long, but others can be up to 300 microns long. We believe that they have evolved large size in order to engulf the relatively large wood fragments that make their way to the hindgut after being chewed by the termite's jaws. Focus stacking is something I've never tried, but for some large cells, I've taken multiple images with different portions in focus. If someone can point me to a tutorial for focus stacking in Photoshop (I use CS2), I would appreciate it! Thanks very much for this explanation. Never worked with or read about this kind of equipment before, so I still didn't quite get it. Because of that I asked Mr. Google, who provided the following reference. Of course there could be a lot of different SEMs, but the key details that caught my eye were: "The scanning electron microscope has many advantages over traditional microscopes.
The SEM has a large depth of field, which allows more of a specimen to be in focus at one time." http://www.purdue.edu/rem/rs/sem.htm#2 My next question is: does a SEM require a light source? The shading on the image you displayed is so delicate it made me wonder how one could position one or more lights to produce the result on such a small object. "We believe that they have evolved large size in order to engulf the relatively large wood fragments that make their way to the hindgut after being chewed by the termite's jaws." Any time a case for selection can be exemplified, that is pretty cool! I guess it's a completely different discussion but I have to ask: Are only the larger protists found in adult termites? Do they grow in size as the termite does? This is all way outside of my experience, so I hope you don't mind some questions. WRT focus stacking, there are a few here who have a lot of experience with focus stacking software. I think that kind of technology might be very useful for this kind of work. Remember us when you get a Nobel for your future work! <big toothy grin>
Obviously, birds sing. But mice? [Mice song sound.] That’s a mouse song. Researchers have known about these high-pitched squeaky songs for years. But they only recently discovered that mice can learn the songs of other mice. Such vocal learning is a rarity among animals. We know of only three kinds of birds—parrots, hummingbirds and songbirds—and some mammals—like humans, whales, dolphins, sea lions, bats and elephants—that have demonstrated the ability to learn the vocal patterns of other animals. That is, until now. Scientists at Duke University observed that when two male mice of different lineages were kept together, the animals gradually learned to match the pitch of their songs to one another. And when the researchers examined the mice, they found that the rodents can also form the correct brain-to-vocal-cord connections to control the sounds they make. The research is published in the journal PLoS ONE. [Gustavo Arriaga, Eric P. Zhou and Erich D. Jarvis, Of Mice, Birds, and Men: The Mouse Ultrasonic Song System Has Some Features Similar to Humans and Song-Learning Birds] The mouse songs are admittedly primitive. But the findings left scientists on a high note. —Gretchen Cuda Kroen [The above text is a transcript of this podcast.]
Fossil range: Early Paleocene - Recent (pictured: Ammospermophilus leucurus)

Forty percent of mammal species are rodents, and they are found in vast numbers on all continents other than Antarctica. Common rodents include mice, rats, squirrels, chipmunks, gophers, porcupines, beavers, hamsters, gerbils, guinea pigs, chinchillas and degus. Rodents have sharp incisors that they use to gnaw wood, break into food, and bite predators. Most eat seeds or plants, though some have more varied diets. Some species have historically been pests, eating human seed stores and spreading disease.

Size and range of order
In terms of number of species — although not necessarily in terms of number of organisms (population) or biomass — rodents make up the largest order of mammals. There are about 2,277 species of rodents (Wilson and Reeder, 2005), with over 40 percent of mammalian species belonging to the order. Their success is probably due to their small size, short breeding cycle, and ability to gnaw and eat a wide variety of foods. (Lambert, 2000) Rodents are found in vast numbers on all continents except Antarctica, most islands, and in all habitats except oceans. They are the only placental order, other than bats (Chiroptera) and Pinnipeds, to reach Australia without human introduction. Many rodents are small; the tiny African pygmy mouse is only 6 cm in length and 7 grams in weight. On the other hand, the capybara can weigh up to 65 kg, and the largest known rodent, the extinct Josephoartigasia monesi, is estimated to have weighed about 1,000 kg, and possibly up to 1,534 kg or 2,586 kg. Rodents have two incisors in the upper as well as in the lower jaw which grow continuously and must be kept worn down by gnawing; this is the origin of the name, from the Latin rodere, to gnaw, and dens, dentis, tooth.
These teeth are used for cutting wood, biting through the skin of fruit, or for defense. The teeth have enamel on the outside and exposed dentine on the inside, so they self-sharpen during gnawing. Rodents lack canines, and have a space between their incisors and premolars. Nearly all rodents feed on plants, seeds in particular, but there are a few exceptions which eat insects or fish. Some squirrels are known to eat passerine birds like cardinals and blue jays. Rodents are important in many ecosystems because they reproduce rapidly, and can function as food sources for predators, mechanisms for seed dispersal, and as disease vectors. Humans use rodents as a source of fur, as pets, as model organisms in animal testing, for food, and even in detecting landmines. Members of non-rodent orders such as Chiroptera (bats), Scandentia (treeshrews), Insectivora (moles, shrews and hedgehogs), Lagomorpha (hares, rabbits and pikas) and mustelid carnivores such as weasels and mink are sometimes confused with rodents. The fossil record of rodent-like mammals begins shortly after the extinction of the non-avian dinosaurs 65 million years ago, as early as the Paleocene. Some molecular clock data, however, suggest that modern rodents (members of the order Rodentia) had already appeared in the late Cretaceous, although other molecular divergence estimations are in agreement with the fossil record. By the end of the Eocene epoch, relatives of beavers, dormice, squirrels, and other groups appeared in the fossil record. They originated in Laurasia, the formerly joined continents of North America, Europe, and Asia. Some species colonized Africa, giving rise to the earliest hystricognaths. There is, however, a minority belief in the scientific community that evidence from mitochondrial DNA indicates that the Hystricognathi may belong to a different evolutionary offshoot and therefore a different order.
From there hystricognaths rafted to South America, an isolated continent during the Oligocene and Miocene epochs. By the Miocene, Africa had collided with Asia, allowing rodents such as the porcupine to spread into Eurasia. During the Pliocene, rodent fossils appeared in Australia. Even though marsupials are the prominent mammals in Australia, rodents make up almost 25% of the mammals on the continent. Meanwhile, the Americas became joined and some rodents expanded into new territory; mice headed south and porcupines headed north.
Some prehistoric rodents:
- Castoroides, a giant beaver
- Ceratogaulus, a horned burrowing rodent
- Spelaeomys, a rat that grew to a large size on the island of Flores
- Giant hutias, a group of rodents once found in the West Indies
- Ischyromys, a primitive squirrel-like rodent
- Leithia, a giant dormouse
- Neochoerus pinckneyi, a giant North American capybara that weighed 50 kg
- Josephoartigasia monesi, the largest known rodent
- Phoberomys pattersoni, the second largest known rodent
- Telicomys, a giant South American rodent
The rodents are part of the clades Glires (along with lagomorphs), Euarchontoglires (along with lagomorphs, primates, treeshrews, and colugos), and Boreoeutheria (along with most other placental mammals). The order Rodentia may be divided into suborders, infraorders, superfamilies and families.
ORDER RODENTIA (from Latin rodere, to gnaw)
- Suborder Anomaluromorpha
- Suborder Castorimorpha
- Suborder Hystricomorpha
  - Family incertae sedis Diatomyidae: Laotian rock rat
  - Infraorder Ctenodactylomorphi
    - Family Ctenodactylidae: gundis
  - Infraorder Hystricognathi
    - Family Bathyergidae: African mole rats
    - Family Hystricidae: Old World porcupines
    - Family Petromuridae: dassie rat
    - Family Thryonomyidae: cane rats
    - Parvorder Caviomorpha
      - Family †Heptaxodontidae: giant hutias
      - Family Abrocomidae: chinchilla rats
      - Family Capromyidae: hutias
      - Family Caviidae: cavies, including guinea pigs and the capybara
      - Family Chinchillidae: chinchillas and viscachas
      - Family Ctenomyidae: tuco-tucos
      - Family Dasyproctidae: agoutis
      - Family Dinomyidae: pacaranas
      - Family Echimyidae: spiny rats
      - Family Erethizontidae: New World porcupines
      - Family Myocastoridae: nutria
      - Family Octodontidae: octodonts
- Suborder Myomorpha
  - Superfamily Dipodoidea
    - Family Dipodidae: jerboas and jumping mice
  - Superfamily Muroidea
    - Family Calomyscidae: mouse-like hamsters
    - Family Cricetidae: hamsters, New World rats and mice, voles
    - Family Muridae: true mice and rats, gerbils, spiny mice, crested rat
    - Family Nesomyidae: climbing mice, rock mice, white-tailed rat, Malagasy rats and mice
    - Family Platacanthomyidae: spiny dormice
    - Family Spalacidae: mole rats, bamboo rats, and zokors
- Suborder Sciuromorpha
The above taxonomy uses the shape of the lower jaw (sciurognath or hystricognath) as the primary character. This is the most commonly used approach for dividing the order into suborders. Many older references emphasize the zygomasseteric system (suborders Protrogomorpha, Sciuromorpha, Hystricomorpha, and Myomorpha). Several molecular phylogenetic studies have used gene sequences to determine the relationships among rodents, but these studies are yet to produce a single consistent and well-supported taxonomy.
Some clades have been recovered consistently across these studies, such as Ctenohystrica.
Monophyly or polyphyly?
In 1991, a paper published in Nature proposed that caviomorphs should be reclassified as a separate order (similar to Lagomorpha), based on an analysis of the amino acid sequences of guinea pigs. This hypothesis was refined in a 1992 paper, which asserted the possibility that caviomorphs may have diverged from myomorphs prior to later divergences within Myomorpha; this would mean caviomorphs, or possibly hystricomorphs, would be moved out of the rodent classification into a separate order. A minority scientific opinion briefly emerged arguing that guinea pigs, degus, and other caviomorphs are not rodents, while several papers were put forward in support of rodent monophyly. Subsequent studies published since 2002, using wider taxon and gene samples, have restored consensus among mammalian biologists that the order Rodentia is monophyletic.
- Adkins, R. M., E. L. Gelke, D. Rowe, and R. L. Honeycutt. 2001. Molecular phylogeny and divergence time estimates for major rodent groups: Evidence from multiple genes. Molecular Biology and Evolution, 18:777-791.
- Carleton, M. D. and G. G. Musser. 2005. Order Rodentia. Pp. 745-752 in Mammal Species of the World: A Taxonomic and Geographic Reference. Johns Hopkins University Press, Baltimore.
- David Lambert and the Diagram Group. The Field Guide to Prehistoric Life. New York: Facts on File Publications, 1985. ISBN 0-8160-1125-7
- Jahn, G. C. 1998. "When Birds Sing at Midnight." War Against Rats Newsletter 6:10-11.
- Leung, L. K. P., Peter G. Cox, Gary C. Jahn and Robert Nugent. 2002. Evaluating rodent management with Cambodian rice farmers. Cambodian Journal of Agriculture, Vol. 5, pp. 21-26.
- McKenna, Malcolm C., and Bell, Susan K. 1997. Classification of Mammals Above the Species Level. Columbia University Press, New York, 631 pp. ISBN 0-231-11013-8
- Nowak, R. M. 1999. Walker's Mammals of the World, Vol. 2.
Johns Hopkins University Press, London.
- Steppan, S. J., R. A. Adkins, and J. Anderson. 2004. Phylogeny and divergence date estimates of rapid radiations in muroid rodents based on multiple nuclear genes. Systematic Biology, 53:533-553.
- University of California Museum of Paleontology (UCMP). 2007. "Rodentia".
- Wilson, D. E. and D. M. Reeder, eds. 2005. Mammal Species of the World: A Taxonomic and Geographic Reference. Johns Hopkins University Press, Baltimore.
- rodent - Encyclopedia.com. Retrieved on 2007-11-03.
- Rodents: Gnawing Animals. Retrieved on 2007-11-03.
- Myers, Phil (2000). Rodentia. Animal Diversity Web. University of Michigan Museum of Zoology. Retrieved on 2006-05-25.
- Millien, Virginie (May 2008). "The largest among the smallest: the body mass of the giant rodent Josephoartigasia monesi". Proceedings of the Royal Society B. doi:10.1098/rspb.2008.0087. Retrieved on 2008-05-27.
- Rinderknecht, Andrés; Blanco, R. Ernesto (January 2008). "The largest fossil rodent" (PDF). Proceedings of the Royal Society B: 923-928. doi:10.1098/rspb.2007.1645. Retrieved on 2008-05-27.
- Wines, Michael. "Gambian rodents risk death for bananas", The Age, The Age Company Ltd., 2004-05-19. Retrieved on 2006-05-25. "A rat with a nose for landmines is doing its bit for humanity." Cited as coming from the New York Times in the article.
- Douzery, E. J. P., F. Delsuc, M. J. Stanhope, and D. Huchon (2003). "Local molecular clocks in three nuclear genes: divergence times for rodents and other mammals and incompatibility among fossil calibrations". Journal of Molecular Evolution 57: S201. doi:10.1007/s00239-003-0028-x.
- Horner, D. S., K. Lefkimmiatis, A. Reyes, C. Gissi, C. Saccone, and G. Pesole (2007). "Phylogenetic analyses of complete mitochondrial genome sequences suggest a basal divergence of the enigmatic rodent Anomalurus". BMC Evolutionary Biology 7: 16. doi:10.1186/1471-2148-7-16.
- Graur, D., Hide, W. and Li, W. (1991) 'Is the guinea-pig a rodent?'
Nature, 351: 649-652.
- Li, W., Hide, W., Zharkikh, A., Ma, D. and Graur, D. (1992) 'The molecular taxonomy and evolution of the guinea pig.' Journal of Heredity, 83 (3): 174-81.
- D'Erchia, A., Gissi, C., Pesole, G., Saccone, C. and Arnason, U. (1996) 'The guinea-pig is not a rodent.' Nature, 381 (6583): 597-600.
- Reyes, A., Pesole, G. and Saccone, C. (2000) 'Long-branch attraction phenomenon and the impact of among-site rate variation on rodent phylogeny.' Gene, 259 (1-2): 177-87.
- Cao, Y., Adachi, J., Yano, T. and Hasegawa, M. (1994) 'Phylogenetic place of guinea pigs: No support of the rodent-polyphyly hypothesis from maximum-likelihood analyses of multiple protein sequences.' Molecular Biology and Evolution, 11: 593-604.
- Kuma, K. and Miyata, T. (1994) 'Mammalian phylogeny inferred from multiple protein data.' Japanese Journal of Genetics, 69 (5): 555-66.
- Robinson-Rechavi, M., Ponger, L. and Mouchiroud, D. (2000) 'Nuclear gene LCAT supports rodent monophyly.' Molecular Biology and Evolution, 17: 1410-1412.
- Lin, Y-H, et al. "Four new mitochondrial genomes and the increased stability of evolutionary trees of mammals from improved taxon sampling." Molecular Biology and Evolution 19 (2002): 2060-2070.
- Carleton, Michael D., and Musser, Guy G. "Order Rodentia". Mammal Species of the World, 3rd edition, 2005, vol. 2, p. 745.
(Concise overview of the literature.)
How a nuclear reactor makes electricity
A nuclear reactor produces and controls the release of energy from splitting the atoms of uranium. Uranium-fuelled nuclear power is a clean and efficient way of boiling water to make steam which drives turbine generators. Except for the reactor itself, a nuclear power station works like most coal or gas-fired power stations.
The Reactor Core
Several hundred fuel assemblies containing thousands of small pellets of ceramic uranium oxide fuel make up the core of a reactor. For a reactor with an output of 1000 megawatts (MWe), the core would contain about 75 tonnes of enriched uranium.
In the reactor core the U-235 isotope fissions, or splits, producing a lot of heat in a continuous process called a chain reaction. The process depends on the presence of a moderator such as water or graphite, and is fully controlled. The moderator slows down the neutrons produced by fission of the uranium nuclei so that they go on to produce more fissions. Some of the U-238 in the reactor core is turned into plutonium and about half of this is also fissioned similarly, providing about one third of the reactor's energy output.
The fission products remain in the ceramic fuel and undergo radioactive decay, releasing a bit more heat. They are the main wastes from the process.
The reactor core sits inside a steel pressure vessel, so that water around it remains liquid even at the operating temperature of over 320°C. Steam is formed either above the reactor core or in separate pressure vessels, and this drives the turbine to produce electricity. The steam is then condensed and the water recycled.
PWRs and BWRs
The main design is the pressurised water reactor (PWR), which has water in its primary cooling/heat transfer circuit and generates steam in a secondary circuit. The less popular boiling water reactor (BWR) makes steam in the primary circuit above the reactor core, though it is still under considerable pressure.
Both types use water as both coolant and moderator, to slow the neutrons.
To maintain efficient reactor performance, about one-third to one-half of the used fuel is removed every year or two, to be replaced with fresh fuel.
The pressure vessel and any steam generators are housed in a massive containment structure with reinforced concrete about 1.2 metres thick. This is to protect neighbours if there is a major problem inside the reactor, and to protect the reactor from external hazards.
Because some heat is generated from radioactive decay even after the reactor is shut down, cooling systems are provided to remove this heat as well as the main operational heat output.
Natural Prehistoric Reactors
The world's first nuclear reactors operated naturally in a uranium deposit about two billion years ago in what is now Gabon. The energy was not harnessed, since these were in rich uranium orebodies in the Earth's crust and moderated by percolating water.
Nuclear energy's contribution to global electricity supply
Nuclear energy supplies some 14% of the world's electricity. Today 31 countries use nuclear energy to generate up to three quarters of their electricity, and a substantial number of these depend on it for one quarter to one half of their supply. Almost 15,000 reactor-years of operational experience have been accumulated since the 1950s by the world's 440 nuclear power reactors (and nuclear reactors powering naval vessels have clocked up a similar amount).
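As a rough cross-check of the scale involved, the article's 1000 MWe figure can be turned into a fuel-consumption estimate. This is a sketch under assumptions not stated in the article: a typical thermal efficiency of about 33% and roughly 200 MeV released per fission, both standard textbook values.

```python
# Back-of-the-envelope: how much U-235 a 1000 MWe reactor fissions per day.
# Assumed (not from the article): ~33% thermal efficiency, ~200 MeV/fission.

MEV_TO_J = 1.602e-13           # joules per MeV
ENERGY_PER_FISSION_J = 200 * MEV_TO_J
ELECTRIC_POWER_W = 1000e6      # 1000 MWe
THERMAL_EFFICIENCY = 0.33      # assumed

thermal_power_w = ELECTRIC_POWER_W / THERMAL_EFFICIENCY       # ~3000 MW thermal
fissions_per_second = thermal_power_w / ENERGY_PER_FISSION_J  # ~1e20 per second

# Convert a day's worth of fissions into mass of U-235.
AVOGADRO = 6.022e23
U235_MOLAR_MASS_G = 235.0
fissions_per_day = fissions_per_second * 86400
u235_grams_per_day = fissions_per_day / AVOGADRO * U235_MOLAR_MASS_G

print(f"{fissions_per_second:.2e} fissions per second")
print(f"{u235_grams_per_day / 1000:.1f} kg of U-235 fissioned per day")
```

A few kilograms of fissioned U-235 per day is consistent with the article's point that only a fraction of the 75-tonne core needs replacing every year or two.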
Whether they’re toys that shine in the night, black lights, glow sticks or fireflies, things that produce an eerie glow are fascinating. Give a kid a glow-in-the-dark toy or paper her ceiling in dimly shining plastic stars, and she will be occupied forever. She’ll find ever brighter lights to charge them up, ever darker places to view them for maximum glow effect, and generally love exploring how it all works. You know this; you were that kid. So what’s the deal with the glow? Learn how to make this amazing looking glow-in-the-dark cocktail over at Neatorama It’s 10 p.m. Do you know where your electrons are? While there are several “flavors” of things that glow, they all have something in common: Things glow because photons are emitted when “excited” (at a higher energy state) electrons drop back to a lower, more stable state. Aside from promising them a pony or a tour of CERN, there are several ways to get your electrons excited. In chemical glow sticks, a chemical reaction excites the electrons. This process is called chemiluminescence. Glow sticks are an excellent way to experiment with reaction rates and temperature. If you want the reaction to last longer, follow a kid’s advice and put the glow stick in the freezer or in ice water so the reaction slows down; it’ll take longer to use up the chemicals in the glow stick. The trade-off is that because the production of photons is also slower, a cold glow stick is dimmer than a warm one. Fluorescence is like light recycling. Fluorescent rocks, laundry detergent additives, paint, and even some animals can re-emit light after something shines on them. Usually we’re talking about things getting hit with ultraviolet or ‘black’ light and re-emitting within the visible spectrum. 
This makes sense because as you progress along the spectrum of electromagnetic radiation, visible light is a bit lower in energy than ultraviolet light — you can’t expose something to lower energy red light and get it to fluoresce in UV, for example. Fluorescent things certainly fluoresce in daylight, but not enough to outshine the ambient light, so they’re most noticeable under a black light in an otherwise dark space. Phosphorescence is a lot like fluorescence but stretched out over time — a slow glow. So you can shine light (visible or UV) on a glow-in-the-dark star and it re-emits light, too, but over a lot more time, so the glow continues for minutes or hours before it completely dies out. If you have a glow-in-the-dark toy or T-shirt, try “charging it up” with lights of different colors or intensities and checking out the glow that results. Fireflies produce and use their own chemicals, luciferin and luciferase, to dazzle and attract potential mates — and sometimes to lure prey. A surprising number of marine critters are bioluminescent, too, like dinoflagellates (plankton) that glow when disturbed, the angler fish, and some squid (perhaps they are blending in with starlight from above). Headlines occasionally announce a new genetically engineered “glowing” kitten, rabbit, plant, sheep, etc., but they are almost always talking about fluorescence instead of bioluminescence, so the light is only seen when the animal is placed under ultraviolet light. (One useful application of this is the ability to track a protein related to a certain disease by getting the introduced gene for Green Fluorescent Protein (GFP) to link to the gene for the protein of interest). Some animals like scorpions and jellyfish (the original source of GFP) fluoresce naturally. Sugar and adhesives can exhibit triboluminescence, in which friction or fracturing produces the light. 
This one is great to try out at home; you just need Wint-O-Green Lifesavers®, transparent tape and a very dark room (a buddy or a room with a mirror is helpful for the Lifesavers portion). Dr. Sweeting (that’s her real name) has more detailed instructions and explanation, but the big idea is that a tiny, but visible, amount of light is emitted when you peel tape off the roll and when you bite into the candy, crushing sugar crystals against each other. The wintergreen oil even improves the effect by fluorescing! Are there any other kinds of luminescence? Yes! Incandescence, piezoluminescence, radioluminescence, etc. But that’s enough fun for one post. Go try out triboluminescence! Just can’t get enough? Make sure to come early for the educational portion of HMNS’ LaB 5555 this Friday for more GLOW fun, and learn all about the science of what gives things light. I’ll be there doing demos to light up your night. For tickets and more info, click here!
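The earlier claim that fluorescence "recycles" light downward in energy (UV in, visible out, never the reverse) can be checked with a quick photon-energy calculation. The wavelengths below are illustrative choices, not from the post: 365 nm is a common black-light line, 550 nm is green.

```python
# E = h*c / wavelength: shorter-wavelength UV photons carry more energy
# than visible ones, so re-emission at visible wavelengths loses energy.

H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon, in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

uv = photon_energy_ev(365)     # black-light UV, ~3.4 eV
green = photon_energy_ev(550)  # visible green, ~2.3 eV
print(f"UV: {uv:.2f} eV, green: {green:.2f} eV")
assert uv > green  # the energy difference is given up (mostly as heat)
```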
Some 3.5 billion years ago, a single-celled organism now named LUCA (for the Last Universal Common Ancestor of all life on Earth) developed the ability to pull oxygen out of its environment. Although LUCA is long gone, University of Hawaii microbiologist Maqsudul Alam has taken a step toward understanding the secret behind this world-changing feat of chemical engineering. LUCA evolved in an oxygen-free, or anaerobic, environment. But as oxygen levels rose in the ocean and atmosphere, the cell had to develop a way to neutralize what was, in essence, a poison. Alam hit on that defense while studying archaea—another type of primitive, single-celled creature. Alam studied two species of archaea, one aerobic and the other anaerobic. He isolated a crucial compound called protoglobin that protects anaerobic species of archaea from the toxic effects of oxygen. “Protoglobin is the nose and the hand of the archaea,” he says. “It senses oxygen, binds it, and removes it from the cell before it can do any harm.” Protoglobin, or something much like it, apparently provided a similar defense for LUCA. But that is only half of the story. When Alam purified the protoglobin to study its structure, he saw that the molecule looks surprisingly like diluted blood. In fact, protoglobin binds and releases oxygen the same way that hemoglobin does as it transports oxygen through blood. Alam believes that while LUCA initially evolved protoglobin for protection from oxygen, the organism’s descendants developed a variant of the molecule—hemoglobin—that transformed oxygen from a poison into a nutrient. That innovation enabled life to expand into new environments and set the stage for all oxygen-breathing organisms, Alam says. The next step is to create a computer model that will explain how protoglobin works. 
Alam hopes such a model will allow him to unravel the genetic changes that transformed protoglobin and answer what he calls the $64 million question: How did protoglobin evolve to transport oxygen through the bodies of multicellular organisms?
VLTI observations of the radii of four small stars
The radii and masses of the four very-low-mass stars now observed with the VLTI: GJ 205, GJ 887, GJ 191 (also known as "Kapteyn's star") and Proxima Centauri (red filled circles, with error bars). For comparison, the planet Jupiter's mass and radius are also plotted (blue triangle). The two curves represent theoretical models for stars of two different ages (400 million years, red dashed curve; 5 billion years, black solid curve; models by Gilles Chabrier and collaborators at the École Normale Supérieure de Lyon, France). As can be seen, theory and observations fit very well.
About the Image
Release date: 29 November 2002
Size: 800 x 789 px
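Radii like those plotted here come from combining an interferometric angular diameter with a parallax distance. A minimal sketch of that conversion follows; the sample numbers are illustrative, roughly the published values for Proxima Centauri (an angular diameter of about 1.02 milliarcseconds at about 1.30 parsecs), and are not taken from this caption.

```python
import math

# Linear radius from an angular diameter (small-angle approximation):
# diameter = angular_diameter_in_radians * distance.

MAS_TO_RAD = math.pi / 180 / 3600 / 1000  # milliarcseconds -> radians
PC_TO_M = 3.0857e16                        # parsec -> metres
R_SUN_M = 6.957e8                          # solar radius, metres

def stellar_radius_rsun(ang_diam_mas, distance_pc):
    """Stellar radius in solar radii, from angular diameter and distance."""
    diameter_m = ang_diam_mas * MAS_TO_RAD * distance_pc * PC_TO_M
    return diameter_m / 2 / R_SUN_M

# Illustrative values, approximately those reported for Proxima Centauri:
print(f"{stellar_radius_rsun(1.02, 1.30):.3f} solar radii")
```

This yields roughly a seventh of the Sun's radius, which is why such stars sit close to Jupiter's point on the plot.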
Pascal’s Triangle is a special triangular arrangement of numbers used in many areas of mathematics. It is named after the famous 17th-century French mathematician Blaise Pascal because he developed so many of the triangle’s properties. However, this triangular arrangement of numbers was known to the Persian poet and mathematician Omar Khayyam (c. 1044-1123) and the Chinese mathematician Zhu Shijie (c. 1260-1320), centuries before Pascal.
At the top of the triangle is a 1, which makes up the 0th row. The 1st row (1, 1) contains two 1s, each formed by adding the two numbers above it, one to the left and one to the right, in this case 0 and 1. (All numbers outside the triangle are 0s.) Do the same to create the 2nd row: 0 + 1 = 1, 1 + 1 = 2, 1 + 0 = 1; and likewise for all subsequent rows.
A number in the triangle can be found by using nCr (n choose r), where n is the number of the row and r is the number of the element in that row. This is especially helpful for finding a particular term in the expansion of a binomial of the form (x + y)^n. For example, the 4th term in the 6th row of the triangle is 6C4 = 15. (Remember: the first 1 in each row is the 0th element, so this is correct.)
Sum of rows: The sum of the numbers in any row is equal to 2^n, where n is the number of the row.
2^0 = 1 = 1
2^1 = 2 = 1 + 1
2^2 = 4 = 1 + 2 + 1
2^3 = 8 = 1 + 3 + 3 + 1
2^4 = 16 = 1 + 4 + 6 + 4 + 1
and so forth.
Prime numbers: If the first element in a row is a prime number (remember, the first 1 in any row is the 0th element), all of the numbers in that row (excluding the 1s) are divisible by it. For example, in the 7th row (1, 7, 21, 35, 35, 21, 7, 1), the entries 7, 21 and 35 are divisible by 7.
In algebra, each row in Pascal’s Triangle contains the coefficients of the binomial (x + y) raised to the power of the row.
(x + y)^0 = 1
(x + y)^1 = 1x + 1y
(x + y)^2 = 1x^2 + 2xy + 1y^2
(x + y)^3 = 1x^3 + 3x^2y + 3xy^2 + 1y^3
(x + y)^4 = 1x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + 1y^4
and so forth.
Another major area where Pascal’s Triangle shows up and is very useful is probability, where it can be used to count combinations. Interesting number patterns: Many interesting number patterns can be found in the triangle, including the Fibonacci sequence, the triangular and square numbers (found in the diagonals starting with row 3), and the polygonal numbers. Another interesting connection is to Sierpinski’s Triangle: when all of the odd numbers in Pascal’s Triangle are filled in and the evens are left blank, the recursive Sierpinski Triangle fractal is revealed. Each of these is a fascinating topic which warrants further research on your part.
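The properties above are easy to verify programmatically. A minimal sketch using Python's built-in binomial coefficient (math.comb, available from Python 3.8):

```python
from math import comb

def pascal_row(n):
    """Row n of Pascal's Triangle (row 0 is [1]), built from nCr."""
    return [comb(n, r) for r in range(n + 1)]

# The 4th element of the 6th row is 6C4 (the first 1 is the 0th element).
assert pascal_row(6)[4] == comb(6, 4) == 15

# Sum of rows: each row sums to 2^n.
for n in range(10):
    assert sum(pascal_row(n)) == 2 ** n

# Prime rows: every interior entry of row 7 is divisible by 7.
assert all(x % 7 == 0 for x in pascal_row(7)[1:-1])

# Row n holds the coefficients of (x + y)^n.
print(pascal_row(4))  # [1, 4, 6, 4, 1]
```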
May 26, 2010, 6:13 AM Post #4 of 6
Please use the code tags whenever you post code. Start by adding these 2 lines, which should be in every Perl script you write:
use strict;
use warnings;
Those pragmas will point out lots of coding errors that can be difficult to track down. The strict pragma forces you to declare your vars, which is done with the 'my' keyword. You should always check the return code of an open call to make sure it was successful and take action if it wasn't. It's best to use the 3-arg form of open and a lexical var for the filehandle instead of the bareword:
open my $file1, '<', $ARGV[0] or die "failed to open '$ARGV[0]': $!";
open my $file2, '<', $ARGV[1] or die "failed to open '$ARGV[1]': $!";
That is normally written as:
Since the print function is a list operator, your attempt to read in the employee number from file two in the print statement will slurp and print the entire file. Instead, you should assign the employee number to a scalar var and use that var in the print statement.
On Friday, we posed the following back-to-school-themed Fermi problem: Assuming you're not in a big lecture hall and the professor shuts the door at the start of class, how long does it take for you and your classmates to deplete the oxygen enough to feel it? We promised a surprising answer, and here it is. You decide if our back-of-the-envelope calculations are reasonable.
Let's build our classroom first. It's 16 feet wide and long, and 10 feet tall. In handy metric dimensions, that's 5 meters by 5 meters by 3 meters, or 75 cubic meters. A cubic meter is 1000 liters, so now we've got 75,000 liters of fresh air. The oxygen content of air is about 21 percent, and at about 17.5 percent you'll run from the room screaming. To get from fresh and breathable to absolutely stifling, take the difference between 21 percent of 75,000 liters and 17.5 percent of 75,000 liters. That gives us 2,625 liters of oxygen to get through.
How much oxygen does a human consume? It was tough finding a reliable source, but this press release about the 2006 installation of a new oxygen generation system on the International Space Station provides a clue: During normal operations, it will provide 12 pounds daily; enough to support six crew members. Aha! So one person needs about 2 lb of oxygen a day, or 0.9 kg. But how many liters is that? Oxygen has a molar mass of 16 grams, so oxygen gas, or O2, has a mass of 32 grams per mole. One mole of gas at standard pressure and temperature takes up 22.4 liters. Now, as my high-school chemistry teacher would say, it's time to hop on the mole-train:
0.9 kg x (1000 g/1 kg) x (1 mole O2/32 g O2) x (22.4 L/1 mole O2)
This gives us a daily oxygen intake of 630 liters per person. Let's get a more reasonable rate:
(630 L/day) x (1 day/24 hours) x (1 hour/60 mins)
Now we have the serviceable rate of oxygen consumption of 0.4375 liters per minute. We're almost there. Now populate the classroom with 34 students and 1 teacher.
The 35 occupants consume 15.3125 liters per minute. Now for the final calculation:
2625 L x (1 minute/15.3125 L)
It will take about 171 minutes, or 2 hours and 51 minutes, for the room to become unbearably stifling. You can imagine that you'd start to feel pretty uncomfortable about an hour and a half into the lecture—a good argument for shorter classes.
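The whole chain of unit conversions above collapses into a few lines of Python; every figure here is the post's own estimate:

```python
# Fermi estimate: minutes until a sealed classroom feels stifling.
# All inputs are the post's own numbers (room size, 21% -> 17.5% O2,
# 630 L of O2 per person per day, 35 occupants).

ROOM_VOLUME_L = 5 * 5 * 3 * 1000                 # 75,000 L of air
usable_o2_l = (0.21 - 0.175) * ROOM_VOLUME_L     # 2,625 L to "get through"

O2_PER_PERSON_L_PER_DAY = 630
per_person_l_per_min = O2_PER_PERSON_L_PER_DAY / (24 * 60)  # 0.4375 L/min

occupants = 35
minutes = usable_o2_l / (occupants * per_person_l_per_min)
print(f"{minutes:.0f} minutes")  # about 171 minutes
```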
- Does global change increase the success of biological invaders? Trends in Ecology & Evolution, Volume 14, Issue 4, 1 April 1999, Pages 135-139. Jeffrey S. Dukes and Harold A. Mooney.
Abstract: Biological invasions are gaining attention as a major threat to biodiversity and an important element of global change. Recent research indicates that other components of global change, such as increases in nitrogen deposition and atmospheric CO2 concentration, favor groups of species that share certain physiological or life history traits. New evidence suggests that many invasive species share traits that will allow them to capitalize on the various elements of global change. Increases in the prevalence of some of these biological invaders would alter basic ecosystem properties in ways that feed back to affect many components of global change.
- Roles of parasites in animal invasions. Trends in Ecology & Evolution, Volume 19, Issue 7, 1 July 2004, Pages 385-390. John Prenter, Calum MacNeil, Jaimie T. A. Dick and Alison M. Dunn.
Abstract: Biological invasions are global threats to biodiversity and parasites might play a role in determining invasion outcomes. Transmission of parasites from invading to native species can occur, aiding the invasion process, whilst the ‘release’ of invaders from parasites can also facilitate invasions. Parasites might also have indirect effects on the outcomes of invasions by mediating a range of competitive and predatory interactions among native and invading species. Although pathogen outbreaks can cause catastrophic species loss with knock-on effects for community structure, it is less clear what impact persistent, sub-lethal parasitism has on native-invader interactions and community structure. Here, we show that the influence of parasitism on the outcomes of animal invasions is more subtle and wide ranging than has been previously realized.
- Understanding the long-term effects of species invasions. Trends in Ecology & Evolution, Volume 21, Issue 11, 1 November 2006, Pages 645-651. David L. Strayer, Valerie T. Eviner, Jonathan M. Jeschke and Michael L. Pace.
Abstract: We describe here the ecological and evolutionary processes that modulate the effects of invasive species over time, and argue that such processes are so widespread and important that ecologists should adopt a long-term perspective on the effects of invasive species. These processes (including evolution, shifts in species composition, accumulation of materials and interactions with abiotic variables) can increase, decrease, or qualitatively change the impacts of an invader through time. However, most studies of the effects of invasive species have been brief and lack a temporal context; 40% of recent studies did not even state the amount of time that had passed since the invasion. Ecologists need theory and empirical data to enable prediction, understanding and management of the acute and chronic effects of species invasions.
- Feature. Current Biology, Volume 22, Issue 19, R819-R821, 9 October 2012.
Thousands of species have invaded new territories in recent decades, often aided by global trade and man-made habitat change. While many remain harmless, some may cause serious damage. Therefore, we need improvements in surveillance and in our understanding of which factors make a successful invasion possible. Michael Gross reports.
Radio-Collaring Elephants in Namibia with Keith Leggett Keith Leggett radio-collars enormous elephants in the Namibian desert to find out where they range and roam-and gets help from a BBC film crew. Attaching a radio-collar to a 5-ton animal is no easy task. Especially if that animal, say, an elephant, has no interest in cooperating and does not necessarily turn up where you expect it to. This is Keith Leggett's challenge as a researcher with the Northwestern Namibia Desert-dwelling Elephant and Giraffe Project in Namibia, Africa. With the help of Earthwatch volunteers since 2002, Leggett has been radio-collaring and tracking these enormous pachyderms in the Namibian desert to find out more about their home ranges and travel routes. Why? These elephants don't make very good neighbors - they drink upwards of 30 gallons of water per day, even in the dry season when water is scarce, and are extremely destructive eaters, pushing down and trampling trees and anything in their paths. Not surprisingly, elephants and people in this area have trouble coexisting. But, Namibian elephants are of great interest to tourists, and this may be the key to their salvation in this country that has been described as the land that God created in anger. Probably a pretty fair description of the environment, says Leggett. Understanding the routines and ecology of elephants is the first step in helping them coexist with humans. Last February, Leggett got the chance to capture and collar an elephant in front of BBC cameras. This is his report on how it went: "I went up to the bush two days before the collaring was due and met the BBC team and enjoyed them straight off. We found the mature bull (WKM-14) but the younger bull was nowhere to be seen. After searching for two days and not finding the younger bull it was decided to go with the older mature male. Everyone had arrived in camp by the morning of the proposed collaring so we went straight out to collar the bull. 
The collaring was absolutely textbook, couldn't think of a more perfect one. The collar went straight under the bull without any hassles, the bull fell in an open area and he responded perfectly to the drugs... perfect! "On top of all that he moved straight into the floodplains of the Hoanib River, a move none of the previously collared elephants had undertaken. It will be very interesting to see his movements when he comes into musth, especially in response to the other dominant bull in the area. "The film crew themselves were great fun; the only drawback was doing some takes 3 or 4 times... don't know how actors do it. I simply don't have the patience for it. Though they were very good when we were doing the collaring and stayed in the background and out of the way. Mind you, it will probably work out to be about 2 minutes of airtime, but at least I have another collar." In the last three years of leading Earthwatch volunteers into the Namibian desert, Leggett has tracked, observed, and collared numerous elephants, and sends our office emails from his trips, such as this report from May 18, 2005: "The first night we were in Purros, 3 elephants walked straight past camp. It appeared that we were going to have a good trip after all, or so we thought. The next two days were spent in a fruitless search for elephants... not another hide nor hair was observed... it was decided to head to the Hoanib River. "The first thing we observed on arriving in the Hoanib River was a herd of 5 elephants, with one of the cows having a calf of about 3 months of age. He is still totally uncoordinated and lurches from one misadventure to another. The previous calf in the west was born 12 months ago, and so a new calf is still a novelty; most of the herd females take turns in guarding and guiding him around. The minders are very vigilant, and when the older calf came to play the older animals saw him off... quite amusing at times.
The mothers appear to play only a minor role in the overall rearing of the individual; they usually do the nursing, though I have seen other females nurse young periodically. The group takes responsibility for the offspring. "Later that day we saw the rest of the herd of 14, so it was hog heaven for 2 days, and then the elephants disappeared again. It appears as though they are doing circuits at this time of year, wandering between feeding areas. They are always moving, never stopping for long in one area. "Overall, the volunteers were excellent and put up with the vehicle breakdowns, the lack of elephants and then the total abundance, then absence again, with a resigned tolerance... they were also pretty good fun. The west has dried out significantly and the days were very hot, but the nights were cool. There has been significant grass growth this year with the good rains, and the animals are all looking in extremely good shape. Springbok, gemsbok and ostrich were abundant, and while the elephants have spread pretty thin, the rest of the wildlife has collected in feeding aggregations. "After a shower, a shave and some relaxation time, I feel almost human again..." Leggett's study is one of the first to scientifically document the home ranges and movements of these massive animals. Preliminary findings recently published in African Zoology show that elephant movements range from 50 to 625 kilometers (31 to 388 miles), over a period of up to five months, in response to available water and vegetation. In June, July, and August of 2006, Earthwatch teams will help Leggett track this animal, as well as up to a dozen others that he has radio-collared. They will also identify individual elephants in the field, using distinguishing tusk characteristics, ear scars, and footprint patterns, and observe their behavior. This information will help conservation agencies better manage Namibia's unique desert elephants.
<urn:uuid:015f4b05-56f2-4463-8bdd-8e038a204a0f>
3
1,253
Nonfiction Writing
Science & Tech.
57.921608
In this lesson our instructor gives an introduction to conditional loops. First, he discusses the while loop, looping over arrays, and array traversal functions. Then he talks about looping over indexed and associative arrays. He also lectures on looping over arrays using list() and each(), control structure scope, and coding conventions. He ends the lesson with a helpful homework challenge. A while loop is a conditional control structure that executes a statement group repeatedly as long as its specified test condition remains true. A while loop's test condition is compared to TRUE before each execution of the loop's statement group. Looping over arrays is a common programming task, and PHP provides several built-in functions for doing so. They work on the basis of an array cursor, which is a 'marker' for the 'current' array element:
current() – returns the value of the array element at the current array cursor position
key() – returns the key of the array element at the current array cursor position
next() – advances the array cursor by one
prev() – moves the array cursor back by one
reset() – sets the array cursor to the 1st element
end() – sets the array cursor to the last element
The list() construct and the each() function are also used to loop over arrays. list() is used to assign values to multiple variables at a time from an array. each() returns key/value information for the current array element and advances the array cursor by one; it returns FALSE if the end of the array is reached. Unlike some programming languages, PHP does not have 'block-level' scope for variables used with control structures. Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.
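The cursor functions described above can be sketched in a few lines. The array and its values here are made up for illustration; note also that each() was deprecated in PHP 7.2 and removed in PHP 8, so the cursor functions and foreach are shown instead:

```php
<?php
// Hypothetical example data for illustrating array traversal.
$ages = ['alice' => 30, 'bob' => 25, 'carol' => 35];

// A while loop repeats as long as its condition compares equal to TRUE.
// Here the array cursor functions drive the loop.
reset($ages);                    // set the cursor to the 1st element
while (key($ages) !== null) {    // key() is null once the cursor passes the end
    echo key($ages), ' => ', current($ages), "\n";
    next($ages);                 // advance the cursor by one
}

// list() assigns values to multiple variables at a time from an array.
list($x, $y) = [10, 20];
echo $x + $y, "\n";              // 30

// foreach is the usual idiom for both indexed and associative arrays.
foreach ($ages as $name => $age) {
    echo "$name is $age\n";
}
```

The same traversal works on indexed arrays; the keys are simply the integer indices.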
<urn:uuid:ed63306a-0c96-488e-bfb0-b477bcdd3fa5>
3.90625
401
Truncated
Software Dev.
39.844538
Department of Environment and Conservation (NSW), 2005 ISBN: 1 7412 2144 7 7 Previous Recovery Actions - 7.1 Survey - 7.2 Profile and environmental impact assessment guidelines - 7.3 Establishment of a recovery team - 7.4 Community awareness initiatives - 7.5 In-situ protection During the preparation of this recovery plan, 18 D. sp. C Illawarra sites were surveyed by the DEC with the assistance of Anders Bofeldt (Wollongong Botanic Gardens) and community volunteers. Habitat details, threats and observations of flowering and fruit production were recorded at each surveyed site. A species profile and environmental impact assessment guidelines have been prepared for D. sp. C Illawarra (Appendix 4) to assist public authorities, community groups and private landholders in the conservation of the species. These documents also aim to assist consent and determining authorities in the statutory assessment of potential impacts on the species. The Illawarra Regional Threatened Flora Recovery Team was established in June 2001 to coordinate the recovery planning for six plant species which occur in the Illawarra region and are listed as endangered at a State and National level. These species are D. sp. C Illawarra, Irenepharsus trypherus, Zieria granulata, Pterostylis gibbosa, Cynanchum elegans and Pimelea spicata. Representatives of the public authorities that are involved in the planning and/or management of remnant vegetation in the region are present on the recovery team, as are representatives of various regional organisations and community groups. - An information brochure has been prepared and distributed to raise awareness of the six “Threatened Plants of the Illawarra” including D. sp. C ‘Illawarra’. - In June 2002, the Australian Network for Plant Conservation and Wollongong Council hosted a workshop to raise awareness of issues relating to the conservation of threatened flora in the Illawarra. D. sp. C ‘Illawarra’ was one of the subject species of that workshop. 
- In November 2002, Landcare Illawarra hosted a workshop to raise awareness of D. sp. C ‘Illawarra’ and five other endangered flora species in the Illawarra. - The DEC has initiated a program of meeting landholders with D. sp. C ‘Illawarra’ on their property to discuss sympathetic management of the species and the opportunities for entering into conservation agreements. - A Voluntary Conservation Agreement (VCA) under the NP&W Act has been signed to protect habitat for the species at Willow Creek (Dc21). A Plan of Management for the site has been prepared. - A Property Agreement under the under the Native Vegetation Conservation Act 1997 has been signed to protect habitat for the species at Marshall Mount (Dc3). Cattle have also been temporarily removed from this property until the installation of watering points and fencing of native vegetation has been completed (A. Knowlson, pers. comm.). - Threat abatement works including fencing to exclude livestock and bush regeneration are being implemented by private landholders and the DEC at four sites (Dc17, Dc21, Dc28 and Dc41).
<urn:uuid:a1045ccb-eb51-4cc4-a94d-dc699accc8f1>
2.828125
685
Knowledge Article
Science & Tech.
36.642847
The Night Skies of August: A Convergence of Planets and a Shower of Meteors by Leo Enright In the month of August, with longer nights than in July, we have more time to enjoy the view of the great Summer Milky Way, as well as the famous meteor shower of mid-August. This year we have the added bonus of the two brightest planets steadily converging in the western evening sky. At the beginning of the month, sunset in this area is at about 8:30 p.m. Eastern Daylight Time, and evening astronomical twilight ends at about 10:30 p.m. By the end of August, sunset will be at about 7:45 p.m., with twilight ending at about 9:30 p.m. Late summer sky watchers who are fortunate enough to have dark, rural skies can really get to know the Summer Milky Way and the constellations within it. Just let your late-evening gaze sweep from the northeast to the southern part of the sky. In the northeast, entirely within the Milky Way, you see Cassiopeia, in the shape of a very large letter “W”. High in the east you notice Cygnus, the Swan, also called The Northern Cross from the shape of its star pattern, and down in the south, in the richest and densest part of the Milky Way, is Sagittarius, whose star pattern forms the shape of a teapot with the handle to the left and the spout to the right. This Summer Milky Way is really one arm of our home galaxy, the Milky Way Galaxy, and our immense solar system, with the Sun, its nine planets and all their many moons, is really just a small dot among the 200 billion stars that make up this galaxy, which is about 100,000 light-years in diameter! During August we also have a chance to see the famous Andromeda Galaxy, the only other galaxy that can be seen with the unaided eye from our northern latitudes. This close neighbour of our galaxy is one of the largest members of the “local group” of over a dozen galaxies, and it is only (!) about 2 million light-years away. To find it, locate the “W” of Cassiopeia well up in the northeastern sky at about 11:00 p.m.
Trace a line from the right side of the “W” down and to the right toward the eastern horizon. About half way along that line, you should see a “faint fingerprint” on the sky. That is it. Remember that what you are seeing is another whole galaxy made up of 400 billion stars, and that the light from them has taken over 2 million years to reach your eyes! Among the bright planets, the two brightest of all do a great converging act this month. Brilliant Venus in early August is easily found low in the western sky between 30 minutes and 90 minutes after sunset. The second brightest planet, Jupiter, is somewhat higher but in the southwestern sky. At the beginning of August, they are 30 degrees apart, that is, 3 times the width of a fist held at arm’s length. Each evening they appear closer to each other by 1 degree, that is, by about the width of a person’s little fingernail held at arm’s length. Remarkably, at dusk on August 31, these two brightest planets will appear almost on top of each other. It should be a fine reminder of the Venus-Jupiter convergence of February, 1999. Of course, they are not physically near each other, since Jupiter, with an orbit that is far outside that of Earth, is actually 5 times farther away from us than Venus. The third evening planet, reddish Mars, may be seen rising in the east at about midnight in early August, rising 2 to 3 minutes earlier each evening, until by month’s end it will rise at about 10:30 p.m. Mars is gradually brightening and, if inspected in a small telescope, appears larger over the course of the month. Saturn and Mercury, which were seen low in the western evening sky in the month of June, are not visible in the first half of August, but in the last two weeks of the month they may be seen very low in the eastern sky between 60 minutes and 30 minutes before sunrise. As was the case in the western sky two months ago, they are both again below Castor and Pollux, the brightest stars in the constellation Gemini.
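The August 31 date of the Venus-Jupiter meeting follows directly from the figures quoted above. A quick back-of-envelope check, using only the values from the text (30 degrees apart on August 1, closing at roughly 1 degree per evening):

```python
# Back-of-envelope check of the Venus-Jupiter convergence described above.
# Both numbers are taken from the article, not measured.
separation_deg = 30.0           # separation at the beginning of August
closing_rate_deg_per_day = 1.0  # apparent closing rate per evening

days_to_conjunction = separation_deg / closing_rate_deg_per_day
print(days_to_conjunction)  # 30.0 -> about 30 days after Aug 1, i.e. Aug 31
```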
Saturn appears above; Mercury is below and to its left. Over the last 10 days of the month, Mercury becomes considerably brighter than Saturn, but remember that a good view of the eastern horizon will be needed to see this planetary pairing in the morning sky. Several beautiful lunar-planetary arrangements are to be seen this month. On the evening of August 7, do not miss the sight of the slim crescent moon just to the right of Venus and low in the western sky about 40 to 50 minutes after sunset. At the same time on the evenings of the 8th and 9th, the crescent moon will be seen marching between the converging planets Venus and Jupiter, and on the 10th it will be to the left of Jupiter. At about midnight on August 24th the rising moon will appear to the left of Mars, and again about midnight and after on August 25th it will appear close to the Pleiades star cluster. In the morning sky about 40 minutes before sunrise on August 31st, the thin waning crescent moon will appear above Saturn, and at the same time on September 1st, the very thin crescent moon will appear below Saturn. With the famous Perseid Meteor Shower reaching its absolute peak during the day of August 12, Thursday and Friday August 11th and 12th should be almost equally good for observing this annual event, which has received its name because these meteors (sometimes called “shooting stars”) all seem to radiate from a point in the constellation Perseus, which is in the northeastern evening sky below the “W” of Cassiopeia. With a First Quarter Moon setting about midnight or before on those evenings, there will be no lunar interference at all after midnight, and so, amateur astronomers are looking forward to spectacular “meteoric fireworks”, especially from midnight to dawn on both of the peak nights. If the weather cooperates, many skywatchers will be observing all night, keeping an hour-by-hour count.
If the weather is uncooperative on the peak nights, remember that the Perseids are somewhat active for several weeks before and after their peak. To see the most meteors possible, face in a northerly, or a southeasterly, direction, and direct your gaze to a “quarter-section” of the sky quite high above the horizon. Most of the meteors are very fast, and are coming from a spot, called the radiant, in the northeastern part of the sky. Do not despair if you have 10 minutes without seeing any; in the next 10 minutes you may see 20 of them, since they often come in clusters. I would be interested in hearing from local observers about their “per-hour counts of Perseids” for various times during both of the nights mentioned. Those who are interested in more information about observing stars, planets, and meteor showers throughout the year should obtain a copy of the book, The Beginner’s Observing Guide, which is now available at Sharbot Lake Pharmacy.
<urn:uuid:5a639053-f70d-416f-9182-d7aa0abd8c7a>
2.96875
1,557
Nonfiction Writing
Science & Tech.
58.832917
Mars One is a private sector endeavor to send human beings to Mars. The estimated cost of $6 billion will be raised by selling T-shirts and hosting reality shows. In theory, the mission will launch in 2023. In order to reduce costs, astronauts will not be returned to Earth. In other words, this is a one-way trip. There are a lot of technical issues that the sponsors have failed to adequately evaluate. Although they acknowledge high radiation exposure, resulting in a much higher probability of developing cancer (without a realistic ability to treat it), they have set the launch date for a period of high solar activity, which dramatically increases the risks to the astronauts during transit. In order to reduce radiation exposure on Mars, astronauts will be largely confined to living underground, which poses psychological risks. Energy generation is proposed to come from solar panels. However, Mars receives less than half the solar energy per unit area that Earth does (about 43%, from the inverse-square law at Mars' distance of 1.52 AU). It is also susceptible to dust storms, which would reduce solar energy output to virtually zero. If the storms last longer than a few days, the astronauts will be toast. The solar energy available during winter months is also reduced considerably. The Mars rovers that relied upon solar panels had to shut down during the winter. Such an option is not available to astronauts, who must rely upon energy for heating, oxygen production and water production. Supplying astronauts with enough food is also a problem. The Mars One website says astronauts will raise their own food. However, this idea is very unrealistic. Even on Earth, it took a tremendous amount of land to produce enough food to feed the people in Biosphere 2, who complained that they were always hungry. The Biosphere experiment also suffered from reduced oxygen and high carbon dioxide, which killed many species within the Biosphere. Problems on Mars could not be solved as easily as pumping in oxygen from the outside, which was done for the Biosphere.
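The solar-energy shortfall above comes straight from the inverse-square law. A quick check, using the standard mean orbital distances (these values are textbook figures, not from the Mars One material):

```python
# Solar flux at Mars relative to Earth via the inverse-square law.
# Standard mean orbital distances, in astronomical units.
earth_distance_au = 1.0
mars_distance_au = 1.52

# Flux falls off as 1 / r^2 with distance r from the Sun.
flux_ratio = (earth_distance_au / mars_distance_au) ** 2
print(round(flux_ratio, 2))  # 0.43 -> Mars gets about 43% of Earth's solar flux
```

Dust storms and winter then cut further into this already-reduced budget, which is the article's point about solar power being marginal.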
If problems or illnesses arise on Mars, help is at least 7-12 months away. So, this mission truly is a suicide mission. Fortunately, the sponsors will probably never get enough money to get the mission off the ground.
<urn:uuid:5c9c5c12-81c2-4711-90d3-ecf7d346c8fa>
3.21875
427
Personal Blog
Science & Tech.
45.291458
Life Science: Session 3 Sex Cell Production What are sex cells? Sex cells, or gametes, are unique to organisms that reproduce sexually. In animals and plants (fungi are somewhat different in this regard) there are two types of sex cells: male and female. The male sex cells are sperm, while the female sex cells are eggs. Sex cells are formed from special body cells that are typically located in sex organs. In most animals, sperm are formed in the testes of males, and eggs are formed in the ovaries of females. Sex cells contain only half of the hereditary material present in the body cells that form them. This is important because male and female sex cells ultimately join to become a fertilized egg, which gives rise to a new organism, or offspring. In order for the offspring to resemble its parents, its first cell must receive the entire genome from its two parents. For humans, we know there are 46 chromosomes in body cells existing as 23 pairs. A fertilized egg must therefore contain this same number and arrangement. In an elegant process called meiosis, each sex cell receives one member of each chromosome pair—23 total. When sperm fertilizes egg, these singles unite to reform pairs, with half the genome coming from each parent. With a few exceptions, this pattern holds true for all sexually reproducing organisms. How are sex cells produced? Sex cells are produced from special body cells that contain the entire genome. The process by which the genome is halved is very precise — it’s not just a matter of randomly dividing the chromosomes into two sets. The process involves two cell divisions. Before the first occurs, all of the chromosomes are duplicated just as they are in body cell reproduction, but what happens next is different: the two duplicated strands remain attached to each other as the members of each chromosome pair move alongside each other. 
During the cell division that follows, only one member of each pair is transferred to each daughter cell—this is where the number of chromosomes is halved. The two strands of each chromosome are then separated during the second cell division, still maintaining half the number that existed in the parent cell. This results in four daughter cells — sperm or egg — that contain one member of each chromosome pair. This process is called meiosis. What is the role of sex cell production in an animal life cycle? Sex cell production ensures that the genome is maintained between parent and offspring generations. Occasionally, this process goes awry with chromosome pairs not lining up or not separating. The consequences are almost always harmful, and frequently lethal to potential offspring. A successful animal life cycle therefore depends on successful sex cell production. There is another consequence to sex cell production that has a profound impact on the populations involved. Unlike body cell production, where the daughter cells are identical to parent cells, fertilized eggs result from genetic material from two different parents. Furthermore, each of these parents is only able to pass on half of its genome. The mixing and matching of half sets of chromosomes results in the astounding diversity we see in the living world. For example, we can see “parts” of both our parents when we look in the mirror. Similarly, a litter of puppies will reflect the size and coloration of both parents. The significance of this is explored in Session Five: Variation, Adaptation, and Natural Selection. 
Compare body cell reproduction with sex cell production:

| | Body cell reproduction | Sex cell production |
| --- | --- | --- |
| Role in life cycle | Growth and maintenance | Reproduction |
| Where process occurs | Cells in all parts of body | Sex organs or tissues |
| Number of cell divisions | One | Two |
| What happens to chromosomes | All chromosomes line up singly, each chromosome duplicates, the two copies separate, and one copy of each chromosome is distributed to each daughter cell. | First division: chromosomes duplicate and copies remain attached, chromosome pairs line up alongside each other, the members of each pair separate, one member of each pair goes to each daughter cell. Second division: all chromosomes line up singly, the two copies separate, one copy of each chromosome is distributed to each daughter cell. |
| Number of cells that result | Two | Four |
| Number of chromosomes in resulting cells | Same number as in parent cell | Half the number as in parent cell |
| Significance | Genome is maintained; all information is passed along | Genome is halved; will be restored at fertilization |
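The chromosome bookkeeping in the comparison above can be sketched as a toy model (the numbers are for humans; the cell model is deliberately simplified and is not part of the original lesson):

```python
# Toy illustration of the chromosome counts described above.
BODY_CELL_CHROMOSOMES = 46  # 23 pairs in human body cells

def body_cell_reproduction(n):
    """One division: duplicate, then split -> two cells with the same count."""
    return [n, n]

def sex_cell_production(n):
    """Two divisions -> four gametes, each with half the parent's count."""
    return [n // 2] * 4

print(body_cell_reproduction(BODY_CELL_CHROMOSOMES))  # [46, 46]
print(sex_cell_production(BODY_CELL_CHROMOSOMES))     # [23, 23, 23, 23]

# Fertilization restores the full genome: 23 from each parent.
gamete = sex_cell_production(BODY_CELL_CHROMOSOMES)[0]
assert gamete + gamete == BODY_CELL_CHROMOSOMES
```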
<urn:uuid:f9e1e7fe-748a-47bc-8b98-c404e8206a3d>
4.21875
934
Knowledge Article
Science & Tech.
38.077658
Dictyostelium discoideum is a soil-living amoeba. A group of 100,000 amoebae forms a mound as big as a grain of sand. The hereditary information is carried on six chromosomes with sizes ranging from 4 to 7 Mb, resulting in a total of about 34 Mb of DNA, plus a multicopy 90 kb extrachromosomal element that harbors the rRNA genes, and the 55 kb mitochondrial genome. The estimated number of genes in the genome is 8,000 to 10,000, and many of the known genes show a high degree of sequence similarity to genes in vertebrate species. - NIH Credit: Rex Chisholm, Northwestern University (NIGMS Image Gallery)
<urn:uuid:55a0410f-0c0d-49f9-b756-4eb138946ccf>
3.421875
146
Knowledge Article
Science & Tech.
47.450708
This is an image of an unidentified environmental microbial community collected from a shallow subsurface sediment sample. The sample was taken from the Gulf of Mexico at a depth of 575 meters and photographed using a DNA DAPI fluorescent stain. The stain fluoresces blue to count the cells found in the sediment sample. Image Credit: Heath Mills/TAMU Foraminifera, like the one seen here, are tiny creatures in the ocean about the size of the head of a pin that are surrounded by calcium carbonate shells, similar to the shells around other sea creatures. Matthew Schmidt, a Texas A&M oceanographer, uses the foraminifera shells taken from ocean core samples to gather clues about the creature's surroundings, which helps scientists understand the conditions present at the start of the Younger Dryas period. Photo by Howard Spero at University of California Davis. The mutton snapper inhabits much of the Atlantic Ocean, from Massachusetts to Brazil. Texas A&M Geography doctoral candidate Pablo Granados-Dieseldorff studies the mutton snapper in its spawning ground, the Mesoamerican Reef, which runs from Mexico to Honduras, in hopes of generating science-based conservation methods to protect both fish and habitat. Peer into the interior of a thermal ionization mass spectrometer, located in the R. Ken Williams '45 Radiogenic Isotope Geosciences Laboratory. The instrument detects minute differences in the sub-atomic makeup of elements. Researchers use these differences found in rocks, minerals, sediments and fossils to trace ancient ocean and atmospheric circulation patterns during periods of past climate change. They can also use isotopic compositions of uranium and lead to date rocks that are millions to billions of years old. A drill bit from the Joides Resolution, a drilling vessel used by researchers in Texas A&M’s Integrated Ocean Drilling Program.
This photo was taken during Program Expedition 321 in the equatorial Pacific Ocean, during which researchers obtained sediments from the sea floor in order to reconstruct a detailed record of climate change over the last 55 million years. Researchers looked at minerals as well as microscopic fossils to construct the history. Photo by Bridget Wade This is the image you would see were you to stand just south of the Endurance Crater on the surface of Mars and gaze northward. Endurance was visited by NASA’s Mars Exploration Rover Opportunity from May to December, 2004. Images and measurements taken by Opportunity led scientists to conclude that liquid water flowed episodically through the area in ancient times. Texas A&M Geosciences professor Mark Lemmon played integral roles as atmospheric sciences lead in the successful missions of both Mars rovers, Spirit and Opportunity. More recently, he has also contributed to efforts in the Phoenix Lander, which first encountered Mars in May, 2008, and the Mars Science Laboratory (nicknamed Curiosity), which is scheduled for launch in November, 2011. Image Credit: NASA/JPL/Cornell Pictured on Abraham Lincoln’s nose, the tiny mineral zircon is used by geochronologists such as TAMU Geology and Geophysics professor Brent Miller to date rocks that are millions to billions of years old. The mineral is found in volcanic rocks that are inter-bedded with fossil-bearing sedimentary rocks. This provides one of the best ways to determine the ages of long-extinct species. Once-molten rocks that crystallized deep underground during plate tectonic collisions also contain zircon. The age of these zircons can be linked to the crystallization of the molten rock and thus give scientists a way to clock ancient mountain building processes.
<urn:uuid:322de67f-93aa-4dfe-a64b-20f475fa6ef8>
3.734375
749
Content Listing
Science & Tech.
30.758945
In Situ Synchrotron X-ray Fluorescence Mapping and Speciation of CeO2 and ZnO Nanoparticles in Soil Cultivated Soybean (Glycine max) With the increased use of engineered nanomaterials such as ZnO and CeO2 nanoparticles (NPs), these materials will inevitably be released into the environment, with unknown consequences. In addition, the potential storage of these NPs or their biotransformed products in edible/reproductive organs of crop plants can cause them to enter into the food chain and the next plant generation. Few reports thus far have addressed the entire life cycle of plants grown in NP-contaminated soil. Soybean (Glycine max) seeds were germinated and grown to full maturity in organic farm soil amended with either ZnO NPs at 500 mg/kg or CeO2 NPs at 1000 mg/kg. At harvest, synchrotron μ-XRF and μ-XANES analyses were performed on soybean tissues, including pods, to determine the forms of Ce and Zn in NP-treated plants. The X-ray absorption spectroscopy studies showed no presence of ZnO NPs within tissues. However, μ-XANES data showed O-bound Zn, in a form resembling Zn-citrate, which could be an important Zn complex in the soybean grains. On the other hand, the synchrotron μ-XANES results showed that Ce remained mostly as CeO2 NPs within the plant. The data also showed that a small percentage of Ce(IV), the oxidation state of Ce in CeO2 NPs, was biotransformed to Ce(III). To our knowledge, this is the first report on the presence of CeO2 and Zn compounds in the reproductive/edible portion of the soybean plant grown in farm soil with CeO2 and ZnO NPs.
ACS Nano, 2013, 7(2): 1415-1423. Hernandez-Viezcas JA, Castillo-Michel H, Andrews JC, Cotte M, Rico C, Peralta-Videa JR, Ge Y, Priester JH, Holden PA, Gardea-Torresdey JL. Last updated on May 3, 2013. This work is supported in part by the Nanoscale Science and Engineering Initiative of the National Science Foundation under NSF Award Number EEC-0118007.
<urn:uuid:f6844639-952e-49f8-b074-2d30e7c2e568>
2.703125
702
Academic Writing
Science & Tech.
43.647634
At its distant orbit, Webb is much too far from Earth to be reached by the space shuttle. Webb's science mission length is 5 years, with a 10-year goal. To ensure the 5-year mission, NASA has engineered the observatory so that all critical subsystems have a backup or will degrade gracefully with age. For instance, the Near Infrared Camera has two identical camera systems so that the optical quality can be maintained even if one fails. Webb will also carry enough fuel for 10 years of maneuvers. As with Hubble, Chandra, and Spitzer, the Webb science and operations center has the ability to change the operations of the observatory to maximize its scientific potential as it ages.
<urn:uuid:ea34cb02-989e-434c-8368-a17e06db2b00>
3.421875
166
Knowledge Article
Science & Tech.
48.372647
The answer appears to be yes, that we can construct such numbers at present. The techniques that have been used recently have their roots around 1985, when elliptic curves were first applied to cryptography and factorization and when personal computers with RAM by the megabyte became common. I would like to thank Charles for reminding me that a product of exactly two primes is called a semiprime. Chris K. Caldwell, a professor at the University of Tennessee at Martin whose current research interest is prime number theory, writes that "small examples of proven, unfactored, semiprimes can be easily constructed." What is easy for him is not so easy for me, but it might not be too hard if I re-read my copy of Bressoud's Factorization and Primality Testing. Proven, unfactored semiprimes are called "interesting semiprimes" by Don Reble, a software consultant who took up the problem from (at least his interpretation of) remarks by Ed Pegg, Jr. There are at least two examples online: a 1084-digit interesting semiprime constructed by Don Reble and a 5061-digit interesting semiprime constructed by David Broadhurst, a theoretical high energy physicist. Reble's interesting semiprime is in a text file that presents some parameters for a proof and the proof itself. It relies on properties of elliptic curves and is therefore currently over my head. Part of Reble's proof is that his semiprime survives a check that it is not a base-two strong probable prime. Broadhurst's interesting semiprime is in a text file that can be input to Pari. He has written there the relatively elementary conditions and the parameters that he used in order to prove that his number is a semiprime, basing his work on Reble's. He provides the location of a certificate that one of his parameters was proven prime using the free-of-cost, closed-source program Primo by Marcel Martin. Primo is an implementation of elliptic curve primality proving.
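The base-two strong-probable-prime check mentioned above is the standard one round of Miller-Rabin, and it is short enough to sketch (a minimal version; the test numbers below are my own illustrations, not Reble's):

```python
# Base-2 strong probable prime test (one round of Miller-Rabin),
# the kind of compositeness check mentioned in Reble's proof.
def is_strong_probable_prime_base2(n: int) -> bool:
    if n < 5 or n % 2 == 0:
        return n in (2, 3)
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(2, d, n)          # 2^d mod n, via fast modular exponentiation
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):    # repeated squaring: 2^(d*2^k) mod n
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

print(is_strong_probable_prime_base2(91))    # False: 91 = 7 * 13 is exposed
print(is_strong_probable_prime_base2(97))    # True: 97 is prime
# Passing the test is not a primality proof; 2047 = 23 * 89 is the
# smallest composite that survives it:
print(is_strong_probable_prime_base2(2047))  # True
```

Surviving this check is only evidence, which is why the actual semiprime proofs lean on elliptic-curve methods rather than probable-prime tests.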
For suggesting the problem, Broadhurst thanked Reble and Phil Carmody, a Linux kernel developer and researcher in high-performance numerical computing.
<urn:uuid:0e092c1a-3b18-4556-b79e-25340009ab5d>
2.703125
461
Q&A Forum
Science & Tech.
35.372676
Major Section: HISTORY

Example:
:pe fn ; sketches the command that introduced fn and
       ; prints in full the event within it that created fn.

See logical-name.

Pe takes one argument, a logical name, and prints in full the event corresponding to the name. Pe also sketches the command responsible for that event if the command is different from the event itself. See pc for a description of the format used to display a command. To remind you that the event is inferior to the command (i.e., you can only undo the entire command, not just the event), the event is indented slightly from the command and is marked with a slash (meant to suggest a tree branch). If the given logical name corresponds to more than one event, then :pe will print the above information for every such event. Here is an example of such behavior.

ACL2 !>:pe nth
 -4270  (ENCAPSULATE NIL ...)
             \
>V            (VERIFY-TERMINATION NTH)

Additional events for the logical name NTH:
 PV  -4949  (DEFUN NTH (N L)
             "Documentation available via :doc"
             (DECLARE (XARGS :GUARD (AND (INTEGERP N)
                                         (>= N 0)
                                         (TRUE-LISTP L))))
             (IF (ENDP L)
                 NIL
                 (IF (ZP N) (CAR L) (NTH (- N 1) (CDR L)))))
ACL2 !>
<urn:uuid:fc6923c6-3f17-4acb-84f4-38c1724a11bc>
3.328125
323
Documentation
Software Dev.
71.420293
This phenomenon has been explained by the Zetas and is thoroughly documented on this blog. While the "official" cause of such massive fish kills is often attributed to hypoxia (lack of oxygen), what is conveniently excluded in these opaque explanations is that high concentrations of dissolved methane essentially expel oxygen, thus rendering water and air uninhabitable for the fish and birds encountering it.

"Dead fish and birds falling from the sky are being reported worldwide, suddenly. This is not a local affair, obviously. Dead birds have been reported in Sweden and N America, and dead fish in N America, Brazil, and New Zealand. Methane is known to cause bird death, and as methane rises when released during Earth shifting, it will float upward through the flocks of birds above. But can this be the cause of dead fish? If birds are more sensitive than humans to methane release, fish are likewise sensitive to changes in the water, as anyone with an aquarium will attest. Those schools of fish caught in rising methane bubbles during shifting of rock layers beneath them will inevitably be affected. Fish cannot, for instance, hold their breath until the emergency passes! Nor do birds have such a mechanism." ZetaTalk

Click on map below for interactive version: yellow=2011, blue=2012, red=2013

Some of the Evidence (YouTube video up to Jan 30, 2011):
5000+ Black Birds
500+ Black Birds
100,000 Drum Fish
Tens of Thousands - Fish
Thousands of Fish
Thousands of Fish
Dozens of fish in just 50 feet
50 - 100 Birds - Jackdaws
100 Tons of Fish
Hundreds of Snapper
10 Tons of fish
Hundreds of fish
Thousands of fish
Hundreds of Fish
Hundreds of Fish
Scores of Fish
Hundreds of Fish
150 Tons of Red Tilapias
Thousands of Fish
Scores of dead fish
Hundreds of Starfish, Jellyfish

Main source: http://maps.google.com/maps/ms?ie=UT...bca25af104a22b

DEAD FISH IN 36 LAKES IN CONNECTICUT! MASS FISH DIE-OFF IN MICHIGAN! HEAPS OF DEAD FISH AT BAY STATE PONDS! DOZENS OF DEAD FISH FOUND IN MADISON POND!
RED SAND LAKE FISH DIE-OFF! MELTING LAKES REVEAL HUNDREDS OF DEAD FISH! HUNDREDS OF DEAD FISH IN MEADOWS RIVER DEAD BIRDS FALL FROM THE SKY IN KANSAS! TENS OF THOUSANDS OF DEAD FISH IN INDIA! LAKE MAARDU WITHOUT FISH! MASSIVE FISH MOR IN THE LIPETSK REGION! 100 TONNES OF DEAD FISH IN UKRAINE! PENGUINS LOSING THEIR FEATHERS TO UNKNOWN ILLNESS! DEAD TURTLES FOUND ON AUSTRALIAN BEACH! Animal Death List 4th June 2011 - 800 Tons of fish dead in a lake near the Taal Volcano in the Philippines. 13th May 2011 - Dozens of Sharks washing up dead in California. 13th May 2011 - Thousands of fish wash up dead on shores of Lake Erie in Ohio. 6th May 2011 - Record number of wildlife die-offs in The Rockies during the winter. 1st May 2011 - Two giant Whales wash ashore and die on Waiinu Beach in New Zealand. 22nd April 2011 - Leopard Sharks dying in San Francisco Bay. 20th April 2011 - 6 Tons of dead Sardines found in Ventura Harbour in Southern California. 20th April 2011 - Hundreds of Dead Abalone and a Marlin wash up dead on Melkbos Beach near Cape Town. 18th April 2011 - Hundreds of dead fish found in Ventura Harbour in Southern California. 29th March 2011 - Over 1300 ducks die in Houston Minnesota. 28th March 2011 - Sei Whale washes up dead on beach in Virginia. 26th March 2011 - Hundreds of fish dead in Gulf Shores. 8th March 2011 - Millions of dead fish in King Harbor Marina in California. 3rd March 2011 - 80 baby Dolphins now dead in Gulf Region. 25th February 2011 - Avian Flu - Hundreds of Chickens die suddenly in North Sumatra Indonesia. 23rd February 2011 - 28 baby Dolphins wash up dead in Alabama and Mississippi. 21st February 2011 - Big Freeze kills hundreds of thousands of fish along coast in Texas. 21st February 2011 - Bird Flu? 16 Swans die over 6 weeks in Stratford-Upon-Avon, UK. 20th February 2011 - Over 100 whales dead in Mason Bay, New Zealand. 20th February 2011 - 120 Cows found dead in Banting, Malaysia. 
19th February 2011 - Many Blackbirds found dead in Ukraine. 16th February 2011 - 5 Million dead fish in Mara River, Kenya. 16th February 2011 - Thousands of fish and several dozen ducks dead in Ontario, Canada. 16th February 2011 - Mass fish death in Black Sea Region in Turkey. 11th February 2011 - 20,000 Bees died suddenly in a biodiversity exhibit in Ontario, Canada. 11th February 2011 - Hundreds of dead birds found in Lake Charles, Louisiana. 9th February 2011 - Thousands of dead fish wash ashore in Florida. 8th February 2011 - Hundreds of Sparrows fall dead in Rotorua, New Zealand. 5th February 2011 - 14 Whales die after being beached in New Zealand. 4th February 2011 - Thousands of various fish float dead in Amazon River and in Florida. 2nd February 2011 - Hundreds of Pigeons dying in Geneva, Switzerland. 31st January 2011 - Hundreds of thousands of Horse Mussel Shells wash up dead on beaches in Waiheke Island, New Zealand. 27th January 2011 - 200 Pelicans wash up dead on Topsail Beach in North Carolina. 27th January 2011 - 2000 Fish dead in Bogota, Colombia. 23rd January 2011 - Hundreds of dead fish in Dublin, Ireland. 22nd January 2011 - Thousands of dead Herring wash ashore in Vancouver Island, Canada. 21st January 2011 - Thousands of fish dead in Detroit River, Michigan. 20th January 2011 - 55 dead Buffalo in Cayuga County, New York. 18th January 2011 - Thousands of Octopus wash up in Vila Nova de Gaia, Portugal. 17th January 2011 - 10,000 Buffalos and Cows died in Vietnam. 17th January 2011 - Hundreds of dead seals washing up on shore in Labrador, Canada. 15th January 2011 - 200 dead Cows found in Portage County, Wisconsin. 14th January 2011 - Massive fish death in Baku, Azerbaijan. 14th January 2011 - 300 Blackbirds found dead on highway I-65 south of Athens in Alabama. 7th January 2011 - 8,000 Turtle Doves rain down dead in Faenza, Italy. 6th January 2011 - Hundreds of Grackles, Sparrows & Pigeons were found dead in Upshur County, Texas.
5th January 2011 - Hundreds of Dead Snapper with no eyes washed up on Coromandel beaches in New Zealand. 5th January 2011 - 40,000+ crabs wash up dead in Kent, England. 4th January 2011 - 100 Tons of Sardines, Croaker & Catfish wash up dead on the Parana region shores in Brazil. 4th January 2011 - 3,000+ dead Blackbirds found in Louisville, Kentucky. 4th January 2011 - 500 Dead Red-winged Blackbirds & Starlings in Louisiana. 4th January 2011 - Thousands of dead fish consisting of Mullet, Ladyfish, Catfish & Snook in Volusia County, Florida. 3rd January 2011 - 2,000,000 (2 Million) dead fish consisting of Menhaden, Spots & Croakers wash up in Chesapeake Bay, Maryland & Virginia. 1st January 2011 - 200,000+ dead fish wash up on the shores of Arkansas River, Arkansas. 1st January 2011 - 5,000+ Red-winged Blackbirds & Starlings fall out of the sky dead in Beebe, Arkansas. 20th December 2010 (est. date) - Thousands of Crows, Pigeons, Wattlebirds & Honeyeaters fell out of the sky in Esperance, Western Australia. 2nd November 2010 - Thousands of sea birds found dead in Tasmania, Australia.
<urn:uuid:6b7427b8-34a2-4e75-b0da-eca0f34b3001>
2.921875
1,809
Personal Blog
Science & Tech.
73.702063
When we last checked in on the Nansen Sea Ice Graphs, it looked like they were heading towards the "normal" line in a hurry. Ice area seems to still be on that trend, while the growth rate of extent seems to be leveling off. Area appears to be within about 200,000 square kilometers of the 1979-2007 monthly average and still climbing. Of course, the fact that the 2007 data is included in the average line means the average is a lower target than one might otherwise expect. If we compare to ice area over at Cryosphere Today, which uses a 1979-2000 mean, that baseline is higher. Still, the rebound we are seeing is impressive. Sea ice extent looks like this: These graphs will automatically update, so check back often. For those of you wondering, here is the difference between area and extent, as described on the NSIDC FAQ page: What is the difference between sea ice area and extent? Why does NSIDC use extent measurements? Area and extent are different measures and give scientists slightly different information. Some organizations, including Cryosphere Today, report ice area; NSIDC primarily reports ice extent. Extent is always a larger number than area, and there are pros and cons associated with each method. A simplified way to think of extent versus area is to imagine a slice of Swiss cheese. Extent would be a measure of the edges of the slice of cheese and all of the space inside it. Area would be the measure of where there's cheese only, not including the holes. That's why if you compare extent and area in the same time period, extent is always bigger. A more precise explanation of extent versus area gets more complicated. Extent defines a region as "ice-covered" or "not ice-covered." For each satellite data cell, the cell is said to either have ice or to have no ice, based on a threshold.
The most common threshold (and the one NSIDC uses) is 15 percent, meaning that if the data cell has greater than 15 percent ice concentration, the cell is considered ice covered; less than that and it is said to be ice free. Example: Let’s say you have three 25 kilometer (km) x 25 km (16 miles x 16 miles) grid cells covered by 16% ice, 2% ice, and 90% ice. Two of the three cells would be considered “ice covered,” or 100% ice. Multiply the grid cell area by 100% sea ice and you would get a total extent of 1,250 square km (482 square miles). Area takes the percentages of sea ice within data cells and adds them up to report how much of the Arctic is covered by ice; area typically uses a threshold of 15%. So in the same example, with three 25 km x 25 km (16 miles x 16 miles) grid cells of 16% ice, 2% ice, and 90% ice, multiply the grid cell area by the percent of sea ice and add it up. You’d have a total area of 675 square km (261 square miles).
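The FAQ's worked example can be reproduced in a few lines of Python. This is my own sketch, not NSIDC code; note that to match the FAQ's 675 square km figure, the area calculation below sums over all three cells, including the 2% one.

```python
def extent_and_area(cell_km, concentrations, threshold=0.15):
    """Compute sea-ice extent and area from per-cell ice concentrations.

    Extent counts the full area of every cell above the threshold;
    area sums (concentration * cell area) over the cells."""
    cell_area = cell_km ** 2  # square cells, area in square km
    extent = sum(cell_area for c in concentrations if c > threshold)
    area = sum(c * cell_area for c in concentrations)
    return extent, area

# The FAQ's example: three 25 km x 25 km cells at 16%, 2%, and 90% ice.
extent, area = extent_and_area(25, [0.16, 0.02, 0.90])
```

Here extent comes out to 1,250 square km (the 16% and 90% cells count as fully ice covered, the 2% cell not at all) and area to 675 square km, matching the FAQ's numbers and showing why extent is always the larger figure.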
<urn:uuid:c47cbe87-f48d-48de-a45a-14b4061cd61b>
3.421875
659
Knowledge Article
Science & Tech.
60.164292
The Acoustic Search In the field ARU mounted on a tree (left) with its battery (right). Photo by Chris Tessaglia-Hymes To search for acoustic evidence for Ivory-billed Woodpeckers in Arkansas and other states within the historical range, we record ambient sounds using autonomous recording units (ARUs). ARUs are programmable, battery-operated digital audio recorders developed by the Cornell Lab of Ornithology’s Bioacoustics Research Program. Each ARU contains a microprocessor, 12-bit analog-to-digital converter, an omnidirectional microphone, preamplifier and signal conditioning circuitry, and a hard disk for storing audio data. These components are packaged in a cylindrical PVC housing, and attached to tree trunks two to three meters above the ground or water surface. ARUs are typically deployed for periods of two to four weeks. ARU in Arkansas. Photo by Chris Tessaglia-Hymes ARUs are programmed to record for two four-hour periods each day, the first beginning 30 to 45 minutes before sunrise, the second ending 30 to 45 minutes after sunset. The range at which an ARU could detect sounds of an Ivory-billed Woodpecker is unknown, because there are no data available on the volume of kent calls or double knocks. We estimate, however, that these signals would be detectable by ARUs up to distances of approximately 200 meters. We select recording sites based on habitat quality, locations of previous Ivory-billed Woodpecker sighting reports, and presence of possible ivory-bill roost/nest cavities and feeding signs. Reviewing and analyzing the sounds Since the start of large-scale acoustic search efforts in 2004, our protocols for reviewing and evaluating ARU recordings have evolved in order to provide more consistent and informative evaluations of ivory-bill-like sounds. Our current protocol is summarized here. To find sounds similar to those of Ivory-billed Woodpeckers in the ARU recordings, we use a multi-step process: 1. 
Automated screening by computer: The digital recordings are scanned by software that detects sounds similar to known vocalizations of Ivory-billed Woodpeckers (from the 1935 Allen-Kellogg recording), and to double-knocks from other Campephilus woodpeckers. 2. Initial human screening: An acoustic analyst reviews all of the computer's detections. Most of the sounds flagged by the computer are easily discarded at this stage as not being similar enough to ivory-bill sounds to warrant further attention. The computer flags many "false alarm" events because we adjust the software to be very sensitive, reducing the chance that a real ivory-bill call might be missed. Sounds that pass this stage are forwarded to the next stage of review. 3. Expert panel review: A panel of three or more experts (outside of the acoustic analysis team) reviews all of the sounds that pass stage two. The expert panel categorizes each sound as "implausible" or "plausible." "Plausible" events are further categorized depending on whether a potential alternate source is identified, and, if so, whether that alternate source is positively identified elsewhere in the deployment. Sounds categorized as "implausible" are either positively identified as an alternate source, or are deemed to be too different from an ivory-bill. Plausible categories are: - P1: Plausible Ivory-billed Woodpecker, no likely alternative known - P2: Plausible Ivory-billed Woodpecker, alternate possibility identified but not present in recording - P3: Plausible Ivory-billed Woodpecker, alternate possibility identified and present - P4: Insufficient signal for full analysis "Plausible" sounds are scored on various criteria, receiving a point for each positive response to one of several questions. A higher score indicates a greater likelihood that the sound originated from an Ivory-billed Woodpecker. Scoring criteria for vocalizations: 1. Is the harmonic interval between 580 and 780 Hz? 2. Is harmonic emphasis appropriate? 3.
Is the event part of a biologically appropriate series? 4. Is there a temporal context or co-occurrence with other events of interest on the same day? 5. Is there a clear temporal context or co-occurrence with other events of interest across days? Scoring criteria for double-knocks: 1. Is the inter-knock interval between 60 and 120 milliseconds? 2. Is the sound resonant and woody? 3. Is there an absence of confounding woodpeckers? 4. Is the event part of a biologically appropriate series? 5. Is there a clear temporal context or co-occurrence with other events of interest on the same day? 6. Is there a clear temporal context or co-occurrence with other events of interest across days? At every stage of the review process, researchers compare suspect sounds not only with those of ivory-bills and other Campephilus woodpeckers, but also with a variety of similar sounds from other species, and carefully consider the surrounding context. What have we discovered so far? Here we present some examples of "plausible" sounds collected in the Big Woods of Arkansas. Note: this website is not intended to be a complete and final analysis of our acoustic monitoring and research. Rather, we aim to provide a sampling of sounds that we believe are suggestive of the ivory-bill and a number of "sound-alikes" that we hope will help inform other searchers about what to listen for. We are presently working on a peer-reviewed publication that will explain our findings in detail.
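The point-scoring scheme above (one point per affirmative answer) is simple enough to sketch in code. The two functions below are my own paraphrase of the listed criteria, not the search team's actual software; each subjective criterion is supplied as a boolean judgment, and the acoustic measurements themselves are assumed to be made elsewhere.

```python
def score_vocalization(harmonic_interval_hz, emphasis_ok, in_series,
                       same_day_context, cross_day_context):
    """One point per criterion met; a higher score suggests a greater
    likelihood the sound came from an Ivory-billed Woodpecker."""
    points = int(580 <= harmonic_interval_hz <= 780)  # criterion 1
    points += sum(map(int, (emphasis_ok, in_series,
                            same_day_context, cross_day_context)))
    return points

def score_double_knock(inter_knock_ms, resonant_woody, no_confounders,
                       in_series, same_day_context, cross_day_context):
    """Same scheme for double-knocks, with the 60-120 ms interval test."""
    points = int(60 <= inter_knock_ms <= 120)  # criterion 1
    points += sum(map(int, (resonant_woody, no_confounders, in_series,
                            same_day_context, cross_day_context)))
    return points
```

For example, a double-knock with a 90 ms inter-knock interval that sounds resonant, has no confounding woodpeckers nearby, and falls in an appropriate series, but lacks any temporal context, would score 4 of a possible 6 points.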
<urn:uuid:77b0bbf8-c845-462a-bae9-2a7692b39098>
3.21875
1,193
Knowledge Article
Science & Tech.
34.158975
How the universe is erasing evidence of its beginnings and moving faster toward its end

In 1917, on the third floor of an apartment building in the Wilmersdorf borough of wartime Berlin, an ailing tenant named Albert Einstein sat focused on a lofty subject: the universe. In February of that year, he published a paper that effectively launched the modern field of cosmology. In it, he suggested that the fabric of space and time contains an innate tension, an energy that seethes beneath the surface of every inch of the universe. This "cosmological constant" was the force that held gravity in check and kept the universe from collapsing on itself, he said. In other words, the universe was in a holding pattern. A dozen years later, however, the astronomer Edwin Hubble discovered that the universe was not standing still, as Einstein had suggested. Hubble found that the universe was instead expanding—forever moving outward—and didn't need anything to keep itself from collapsing. Hubble's discovery led Einstein to repudiate his own claim of a cosmological constant and to write the incident off as the "worst blunder" of his career. In the years that followed, Einstein's concept of a cosmological constant faded but never disappeared. Researchers continued to ask: If the universe is simply being carried out by its own momentum, does that necessarily mean that nothing, no tension is filling the vacuum of the universe? It turns out Einstein's conclusions might have been less farfetched than he thought. Cosmologists continued to research this theory, and what they discovered is shedding light on the future of the universe—while simultaneously erasing traces of its past.

A New Constant is Discovered

In 1995, physicists Lawrence Krauss, Ph.D., then at Case Western Reserve University, and Michael Turner, Ph.D., of Fermilab in Illinois, argued in the journal General Relativity and Gravitation that the universe does, in fact, have a cosmological constant.
It is a force that not only propels the expansion of the universe, but does so at ever-faster speeds, constantly accelerating, they said. The scientists pieced together data, including X-ray telescope observations of faraway galaxies and Hubble Space Telescope distance measurements to nearby ones. They concluded that something seems to be pushing the expansion of the universe ever faster. That force is dark energy, researchers say, and its existence means the universe will ultimately expand so far and so wide that the stars, planets and galaxies as we know them will disappear from view. Future astronomers will look skyward toward a barren universe that lacks any clues about its origins. “There will be ever-diminishing evidence that there was a Big Bang,” says Glenn Starkman, Ph.D., a Case Western Reserve physicist and director of the university’s Origins Initiative. That could mean the end of cosmology as we know it. “Cosmologists in general are trying to answer big questions,” Starkman says. “Most of the questions we’ve been trying to answer are about the past. But I think the big questions about the future are, in many ways, just as interesting.” Dark energy may indeed have a lot to say about the future, scientists are finding. In 1995, though, not everyone was on board with the concept of a new cosmological constant. “The concept turned out to be right, and that was a very remarkable thing,” says Will Kinney, Ph.D., a physicist at the State University of New York at Buffalo. At the time, Kinney says, “I don’t know that a lot of people took the cosmological constant seriously.” That changed in 1998, when an international coalition of astronomers released a sheaf of data in both the Astronomical Journal and the Astrophysical Journal that they said proved the universe is expanding at an increasingly rapid rate. 
Measuring the brightness of 102 exploding stars, or supernovae, in distant galaxies, the scientists found that these supernovae were often dimmer than expected. The findings fit a pattern that could only be explained by a universe whose expansion was accelerating over time. The cosmic self-pressure that the scientists observed—dark energy—has since been confirmed by independent observations, including careful measurements by high-tech instruments such as NASA's Wilkinson Microwave Anisotropy Probe, which launched in 2001.

Shedding Light on Dark Energy

No one knows for certain what dark energy is or what generates it, but one thing is clear: It is pressuring space to expand. That makes dark energy stand apart from everything else in the universe because every other form of matter or energy gravitationally tugs on other matter. Dark energy's peculiar feature is that it seems to fill any void or vacuum, including those created by the universe's expansion. Even a patch of empty space that has been purged of all known forms of matter and energy still contains dark energy, Starkman says. "So if you have twice as much vacuum as you had before, then you have twice as much of that energy," he says. "That's really peculiar. If you take a box and stretch it, you get something for free. That's the property that accounts for the ability of the vacuum to expand at an accelerating rate. The more you expand it, the more of the [dark energy] you have, and the more that it pushes." If dark energy seems confusing, that's because it is, Starkman says. The greatest minds in physics are baffled. Dark energy is one of the most perplexing unsolved mysteries in science today, and scientists' best guess for what lies at the heart of dark energy and the cosmological constant lies in quantum physics, Starkman says. Quantum theory predicts that empty space will wiggle with low-level vibrations, even when all the energy in that space is depleted.
It says that the simplest kind of motion conceivable, subatomic particles moving back and forth like miniature springs, will be present even when no other energy is present; the particles never stop moving. Imagine a universe filled with simple quantum particles. Now rob the universe of every ounce of energy it contains. What quantum theory says is that, powered by nothing whatsoever, the universe will still vibrate with what is sometimes called "vacuum energy" or "zero-point energy." Quantum vacuum energy is "the simplest explanation for the origin of [dark] energy," Starkman says. But the explanation remains murky. Starkman holds out hope that in Geneva, Switzerland, the CERN laboratory's Large Hadron Collider, the world's most powerful particle accelerator, may uncover precious clues about dark energy. The accelerator, which began operating in September, will allow scientists to analyze high-energy beam collisions and possibly reveal a new world of unknown particles. The experiments could ultimately explain why those particles exist and behave as they do. They could reveal the origins of mass, shed light on dark matter, uncover hidden symmetries of the universe, and possibly find extra dimensions of space. In the meantime, the observed existence of dark energy—whatever its origins—is producing real consequences for the universe's future.

The Universe's Beginning and End

In 1999, Starkman co-authored a paper with fellow Case Western Reserve physicist Tanmay Vachaspati, Ph.D., and Mark Trodden, Ph.D., of Syracuse University. The research, which appeared in Astrophysical Journal, linked cosmic acceleration to a decidedly bleak future. The universe had entered an extended period of rapid growth, they said, and, eventually, the objects in it would move away so rapidly from our world that they would fall away from view. The evidence came from observations of supernovae, which, measurements showed, were not only moving away, but moving away at ever-faster speeds.
Traditional Big Bang theory runs counter to this notion. It predicts that cosmic expansion will slow or even halt over time. Think of a fireworks explosion: an initial blast, streamers shooting out from the core at great speed, then a gradual slowing until the lights of the fireworks collapse and fade. If the universe's expansion continues to speed up, not slow down, then light from distant galaxies will fade for a different reason: It eventually will be unable to keep up. "We realized that things were going to start disappearing," Starkman says. "The longer you wait, the less you'll see." However, he adds, it will take scores of billions of years to lose sight of the universe's landscape as we know it. Today, the universe is just a teenager, a spry 14 billion years young. The cosmic end-state comes when the universe nears 100 billion years old. As that faraway birthday approaches, cosmic expansion will have created vast stretches of void between galaxies. Today's visible universe, with its hundreds of billions of galaxies stretching far into the great beyond, will have sunk below the Earth's horizon. Our sun and solar system will be long gone, having fizzled somewhere near the 19 billion-year mark. If civilizations exist in other galaxies at such a late date, their conclusions about the universe will be incomplete. Light from neighboring galaxies will be unable to reach them because the expansion of space will have quickened beyond the lowly photon's ability to keep up. Cosmology, particularly the study of the universe's origins, will by then have reached an end. The science launched by Einstein's notion of a cosmological constant will be destroyed by that very same constant. But scientists are not only considering questions of the past; they are also considering future prospects for life in the universe.
In 1979, physicist Freeman Dyson, Ph.D., of the Institute for Advanced Study at Princeton University published a paper in the journal Reviews of Modern Physics that argued life could survive indefinitely in a universe that also expanded indefinitely. In Dyson’s view, biology could ultimately win the battle with a hostile universe. Of course, appearing 19 years before the discovery of accelerating cosmic expansion, Dyson’s paper did not consider dark energy or a cosmological constant. In 2004, Starkman co-wrote another paper with Lawrence Krauss that delivered the bad news: Life is eventually doomed. Einstein’s greatest blunder ultimately, after hundreds of billions of years, wrenches the universe apart. And with it goes the prospect for biology. “The universe is going to have a long, slow end,” Starkman says. “It will first begin with ignorance. And if we are right, it will end with death.” Kinney, of the University at Buffalo, expands on that argument. In a paper written with physicist Katherine Freese, Ph.D., of the University of Michigan, Kinney points out that no one knows for certain whether the cosmological constant is, in fact, constant. It could be that the acceleration of the universe’s expansion will change over time. In some scenarios, in which the amount of dark energy exponentially diminishes over time, they find that doom and gloom may not prevail. Under such circumstances, the universe and biological processes in it could, theoretically at least, continue far into the future. The question is, how far into the future? “We all agree that life can last longer if the cosmological constant isn’t constant,” Starkman says. “What we’re arguing over here is how long. The evidence doesn’t seem to suggest that it will last forever. But maybe the certainty of our continued existence isn’t the most important thing—maybe it’s the understanding that we gain while we’re here.”
<urn:uuid:71948182-97de-405f-ad9e-6111c2afa6e1>
3.765625
2,494
Knowledge Article
Science & Tech.
43.823468
One method for verification of correctness is to compare algorithm implementations to STL sort for assurance of equivalent results, but that assumes STL sort is correct. To avoid relying on the correctness of STL sort requires implementing a correctness test for sorting algorithms. Correctness requires that array[i] ≤ array[i+1] for all elements of the array, which is simple to check. Of course, comparison to results from STL sort is a useful redundant verification. These two tests were used for all implemented routines, including Intel's IPP library routines. Boundary cases of input arrays of size 0 and 1 were also tested. The performance comparison setup was as follows:
- Visual Studio 2008; the project optimization setting is set to optimize neither speed nor size, and to inline any suitable function.
- Intel Core 2 Duo CPU E8400 at 3 GHz (64 Kbytes L1 and 6 Mbytes L2 cache).
- 14-stage pipeline with 1,333 MHz front-side bus.
- 2 GB of system memory (dual-channel, 64 bits per channel, 800 MHz DDR2).
- Motherboard: DQ35JOE.
Random numbers were generated using the following method for each element in the array, composing three 15-bit values to fill a 32-bit result:
// each call to rand() produces a 15-bit random number
unsigned long tmp = ((unsigned long)rand()) << 30 |
                    ((unsigned long)rand()) << 15 |
                    ((unsigned long)rand());
The arrays were all checked for the percentage of unique values, which was above 95% for arrays filled with 32-bit unsigned values. The min and max were also checked for each array; these were between 0 and near the maximum value for 32-bit unsigned numbers. Performance was measured by always processing 100 million elements: when 10-element arrays were being measured, 10 million of them were allocated; when 100-element arrays were being measured, 1 million of them were allocated; and so on. A different random-number generator seed was used for each array, but the same seeds were used across all algorithms. A time stamp was taken before sorting the arrays and again after.
The average value across all arrays is the value reported.
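The two correctness checks described above (the sortedness condition array[i] ≤ array[i+1], plus agreement with a trusted reference sort, including the size 0 and 1 boundary cases) are easy to express. Here is a sketch in Python rather than the article's C++, using the built-in sort as the reference:

```python
import random

def is_sorted(a):
    """The basic correctness condition: a[i] <= a[i+1] for all i.
    Trivially true for the boundary cases of 0 or 1 elements."""
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def check_sort(sort_fn, trials=100, max_len=50, seed=42):
    """Compare sort_fn against a trusted reference sort on random
    32-bit values, always including empty and single-element arrays."""
    rng = random.Random(seed)
    lengths = [0, 1] + [rng.randrange(max_len) for _ in range(trials)]
    for length in lengths:
        data = [rng.getrandbits(32) for _ in range(length)]
        out = sort_fn(list(data))
        if not is_sorted(out) or out != sorted(data):
            return False
    return True
```

As in the article's setup, a fixed seed makes every run use the same inputs, so the same test data can be replayed across all algorithms under test.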
<urn:uuid:65f4c07c-fb2f-4325-beb3-21abc9c0b1bb>
3.171875
445
Documentation
Software Dev.
53.377581
For more information on the Concurrency Runtime Framework, see Concurrency Runtime: The Resource Manager. Visual C++ 2010 comes with new features and enhancements that simplify native programming. The Concurrency Runtime (CRT), for instance, is a framework that simplifies parallel programming and helps you write robust, scalable, and responsive parallel applications. The CRT raises the level of abstraction so that you do not have to manage the infrastructure details that are related to concurrency. The Concurrency Runtime also enables you to specify scheduling policies that meet the quality-of-service demands of your applications. Figure 1 presents the architecture of the Concurrency Runtime Framework. In this article, I discuss the Task Scheduler layer and examine how it works internally. To do so, I use CppDepend, an analysis tool that makes it easier for you to manage a complex C/C++ (native, mixed, and COM) code base. The Task Scheduler The Task Scheduler schedules and coordinates tasks at runtime. A task is a unit of work that performs a specific job. The Task Scheduler manages the details that are related to efficiently scheduling tasks on computers that have multiple computing resources. Windows provides a preemptive kernel-mode scheduler -- a round-robin, priority-based mechanism that gives every task exclusive access to a computing resource for a given time period, then switches to another task. Although this mechanism provides "fairness" (every thread makes forward progress), it comes at some cost of efficiency. For example, many compute-intensive algorithms do not require fairness. Instead, it is important that related tasks finish in the least overall time. Cooperative scheduling enables an application to more efficiently schedule work. Cooperative scheduling is a mechanism that gives every task exclusive access to a computing resource until the task finishes or until the task yields its access to the resource.
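The cooperative model just described, where a task keeps a resource until it finishes or voluntarily yields, can be illustrated in miniature with generators. This is a language-neutral concept sketch in Python, not CRT code; the worker tasks here are hypothetical:

```python
from collections import deque

def cooperative_scheduler(tasks):
    """Run generator-based tasks round-robin: each task keeps the
    (single) computing resource until it yields or finishes."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))  # task runs until it yields
            ready.append(task)        # it yielded: back of the queue
        except StopIteration:
            pass                      # task finished; drop it
    return trace

def worker(name, steps):
    """A hypothetical task that does `steps` units of work,
    yielding control after each one."""
    for i in range(steps):
        yield f"{name}:{i}"

trace = cooperative_scheduler([worker("A", 2), worker("B", 1)])
```

Here the scheduling decisions are made entirely in user code: task A runs and yields, B runs and finishes after its single step, and A completes, giving the trace ["A:0", "B:0", "A:1"] with no preemption involved.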
The user-mode cooperative scheduler enables application code to make its own scheduling decisions. Because cooperative scheduling lets many scheduling decisions be made by the application, it removes much of the overhead that is associated with kernel-mode synchronization. The Concurrency Runtime uses cooperative scheduling together with the preemptive scheduler of the operating system to achieve maximum usage of processing resources. In this article, I examine the Task Scheduler design and lift its hood to see how it works internally. For information on the CRT Resource Manager, see Concurrency Runtime: The Resource Manager. Again, I use CppDepend to analyze the CRT source code.

The CRT provides the interface Scheduler so that a specific scheduler can be implemented to suit application needs. Let's examine the classes that implement this interface: the CRT provides two implementations of the scheduler -- ThreadScheduler and UMSThreadScheduler. As illustrated in the dependency graph in Figure 2, SchedulerBase contains all the behavior common to these two classes. Is the Scheduler flexible? A good indicator of flexibility is to search for all abstract classes used by the Scheduler. As shown in the dependency graph in Figure 3, the Scheduler uses many abstract classes. This enforces low coupling and makes the scheduler more flexible, so adapting it to other needs is easy. To explain the role of each abstract class used by the Scheduler, I'll discuss its responsibilities. There are three major responsibilities assigned to the Task Scheduler: Getting resources (processors, cores, memory). When the scheduler is created, it asks for resources from the runtime Resource Manager (as explained in Concurrency Runtime: The Resource Manager). The Scheduler communicates with the Resource Manager using the IScheduler interface. The Resource Manager uses the scheduler policy to decide which resources to allocate to the Scheduler. The policy, as shown in Figure 4, is assigned when the Scheduler is created.
The CRT creates a default Scheduler, with a default policy, if no Scheduler exists when the GetDefaultScheduler method is invoked. The Task Scheduler enables applications to use one or more Scheduler instances to schedule work, and an application can invoke Scheduler::Create to add another Scheduler that uses a specific policy. The concurrency::PolicyElementKey enumeration defines the policy keys that are associated with the Task Scheduler. For more information on policy keys, see this article. The following collaborations between the Scheduler and the Resource Manager show the role of each interface involved in the allocation. Ask for resource allocation: Getting resources from Resource Manager:
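The policy mechanism can be sketched conceptually. The Python below is not the CRT API; it only mimics the idea behind Scheduler::Create taking a policy whose unspecified keys fall back to defaults (MinConcurrency and MaxConcurrency are real PolicyElementKey values; the class and method names here are invented for illustration).

```python
# Conceptual sketch of policy-driven scheduler creation, mirroring the idea
# behind Scheduler::Create and PolicyElementKey. Not the actual CRT API.
DEFAULT_POLICY = {"MinConcurrency": 1, "MaxConcurrency": 4}

class SketchScheduler:
    def __init__(self, policy=None):
        # Unspecified keys fall back to the defaults, much as the CRT
        # supplies a default policy for the default scheduler.
        self.policy = {**DEFAULT_POLICY, **(policy or {})}

    def concurrency_bounds(self):
        return (self.policy["MinConcurrency"], self.policy["MaxConcurrency"])

s = SketchScheduler({"MaxConcurrency": 8})  # override one key only
print(s.concurrency_bounds())
```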
What are gamma rays? A gamma ray is a packet of electromagnetic energy -- a photon. Gamma photons are the most energetic photons in the electromagnetic spectrum. Gamma rays (gamma photons) are emitted from the nucleus of some unstable (radioactive) atoms.

What are the properties of gamma radiation? Gamma radiation is very high-energy radiation. Gamma photons have about 10,000 times as much energy as the photons in the visible range of the electromagnetic spectrum. Gamma photons have no mass and no electrical charge -- they are pure energy. Because of their high energy, gamma photons travel at the speed of light and can cover hundreds to thousands of meters in air before spending their energy. They can pass through many kinds of materials, including human tissue. Very dense materials, such as lead, are commonly used as shielding to slow or stop gamma photons. Their wavelengths are so short that they must be measured in nanometers, billionths of a meter. They range from 3/100ths to 3/1,000ths of a nanometer.

What is the difference between gamma rays and x-rays? Gamma rays and x-rays, like visible, infrared, and ultraviolet light, are part of the electromagnetic spectrum. While gamma rays and x-rays pose the same hazard, they differ in their origin. Gamma rays originate in the nucleus. X-rays originate in the electron fields surrounding the nucleus.

What conditions lead to gamma ray emission? Gamma emission occurs when the nucleus of a radioactive atom has too much energy. It often follows the emission of a beta particle.

What happens during gamma decay? Cesium-137 provides an example of radioactive decay by gamma radiation. A neutron transforms to a proton and a beta particle. The additional proton changes the atom to barium-137. The nucleus ejects the beta particle. However, the nucleus still has too much energy and ejects a gamma photon (gamma radiation) to become more stable.

How does gamma radiation change in the environment? Gamma rays exist only as long as they have energy.
Once their energy is spent, whether in air or in solid materials, they cease to exist. The same is true for x-rays.

How are people exposed to gamma radiation? Most people's primary source of gamma exposure is naturally occurring radionuclides, particularly potassium-40, which is found in soil and water, as well as meats and high-potassium foods such as bananas. Radium is also a source of gamma exposure. However, the increasing use of nuclear medicine (e.g., bone, thyroid, and lung scans) contributes an increasing proportion of the total for many people. Also, some man-made radionuclides that have been released to the environment emit gamma rays. Most exposure to gamma and x-rays is direct external exposure. Most gamma and x-rays can easily travel several meters through air and penetrate several centimeters in tissue. Some have enough energy to pass through the body, exposing all organs. X-ray exposure of the public almost always occurs in the controlled environment of dental and medical facilities. Although they are generally classified as an external hazard, gamma-emitting radionuclides do not have to enter the body to be a hazard. Gamma emitters can also be inhaled, or ingested with water or food, and cause exposures to organs inside the body. Depending on the radionuclide, they may be retained in tissue, or cleared via the urine or feces.

Does the way a person is exposed to gamma or x-rays matter? Both direct (external) and internal exposure to gamma rays or x-rays are of concern. Gamma rays can travel much farther than alpha or beta particles and have enough energy to pass entirely through the body, potentially exposing all organs. A large portion of gamma radiation passes through the body without interacting with tissue -- the body is mostly empty space at the atomic level and gamma rays are vanishingly small in size. By contrast, alpha and beta particles inside the body lose all their energy by colliding with tissue and causing damage.
X-rays behave in a similar way, but have slightly lower energy. Gamma rays do not directly ionize atoms in tissue. Instead, they transfer energy to atomic particles such as electrons (which are essentially the same as beta particles). These energized particles then interact with tissue to form ions, in the same way radionuclide-emitted alpha and beta particles would. However, because gamma rays have more penetrating energy than alpha and beta particles, the indirect ionizations they cause generally occur farther into tissue (that is, farther from the source of radiation).
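The wavelength and energy figures above can be cross-checked with the photon energy relation E = hc/λ. A quick back-of-the-envelope calculation (constants rounded):

```python
# Photon energy E = h*c / wavelength, to sanity-check the figures above:
# gamma wavelengths of 3/100ths to 3/1,000ths of a nanometer versus
# roughly 550 nm for visible light.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    return H * C / wavelength_m / EV

gamma_long  = photon_energy_ev(0.03e-9)   # ~41,000 eV
gamma_short = photon_energy_ev(0.003e-9)  # ~410,000 eV
visible     = photon_energy_ev(550e-9)    # ~2.3 eV

# Even the longer-wavelength gamma photon carries on the order of 10,000x
# the energy of a visible photon, consistent with the figure quoted above.
print(gamma_long / visible)
```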
"That's not what I meant": human communication is fraught with misinterpretation. Written out in longhand, words and letters can be misread. A telegraph clerk can mistake a dot for a dash. Noise will always be with us, but at least a new JQI (*) device has established a new standard for reading quantum information with a minimum of uncertainty. Success has come by viewing light pulses not with a single passive detector with but an adaptive network of detectors with feedback. The work on JQI's new, more assured photonic protocol was led by Francisco Becerra and carried out in Alan Migdall's JQI lab. They report their results in Nature Photonics (**). Here are some things you need to know to appreciate this development. HOW TO MODULATE? Digital data, in its simplest form, can be read with a process called on-off keying: a detector senses the intensity of incoming bursts of electrons in wires or photons through fibers and assigns a value of 0 or 1. A more sophisticated approach to modulating a signal (not merely off/on) is to encode data in the phase of the pulse. In "phase-shift keying," information is encoded in the amount of phase shift imposed on a carrier wave; the phase of the wave is how far along the wave cycle you happen to be (say, at the top of a crest or the bottom of a trough in a sinusoidal, as in this figure). WHAT KIND OF ALPHABET? Larger words can be assembled from a small suite of symbols. The Roman alphabet has 26 letters, the Greek only 24. Binary logic, and most transistors, makes do with just a two-letter alphabet. Everything is a 0 or a 1, and larger numbers and letters and words are assembled from as many binary bits as are necessary. But what if we enlarged the alphabet from two to four? In quaternary logic more data can be conveyed in a single pulse. The cost of this increase is having to write and read 4 states of modulation (or 4 symbols). 
Even more efficient in terms of packing data, but correspondingly more difficult to implement, is logic based on 6 states, or 8, or any higher number. Digital data at its most basic---at the level of the transistor---remains in binary form, but for communicating this data, higher-number alphabets can be used. In fact, high-definition television delivery already involves high-level logic. No matter what kind of logic is used, errors creep in. A detector doesn't just unequivocally measure a 0 or a 1. The reading process is imperfect. And even worse, the state of the light pulse is inherently uncertain, and that is a real problem when the light pulses belong to a set of overlapping states. This is illustrated in the figure below for binary and quaternary phase states. On the left side of the figure, the measurement of the phase of a light pulse is depicted, where there are only two choices. Is the pulse in the alpha state or the –alpha state? Because the tails of one overlap the other, there is a slight ambiguity that leads to uncertainty about which state a measurement indicates. On the right, four possible states are depicted on a complex-number graph (with real (Re) and imaginary (Im) axes). Here the overlap of the states is more complicated, but results in similar ambiguities of the measured states, seen mostly near the borders (decision threshold lines) between the states. STANDARD QUANTUM LIMIT Decades ago communications theory established a minimal uncertainty for the accurate transmission and detection of information encoded in overlapping states. The hypothetical minimal detection error using conventional schemes is called the standard quantum limit and it depends on things like how many photons of light comprise the signal, how many levels (binary, quaternary, etc.) need to be read out, and which physical property of light is used to encode the information, such as the phase. But starting in the 1970s with physicist Carl W.
Helstrom, some scientists have felt that the standard quantum limit could be circumvented. The JQI researchers do exactly this by using not a single passive photo-detector, but an active detection process involving a series of stages. At each stage, the current light signal strikes a partially-silvered mirror, which peels off a fraction of the pulse for analysis; the rest goes on to subsequent stages. At each stage the signal is combined with a separate reference oscillator wave used as a phase reference against which the signal phase is determined. This is done by shifting the reference wave by a known amount and letting it interfere with the signal wave at the beamsplitter. By altering that known shift, the interference pattern can reveal something about the phase of the input pulse. By combining many such stages (see the figure below) and using information gained by previous stages to adjust the phase of the reference wave in successive stages, a better estimate of the signal phase can be obtained. By detecting phase in this adaptive, feedback-driven way, the JQI system is able to beat the standard quantum limit for a set of 4 states (quaternary) encoding information as a phase. These states are represented as fuzzy distributions arranged at different angles around a circle, as seen in the figure above, where the angles represent the phase of the light pulses. The JQI noise-reduction achievement is depicted in the graph below. The error rate is plotted as a function of the mean number of photons used to deliver the information. The standard quantum limit (SQL) is the red line. The light gray line is the SQL if you take into account that the individual detection stages used were ~72% efficient rather than 100% (the single-photon detectors themselves were 84% efficient; in the business of detecting single photons, 84% is top of the line).
The error probabilities measured for the system (black points with error bars) fall well below the quantum limit, by about 6 decibels in the center of the curve. This is equivalent to saying that the JQI receiver is performing better than the SQL by a factor of about 4 in determining the phase of an incoming signal. That is, the JQI receiver achieves an error probability that is 4 times lower than the so-called "Standard Quantum Limit." This graph shows results for a system that implements 10 adaptive measurements. The two other lines on the chart show what the expected uncertainty would be for a perfect system (100% efficient detectors) and without any of the imperfections that would be encountered in any realistic implementation, and a hypothetical ultimate-limit on uncertainty derived by Helstrom. To conclude, the JQI photon receiver features an error rate four times lower than perfect conventional receivers, over a wide range of photon number, and with discrimination for four states. The only previous detection below the quantum limit was for a very narrow range of photons and with only a 2-state protocol and only slightly below the SQL. (*)The Joint Quantum Institute (JQI) is operated jointly by the National Institute of Standards and Technology in Gaithersburg, MD and the University of Maryland in College Park. (**) "Experimental demonstration of a receiver beating the standard quantum limit for multiple nonorthogonal state discrimination," by F. E. Becerra, J. Fan, G. Baumgartner, J. Goldhar, J. T. Kosloski, and A. Migdall, Nature Photonics, published online 6 January 2013. Alan Migdall, firstname.lastname@example.org, 301-975-2331 Press contact at JQI: Phillip F. Schewe, email@example.com, 301-405-0989. http://jqi.umd.edu/ Phillip F. Schewe | Source: EurekAlert! 
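The gap between the "standard quantum limit" and the ultimate Helstrom bound can be made concrete for the simpler two-state case. The expressions below are the standard textbook forms for discriminating two coherent states ±α with mean photon number n = |α|² (ideal homodyne detection versus the Helstrom minimum); they are not taken from the JQI paper, whose four-state case is more involved.

```python
# Textbook error probabilities for discriminating two coherent states
# +alpha and -alpha, with mean photon number n = |alpha|^2.
import math

def sql_error(n):
    """Error rate of ideal homodyne detection (conventional scheme)."""
    return 0.5 * math.erfc(math.sqrt(2 * n))

def helstrom_error(n):
    """Quantum-mechanical minimum error (Helstrom bound)."""
    return 0.5 * (1 - math.sqrt(1 - math.exp(-4 * n)))

for n in (0.2, 0.5, 1.0):
    print(n, sql_error(n), helstrom_error(n))
```

At every photon number the Helstrom bound lies below the homodyne limit; that gap is what adaptive, feedback-based receivers such as the JQI one try to close.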
I am looking through a piece of code and I cannot figure out what this does. I have seen it in other code, but as I am just learning Java, I was hoping someone could tell me about it. Here is the code snippet: line = line.replaceAll("\t", " "); My question is: what does \t do in Java???
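\t is the escape sequence for the horizontal tab character (character code 9), so the line above replaces every tab in `line` with a single space. (Java's replaceAll treats its first argument as a regular expression, but a tab character matches itself literally.) The same escape sequence exists in most languages; for example, in Python:

```python
# "\t" is the escape sequence for the horizontal tab character (ASCII 9).
line = "name\tvalue"
cleaned = line.replace("\t", " ")  # replace every tab with a space
print(cleaned)                     # name value

# The escape is just a character with code 9:
print("\t" == chr(9))              # True
```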
Students can learn how geologists use stratigraphy, the study of layered rock, to understand the sequence of geological events. As students watch baking soda-vinegar "lava" flow from their clay volcanoes, they will see that the lava follows different paths. They will also learn how to distinguish between older and newer layered flows. Lava Layering Activity [82KB PDF file] This activity is part of the Exploring the Moon Educator Guide
Daniel Botkin, emeritus professor of ecology at UC Santa Barbara, argues in the Wall Street Journal (Oct 17, page A19) that global warming will not have much impact on life on Earth. We’ll summarize some of his points and then take our turn: Botkin: The warm climates in the past 2.5 million years did not lead to extinctions. Response: For the past 2.5 million years the climate has oscillated between interglacials which were (at most) a little warmer than today and glacials which were considerably colder than today. There is no precedent in the past 2.5 million years for so much warming so fast. The ecosystem has had 2.5 million years to adapt to glacial-interglacial swings, but we are asking it to adapt to a completely new climate in just a few centuries. The past is not a very good analog for the future in this case. And anyway, the human species can suffer quite a bit before we start talking extinction. Botkin: Tropical diseases are affected by other things besides temperature Response: I’m personally more worried about dust bowls than malaria in the temperate latitudes. Droughts don’t lead to too many extinctions either, but they can destroy civilizations. It is true that tropical diseases are affected by many things besides temperature, but temperature is important, and the coming warming is certainly not going to make the fight against malaria any easier. Botkin: Kilimanjaro again. Response: Been there, done that. The article Botkin cites is from American Scientist, an unreviewed pop science magazine, and it is mainly a rehash of old arguments that have been discussed and disposed of elsewhere. And anyway, the issue is a red-herring. Even if it turned out that for some bizarre reason the Kilimanjaro glacier, which is thousands of years old, picked just this moment to melt purely by coincidence, it would not in any way affect the validity of our prediction of future warming. 
Glaciers are melting around the world, confirming the general warming trends that we measure. There are also many other confirmations of the physics behind the predictions. It’s a case of attacking the science by attacking an icon, rather than taking on the underlying scientific arguments directly. Botkin: The medieval optimum was a good time Response: Maybe it was, if you’re interested in Europe and don’t mind the droughts in the American Southwest. But the business-as-usual forecast for 2100 is an entirely different beast than the medieval climate. The Earth is already probably warmer than it was in medieval times. Beware the bait and switch! Botkin argues for clear-thinking rationality in the discussion about anthropogenic climate change, against twisting the truth, as it were. We couldn’t agree more. Doctor, heal thyself. For years the Wall Street Journal has been lying to you about the existence of global warming. It doesn’t exist, it’s a conspiracy, the satellites show it’s just urban heat islands, it’s not CO2, it’s all the sun, it’s water vapor, and on and on. Now that those arguments are losing traction, they have moved on from denying global warming’s existence to soothing you with reassurances that it ain’t gonna be such a bad thing. Fool me once, shame on…shame on you. Fool me–you can’t get fooled again. -George W. Bush
Last week we proposed a bet against the “pause in global warming” forecast in Nature by Keenlyside et al. and we promised to present our scientific case later – so here it is. This is why we do not think that the forecast is robust: Figure 4 from Keenlyside et al ’08. The red line shows the observations (HadCRU3 data), the black line a standard IPCC-type scenario (driven by observed forcing up to the year 2000, and by the A1B emission scenario thereafter), and the green dots with bars show individual forecasts with initialised sea surface temperatures. All are given as 10-year averages. - Their figure 4 shows that a standard IPCC-type global warming scenario performs slightly better for global mean temperature for the past 50 years than their new method with initialised sea surface temperatures (see also the correlation numbers given at the top of the panel). That the standard warming scenario performs better is highly remarkable since it has no observed data included. The green curve, which presents a set of individual 10-year forecasts and is not a time series, each time starts again close to the observed climate, because it is initialised with observed sea surface temperatures. So by construction it cannot get too far away, in contrast to the “free” black scenario. Thus you’d expect the green forecasts to perform better than the black scenario. The fact that this is not the case shows that their initialisation technique does not improve the model forecast for global temperature. - Their ‘cooling forecasts’ have not passed the test for their hindcast period. Global 10-year average temperatures have increased monotonically during the entire time they consider – see their red line. But the method already seems to have produced two false cooling forecasts: one for the decade centered on 1970, and one for the decade centered on 1999. - Their forecast was not only too cold for 1994-2004, but it also looks almost certain to be too cold for 2000-2010.
For their forecast for 2000-2010 to be correct, all the remaining months of this period would have to be as cold as January 2008 – which was by far the coldest month in that decade thus far. It would thus require an extreme cooling for the next two-and-a-half years. - Even for European temperatures (their Fig. 3c, not part of our proposed bet), the forecast skill of their method is not impressive. Their method has predicted cooling several times since 1970, yet the European temperatures have increased monotonically since then. Remember the forecasts always start near the red line; almost every single prediction for Europe has turned out to be too cold compared to what actually happened. There therefore appears to be a systematic bias in the forecasts. - One of the key claims of the paper is that the method allows forecasting the behaviour of the meridional overturning circulation (MOC) in the Atlantic. We do not know what the MOC has actually been doing for lack of data, so the authors diagnose the state of the MOC from the sea surface temperatures – to put it simply: a warm northern Atlantic suggests strong MOC, a cool one suggests weak MOC (though it is of course a little more complex). Their method nudges the model’s sea surface temperatures towards the observed ones before the forecast starts. But can this induce the correct MOC response? Suppose the model surface Atlantic is too cold, so this would suggest the MOC is too weak. The model surface temperatures are then nudged warmer. But if you do that, you are making surface waters more buoyant, which tends to weaken the MOC instead of enhancing it! So with this method it seems unlikely to us that one could get the MOC response right. We would be happy to see this tested in a ‘perfect model’ set up, where the SST-restoring was applied to try and get the model forecasts to match a previous simulation (where you know much more information). If it doesn’t work for that case, it won’t work in the real world. 
- When models are switched over from being driven by observed sea surface temperatures to freely calculating their own sea surface temperatures, they suffer from something called a “coupling shock”. This is extremely hard, perhaps even impossible, to avoid as “perfect model” experiments have shown (e.g. Rahmstorf, Climate Dynamics 1995). This problem presents a formidable challenge for the type of forecast attempted by Keenlyside et al., where just such a “switching over” to free sea surface temperatures occurs at the start of the forecast. In response to the “coupling shock”, a model typically goes through an oscillation of the meridional overturning circulation over the next decades, of a magnitude similar to that seen in the Keenlyside et al simulations. We suspect that this “coupling shock”, which is not a realistic climate variability but a model artifact, could have played an important role in those simulations. One test would be the perfect model set up we mentioned above, or an analysis of the net radiation budget in the restored and free runs – a significant difference there could explain a lot. - To check how the Keenlyside et al. model performs for the MOC, we can look at their skill map in Fig. 1a. This shows blue areas in the Labrador Sea, Greenland-Iceland-Norwegian Sea and in the Gulf Stream region. These blue areas indicate “negative skill” – that means, their data assimilation method makes things worse rather than improving the forecast. These are the critical regions for the MOC, and it indicates that for either of the two reasons 5 and 6, their method is not able to correctly predict the MOC variations. Their method does show skill in some regions though – this is important and useful. However, it might be that this skill comes from the advection of surface temperature anomalies by the mean ocean circulation rather than from variations of the MOC. That would also be an interesting issue to research in the future.
- All climate models used by IPCC, publicly available in the CMIP3 model archive, include intrinsic variability of the MOC as well as tropical Pacific variability or the North Atlantic Oscillation. Some of them also include an estimate of solar variability in the forcing. So in principle, all of these models should show the kind of cooling found by Keenlyside et al. – except these models should show it at a random point in time, not at a specific time. The latter is the innovation sought after by this study. The problem is that the other models show that a cooling from one decadal mean to the next in a reasonable global warming scenario is extremely unlikely and almost never occurs – see yesterday’s post. This suggests that the global cooling forecast by Keenlyside et al. is outside the range of natural variability found in climate models (and probably in the real world, too), and is perhaps an artifact of the initialisation method. Our assessment could of course be wrong – we had to rely on the published material, while Keenlyside et al. have access to the full model data and have worked with it for months. But the nice thing about this forecast is that within a few years we will know the answer, because these are testable short term predictions which we are happy to see more of. Why did we propose a bet on this forecast? Mainly because we were concerned by the global media coverage which made it appear as if a coming pause in global warming was almost a given fact, rather than an experimental forecast. This could backfire against the whole climate science community if the forecast turns out to be wrong. Even today, the fact that a few scientists predicted a global cooling in the 1970s is still used to undermine the credibility of climate science, even though at the time it was just a small minority of scientists making such claims and they never convinced many of their peers.
If different groups of scientists have a public bet running on this, this will signal to the public that this forecast is not a widely supported consensus of the climate science community, in contrast to the IPCC reports (about which we are in complete agreement with Keenlyside and his colleagues). Some media reports even suggested that the IPCC scenarios were now superseded by this “improved” forecast. Framing this in the form of a bet also helps to clarify what exactly was forecast and what data would falsify this forecast. This was not entirely clear to us just from the paper and it took us some correspondence with the authors to find out. It also allows the authors to say: wait, this is not how we meant the forecast, but we would bet on a modified forecast as follows… By the way, we are happy to negotiate what to bet about – we’re not doing this to make money. We’d be happy to bet about, say, a donation to a project to preserve the rain forest, or retiring a hundred tons of CO2 from the European emissions trading market. We thus hope that this discussion will help to clarify the issues, and we invite Keenlyside et al. to a guest post here (and at KlimaLounge) to give their view of the matter.
I have written the MATLAB code according to the algorithm given in the tutorials for edge detection. Edge detection is a technique to locate the edges of objects in the scene. This can be useful for locating the horizon, the corner of an object, white-line following, or for determining the shape of an object. The algorithm is quite simple:

sort through the image matrix pixel by pixel
for each pixel, analyze each of the 8 pixels surrounding it
record the value of the darkest pixel, and the lightest pixel
if (darkest_pixel_value - lightest_pixel_value) > threshold, rewrite that pixel as 1
else rewrite that pixel as 0

What the algorithm does is detect sudden changes in color or lighting, representing the edge of an object. I want to know how to get the threshold value for best results. If I calculate the threshold according to the original method of finding the mean of all the elements of the image matrix, then it's too big a value. In this algorithm we find the difference between the largest and smallest neighbours of a matrix element. For a grayscale image, this difference is not too big -- not more than 50 at extreme points and generally around 20-30. But since the pixel values are roughly similar across a grayscale image, the mean is far larger than these local differences, so the threshold calculated by the normal method is always greater than the difference and the resulting image is completely black. I ran the code on a 640x480 grayscale image. It gave the best result at threshold = 20, whereas the threshold calculated by computing the mean came out to be 115. Now how do I calculate an accurate threshold?
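For reference, the algorithm described above can be sketched in a few lines (pure Python rather than MATLAB, so it is self-contained):

```python
# Sketch of the local-contrast edge detector described above: for each
# interior pixel, mark an edge when the local max minus local min of its
# 3x3 neighbourhood exceeds a threshold.
def edge_detect(img, threshold):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 3x3 neighbourhood around (y, x), including the pixel itself
            patch = [img[j][i] for j in range(y - 1, y + 2)
                               for i in range(x - 1, x + 2)]
            if max(patch) - min(patch) > threshold:
                out[y][x] = 1
    return out

# A flat dark region next to a bright one: only boundary pixels fire.
img = [[0, 0, 0, 100, 100] for _ in range(5)]
edges = edge_detect(img, threshold=20)
print(edges[2])  # [0, 0, 1, 1, 0]
```

As for choosing the threshold: since the test is on local contrast (max minus min of a 3x3 neighbourhood), a common heuristic is to derive the threshold from the distribution of those local-range values, for example their mean or a high percentile, rather than from the mean of the raw intensities. The raw-intensity mean measures overall brightness, not contrast, which is exactly why it comes out far above the 20-30 differences you observe and blacks out the image.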
Bacteria in termite guts could make ethanol from noncorn sources cheaper. Scientists say the results represent a new stage in synthetic biology. Engineered E. coli proves efficient at churning out the biofuel. GM teams with a startup aiming to produce low-cost biofuels. Stem cells from skin, myriad microbes, and a $350,000 personal genome. Advanced biofuels, more-efficient vehicles, and solar power top the most notable energy stories of 2007. A portable system converts biowaste into jet fuel and diesel for the military. As the primaries near, the presidential candidates are calling for similar, ambitious growth in ethanol biofuel. Researchers have designed a process to generate hydrogen from organic materials. Brazilian researchers report that exposure to magnetic fields increased ethanol yields by as much as 17 percent.
April 19, 1996 This document contains a high-level proposal for embedding fonts in HTML documents on the World Wide Web. Clients interact with platform-specific services (called "embedding services" in this document) that provide much of the embedding functionality. The embedding services used by the clients perform the following functions: The Embedding Services create an embedded font structure from a specified font or fonts. The font structure has a known length and stream identifier, which the clients use to package the structure appropriately. Rather than actually embed a font structure in an HTML document, we propose that clients create a separate file to contain the embedded font or fonts, perhaps called a .FONT file, with its own URL. This file would have the following MIME specification: An HTML document would contain a reference to the associated font file, similar to the way graphics or other objects are referenced within a web document. We propose the tag <FONT FILE> to associate fonts with a web document. For example, <FONT FILE = Name.FONT> The following scenario outlines the process an authoring client might follow when embedding a font in an HTML document: Display clients will use a procedure similar to the following to load and display embedded fonts: OPEN ISSUE regarding HTML FORMS: Fonts with read-only embedding privileges (preview and print embedding) have previously only been allowed to be loaded for use in read-only documents. A web author may unknowingly embed a read-only font for use with an HTML form, which allows a user to modify and enter text. Rather than create a new embedding level for this purpose, or modifying the existing read-only level to permit this use of the font, the client should substitute a local font for the read-only font used in the form. Authoring clients need to determine which fonts are actually used in a document before embedding the fonts. 
Fonts that are associated with a document but not actually displayed should not be embedded. Authoring clients also need to determine which of the fonts used in a document should actually be embedded. Fonts that will exist on the remote system, such as the Windows core fonts, should not be embedded. Users may also notify the authoring client of fonts they do not want to embed. Authoring clients are responsible for maintaining a shared typeface exclusion list that lists fonts that should not be embedded. If an authoring client requests the font be subsetted, the client must supply the list of characters used in the document. Authoring and display clients are responsible for defining functions that the embedding services can use to write the font structure to the .FONT file. The embedding services report the embedding privileges the font creator has applied to the font, and clients must respect those privileges. After loading and displaying a document with a font intended for temporary use, a client must uninstall the font. When loading a document with embedded fonts that the creator has labeled fully-installable, the display client should ask the user whether to permanently install the font or use it only temporarily. Otherwise, users may unwittingly load numerous fonts on their computer that they never regularly use. Microsoft has worked with the font industry to develop standards for identifying embeddability within font files. The embeddability of a TrueType font is determined by the creator of the font. Information about the level of embedding permitted for the font is contained in the fsType bit field of the OS/2 table, as described in the TrueType 1.0 Font File Specification.

fsType bit settings and their descriptions:

1 - Restricted License Embedding. The font must not be modified, embedded, or exchanged in any manner without first obtaining permission of the legal owner.

2 - Preview and Print Embedding. The font may be embedded within documents, but must only be installed temporarily on the remote system. Documents containing the font can only be opened as "read-only."

3 - Editable Embedding. The font may be embedded within documents, but must only be installed temporarily on the remote system. Documents containing the font can be opened for reading and writing.
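The client behavior the proposal describes can be sketched as a small decision function. This is a hypothetical illustration only; the names below are not part of any real embedding-services API, and the level numbers follow the fsType settings listed above:

```python
# Hypothetical sketch of the display-client rules described above.
# The level numbers follow the fsType settings in the table; all names
# here are illustrative, not from a real API.
EMBEDDING_LEVELS = {
    1: "restricted",         # must not be embedded without permission
    2: "preview-and-print",  # temporary install, read-only documents
    3: "editable",           # temporary install, read/write documents
}

def client_action(fs_type, document_is_form=False):
    level = EMBEDDING_LEVELS.get(fs_type)
    if level is None:
        raise ValueError("unknown fsType setting: %r" % fs_type)
    if level == "restricted":
        return "do not embed; obtain permission from the legal owner"
    if level == "preview-and-print" and document_is_form:
        # Per the open issue above: substitute a local font rather than
        # allow a read-only font to be used in an editable HTML form.
        return "substitute a local font"
    return "install temporarily and uninstall after display"
```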
Sequential compression and decompression is done using the classes BZ2Compressor and BZ2Decompressor.

Create a new compressor object. This object may be used to compress data sequentially. If you want to compress data in one shot, use the compress() function instead. The compresslevel parameter, if given, must be a number between 1 and 9; the default is 9.

Provide more data to the compressor object. It will return chunks of compressed data whenever possible. When you've finished providing data to compress, call the flush() method to finish the compression process and return what is left in internal buffers.

Finish the compression process and return what is left in internal buffers. You must not use the compressor object after calling this method.

Create a new decompressor object. This object may be used to decompress data sequentially. If you want to decompress data in one shot, use the decompress() function instead.

Provide more data to the decompressor object. It will return chunks of decompressed data whenever possible. If you try to decompress data after the end of the stream is found, EOFError will be raised. If any data is found after the end of the stream, it will be ignored and saved in the unused_data attribute.
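A short usage sketch of the sequential classes described above, feeding data in pieces and draining the buffers with flush():

```python
import bz2

# Sequential (streaming) use of BZ2Compressor / BZ2Decompressor.
compressor = bz2.BZ2Compressor(9)   # compresslevel between 1 and 9
payload = [b"chunk one ", b"chunk two ", b"chunk three " * 50]

compressed = b"".join(compressor.compress(part) for part in payload)
compressed += compressor.flush()    # drain internal buffers; the
                                    # compressor is unusable afterwards

decompressor = bz2.BZ2Decompressor()
restored = decompressor.decompress(compressed)
assert restored == b"".join(payload)
```

Note that individual compress() calls may return empty byte strings while data accumulates internally; only after flush() is the stream complete.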
October 4, 2005: Intricate wisps of glowing gas float amid a myriad of stars in this image of the supernova remnant, N132D. The ejected material shows that roughly 3,000 years have passed since the supernova blast. As this titanic explosion took place in the Large Magellanic Cloud, a nearby neighbor galaxy some 160,000 light-years away, the light from the supernova remnant is dated as being 163,000 years old from clocks on Earth. This composite image of N132D comprises visible-light data taken in January 2004 with Hubble's Advanced Camera for Surveys, and X-ray images obtained in July 2000 by Chandra's Advanced CCD Imaging Spectrometer. The complex structure of N132D is due to the expanding supersonic shock wave from the explosion impacting the interstellar gas of the LMC. A supernova remnant like N132D provides information on stellar evolution and the creation of chemical elements such as oxygen through nuclear reactions in their cores. When viewing objects in space, one must realize that the speed of light is a finite quantity, and that many objects that we are observing with high-powered telescopes, like Hubble, are extremely far away. If we refer to the speed of light as an unchanging value, and state that nothing can go faster than this speed, we can then use the term "light-second," "light-minute," "light-hour", and so on up to "light-year" as finite quantities of distance that are equal to the distance that light travels in that amount of time. Based on the speed of light and the distance from Earth to the Sun, we can say that the Sun is 8 light-minutes away from the Earth and vice-versa. If the Sun showed a flare, it would be visible on Earth 8 minutes later. If an object is seen in the Large Magellanic Cloud (LMC), it takes 160,000 years for the light from the LMC to reach us. If some event occurs in the LMC, like a supernova, astronomers on Earth viewing the supernova going off today know that the supernova actually exploded 160,000 years ago. 
If our telescopes show that 3,000 years have passed since the time of the supernova, based on the presence of ejection material in the remnant, the actual clock-time of when that event occurred based on our Earth calendars was 3,000 + 160,000 years ago, or 163,000 years ago. Since similar objects are at various distances from Earth, astronomers usually remove the light-travel time to the object when talking about the age or when an event occurred.
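The bookkeeping described above is simple addition of the remnant's apparent age and the light-travel time; as a trivial sketch:

```python
# The arithmetic described above: an observed remnant age plus the
# light-travel time (in years, numerically equal to the distance in
# light-years) gives the event's age on Earth-based clocks.
def event_age_on_earth(remnant_age_years, distance_light_years):
    return remnant_age_years + distance_light_years

# N132D: 3,000-year-old remnant in the LMC, 160,000 light-years away.
age = event_age_on_earth(3_000, 160_000)   # 163,000 years
```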
(Submitted August 15, 1998)

I'm a middle school geography teacher with no formal expertise in, but a lifelong fascination with, astronomy and space in general. I seem to remember from a long-ago college astronomy course a discussion of Olbers' Paradox that explains why we don't have perpetual daylight despite the billions of bright stars that presumably send their light to all parts of the earth. In trying to explain this concept to my eight-year-old daughter, I get tongue-tied by all the technical jargon involved. Can you help me put my explanation in layman's terms?

In an infinite universe, which has existed forever, we shouldn't have night. Imagine a universe divided into shells, with stars of a single brightness distributed evenly --- if you look at a shell twice as far away, each star is only a quarter as bright, but there are four times as many stars, so each shell is equally bright. If you have an infinite number of shells, you end up with infinite brightness! The big bang cosmology solves this, mainly through the implied age of the universe. We only see light emitted within the last 12 billion years (or whatever the age of the universe might be). This is a long time, but certainly not infinite, and not enough to make the night sky bright.

Koji Mukai & Maggie Masetti for Ask an Astrophysicist
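The shell argument above can be checked numerically: star counts grow as r squared while per-star flux falls as one over r squared, so every shell contributes the same light. A small sketch with arbitrary units:

```python
import math

# Numerical check of the shell argument: in each thin spherical shell,
# the number of stars grows as r**2 while each star's apparent brightness
# falls as 1/r**2, so every shell contributes the same total light.
def shell_brightness(r, shell_thickness=1.0, star_density=1.0, luminosity=1.0):
    n_stars = star_density * 4 * math.pi * r**2 * shell_thickness
    per_star_flux = luminosity / (4 * math.pi * r**2)
    return n_stars * per_star_flux

contributions = [shell_brightness(r) for r in (10.0, 20.0, 40.0, 80.0)]
# Every shell contributes the same amount, so infinitely many shells
# would give infinite total brightness.
```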
not like the gas carbon dioxide, which only 'crazies' consider a pollutant. I suppose it could be said that astronauts pollute their environment. They do not need to use the larger biosphere ("to clean the air"), which should tell you how easy it is. In addition to ground transport, air transport is a consideration. (We can leave the sea for another day... a large place to hide or dump trash for a time.) I did some editing and put in bold a comment about the rate-determining step involving a layer of atmosphere. The rate-determining step (RDS) is a chemical term I searched for a few days ago and just got around to reading about. You will also note that they do not use the word saturation but instead speak of a new equilibrium. Table 4 (CONCAWE (1997), EC (1996)) shows how the emissions of CO, hydrocarbons, NOx and particulate matter have been reduced in Europe, reflecting the ability of technology to deliver reductions in emissions. The data show how the largest reductions in emissions have already taken place, with projections that further reductions will be possible by the introduction of on-board diagnostic systems, in-service emissions testing, recall programmes and fuel quality improvements (CONCAWE, 1997). These reductions in petrol- and diesel-engined vehicle emissions are sufficient to leave little room for improvement by switching to alternative hydrocarbon fuels such as natural gas or vegetable oil. The only cleaner option, as far as local emissions are concerned, is a zero-emissions vehicle powered by electricity or hydrogen fuel cells. For such vehicles, however, it is important to consider the total environmental impact of their use, as the air pollution emissions from remote generation of electricity or production of hydrogen fuel could possibly exceed the exhaust emissions that a conventional vehicle would produce.
The main advantage of zero-emission vehicles is that the emissions can be relocated to where they are further from human receptors, so benefits to human health can be obtained while other environmental impacts are not reduced (see Fig. 1). Many decades, they say. You can probably take that with a grain of salt. When comparing different impacts of aircraft upon the global atmosphere with each other, and with the effect of emissions from other transport sectors and non transport related activity, the most challenging aspect of CO2 is perhaps the time scale over which it has an effect. CO2 is chemically sufficiently unreactive for its dominant removal process to be physical. Solution in the water of the upper ocean and exchange of carbon between the atmosphere and terrestrial biomass are relatively rapid, with the combined annual flux amounting to 20% of the atmospheric carbon reservoir mass of 750 GT (Houghton et al., 1996), but these fluxes are bi-directional. The rate determining step for net removal of carbon is mixing from the surface and intermediate ocean to the much larger carbon reservoir of the deep oceans. At the turn of the 21st Century, anthropogenic carbon emissions of 7 to 8 GT per year (including deforestation) are greater than the equilibrium rate of removal at current atmospheric and surface ocean concentrations, such that an amount of carbon equal to around half the emissions each year is removed and the imbalance results in a steady increase in atmospheric carbon dioxide levels. Were emissions to remain constant at today’s rate, the atmospheric concentration would reach an equilibrium level about one third higher than today’s value towards the end of the 21st Century. The global total emissions of CO2 from aviation in 1990 was about 450 million tonnes of carbon (Barrett, 1991), which was less than 20% of global road transport emissions and about 3% of total anthropogenic emissions.
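The carbon-budget arithmetic quoted above can be sketched as a toy one-box model. The figures (750 GT atmospheric reservoir, roughly 7.5 GT/yr emissions, about half removed today, an eventual equilibrium one third higher) come from the text; the linear removal law, and the baseline it implies, are my simplifying assumptions, not the source's:

```python
# Toy one-box model built only from numbers quoted in the text:
# emissions of ~7.5 GT/yr, about half removed today (reservoir 750 GT),
# and an eventual equilibrium one third higher (1000 GT) if emissions
# stay constant.  Assumption (mine, not the source's): net removal grows
# linearly with the reservoir excess over a baseline B,
#     removal = k * (C - B).
EMISSIONS = 7.5
C_NOW, REMOVAL_NOW = 750.0, EMISSIONS / 2
C_EQ = C_NOW * 4 / 3          # "about one third higher than today"

# Two calibration equations:  k*(C_NOW - B) = REMOVAL_NOW
#                             k*(C_EQ  - B) = EMISSIONS
B = 2 * C_NOW - C_EQ                  # baseline implied: 500 GT
k = REMOVAL_NOW / (C_NOW - B)         # 0.015 per year

def step(c, years=1):
    """Advance the reservoir under constant emissions."""
    for _ in range(years):
        c += EMISSIONS - k * (c - B)
    return c
```

Stepping the model for a century brings the reservoir most of the way to 1000 GT, matching the text's "towards the end of the 21st Century" framing.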
Furthermore, historical emissions of CO2 from aviation are almost zero going back just a few decades into the mid 20th Century, while around half the carbon dioxide from all anthropogenic sources currently in the atmosphere was emitted before 1980, so the overwhelming majority of the total is from non-aviation sources. The small contribution of aviation is, however, increasing, and the small amounts of CO2 being emitted by aircraft now will remain in the air for many decades. Finally, water vapour from jet engines can also form line-shaped clouds in the free troposphere. The temperature of these clouds is lower than that of Earth’s surface, so their black body radiation is less than what would be emitted from Earth’s surface were the clouds not there, resulting in net warming. This is more significant than the amount of incoming solar radiation reflected, so that overall the contrails have a warming effect on climate at the surface. Usually, contrails evaporate again within minutes or even seconds such that their impact is negligible, but under certain meteorological conditions they can be sufficiently persistent [and] a large part of the sky can become obscured continually along a major flight path until weather conditions change many hours or days later. In the stratosphere, contrails are never persistent because of the low ambient relative humidity there, although the water vapour from aircraft is not removed rapidly by precipitation as it is in the troposphere so has a small warming effect on climate because of its greenhouse gas properties. -Current ability to quantify impact and major sources of uncertainty- In theory, the impact of aircraft emissions on upper troposphere and lower stratosphere chemistry can be quantified using global models of circulation and chemistry (such as Johnson et al., 1999). However, despite the fact that the reaction mechanisms are now qualitatively understood, quantifying the impact of aircraft emissions remains elusive. 
There are two main reasons for this: Firstly, the chemical reaction cycles are complex, as different gas-phase and heterogeneous pathways become more important at different temperatures. Small errors in the predicted mix of different pollutants can propagate via resulting errors in the relative rates of two or more competing reactions to end up with quite unrealistic simulated O3 concentrations. Not only must the chemical composition of the upper troposphere and stratosphere be simulated accurately, but rates of mixing between layers as well as chemistry determine the composition, the temperature needs to be known to determine where heterogeneous processes occur, and the temperature has a large influence on the mixing. The whole process of stratospheric O3 destruction in particular is a highly non-linear catastrophic process. Secondly, emissions of aircraft in the upper troposphere and stratosphere occur along highly localised flight paths that vary in time and space. The physical size of these is much less than the resolution of the global-scale models that are required to simulate chemistry in the upper troposphere and stratosphere. This problem of scale compounds the fact that the total emissions from aircraft are at least as difficult to quantify as emissions for road traffic are on the ground. It is exacerbated by the fact that other sources of the same pollutants in the upper troposphere and lower stratosphere, such as lightning and mixing from the lower troposphere, are also very difficult to quantify accurately. Any one of these difficulties would make calculations of the total atmospheric impact of aircraft emissions liable to error. Combined, they present a very formidable challenge indeed for the science of atmospheric chemistry modelling.
The most recent calculations indicate that the effect of aircraft NOx emissions on producing O3 in the upper troposphere / lower stratosphere is greater than the effect of sulphur and soot emissions on destroying O3, except at high latitudes (Colvile et al., 2000).
The computer (or more accurately the compiler) doesn't really care at all what number base you use in your source code. Most commonly used programming languages support bases 8 (octal), 10 (decimal) and 16 (hexadecimal) directly. Some also sport direct support for base 2 (binary) numbers. Specialized languages may support other number bases as well. (By "directly support", I mean that they allow entry of numerals in that base without resorting to mathematical tricks such as bitshifting, multiplication, division etc. in the source code itself. For example, C directly supports base 16 with its 0x number prefix and the regular hexadecimal digit set of 0123456789ABCDEF. Now, such tricks may be useful to make the number easier to understand in context, but as long as you can express the same number without them, doing so - or not - is only a convenience.) In the end, however, that is inconsequential. Let's say you have a statement like the following:

int n = 10;

The intent is to create an integer variable and initialize it with the decimal number 10. What does the computer see?

i  n  t     n     =     1  0  ;
69 6e 74 20 6e 20 3d 20 31 30 3b   (ASCII, hex)

The compiler will tokenize this, and realize that you are declaring a variable of type int with the name n, and assigning it some initial value. But what is that value? To the computer, and ignoring byte ordering and alignment issues, the input for the variable's initial value is 0x31 0x30. Does this mean that the initial value is 0x3130 (12592 in base 10)? Of course not. The language parser keeps reading the file in the character encoding used, so it reads the digit 1, then the digit 0, then a statement terminator. Since in this language base 10 is assumed, this reads (backwards) as "0 ones, 1 tens, end". That is, a value of 10 decimal.
If we specified a value in hexadecimal, and our language uses 0x to specify that the following value is in hexadecimal, then we get the following:

int n = 0x10;

i  n  t     n     =     0  x  1  0  ;
69 6e 74 20 6e 20 3d 20 30 78 31 30 3b   (ASCII, hex)

The compiler sees 0x (0x30 0x78) and recognizes that as the base-16 prefix, so it looks for a valid base-16 number following it. Up until the statement terminator, it reads 10. This translates to 0 "ones", 1 "sixteens", which works out to 16 in base 10. Or 00010000 in base 2. Or however else you like to represent it. In either case, and ignoring optimizations for simplicity's sake, the compiler allots enough storage to hold the value of an int type variable, and places there the value it read from the source code (possibly via some sort of temporary holding variable). It then (likely much later) writes the resulting binary values to the object code file. As you see, the way you write numerical values in the source code is completely inconsequential. It may have a very slight effect on compile times, but I would imagine that (again, ignoring optimizations such as disk caching by the operating system) things like random turbulence around the rotating platters of the disk, disk access times, data bus collisions, etc., have a much greater effect. Bottom line: don't worry about it. Write numbers in a base that your programming language of choice supports and which makes sense for how the number will be used and/or read. You spent far more time reading this answer than you will ever recover in compilation times by being clever about which number base to use in source code. ;)
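The point above can be shown in any language with multiple literal bases; here in Python (the prefixes differ from C's, e.g. 0o rather than a leading 0 for octal, but the principle is identical):

```python
# The base used in the source literal is purely notation; the stored
# value is identical in every case.
n_dec = 10          # base 10
n_hex = 0x0A        # base 16
n_oct = 0o12        # base 8
n_bin = 0b1010      # base 2
assert n_dec == n_hex == n_oct == n_bin == 10

# And 0x10 really is sixteen, as worked through above:
assert 0x10 == 16
```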
Science Fair Project Encyclopedia

A ballistic missile is a missile, usually with no wings or fins, with a prescribed course that cannot be altered after the missile has burned its fuel, after which its course is governed by the laws of ballistics. In order to cover large distances, ballistic missiles must be launched very high into the air or into space, in a sub-orbital spaceflight; for intercontinental missiles the altitude at the halfway point is ca. 1200 km. When in space and no more thrust is provided, the missiles are free-falling. Long- and medium-range ballistic missiles are generally designed to deliver nuclear warheads because their payload is too limited for conventional explosives to be efficient, and because the extreme heat of re-entry would damage chemical or biological payloads. Many advanced ballistic missiles have several rocket stages and their course can be slightly adjusted from one stage to the next. Ballistic missiles can vary widely in range and use, and are often divided into categories based on range. The US distinguishes:
- Intercontinental ballistic missile (ICBM): range greater than 5500 km
- Intermediate-range ballistic missile (IRBM): range between 3000 and 5500 km
- Medium-range ballistic missile (MRBM): range between 1000 and 3000 km
- Short-range ballistic missile (SRBM): range less than 1000 km. An example is the Scud.
Medium- to short-range missiles are often called theatre ballistic missiles (TBM). Using a missile with a considerably longer range than the distance from launch site to target can make sense: it can reach a higher altitude and come down at a higher speed, making defense more difficult. E.g. a missile with a range of 3000 km fired at a target that is only 500 km away could arrive at its target after having reached an altitude of about 1200 km - roughly the height reached by ICBMs. Like them, it would arrive at a speed of typically more than 6 km/s.
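The lofted-trajectory point above can be illustrated with a rough flat-Earth, no-drag projectile sketch. Real ICBM trajectories need orbital mechanics (Earth's curvature matters at these ranges), so the flat-Earth result overshoots the article's ~1200 km figure somewhat; treat the numbers as illustrative only:

```python
import math

G = 9.81e-3  # gravitational acceleration in km/s^2

def lofted_apogee(burnout_speed_km_s, target_range_km):
    """Apogee of the steep ("lofted") launch angle that still covers the
    given range, under a flat-Earth, no-drag approximation."""
    max_range = burnout_speed_km_s**2 / G
    # range = (v**2/G) * sin(2*theta); take the lofted root, 2*theta > 90 deg
    theta = (math.pi - math.asin(target_range_km / max_range)) / 2
    return burnout_speed_km_s**2 * math.sin(theta)**2 / (2 * G)

# A missile whose maximum flat-Earth range is 3000 km, fired at a target
# only 500 km away, flies a very steep trajectory with a high apogee.
v = math.sqrt(3000 * G)            # ~5.4 km/s burnout speed
apogee = lofted_apogee(v, 500)     # roughly 1500 km in this approximation
```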
The first ballistic missile was the V-2 rocket, developed by Nazi Germany in the 1940s, which was successfully launched for the first time on October 3, 1942 and used in operation for the first time on September 8, 1944. Ballistic missiles can be launched from fixed sites, mobile launchers and submarines. Specific types of ballistic missiles include:
- Agni missile
- Blue Steel missile
- Blue Streak missile
- Minuteman missile
- SS-24 missile
- SS-18 missile
- Peacekeeper missile
- Polaris missile
- Poseidon missile
- Prithvi missile
- CSS-2 missile
- Condor missile
- Jericho missile
- Skybolt ALBM
- Surya ICBM
Specific types of ballistic missile submarines include:
- Benjamin Franklin class submarine
- Ohio class submarine
- Resolution class submarine
- Triomphant class
- Redoutable class
- additional ballistic missile submarines
- http://www.fas.org/nuke/intro/missile/index.html - an introduction to ballistic missiles

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details.
Forests losing the ability to absorb man-made carbon

The sprawling forests of the northern hemisphere, which extend from China and Siberia to Canada and Alaska, are in danger of becoming a gigantic source of carbon dioxide rather than a major "sink" that helps to offset man-made emissions of the greenhouse gas. Studies show the risk of fires in the boreal forests of the north has increased in recent years because of climate change. The research shows that the world's temperate woodlands are beginning to lose their ability to be an overall absorber of carbon dioxide. Scientists fear there may soon come a point when the amount of carbon dioxide released from the northern forests as a result of forest fires and the drying out of the soil will exceed the amount that is absorbed during the annual growth of the trees. Such a prospect would make it more difficult to control global warming because northern forests are seen as a key element in the overall equations to mitigate the effect of man-made CO2 emissions. Two studies published today show that the increase in forest fires in the boreal forests – the second largest forests after tropical rainforests – have weakened one of the earth's greatest terrestrial sinks of carbon dioxide. One of the studies showed that in some years, forest fires in the US result in more carbon dioxide being pumped into the atmosphere over the space of a couple of months than the entire annual emissions coming from cars and energy production of a typical US state. A second study found that, over a 60-year period, the risk of forest fires in 1 million sq kms of Canadian wilderness had increased significantly, largely as a result of drier conditions caused by global warming and climate change.
Tom Gower, professor of forest ecology at the University of Wisconsin-Madison, said his study showed that fires had a greater impact on overall carbon emissions from boreal forests during the 60-year period than other factors such as rainfall, yet climate was at the heart of the issue. The intensity and frequency of forest fires are influenced by climate change because heatwaves and drier undergrowth trigger the fires. "Climate change is what's causing the fire changes. They're very tightly coupled systems," Professor Gower said. "All it takes is a low snowpack year and a dry summer. With a few lightning strikes, it's a tinderbox," he said. Historically, the boreal forests have been a powerful carbon sink, with more carbon dioxide being absorbed by the forests than being released. However, the latest study, published in the journal Nature, suggests the sink has become smaller in recent decades, and it may actually be shifting towards becoming a carbon source, Professor Gower said. "The soil is the major source, the plants are the major sink, and how those two interplay over the life of a stand [of trees] really determines whether the boreal forest is a sink or a source of carbon," he said. "Based on our current understanding, fire was a more important driver of the carbon balance than climate was in the past 50 years. But if carbon dioxide concentration really doubles in the next 50 years and the temperature increases 4C to 8C, all bets may be off." The second study, published in Carbon Balance and Management, found carbon dioxide emissions from some forest fires exceeded the annual car and energy emissions from individual US states. Christine Wiedinmyer of the US National Centre for Atmospheric Research in Boulder, Colorado, used satellite imaging data to estimate CO2 output based on the degree of forest cover in a particular area. In some years, the amount of CO2 released from forest fires was equivalent to about 5 per cent of the man-made total.
But in other years, more widespread and intense forest fires resulted in massively increased emissions. "There is a significant potential for additional net release of carbon from forests of the United States due to changing fire dynamics in the coming decades," Dr Wiedinmyer said.
Fig 19-17. Section through a young leaf of F. chiloensis. A) stomata, B) air space, C) thick cuticle of upper leaf surface, D) upper epidermal cell, E) palisade cell, F) mesophyll cell. The interior cell surface exposed to air space is from 2.2 to 4.4 times greater than the exposed outer surface; in F. chiloensis it is about four times greater. Oxygen in the air enters through the stomata, comes in contact with the cell walls and enters the cells. Carbon dioxide and water are given off and go out through the stomata.
Color and Vision

Visit The Physics Classroom's Flickr Galleries and enjoy a photo overview of the topic of light and color.
Color Television: Explore how a television uses R, G, and B pixels to produce millions of colors.
PhET Simulation: Color Vision: Mix R, G and B light with varying intensities using this Java applet from PhET.
Mixing Colors: Mix light colors at the Ontario Science Center and learn about the principles of color addition.
Looking for a lab that coordinates with this page? Try the Color Addition Lab from The Laboratory.
Curriculum Corner: Learning requires action. Give your students this sense-making activity from The Curriculum Corner.
Color Addition: The red-green-blue color swatches on this page provide a great opportunity to demonstrate addition of R, G, and B in varying amounts.
Treasures from TPF: Need ideas? Need help? Explore The Physics Front's treasure box of catalogued resources on visible light and color.
General Atomics Sciences: Chromatics - The Science of Color: This downloadable, 100-plus page book discusses various aspects of light production in the visible spectrum and color addition and subtraction.
General Atomics Sciences: It's a Colorful Life: Deepen your understanding of color with this free, downloadable book on color; contains theory and ideas for labs.

Color perception, like sound perception, is a complex subject involving the disciplines of psychology, physiology, biology, chemistry and physics. When you look at an object and perceive a distinct color, you are not necessarily seeing a single frequency of light. Consider for instance that you are looking at a shirt and it appears purple to your eye. In such an instance, there may be several frequencies of light striking your eye with varying degrees of intensity. Yet your eye-brain system interprets the frequencies that strike your eye and the shirt is decoded by your brain as being purple.
The subject of color perception can be simplified if we think in terms of primary colors of light. We have already learned that white is not a color at all, but rather the presence of all the frequencies of visible light. When we speak of white light, we are referring to ROYGBIV - the presence of the entire spectrum of visible light. But combining the range of frequencies in the visible light spectrum is not the only means of producing white light. White light can also be produced by combining only three distinct frequencies of light, provided that they are widely separated on the visible light spectrum. Any three colors (or frequencies) of light that produce white light when combined with the correct intensity are called primary colors of light. There are a variety of sets of primary colors. The most common set of primary colors is red (R), green (G) and blue (B). When red, green and blue light are mixed or added together with the proper intensity, white (W) light is obtained. This is often represented by the equation below: R + G + B = W. In fact, the mixing together (or addition) of two or three of these three primary colors of light with varying degrees of intensity can produce a wide range of other colors. For this reason, many television sets and computer monitors produce the range of colors on the monitor by the use of red, green and blue light-emitting phosphors. The addition of the primary colors of light can be demonstrated using a light box. The light box illuminates a screen with the three primary colors - red (R), green (G) and blue (B). The lights are often the shape of circles. The result of adding two primary colors of light is easily seen by viewing the overlap of the two or more circles of primary light. The different combinations of colors produced by red, green and blue are shown in the graphic below.
(CAUTION: Because of the way that different monitors and different web browsers render the colors on the computer monitor, there may be slight variations from the intended colors.) These demonstrations with the color box illustrate that red light and green light add together to produce yellow (Y) light. Red light and blue light add together to produce magenta (M) light. Green light and blue light add together to produce cyan (C) light. And finally, red light and green light and blue light add together to produce white light. This is sometimes demonstrated by the following color equations and graphic: Yellow (Y), magenta (M) and cyan (C) are sometimes referred to as secondary colors of light since they are produced by the addition of equal intensities of two primary colors of light. The addition of these three primary colors of light with varying degrees of intensity will result in the countless other colors that we are familiar (or unfamiliar) with. Any two colors of light that when mixed together in equal intensities produce white are said to be complementary colors of each other. The complementary color of red light is cyan light. This is reasonable since cyan light is the combination of blue and green light; and blue and green light when added to red light will produce white light. Thus, red light and cyan light (blue + green) represent a pair of complementary colors; they add together to produce white light. This is illustrated in the equation below: R + C = R + (B + G) = W. Each primary color of light has a secondary color of light as its complement. The three pairs of complementary colors are listed below. The graphic at the right is extremely helpful in identifying complementary colors. Complementary colors are always located directly across from each other on the graphic. Note that cyan is located across from red, magenta across from green, and yellow across from blue.
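These color addition rules can be sketched numerically by representing each color of light as an RGB triple on a 0-255 intensity scale (the helper name and the 255 scale are illustrative assumptions, not from the text):

```python
# Additive mixing of light colors, represented as (R, G, B) triples
# with 255 meaning full intensity.
def add_light(*colors):
    """Add colors of light channel-wise, capping at full intensity."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

R, G, B = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_light(R, G))      # yellow:  (255, 255, 0)
print(add_light(R, B))      # magenta: (255, 0, 255)
print(add_light(G, B))      # cyan:    (0, 255, 255)
print(add_light(R, G, B))   # white:   (255, 255, 255)

# Complementary pair: red + cyan = red + (green + blue) = white
print(add_light(R, add_light(G, B)) == (255, 255, 255))  # True
```

Note that the same check works for the other complementary pairs: green + magenta and blue + yellow also sum to white.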
The production of various colors of light by the mixing of the three primary colors of light is known as color addition. The color addition principles discussed on this page can be used to make predictions of the colors that would result when different colored lights are mixed. In the next part of Lesson 2, we will learn how to use the principles of color addition to determine why different objects look specific colors when illuminated with various colors of light. 1. Two lights are arranged above a white sheet of paper. When the lights are turned on they illuminate the entire sheet of paper (as seen in the diagram below). Each light bulb emits a primary color of light - red (R), green (G), and blue (B). Depending on which primary color of light is used, the paper will appear a different color. Express your understanding of color addition by determining the color that the sheet of paper will appear in the diagrams below. 2. If magenta light and yellow light are added together, will white light be produced? Explain.
<urn:uuid:5801114e-0305-4468-91c9-85706beda87e>
3.96875
1,365
Tutorial
Science & Tech.
50.931962
Eutrophication is the biological response of water to overenrichment by plant nutrients, particularly nitrogen and phosphorus. Public concern began to rise in the 1960s (although the term "eutrophication" is older), when nutrient enrichment was rapidly making many bodies of water increasingly fertile. This eutrophication was mainly caused by the addition of plant nutrients from human activities, called, in this context, artificial or anthropogenic eutrophication. The phenomenon is a consequence of society's municipal, industrial, and agricultural use of plant nutrients and their subsequent disposal. Lakes and reservoirs have a finite life span. They may pass through periods in their existence when they become more or less fertile, according to different factors--principally their geographical position or the climatic conditions (Moss, 1988). The process of eutrophication has been used deliberately as a way to fertilize and thus to increase phytoplankton production and, indirectly, the population of fish within a lake or reservoir. What is new in the past few decades, however, is the extent of enrichment of lakes and rivers throughout the world as a result of the growing human population, more intensive agricultural and industrial activities, and the development of large sewage systems associated with large metropolitan areas. Until recently, a relative lack of control over the sources of the nutrients or over their effect upon the aquatic ecosystems has resulted in changes occurring within decades rather than over the centuries--or longer--in which such changes would appear naturally. Many studies of lakes around the world have provided evidence of human-induced changes. Good examples of such studies are those carried out on the Great Lakes (Beeton & Edmondson, 1972; Sly, 1991).
In the United Kingdom, eutrophication has been identified as an extremely widespread problem and has been blamed for damaging many aquatic sites in England known as Sites of Special Scientific Interest, despite government claims that only a few surface waters have been affected (Carvalho & Moss, 1995). In a study commissioned by English Nature, a statutory conservation agency in England, it was found that 79 Sites of Special Scientific Interest showed signs of eutrophication. As a result, English Nature has called for a large-scale investment program to deal with the eutrophication problem in aquatic wildlife sites (English Nature, 1997). Anthropogenic eutrophication appears to be the main problem. Excessive fertility in lakes and reservoirs results in heavy growth of phytoplankton, particularly of blue-green algae (cyanobacteria), that may form thick mats at the water surface and thus spoil the appearance of the lake. Some species of cyanobacteria may produce substances that are highly toxic to fish, birds, or mammals. In some cases, dense blooms of algae have resulted in fish kills by causing the hypolimnion to become anaerobic. Increased crops of phytoplankton often clog the filters of water treatment plants and make the treatment of water more costly. Furthermore, some unwanted organic substances produced by the algae can pass through the filters at water treatment plants and cause unpleasant tastes and odors, or may even be toxic to human consumers. Eutrophication thus can not only impair aesthetic qualities of the water, but also affect the use of water for water supply, fisheries, and recreation. The essential elements required by living cells to sustain growth and reproduction are carbon, oxygen, hydrogen, other macronutrients, and trace elements. Of these, carbon is the most important, the main reservoir being atmospheric carbon dioxide.
Carbon is easily soluble in water and is thus unlikely to be a limiting factor for algae growth, except during intense blooms. Oxygen and hydrogen are freely available in the water in most circumstances. The most important macronutrients are calcium, magnesium, potassium, phosphorus, nitrogen, sulfur, iron, and silicon. Phosphorus is important because it is the only nutrient whose proportional abundance is lower in the lithosphere than in plant tissue. It is thus a prime candidate to become a limiting factor in algae growth. The main reservoir of nitrogen is atmospheric dinitrogen, which is not available to plants directly; consequently nitrogen might be a limiting factor as well. Trace elements, including boron, chlorine, cobalt, copper, manganese, molybdenum, zinc, and, in some cases, vitamin complexes, are required in very small quantities. The "law of the minimum," which was first formulated by Justus von Liebig, states that growth is limited by whatever is in shortest supply (Gibson, 1971; Welch, 1980). For the reasons stated above, phosphorus and nitrogen are said to be "key nutrients"; in some circumstances, they may become limiting. Therefore, they are in most cases the nutrients that control algae growth, though some diatom species may be limited by silica. Other factors, such as light, may also limit algal productivity. Supply of Phosphorus and Nitrogen to Lakes Phosphorus is the 11th most abundant element in the earth's crust, and it is geochemically classed as a trace element. In nature, phosphorus exists almost exclusively as phosphate, a great part of which is sorbed to soil particles or incorporated into soil organic matter. Phosphate deposits occur in the earth's crust principally as the mineral apatite: Ca₅(F,Cl,OH,½CO₃)(PO₄)₃. The initial natural source of phosphorus is weathering of such rocks. Weathering liberates phosphate from the mineral, and the phosphate can then enter the biosphere through uptake by plants.
The initial source of nitrogen is the atmospheric reservoir of gaseous dinitrogen. Nitrogen gas is chemically very stable. It must be converted by nitrogen fixation, by microorganisms living principally in the soil but also in aquatic environments, before it is available to most living organisms. In natural water, nitrogen is present as dissolved dinitrogen, ammonia, and salts of the nitrate and nitrite ions; in addition, there are nitrogen-containing organic compounds primarily attributable to the presence of life. In a natural, undisturbed environment, nutrient sources are the drainage of the catchment, the direct atmospheric deposition (rainfall and dry depositions) onto the water surface, and the internal recycling from lake sediments. Ahl estimates the background phosphorus input to be in the range of 3 to 10 kilograms (kg) of phosphorus per square kilometer per year, depending on the size and the characteristics of the basin (Ahl, 1988). He also estimates the …
<urn:uuid:459653bf-4a69-4a13-a131-f6816c893a85>
3.671875
1,369
Knowledge Article
Science & Tech.
26.41734
This section shows how to read and write a hash table from and to a serialized file in Java, and provides an example with the complete code of the program. If the specified serialized file does not exist, the program creates it and writes the hash table's contents to it; otherwise it reads back all of the contents by deserializing the file.

FileOutputStream fileOut = new FileOutputStream("HTExample.ser");

The code above creates an object "fileOut" of the FileOutputStream class. Its constructor takes the name of the file (with the ".ser" extension) that is either created for, or written with, the values of the hash table used in the program. Here is the code of the program:
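A complete program following this pattern might look as below; the class name, file name, and hash-table contents are illustrative assumptions, not the tutorial's actual code:

```java
import java.io.*;
import java.util.Hashtable;

// Sketch of the pattern described above: write a Hashtable to
// "HTExample.ser" if the file does not exist yet, then read the
// contents back by deserialization.
public class HTExample {
    static final String FILE = "HTExample.ser";

    static void write(Hashtable<String, Integer> table) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(FILE))) {
            out.writeObject(table);  // serialize the whole table
        }
    }

    @SuppressWarnings("unchecked")
    static Hashtable<String, Integer> read()
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(FILE))) {
            return (Hashtable<String, Integer>) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        if (!new File(FILE).exists()) {
            Hashtable<String, Integer> t = new Hashtable<>();
            t.put("one", 1);
            t.put("two", 2);
            write(t);
        }
        System.out.println(read().get("two"));
    }
}
```

On the first run the file is created and written; every run then deserializes it and prints a stored value.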
<urn:uuid:8bf2a0d8-e6d7-4ad2-951d-82b0850b5ff7>
3.203125
211
Documentation
Software Dev.
41.85219
global - Define global variable

Ordinarily, each Scilab function has its own local variables and can "read" all variables created in the base workspace or by the calling functions. The global keyword allows variables to be read and written across functions: any assignment to such a variable, in any function, is available to all the other functions declaring it global. If the global variable does not exist the first time you issue the global statement, it is initialized to the empty matrix.

// first: the calling environment and a function share a variable
global a
a=1
deff('y=f1(x)','global a,a=x^2,y=a^2')
f1(2)
a

// second: three functions share variables
deff('initdata()','global A C ;A=10,C=30')
deff('letsgo()','global A C ;disp(A) ;C=70')
deff('letsgo1()','global C ;disp(C)')
initdata()
letsgo()
letsgo1()

See also: who, isglobal, clearglobal, gstacksize, resume
<urn:uuid:a188654e-4c2d-433f-bd66-5251cc50bb99>
2.734375
243
Documentation
Software Dev.
38.765388
The fputcsv() function formats a line as CSV and writes it to an open file. The function returns the length of the written string, or FALSE on failure.

Parameters:
- file (required): the open file to write to.
- fields (required): the array to get the data from.
- separator (optional): a character specifying the field separator; the default is a comma (,).
- enclosure (optional): a character specifying the field enclosure character; the default is a double quote (").

Tip: Also see the fgetcsv() function.
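The behaviour can be sketched with an analogous snippet in Python's csv module (a stand-in for illustration, not PHP itself); each writerow() call plays the role of fputcsv(), formatting one array of fields as a CSV line on the open file:

```python
import csv
import io

# Two "rows" of fields, like the arrays passed to fputcsv($file, $fields).
rows = [
    ["George", "John", "Thomas", "USA"],
    ["James", "Andrew", "Thomas", "USA"],
]

buf = io.StringIO()  # stands in for an open file handle
writer = csv.writer(buf, delimiter=",", quotechar='"')
for fields in rows:
    writer.writerow(fields)  # format one line as CSV and write it

print(buf.getvalue())
```

As with fputcsv(), fields containing the separator or enclosure character are automatically wrapped in the enclosure character.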
<urn:uuid:5d43d039-7c25-4be0-b2cf-9d5c06f7d319>
3.34375
149
Documentation
Software Dev.
55.31249
Key Facts
Length: Up to 3.8 metres
Range: Widely distributed in all major oceans, absent in polar regions
Threats: Marine litter, pollution, acoustic disturbance
Diet: Mainly squid, some octopus and cuttlefish
Latin: Grampus griseus

The Risso's dolphin has a robust, stocky body and a tall, falcate (curved) dorsal fin. The melon (forehead) is blunt and bulbous with a unique V-shaped crease running from the upper lip to the blowhole. This species has no prominent beak and just two to seven pairs of teeth in the lower jaw. Adult Risso’s dolphins measure between 2.6 to 3.8 metres in length and can live for more than 30 years. The colour pattern varies greatly between individuals, and with age. Calves are born grey, but turn darker grey to dark brown as they become juveniles. As they age, the skin tone lightens to silvery-grey in some cases and the body is increasingly covered with scratches and scars inflicted by other Risso’s dolphins and prey species such as squid.

Habitat and Distribution
Risso’s dolphins are widely distributed throughout most oceans and seas between 60° North and 55° South. The north of Scotland represents the northern limit for this species. In the Hebrides, Risso's dolphins tend to inhabit deeper water, which is home to their preferred prey of squid, octopus and cuttlefish. They can occasionally be seen in coastal areas. In the Hebrides, Risso's dolphins are usually seen singly or in groups of up to 20 animals, although in other areas they are reported in large groups of several hundred individuals. Social behaviour is gregarious and sometimes rough, possibly accounting for some of the scars and tooth rake marks seen in adult animals; observed behaviours include breaching, tail slapping, spy-hopping, splashing and sometimes striking one another. Risso's dolphins are commonly seen travelling and surfacing slowly and will rarely approach vessels or bow-ride.
Food and Foraging The diet of the Risso's dolphin consists mainly of squid, with some octopus and cuttlefish, and it has been suggested that they feed at night-time when their preferred prey migrate towards the surface. They are able to dive for about 30 minutes to depths in excess of 1000 metres, and sometimes forage cooperatively. Their soft-bodied prey is caught with teeth in the lower jaw and swallowed whole. Scars from such encounters are visible on the skin surface. Status and Conservation Many squid eating marine animals, including turtles and sea birds, swallow plastic bags that they mistake for their prey. Once ingested, plastic may accumulate in the stomach of the animal causing starvation and eventual death. It is likely that Risso’s dolphins commonly encounter plastic bags in the ocean and may be affected by this. Risso’s dolphins are also subject to incidental capture in fishing nets causing drowning, may be disturbed by noise produced by offshore oil and gas exploration, and are exposed to marine pollutants including organochlorines (pesticides). Risso’s dolphins are protected under UK and EU law, principally under Schedule 5 of the Wildlife and Countryside Act 1981, the Nature Conservation (Scotland) Act 2004 and by the 1992 EU Habitats and Species Directive.
<urn:uuid:e4f04983-806c-4797-b4d7-884e285ff216>
3.734375
729
Knowledge Article
Science & Tech.
44.97615
Changing Planet: Fading Corals

The delicate balance of life and environment which sustains coral reefs globally is under threat. The dramatic increase in atmospheric CO2 in the past few decades has produced an increase in ocean temperature and acidity. Coral diseases have also been on the increase, due to changes in their environment as well as pollution. Click on the video at the left to watch the NBC Learn video - Changing Planet: Fading Corals. Lesson plan: Changing Planet: Fading Corals
<urn:uuid:e6b09bbb-6c74-4ab8-96fd-04edf5ee6680>
3.65625
516
Tutorial
Science & Tech.
56.299395
(rebroadcast of live show) Target: Grades 3-5 Length: 60 minutes Guide: Online, see Internet site Internet: http://scifiles.larc.nasa.gov ⇒ Floating tennis shoes and oil globs wash up on the beach to set the tree house detectives in motion to investigate a unique world under the sea. Join them as they dive into learning about ocean floor topography, ocean currents, oil clean-up, and more.To order a copy of this video, please visit the Central Operation of Resources for Educators Web site. This program was first broadcast on NASA TV Education File Schedule November 22, 2004.
<urn:uuid:787072b5-3502-4831-964e-0e0de0d635d8>
2.6875
140
Truncated
Science & Tech.
54.898182
Using a simple algorithm, Belokurov et al. discovered this almost perfect Einstein ring around a luminous red galaxy in the SDSS database. They called it the Cosmic Horseshoe. The ring has a diameter of 10 arcseconds, which counts as large. The lensing galaxy has a mass of about 5 × 10^12 solar masses - about ten times the mass of the Milky Way! For comparison, here's another lens, also from the SDSS, discovered serendipitously, the "8 O'clock Arc": This is less ring-shaped; it has 3 images of the same background galaxy on the top and a fourth on the bottom. It shows a similar size, but the lensing galaxy is estimated to have a mass of about 1/5 of that in the center of the Cosmic Horseshoe. A news article. The scientific article.
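The scale of such a ring follows from the standard Einstein-radius formula (textbook lensing theory, not derived in the post; the symbols below are the usual ones, not the paper's notation):

```latex
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}}
```

where M is the lens mass and D_L, D_S, D_LS are the angular-diameter distances to the lens, to the source, and between lens and source. At fixed distances the radius grows as the square root of the mass, so a lens ten times heavier produces a ring only about √10 ≈ 3 times wider - which is why even the Horseshoe's 10-arcsecond diameter counts as unusually large.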
<urn:uuid:4812e8bb-6c3e-4b20-9dea-fdf47f61876d>
2.890625
187
Personal Blog
Science & Tech.
61.626848
Use the following tips and techniques when you design a UML 2.0 Activity Diagram. Usually you create Activity Diagrams after State Machine Diagrams. To design a UML 2.0 Activity Diagram, follow this general procedure:
- Create one or more activities. You can place several activities on a single diagram, or create a separate diagram for each. Warning: You cannot create nested activities.
- Usually activities are linked to states or transitions on State Machine Diagrams. Switch to your State Machine Diagrams and associate the activities you just created with states and transitions. Tip: After that you may find that more activities must be created, or that the same activity can be used in several places.
- Switch back to the Activity Diagram. Think about the flows in your activities. You can have an object flow (for transferring data), a control flow, both, or even several flows in each activity.
- Create starting and finishing points for every flow. Each flow can have the following starting points: an Initial node, an Activity parameter (for object flow), an Accept event action, or an Accept time event action. Each flow finishes with an Activity Final or Flow Final node. If your activity has several starting points, they can be used simultaneously.
- Create object nodes. You do not link object nodes to classes on your Class Diagrams. However, you can use hyperlinks for better understanding of your diagrams.
- Create action nodes for your flows. Flows can share actions. Warning: You cannot create nested actions.
- For object flows, add pins to actions. Connect actions and pins by flow links.
- Add pre- and postconditions. You can create plain text or OCL conditions.
- You can optionally create shortcuts to related elements of other diagrams.

To add an activity parameter to an activity:
- In the Tool Palette, press the Activity Parameter button, then click the target activity.
Or: Choose Add > Activity Parameter on the activity context menu.
Result: An Activity Parameter node is added to the activity as a rectangle. Note that the activity parameter node is attached to its activity. You can only move the node along the activity borders. Note: Activity parameters cannot be connected by control flow links.
<urn:uuid:fbb7db04-6d16-42b2-a184-8155e07ac796>
3.328125
472
Tutorial
Software Dev.
41.476515
When two declarations in the same scope describe the same object or function, the two declarations must specify compatible types. These two types are then combined into a single composite type that is compatible with the first two. More about composite types later. The compatible types are defined recursively. At the bottom are type specifier keywords. These are the rules that say that unsigned short is the same as unsigned short int, and that a type without type specifiers is the same as one with int. All other types are compatible only if the types from which they are derived are compatible. For example, two qualified types are compatible if the qualifiers, const and volatile, are identical, and the unqualified base types are compatible.
<urn:uuid:9c2223f5-6614-4c73-9814-2eafabe7e1d3>
3.09375
144
Documentation
Software Dev.
28.725081
Expression statements are used (mostly interactively) to compute and write a value, or (usually) to call a procedure (a function that returns no meaningful result; in Python, procedures return the value None). Other uses of expression statements are allowed and occasionally useful. The syntax for an expression statement is:

An expression statement evaluates the expression list (which may be a single expression). In interactive mode, if the value is not None, it is converted to a string using the built-in repr() function and the resulting string is written to standard output (see section 6.6) on a line by itself. (Expression statements yielding None are not written, so that procedure calls do not cause any output.)
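The interactive behaviour described above can be imitated with a small helper (an illustrative sketch, not the interpreter's actual code):

```python
# What the interactive loop does with an expression statement's value:
# echo repr(value) on its own line, unless the value is None.
def interactive_echo(value):
    """Return the string the REPL would print, or None to print nothing."""
    return repr(value) if value is not None else None

print(interactive_echo(1 + 2))   # the REPL would show: 3
print(interactive_echo("hi"))    # repr() is used, so it would show: 'hi'
print(interactive_echo(None))    # a procedure-style call echoes nothing
```

Note the use of repr() rather than str(): this is why an interactive session shows strings with quotes around them.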
<urn:uuid:a966e3c7-a27a-4567-bbdc-1763d4e67c8a>
3.265625
168
Documentation
Software Dev.
39.1603
Date: Jan 27, 2013 7:06 PM Author: Jerry P. Becker Subject: SAYINGS XLII Taken from many sources ... some of them identified.

"Hope is a good thing, maybe the best thing. And a good thing never dies." (Andy Dufresne, from The Shawshank Redemption/movie)

"If a child can't learn the way we teach, then maybe we should teach the way they learn!" (Ignacio Estrada) [Peggy McKee]

"A man is like a fraction whose numerator is what he is and whose denominator is what he thinks of himself. The larger the denominator, the smaller the fraction." (Leo Tolstoy) [From signature of Michael de Villiers]

"Modern cynics and skeptics see no harm in paying those to whom they entrust the minds of their children a smaller wage than is paid to those to whom they entrust the care of their plumbing." (President John F. Kennedy (1917-1963)) [Valerie Strauss]

"True terror is to wake up one morning and discover that your high-school class is running the country." (Kurt Vonnegut) [From Mike Contino]

"I asked God for a bike, but I know God doesn't work that way. So I stole a bike and asked for forgiveness." (Unknown) [From Sandy Lemberg]

"The problem with a lot of educational reform activity is it's a lot of 'ready, fire, aim.'"

"Life's greatest gift is the opportunity to work hard at work worth doing."

"Who dares to teach must never cease to learn." (John Cotton Dana) [from Carol Brown]

"Enjoy life's journey, but leave no tracks." (Native American Commandment)

"I don't feel old. I don't feel anything until noon. Then it's time for my nap."

"Researchers usually find that students flourish where there is stability in the school, with an experienced staff, clear expectations, small classes, and a rich curriculum."

"Do good but don't expect to be remembered or celebrated after you (Interpretation of Last Native American Commandment above) [From a note from Loh Kok Khuan]

"My mechanic told me, "I couldn't repair your brakes, so I made your

"Do not regret growing older. It is a privilege denied to many."
"If you are really thankful, what do you do? You share." (W. Clement Stone) "Statistical significance and educational significance are often two completely different things. One child out of a thousand who does something uniquely different from other children has no statistical significance, but it may have huge educational significance. We need only look at the history of mathematics and science to note the tremendous impact that some 'statistically insignificant' individuals have had by thinking vastly differently, and daring to deviate from the norm of their times." (Michael de Villiers) "You never get a second chance to make a good first impression." (Head and Shoulders TV Commercial) "Middle age is when your classmates are so gray and wrinkled and bald they don't recognize you." "The frogs tend to forget that once they were tadpoles, too." (Korean Proverb) [From mathe 2000 selected papers book] "Real peace is liberty in place of tyranny, health instead of disease, hope instead of fear. It comes when people have the freedom to voice their views, choose their own leaders, feed their families, and raise healthy children." (Jimmy Carter, 39th President of the U.S.) [From literature from the Carter Center] "It is important to remember, in all efforts at improving the teaching of mathematics, that we are teaching human beings, and that what we are teaching them is a human activity with uses and with beauty and with surprises." (E.J. McShane, 1964) [Sent by Ginger Warfield, daughter] "Never wrestle with a pig. You'll just get dirty. And the pig loves it!" "Hospitality: making your guests feel like they're at home, even if you wish they were." (Unknown) [From Sandy Lemberg] "My people are destroyed for lack of knowledge." (Hosea 4:6) [From the Chronicle of Higher Education, December 7, 2012] "A stone in its place is like a mountain... ... but a mountain in the wrong place is just like a stone." 
(Turkish proverb) [Seen on EDDRA2 listserve, from Sue Ramlo] "There is no smallest among the small, and no largest among the large, but always something still smaller and something still larger." (Anaxagoras - ca. 500 BC - 428 BC) [Spelling correction from CH Candy to earlier posting] "The mathematician's patterns, like the painter's or the poet's must be beautiful; the ideas, like the colours or the words must fit together in a harmonious way. Beauty is the first test: there is no permanent place in this world for ugly mathematics." ("A Mathematician's Apology" (London 1941). [ From Bill Richardson; also from Steve Sugden and Melanie Parker] "Pythagoras walks into an airport, the TSA asks, "Hey buddy, got an Identity?" (From John Nord) "What sort of education will teach the young to hate war?" ((Virginia Woolf, Three Guineas) [From Brian Greer] "Teachers are the only professionals who have to respond to bells every forty-five minutes and come out fighting." (Frank McCourt (1930-2009), teacher and author) "Teaching is not a lost art, but the regard for it is a lost tradition." (Jacques Martin Barzun (born 1907), historian) [From Valerie Strauss] "I spend time on window ledges because I am scared of widths." "Helping people in need is a matter of fundamental principle, responsibility, righteousness and justice, not an act of charity." (Source Unknown) [From Yvelyne Germain-McCarthy] "A little health tip for you: I heard a banana-a-day is a good thing to help keep your colon clean ... it turns out you are supposed to (Dwight York) [From a friend on a greeting card] "Beauty is the first test: there is no permanent place in the world for ugly mathematics." (From Z-MNU Universitat Bayreuth calendar. The "Beauty..." quote is from G.H. Hardy's "A Mathematician's Apology." Hardy's "apology" is not an excuse or an "I am sorry" statement. In the ancient Greek sense it is about defending a position on something. 
In this case, Hardy was "defending" his life's devotion to research mathematics. Every budding mathematician reads it. It is totally inspirational. That was my former life before I met and began to understand Also, from Matt Wyneken: BTW1 - I just noticed another one of Hardy's quotes at this website: "No one has yet discovered any warlike purpose to be served by the theory of numbers or relativity, and it seems unlikely that anyone will do so for many years." He wrote that in 1941. Yet only a few decades later, however, RSA encryption was invented and now rules everything secret in the world, military, economic, etc. BTW2 - You also included Neil Armstrong's everlasting statement among your quotes. Armstrong and Buzz Aldrin landed on the Moon in 1969, with Michael Collins in support above, only some 60+ years after the invention of human flight (as noted from television's #1 comedy, The Big Bang Theory). BTW3 - by JPB ... If you are ever tooling down Interstate 65 south by Huntsville, Alabama, make time to visit the Rocket and Space
Official NOAA climate monitoring station with warm air conditioning exhaust blowing on temperature sensor. Courtesy: Dr. Roger Pielke, Sr. Also in the news the last two days is that the IPCC (the folks that won the Nobel Prize) has been wrong about increasing malaria due to global warming. A recent example is the case of malaria and climate. In the early days of global-warming research, scientists argued that warming would worsen malaria by increasing the range of mosquitoes. "Malaria and dengue fever are two of the mosquito-borne diseases most likely to spread dramatically as global temperatures head upward," said the Harvard Medical School's Paul Epstein in Scientific American in 2000, in a warning typical of many. Carried away by confirmation bias, scientists modeled the future worsening of malaria, and the Intergovernmental Panel on Climate Change accepted this as a given. When Paul Reiter, an expert on insect-borne diseases at the Pasteur Institute, begged to differ—pointing out that malaria's range was shrinking and was limited by factors other than temperature—he had an uphill struggle. "After much effort and many fruitless discussions," he said, "I…resigned from the IPCC project [but] found that my name was still listed. I requested its removal, but was told it would remain because 'I had contributed.' It was only after strong insistence that I succeeded in having it removed." Yet Dr. Reiter has now been vindicated. In a recent paper, Peter Gething of Oxford University and his colleagues concluded that widespread claims that rising mean temperatures had already worsened malaria mortality were "largely at odds with observed decreasing global trends" and that proposed future effects of rising temperatures are "up to two orders of magnitude smaller than those that can be achieved by the effective scale-up of key control measures." Entire story here. So, while many of us sweat, the threat of catastrophic global warming continues to cool.
Three sets of test procedures are used: the first only inserts n random integers into the tree / hash table. The second test first inserts n random integers, then performs n lookups for those integers and finally erases all n integers. The last test only performs n lookups on a tree pre-filled with n integers. All lookups are successful. These three test sequences are performed for n from 125 to 4,096,000, where n is doubled after each test run. For each n, the test cycles are run until a total of 8,192,000 items have been inserted or looked up. This way the measured speed for small n is averaged over up to 65,536 sample runs. Lastly, it is a purpose of the test to determine a good node size for the B+ tree. Therefore the test runs are performed with different slot sizes; both inner and leaf nodes hold the same number of items. The number of slots tested ranges from 4 to 256 and therefore yields node sizes from about 50 to 2,048 bytes. This requires that the B+ tree template be instantiated for each of the probed node sizes. The speed test source code is compiled with g++ 4.1.2 using -O3 -fomit-frame-pointer. The results are displayed below using gnuplot. All tests were run on a Pentium4 3.2 GHz with 2 GB RAM. A high-resolution PDF plot of the following images can be found in the package at speedtest/speedtest.pdf. The first two plots above show the absolute time measured for inserting n items into seven different tree variants. For small n (the first plot) the speed of the red-black tree and the B+ tree are very similar. For large n the red-black tree slows down, and for n > 1,024,000 items the red-black tree requires almost twice as much time as a B+ tree with 32 slots. The STL hash table performs better than the STL map but not as well as the B+ tree implementations with higher slot counts. The next plot shows the insertion time per item, which is calculated by dividing the absolute time by the number of inserted items. Notice that insertion time is now in microseconds.
The plot shows that the red-black tree reaches some limitation at about n = 16,000 items. Beyond this item count the B+ tree (with 32 slots) performs much better than the STL multiset. The STL hash table resizes itself at defined intervals, which leads to non-linearly increasing insert times. The last plot's goal is to find the best node size for the B+ tree. It displays the total measured time of the insertion test depending on the number of slots in inner and leaf nodes. Only runs with more than 1 million inserted items are plotted. One can see that the minimum is around 65 slots for each of the curves. However, to reduce unused memory in the nodes, the most practical slot size is around 35. This amounts to total node sizes of about 280 bytes. Thus in the implementation a target size of 256 bytes was chosen. The following two plots show the same aspects as above, except that not only insertion time was measured. Instead, in the first plot a whole insert/find/delete cycle was performed and measured. The second plot is restricted to the lookup / find part. The results for the trees are in general accordance with those of insertion alone. However, the hash table implementation performs much faster in both tests. This is expected, because a hash table lookup (and deletion) requires fewer memory accesses than a tree traversal. Thus a hash table implementation will always be faster than trees. But of course hash tables do not store items in sorted order. Interestingly, the hash table's performance is not linear in the number of items: its peak performance is not with a small number of items, but with around 10,000 items. And for item counts larger than 100,000 the hash table slows down: lookup time more than doubles. However, after doubling, the lookup time does not change much: lookup on tables with 1 million items takes approximately the same time as with 4 million items.
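The test cycle described above can be sketched in a few lines (a Python illustration of the procedure only; the actual benchmark is the C++ harness compiled as described, and the built-in `set` here merely stands in for the tree / hash table under test):

```python
import random
import time

# Illustrative sketch of one insert-only test cycle from the procedure
# above. A Python set stands in for the tree / hash table under test;
# the real benchmark instantiates the C++ B+ tree template per slot size.
def insert_test(n, seed):
    """Insert n random integers, return elapsed wall-clock seconds."""
    random.seed(seed)
    container = set()
    start = time.perf_counter()
    for _ in range(n):
        container.add(random.randrange(2**31))
    return time.perf_counter() - start

def run_series(total=8_192_000, n_max=4_096_000):
    """Average each n over enough runs to insert `total` items overall."""
    results = {}
    n = 125
    while n <= n_max:
        runs = total // n                    # up to 65,536 runs for small n
        elapsed = sum(insert_test(n, run) for run in range(runs))
        results[n] = elapsed / total * 1e6   # microseconds per item
        n *= 2
    return results

# Scaled-down demonstration; the defaults reproduce the counts in the text
# but take far longer to run.
demo = run_series(total=8_000, n_max=4_000)
```

The per-item figure computed at the end corresponds to the "insertion time per item" plot discussed above.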
But electrons in two dimensions can also behave as classical particles that interact only through the mutual repulsion of their negative charges. This occurs when they are spread much farther apart and has been difficult to achieve in the lab, so researchers are still seeing new phenomena. David Rees of RIKEN, a research institute in Wako, Japan, and his colleagues studied this regime using electrons floating above a liquid helium surface. At low temperatures, the electrons glide rapidly far above the surface--about 11 nanometers--and barely interact with it. At temperatures somewhat below 1 Kelvin, the repulsion between electrons generates a two-dimensional solid state known as a Wigner crystal. At higher temperatures the electrons act like a liquid. Of course, this is significantly different from the regime where QM effects kick in, whereby we get the fractional charge / quantum Hall effect. It is interesting to note that we usually think that getting quantum behavior requires difficult conditions. Here, it seems that it is difficult to see classical behavior clearly when the system has such a tendency to behave quantum mechanically. D.G. Rees et al., PRL v.106, p.026803 (2011).
The Earth has one Moon, but it's not the only rocky thing orbiting us. Posted: December 21, 2011. I spend far too much time at pub quizzes. Perhaps it's because I'm an irritating know-it-all or I just like a vaguely intellectual pretense for going to the pub. One of the more geeky parts of it is correcting the quiz-master when they are wrong (Reykjavik is north of Helsinki and Blazin Squad did not do the original of Crossroads, etc.). One such wrong answer came a week or two back, when it was claimed the Earth has four moons. Additional moons of the Earth have long been claimed and were popularised a few years back when QI claimed that a co-orbital body called Cruithne was a second moon. As far as the definition of stable, natural bodies orbiting the Earth goes, there is only one, although it would be entertaining if schoolchildren were taught about the wonderfully named Wahrhafter Wetter- und Magnet-Mond (or veritable weather and magnetic moon). However there are sometimes other bodies that briefly orbit the Earth. The Solar System is a crowded place. Besides the eight planets and numerous dwarf planets there are millions of asteroids. Some of these have orbits that bring them close to the Earth. While most of these whizz by us, some are in orbits which mean that they can gravitationally interact with the Earth and the Moon and go into orbit around it. These orbits are not stable and the objects will eventually be kicked out of the Earth-Moon system. To date only one known object has been discovered to have undergone such a process. Known as 2006 RH120, it is a small body, only 3-5 m across. In 2007-2008 it made four orbits of the Earth at a distance more than twice as far away as the Moon. But how often do objects like this perform their temporary dance with the Earth? Well, a new paper has been looking into the rate of capture and when such events happen. The authors use a simulation of how asteroids pass through the Earth-Moon system.
They select a series of objects with orbital elements in the range where they could possibly be captured and then examine how they would be affected by coming close to the Earth and Moon. Previously it was thought that a close encounter with the Moon gave objects a gravitational tug allowing them to be captured by the Earth. However, the new model finds that while the Moon does play a role in the capture, none of their simulated near-Earth objects came close enough to the Moon to get a strong enough tug for capture. The model also found that capture is most likely at aphelion and perihelion (when the Earth is furthest from and closest to the Sun during its orbit). The same capture probability peaks were previously noted for temporary satellites of Jupiter. It's also possible that the Moon itself could capture asteroids and get its own temporary satellites. However, no objects in the simulation managed to complete an orbit of the Moon. Objects in unstable orbits around the Earth will of course have the possibility of entering the atmosphere and becoming meteors. About 1% of objects in the simulation impacted the Earth, none the Moon. This means that a temporarily captured object is 3.5 times more likely to strike the Earth than a near-Earth object in a similar orbit. In total the authors estimate that a tenth of one percent of objects striking the Earth were in temporary orbit around us. In all, the authors estimate, based on their model and the fact that there isn't a large population of observable temporary satellites, that at any one time there is one object of approximately one metre in size temporarily orbiting the Earth, along with potentially other smaller bodies. So the Earth only has one Moon, but it's not the only natural object orbiting us. Granvik, M., Vaubaillon, J., & Jedicke, R. (2011). The population of natural Earth satellites. Icarus. DOI: 10.1016/j.icarus.2011.12.003
An important concept that comes from sequences is that of series and summation. Series and summation describes the addition of the terms of a sequence. There are different types of series, including arithmetic and geometric series. Series and summation follows its own set of notation that is important to memorize in order to understand homework problems. So a series is just the summation of a sequence. A sequence is just a bunch of numbers in a row; a series is what happens when we add up all those numbers together. Okay? So before me I have a general term for a sequence: a sub n is equal to n squared minus 1. And first we're asked to find the first four terms. Okay? So in order to find the first term, we would find a sub 1, which happens when we plug in 1. 1 squared minus 1, that's just 0. So our first term is going to be 0. To find the second term we plug in 2. a sub 2 is equal to 2 squared, 4 minus 1, which is going to give us 3. Third term, we repeat: a sub 3 is 3 squared, 9 minus 1 is 8. And the fourth term, a sub 4, plug in 4: 4 squared, 16 minus 1 is 15. So this right here is a sequence. It's 4 numbers written in order with commas in between. It's just a collection of numbers. Find the sum of those first 4 terms. So basically we already found the 4 terms; all we have to do is add them together. 0 plus 3 is 3, plus 8 is 11, plus 15 is 26. So 26 is then the series, okay? The way I remember it is: series is a shorter word, therefore your answer should be shorter, one number. Sequence is a longer word; it's going to be a collection of data, a collection of numbers, okay? So basically all the series is is a summation of the sequence.
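The worked example above can be checked in a couple of lines (a Python sketch added for illustration; the lesson itself uses no code):

```python
# Terms of the sequence a_n = n^2 - 1 for n = 1..4, then their sum (the series).
terms = [n**2 - 1 for n in range(1, 5)]
series = sum(terms)
print(terms, series)   # [0, 3, 8, 15] 26
```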
Surveyor soars toward red planet Satellite to map Martian surface November 7, 1996 Web posted at: 1:30 p.m. EST CAPE CANAVERAL, Florida (CNN) -- NASA launched a 10-month, unmanned mission to Mars Thursday, the first step in a multi-spacecraft bid to determine if there is -- or ever was -- life on the fourth rock from the sun. Global Surveyor, the first of 10 NASA probes bound for Mars over the next decade, replaces one that mysteriously disappeared three years ago. The spacecraft soared aloft at noon EST atop a Delta 2 rocket launched from Cape Canaveral Air Station in Florida. The launch, originally scheduled for Wednesday, was postponed 24 hours. Surveyor will take 10 months to make the 470-million-mile trip and another six months to ease into a mapping orbit. Later, it will dip into the Red Planet's thin atmosphere, using its wing-like solar panels as brakes. Surveyor will study the Martian surface and atmosphere, but will not land. More to come It is the first of three spacecraft, two U.S. and one Russian, destined for Mars this year. The next launch is Mars Pathfinder, equipped with a robotic ground vehicle, that is scheduled for liftoff December 2 and will land July 4, 1997, two months ahead of Surveyor. NASA plans to send pairs of spacecraft every 26 months through 2005 but has no firm plans for a manned mission to Mars. Thursday's launch comes amid controversial revelations by scientists of possible ancient life on Mars. "One of our goals is ultimately to return a sample of the surface of the planet itself," NASA's Wes Huntress told CNN in a live interview. Scientists hope the sample has "evidence on whether or not there was early life on the planet," Huntress said. Observer: lost in space Surveyor was designed and built in record time to replace NASA's $1 billion Mars Observer probe, which spun out of control -- for reasons unknown -- just days before it was due to enter the planet's orbit in 1993.
Surveyor carries copies of five of the seven scientific instruments on its ill-fated predecessor, but at $215 million is much less expensive. It was made mostly from leftover parts from Observer. The problem: where to look? From an altitude of 230 miles (365 km), its telephoto camera will see objects on the surface as small as a compact car. By the end of one Martian year -- 687 Earth days -- 99 percent of the planet will have been mapped. The probe does not carry any instruments that could directly detect evidence of life, but it will scout out sites for a future robotic mission to recover samples of rock. Scientists must decide the best places to look "before we decide which of those interesting rocks to bring back." "We'll be able to identify areas that might have been conducive to past life," said Surveyor Mission Manager Glenn Cunningham. Correspondents John Zarrella, John Holliman and Reuters contributed to this report. © 1996 Cable News Network, Inc. All Rights Reserved.
Arsenic

Atomic Number: 33
Atomic Symbol: As
Atomic Weight: 74.9216
Electron Configuration: 2-8-18-5
Melting Point: 817°C (at 28 atm)
Boiling Point: sublimes at 613°C
Uses: LEDs, semiconductors; a deadly poison

History: (L. arsenicum; Gr. arsenikon, yellow orpiment, identified with arsenikos, male, from the belief that metals were different sexes; Arabic az-zernikh, the orpiment, from Persian zerni-zar, gold.) It is believed that Albertus Magnus obtained the element in 1250 A.D. Schroeder published two methods of preparing the element.

Sources: Elemental arsenic occurs in two solid modifications: yellow, and gray or metallic, with specific gravities of 1.97 and 5.73, respectively. Arsenopyrite (FeAsS) is the most common mineral, from which, on heating, the arsenic sublimes, leaving ferrous sulfide.

Properties: The element is a steel gray, very brittle, crystalline, semimetallic solid; it tarnishes in air, and when heated is rapidly oxidized to arsenous oxide with the odor of garlic.

Handling: Arsenic and its compounds are poisonous. Arsenic is used in bronzing, pyrotechny, and for hardening and improving the sphericity of shot.

Compounds: The most important compounds are white arsenic, the sulfide, Paris green, calcium arsenate, and lead arsenate; the last three have been used as agricultural insecticides and poisons. Marsh's test makes use of the formation and ready decomposition of arsine. Arsenic is finding increasing use as a doping agent in solid-state devices such as transistors. Gallium arsenide is used as a laser material to convert electricity directly into coherent light.
Technology Transfer. Most consumer attention to oysters and mussels has centered on their taste, beautiful by-products or aphrodisiac effects; however, their adhesive properties are what caught the attention of Jonathan Wilker, PhD, associate professor of chemistry, and his research team at Purdue University. Wilker's team has been studying marine biological adhesives for years and has found that the two mollusks produce adhesives that form a non-toxic, strong bond in wet environments. Although Wilker mainly has worked to develop synthetic versions of the adhesives for medical use, he is investigating other applications, which may include personal care. Wilker has studied the adhesives produced by various marine entities, including Mytilus edulis (the blue mussel) and Crassostrea virginica (the Eastern oyster), an oyster popular in the human diet. He notes one similarity. "Mussels, oysters and barnacles all use cross-linked proteins (long biological polymers) to make their adhesive," said Wilker. The difference, however, is in the composition of the adhesive. To study the adhesive, Wilker and his team cut open the shells of oysters and observed the interface where they were attached; he compared this with separate, unattached portions of shell as a control. "[Since] the oyster's adhesive is comprised of materials similar to the shell, we speculate the cement comes from the same place, system or organ as the shell," he continued. Both the oyster shell and adhesive consist of calcium carbonate and protein as starting materials, but the shell is mostly calcium carbonate with a small amount of protein, whereas there is more protein and less calcium carbonate in the adhesive. "In the cement, the extra reactivity is added to the proteins so they crosslink together," Wilker explained. The adhesive produced by oysters is 10-15% protein and 85-90% calcium carbonate (chalk), which according to Wilker results in a hard, inorganic, cement-like material.
Unlike oysters, Wilker notes that mussels separately produce their adhesive and shell. “If you crack open a mussel, a separate organ [is present that] produces the adhesive,” said Wilker. He added that the adhesive produced by mussels is about 99% proteins and more like soft organic glue.
Dewar flask [for Sir James Dewar], container after which the common thermos bottle is patterned. It consists of two flasks, one placed inside the other, with a vacuum between. The vacuum prevents the conduction of heat from one flask to the other. For greater efficiency the flasks are silvered to reflect heat. The substance to be kept hot or cold, e.g., liquid air, is contained in the inner flask. See low-temperature physics. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
A couple is a system of forces whose resultant has zero magnitude and yet has a moment sum. Geometrically, a couple is composed of two equal forces that are parallel to each other and act in opposite directions. The magnitude of the couple is given by

C = F d

where F is the common magnitude of the two forces and d is the moment arm, or the perpendicular distance between the forces. A couple is independent of the moment center; thus, its effect is unchanged in the following conditions.
- The couple is rotated through any angle in its plane.
- The couple is shifted to any other position in its plane.
- The couple is shifted to a parallel plane.
In a case where a system is composed entirely of couples in the same plane or parallel planes, the resultant is a couple whose magnitude is the algebraic sum of the original couples.
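As a quick numerical check of C = F d (the numbers below are illustrative, not from the article):

```python
# Two equal, opposite 10 N forces whose lines of action are 0.5 m apart
# form a couple; its magnitude is force times moment arm.
F = 10.0        # newtons
d = 0.5         # metres (perpendicular distance between the forces)
C = F * d       # couple magnitude in newton-metres
print(C)        # 5.0
```

Shifting or rotating the pair within its plane leaves F and d, and hence C, unchanged, which is exactly the independence property listed above.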
Hurricane Philippe and tropical storm Rita formed in the North Atlantic on Sunday, making them the eighth hurricane and seventeenth tropical storm of the season. The US National Hurricane Center (NHC) expects Rita to become a full-fledged hurricane this week, amid fears that it could affect areas hard-hit by deadly Hurricane Katrina. Now in the southern Bahamas, Rita is expected to continue moving west, with its most likely course taking it between Cuba and southern Florida into the Gulf of Mexico. Hurricane warnings were issued late on Sunday afternoon for parts of Cuba and the Florida Keys. Current projections by the NHC indicate the storm will intensify over the Gulf and eventually hit southern Texas Saturday morning, but the paths of hurricanes are notoriously hard to predict days in advance. An obvious concern is if its path should bend north, toward hurricane-shattered New Orleans and surrounding areas of Louisiana, Mississippi and Alabama. Philippe formed further to the east of Rita and is following a north-northwest course in the Atlantic. It is not expected to approach land before nearing Bermuda on Saturday. At the start of August, the NHC predicted an exceptionally busy North Atlantic hurricane season, with 18 to 21 tropical storms, 9 to 11 of which would become hurricanes. So far that forecast has been borne out, and it could yet prove an underestimate. The record tropical storm season, 1933, saw 21 cyclones. If that number is exceeded, the NHC will use up its list of names and turn to Greek-letter designations. The record hurricane season, 1969, saw 12 hurricanes. Katrina's devastation of New Orleans and the Gulf Coast has already made this the most expensive hurricane season ever for the US, in terms of life lost and damaged infrastructure.
Assume 14 < n < 30. Make boxes labelled 16-n to 30-n. Let Si be the set of the first i elements. Let si be the sum of the elements of Si. If si is less than or equal to 30-n, put Si in the box labelled si. Otherwise, put it in the box labelled si-n. There are 15 boxes, and 16 subsets, so at least one box has two subsets. Clearly, a box with two subsets must have one, say Si, with si = the box label, and one subset Sj with sj-n = the box label, with Sj containing Si, obviously. So Sj - Si is a subset whose elements have sum n. The complement of this set with respect to the full set of 16 elements has sum 30-n. You can probably express the above in 4 lines if you're especially terse.
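The argument is constructive, so it can be run directly (a Python sketch; the 16-element multiset used in the demo is an illustrative assumption, since the original problem statement giving the 16 elements summing to 30 is not shown here):

```python
# Executable version of the pigeonhole argument above: prefix sums S_i
# collide in the boxes, and the prefix difference is a subset summing to n.
def subset_with_sum(elems, n):
    """Return a run of consecutive elements of `elems` summing to n."""
    assert len(elems) == 16 and sum(elems) == 30 and 14 < n < 30
    assert all(x > 0 for x in elems)    # needed so prefix sums increase
    boxes = {}          # box label -> index i of the prefix set S_i
    s = 0               # s_i, the sum of the first i elements
    for i, x in enumerate(elems, start=1):
        s += x
        # Put S_i in box s_i if s_i <= 30-n, otherwise in box s_i - n.
        label = s if s <= 30 - n else s - n
        if label in boxes:
            j = boxes[label]            # then s_i - s_j = n, with j < i
            return elems[j:i]           # elements j+1 .. i sum to n
        boxes[label] = i
    return None   # unreachable when the pigeonhole bound applies

demo = subset_with_sum([1] * 14 + [6, 10], 17)
```

A collision can only pair an unshifted prefix s_j with a shifted one s_i (equal prefixes are impossible since the elements are positive), so the returned run always sums to exactly n.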
Radiation & fractionation. I have been reading up on radiation, but have some trouble fully understanding fractionation. Is there any simple approximation of the relation between fractionation and recovery, or some sort of adjusted effective dose? A single continuous exposure of 1 Gy over 1 hour, compared to 20 acute doses of 50 mGy (which is still an absorbed dose of 1 Gy over 1 hour): what exactly is the difference/relation? And can it be modelled/approximated? Would the fractionation effectively be less (either expressed as an effective dose [Sv] or an arbitrary constant) than the continuous exposure? So the continuous exposure would be (assuming photons) 1 Sv, but the fractionation actually somewhat less than 1 Sv (due to the tissue/cells being able to partially recover)? Or is it not possible to express it in such a manner? I guess it's not a very well understood field? I've had trouble finding more practical or straightforward material. Everything is pretty vague -- "fractionation is less damaging as it spreads out the radiation" -- but no hard figures or formulas. No real explanation.
More About Ellipses Steven Dutch, Natural and Applied Sciences, University of Wisconsin - Green Bay First-time Visitors: Please visit Site Map and Disclaimer. Use "Back" to return here.
Find the Center of an Ellipse
Sometimes you have an ellipse but don't know the center. Finding the center is easy.
- Draw two arbitrary parallel lines cutting chords across the ellipse.
- Bisect the chords and draw a line through the midpoints of the chords.
- Bisect the resulting line. The bisecting point is the center.
The proof is to imagine doing this construction on a circle, then shearing the circle out of shape into an ellipse.
Find the Axes of an Ellipse
Since you can easily find the center of an ellipse, finding the axes is just as easy:
- Given an ellipse with unknown axes and center, find the center as above.
- Construct a circle with center at the center of the ellipse and intersecting the ellipse at four points.
- Bisect the arcs of the circle (not shown), or
- Construct the rectangle joining the points where the ellipse and circle intersect.
- Construct the perpendicular bisectors of the sides of the rectangle, or connect opposing pairs of arc bisectors.
Find the Foci of an Ellipse
Given the major and minor axes of an ellipse, you can always find the foci. You need the foci for some construction methods. Just draw radii of length a from the ends of the minor axis. Given the foci, however, you can't uniquely determine the axes. You need additional information such as the length of one axis. However, the major axis is always along the line through the foci and the minor axis always perpendicularly bisects the line between the foci.
Created 28 December 1998, Last Update 30 January 2012 Not an official UW Green Bay site
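The center construction rests on the fact that midpoints of parallel chords lie on a diameter, hence on a line through the center. That can be checked numerically (a Python sketch for an axis-aligned ellipse ((x-cx)/a)^2 + ((y-cy)/b)^2 = 1; the helper name and test numbers are illustrative assumptions, not from the page):

```python
# Midpoints of two parallel chords are collinear with the ellipse's center.
def chord_midpoint(a, b, cx, cy, slope, t):
    """Midpoint of the chord cut from the ellipse by the line y = slope*x + t."""
    # Substituting the line into the ellipse equation gives a quadratic
    # A*x^2 + B*x + C = 0; the midpoint's x is the mean of the roots, -B/(2A).
    A = 1 / a**2 + slope**2 / b**2
    B = -2 * cx / a**2 + 2 * slope * (t - cy) / b**2
    xm = -B / (2 * A)
    return xm, slope * xm + t

a, b, cx, cy = 3.0, 2.0, 1.0, -0.5
m1 = chord_midpoint(a, b, cx, cy, 0.5, 0.0)   # midpoints of two
m2 = chord_midpoint(a, b, cx, cy, 0.5, 0.3)   # parallel chords
# Cross product testing that the center (cx, cy) lies on the line m1-m2:
cross = (m2[0] - m1[0]) * (cy - m1[1]) - (m2[1] - m1[1]) * (cx - m1[0])
```

The cross product comes out at (numerically) zero, which is why bisecting the chord drawn through the two midpoints lands on the center.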
A baseball speeds from the hands of a pitcher, a slave to Newton’s laws. But in the brain of the batter who is watching it, something odd happens. Time seems to dawdle. The ball moves in slow motion, and becomes clearer. Players of baseball, tennis and other ball sports have described this dilation of time. But why does it happen? Does the brain merely remember time passing more slowly after the fact? Or do experienced players develop Matrix-style abilities, where time genuinely seems to move more slowly? According to five experiments from Nobuhiro Hagura at University College London, it’s the latter. When we prepare to make a movement – say, the swing of a bat – our ability to process visual information speeds up. The result: the world seems to move slower. At first glance, this might seem to contradict a now-classic experiment by David Eagleman. He threw volunteers off a tall fairground ride and asked them to stare at a special watch, to see if their perception of time would slow. It didn’t. They merely remembered the experience as being long and drawn out afterwards. (See my earlier post for the details.) But there’s a critical difference between the two studies. Eagleman studied time perception while people were actually undergoing a crisis—in this case, falling to their possible doom. But Hagura showed that time appears more leisurely before an event, rather than during it—when we’re preparing to move, rather than moving. Hagura first asked volunteers to press a key for as long as a white disc appeared on a screen. The disc would then be replaced by a hollow target. In some trials, the volunteers had to release their key and touch the target. In others, they were told to keep pressing the key. In every case, they had to say how long the white disc stayed up for, compared to all the previous trials in the experiment. Hagura found that the volunteers deemed the durations to be longer if they were preparing to move, than if they were planning to keep still. 
Perhaps the volunteers who were about to reach out were just more excited or attentive? Not so. When Hagura changed the task from pressing (or not pressing) the target, to naming (or ignoring) a letter, the time-slowing effect vanished. Preparing to move makes the difference, rather than just preparing for any old task. In a third variation, the white disc was replaced by two possible targets instead of just one. In some trials, the disc had a line that told the volunteers which of the two targets was correct, allowing them to prepare the right movement. In other trials, there was no line, and the volunteers had to make their move when the two targets appeared. As you might have guessed by now, they thought the white disc stayed up longer if they were preparing to move their arm in a specific direction, but not if they were simply waiting. These three sets of results support the idea that time moves more slowly when we prepare an action. But they could also be explained in the same way that Eagleman’s results were: Time only seemed to pass more slowly because the volunteers remembered it doing so. But two final experiments suggest that, instead, preparing to move actually slows “the flow of visual experience”. First, Hagura replaced the solid white target with one that flickered at different frequencies. The volunteers had to say whether it was flickering faster or slower than usual, compared to previous trials. If they were preparing to hit the screen, they said that the high-frequency flickers were slower than they actually were. Second, Hagura showed his volunteers a stream of rapidly flashing letters, while they held a key. Each letter appeared for just 35 milliseconds, and the whole series went by in less than a second. Somewhere in the stream, there was a C or a G, but never both. Once the sequence had stopped, as before, the volunteers either kept holding their key, or touched the screen. Their task was to say whether they had seen a C or a G. 
If the volunteers were preparing to reach out, they got the right answer about 66 percent of the time. If they kept still, their success rate was just 59 percent. By readying their arms to touch the screen, they were better able to spot their target amid the zooming letters. This difference was particularly marked if the C or G appeared towards the end of the flashing sequence – the longer the volunteers spent preparing to move, the slower time seemed to pass. How does the slowing effect actually work? We don’t know. Hagura notes that there are certainly connections between the parts of the brain that encode the passage of time, and those that prepare sequences of movement. The details, however, are still unknown. Why does the effect happen? Hagura argues that speeding up our powers of perception allows us to change, tweak and halt our course of action on the fly. He writes: “As expert ballgame players assert, being maximally prepared may allow ‘more time’ to perfect the hit.” That would be a clear benefit, but Andrew Welchman, who studies perception at the University of Birmingham, wonders if there are any drawbacks. “You never get anything in the brain for free, so if you get better at one moment in time, you should get worse at another,” he says. “Take someone who moves a lot versus someone who moves little. They should both be calibrated to the same external time, so the one who moves a lot needs to have more ‘downtime’ to keep in step.” A bout of Neo-like bullet-time should be followed by a burst of perceptual sluggishness. For example, Welchman says that when we move our eyes around, our visual sensitivity plummets immediately before, during and after the movement. This is called saccadic suppression. The standard interpretation is that we’re “filtering out the junk” – the “smeary visual signals” that we get when our eyes move too quickly. 
“But framed in light of this paper, it might be a way of resetting the clock so that the person stays calibrated to the visual world around them,” says Welchman. Reference: Hagura, Kanai, Orgs & Haggard. 2012. Ready steady slow: action preparation slows the subjective passage of time. Proceedings of the Royal Society B http://dx.doi.org/10.1098/rspb.2012.1339
<urn:uuid:5db5cf11-115e-49eb-aa80-55ef0dca96ad>
3.3125
1,364
Nonfiction Writing
Science & Tech.
58.192157
Because uranium-235 is only 0.7 percent of naturally occurring uranium, its supply is fairly limited and could well last for only about 50 years of full-scale use. The other 99 percent of the uranium can also be utilized if it is first converted into plutonium by neutron bombardment. The production of plutonium can be carried out in a breeder reactor (a nuclear reactor designed to produce nuclear fuel as it produces energy), which not only produces energy like other reactors but is designed to allow some of the fast neutrons to bombard uranium-238, producing plutonium at the same time. More fuel is then produced than is consumed. Breeder reactors present additional safety hazards beyond those already outlined. They operate at higher temperatures and use very reactive liquid metals such as sodium in their cooling systems, and so the possibility of a serious accident is higher. In addition, the large quantities of plutonium which would be produced in a breeder economy would have to be carefully safeguarded. Plutonium is an α emitter and is very dangerous if taken internally. Its half-life (the time for half of a sample to undergo radioactive decay) is 24 000 years, and so it will remain in the environment for a long time if dispersed. Moreover, plutonium-239 can be separated chemically from fission products (rather than by the much more expensive gaseous diffusion used to concentrate uranium-235) and used to make bombs. Such a material will obviously be attractive to terrorist groups, as well as to countries which are not currently capable of producing their own atomic weapons.
<urn:uuid:81fa5386-755b-45d4-a2c0-8d9005faa120>
3.40625
375
Knowledge Article
Science & Tech.
35.239351
Our knowledge concerning the surface of Venus comes from a limited amount of information obtained by the series of Russian Venera landers, and primarily from extensive radar imaging of the planet. The radar imaging of the planet has been performed both from Earth-based facilities and from space probes. The most extensive radar imaging was obtained from the Magellan orbiter in a 4-year period in the early 1990s. As a consequence, we now have a detailed radar picture of the surface of Venus. The adjacent animation shows the topography of the surface as determined using the Magellan synthetic aperture radar (black areas are regions not examined by Magellan). An MPEG movie (303 kB) of this animation is also available. Much of the surface of Venus appears to be rather young. The global data set from radar imaging reveals a number of craters consistent with an average Venus surface age of 300 million to 500 million years. There are two "continents", which are large regions several kilometers above the average elevation. These are called Ishtar Terra and Aphrodite Terra. They can be seen in the preceding animation as the large green, yellow, and red regions indicating higher elevation near the equator (Aphrodite Terra) and near the top (Ishtar Terra). |Hemispheres of Venus (Ref)| The center image (a) is centered at the North Pole. The other four images are centered around the equator of Venus at (b) 0 degrees longitude, (c) 90 degrees east longitude, (d) 180 degrees east longitude and (e) 270 degrees east longitude. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. (Here is a more extensive discussion of these hemispheric views.) |A Volcano (Ref)||Apparent Lava Flows (Ref)| In all of these radar images you should bear in mind that bright spots correspond to regions that reflect more radar waves than other regions. 
Thus, if you could actually see these regions with your eyes, the patterns of brightness and darkness would probably not be the same as in these images. However, the basic features would still be the same. There are rift valleys as large as the East African Rift (the largest on Earth). The image shown below illustrates a rift valley in the West Eistla Region, near Gula Mons and Sif Mons. |Rift valley on Venus| The perspective in cases like this is synthesized from radar data taken from different positions in orbit. The East African Rift on Earth is a consequence of tectonic motion between the African and Eurasian plates (the Dead Sea in Israel is also a consequence of this same plate motion). Large rift valleys on Venus appear to be a consequence of more local tectonic activity, since the surface of Venus still appears to be a single plate. |A Field of Craters||The Largest Crater (Ref)| |The surface of Venus from Venera 14 (Ref)|
<urn:uuid:98f84e8a-c73e-4cf2-9c53-4b613f94b23a>
4.34375
622
Knowledge Article
Science & Tech.
38.611933
A link from one of our readers (thanks Ashley!) pointed us to a story on MSNBC about a very large Lion’s Mane jellyfish (Cyanea capillata) that broke apart and stung up to 100 people on a New Hampshire beach last Wednesday. Lion’s Manes can get very big: their bell can be over 3 feet across. Their tentacles, though, are another story and quite intimidating! A small Lion’s Mane can have a tentacle trail 10 feet long. A much larger one may have over 150 tentacles trailing over 30 feet behind it! So how can jellyfish sting if they break apart or are dead and washed up on the beach? The tiny stinging cells, called nematocysts, can be thought of like a mouse trap. Once you set the mouse trap it only needs a trigger to do its damage. It doesn’t need any outside help to be maintained. It has just one purpose: to sit and wait for an unfortunate victim to trigger the hard-wired response that millions of years of evolution have refined into a potent venom delivery system. Much like the mouse trap, once set it does not let go easily. From MSNBC/LiveScience writer Jeanna Bryner: Though not a common occurrence, marine biologist Sean Colin says with such a large jellyfish, and so many trailing tentacles (not to mention those that break off in the water), the occurrence is feasible. “It’s certainly not common, but it’s certainly in the realm of possibility, because they do have so many tentacles if they’re that large. If they’re broken up they could be all over the place,” said Colin, who is at Roger Williams University in Bristol, R.I. Profile of a giant This species is typically found in the cooler regions of the Pacific Ocean, Atlantic Ocean, North Sea and Baltic Sea. And they rarely show up on this beach. “I haven’t seen anything like this in my life,” said Brian Warburton, who has been with the New Hampshire State Parks department for six years. All the action transpired in about 20 minutes, when Warburton and his colleagues administered first aid (vinegar treatment). 
“There wasn’t time to sit and measure this thing. We just got rid of it,” Warburton told LiveScience. “Think about a glob of Jell-O you’re trying to pick up with two hands,” he said, explaining the need for a pitchfork to pick it up. Nematocysts are proteinaceous structures, not living cells or organelles. They discharge extremely rapidly and work by building an immense amount of pressure inside the cell (up to 15 MPa or 2176 lbs/in2) by storing oodles of calcium ions. When discharged (see above), the ions are rapidly ejected into the surrounding cytoplasm, setting off the chain of events resulting in a painful sting. Research by Nüchter and colleagues measured the escape velocity and kinetics of nematocyst discharge in the freshwater hydrozoan, Hydra. The steps above took place during 700 nanoseconds, creating an acceleration of 5,410,000 g! Not all nematocysts are filled with venom, though, but that is not a chance you should be willing to take. Since the stinging cells don’t care whether their creator is alive (at least over the shorter term; the protein does degrade rapidly) and can still do a lot of damage on their own, it’s always a good idea to approach a beach jellyfish with caution and your flippy-floppies on. It’s very important to make sure your kids know never to touch a jellyfish, unless with a stick from a safe distance. As I tell my 3 children, jellyfish are pretty from a distance! Nüchter, T., Benoit, M., Engel, U., Özbek, S., & Holstein, T. (2006). Nanosecond-scale kinetics of nematocyst discharge. Current Biology, 16 (9). DOI: 10.1016/j.cub.2006.03.089
<urn:uuid:f918762f-3b67-4ad1-afde-567aa6fdb2e8>
2.921875
863
Comment Section
Science & Tech.
63.064304
The standard 'scientific' explanation is that the carbon-carbon bonds in diamond are too stable: no enzyme would be able to overcome the energy barrier necessary to disassemble diamond. However, diamond is something of a special case for carbon compounds (though fullerenes are probably pretty inedible too). There are organisms that 'eat' rocks, reduce gold salts to elemental gold, and have other improbable diets (from the point of view of sugar-eaters). There are bacteria that derive energy from sodium and some that produce hydrogen or eat methane. Many organisms can synthesise silica polymers, and I don't in principle see why you couldn't engineer bacteria to make silicon chips. The message is that what Life can do is more surprising than what it can't.
<urn:uuid:38c25a29-31f8-41e3-857d-ec29ed46d1b5>
2.9375
165
Personal Blog
Science & Tech.
36.135085
If you remember all of the big genome sequencing projects of the 90s and the early aughts: they’ve been continuing, and the amount of raw data they give back to us has accelerated exponentially. However, those of us trying to understand the biological realities of what all of those sequences actually mean were very quickly left behind, and we fall further and further behind as sequencing technology advances faster than we could ever hope to keep up with. The central problem is that while it turns out that we can get computers to do our pipetting for us if we pay engineers enough – we can’t get computers to do our thinking for us. Like mathematicians with some of the fanciest calculators imaginable, we can get the tools NCBI gives us to show us amazing things in amazing ways, but they can’t tell us what it all means. For the genomes to make any kind of sense, a human being has to abstract meaning from them and communicate that meaning in understandable language – and there is no way around that limitation – there will only ever be ways to optimize it. This is really what synthetic biology is trying to do, from its own weird, attractive, but easily dangerously simplistic perspective. E. Andrianantoandro, S. Basu, et al. Published 2006 in Molecular Systems Biology. doi:10.1038/msb4100073 Credit: Chuck Wadey, www.ChuckWadey.com Synthetic biologists engineer complex artificial biological systems to investigate natural biological phenomena and for a variety of applications. We outline the basic features of synthetic biology as a new engineering discipline, covering examples from the latest literature and reflecting on the features that make it unique among all other existing engineering fields. We discuss methods for designing and constructing engineered cells with novel functions in a framework of an abstract hierarchy of biological devices, modules, cells, and multicellular systems. 
The classical engineering strategies of standardization, decoupling, and abstraction will have to be extended to take into account the inherent characteristics of biological devices and modules. To achieve predictability and reliability, strategies for engineering biology must include the notion of cellular context in the functional definition of devices and modules, use rational redesign and directed evolution for system optimization, and focus on accomplishing tasks using cell populations rather than individual cells. The discussion brings to light issues at the heart of designing complex living systems and provides a trajectory for future development. If there is a God of creation that went around designing the genomes of all of the living things on Earth, they are the sloppiest, most frustrating, terrible programmer you could possibly imagine. The Intelligent Design proponents are particularly frustrating to me as a biologist having seen how fundamentally unintelligent the design of living critters actually is when you get down to the real moving parts. At least it is designed according to a sort of logic so fundamentally alien to our own that by any human standard we couldn’t help but call it stupid. Looking at life through the lens of Max Delbrück’s slowly fulfilled dream of a science of molecular genetics to replace the stamp collecting of Drosophila genetics1, the organization of information, regulation, and function in genomes makes precious little intuitive sense in terms of human logic. 
When you think about it, silly things like fundamentally unrelated systems being piled on top of each other such that one can’t be manipulated without messing up the other (necessitating otherwise functionless patches to the paired system whenever the other is modified), or Rube Goldberg-esque fragile systems of regulation that respond to all kinds of wrong stimuli, or systems of global regulation that are pretty analogous to reading the same giant program in either Python or C++ to produce one of two desired global results, or the kinds of systems that you can just tell are 99.9% amateur patch jobs, are really what you would expect from systems designed exclusively by the entropic trial and error of evolution. The end goal of the folks behind synthetic biology is pretty simple on the face of it. They want to turn biological systems into abstractions that can be manipulated by people who don’t understand the lower parts. While this might seem like a trivial goal, when you really understand what it means, it becomes clear that it has the potential to change the world, and the very nature of life itself, in intensely profound ways – that is, if they can actually make it work in a functional way. At the moment genomes can only really be meaningfully understood or manipulated by folks like me with expensive and rare educations. This is because in order to build de novo anything like a solid grasp of how something as beautiful as the lac operon works in E. coli, one needs to have a pretty good understanding of how things like DNA-binding proteins work, how the structure of DNA relates to its function, how ligand binding works, how transcription initiation works, and how enzymes do their thing. 
Similarly, in order to have any hope of understanding how one would manipulate systems like that, you’d need to have a good understanding of how cell competency works and can be created, how to manipulate plasmid vectors, the anti-parallel nature of DNA, how to use antibiotics and resistance cassettes to select for desired strains, what TATA boxes do, how Shine-Dalgarno sequences work, how RNA polymerases tend to like to bind, and how to choose which regulation mechanism to use, and that doesn’t even include the technical skills necessary to actually do it yourself. Their idea is to turn genes, gene cassettes, and genetic systems into ‘BioBricks’ that their manipulators don’t need to understand to be useful (in a way analogous to how Perl programmers and Sys Admins don’t need to understand Assembly language to be useful) and can pay to have manipulated in industrially mechanized ways. At the moment the iGEM folks are using the levels of abstraction they can already create to harness the creativity of undergrads with their competition, but what may lie ahead is much, much cooler. Until this summer I did nothing but make fun of the nascent science of Synthetic Biology, having only been exposed to its many nuttier proponents. Maryr of Metafilter was absolutely right when she went all Mol Bio hipster and declared: “I heard of iGEM before it was cool. BioBricks is for people who can’t handle real cloning,” in this thread about what is still solidly iGEM’s neatest project. BioBrick really is just a new name for gene cassette, something that has been actively studied and manipulated since the 60s. What convinced me that this could actually be really amazingly cool was a talk Drew Endy gave at the most recent Bacteriophage conference in Brussels about the research that is going on in his lab, the parts he needed from us, and why. (37:38) [Don’t be intimidated by the technical nature of the talk – even if you zone out during the technical bits you can totally still get the point]. 
In it, he describes his lab’s quest to create what amounts to a living computer: programmable systems architecture within E. coli. The current project involves using the architecture he is building to create a trivially readable clock, reading out in binary, that would track the number of generations a culture of bacteria has gone through – which would itself be amazingly useful. However, if created, these kinds of systems architecture combined with sensor proteins, enzymes, and regulator molecules understood as BioBricks could make life understandable by people who are to us as programmers are to hardware engineers. Here is another detailed talk focused more towards computer folks than biologists, and here is another shorter talk he has given that is more geared towards laymen at a higher level of abstraction. While I was sitting in that talk, knowing that the phage community does indeed have all of the parts he wants and then some, I couldn’t help but get goose bumps recalling one of my favorite stories from science fiction: The Nine Billion Names of God (part 2) by Arthur C. Clarke. Suddenly I was, by way of analogy, a monk in his lamasery slowly going about the task of annotating out the 10,000,000,000,000,000,000,000,000,000,000 (10³¹) names of creation. If we really can systematize the genome of a living organism into neat little boxes like a well designed program according to the sensibilities and biases of human logic, that would, in a very real and profound way, give us the ability to remake life in our own image, in a way that very much evokes the line in Genesis that phrase comes from. How cool would that be? It is still however worth being very cautious about what promise synthetic biology may hold. 
There seems to be a whole cottage industry, particularly around the singularity movement, that has been set up to help people pretend they understand biology, and molecular genetics in particular, by calling it synthetic biology and making fanciful claims that various people have their own interests in wanting to be true. It preys on the scientific illiteracy of its audience, counting on there being few enough people with the education to call them out on the sizable amount of fundamentally false stuff they are communicating for them to get away with it. There are indeed huge limitations to this kind of thinking ever producing anything of meaningful value, which it has yet to do, that have nothing to do with a need for bigger computers or most anything else that singularity folks tend to point at as growing exponentially. The Singularity University is indeed an elaborate fraud run by folks with precious little understanding of biology. 1From the 1920s to the 1930s there was a mass movement of out-of-work physicists, having suddenly run out of things to do when we figured out too much of physics, into biology. They brought with them a mechanistic view of how the universe works that they used to cause massive transformations in how we understand and interact with biology. One of the most influential of these scientific interlopers was Max Delbrück, who quickly reasoned that, if we were ever going to understand how life works, we would need to start with the simplest organism possible and work our way up. He isolated seven bacteriophages against E. coli B, originally just his lab strain, and named them in a series T1 through T7. The central idea was that he and his growing number of colleagues* would focus on truly understanding how these phages worked and use that knowledge to generalize to Escherichia coli, then the mouse, and then the elephant and us. 
An essential component of this was the “Phage Treaty” among researchers in the field, which Delbrück organized in order to limit the number of model phage and hosts so that folks could meaningfully compare results. What came out of their original focus, in many respects encapsulated in Erwin Schrödinger’s What is Life?, has shed light on so much as to truly redefine our self-understanding, to say nothing of medicine. The Luria–Delbrück experiment elegantly demonstrated that in bacteria, genetic mutations arise in the absence of selection, rather than being a response to selection, and that holds in all of life. The Hershey–Chase experiment showed once and for all that nucleic acids were in fact the heritable molecule in not just T2 phage and E. coli, but indeed all of life. Then came easily the snarkiest, most badass, and likely most important scientific paper ever published, written as an accessible single page, on the double helix structure of DNA. Jim Watson changed majors from ornithology to genetics after reading What is Life and became Luria’s graduate student, while Crick was an older former physicist who also claimed inspiration from Schrödinger. The structure of DNA, and its relationship to function that they discovered, is true for all of life. Soon afterwards came the adapter hypothesis and the central dogma, both of which are (at least simplistically) true for all of life.
<urn:uuid:3ef7c364-d1af-4794-b165-540dfb009e63>
2.796875
2,487
Personal Blog
Science & Tech.
30.832799
Marc Buie of the Lowell Observatory describes observations of Pluto in 1996 and their comparison with the 1994 observations that were reported in a 1996 press release. They showed structure on Pluto's surface. He has also compiled a list of "good Pluto WWW Pages," assigning grades of A+ to C for them. JPL continues to plan a Pluto Express mission to arrive at Pluto in approximately AD 2010 to make close-up observations and to measure Pluto's atmosphere before it freezes out and settles onto the surface. The discovery of over a dozen objects orbiting the Sun beyond Neptune's orbit makes Pluto less special. The observational status of Pluto is discussed in the text, as well as the question "Is Pluto a Planet?" Over a dozen published sources refer to Pluto as something less than a planet, but Clyde Tombaugh is quoted in a summary article in USA Today (March 4, 1996, pp. 1-2) as saying "Pluto is far bigger than any asteroid.... The kids want Pluto to be a planet. I get hundreds of letters. [Talk of demoting Pluto] makes them mad." My own position? I have an interest in history and historical astronomy, and that sways me to the side of still saying that we have nine planets, with Pluto as one of them. Kaare Aksnes, president of the International Astronomical Union's panel on nomenclature, is quoted in the same USA Today article as saying, "I'm pretty sure all the members would be against demoting Pluto in this way." Even though the latest data minimize the importance of Pluto on a planetary scale, Aksnes continues, "we would do Pluto and Tombaugh an injustice and create confusion if we were to reclassify Pluto now. I believe that most people, be they astronomers or not, would agree." Though the Aksnes committee does not actually have the authority to decide the issue, it is perhaps the IAU committee nearest to the topic. The Hubble Space Telescope has imaged Pluto for the first time at sufficiently high resolution that we can see surface features. 
The resolution on Pluto is about 100 km, so there are two dozen pixels across the image. The two views show opposite hemispheres. We cannot know exactly what the dark and light areas are. They may be basins or impact craters. Probably, most of the light regions on the surface are regions of frost. These regions would change with Pluto's seasons. A movie is also available showing Pluto's rotation. Credit: Alan Stern (Southwest Research Institute), Marc Buie (Lowell Observatory), NASA and ESA Fran Bagenal at the University of Colorado has assembled a World Wide Web homepage for Pluto, giving both history and current science. Links are also provided to other Pluto homepages, including the Jet Propulsion Laboratory's Pluto Express, the Pluto subsection of the Los Alamos National Laboratory's set of planet homepages, and maps of Pluto and Charon computed by Marc Buie of the Lowell Observatory.
<urn:uuid:50e9f3b6-a817-42e1-b7c1-88d70e105e66>
3.109375
609
Knowledge Article
Science & Tech.
46.01259
Articles, documents and multimedia from ABC Science Thursday, 16 December 2010 StarStuff Podcast After a 33-year odyssey, NASA's Voyager 1 spacecraft reaches the very edge of our solar system. Plus: new theory explains rings and ice moons of Saturn; mysterious carbon-rich planet raises questions about how planets form; and test-flight success for Falcon-9 rocket. Tuesday, 23 March 2010 25 Great Moments in Science Dr Karl talks a lot because talking about stuff is his job. Even so, he was very surprised when he heard that women have more to say than men. Thursday, 1 October 2009 Dr Karl on triple j Why do we get an urge to wee when we hear the sound of running water? How did water come to be on Earth? And do you lose weight when you pass wind? Thursday, 24 September 2009 Dr Karl on triple j Why did the sun appear blue in the dust storm yesterday? What causes the white marks you can get on your fingernails? How can women who haven't given birth lactate? And can you smell danger? Thursday, 17 September 2009 Dr Karl on triple j How does stainless steel soap work? Can animals get sunburn and skin cancer? Who invented the concept of time? And why do you see different colours and shapes when you close your eyes? Thursday, 10 September 2009 Dr Karl on triple j Could my child love eating dirt because of an iron deficiency? Why do spacecraft re-enter the atmosphere so fast? And why do the mushrooms in my paddocks grow in big circles? Thursday, 3 September 2009 Dr Karl on triple j Why do tattoos become lumpy when the weather changes? If you were allergic to cats would you also be allergic to lions and tigers? Can the image resolution of a digital camera beat the human eye? Thursday, 27 August 2009 Dr Karl on triple j Why do you vomit when you overexercise? Can we create AC electricity from sunlight? Do you actually see red when you're angry? And why do people feel heavier when they're asleep? 
Thursday, 20 August 2009 Dr Karl on triple j What causes bags under your eyes? Does using sunscreen reduce your body's ability to produce Vitamin D? And what is a shooting star and why does it shoot? Thursday, 13 August 2009 Dr Karl on triple j Dr Karl debunks some persistent internet hoaxes. Plus: How are scientists able to reconstruct what a person looked like from only their skeleton? And how does hermaphroditism occur?
<urn:uuid:e4fac495-a61c-4e9f-95b0-b3f65fb09b40>
2.875
527
Content Listing
Science & Tech.
74.886751
Virtual file system Part 1 Virtual File System is an interface providing a clearly defined link between the operating system kernel and the different File Systems. The VFS supplies the applications with the system calls for file management (like “open”, “read”, “write” etc.), maintains internal data structures (the administrative data for maintaining the integrity of the File System), and passes tasks on to the appropriate actual File System. Another important job of the VFS is performing standard actions. For example, as a rule, no File System implementation will actually provide an lseek() function, as the functions of lseek() are provided by a standard action of the VFS. Kernel’s representation of the File Systems The representation or layout of data on a floppy disk, hard disk or any other storage medium may differ considerably from one implementation of File System to another. But the actual representation of this data in the Linux kernel’s memory is the same for all File System implementations. The Linux management structures for the File Systems are similar to the logical structure of a Unix File System. The VFS calls the file-system-specific functions of the various implementations to fill up these structures. These functions are provided by every File System implementation and are made known to the VFS via the function register_filesystem(). This function sets up the file_system_type structure passed to it in a singly linked list headed by the pointer “file_systems”. The file_system_type structure gives information about a specific File System implementation. The structure is as follows:

struct file_system_type {
    struct super_block *(*read_super)(struct super_block *, void *, int);
    const char *name;
    struct file_system_type *next;
};

The function read_super() forms the mount interface, i.e. it is only via this function that further functions of the File System implementation will be made known to the VFS. 
It takes three parameters:
* A super_block structure in which the data relevant to this instance of the File System implementation is filled up.
* A character string (passed as void *), which contains further mount options for the file system.
* A flag, which indicates whether unsuccessful mounting should be reported. This flag is used only by the kernel function mount_root(), as this calls the read_super() functions of all the File System implementations present.
The “name” field contains the name of the actual File System.
<urn:uuid:cdb2f748-eadb-46e4-a8f9-7ce30a54c3bf>
3.640625
521
Documentation
Software Dev.
34.966883