Metals and acids

Many, but not all, metals react with acids. Hydrogen gas forms as the metals react with the acid to form salts. This well-tried standard class experiment is often used in the introductory study of acids to establish that this behaviour is a characteristic property of acids.

The experiment is done first on a smaller scale using test-tubes (Lesson 1 below), with no attempt to recover the salts formed. This establishes that hydrogen production is a characteristic property of the reaction of metals and acids. It can then be done on a larger scale (Lesson 2 below), and the salts formed can be recovered by crystallisation.

Lesson 1 is a series of test-tube experiments in which each working group establishes, as a common feature, that hydrogen is given off as metals react with an acid – if the metal reacts at all. This should take around 40 minutes, and most classes should be able to do this version. Each working group needs a small selection of metals and acids to test. The range of metals and acids tested can be extended in a teacher demonstration in the concluding part of this lesson.

Lesson 2, in which the salt formed is recovered by crystallisation, takes longer, and the class needs to be reliable enough in behaviour and manipulative skills to cope with the hazards involved in heating acidic solutions in beakers on tripods. The time taken for the reaction depends on the particle size of the metal used; using small granules helps to reduce the time taken.

Each working group requires:
Dilute hydrochloric acid, 1 M, 25 cm3
Dilute sulfuric acid, 0.5 M (IRRITANT), 25 cm3
Small granules, coarse filings, or foil pieces of these metals in small labelled containers: copper, iron, magnesium, zinc
Small zinc granules, about 5 g in a labelled container
Dilute sulfuric acid, 0.5 M (IRRITANT), 50 cm3

Refer to the Health & Safety and Technical notes section below for additional information.
Each working group requires:
Test-tubes (100 x 16 mm or similar), 8
Corks or bungs to fit test-tubes loosely, 2
Wood splint (one per group; further splints from teacher in charge)
Conical flask (100 cm3)
Beaker (100 cm3)
Measuring cylinder (100 cm3)
Filter funnel, about 65 mm diameter
Pipeclay triangle or ceramic gauze (Note 1)
Heat resistant mat
Evaporating basin, at least 50 cm3 capacity
Crystallising dish (Note 2)

Health & Safety and Technical notes

Wear eye protection. The selection of metals can vary according to what is available as small granules (size <5 mm), coarse filings or foil. What matters is that each group has at least two metals that react readily and one that does not.

Copper, Cu(s) - see CLEAPSS Hazcard.
Iron filings, Fe(s) - see CLEAPSS Hazcard.
Magnesium ribbon, Mg(s) - see CLEAPSS Hazcard. Magnesium turnings are HIGHLY FLAMMABLE. Distribution of pieces of magnesium ribbon should be supervised to avoid students taking several pieces and experimenting later with igniting them.
Zinc granules, Zn(s) - see CLEAPSS Hazcard. While other metal/acid combinations react in the same way, recovering the salt by crystallisation (in Lesson 2) may not be as successful as it is using zinc and sulfuric acid.
Dilute hydrochloric acid, HCl(aq) - see CLEAPSS Hazcard and CLEAPSS Recipe Book.
Dilute sulfuric acid, H2SO4(aq) (IRRITANT at concentration used) - see CLEAPSS Hazcard and CLEAPSS Recipe Book.

1 Ceramic gauzes can be used instead of pipeclay triangles to support the evaporating basin, but the evaporation will then take longer.

2 The evaporation and crystallisation stages may well be incomplete in the time available for Lesson 2. In this case, the crystallising dishes need to be set aside for crystallisation to take place slowly. However, the dishes should not be allowed to dry out completely, as this spoils the quality of the crystals.
With occasional checks, it should be possible to decide when to decant surplus solution from each dish to leave good crystals for the students to inspect in the following lesson.

Lesson 1
a Place six test-tubes in the test-tube rack.
b Add a 2–3 cm depth of dilute hydrochloric acid to the first three tubes, and a 2–3 cm depth of dilute sulfuric acid to the remaining three tubes.
c Add a small piece of a different metal to each of the tubes with hydrochloric acid in them. Record which metal you add to each tube.
d Add a small piece of the same metals to each of the tubes with sulfuric acid in them. Record which metal you add to each tube.
e Your teacher will show you how to test the gas being produced in these reactions. Choose one of the metals that reacts rapidly with the acids, and in a clean test-tube add a piece of this metal to a 2–3 cm depth of one of the acids. This time place a cork loosely in the top of the test-tube so that any gas produced escapes slowly. Light a wood splint, remove the cork and immediately hold the flame to the mouth of the tube. If nothing happens, you may need to try again.

Lesson 2
a Measure 50 cm3 of dilute sulfuric acid using a measuring cylinder and pour it into the beaker. Warm this acid gently over a low, non-smoky Bunsen flame. Turn off the Bunsen burner before the solution boils. Carefully remove the beaker of acid from the tripod as instructed by your teacher, and stand it on the heat resistant mat. Be very careful not to knock the tripod while the beaker is on it.
b To this hot acid, add about half the zinc pieces provided. Avoid inhaling the acidic fumes that may rise from the beaker as a result of the vigorous bubbling.
c If all the zinc reacts, add two more pieces and stir. Add more zinc until no more bubbles form. The acid is now used up.
d Filter the warm solution into the conical flask to remove the excess zinc. Transfer the filtrate into an evaporating basin.
e Place the evaporating basin on a pipeclay triangle or gauze on a tripod and gently boil the solution over a low Bunsen flame. Be very careful not to knock the tripod supporting the basin. When the volume has been reduced by about half, dip a glass rod in the solution and then hold it up to cool. If small crystals form on the glass rod, stop heating; otherwise continue until that point is reached. Do not continue to heat beyond the point when crystals start to appear at the top edge of the solution.
f Pour the remaining hot solution into a crystallising dish as instructed by your teacher. Label the dish and leave until the next lesson to crystallise. The crystals can then be examined using a hand lens or microscope.

Download some student questions.

Safety is particularly relevant to younger students. Be aware of the problems associated with heating beakers or evaporating basins on tripods, and with lifting such hot containers off a tripod after heating. Students should not be seated on laboratory stools whilst carrying out these operations. Using tongs of suitable size is a good way of lifting hot containers, but some schools may not have these. If there is any doubt about the safety of this step, the teacher should first lift each beaker down onto the heatproof mat, using a thick cloth or wearing suitable thermal protection gloves, before the students add the zinc pieces. The same applies to moving the evaporating basin before pouring its contents into the crystallising dish.

The procedure for safely testing the evolved hydrogen gas in the test-tube reactions needs to be demonstrated at a suitable point in Lesson 1. A loosely inserted cork allows sufficient build-up of gas in a slow reaction to enable a successful test. Nevertheless, many students find it difficult to achieve a successful 'pop' test for hydrogen, so you may need to do follow-up demonstrations as well.
This pair of experiments forms an important stage for younger students in developing an understanding of what an acid is. They need to understand how to generalise from sufficient examples, and to see the limits to that generalisation in metals that do not react. It may help to develop this discussion in the concluding stages of Lesson 1 with additional demonstrations of other metals and acids. In particular, dilute nitric acid (< 0.5 M) does produce hydrogen with moderately reactive metals such as magnesium and zinc, even though its reactions are different at higher concentrations and with other metals. By the end of the lesson, students should readily be able to draw the conclusion:

Metal + acid → salt + hydrogen

This experiment is also a good opportunity for students to learn how to draw up suitable tables for recording experimental observations.

In Lesson 2, the choice of zinc and sulfuric acid as the example to follow through to producing crystals of the salt is governed by the need for a salt that crystallises easily. Unfortunately, the chlorides of magnesium and zinc are not easy to crystallise, while magnesium sulfate is so soluble that it takes longer to evaporate sufficiently. Iron(II) compounds may suffer from oxidation when the solution is evaporated, giving a visibly impure product.

There is potential for producing hazardous fumes if classes are allowed to over-evaporate salt solutions, either from evaporation of any excess sulfuric acid or from decomposition of the salt. There is also a danger of hot material spitting out of the container. If crystals begin to appear, e.g. at the top edge of the solution, the Bunsen burner should be turned off immediately and the solution left to cool. Refer to CLEAPSS Laboratory Handbook Section 13.2.6 for a discussion.
If older students perform these experiments, they can be asked to write symbol equations:

Mg(s) + 2HCl(aq) → MgCl2(aq) + H2(g)
Mg(s) + H2SO4(aq) → MgSO4(aq) + H2(g)

For reactions of these acids with iron or zinc, the students simply substitute Fe or Zn for Mg in these equations.

Health and safety checked February 2008. Page last updated 07 December 2011.
This example shows how to create an empty DOM Document. JAXP (Java API for XML Processing) is an interface that provides parsing of XML documents. Here the DocumentBuilderFactory is used to create new DOM parsers. The following methods are used in the code below for creating an empty DOM Document:

DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance(): creates a DocumentBuilderFactory. DocumentBuilderFactory is a class that enables an application to obtain a parser for building DOM trees from XML documents.

DocumentBuilder builder = builderFactory.newDocumentBuilder(): creates a DocumentBuilder object with the help of the DocumentBuilderFactory.

Document doc = builder.newDocument(): obtains a new instance of a DOM Document object, used to build a DOM tree.

Element element = doc.getDocumentElement(): gives direct access to the root element of the DOM Document.

Output of the program:

Value of the root of the DOM Document created is: null
DOM Document created successfully

If you are facing any programming issue, such as compilation errors, or cannot find the code you are looking for, ask your questions and our development team will try to answer them.
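The steps above can be put together into a complete program. This is a minimal sketch (the class and method names here are illustrative, not part of JAXP itself):

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class EmptyDomDocument {

    // Obtain a DOM parser from the factory and ask it for a new, empty Document
    static Document createEmptyDocument() throws Exception {
        DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = builderFactory.newDocumentBuilder();
        return builder.newDocument();
    }

    public static void main(String[] args) throws Exception {
        Document doc = createEmptyDocument();
        // Nothing has been added yet, so the root element is null
        System.out.println("Value of the root of the DOM Document created is: "
                + doc.getDocumentElement());
        System.out.println("DOM Document created successfully");
    }
}
```

Running the program prints `null` for the root, confirming that `newDocument()` returns a Document with no document element until one is appended.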
The SAGE III instrument operates by measuring the amount of solar light as it passes through the limb, or edge, of the Earth's atmosphere as the spacecraft views the rising and setting of the sun during each orbit. The instrument measures light at wavelengths in the part of the electromagnetic spectrum visible to the human eye, allowing it to make very accurate measurements of aerosols, ozone, water vapor and other trace gases. The advanced SAGE III instrument will also make measurements using moonlight that will provide new observations of trace gas species that affect ozone distribution. SAGE III will also make essential temperature and pressure measurements to help scientists better understand climate change. A limb measurement is taken as a satellite instrument views sunlight through the atmosphere as it ascends or descends from behind the Earth.
Stephen James O'Meara's Secret Sky: Measure the Moon
March 2010: Don't let psychological tricks fool you when determining the Moon's size.
January 25, 2010

In my January column, I challenged readers to look for some of the finest naked-eye features on the Moon during lunar perigee — when the Moon is closest to Earth in its elliptical orbit. But have you ever wondered whether we can tell if the Moon is at perigee or apogee (when it's farthest away) using only our unaided eyes?

The problem is that during the night the Moon sails across the vast vault of the sky unrivaled in size. During the day, only the Sun (which we should never look at without proper protection) compares to the Moon in apparent size. Furthermore, when the Moon lies just above the horizon, a physiological effect called the Moon illusion causes its image to swell in the mind's eye. And when the Moon stands high overhead, another illusion makes it appear smaller than it should.

Nevertheless, noticing a change in the Moon's apparent size shouldn't be too difficult. At perigee the Moon lies some 25,000 miles (40,000 kilometers) closer to Earth than it does at apogee, making it appear roughly 10 percent larger. That's the equivalent of a quarter's size compared to a nickel's.

The varying size of the Moon over time became clear to Kevin Krisciunas, who visually measured and graphed its changing sizes with homemade equipment. Photo by Kevin Krisciunas; graph: Roen Kelly.

Few would argue that a Full Moon rising attracts attention. But I've noticed a greater "Wow!" factor when people see a perigee Full Moon rising. Arguably, then, we can detect the difference, at least on a subconscious level. But can we perform a more reliable test?

Two thumbs up!
Over the years, I've enjoyed experimenting with the challenge noted above and have found a simple solution. Just go outside when the Moon is visible in the daytime sky, hold out your thumb at arm's length, and point it toward the Moon.
With both eyes open, first focus on your thumb, then on the Moon. What happens? When you shift focus, your thumb should look doubled (the effect of parallax). Depending on your dominant eye, one of the thumbs will appear transparent enough for you to project the Moon against the phantom thumbnail! If you do this when the Moon is at apogee, then at perigee, you can detect a noticeable difference in the size of the Moon compared to the size of your nail.

A precision experiment
By pointing his homemade device at the Moon, Kevin Krisciunas emulates inventor Levi ben Gerson (1288-1344), famous for measuring the varying sizes of the Moon, the Sun, and various other stellar objects. Photo by Sandra Rodriguez Krisciunas.

Independently, Texas A&M University astronomer Kevin Krisciunas has also given this matter some thought. While teaching his students, Krisciunas began to wonder about the ancient Greeks and naked-eye Renaissance observers, who long knew that the Earth-Moon distance varies — knowledge deduced primarily from observations of the Moon's size during total and annular solar eclipses. But those observed changes in size could have been due to the variable Earth-Sun distance or Earth-Moon distance, or both. "So," Krisciunas pondered, "can we measure (without any lenses) the Moon's correct angular size, and [show] that the Earth-Moon distance varies by plus or minus 5 percent?"

To find out, Krisciunas designed an experiment. Using a hole punch, he took a thin piece of cardboard and made a hole 0.24 inch (6.2 millimeters) across. He then attached the cardboard to a cross piece that could slide along a yardstick. By simply pointing the yardstick at the Moon and moving the cardboard back and forth, Krisciunas could match the Moon's angular size with the hole's angular size. A hole 0.24 inch across, held 27.05 inches (687 mm) from the eye, should subtend an angle of 31 arcminutes — the Moon's average angular diameter. But that's not what he found.
Using the following equation:

θ = 60 × (h/d) × (180/π)

where θ is the Moon's angular extent in arcminutes, h is the diameter of the hole (0.24 inch), and d is the distance of the hole from the eye (which he determined to be 32.56 inches [827 mm]), Krisciunas determined θ to be only approximately 26 arcminutes.

As Krisciunas discovered, and as the late Marcel Minnaert suggested in his 1954 book The Nature of Light and Color in the Open Air, little-understood psychological factors play a role in perceiving the Moon. Indeed, Minnaert stated that if you look at the Moon through an aperture in a piece of cardboard with one eye, the Moon will appear smaller than when viewed directly with two eyes.

A correction factor
Kevin Krisciunas shows off his homemade Moon-measurer, based on Gerson's staff of Jacob. Photo by Sandra Rodriguez Krisciunas.

Undaunted, Krisciunas took a disk 0.358 inch (9 mm) in diameter and taped it to a door 32.8 feet (10 meters) away, so that its angular size would equal 31 arcminutes. When he looked at that disk through the 0.24-inch hole at a distance of 32.3 inches (821 mm), he found a match of angular size (26 arcminutes). If we therefore divide the Moon's true average angular extent (31 arcminutes) by the measured average (26 arcminutes), we arrive at a correction factor of 1.2.

"If I take my uncorrected measures of the angular diameter of the actual Moon and multiply them by 1.2," Krisciunas says, "I get, on average, the correct angular size of the Moon accurate to plus or minus 0.8 arcminute. I've convinced myself [that] with my very simple equipment I can measure the variation of the Moon's angular size and eliminate systematic errors with my correction factor" (see graph above). Krisciunas cautions, however, that everyone's eyes are different; observers may have to find their own correction factors.
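The arithmetic can be checked in a few lines of Python. This is a sketch using the measurements quoted above (the function name is illustrative):

```python
import math

def angular_size_arcmin(h, d):
    """Small-angle size of an aperture of diameter h seen from distance d,
    converted from radians to arcminutes (h and d in the same units)."""
    return 60 * (h / d) * (180 / math.pi)

# Krisciunas' sighting hole: 0.24 inch across, held 32.56 inches from the eye
theta = angular_size_arcmin(0.24, 32.56)
print(round(theta, 1))   # roughly 25-26 arcminutes, short of the Moon's true 31'

# Dividing the true mean diameter by the measured one gives the correction factor
print(round(31 / theta, 2))   # about 1.2, matching the article
```

This reproduces both numbers in the text: the uncorrected measurement of roughly 26 arcminutes and the correction factor of about 1.2.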
He'd like to know if you get the same correction factors for objects that are demonstrably 31 arcminutes wide, using a sighting hole that is about 0.24 inch. If a 0.358-inch circle viewed at a distance of 32.8 feet comes out to 31 arcminutes, then your correction factor is 1. If you derive an angular size less than 31 arcminutes, then your correction factor is greater than 1, like Krisciunas'.

Sticking tape to his thumb, the author managed to semi-accurately determine the Moon's size, with an error of only 6 percent. Can you do any better? Photo by Stephen James O'Meara.

After speaking with Krisciunas, I took a piece of ½-inch masking tape on which I had penned a couple of measurement lines, and placed it on my thumbnail. I went outside, casually held my thumb up to the Moon, and measured the Moon's north-south extent against the ruled lines. Next I measured the distance from my dominant eye to my extended thumb. Back inside, I measured the Moon's projected size on the tape with a ruler. The result? At that date and time, the Moon was 32 arcminutes in apparent diameter. My measurement? 34 arcminutes, off by only 6 percent ... not bad for a casual experiment! I know I can do better, and I bet you can, too! Krisciunas and I would love to hear about your experiences. Send reports to email@example.com and firstname.lastname@example.org.
The Nymphalidae are members of the superfamily Papilionoidea, the true butterflies. Distributed worldwide, butterflies of this family are especially rich in the tropics. They are highly variable, and there are more species in this family than in any other. Adults vary in size from small to large, and their front legs are reduced and unable to be used for walking. Wing shape is also highly variable: some species have irregular margins (anglewings and commas), and others have long taillike projections (daggerwings). Browns, oranges, yellows, and blacks are frequent colors, while iridescent colors such as purples and blues are rare. Adults of some groups are the longest-lived butterflies, surviving 6-11 months. Adult feeding behavior depends on the species: some groups primarily seek flower nectar, while others feed only on rotting fruit, dung, or animal carcasses. Males exhibit characteristic behaviors when seeking mates. Egg-laying varies widely: some species lay eggs in clusters, others in columns, and others singly. Caterpillar appearance and behavior vary widely. Brushfoots overwinter as larvae or adults.

Emperors are members of the family Nymphalidae. Found worldwide, they are a closely related group. Adults are brightly colored and stout-bodied. They are most closely related to the Charaxinae and Satyrinae subfamilies, as evidenced by their early developmental stages. In North America, they are limited to the genus Asterocampa.

Classification:
- Kingdom: Animalia (C. Linnaeus, 1758) - animals
- Subkingdom: Bilateria ((Hatschek, 1888) Cavalier-Smith, 1983)
- Branch: Protostomia (Grobben, 1908)
- Infrakingdom: Ecdysozoa (A.M.A. Aguinaldo et al., 1997 ex T. Cavalier-Smith, 1998)
- Superphylum: Panarthropoda
- Phylum: Arthropoda (Latreille, 1829) - arthropods
- Subphylum: Mandibulata (Snodgrass, 1938)
- Infraphylum: Atelocerata (Heymons, 1901)
- Superclass: Panhexapoda
- Epiclass: Hexapoda
- Class: Insecta (C. Linnaeus, 1758) - insects
- Subclass: Dicondylia
- Infraclass: Pterygota
- Cohort: Myoglossata
- Superorder: Panorpida
- Order: Lepidoptera (C. Linnaeus, 1758) - butterflies and moths

Name Status: Accepted Name.

Members of the genus Hestina: ZipcodeZoo has pages for 0 species and subspecies in this genus.
Tornadoes are rare at any one location, but out of anywhere in the United States, the central Oklahoma area has the greatest risk—and this day would prove no exception.

Can shifting tides trigger earthquakes? Research done by Maya Tolstoy, a geophysicist at Lamont-Doherty Earth Observatory, suggests they do.

Every indication is that thermal expansion will not dominate rates of sea-level rise in the future. As Earth's climate marches toward equilibration with present-day CO2 levels, the climate will continue to warm. And this warming threatens the stability of a potentially much, much larger source for sea-level rise — the world's remaining ice sheets.

Why should society care that CO2 is now as high as 400 ppm? The reasons are multiple, but all trace back to the relationship between CO2 and temperature.

Twice humans have witnessed the wasting of snow and ice from Peru's tallest volcano, Nevado Coropuna — in the waning of the last ice age, some 12,000 years ago, and today, as industrial carbon dioxide in the air raises temperatures again. As in the past, Coropuna's retreating glaciers figure prominently in the lives of people below. In an ongoing project, scientists at Columbia University's Lamont-Doherty Earth Observatory and partner institutions are reconstructing the ebb and flow of ice on Coropuna since the last ice age to understand how the tropics influence the global climate system, how ice loss and a warmer climate will impact farming in the region, and what adaptation measures might help people survive in this hotter, drier world.

Lamont-Doherty scientist Hugh Ducklow is featured in a documentary due out next summer on climate change and the West Antarctic Peninsula. Catch a preview in this newly released trailer.

I returned to New York on Monday, but Lamont-Doherty Earth Observatory scientists Andy Juhl and Craig Aumack remain working in Barrow, Alaska for another week.
They'll continue to collect data and samples in a race against deteriorating Arctic sea ice conditions as the onset of summer causes the ice to thin and break up.

It's near midnight, and Lamont-Doherty Earth Observatory researchers Andy Juhl and Craig Aumack, and Arizona State's Kyle Kinzler, are gathered around a table in their lab at the Barrow Arctic Research Consortium discussing the best way to catch an isopod.

One of the goals of Andy Juhl's and Craig Aumack's Arctic research is to determine the role of ice algae as a source of nutrition for food webs existing in the water column and at the bottom of the Arctic Ocean.

Our team spent most of Friday on the Arctic sea ice, drilling and sampling ice cores at our main field site. For each core collected, Lamont-Doherty Earth Observatory scientists Andy Juhl and Craig Aumack take a number of different physical, chemical and biological measurements.
Count some birds, shoot a wave, set out a rain gauge — the sky's the limit

Today is the first day of the annual Great Backyard Bird Count, when people all over North America tally the birds they see and record their results on the GBBC website. It's a simple citizen science project to try. Even if you don't know your birds, you can print out a list of what you're likely to see in your area to help figure out which bird you're looking at. And as the four-day project progresses, you can watch results come in from all over the continent.

The Bird Count is important to scientists, too. The information you collect helps answer questions about how bird populations are doing and how migrating birds are responding to the weather or climate change.

But the Great Backyard Bird Count is far from the only citizen science project worth trying. While some science is done by people in crisp white lab coats with specialized tools, a lot of it isn't. Scientists don't just work in labs, they don't just use beakers and Bunsen burners, and most of the time they're not wearing lab coats. Also: you don't have to be a scientist to do science.
A constructive proof demonstrates the existence of a mathematical function, number or object by producing (constructing) it. This is in contrast with other styles of proof, such as proof by contradiction, which asserts the existence of an object by deriving a contradiction from the assumption that it does not exist. Such a proof is called nonconstructive, and it is often valued less by mathematicians, especially in applied mathematics and computer science, where an explicit construction is usually what is wanted. Presently, certain theorems have only been proved using nonconstructive methods. However, even after a nonconstructive proof is found for a result, work often continues until a more useful constructive proof is found. A classical example of this is in Ramsey theory, where a nonconstructive argument using random graphs establishes bounds on Ramsey numbers; mathematicians nevertheless attempt to construct such graphs explicitly, since merely proving a hypothetical existence is not enough. The easiest way to prove the existence of transcendental numbers is by a nonconstructive proof, arguing that the set of real numbers is uncountable while the set of algebraic numbers is countable, and thus (many) transcendental numbers must exist. Of course, finding a specific example is a much more difficult endeavor.
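The countability argument mentioned above can be written out in full; this is the standard cardinality sketch, not a construction:

```latex
% Nonconstructive existence of transcendental numbers
\begin{proof}[Sketch]
Every algebraic number is a root of some nonzero polynomial
$p(x) = a_n x^n + \dots + a_1 x + a_0$ with $a_i \in \mathbb{Z}$.
The set of such polynomials is a countable union (over degrees $n$)
of the countable sets $\mathbb{Z}^{n+1}$, hence countable, and each
polynomial of degree $n$ has at most $n$ roots. Therefore the set
$\mathbb{A}$ of algebraic numbers is countable. Since $\mathbb{R}$
is uncountable by Cantor's diagonal argument, $\mathbb{R} \setminus
\mathbb{A}$ is nonempty (indeed uncountable), so transcendental
numbers exist, although the proof exhibits none.
\end{proof}
```

Note how the proof delivers existence, and even abundance, without ever naming a single transcendental number, which is exactly what makes it nonconstructive.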
This module implements the interface to NIST's secure hash algorithm, known as SHA. It is used in the same way as the md5 module: use new() to create an sha object, then feed this object with arbitrary strings using the update() method, and at any point you can ask it for the digest of the concatenation of the strings fed to it so far. SHA digests are 160 bits instead of MD5's 128 bits.

new([string]) - Return a new sha object. If string is present, the method call update(string) is made.

The following values are provided as constants in the module and as attributes of the sha objects returned by new():

blocksize - Size of the blocks fed into the hash function; this is always 1. This size is used to allow an arbitrary string to be hashed.

digest_size - The size of the resulting digest in bytes. This is always 20.

An sha object has the same methods as md5 objects:

update(arg) - Update the sha object with the string arg. Repeated calls are equivalent to a single call with the concatenation of all the arguments: m.update(a); m.update(b) is equivalent to m.update(a+b).

digest() - Return the digest of the strings passed to the update() method so far. This is a 20-byte string which may contain non-ASCII characters, including null bytes.

hexdigest() - Like digest() except the digest is returned as a string of length 40, containing only hexadecimal digits. This may be used to exchange the value safely in email or other non-binary environments.

copy() - Return a copy ("clone") of the sha object. This can be used to efficiently compute the digests of strings that share a common initial substring.

See About this document... for information on suggesting changes.
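The update/digest pattern described above can be sketched as follows. Note the legacy sha module belongs to Python 2; this sketch uses hashlib.sha1, its modern replacement, which exposes the same methods and attributes:

```python
import hashlib

h = hashlib.sha1()           # like sha.new()
h.update(b"hello ")
h.update(b"world")           # equivalent to a single update(b"hello world")

print(h.digest_size)         # 20 bytes, i.e. a 160-bit digest
print(len(h.hexdigest()))    # 40 hexadecimal digits

# The two incremental updates give the same digest as hashing the whole string
assert h.hexdigest() == hashlib.sha1(b"hello world").hexdigest()

# copy() clones the internal state, so a shared prefix is hashed only once
h2 = h.copy()
h2.update(b"!")
```

The copy() trick matters when many inputs share a long common prefix: hash the prefix once, then clone and extend for each suffix.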
The forest, woodlands, shrublands, and heath of Southwest Australia are characterized by high endemism among plants and reptiles. Its unique vertebrate species include the numbat, honey possum, and the red-capped parrot. The western swamp turtle, which hibernates for nearly eight months of the year in response to dry conditions and hot temperatures, may be the most threatened freshwater turtle species in the world, although a successful conservation program has allowed its numbers to increase. The primary cause of habitat loss in Southwest Australia has been agricultural expansion, accentuated by extensive fertilizer use. A major threat to the native fauna has been the introduction of invasive alien species such as foxes and cats.
© Mike Matthews, JILA. Atomic physics experiments routinely trap clouds of atoms, as described in Unit 5, but the atoms in these gas clouds are all distinct entities with separate quantum mechanical wavefunctions. This unit will describe how it is possible for the wavefunctions of all the atoms to merge into a single, macroscopic quantum state. This occurs when the interactions between the atoms are exactly the right strength and the atoms are very cold. The figure above shows three false-color images of trapped atoms. The fastest atoms are colored red, and as they slow down their colors change to yellow, green, blue, and finally white. Moving from left to right, the three images show successive stages of cooling leading to the majority of atoms being in a single macroscopic quantum state in the trap at a temperature of ~10^-8 K. The 2001 Nobel Prize in physics was awarded to Carl Wieman, Eric Cornell, and Wolfgang Ketterle for first creating these special quantum gases in their laboratories. (Unit: 6)
This site is full of cool facts. You will learn about physical change, chemical change, and what the science of food is. You will also be learning about protons, electrons, neutrons, and atoms. You may think this has nothing to do with food, but guess what, it has a lot to do with food. Chemical change is when you change the appearance of an object and cannot change it back. For example, when I add yeast to my dough, the yeast will get mixed in with the dough. Will I be able to take the dough back out? No, and that makes it a chemical change. Physical change is when you change the appearance of an object and can change it back. For example, when I melt an ice cube and then put it back in the tray and freeze it, it will look like it did before I melted it. Atoms are the tiny particles that make up an element. They are the smallest unit that can define an element. Atoms make up us and they make up food. A lot of people don't think about these things, but it's true, we eat atoms. Without atoms we wouldn't be much of anything and neither would our food. Next time you go to eat, think about the science of food.
Simple math. Example: If there are 100,000,000 first generation spring migrants in the wild, very roughly around 5% (5,000,000) of them will be in Minnesota in May. If butterfly breeders ship around 300,000 adults and caterpillars per year for release, mostly in April-October, that’s only a maximum of 50,000 per month. Of that 50,000, only around 5% (2,500) would be shipped to Minnesota per month. 5,000,000 divided by 2,500 = 2,000. So wild monarchs in Minnesota would outnumber the captive raised ones by a factor of very roughly 2,000 to one. Thus the chances that a Journey North observer in Minnesota (or the Canadian provinces north of Minnesota) would encounter a captive raised monarch in the month of May would be in the neighborhood of 2,000 to one (an extremely remote chance).
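The back-of-the-envelope arithmetic above can be reproduced directly. All figures are the rough estimates quoted in the text (100 million wild migrants, ~5% in Minnesota, a 50,000-per-month shipping maximum):

```python
wild_spring_migrants = 100_000_000
minnesota_share = 0.05                 # ~5% of wild monarchs are in Minnesota in May

wild_in_mn = wild_spring_migrants * minnesota_share        # 5,000,000

shipped_per_month = 50_000             # the text's stated monthly maximum
shipped_to_mn = shipped_per_month * minnesota_share        # 2,500

ratio = wild_in_mn / shipped_to_mn
print(f"wild : captive-raised = {ratio:.0f} : 1")
```

This reproduces the roughly 2,000-to-1 figure in the text.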
Bacteria and Locomotion
Country: United States
Date: January 2008

Why do bacterium move, not how but why? Is it just to transfer?

Usually, it's to find better conditions for them to live – perhaps it might be for food, for oxygen (or to get away from oxygen), for light, or for nutrients. Bacteria can sense chemical gradients, and move in the direction of greater (or less) of whatever they are seeking. Other bacteria move for reproductive reasons, to avoid toxins, and some bacteria move for reasons we don't yet know. Hope this helps,

Bacteria (not bacterium) move in order to encounter new sources of nutrients and to avoid unfavorable conditions like heat, cold, acidity, light, darkness, etc.
Ron Baker, Ph.D.

Click here to return to the Molecular Biology Archives

Update: June 2012
Ordinary differential equations (ODEs) are among the most important mathematical tools used in engineering analysis. They are used across the discipline for design and simulation. The term "ordinary" is used for mathematical quantities that are a function of a single variable — velocity is a function of time, pressure is a function of depth, etc. Problems in dynamics, vibrations, and particle trajectories are naturally described as ODEs. You must have already come across many ODEs in different courses so far. An ODE is an equation containing derivative(s), called a differential equation (DE). Along with the differential equation you will also require boundary conditions (BC). Without BCs the problem specification is incomplete.

Typically, an ODE is used to express the mathematical model of an engineering problem so that it can capture the changes in the system or its properties. There are two kinds of variables in an ODE: the independent variable (the variable in the lower half of all the derivative symbols), and the dependent variables, which are functions of the independent variable and appear in the upper half of the derivative expressions. The DE is usually written with all the dependent-variable terms on the left of the equals sign. Solving the ODE means determining the dependent variables as functions of the independent variable between a start value and an end value.

In general, differential equations can be characterized as:

- Ordinary (the problem parameters are a function of a single variable) or Partial (the problem parameters are a function of more than one variable)
- Linear (the terms in the DE are linear — dependent variables and their derivatives are not multiplied with each other, and all of the derivatives are raised to the power of 1 or 0) or Nonlinear (even a single term in the equation is not linear)
- Homogeneous (the DE has a value of 0 to the right of the equals sign) or Non-homogeneous (the right-hand side is a function of the independent variable or its powers)
- Constant coefficient (the terms on the left of the DE are multiplied by constant values) or Variable coefficient (the terms may include functions of the independent variable)
- By the order of the DE (the order of the highest derivative in the DE on the left)

Boundary conditions (BC) are essential to an ODE. The number of boundary conditions required is the same as the order of the DE. There are two types of ODEs with regard to the BCs:

- Initial Value Problem — all the BCs are specified at the starting value of the independent variable
- Boundary Value Problem — some BCs are specified at the initial point and some at another point. If the other point is the final point then it is a two-point boundary value problem; otherwise it is a multipoint boundary value problem.

If the DE is linear, the solution to the homogeneous DE is called the natural motion, and the solution to the non-homogeneous DE is called the particular solution. The total solution is the sum of the two solutions. The homogeneous solution includes undetermined constants; the BCs are applied to the total solution to identify these constants.

In the following pages we will look at first-order, second-order and fourth-order examples of how linear ODEs are solved.
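As an illustrative sketch (not part of the original tutorial), a first-order linear initial value problem such as dy/dt = -2y with y(0) = 1 can be integrated numerically with Euler's method and checked against the exact solution y(t) = e^(-2t):

```python
import math

def euler(f, y0, t0, t1, n):
    """Integrate dy/dt = f(t, y) from t0 to t1 in n equal steps (Euler's method)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)   # step the dependent variable using the local slope
        t += h             # step the independent variable
    return y

# dy/dt = -2y with the initial condition y(0) = 1; exact solution y(t) = exp(-2t)
approx = euler(lambda t, y: -2.0 * y, 1.0, 0.0, 1.0, 10_000)
exact = math.exp(-2.0)
print(approx, exact)
```

With 10,000 steps the Euler estimate agrees with the exact value to within about 10⁻⁴, illustrating how the BC (here an initial condition) pins down the one undetermined constant of this first-order DE.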
Hubble, like numerous other professional telescopes, uses the Ritchey–Chrétien design. What optical and instrumental advantages does this kind of telescope have for professional astronomy?

I think the first sentence from the Wikipedia article on Ritchey–Chrétien telescopes is one of the major compelling reasons: elimination of optical aberrations is very important in the RC design. In addition to eliminating spherical aberration, which all Cassegrain telescopes do, an RCT eliminates coma. This helps to maintain image quality across a larger field of view, allowing for larger detectors. In addition, it has a flat focal plane. This too is important for large-field imagers, since a lack of a flat focal plane means that if you focus the central area the edges will be out of focus and vice versa. This was an issue for old photographic plate systems, as you would actually have to bend the glass of the plates to get the entire image in focus. Clyde Tombaugh used to tell a story about observing while looking for Pluto: he was out in the dome putting in a new photographic plate, and it shattered just after he finished applying the pressure to curve the plate into the focal plane of the telescope he was using. It was a cold night and he was afraid something more important had shattered. The RCT design still suffers from astigmatism and field distortion as you move off-axis, but it does manage to correct three of the five major aberrations. So this type of telescope is preferred for professional systems because it gives a better optical image compared to other designs and allows for a larger field of view.
I know that's a stupid question, but I'm really confused by what my teacher says, so I need to check that theory. Here are just two ordinary connected containers, which are full of water. On grounds ...

When you fill a glass with water, water forms a concave meniscus with constant contact angle $\theta$ (typically $\theta=20^\circ$ for tap water): Once you reach the top of the glass, the water-air ...

If I have two connected cylinders filled with water, and 50 kg stones at both their ends (pressing both ends of the water), how much weight do I need to add on one side so that one of the stones reaches ...

If yes, why don't they fill up with water, and can you breathe the air there? Like, it's not exactly atmosphere there, but an underwater cave with higher ceiling. P.S. Possible that it has a ...
Photo courtesy of NBC4 Washington. Yes and no. It depends on what type of prediction you're looking for. If you're willing to settle for predictions of the near future and can tolerate predictions in terms of probabilities, the goal is already at hand. But if you want definitive, long-range predictions of the precise weather, the answer is no—we'll never be able to do it perfectly. Weather prediction struggles with two enormous problems. First, any prediction of the future is no better than our understanding of the present. Just knowing that it's sunny or raining outside doesn't count for much; to make a reasonable prediction of the future, we need lots of data about the present. We need detailed measurements of air pressure, temperature, wind speed, humidity, and so on at an enormous number of locations and altitudes. We even need to know what people are doing because people influence weather. But the biggest problem in weather prediction is that the atmosphere is essentially chaotic. In this context, chaos means that a very slight alteration in the atmosphere's present arrangement will lead to a radically different arrangement just days or weeks down the road. Chaotic systems are exquisitely sensitive to initial conditions, and the futures of two nearly identical arrangements of a chaotic system will diverge exponentially with the passage of time. In the case of the atmosphere, it is said that just one flap of a butterfly's wings will eventually alter the weather of the entire Earth. The time-scale over which these chaotic effects cripple weather prediction is days and weeks. The better you understand the present, the longer your prediction can hold out against chaos. However, exponential growth eventually beats down even the most ambitious efforts—each additional day of prediction accuracy requires that you measure the present many times more accurately than before. Beyond about one week, the measurement requirements become overwhelming.
At that point, the best anyone can do is to predict weather probabilities. Such statistical predictions are all that's possible based on imperfect understandings of the present. Answered by Lou A. Bloomfield of the University of Virginia.
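The sensitivity to initial conditions described above can be demonstrated with a toy chaotic system. This sketch uses the logistic map, a standard textbook example of chaos (not an atmospheric model): two trajectories that start a trillionth apart diverge until they are effectively unrelated.

```python
def logistic(x, r=4.0):
    # The logistic map with r = 4 is a classic chaotic system.
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-12   # two nearly identical "initial conditions"
gaps = []
for step in range(60):
    x, y = logistic(x), logistic(y)
    gaps.append(abs(x - y))

# The gap grows roughly exponentially from 1e-12 until it saturates at O(1):
print(gaps[0], gaps[29], max(gaps))
```

This is the weather forecaster's predicament in miniature: for a while the two trajectories track each other (prediction works), then the tiny initial error swamps everything.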
Friday, June 20, 2008

Although we have been enjoying summer heat for nearly a month... technically it won't be summer until 7:59 tonight! That is when the sun will ascend to its highest latitude (the Tropic of Cancer, 23.5 degrees north latitude) on the celestial sphere, making today the longest day of the year in the northern hemisphere and the shortest in the southern hemisphere. One of the common questions we get in the weather department is: why is today the highest point for the sun, yet our maximum temperatures typically do not occur until July/August? The reason is something called the 'lag of the seasons,' and it is the same reason why it is hotter in the mid-afternoon than at noon. We receive our maximum insolation (the time when the maximum solar energy is deposited during the day at a point on the surface of the Earth) at midday and on the first day of summer when the sun is the highest in the sky. Because the Earth, the atmosphere, and the oceans store heat, they release it at a slower pace than they stored it up. Thus it takes about a month for us to see our highest temperatures on average. The same happens with the first day of winter being in December, but us not seeing our coldest temperatures on average until January. You can learn more about the 'Summer Solstice' by clicking here. Have a great Summer!!

Posted by Chris Smith at 4:29 PM

Thursday, June 19, 2008

With the summer season in full swing, folks are busy enjoying their neighborhood pools and lakes. While the winter rains helped ease the drought a little and replenished many of our lakes, some remain low and continue to get lower. (Check the current lake levels here.) Lake Lanier peaked this spring at 1057.80' on May 24th. That was still more than 13' below full pool!! Since then the lake has dropped almost a foot to 1056.85'. One of the major reasons why is that now that we are into the summer season, the sun evaporates nearly .2" of water every day.
During the winter very little rain is evaporated because of the low sun angle. You can do a simple experiment and notice the difference the summer sun makes. Pour a cup of water on the pavement in the sun and time how long it takes to evaporate. Then, pour a cup of water on the driveway in the shade. With full sunshine, the water evaporates much more quickly. Even with above-average rainfall during the summer it is very tough for the lakes to rise much because of the sun. The forecast for the area lakes over the month ahead shows Lake Lanier is expected to drop another foot! So, even with average to above-average rainfall our lakes will drop. That's why it is so important that we all do our part to conserve! A great resource for conservation is called Watersmart. They are a local organization committed to water conservation! Have a great weekend and hopefully you will enjoy some of those scattered storms around the area.

Posted by Chris Smith at 3:14 PM

Tuesday, June 17, 2008

No, the moon is not ready to explode after a giant Thanksgiving feast, nor is it ready to crash into Earth. The giant moon you are witnessing this week at moonrise is called the 'solstice moon illusion.' What is happening is that your eyes are playing tricks on you! Scientists are not 100% sure why, and there are several theories as to why it happens. The neat thing is that while the moon looks huge to you, it is actually a 'normal' size if you look at it through a camera lens. The 'giant moon' is more apparent this time of year because when the sun is at its highest, the moon is at its lowest. With summer starting Saturday, that is the highest sun of the year and thus the lowest moon. Although it was originally thought that the moon was being magnified by the atmosphere, we now know this not to be true because images of the moon on film are the same size regardless of elevation. You can read more about this optical phenomenon and the theories as to why it happens by clicking here.
Also, here are the moonrise times for the days ahead.

Moonrise 6/17: 8:25 p.m.
Moonrise 6/18: 9:16 p.m.
Moonrise 6/19: 10:01 p.m.

Posted by Chris Smith at 8:58 AM

Wednesday, June 11, 2008

Just wanted to share this picture that a viewer sent me. It was taken by Kevin Turner in Hiram on Monday night. We had some very impressive electrical storms Monday night and Kevin, an amateur photographer, caught this strike. Notice how it forks across the sky and then you have the leader stroke hitting the ground. If you would like to learn more about lightning and lightning safety just log onto our website at CBS46.com.

Posted by Chris Smith at 9:36 AM

Monday, June 09, 2008

The 2008 hurricane season is under way and we have already had our first storm, Arthur, which hit the Belize/Mexico coast last weekend. The National Hurricane Center is predicting an above-average season (see above graphic) for the coming year, so we are expecting to stay pretty busy in the weather center. If we are lucky we might be able to get a nice tropical storm to give us some added rain in northern Georgia. All hurricane season you can keep up with the latest on our website at cbs46.com. We have a special hurricane section where you can track the individual storms and get the latest hurricane news.

Posted by Chris Smith at 8:09 AM
Interfacing to GCC Output

GCC is normally configured to use the same function calling convention normally in use on the target system. This is done with the machine-description macros described in Target Macros. However, returning of structure and union values is done differently on some target machines. As a result, functions compiled with PCC returning such types cannot be called from code compiled with GCC, and vice versa. This does not cause trouble often because few Unix library routines return structures or unions. GCC code returns structures and unions that are 1, 2, 4 or 8 bytes long in the same registers used for `int' or `double' return values. (GCC typically allocates variables of such types in registers also.) Structures and unions of other sizes are returned by storing them into an address passed by the caller (usually in a register). The machine-description macros `STRUCT_VALUE' and `STRUCT_INCOMING_VALUE' tell GCC where to pass this address. By contrast, PCC on most target machines returns structures and unions of any size by copying the data into an area of static storage, and then returning the address of that storage as if it were a pointer value. The caller must copy the data from that memory area to the place where the value is wanted. This is slower than the method used by GCC, and fails to be reentrant. On some target machines, such as RISC machines and the 80386, the standard system convention is to pass to the subroutine the address of where to return the value. On these machines, GCC has been configured to be compatible with the standard compiler, when this method is used. It may not be compatible for structures of 1, 2, 4 or 8 bytes. GCC uses the system's standard convention for passing arguments. On some machines, the first few arguments are passed in registers; in others, all are passed on the stack. It would be possible to use registers for argument passing on any machine, and this would probably result in a significant speedup.
But the result would be complete incompatibility with code that follows the standard convention. So this change is practical only if you are switching to GCC as the sole C compiler for the system. We may implement register argument passing on certain machines once we have a complete GNU system so that we can compile the libraries with GCC. On some machines (particularly the Sparc), certain types of arguments are passed "by invisible reference". This means that the value is stored in memory, and the address of the memory location is passed to the subroutine.

If you use `longjmp', beware of automatic variables. ANSI C says that automatic variables that are not declared `volatile' have undefined values after a `longjmp'. And this is all GCC promises to do, because it is very difficult to restore register variables correctly, and one of GCC's features is that it can put variables in registers without your asking it to. If you want a variable to be unaltered by `longjmp', and you don't want to write `volatile' because old C compilers don't accept it, just take the address of the variable. If a variable's address is ever taken, even if just to compute it and ignore it, then the variable cannot go in a register.

Code compiled with GCC may call certain library routines. Most of them handle arithmetic for which there are no instructions. This includes multiply and divide on some machines, and floating point operations on any machine for which floating point support is disabled with `-msoft-float'. Some standard parts of the C library, such as `bcopy' or `memcpy', are also called automatically. The usual function call interface is used for calling the library routines. These library routines should be defined in the library `libgcc.a', which GCC automatically searches whenever it links a program. On machines that have multiply and divide instructions, if hardware floating point is in use, normally `libgcc.a' is not needed, but it is searched just in case.
Each arithmetic function is defined in `libgcc1.c' to use the corresponding C arithmetic operator. As long as the file is compiled with another C compiler, which supports all the C arithmetic operators, this file will work portably. However, `libgcc1.c' does not work if compiled with GCC, because each arithmetic function would compile into a call to itself!
Science Fair Project Encyclopedia

Two dimensional gel electrophoresis

Two dimensional gel electrophoresis, commonly abbreviated as 2-DE or 2-D electrophoresis, is a form of gel electrophoresis commonly used to analyze proteins. In 1-D electrophoresis, proteins (or other analytes) are separated in one dimension, so that all the analytes will lie along a line but be separated from each other by some property. 2-D electrophoresis begins with 1-D electrophoresis but then separates the analytes by a second property in a direction 90 degrees from the first. The result is that the analytes are spread out across a 2-D surface rather than along a line. Because it is less likely that two analytes will be the same in both properties than that they will be the same in just one property, analytes are more effectively separated in 2-D electrophoresis than in 1-D electrophoresis.

To separate the proteins by isoelectric point, a gradient of pH is applied to a gel and an electric potential is applied across the gel, making one end more positive than the other. At all pHs other than their isoelectric point, proteins will be charged. If they are positively charged, they will be pulled towards the more negative end of the gel, and if they are negatively charged they will be pulled to the more positive end of the gel. Once they reach the region of the gel with a pH corresponding to their isoelectric point, however, they will become neutrally charged and remain in that spot.

Before separating the proteins by mass, they are treated with sodium dodecyl sulfate (SDS). This denatures the proteins (that is, it unfolds them into long, straight molecules) and attaches a number of SDS molecules roughly proportional to the protein's length. Because a protein's length (when unfolded) is roughly proportional to its mass, this is equivalent to saying that it attaches a number of SDS molecules roughly proportional to the protein's mass.
Since the SDS molecules are negatively charged, the result of this is that all of the proteins will have approximately the same mass-to-charge ratio as each other. Next, an electric potential is again applied, but at a 90 degree angle from the first field. The proteins will be attracted to the more negative side of the gel proportionally to their mass-to-charge ratio. As previously explained, this ratio will be nearly the same for all proteins. The proteins' progress will be slowed by frictional forces. This frictional slowing is roughly inversely proportional to the protein's size which, as noted previously, is roughly proportional to its mass when the protein is denatured. The electric field is applied for as long as it takes the smallest protein to reach the far end of the gel. The result of this is a gel with proteins spread out on its surface. These proteins can then be detected by a variety of means, but the most common is silver staining. In this case, a silver colloid is applied to the gel. The silver bonds to cysteine groups within the protein. The silver is darkened by exposure to ultra-violet light. The darkness of the silver can be related to the amount of silver and therefore the amount of protein at a given location on the gel. This measurement can only give approximate amounts, but is adequate for most purposes.

- A 2-D electrophoresis tutorial on the web site of the Parasitology Group at Aberystwyth University
- Discussion forum about 2D gel image analysis

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
The Arctic’s frozen waters have entered a radically new state. How a deep-drilling experiment is fighting climate change. Scientists analyze ancient glaciers to understand climate change today. Drought stressed the groundwater stores of the southern United States in 2011 and 2012. In 2010, El Niño switched quickly to a strong La Niña, resulting in intense weather. Scientists map Port-au-Prince's earthquake zone in detail. Volcanologists use every device at their disposal to monitor Sicily's Mount Etna. Although 2010’s ozone hole was sizeable, Earth’s ozone layer is on the mend. Ancient mineral crystals help scientists understand Earth's earliest era. An innovative twin-satellite mission is watching water move across our blue planet.
A 295 kg piano slides 4.5 m down a 30° incline and is kept from accelerating by a man who is pushing back on it parallel to the incline (Fig. 6-36). The effective coefficient of kinetic friction is 0.40. I have tried for over 4 hours to get this problem right, but no luck. I am willing to give MAX Karma points to anyone who can provide ALL solutions and ALL answers. Thanks. See Link

(a) Calculate the force exerted by the man. _445_ N (Correct)
(b) Calculate the work done by the man on the piano.
(c) Calculate the work done by the friction force.
(d) What is the work done by the force of gravity?
(e) What is the net work done on the piano?
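A sketch of how the remaining parts follow from part (a), balancing forces along the incline (this is my working, not an official solution; it assumes g = 9.8 m/s², which reproduces the confirmed ≈445 N answer to within rounding):

```python
import math

m, d, theta, mu, g = 295.0, 4.5, math.radians(30), 0.40, 9.8

N = m * g * math.cos(theta)               # normal force on the incline
F_man = m * g * math.sin(theta) - mu * N  # (a) force balance along the slope, ~445 N

W_man = -F_man * d                        # (b) man pushes up-slope, piano moves down
W_fric = -mu * N * d                      # (c) friction always opposes the motion
W_grav = m * g * math.sin(theta) * d      # (d) gravity component along the incline
W_net = W_man + W_fric + W_grav           # (e) zero: kinetic energy doesn't change

print(round(F_man), round(W_man), round(W_fric), round(W_grav), round(W_net))
```

The net work coming out as zero is the check: since the piano does not accelerate, the work-energy theorem requires the works in (b), (c) and (d) to cancel.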
The basic indentation command is <TAB> (indent-for-tab-command), which was documented in Indentation. In programming language modes, <TAB> indents the current line, based on the indentation and syntactic content of the preceding lines; if the region is active, <TAB> indents each line within the region, not just the current line. The command C-j (newline-and-indent), which was documented in Indentation Commands, does the same as <RET> followed by <TAB>: it inserts a new line, then adjusts the line's indentation.

When indenting a line that starts within a parenthetical grouping, Emacs usually places the start of the line under the preceding line within the group, or under the text after the parenthesis. If you manually give one of these lines a nonstandard indentation (e.g., for aesthetic purposes), the lines below will follow it.

The indentation commands for most programming language modes assume that an open parenthesis, open brace or other opening delimiter at the left margin is the start of a function. If the code you are editing violates this assumption—even if the delimiters occur in strings or comments—you must set a certain variable to nil for indentation to work properly. See Left Margin Paren.
...making Linux just a little more fun!

In the last article we saw some of the basic GCC options, and noted that it supports several CPU architectures. One of the topics we will cover in this article is how to turn on optimizations for different architectures and what happens when they are turned on. We will also look at some other nifty tricks which we can do with GCC.

Profiling is a method of identifying sections of code that consume large portions of execution time. Profiling basically works by inserting monitoring code at specific points in the program. This code can be inserted by using the -pg option of GCC. When debugging we need extra information added to the binaries. When a program is compiled with the -g flag, additional information which can be used by gdb (or other debuggers) is added to the binary. This increases the size of the binaries but is necessary for debugging. When compiling debugging binaries we should turn off all optimization flags. GCC can add debugging information in several different formats such as stabs, dwarf-2 or coff format.

$ gcc -g -o helloworld helloworld.c  # for adding debugging information
$ gcc -pg -o helloworld helloworld.c # for profiling

Programs compiled with profiling and/or debugging turned on are usually referred to as debug binaries, as opposed to production binaries which are compiled with optimization flags. We previously saw that the compilation of the program to get an executable binary consists of different phases. Each of the main compile stages (compiling to assembly language, assembling and linking) is done by a different executable (e.g. cc1, as and collect2). We use the -time option to GCC to get a breakdown of the time required for each stage.

$ gcc -time helloworld.c
# cc1 0.02 0.00
# as 0.00 0.00
# collect2 0.04 0.01

We can also gather more fine-grained statistics about the various stages of the compiler proper, cc1, using the -Q option. This shows the real time spent as well as time spent in userspace and kernel modes.
$ gcc -Q helloworld.c
main
Execution times (seconds)
 preprocessing       :   0.00 ( 0%) usr   0.00 ( 0%) sys   0.24 (38%) wall
 parser              :   0.01 (50%) usr   0.00 ( 0%) sys   0.02 ( 3%) wall
 expand              :   0.00 ( 0%) usr   0.00 ( 0%) sys   0.03 ( 5%) wall
 global alloc        :   0.00 ( 0%) usr   0.00 ( 0%) sys   0.03 ( 5%) wall
 shorten branches    :   0.00 ( 0%) usr   0.00 ( 0%) sys   0.04 ( 6%) wall
 symout              :   0.00 ( 0%) usr   0.00 ( 0%) sys   0.01 ( 2%) wall
 rest of compilation :   0.01 (50%) usr   0.00 ( 0%) sys   0.00 ( 0%) wall
 TOTAL               :   0.02               0.00               0.64

Before we move on to optimizations, we need to look at how a compiler is able to generate code for different platforms and different languages. The process of compilation has three components.

There are two kinds of optimizations possible - optimizations for speed and optimizations for space. In an ideal world, both would be possible at the same time. (Actually, some optimizations do both - such as common sub-expression elimination.) More often than not, optimizing for speed increases the memory footprint (the size of the program loaded in memory) and vice versa. Expanding functions inline is a good example of this case. Inlining functions reduces the overhead of a function call but ends up replicating code wherever the inline function has been called, thus increasing the size of the executable. Turning on optimizations will increase the compilation time, as the compiler has to analyze the code more. GCC offers four optimization levels. These are specified by the -O<Optimization Level> flag. The default is no optimization, or -O0 (notice the capital O). Various optimizations are turned on by each of the different levels (-O1, -O2 and -O3). Even if we give higher optimization levels such as -O25, they have the net effect of enabling the highest level of optimizations (-O3).
In addition to these four optimization levels there is another optimization level, -Os, which enables all the optimizations for space as well as those optimizations which do not increase the size of the code but give speed improvements. At the -O1 optimization level, only those optimizations are done which reduce code size and execution time without increasing compilation time significantly. At the -O2 optimization level, those optimizations which have a space/execution-time tradeoff are done. Almost all optimizations are turned on by the -O3 optimization level, but compilation time might increase significantly by turning it on.

$ gcc -O3 -o hello3 helloworld.c
$ gcc -O0 -o hello0 helloworld.c
$ ls -l
-rwxr-xr-x 1 vinayak users 8722 2005-03-24 17:59 hello3
-rwxr-xr-x 1 vinayak users 8738 2005-03-24 17:59 hello0
$ time ./hello3 > /dev/null
real 0m0.002s
user 0m0.001s
sys 0m0.000s
$ time ./hello0 > /dev/null
real 0m0.002s
user 0m0.000s
sys 0m0.003s

As seen above, compiling the program with the -O3 optimization level reduces the size of the executable compared to compiling with the -O0 optimization level (no optimization). It is also possible to have CPU- or architecture-specific optimizations. For example, a particular architecture may have numerous registers. These can be utilized intelligently by the register allocation algorithm to store temporary variables between calculations, minimizing cache and memory accesses and thus ensuring considerable speedups in CPU-intensive operations. Some of the platform-specific optimizations can be enabled using -march=<architecture type> or -mcpu=<CPU name>. For the x86 and x86-64 family of processors, -march implicitly implies -mcpu. Some of the architecture type options in this family are ix86 (i386, i486, i586, i686), Pentium (pentium, pentium-mmx, pentiumpro, pentium2, pentium3, pentium4) and Athlon (athlon, athlon-tbird, athlon-xp, opteron). But executables built with platform-specific flags may not run on other CPUs.
For example, executables generated with -march=i386 will run on an i686 platform because of the backward compatibility of the platforms. However, executables generated with -march=i686 may not run on older platforms, as some of the instructions (or extended instruction sets) do not exist on older CPUs. If you use the Gentoo Linux distribution, you may already be familiar with some of these flags.

$ gcc -o matrixMult -O3 -march=pentium4 MatrixMultiplication.c   # optimise for Pentium 4
$ gcc -o matrixMult -O3 -march=athlon-xp MatrixMultiplication.c  # optimise for Athlon XP

You can also give specific optimization flags at the command line, such as -finline-functions (to integrate simple functions into their callers) and -floop-optimize (to optimize loop control structures). The important thing to remember is that the order of the flags on the command line matters: an option on the right will override ones on the left. This is a good way to choose particular options without cluttering the compilation command line, and it works for platform-specific optimizations too.

$ gcc -o inlineDemo -O3 -fno-inline-functions InlineDemo.c
$ gcc -o matrixMult -march=pentium4 -mno-mmx -mno-sse -mno-sse2 MatrixMultiplication.c

In the first example, the optimization level -O3 would enable inlining of functions (its effect is the same as -O2 -finline-functions -frename-registers), so placing -fno-inline-functions to its right on the command line disables the inlining of functions. The second command turns on all the Pentium 4-specific optimizations, but the generated code will not contain any MMX, SSE or SSE2 instructions.

All the options supported by GCC on your machine can be seen by giving the following command:

$ gcc -v --help | less

This will list all the different options that are supported by GCC and by the processes GCC invokes on your machine.
This is a pretty huge list, and it should contain most of the options discussed in this article and the earlier one in this series, and more. In this article we looked mainly at the various GCC optimization options and how they work. In the next part of this series we will look at another development tool, make, which is used for building big projects. Vinayak Hegde is currently working for Akamai Technologies Inc. He first stumbled upon Linux in 1997 and has never looked back since. He is interested in large-scale computer networks, distributed computing systems and programming languages. In his non-existent free time he likes trekking, listening to music and reading books. He also maintains an intermittently updated blog.
<urn:uuid:e3b0f3ff-d9ae-4db1-b38b-fb9b4adf06c4>
3.453125
2,001
Personal Blog
Software Dev.
56.650997
This approach predicted the design of alloys with roughly half the stiffness of current implants — promising to reduce pain for the millions of patients who receive hip replacements each year. It also opens opportunities for theory-guided design of future biomedical products. Soft materials form organized systems at scales beyond those of molecules, in products ranging from the plastics in yoghurt cups to foods such as mayonnaise and biological molecules including DNA. They differ from ‘hard matter’ (such as steel or gold) by lower energy density: the bonds are 100–1,000 times weaker than those in metallic crystals [5, 6]. Thermal effects influence the organization of soft materials. Hence, models cannot use quantum calculations alone; thermodynamic calculations must also be included. Modelling soft materials starts with an atomic description, and then derives a broader model for simulations over a longer period. This approach can rapidly switch between levels of resolution, and can be run in reverse to study melting behaviour. Various soft-material problems, such as polymer stability or liquid-crystal switching, can be solved this way. Imagine moving from a study of individual alloy atoms to simulating the material in an automotive test crash, or using the results of biomolecular simulations to describe the workings of blood vessels. Multi-scale modelling could make such visions reality, but challenges remain. Unified theories of matter, which can bridge computational scales in a physically consistent manner, must be developed. Experimental observations are needed to verify model predictions at all levels. Successful multi-scale modelling must be capable of handling the complexity of real-life situations to avoid generating incorrect predictions. A multiscale model, based on coupled ab initio and continuum simulations, has helped to unravel the molecular and mesoscopic structure and properties of chitin-based natural polymers. 
These materials form the exoskeleton of arthropods, including insects, spiders and decapods. The model can be used to design advanced synthetic polymers. The project was realized cooperatively by the Max Planck Society, the Bulgarian Academy of Sciences and the Massachusetts Institute of Technology (Nikolov, S. et al. Adv. Mater. 22, 519–526; 2010).
<urn:uuid:794c8675-4a1d-4fa1-a21d-d782985d740b>
3.046875
448
Knowledge Article
Science & Tech.
28.355195
Episode 537: Preparation for the deep scattering and quarks topics

From the Institute of Physics, this topic gives clear evidence for the size of the nucleus and for the fact that nucleons are not fundamental particles but contain different parts. This leads on to Gell-Mann and Zweig’s quark model. The learning episodes in this topic are:

Episode 538: Electron scattering
Episode 539: Deep inelastic scattering
Episode 540: Quarks and the standard model

After the activities, students should:
1. know that Rutherford’s experiment, using alpha particles, cannot probe the nucleus because the alpha particles will interact with the nucleus by the strong nuclear force
2. know that electrons, being leptons, do not ‘feel’ the strong nuclear force, and so can probe the nucleus
3. use electron wavelength and scattering data to calculate the size of the nucleus
4. understand that the complex scattering from a nucleus reveals that nucleons are not simple points, but are themselves composed of smaller particles
5. describe how hadrons are made from two or three quarks
6. deduce the properties of a hadron from the properties of its constituent quarks
7. draw Feynman diagrams involving quarks and gluons.

HEALTH and SAFETY
Any use of a resource that includes a practical activity must include a risk assessment. Please note that collections may contain ARCHIVE resources, which were developed at a much earlier date. Since that time there have been significant changes in the rules and guidance affecting laboratory practical work. Further information is provided in our Health and Safety guidance.
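Learning outcome 3 above asks students to calculate the size of the nucleus from electron wavelength and scattering data. A sketch of that calculation uses the standard first-diffraction-minimum relation sin θ ≈ 1.22 λ/d together with the high-energy de Broglie wavelength λ ≈ hc/E (the beam energy and minimum angle below are illustrative assumptions, not data from the episodes):

```python
import math

HC_MEV_FM = 1239.84  # hc in MeV*fm

def nuclear_diameter_fm(energy_mev, theta_min_deg):
    """Estimate the nuclear diameter d from the first diffraction minimum,
    sin(theta_min) = 1.22 * lambda / d, with lambda = hc / E."""
    lam = HC_MEV_FM / energy_mev  # electron wavelength in femtometres
    return 1.22 * lam / math.sin(math.radians(theta_min_deg))

# 420 MeV electrons with an assumed first minimum near 51 degrees:
d = nuclear_diameter_fm(420, 51)
print(f"estimated nuclear diameter: {d:.2f} fm")  # about 4.63 fm
```

A few femtometres, as expected for a light nucleus; doubling the beam energy halves the wavelength, so sharper probes resolve finer structure, which is the logic behind deep inelastic scattering.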
<urn:uuid:faf44bec-284d-4c2b-b310-9074c4b83c5c>
4.15625
342
Tutorial
Science & Tech.
41.043971
Search our database of handpicked sites

Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.

You searched for: We found 10 results on physics.org and 79 results in our database of sites (59 are Websites, 2 are Videos, and 18 are Experiments).

Search results on physics.org
Search results from our links database

An introduction to pressure which discusses various everyday applications and the relationship between pressure and surface area.
The fundamental SI unit of pressure is the pascal (Pa), but it is a small unit, so kPa is the most common direct pressure unit for atmospheric pressure.
Percy Williams Bridgman's (1881–1961) research concerned the effects of high pressures on materials and their thermodynamic behavior.
Ever wondered why it hurts so much when someone wearing spiked heels steps on your foot? It's all about pressure.
physics.org page with fascinating facts all about atmospheric pressure. Did you know that you have one tonne of air pressing down on you?
A handy conversion calculator for some common pressure units.
Description of this concept and links to related information. Pressure is defined as force per unit area.
This page from NASA tells you all you need to know about atmospheric pressure. Find out why your ears pop or have a go at flying a hot air balloon.
A collection of interesting experiments dealing with air pressure and aerodynamics.
From the companion website to several recent NOVA programs on Mt. Everest. This page looks at the effects of air pressure at high altitudes.

Showing 1 - 10 of 79
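The "force per unit area" definition and the spiked-heel example in the listing above are easy to check numerically. The figures below (a person's weight, heel and sole areas) are assumptions for illustration, not values from the page:

```python
def pressure_pa(force_n, area_m2):
    """Pressure = force / area, in pascals (N/m^2)."""
    return force_n / area_m2

weight = 600.0                        # N, roughly a 60 kg person
heel = pressure_pa(weight, 1e-4)      # a 1 cm^2 stiletto heel tip
sole = pressure_pa(weight, 150e-4)    # a 150 cm^2 flat sole

print(f"heel: {heel / 1000:.0f} kPa, sole: {sole / 1000:.0f} kPa")
# heel: 6000 kPa, sole: 40 kPa -- same force, about 150x the pressure
```

The same force concentrated on a 150-times-smaller area gives a 150-times-larger pressure, which is exactly why the heel hurts.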
<urn:uuid:585eb434-f698-470d-bd34-d89f7509c6d1>
3.265625
344
Content Listing
Science & Tech.
55.085535
At more than 200 times the sun’s mass, this giant sets a new record. Found in: Astronomy and Atom & Cosmos BLOG: A study showing a genetic basis for exceptionally long life in humans has come under fire from critics. (p. 10) Found in: Body & Brain and Genes & Cells The Rosetta orbiter makes its second swing past a relic of the early solar system. Astronomers from the United Kingdom have published papers criticizing some of the evidence used to support theories of dark matter and energy. Found in: Atom & Cosmos Hayabusa is the little spacecraft that could. Having survived countless technical challenges over its seven-year journey, the Japanese probe returned to Earth on June 13, disintegrating as planned in a blazing fireball over Australia’s nighttime skies. But before it burned up in the atmosphere, Hayabusa released its precious cargo: a 40-centimeter-wide capsule that, scientists hope, contains samples of the asteroid the probe visited in 2005. Protected in a larger container, the capsule parachuted down to the Woomera military installation in South Australia, where ground tea... Climate experts turn their gaze north to monitor this summer's Arctic melt. Found in: Earth and Environment If the object that crashed into Jupiter on June 3 left behind a bruise, it’s a tiny one. No ground-based telescope has found evidence of a scar. But an image taken June 6 with the sharp eye of the Hubble Space Telescope may provide the final say. Researchers haven’t yet had a chance to get their hands on the Hubble data, says planetary scientist Heidi Hammel of the Space Science Institute in Boulder, Colo. “Until we get that back, it will not be clear whether we have an impact that left a [scar], or a meteor which did not.” With observations planned months in advance, “... Found in: Astronomy, Atom & Cosmos and Planetary Science A new theory suggests an atmospheric answer to the continuing paradox of why early Earth wasn’t icy. 
Found in: Earth, Earth Science and Planetary Science Astronomers at the American Astronomical Society meeting in Miami presented images of exoplanets in high-angle orbits. Found in: Atom & Cosmos Physicists are embroiled in a verbal slugfest over a few measly WIMPs. WIMPs, or weakly interacting massive particles, are hypothetical subatomic particles that, if shown to exist, might account for some of the invisible dark matter that astronomers say makes up some 85 percent of the mass of the universe. Astronomers are eager to find dark matter, because it would help them understand the unseen gravitational glue that keeps galaxies and galaxy clusters from flying apart. And a WIMP version of dark matter in particular would thrill many physicists, because it would validate a theory called su...
<urn:uuid:4a1b824c-df23-40df-8e54-b1cec7fbf808>
3.0625
597
Content Listing
Science & Tech.
44.568942
The EPA has listed the Ventura River as an "impaired water body" for a variety of problems, including trash, bacteria, water diversion and pumping, and pesticides (DDT/PCBs). The Clean Water Act requires that government take action to solve the problems to ensure the river is fishable and swimmable. The regulatory mechanism for this is the "Total Maximum Daily Load," or TMDL. The primary concern is that algae growth may be fueled by excess nutrients (nitrate and phosphate), which in turn creates large daily swings in dissolved oxygen (DO). If DO drops below 4 mg/l, aquatic life can become stressed and fish kills may occur. (This is also called 'eutrophication'.) The presence of endangered species makes this issue even more critical. Stream Team volunteer data has been used along with scientific analysis at UCSB to monitor and study algae over the past year. This graph is actual data from 24 hours of sampling on the Ventura River. It illustrates how photosynthesizing algae release O2 during daylight hours, generating peak DO measurements in the early afternoon. Overnight, however, DO levels drop dramatically, reaching a minimum in the pre-dawn hours. Traditionally, nutrients are seen as the driver for excess algae growth. Nutrients may originate from broad land uses such as agriculture, livestock, septic tanks and treated wastewater, as well as atmospheric deposition. It turns out that algae is widespread throughout the Ventura River watershed, and is highly variable with season and annual climate (wet or dry year). Algae is also dependent on river flows, water temperature, sedimentation, and a host of other variables. Because of these complex relationships, algae may be seen as a symptom of ecological stress, rather than a problem in itself. This raises a complex question when it comes to regulating algae as a pollutant: the TMDL process was originally developed to control point-source pollution. 
It is clear that in this watershed with shallow, over-drafted aquifers and strong surface water/groundwater interactions, a meaningful algae TMDL will require a watershed approach that takes into account ecosystem processes. Integrated watershed management will be necessary to address excess algae in the Ventura River. Paul Jenkin is the Environmental Director of the Ventura County Chapter of the Surfrider Foundation, and founder of the Matilija Coalition. The Surfrider Foundation is an international environmental organization dedicated to the protection and enhancement of the world’s waves and beaches through conservation, activism, research, and education (CARE). Since 1994, Paul has worked to restore the coast and watershed where he lives, in Ventura, California.
<urn:uuid:fd7bc1cb-4298-4712-8d3a-384c8c14cdc8>
3.484375
539
Knowledge Article
Science & Tech.
30.025
(1) In programming, a symbol or number used to identify an element in an array. Usually, the subscript is placed in brackets following the array name. For example, AR[5] identifies element number 5 in an array called AR. If the array is multidimensional, you must specify a subscript for each dimension. For example, MD[i][j][k] identifies an element in a three-dimensional array called MD.
<urn:uuid:49a7f8e3-e20f-4dcf-a2da-1a240797aa13>
3.46875
80
Structured Data
Software Dev.
44.471125
Babies can be anything when they grow up, but it's a lot harder for a 45-year-old accountant to start a new life as a firefighter. Likewise, embryonic stem cells can become any kind of cell in the human body, but it's another thing entirely to force a specialized adult cell out of its comfort zone. For instance, scientists can strip an adult blood cell of its programming, and make it act like a stem cell again. But the results aren't perfect. And, now, it looks like these "induced pluripotent stem cells" (or iPSCs) are even more flawed than researchers previously realized. Science blogger extraordinaire Ed Yong explains: The history of iPSCs is written in molecular marks that annotate its DNA. These 'epigenetic' changes can alter the way a gene behaves even though its underlying DNA sequence is still the same. They are like Post-It notes - you can stick them to a book to point out parts to read or ignore, without editing the underlying text. Epigenetic marks separate different types of cells from one another, influencing which genes are switched on and which are switched off. And according to Kim, they're not easy to remove, even when the cell has apparently been reprogrammed into a stem-like state. He focused on one such marker - the presence of methyl groups on DNA, which typically serve to switch off genes. They're like Post-it notes that say "Ignore this". Kim found that iPSCs have very different methylation patterns depending on the cells they came from. Those that come from brain or connective cells have methyl groups at genes that are necessary for making blood cells, and vice versa. The iPSCs even have distinctive methyl marks if they come from slightly different lineages of blood cells. Now, Ryan Lister and Mattia Pelizzola from The Salk Institute have found the same reprogramming errors in human iPSCs, and to a much greater extent than even Kim had suspected. 
At first, the iPSCs seemed to have a spread of methyl marks that looked superficially similar to those of embryonic cells. But when Lister and Pelizzola looked more closely, the cracks started to appear in this tidy picture. The duo found plenty of hotspots around the iPSC genomes that were unusually ridden with methyl marks. None of these marks existed in true embryonic stem cells, and some sat in places that could switch off important genes. That's a problem. There might be ways around it, Ed says. And there are other ways to turn adult cells into stem cells. Trouble is, none of those technologies are as well-developed, and they're more likely to spark ethical debates. If we're going to be able to use stem cells in a really productive, wide-spread way, this is a big hurdle that will have to be cleared. Why didn't we know this earlier? Because the path of research is long, winding, and bumpy. To get an idea of what it took to get to this point, check out the awesome interactive timeline Ed made to accompany this story. Not Exactly Rocket Science: Reprogrammed stem cells are loaded with errors Maggie Koerth-Baker is the science editor at BoingBoing.net. She writes a monthly column for The New York Times Magazine and is the author of Before the Lights Go Out, a book about electricity, infrastructure, and the future of energy. You can find Maggie on Twitter and Facebook.
<urn:uuid:687e4238-d466-46de-b18e-efdeb2ef4ac8>
3.203125
733
Personal Blog
Science & Tech.
52.382733
Modern molecular phylogenetics, in conjunction with sensitive mineral surface geochemical analysis, was used to determine the quantitative and qualitative distribution of micro-organisms in Dry Valley soils, the key environmental factors that determine microbial distribution, and the role of micro-organisms in community structure. Several sites were sampled, including Mt Erebus, the Miers Valley, Bratina Island and Beacon Valley. At Mt Erebus, the site was probed for soil temperatures and the area's hot spots were mapped. The temperature, pH and soil moisture were measured over three transects and the soil was sampled at intervals along each transect. A temperature logger with two probes, one at 4 cm depth and the other on the surface, was installed for one year. At the Miers Valley, a survey of hypoliths was conducted to gain a quantitative estimation of habitat and biomass values. Hypolith dimensions (cm2), weight of the hypolith rock (g), gross weight of the community (g), the depth of rock insertion (mm) and the depth of penetration (mm) were all recorded. Environmental data (temperature, humidity and irradiance levels) for hypoliths were also recorded, with probes at the underside of the hypolith (3 cm deep), the soil surface, open mineral soil (3 cm) and non-translucent rock (3 cm). Photographs were taken of the hypoliths and samples were collected for phylogenetic analysis, pigment spectrophotometric analysis and ATP analysis. A transect was established and soil samples were collected at intervals along it. An area with several (>48) seal carcasses was surveyed, with transects running through individual carcasses. Transects were described and sampled at intervals; pH was measured, CO2 profiles were determined, the relative humidity and temperature under each carcass were measured, samples were taken for RNA analysis and microscopy, and DNA samples were extracted. Other seal carcasses were sampled for carbon dating. 
Two experiments were set up: 1) Hypolith growth experiment to determine if soils are capable of seeding de novo hypoliths and 2) DNA longevity was determined in soil next to a hypolith sample using a clonesaver card cut into 9 bits. Soil types were also plated onto media for isolation of certain bacteria. At Bratina Island, two ponds (P70 and Orange) were sampled, and temperature and pH recorded. A short visit was made to the Beacon Valley where soils were sampled across a 25m transect for comparison. Samples (comprising DNA extracts, soil samples, mummified seal tissue samples, RNA later-stabilised soil samples, rock samples, hypolith samples and culture plates) collected from the Miers and Beacon Valleys were returned to the University of Waikato.
<urn:uuid:fab97332-37e4-4907-a25a-08c3cabab130>
3.171875
569
Academic Writing
Science & Tech.
26.411957
Census of Antarctic Marine Life (CAML) Archive of Project Documentation
Entry ID: CAML_Project_Archive.Media_Education_Outreach

Abstract: The Census of Antarctic Marine Life (CAML) Project Archive is a collection of scanned documents, maps, videos, and other related material that comprise the organisation and management documentation associated with a major research project of international significance. CAML measured the distribution and abundance of life in the Southern Ocean around Antarctica so that future impacts of climate change and human activities can be better understood. CAML coordinated the largest-ever survey of the Southern Ocean with 18 voyages in Antarctic waters, inventoried over 16,000 marine species with hundreds new to science, provided DNA barcodes for 1,500 species, and has so far produced more than 600 scientific publications. CAML is a key activity of the Scientific Committee on Antarctic Research (SCAR); a subproject of the Census of Marine Life (CoML); and was a major initiative of the 2007-2009 International Polar Year (IPY).

Purpose: Media_Education_Outreach includes: two videos (Oceans of Ice and Life Under the Ice); the CAML fact sheet and two brochures; a list of CAML accomplishments, short and long versions; the cover image from the special issue of Deep Sea Research Part II edited by CAML personnel; the poster and presentation from the 2010 project finale in London; a press release about CAML, short and long versions; and other press releases, including an editorial from "The New York Times."

Start Date: 2001-10-20
Stop Date: 2002-02-04
ISO Topic Category
Role: DIF AUTHOR
Phone: 303 381 7470
Fax: 303 381 7501
Email: bjorn at unavco.org
Address: 6350 Nautilus Dr
Province or State: CO
Postal Code: 80301
Johns, B., 2001-2002 Season Report, UNAVCO 2002.

Creation and Review Dates
DIF Creation Date: 2005-10-12
Last DIF Revision Date: 2005-10-17
<urn:uuid:683e367e-dfe7-40b1-aa8a-341f87ee7936>
2.6875
445
Content Listing
Science & Tech.
34.414149
Reinforcement Learning and Python: Information and Resources

The ambition of this page is to provide information and links for using Python.

Python Documentation, Tutorials and General Information

Tkinter (the Python version of tk) (GUI)
- Introduction to Tkinter (html). Also check the Python books; Tkinter has been the default GUI for Python for many years and ships with it, so many Python manuals cover it as well.
- Tkinter on the Mac: the Mac does not have tcl/tk on it by default; you must install tcltkaqua first. After it is loaded, you can open the Package Manager (see the "Python on the Mac" entry below) and install Tkinter.

Python on the Mac
- Downloads and documentation, including IDLE, Python Launcher and the Package Manager.
- You must use pythonw instead of python for Python scripts which use Tkinter.
- You cannot use the Python IDE for applications with Tkinter.
- You can use IDLE with Tkinter, but only if you make the changes described under "Macintosh IDLE" below.

Python on Windows XP
- Make sure that all old .pyc files are removed before trying to use the toolkit or another package. .pyc files from other machines will not work properly on Windows.

Python on Linux
- Be careful what you use to unpack (unzip, untar) tgz files (like the RLtoolkit). Using zcat with tar messes up the __init__.py files: they are saved as __init__.py.bin files and are gibberish. I'm not quite sure why this happens. Perhaps someone can fill in a method here that works well without this problem.

Mixing Python and C

Other Python Packages
- An editor that edits, colorizes and prints code, but does not run it.
- IDLE, the standard IDE for Python: colorizes, runs, debugs, prints - does it all.
  - IDLE requires Tkinter: make sure tk is installed on your machine (see the Tkinter section above).
- Macintosh IDLE:
  - Make sure you have the MacPython download (from above) installed; IDLE will be in it.
  - The key bindings are initially set for Windows. 
To change this, go to the Options menu and select Configure IDLE. A window will pop up with several tabs on it. Choose the Keys tab and select the IDLE Classic Mac setting; then click Apply and OK.
- To use Tkinter in Python scripts, change IDLE to start with pythonw, not python:
  - From a terminal window, go to the IDLE location.
  - Do one (or both) of the following:
    - Edit (e.g. with vi) the idle.py file to use pythonw instead of python (in its first line).
    - Start up Python Launcher (same location as IDLE); tell it to use pythonw as the interpreter (/usr/bin/pythonw), and tell it NOT to allow overriding #! from a script.
  - Save the file (modified or not) as idle.py somewhere useful for you (desktop, your home directory, etc).
  - Do Get Info (ctrl-click gives you a menu; select Get Info from there) on the file. Under Open With, choose PythonLauncher. Close the Get Info window.
  - Now you can double-click on this idle.py file to get started (hint: move it into your dock in the document area and you will be able to click on it and start IDLE from there).
  - DO NOT try to start IDLE using the Macintosh application under /Applications/MacPython-2.3. It will start with python, not pythonw, even with the changes you have made.
- To get interactiveness with tk scripts (i.e. build interactive windows and still be able to use the shell):
  - Download a new run.py file from Jason.
  - Replace the run.py file in idlelib with the new one. The idlelib location depends on your machine. For example:
    - Macintosh: there are two places to replace the run.py file (use Terminal to do this).
    - Linux Redhat 9: /usr/lib/python2.3/idlelib
    - Windows XP: c:\Python23\Lib\idlelib
  - Now when you use gMainloop or tk's mainloop functions, you will get the message 'IDLE: now running in event loop'. You can now use your windows interactively, and type commands into the shell and have them run.

A list of Python Editors and IDEs

Python is not as fast as C or Lisp, but there are some things that can be done to speed it up. 
Please add to these as you discover them! And remember, you can call C from Python, so that is an option too!
- Some hints:
  - While loops are faster than for loops.
  - Use xrange instead of range in for loops for greater speed.
- Python profiler - shows you where you are spending the most time (included in the Python distribution):
  - import profile
  - profile.run('python command')
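The profiler hint above can be made concrete. This sketch uses cProfile, the faster C implementation of the profile module named in the text, written in modern (Python 3) syntax:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive: sum(range(n)) or n*(n-1)//2 would be faster.
    total = 0
    for i in range(n):   # old Python 2 code would use xrange here
        total += i
    return total

prof = cProfile.Profile()
prof.enable()
slow_sum(100_000)
prof.disable()

buf = io.StringIO()
pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(5)
# The first line of the report reads like "N function calls in T seconds",
# followed by per-function call counts and times.
print(buf.getvalue().strip().splitlines()[0])
```

The per-function call counts and cumulative times tell you where optimization effort (rewriting a loop, calling into C) is actually worth spending.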
<urn:uuid:f32c00bb-0bf4-43ec-9edd-0cb1d810ce5b>
2.875
1,154
Tutorial
Software Dev.
61.611065
Laws of Source Code and Software Development I’ve worked on a variety of projects, in a myriad of languages, and have learned the following universal truths about software development the hard way. - Commented-out code is not a comment – Use version control; don’t track code changes by commenting them out. Commented-out code is schizophrenic code. - Let your reputation and code precede you – If you work on open source projects, blog, and work your network, you will get more job offers, even when you aren’t looking for a job, than people who are looking and just email out resumes. - Don’t make excuses for code, let it speak for itself – You are paid to find solutions using code, not to find excuses for your code. ‘It worked on my machine’ is not a solution; you will not ship your computer to the client with the application. - Don’t take code personally – Don’t take code reviews personally; they are not about you but about a business feature and the overall performance of the application. - Your code is your career legacy – For years after you leave, those who maintain your code will either curse you or thank you. - Coding does not equal programming – Writing code is not the same thing as software development; one requires thought while the other does not, just as playing with your iPod does not make you a musician. - Code is about learning – Moore’s Law has technology doubling roughly every 18 months; you should keep up. If you are not learning, you are doing it wrong. Every project is an opportunity to learn. - Code is communication – People will read the code you write. Use best practices and common design patterns and idioms. Strive for simplicity over impressing the monkey on your back. Your code should communicate its intent clearly and concisely. Code talks, bugs walk! - It is not the tools that make a developer – Know your tools and use them to their full power, but don’t use them as a crutch! 
Switching between IDEs should not stop you in your tracks because you can’t find the correct code generation wizard. Michelangelo was a great artist with nothing more than a chisel and a slab of marble. - Don’t trust your code – Trust in your coding abilities does not replace repeatable testing. Don’t trust your code, your assumptions, or your users. - Code is not written in Latin – Code is not dead once the application ships. Code is always being refactored, modified, re-used, and evolved. Your greatest strength is not writing mountains of new lines of code but maintaining, refactoring, and herding existing code into performing business requirements as per an agreed specification. - Respect the API – Your API is a contract others will depend on. Keep the API clean and explicit! The fewer methods you expose, the less testing, maintenance and documentation you need to maintain. - Code outlives its intention – As much as you would like, rewriting your application from scratch in the latest programming language or framework will not benefit the end users who, for one reason or another, are stuck with the current version of the software. Code can outlive its original intention; design for extensibility and adaptability. - Code means different things to different people – In the end, to end users, code simply means the ability to do what they expect.
<urn:uuid:4ce0c887-a79c-4d83-824c-c21a21a24ab1>
2.8125
725
Listicle
Software Dev.
51.357138
Auto-Implemented Properties (Visual Basic) Auto-implemented properties enable you to quickly specify a property of a class without having to write code to Get and Set the property. When you write code for an auto-implemented property, the Visual Basic compiler automatically creates a private field to store the property variable in addition to creating the associated Get and Set procedures. With auto-implemented properties, a property, including a default value, can be declared in a single line. The following example shows three property declarations. An auto-implemented property is equivalent to a property for which the property value is stored in a private field. The following code example shows an auto-implemented property. The following code example shows the equivalent code for the previous auto-implemented property example. When you declare an auto-implemented property, Visual Basic automatically creates a hidden private field called the backing field to contain the property value. The backing field name is the auto-implemented property name preceded by an underscore (_). For example, if you declare an auto-implemented property named ID, the backing field is named _ID. If you include a member of your class that is also named _ID, you produce a naming conflict and Visual Basic reports a compiler error. The backing field also has the following characteristics: The access modifier for the backing field is always Private, even when the property itself has a different access level, such as Public. If the property is marked as Shared, the backing field also is shared. Attributes specified for the property do not apply to the backing field. The backing field can be accessed from code within the class and from debugging tools such as the Watch window. However, the backing field does not show in an IntelliSense word completion list. Any expression that can be used to initialize a field is valid for initializing an auto-implemented property. 
When you initialize an auto-implemented property, the expression is evaluated and passed to the Set procedure for the property. The following code examples show some auto-implemented properties that include initial values.

You cannot initialize an auto-implemented property that is a member of an Interface, or one that is marked MustOverride. When you declare an auto-implemented property as a member of a Structure, you can only initialize the auto-implemented property if it is marked as Shared. When you declare an auto-implemented property as an array, you cannot specify explicit array bounds. However, you can supply a value by using an array initializer, as shown in the following examples.

Auto-implemented properties are convenient and support many programming scenarios. However, there are situations in which you cannot use an auto-implemented property and must instead use standard, or expanded, property syntax. You have to use expanded property-definition syntax if you want to do any one of the following:
- Add code to the Get or Set procedure of a property, such as code to validate incoming values in the Set procedure. For example, you might want to verify that a string that represents a telephone number contains the required number of numerals before setting the property value.
- Specify different accessibility for the Get and Set procedure. For example, you might want to make the Set procedure Private and the Get procedure Public.
- Create properties that are WriteOnly or ReadOnly.
- Use parameterized properties (including Default properties). You must declare an expanded property in order to specify a parameter for the property, or to specify additional parameters for the Set procedure.
- Place an attribute on the backing field, or change the access level of the backing field.
- Provide XML comments for the backing field.
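A sketch of expanded syntax covering the first two cases in the list above (validation in Set, and a Private Set with a Public Get). The class name, property name, and the ten-digit validation rule are illustrative assumptions, not from the original:

```vb
Public Class Contact
    Private _phone As String

    ' Expanded property: Public Get, Private Set with validation.
    Public Property Phone As String
        Get
            Return _phone
        End Get
        Private Set(value As String)
            ' Validate before storing: require exactly 10 digits.
            If value IsNot Nothing AndAlso
               System.Text.RegularExpressions.Regex.IsMatch(value, "^\d{10}$") Then
                _phone = value
            Else
                Throw New ArgumentException("Phone must contain exactly 10 digits.")
            End If
        End Set
    End Property
End Class
```

Because Set is Private here, only code inside Contact can assign Phone; external callers can still read it.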
If you have to convert an auto-implemented property to an expanded property that contains a Get or Set procedure, the Visual Basic Code Editor can automatically generate the Get and Set procedures and End Property statement for the property. The code is generated if you put the cursor on a blank line following the Property statement, type a G (for Get) or an S (for Set) and press ENTER. The Visual Basic Code Editor automatically generates the Get or Set procedure for read-only and write-only properties when you press ENTER at the end of a Property statement.
Luiz Rocha, the curator of ichthyology at the California Academy of Sciences, writes from Belize, where he conducts research on the social wrasse, one of the world’s most endangered fish.

Tuesday, Dec. 17

The weather is much better, and we were able to do several dives and get a good grasp on just how bad the lionfish invasion in Belize really is. They are everywhere. We saw and collected them in all habitats we visited, including coral reef, sea grass and mangrove. Finding them in the last two is especially disheartening, as they are nursery habitats for many coral reef species. Lionfish are eating young reef fish before they can even get there.

In the morning we dove along the barrier reef to estimate lionfish abundance there. During a 45-minute dive, we counted more than 20 of them. For perspective, we were a group of five divers, swimming slowly over the edge of the barrier reef. Collectively we would see a lionfish on average every two minutes. Even though this area of Belize has some spectacular diving, it is relatively far from the main airport, and there are no major scuba dive centers. In addition, the main fisheries here are for queen conch and lobster, so nobody is actively fishing the lionfish. Without any control from divers or fishermen, the lionfish population has been allowed to grow unchecked.

Carole’s hand was still swollen from the lionfish sting the previous day and she could not use a spear, so Diane Pitassy from the Smithsonian was my dive buddy and helped handle the lionfish. This time we used a wire that went through the gill and out the mouth of the lionfish to carry them instead of the catch bag. The change in strategy worked — no accidental jabs and searing pain this time around. We collected 18 more lionfish in social wrasse habitat, and most of them had social wrasses in their stomachs. We cannot yet make a precise estimate of how much this is influencing the social wrasse.
This is mostly because many of the unidentifiable fish remains that we are getting from some lionfish may also be social wrasses. But we do know the number is high. We also found more signs of social wrasse habitat degradation and destruction. Most dive sites in the inner barrier reef — closer to coastal development — had very poor underwater visibility and a lot of silt in the water. This is probably a result of sand and mud runoff from the islands after the mangrove is cut down. The complex network of mangrove roots serves as a trap for sediments that run off from both the coast and the island, and as the roots are taken out and replaced by artificial beaches, the water quality deteriorates. Large expanses of mangroves were and continue to be cut down to give room to exclusive resorts or large houses. Boat traffic is intense. The social wrasse is being hit from all directions. From the top, humans are destroying mangroves and contributing to habitat degradation; from the bottom, the social wrasse seems to be one of the main components of the diet of the invasive lionfish. We really hope that with this blog and more research papers planned for the near future we can influence decision making in Belize and help protect this, and other, Belizean reef fishes.
Let's be honest, some mammals look bizarre - giant teeth, deadly feet, skin covered in everything from piercing quills to impenetrable plates of bony armor, and six-foot long hands. Despite our differences, all mammals inherited the same basic body features - some type of hair or fur; backbones; and mammary glands, among others. But, clearly, an amazing variety of creatures have evolved over the 200-million-year history of mammals. How could mammals evolve such a wide variety of seemingly strange features? Well, evolution proceeds by acting on the tiniest differences among individuals. For example, a mammal with a distinguishing feature like large front teeth might have an advantage in one particular environment. Over time, as individuals with that trait produce more offspring, all or most members of that species might have large front teeth. Eventually, some might end up with extreme adaptations--like giant tusks. "Check out the ossicones on that giraffe." OK, that may sound a little weird, but it's better than, "Look at the tooth on that narwhal." After a look at some of the more extreme noses among mammals, you might rethink the saying, "It's as plain as the nose on your face." In fact, you might never say that again. Mammals' mouths contain up to four main types of teeth: incisors, canines, premolars, and molars. This basic but incredibly flexible tool kit evolved early in the evolutionary history of mammals. Mammals have large brains for their body size--larger than most members of other vertebrate groups. What's 11 feet tall, 10,000 years old, and wears a skirt? The woolly mammoth...obviously! But our ancestors lost their tails about 18 million years ago. Today, humans retain only the shortest remnants--just a few hidden bones at the base of the pelvis. When you take a look at the reproductive habits of some mammals, you'll find it is sometimes a little more interesting than the standard "birds and the bees."
Earth Sciences: Year In Review 2012

Heat persisted over a vast area of the U.S. in 2012. Across the contiguous states, 2011–12 saw the fourth mildest meteorological winter (December–February) in 118 years of record keeping, with the third smallest snow cover. The unusually warm winter heralded the warmest spring and the third hottest summer since record keeping began. Extraordinary March heating made it the warmest March on record, and the July heat wave that featured temperatures exceeding 38 °C (about 100 °F) across much of the country caused that month to rank as the hottest July, and the hottest month overall, in 118 years. Scant rainfall occurred in June and July in the Midwest, and the Corn Belt measured its third driest June and July. The heat and dryness resulted in a rapid expansion of the drought across the Corn Belt. By late July the Midwestern drought had gripped close to the entire region; it combined with the ongoing dry spell in the West to form the largest extent of drought across the contiguous U.S. observed since December 1956. One common metric of the phenomenon, the Palmer Drought Severity Index, indicated that moderate to extreme drought covered 57% of the contiguous U.S. during July. In its devastation of crops and pasture, the 2012 drought was comparable to a historic 1988 event. Rains, including some that came from Hurricane Isaac, relieved parts of the Midwest in August and September, but they came too late to improve crop prospects materially. Preliminary figures released in October by the U.S. Department of Agriculture placed corn production at 10.7 billion bu, down 13% from 2011 and down 28% from early-season projections. Similarly, forecast soybean production dropped to 2.86 billion bu, down 8% from 2011 and 11% from earlier projections. The value of the corn and soybean losses approximated $24 billion and $8 billion, respectively. The total damage made the 2012 drought the most expensive drought in U.S.
history in nominal dollars. Additional losses were sure to accrue when declines in hay, sorghum, and other crops were considered. Other parts of the Northern Hemisphere measured increased heating, though not as consistently as in the continental U.S. Some locations even broke long-standing low-temperature records. For example, Alaska logged the coldest January on record. Nevertheless, data compiled by the National Climatic Data Center indicated that each month from April to July set records for the warmest Northern Hemisphere land temperatures. In addition, the global combined land-sea temperature for January to September 2012 was the eighth warmest since record keeping began in 1880. Another metric of climate change set a record in 2012 when sea ice coverage in the Arctic Ocean declined to its smallest geographic extent since satellite monitoring began in 1979. The National Snow and Ice Data Center (NSIDC) noted that Arctic ice extent had always varied from year to year according to weather conditions but that there had been an overall decline over the past 33 years. The linear trend of the NSIDC’s August data showed a decline of 10.2% per decade. On Sept. 17, 2012, ice coverage fell to a record 3.41 million sq km (1.32 million sq mi). In contrast, Antarctic sea ice coverage in September, at 3.5% above normal, grew to its largest extent in the 1979–2012 period. The summer of 2012 saw abnormal ice melt in Greenland as well. According to NASA, Greenland’s July surface ice cover melted at an unusually rapid rate, with satellite data showing an estimated 97% of the surficial ice melting at some point during that month. Although that was the greatest ice melt observed in more than 30 years, researchers were not sure that it would affect the overall volume of ice loss that summer. 
Discussions of the relationship between extreme weather and changing climate took on new urgency in 2012, given the number of billion-dollar weather disasters in 2011 and the drought and heat of 2012. Most climatologists believed that climate warming increased the odds for some weather extremes, but they remained reluctant to associate climate change with a specific event. The final 2012 report of the Intergovernmental Panel on Climate Change (IPCC), which considered the links between extreme events and climate change, reported that “in general, single extreme events cannot be simply and directly attributed to anthropogenic climate change,” although the probability for some extremes had changed. A much-discussed paper by NASA’s James Hansen and colleagues argued that extreme anomalies, such as the heat waves in Texas in 2011 and in Moscow in 2010, “were a consequence of global warming,” because their likelihood of occurrence would have been “exceedingly small” without such warming. Hurricane Sandy transitioned into an intense nor’easter “Superstorm” and made landfall in New Jersey on October 29. The storm’s northward track combined with the abnormally warm Gulf Stream waters, a deep upper-level kink in the jet stream, and a blocking high near Newfoundland to create one of the most damaging storms to ever strike the Northeast. High winds combined with storm surge and tidal levels of 2.4–4.3 m (8–14 ft) devastated the coasts of New York and New Jersey, submerging low-lying areas of New York City. The storm cut off power to some 8.5 million customers from Indiana to Maine and caused more than 200 deaths along its path. Preliminary insured U.S. damage estimates ranged from $20 billion–$25 billion, with total economic costs of more than $60 billion, making “Superstorm Sandy” one of the most expensive natural disasters in U.S. history.
Pogonophorans were first classified as a distinct phylum in the middle of the 20th century. The first species, Siboglinum weberi, described in 1914, came from the seas of the Malayan Archipelago; the second species, Lamellisabella zachsi, which came from the Okhotsk Sea, was described in 1933. In 1937 a new class called Pogonophora was established for Lamellisabella. In...
Sciurus niger, the eastern fox squirrel, is the largest tree squirrel in North America. Squirrels are interesting in that their skulls are highly conserved, having changed little since the oldest-known squirrel (Protosciurus) appeared in the late Oligocene. This can be easily seen by comparing Sciurus to Cynomys ludovicianus, the black-tailed prairie dog, Spermophilus columbianus, the Columbian ground squirrel, and S. variegatus, the rock squirrel. For this reason, one could call squirrels 'living fossils'. To date, the lack of significant variation between squirrel species has frustrated efforts to discover their phylogenetic relationships. However, allozyme studies suggest that Sciurus is most closely related to Microsciurus, the neotropical dwarf squirrels, and together they form a clade that is the sister taxon to Tamiasciurus, the red squirrels. CT scanning offers the opportunity to easily acquire information on the internal anatomy of squirrel skulls, and to apply morphometric tools to their study. It is hoped that this will contribute to the eventual resolution of the squirrel family tree.

This specimen was collected in Wadesboro, Anson County, North Carolina, by J. D. Billingsley on December 29, 1955. It was made available to the University of Texas High-Resolution X-ray CT Facility for scanning by Dr. Donald Swiderski of the University of Michigan Museum of Zoology. Funding for scanning was provided by a National Science Foundation Digital Libraries Initiative grant to Dr. Timothy Rowe of The University of Texas at Austin.

Black, C. C. 1963. A review of the North American Tertiary Sciuridae. Bulletin of the Museum of Comparative Zoology 130:109-248.
Emry, R. J., and R. W. Thorington, Jr. 1982. Descriptive and comparative osteology of the oldest fossil squirrel, Protosciurus (Rodentia: Sciuridae). Smithsonian Contributions to Paleobiology 47:1-35.
Emry, R. J., and R. W. Thorington, Jr. 1984. The tree squirrel Sciurus (Sciuridae, Rodentia) as a living fossil; pp. 23-31 in N. Eldredge and S. M. Stanley (eds.), Living Fossils. Springer-Verlag, New York.
Hafner, M. S., L. J. Barkley, and J. M. Chupasko. 1994. Evolutionary genetics of New World tree squirrels (tribe Sciurini). Journal of Mammalogy 75:102-109.
Koprowski, J. L. 1994. Sciurus niger. Mammalian Species 479:1-9.
Roth, V. L. 1996. Cranial integration in the Sciuridae. American Zoologist 36:14-23.

Mammalian Species account of Sciurus niger (American Society of Mammalogists)
The brain of Sciurus carolinensis (Comparative Mammalian Brain Collections website)
Sciurus niger on The Animal Diversity Web (The University of Michigan Museum of Zoology)
Sciurus niger on The Mammals of Texas Online Edition
Half-life (t1/2) is the time required for a quantity to fall to half its value as measured at the beginning of the time period. In physics, it is typically used to describe a property of radioactive decay, but may be used to describe any quantity which follows an exponential decay.

Half-life is used to describe a quantity undergoing exponential decay, and is constant over the lifetime of the decaying quantity. It is a characteristic unit for the exponential decay equation. The term "half-life" may generically be used to refer to any period of time in which a quantity falls by half, even if the decay is not exponential. For a general introduction and description of exponential decay, see exponential decay. For a general introduction and description of non-exponential decay, see rate law. The converse of half-life is doubling time. The table on the right shows the reduction of a quantity in terms of the number of half-lives elapsed.

Probabilistic nature of half-life

A half-life usually describes the decay of discrete entities, such as radioactive atoms, which have unstable nuclei. In that case, it does not work to use the definition "half-life is the time required for exactly half of the entities to decay". For example, if there is just one radioactive atom with a half-life of one second, there will not be "one-half of an atom" left after one second. There will be either zero atoms left or one atom left, depending on whether or not that atom happened to decay. Instead, the half-life is defined in terms of probability. It is the time when the expected value of the number of entities that have decayed is equal to half the original number. For example, one can start with a single radioactive atom, wait its half-life, and then check whether or not it has decayed. Perhaps it did, but perhaps it did not.
But if this experiment is repeated again and again, it will be seen that - on average - it decays within the half-life 50% of the time. In some experiments (such as the synthesis of a superheavy element), there is in fact only one radioactive atom produced at a time, with its lifetime individually measured. In this case, statistical analysis is required to infer the half-life. In other cases, a very large number of identical radioactive atoms decay in the measured time range. In this case, the law of large numbers ensures that the number of atoms that actually decay is approximately equal to the number of atoms that are expected to decay. In other words, with a large enough number of decaying atoms, the probabilistic aspects of the process can be neglected.

There are various simple exercises that demonstrate probabilistic decay, for example involving flipping coins or running a statistical computer program. For example, the image on the right is a simulation of many identical atoms undergoing radioactive decay. Note that after one half-life there are not exactly one-half of the atoms remaining, only approximately, because of the random variation in the process. However, with more atoms (right boxes), the overall decay is smoother and less random-looking than with fewer atoms (left boxes), in accordance with the law of large numbers.

Formulas for half-life in exponential decay

An exponential decay process can be described by any of the following three equivalent formulas:

N(t) = N0 (1/2)^(t / t1/2) = N0 e^(-t / τ) = N0 e^(-λt)

where
- N0 is the initial quantity of the substance that will decay (this quantity may be measured in grams, moles, number of atoms, etc.),
- N(t) is the quantity that still remains and has not yet decayed after a time t,
- t1/2 is the half-life of the decaying quantity,
- τ is a positive number called the mean lifetime of the decaying quantity,
- λ is a positive number called the decay constant of the decaying quantity.
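A quick numerical check confirms that the three forms give identical values once t1/2, τ, and λ are related through ln(2). The function name is mine, and carbon-14 is used as the worked example:

```python
import math

def remaining(n0, t, half_life):
    """Evaluate the three equivalent decay formulas and check they agree."""
    tau = half_life / math.log(2)   # mean lifetime tau
    lam = math.log(2) / half_life   # decay constant lambda
    a = n0 * 0.5 ** (t / half_life)
    b = n0 * math.exp(-t / tau)
    c = n0 * math.exp(-lam * t)
    # All three formulas describe the same decay (up to float rounding).
    assert abs(a - b) < 1e-9 * n0 and abs(a - c) < 1e-9 * n0
    return a

# Carbon-14 (half-life 5730 years): half remains after one half-life,
# a quarter after two.
print(remaining(1000, 5730, 5730))    # 500.0
print(remaining(1000, 11460, 5730))   # 250.0
```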
The three parameters t1/2, τ, and λ are all directly related in the following way:

t1/2 = ln(2) τ = ln(2) / λ

where ln(2) is the natural logarithm of 2 (approximately 0.693).

A detailed derivation of the relationship between half-life, decay time, and decay constant: start with the three equations

N(t) = N0 (1/2)^(t / t1/2) = N0 e^(-t / τ) = N0 e^(-λt)

We want to find a relationship between t1/2, τ, and λ, such that these three equations describe exactly the same exponential decay process. Comparing the equations, we find the following condition:

(1/2)^(t / t1/2) = e^(-t / τ) = e^(-λt)

Next, we'll take the natural logarithm of each of these quantities. Using the properties of logarithms, this simplifies to the following:

(t / t1/2) ln(1/2) = (-t / τ) ln(e) = (-λt) ln(e)

Since the natural logarithm of e is 1, we get:

(t / t1/2) ln(1/2) = -t / τ = -λt

Canceling the factor of t and plugging in ln(1/2) = -ln(2), the eventual result is:

t1/2 = ln(2) τ = ln(2) / λ

By plugging in and manipulating these relationships, we get all of the following equivalent descriptions of exponential decay, in terms of the half-life:

N(t) = N0 (1/2)^(t / t1/2) = N0 2^(-t / t1/2) = N0 e^(-t ln(2) / t1/2)

Regardless of how it's written, we can plug into the formula to get
- N(0) = N0, as expected (this is the definition of "initial quantity"),
- N(t1/2) = N0 / 2, as expected (this is the definition of half-life),
- N(t) → 0 as t → ∞, i.e. the amount approaches zero as t approaches infinity, as expected (the longer we wait, the less remains).

Decay by two or more processes

Some quantities decay by two exponential-decay processes simultaneously. In this case, the actual half-life T1/2 can be related to the half-lives t1 and t2 that the quantity would have if each of the decay processes acted in isolation:

1 / T1/2 = 1 / t1 + 1 / t2

For three or more processes, the analogous formula is:

1 / T1/2 = 1 / t1 + 1 / t2 + 1 / t3 + ...

For a proof of these formulas, see Decay by two or more processes.

There is a half-life describing any exponential-decay process. For example:
- The current flowing through an RC circuit or RL circuit decays with a half-life of ln(2) RC or ln(2) L/R, respectively. For this example, the term half time might be used instead of "half life", but they mean the same thing.
- In a first-order chemical reaction, the half-life of the reactant is t1/2 = ln(2) / λ, where λ is the reaction rate constant.
- In radioactive decay, the half-life is the length of time after which there is a 50% chance that an atom will have undergone nuclear decay. It varies depending on the atom type and isotope, and is usually determined experimentally. See List of nuclides.
- In chemistry, the half-life of a species is the time it takes for the concentration of the substance to fall to half of its initial value.

Half-life in non-exponential decay

The decay of many physical quantities is not exponential—for example, the evaporation of water from a puddle, or (often) the chemical reaction of a molecule. In such cases, the half-life is defined the same way as before: as the time elapsed before half of the original quantity has decayed. However, unlike in an exponential decay, the half-life depends on the initial quantity, and the prospective half-life will change over time as the quantity decays.

As an example, the radioactive decay of carbon-14 is exponential with a half-life of 5730 years. A quantity of carbon-14 will decay to half of its original amount (on average) after 5730 years, regardless of how big or small the original quantity was. After another 5730 years, one-quarter of the original will remain. On the other hand, the time it will take a puddle to half-evaporate depends on how deep the puddle is. Perhaps a puddle of a certain size will evaporate down to half its original volume in one day. But on the second day, there is no reason to expect that one-quarter of the puddle will remain; in fact, it will probably be much less than that. This is an example where the half-life reduces as time goes on. (In other non-exponential decays, it can increase instead.)

The decay of a mixture of two or more materials which each decay exponentially, but with different half-lives, is not exponential. Mathematically, the sum of two exponential functions is not a single exponential function.
A common example of such a situation is the waste of nuclear power stations, which is a mix of substances with vastly different half-lives. Consider a sample containing a rapidly decaying element A, with a half-life of 1 second, and a slowly decaying element B, with a half-life of one year. After a few seconds, almost all atoms of element A have decayed after repeated halving of the initial total number of atoms; but very few of the atoms of element B will have decayed yet, as only a tiny fraction of a half-life has elapsed. Thus, the mixture taken as a whole does not decay by halves.

Half-life in biology and pharmacology

A biological half-life or elimination half-life is the time it takes for a substance (drug, radioactive nuclide, or other) to lose one-half of its pharmacologic, physiologic, or radiological activity. In a medical context, the half-life may also describe the time that it takes for the concentration in blood plasma of a substance to reach one-half of its steady-state value (the "plasma half-life").

While a radioactive isotope decays almost perfectly according to so-called "first order kinetics" where the rate constant is a fixed number, the elimination of a substance from a living organism usually follows more complex chemical kinetics. For example, the biological half-life of water in a human being is about 7 to 14 days, though this can be altered by behavior. The biological half-life of cesium in human beings is between one and four months. This can be shortened by feeding the person Prussian blue, which acts as a solid ion exchanger that absorbs the cesium while releasing potassium ions in their place.
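The fast-plus-slow mixture described above can be checked numerically. This is a sketch with made-up initial amounts; it shows the total falling by a quarter of its starting value in the first second (as element A halves) and then barely changing, i.e. the mixture as a whole does not decay by halves:

```python
# Mixture of element A (half-life 1 s) and element B (half-life 1 year):
# the sum of two exponentials is not itself exponential.

def remaining(n0, t, half_life):
    """Exponential decay: quantity left after time t (same units as half_life)."""
    return n0 * 0.5 ** (t / half_life)

YEAR = 365.25 * 24 * 3600  # one year in seconds
a0 = b0 = 1000.0           # illustrative initial amounts

for t in (1, 10, 60):      # seconds elapsed
    total = remaining(a0, t, 1.0) + remaining(b0, t, YEAR)
    # Total drops from 2000 to ~1500 in the first second, then stays
    # near 1000 once A is gone, instead of halving again and again.
    print(t, round(total, 1))
```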
The proposed mechanism for increased amounts of the sun's geomagnetic activity having an impact on the Earth's climate is quite complicated, since there are many ways in which the sun can affect the climate. One of the most famous is the reduction in cosmic rays reaching Earth. The reduction in cosmic rays creates fewer low clouds, and since low clouds have an overall cooling impact, a decrease in these clouds would lead to a warming impact. Why does increased geomagnetic activity create lower amounts of cosmic rays? Because higher amounts of solar wind prevent the cosmic rays from reaching Earth. Indeed, there are strong correlations between temperature and various solar output variables.

Georgieva et al. 2005

Georgieva et al. 2005 used the geomagnetic AA Index to quantify the solar impact on climate change, rather than the sunspot number, because using the sunspot number to quantify the solar contribution to climate change, as many studies do, leads to an underestimation of the solar impact. The above figure from Georgieva et al. shows the geomagnetic AA Index with the broken line, and the global temperature anomalies with the solid line. They find that the correlation coefficient between the AA Index and global temperatures is 0.85, which, squared, implies that the AA Index can account for roughly 72% of the variance in temperatures over the last ~150 years.

Cliver et al. 1998

Cliver et al. 1998 also used the geomagnetic AA Index to estimate the solar contribution to climate change. Above figure: from Cliver et al. 1998; the AA Index is the dotted line, and the solid line is the temperature anomalies. They found that 50-100% of the warming could be due to the sun, but it should be noted that this analysis does not include other factors like volcanic activity and anthropogenic greenhouse gas emissions when estimating the total contribution.
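One recurring point of confusion when reading these correlation figures: a correlation coefficient r corresponds to a variance fraction of r squared, not r itself. A minimal check (the helper name is mine):

```python
# A correlation coefficient r between a solar index and temperature
# means a linear fit on that index accounts for a fraction r**2 of the
# temperature variance; r itself is not a percentage of variance.
def variance_explained(r):
    return r ** 2

print(variance_explained(0.85))  # roughly 0.72, i.e. about 72% of the variance
```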
Nonetheless, this study also shows that other studies which do include these factors sit only at the lower end of the 50-100% range for the solar contribution over the last 100-150 years. It also supports other studies with a larger solar contribution to climate change because of the remarkable correlation between the AA Index and temperatures.

Scafetta and West 2008

Scafetta and West 2008 addresses the uncertainty raised in the first paper. If a TSI curve that shows an upward trend from Solar Cycle 21 to 22 is used from the ACRIM TSI composite rather than the flat PMOD TSI composite, then a higher contribution from the sun would be needed. The authors find that up to 69% of the variance in temperatures can be explained by solar activity. The image above from Scafetta and West 2008 shows the divergence between the PMOD and ACRIM TSI datasets, which makes attribution of past climate change even harder. The red curve is the ACRIM TSI composite, the blue curve is the PMOD TSI composite, and the black curve and green line are the global temperature anomalies.

Scafetta and West 2007

The ACRIM versus PMOD controversy continues in this paper. 50% or more of the warming can be attributed to the solar forcing, depending on whether the ACRIM TSI composite is used. This further adds to resolving the uncertainty between the PMOD and ACRIM datasets during the ACRIM Gap. The graph above from Scafetta and West 2007 shows the excellent correlation between solar activity and temperatures. It also shows that a large portion of the warming can be attributed to solar activity. Over the last 30 years, a significant portion of the warming can be attributed to solar activity if the ACRIM TSI composite is used.

Ogurtsov 2007

Ogurtsov 2007 estimated that the solar contribution directly and indirectly caused about 0.25-0.35 °C of the warming that took place during the 20th century.
Using the Skeptical Science trend calculator gives an approximate warming of 0.6 °C during the 20th century. This means that 41-59% of the upward trend can be attributed to solar activity over the past 100 years.

Blanter et al. 2008

Blanter et al. 2008 found that temperatures correlated remarkably well for all periods between the solar activity indices and the observed temperatures for stations in Europe and the United States during the 20th century. They used a finding from a previous study that the temperatures at weather stations correlated remarkably well if they were up to 1000 km distant from each other. They also state in the abstract that these changes can "possibly" be extended onto a global scale.

The figure from Dorman 2012 combines the global temperature anomalies with the cosmic ray flux (CRF) from 1937-1994. There is a very good correspondence between the two variables, suggesting that cosmic rays (modulated by solar activity) play a large and dominant role in current climate change.

So there appears to be a very strong relationship between various solar parameters and temperatures over the last 100 years! Does this imply that cosmic rays are the cause of climate change? Not quite, since correlation does not imply causation, but there is a large range of evidence suggesting that cosmic rays have a large impact on atmospheric parameters.

Yuri Stozhkov (who was also one of the authors of the CERN paper) and colleagues found that during large Forbush decreases, precipitation decreases are observed, suggesting a sudden decrease in cloud cover as the cause.

Dragic et al. 2011 found that for Forbush decreases exceeding a GCR decrease of 7%, a noticeable increase in the diurnal temperature range (DTR) was observed. This can only be explained through cloud cover decreases, since clouds reduce the DTR, and a decrease in cloud cover would create an increase in the DTR.
Kniveton and Todd 2001

This paper evaluates whether there is empirical evidence to support the hypothesis that solar variability is linked to the Earth's climate through the modulation of atmospheric precipitation processes. Using global data from 1979–1999, we find evidence of a statistically strong relationship between cosmic ray flux (CRF), precipitation (P) and precipitation efficiency (PE) over ocean surfaces at mid to high latitudes. Both P and PE are shown to vary by 7–9% during the solar cycle of the 1980s over the latitude band 45–90°S. Alternative explanations of the variation in these atmospheric parameters by changes in tropospheric aerosol content and ENSO show poorer statistical relationships with P and PE. Variations in P and PE potentially caused by changes in CRF have implications for the understanding of cloud and water vapour feedbacks. (Kniveton and Todd 2001)

Svensmark et al. 2009

Svensmark et al. 2009 used 17 Forbush Decreases (large and sudden decreases in Cosmic Rays after a Coronal Mass Ejection) after 1998 (when AERONET started) that exceeded a 7% decrease, and compared these changes in GCRs to corresponding changes in aerosol particles. Aerosols are the "seeds" of cloud formation: without them, water vapour droplets would have no physical substance to condense onto to form a cloud. Svensmark et al. found that for each FD event analyzed, a sudden decrease in aerosols was also observed. This indicates a significant GCR impact on atmospheric composition.

Shaviv 2005

Whether or not Cosmic Rays are the driver (though there is much evidence for a GCR-climate connection), an amplifying mechanism is needed for the sun-climate connection. Over the 11-year solar cycle, small changes in the total solar irradiance (TSI) give rise to small variations in the global energy budget.
It was suggested, however, that different mechanisms could amplify solar activity variations to give large climatic effects, a possibility which is still a subject of debate. With this in mind, we use the oceans as a calorimeter to measure the radiative forcing variations associated with the solar cycle. This is achieved through the study of three independent records: the net heat flux into the oceans over 5 decades, the sea level change rate based on tide gauge records over the 20th century, and the sea surface temperature variations. Each of the records can be used to consistently derive the same oceanic heat flux. We find that the total radiative forcing associated with solar cycle variations is about 5 to 7 times larger than that associated with the TSI variations alone, thus implying the necessary existence of an amplification mechanism, though without pointing to which one.

There are many more papers than these documenting a strong solar influence on the climate; I just wanted to give a snapshot of the various pieces of evidence for solar-driven climate change floating around in the scientific literature.
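Shaviv's 5-to-7-times amplification claim can be sanity-checked with a back-of-envelope calculation. The sketch below is not from the paper: the solar-cycle TSI swing (~1 W/m²) and planetary albedo (~0.3) are typical textbook values assumed for illustration.

```python
# Back-of-envelope check of the amplification factor discussed above.
# Assumed inputs (not from Shaviv 2005): solar-cycle TSI swing ~1 W/m^2,
# planetary albedo ~0.3.
delta_tsi = 1.0   # W/m^2, peak-to-trough TSI change over a solar cycle
albedo = 0.3      # Earth's mean albedo

# TSI-only radiative forcing: divide by 4 (sphere intercepts sunlight as
# a disc) and discount the fraction reflected back to space.
f_tsi = delta_tsi * (1 - albedo) / 4.0

# The oceanic heat-flux estimate implies a total forcing 5-7x larger.
f_total_low, f_total_high = 5 * f_tsi, 7 * f_tsi
print(f"TSI-only forcing: {f_tsi:.3f} W/m^2")
print(f"Implied total forcing: {f_total_low:.2f}-{f_total_high:.2f} W/m^2")
```

With these assumed inputs the TSI-only forcing comes out near 0.18 W/m², so the implied total forcing is roughly 0.9-1.2 W/m², which is why an amplification mechanism is needed to explain the observed ocean heat flux.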
fluid mechanics, branch of mechanics dealing with the properties and behavior of fluids, i.e., liquids and gases. Because of their ability to flow, liquids and gases have many properties in common not shared by solids. The special study of fluids in motion, or fluid dynamics, makes up the larger part of fluid mechanics. Branches of fluid dynamics include hydrodynamics (study of liquids in motion) and aerodynamics (study of gases in motion). Hydrodynamics is often used synonymously with fluid dynamics, since most of the results from the study of liquids also apply to gases. A plasma is also a fluid (see states of matter) and can be described by many of the principles of fluid mechanics, but its electromagnetic properties must also be taken into account. The study of plasmas in motion is known as magnetohydrodynamics and includes principles from several fields. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Learn math by playing with it!

A Mathematician's Lament - how math is being taught all wrong: Why is math boring for so many students?...

When zombies attack - a mathematical model: This article uses math to predict what will happen if there is a zombie outbreak....

Making math accessible for the blind: The abacus is a useful tool for helping blind people to learn math....

Author: Murray Bourne | Page last modified: 16 December 2010
1. Gravity is an invisible force that occurs between two objects.
2. The reason things stay on the Earth's surface is the gravitational pull toward the Earth's center.
3. Gravity is also the reason the Earth orbits the sun.
4. The bigger an object's mass, the more gravity it exerts; the smaller its mass, the less gravity.
5. Another thing that affects how strongly gravity pulls between two objects is the distance between them. The closer the two objects are, the stronger the gravitational pull.
6. The absence of gravity, as astronauts experience in orbit, can cause health problems such as bone loss, muscle atrophy, and fluid shifts.
7. Gravity guides the growth of plants and other vegetation.
8. Gravity makes stars burn by squeezing their matter together.
9. Black holes have the strongest gravitational pull in the entire Universe.
10. Sir Isaac Newton described gravity about 300 years ago. The story is that Newton saw an apple fall out of a tree. When this happened he realized there was a force that made it occur, and he called it gravity.
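Points 4 and 5 in the list above are captured by Newton's law of universal gravitation, F = G·m1·m2/r². A small illustrative calculation (the 70 kg person is an arbitrary example, not from the original list):

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2.
# More mass -> more force; more distance -> less force.
G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def gravitational_force(m1_kg, m2_kg, r_m):
    """Force in newtons between two point masses r metres apart."""
    return G * m1_kg * m2_kg / r_m**2

# A 70 kg person standing on the Earth's surface:
earth_mass = 5.972e24    # kg
earth_radius = 6.371e6   # m
weight = gravitational_force(70, earth_mass, earth_radius)
print(f"{weight:.0f} N")  # roughly 687 N, i.e. about 70 kg x 9.8 m/s^2
```

Doubling the separation r quarters the force, which is the distance effect described in point 5.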
Oh, to catch a rainbow. Well, it's been done for the first time ever – and with just a simple lens and a plate of glass at that. The technique could be used to store information using light, a boon for optical computing and telecommunications. All-optical computing devices promise to be faster and more efficient than current technology, but they suffer from the drawback that signals have to be converted back and forth from optical to electrical. The ability to "slow" light to a crawl or even trap it helps, as information in the light can then be manipulated directly. In 2007, Ortwin Hess of the University of Surrey in Guildford, UK, and colleagues proposed a technique to trap light inside a tapering waveguide, which is a structure that guides light waves down its length. The waveguide in question would use metamaterials – exotic materials that can bend light sharply. The idea is that as the waveguide tapers, the components of the light are made to stop in turn at ever narrower points. That's because any given component of the light cannot pass through an opening that's smaller than its wavelength. This leads to a "trapped rainbow". While numerical models showed that such waveguides would work in theory, making them out of metamaterials remained a distant dream. Now Vera Smolyaninova of Towson University in Baltimore, Maryland, and colleagues have used a convex lens to create the tapered waveguide and trap a rainbow of light. They coated one side of a 4.5-millimetre-diameter lens with a gold film 30 nanometres thick, and laid the lens – gold-side down – on a flat glass slide which was also coated with a film of gold. Viewed side-on, the space between the curved lens and the flat slide was a layer of air that narrowed to zero thickness where the lens touched the slide – essentially a tapered waveguide. When they shone a multi-wavelength laser beam at the open end of the gilded waveguide, a trapped rainbow formed inside.
This could be seen as a series of coloured rings when the lens was viewed from above with a microscope: the visible light leaked through the thin gold film. Shorter-wavelength green light was trapped at a point where the taper became too thin for it to penetrate the waveguide. Longer-wavelength red light was trapped further out, where the taper was thicker, with intermediate wavelengths in between (www.arxiv.org/abs/0911.4464). "I think it's beautiful that we can create such complex phenomena using a very, very simple configuration," says Smolyaninova. "It's amazing." Hess agrees. He is delighted to see his theoretical prediction validated and impressed by the simplicity of the experiment. Setting the lens on the slide, he says, "is a very, very elegant way of tapering".

Thu Nov 26 18:23:21 GMT 2009 by Rushnerd
Ronnie James Dio will be thrilled to hear this.

Thu Nov 26 20:47:36 GMT 2009 by Simon
Yeah, should brighten him up after the whole stomach cancer diagnosis thing :S ...fantastic stuff though

Wed Dec 02 23:12:20 GMT 2009 by Dennis
Wow that's pretty brilliant - heads up - you could see a full rainbow circle if you were in an airplane flying near a rainbow, since the refraction of light would occur above and below you, revealing a full circle rainbow.

Thu Nov 26 22:47:10 GMT 2009 by dogigniter
hehe, only if its dark :)

Fri Nov 27 08:15:46 GMT 2009 by Dancaban
Sad to hear, one of the few rock frontmen I haven't seen.
Thu Nov 26 19:18:04 GMT 2009 by John Lambert
This is a case where a clear diagram would be far more valuable than a photo. The description and photo together still leave me pondering the exact arrangement.

Fri Nov 27 04:36:37 GMT 2009 by david m
My thoughts exactly. e.g. where was the light trajectory?

Thu Nov 26 19:38:43 GMT 2009 by Graham
What is the difference between this and the phenomenon that causes Newton's Rings? As I remember it, Newton's rings is an effect where, when a lens is placed on a glass plate, concentric rings are seen. The rings are formed by interference between light reflected from the flat and curved surfaces. This article seems to describe exactly the same thing.

Fri Nov 27 09:19:10 GMT 2009 by Steve
Absolutely. I can see nothing new in this at all. This is exactly Sir Isaac's setup, and here there are photos of the rainbow produced by white light illuminating exactly this setup (lens on plane). Surely all these folk have done is to enhance the reflectivity by gilding the lily. I'll have a look in 'Optiks' and see if the old chap hadn't tried that as well - he silvered anything that didn't move. And why can't NS put a stop to this blasted spamming from the christmas shop - it has been going on for ages and appears everywhere.

Fri Nov 27 09:20:27 GMT 2009 by Vin
yes now you mention it, it does look like the same set up, so i am similarly confused. Although there is the difference of the gold 30 nanometre layer. Maybe that makes all the difference?

Fri Nov 27 14:33:42 GMT 2009 by Jeff Hecht
The trapped rainbow pattern does look the same as Newton's Rings, but it's produced differently. Newton's rings are produced by light passing down through a lens sitting on a flat glass surface. Light that has passed through the lens is reflected from the flat glass back into the lens, causing interference effects.
The trapped rainbow is produced by light entering from the side, between the curved lens and the flat glass, when the two surfaces are covered by thin gold films. The two processes are different, but produce effects that look the same.

Fri Nov 27 18:52:23 GMT 2009 by Agent420
I'm not sure that it makes a difference which direction the light comes from, the frequencies of light are still separated.

Sat Nov 28 17:01:21 GMT 2009 by Vin
Thanks for clarifying that, but now im wondering, if the light is trapped, how can we see it? I mean if we can see it then something is stimulating our eyes' light receptors? Shouldn't it be like a black hole?
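The cutoff behaviour the article describes (each wavelength stops where the taper becomes too narrow for it) can be sketched with a toy model. This is not the authors' analysis: it assumes the air gap under a spherical lens grows as h(r) = r²/(2R), that a wavelength is trapped where the gap narrows to half a wavelength, and that the radius of curvature R takes a hypothetical value, since the article does not quote one.

```python
import math

# Toy model of the tapered waveguide (not the authors' calculation).
# Near the contact point, the air gap under a spherical lens of curvature
# radius R is approximately h(r) = r^2 / (2R). Assume a wavelength stops
# where the gap narrows to lambda/2, i.e. h(r) = lam / 2.
R = 0.010  # m, HYPOTHETICAL radius of curvature of the lens surface

def trap_radius_m(wavelength_m):
    """Radius from the contact point where the gap equals half a wavelength.

    Solving r^2 / (2R) = lam / 2 gives r = sqrt(R * lam).
    """
    return math.sqrt(R * wavelength_m)

for name, lam in [("blue", 450e-9), ("green", 532e-9), ("red", 650e-9)]:
    print(f"{name:5s} light stops ~{trap_radius_m(lam) * 1e6:.0f} um out")
```

Whatever R actually is, the model reproduces the qualitative picture in the article: shorter wavelengths are trapped closer to the contact point and longer wavelengths further out, giving concentric coloured rings.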
Element 106 finally has a name, twenty years after its discovery. The name - seaborgium (or Sg) - was announced on Sunday at a meeting of the American Chemical Society in San Diego. It was chosen in honour of Nobel laureate Glenn Seaborg, co-discoverer of plutonium and nine other 'transuranic elements': americium, curium, berkelium, californium, einsteinium, fermium, mendelevium, nobelium, and now seaborgium. All are more massive than uranium, which is the heaviest naturally-occurring element. Seaborgium can be artificially created in particle accelerators, and has a half-life of less than a second. It was discovered in 1974 by researchers at Lawrence Berkeley Laboratory and Lawrence Livermore National Laboratory, but could not be officially named until the discovery was confirmed. Two teams finally confirmed it last summer. 'This is an extraordinary honour for me,' said Seaborg on hearing the news.
The same material that formed the first primitive transistors more than 60 years ago can apparently be modified in a new way to advance future electronics. French scientists have combined two materials with advantageous electronic properties - graphene and molybdenite - into a flash memory prototype that offers significant potential in terms of performance, size, flexibility and energy consumption. Scientists say they've taken a big step toward the creation of two-dimensional electronics by combining a conductor and an insulator in layers just an atom thick. Engineers at Duke University have layered atom-thick lattices of carbon with polymers to create unique materials with a wide range of uses, including artificial muscles. It’s probably been a while since you’ve thought about the Fukushima nuclear disaster that rocked Japan, and international headlines, in 2011. For the first time, semiconductors have been produced from graphene - a potential revolution for the electronics market. The Norwegian developers say products could be on the market in as little as five years. IBM scientists recently managed to differentiate the chemical bonds in individual molecules - for the first time - using a technique known as noncontact atomic force microscopy (AFM). Elon Musk has said the big breakthrough in electric vehicle energy may arrive with improved supercapacitors, not batteries. Nickel-iron batteries, a rechargeable technology developed by Thomas Edison more than a century ago, have been largely out of favor since the 1970s - until now. A team from the University of Exeter says it's discovered the most transparent, lightweight and flexible material ever for conducting electricity. Scientists and engineers at the University of Wisconsin-Milwaukee have discovered a completely new carbon-based material, synthesized from graphene, which could mark a big step towards faster electronics.
One of the first creators of graphene, Professor Sir Andre Geim, has found a new use for the wonder material - distilling alcohol. A team at the Technische Universitaet Muenchen says it's built the foundation for devices to communicate directly with the human brain. It sometimes seems as if there isn't anything that can't be done better with graphene. Now, researchers at Rensselaer Polytechnic Institute say that the stuff can outperform leading commercial gas sensors in detecting potentially dangerous and explosive chemicals. You can make graphene out of almost anything. Well, theoretically, anyways. And if you make it out of a box of Girl Scout Cookies, they could be worth $15 billion. 3D movies could be downloaded to a smartphone in seconds with a new technology developed at the University of California, Berkeley. A team at Vanderbilt University has found a way of using graphene to create windshields that don't need wipers. IBM seems to have concluded that graphene won't be replacing silicon inside CPUs anytime soon. University of Manchester scientists have created a new substance with thousands of potential applications, from a replacement for Teflon to electronic devices. Physicists at the University of California have taken a major step towards developing a "spin computer" by successfully tunneling "spin injection" into graphene.
Study addresses questions and concerns related to limited sand resources along the Louisiana shelf and their implications to long-term relative sea-level rise and storm impacts, using newly acquired geophysical and vibracore data. USGS project to understand coastal evolution and modern beach behavior; to identify and model the physical processes affecting coastal ocean circulation and sediment transport; and to identify sediment sources and construct a regional sediment budget. Topics in Coastal and Marine Sciences provides background science materials, definitions, and links to give a common context for users from a variety of backgrounds. Coastal erosion was chosen as the first topic. Home page for Coastal and Marine Geology with links to topics of interest (sea level change, erosion, corals, pollution, sonar mapping, and others), Sound Waves monthly newsletter, field centers, regions of interest, and subject search system. Interactive map server to view and create maps using available coastal and marine geology data sets of offshore and coastal U.S. and the Gulf of Mexico. Links to available data and metadata that can be downloaded.
Permitted Context: %Body.Content
Content Model: %text

HTML defines six levels of headings. A heading element implies all the font changes, paragraph breaks before and after, and any white space necessary to render the heading. The heading elements are H1, H2, H3, H4, H5, and H6, with H1 being the highest (or most important) level and H6 the least important. For example:

<H1>This is a top level heading</H1>
Here is some text.
<H2>Second level heading</H2>
Here is some more text.

Use the DIV element together with header elements when you want to make the hierarchical structure of a document explicit. This is needed as header elements themselves only contain the text of the header, and do not imply any structural division of documents into sections. Header elements have the same content model as paragraphs, that is text and character level markup, such as character emphasis, inline images, form fields and math.

Headers play a related role to lists in structuring documents, and it is common to number headers or to include a graphic that acts like a bullet in lists. HTML 3.0 recognizes this with attributes that assist with numbering headers and allow authors to specify a custom graphic. The numbering style is controlled by the style sheet, e.g.

- The style sheet specifies whether headers are numbered, and which style is used to render the current sequence number, e.g. arabic, upper alpha, lower alpha, upper roman, lower roman or a numbering scheme appropriate to the current language.
- Whether the parent numbering is inherited, e.g. "5.1.d" where 5 is the current sequence number for H1 headers, 1 is the number for H2 headers and 4 (rendered as "d" in lower alpha style) for H3 headers.

The seqnum and skip attributes can be used to override the default treatment of header sequence numbers, and provide for a continuity with numbered lists. The dingbat or src attribute may be used to specify a bullet-like graphic to be placed adjacent to the header. The positioning of this graphic is controlled by the style sheet.
The graphic is for decorative purposes only and is silently ignored by non-graphical HTML user agents.

User agents are free to wrap lines at whitespace characters so as to ensure lines fit within the current window size. Use the &nbsp; entity for the non-breaking space character when you want to make sure that a line isn't broken! Alternatively, use the NOWRAP attribute to disable word wrapping and the <BR> element to force line breaks where desired. Netscape includes two tags: <NOBR>...</NOBR>, and <WBR>. The former turns off wordwrapping between the start and end NOBR tag, while WBR is for the rare case when you want to specify where to break the line if needed. Should HTML 3.0 provide an equivalent mechanism to WBR (either a tag or an entity)?

- ID: An SGML identifier used as the target for hypertext links or for naming particular elements in associated style sheets. Identifiers are NAME tokens and must be unique within the scope of the document.
- LANG: This is one of the ISO standard language abbreviations, e.g. "en.uk" for the variation of English spoken in the United Kingdom. It can be used by parsers to select language specific choices for quotation marks, ligatures and hyphenation rules etc. The language attribute is composed from the two letter language code from ISO 639, optionally followed by a period and a two letter country code from ISO 3166.
- CLASS: This is a space separated list of SGML NAME tokens and is used to subclass tag names. For instance, <H2 CLASS=Section> defines a level 2 header that acts as a section header. By convention, the class names are interpreted hierarchically, with the most general class on the left and the most specific on the right, where classes are separated by a period.
The CLASS attribute is most commonly used to attach a different style to some element, but it is recommended that where practical class names should be picked on the basis of the element's semantics, as this will permit other uses, such as restricting search through documents by matching on element class names. The conventions for choosing class names are outside the scope of this specification.

- ALIGN: Headings are usually rendered flush left. The ALIGN attribute can be used to explicitly specify the horizontal alignment:
- align=left: The heading is rendered flush left (the default).
- align=center: The heading is centered.
- align=right: The heading is rendered flush right.
- align=justify: Heading lines are justified where practical, otherwise this gives the same effect as the default.

<h1 align=center>This is a centered heading</H1>
Here is some text.
<H2 align=right>and this is a flush right heading</H2>
Here is some more text.

- CLEAR: This attribute is common to all block-like elements. When text flows around a figure or table in the margin, you sometimes want to start an element like a header, paragraph or list below the figure rather than alongside it. The CLEAR attribute allows you to move down:
- clear=left: move down until the left margin is clear
- clear=right: move down until the right margin is clear
- clear=all: move down until both margins are clear

Alternatively, you can decide to place the element alongside the figure just so long as there is enough room. The minimum width needed is specified as:
- clear="40 en": move down until there is at least 40 en units free
- clear="100 pixels": move down until there are at least 100 pixels free

The style sheet (or browser defaults) may provide default minimum widths for each class of block-like elements.

- SEQNUM: A sequence number is associated with each level of header from the top level (H1) to the bottom level (H6). This attribute is used to set the sequence number associated with the header level of the current element to a given number, e.g. SEQNUM=10.
Normally, the sequence number is initialized to 1 at the beginning of the document and incremented after each header element. It is reset to 1 by any header element of a higher level, e.g. an H1 header resets the sequence numbers for H2 to H6. The style of header numbering is controlled by the style sheet.

- SKIP: Increments the sequence number before rendering the element. It is used when headers have been left out of the sequence. For instance, SKIP=3 advances the sequence number past 3 omitted items.
- DINGBAT: Specifies an iconic image to appear preceding the header. The icon is specified as an entity name. A list of standard icon entity names for HTML 3.0 is given in an appendix of this specification.
- SRC: Specifies an image to appear preceding the header. The image is specified as a URI. This attribute may appear together with the MD attribute.
- MD: Specifies a message digest or cryptographic checksum for the associated graphic specified by the SRC attribute. It is used when you want to be sure that a linked object is indeed the same one that the author intended, and hasn't been modified in any way. For instance, MD="md5:jV2OfH+nnXHU8bnkPAad/mSQlTDZ", which specifies an MD5 checksum encoded as a base64 character string. The MD attribute is generally allowed for all elements which support URI based links.
- NOWRAP: The NOWRAP attribute is used when you don't want the browser to automatically wrap lines. You can then explicitly specify line breaks in headings using the BR element. For example:

<h1 nowrap>This heading has wordwrap turned off<br>
and the BR element is used for explicit line breaks</H1>
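The DIV-plus-heading structuring described above can be sketched as follows. This is an illustrative fragment in HTML 3.0 syntax; the class names and text are made up for the example:

```html
<!-- Hypothetical section structure: DIV makes each division explicit,
     while the headers carry only the heading text. -->
<DIV CLASS=chapter>
  <H1 ID=intro>Introduction</H1>
  Here is some introductory text.
  <DIV CLASS=chapter.section>
    <H2>First Section</H2>
    Here is the body of the first section.
  </DIV>
  <DIV CLASS=chapter.section>
    <H2 SKIP=1>Third Section</H2>
    The SKIP attribute advances the sequence number past the omitted
    second section.
  </DIV>
</DIV>
```

Note how the hierarchical class name chapter.section follows the left-general, right-specific convention described under the CLASS attribute.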
Image of Europa taken by Galileo. Click on image for full size Courtesy of NASA Galileo is Still Going Strong! News story originally written on January 7, 2000 The spacecraft Galileo is almost done with its mission! Galileo has been flying for ten years. It studied the planet Jupiter and its moons. Galileo just went to the moon Europa. It took pictures that show there may be water there! Scientists don't know for sure, but if water is there, then there may also be life. Scientists don't know what to do with Galileo once it is done with its mission. It is still working well, so they may keep using it to study another planet or moon. You might also be interested in: Galileo was a spacecraft that orbited Jupiter for eight years. It made many discoveries about Jupiter and its moons. Galileo was launched in 1989, and reached Jupiter in 1995. The spacecraft had two parts....more It was another exciting and frustrating year for the space science program. It seemed that every step forward led to one backwards. Either way, NASA led the way to a great century of discovery. Unfortunately,...more The Space Shuttle Discovery lifted off from Kennedy Space Center on October 29th at 2:19 p.m. EST. The sky was clear and the weather was great. This was the America's 123rd manned space mission. A huge...more Scientists found a satellite orbiting the asteroid, Eugenia. This is the second one ever! A special telescope allows scientists to look through Earth's atmosphere. The first satellite found was Dactyl....more The United States wants Russia to put the service module in orbit! The module is part of the International Space Station. It was supposed to be in space over 2 years ago.
Russia just sent supplies to the...more A coronal mass ejection (CME) happened on the Sun last month. The material that was thrown out from this explosion passed the ACE spacecraft. ACE measured some exciting things as the CME material passed...more Trees and plants are a very important part of this Earth. Trees and plants are nature's air conditioning because they help keep our Earth cool. On a summer day, walking bare-foot on the sidewalk burns,...more
What Lies Beneath After a decade of research and exploration, the Census of Marine Life has released incredible new knowledge about life in our Earth's oceans. The study involved 2,700 scientists from 80 countries and more than 9,000 days at sea, ultimately revealing more than 6,000 potentially new species. Among the discoveries: a species of shrimp thought to have gone extinct 50 million years ago. The findings also help identify important changes, like the dwindling population of Atlantic bluefin tuna off the coast of Northern Europe. - Visit the Census of Marine Life website to learn more. - A Sea Change :: Film Trailer Hidden below the waves, ocean acidification is wreaking havoc on marine ecosystems. - Restoring California's Wild Watersheds Why more water for wildlife means more water for people.
Friday, November 03, 2006 Fill the Bill Third graders from Williams Memorial Elementary School learned about bird bill adaptations through practical experience. They each had a bill type represented by either a staple remover, an eyedropper, a nail, tongs, tweezers, or a clothespin. There were various food items represented by spaghetti, bowtie pasta, staples in cardboard, colored water, sunflower seeds, and raisins. Students kept their bill as they rotated through each station of food and attempted to "eat". In the images, students are shown trying to "eat" individual sunflower seeds using staple removers. The staple remover bill represented a bill found on raptors, which is designed to tear flesh and not eat seeds. Though the students had some success at this station, it was not their most efficient stop. Obviously, none of the bills worked well at all of the stations and some, like the eyedropper, only worked well at one station. In the end, students saw that bird bills are adapted to exploit every niche. Additionally, they saw that if all birds shared the same diet, only a smaller population of birds could be sustained. Posted by Swampy at 2:09 PM
I understand that the atmospheric temperature is sensed relative to external body temperature. However, is the sensation of warmth registered linearly, or on a logarithmic scale, similar to sound, which is measured on the decibel scale? Or is it on another scale entirely?

A more general answer/comment is that technically almost nothing in biology is really linear. In a linear relation, when one variable increases the other increases proportionally. In biological systems this typically cannot happen because of physical limits (such as the number of molecules), which tend to produce saturation at some level. For example, when a substance is "sensed" by the receptors of a cell, sensing becomes saturated as the receptors approach maximal capacity. In a similar manner, heat sensing will be limited by a maximal capacity and saturation of the nervous system. This is of course not specific to sensing but is a general characteristic of biological systems, because they are ultimately made of a finite number of components. That said, sometimes a linear-like relation is observed over a specific range of values. This is a mathematical characteristic of a class of functions called logistic functions. At ranges that are far from saturation, these functions have approximately linear behavior. For example, in the following figure I marked (black box) the "linear-like" regime of a simple logistic function y=1/(1+100*exp(-x)):

The human sensation of warmth is not the measurement of static temperature. Humans detect the rate at which heat is leaving (or entering) their body. For instance, if you are sitting in your 0 degree car in the morning, your hand in the open air will feel warmer than your hand touching the steering wheel, despite them both being at 0 degrees. The reason is that heat conduction from your hand to the steering wheel is much faster than convective heat transfer from the air.
So you feel "colder" touching the steering wheel, but it's really that you're losing heat faster. Scholarpedia's article on temperature receptors: http://www.scholarpedia.org/article/Thermal_touch#Temperature_receptors
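The "linear-like" regime of the logistic function above can be checked numerically. A minimal sketch (the function is the y = 1/(1 + 100*exp(-x)) from the figure; the step size and test points are made up for the demonstration):

```python
import math

def logistic(x):
    # Logistic response from the figure: y = 1 / (1 + 100 * exp(-x))
    return 1.0 / (1.0 + 100.0 * math.exp(-x))

# The midpoint of this curve is at x = ln(100) ~ 4.6; near it the curve
# is approximately linear, while far above it the response saturates.
midpoint = math.log(100.0)
h = 0.1

# In a linear-like regime, successive finite-difference slopes are nearly equal.
slope_left = (logistic(midpoint) - logistic(midpoint - h)) / h
slope_right = (logistic(midpoint + h) - logistic(midpoint)) / h
print(slope_left, slope_right)   # nearly identical slopes near the midpoint

# Deep in saturation the slope collapses toward zero: the "sensor" stops responding.
slope_saturated = (logistic(midpoint + 8.0 + h) - logistic(midpoint + 8.0)) / h
print(slope_saturated)
```

This mirrors the biological argument: a response that looks linear over a working range, bounded by saturation at the extremes.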
I’ve had a few requests to comment on Eli Rabett’s recent post, observing that he was unable to observe a Medieval Warm Period in the bristlecone chronology reported in Salzer and Hughes 2006. Looking at the tree ring index one can clearly see many large eruptions, the little ice age, but no European Warm Period, often called medieval. I can’t think how many times I’ve said that Graybill bristlecone chronologies have a hockey stick shape – which means obviously that they don’t have a MWP. That’s one of the reasons why the Hockey Team is addicted to bristlecone chronologies. Bristlecone and foxtail chronologies are “active ingredients” in virtually all the Team reconstructions. So one’s first reaction to a bristlecone chronology showing no MWP is – Well, duh. To illustrate this, I’ve shown below three bristlecone versions used in MBH99: on the left, the Sheep Mountain chronology, the Mannian PC1 and the “adjusted” PC1. We compared the Sheep Mountain chronology to the Mannian PC1 in our first submission to Nature in January 2004, observing that the Mannian PC1 was merely an alter ego for Graybill bristlecone chronologies, which were known to be problematic as a temperature proxy. On the right, I’ve shown an excerpt from the new Salzer and Hughes paper. Not much difference. So to that extent, there’s nothing newsworthy in a bristlecone chronology which doesn’t show a MWP. We already knew that.

Figure 1. Left – Graybill and Mann versions; right – excerpt from Salzer figure.

The MWP in California

On several previous occasions, I’ve observed that there is very strong paleoclimate evidence for the MWP in California – even, and perhaps, especially in the bristlecone-foxtail areas. Medieval treelines in California were higher than at present, discussed here and here. Post-medieval lakes have even submerged medieval trees.
Miller (2006), discussed here and here, estimated great warmth in alpine California as follows: Deadwood tree stems scattered above treeline on tephra-covered slopes of Whitewing Mtn (3051 m) and San Joaquin Ridge (3122 m) show evidence of being killed in an eruption from adjacent Glass Creek Vent, Inyo Craters. Using tree-ring methods, we dated deadwood to 815-1350 CE, and infer from death dates that the eruption occurred in late summer 1350 CE…. Using contemporary distributions of the species, we modeled paleoclimate during the time of sympatry [the MWP] to be significantly warmer (+3.2 °C annual minimum temperature) and slightly drier (-24 mm annual precipitation) than present. Unfortunately, Salzer and Hughes do not discuss or reconcile any of this literature. Do they disagree with Miller’s analysis? If so, why? And why wouldn’t the reviewers ask them to reconcile their observations with other paleoclimate evidence? But hey, it’s the Team.

Hughes and the Ababneh Thesis

The composite illustrated in Salzer and Hughes is a composite of 5 sites: Sheep Mountain, Campito Mountain, Mt Washington, Pearl Peak and San Francisco Peaks. Methuselah Walk and Indian Garden are also referred to. Take a look at the provenance of the series. There are 3 versions that reflect updates: Mt Washington, Pearl Peak and San Francisco Peaks. None of the updates has been archived, even though at least one of the updates is now 10 years old. But look how old the other versions are: Sheep Mountain ends in 1990, Campito in 1983, Indian Garden in 1980, Methuselah Walk in 1979. These are the Graybill versions – Graybill’s Sheep Mountain version being shown above. But we know that Linah Ababneh updated the Sheep Mountain data in 2002. We also know that Linah Ababneh’s update, aside from finding a difference between strip bark and whole bark chronologies, did not replicate Graybill’s results and had no HS shape whatever.
(Figures for Sheep Mountain for strip bark and whole bark from 1600 on are shown separately in the thesis.) So the Sheep Mountain chronology had been updated – why wouldn’t this update have been used, aside from it not having a HS shape?

Ababneh Fig. 5. Cold and warm periods as inferred from tree-ring width chronology fluctuations above and below the mean after normalizing (Ababneh, 2006, this study); whole-bark and strip-bark chronologies are grouped together from two sites, Patriarch Grove and Sheep Mountain.

Maybe Rabett would argue that Hughes might have been unaware of the work; or that the work did not meet quality standards. Well, Hughes was not only aware of this work – he (and Jeffrey Dean) was on her Dissertation Committee! Can someone theorize as to a valid reason for not using the Ababneh update? I can’t imagine any. In passing, I also noticed inconsistencies between the data used for the old Graybill data sets and what has been archived. (Recall the Graybill tags at Almagre where we weren’t able to locate matches in the archive.) At Methuselah Walk and Indian Garden, the number of cores shown in the Salzer and Hughes table exactly matches the number of cores archived at ITRDB. But there are a lot more Sheep Mt and Campito Mt cores referred to than archived – the difference may be early crossdated cores that precede the existing archive, but one wonders whether, like Almagre, there are Graybill measurements that have never been archived for reasons that no one knows.

The Ababneh Data

I’ve tried to obtain the Ababneh data without success. I emailed Linah Ababneh at what appears to be her present posting and got no response. I emailed David Meko of the University of Arizona, who has an excellent record of archiving chronologies and measurements, and inquired about a University of Arizona report by Stockton mentioned in the Ababneh thesis (that bender asked about) and about the Ababneh measurements.
I reminded Meko that, in her thesis, she had undertaken to archive the measurements and presumably the university was responsible for ensuring that she completed the commitments in her thesis. Meko wrote back saying that he had checked around the department and had been unable to locate the Stockton report. He also said that they did not have any of Ababneh’s measurement data and that they had lost track of her. He gave me the name of someone who might know where she was. He agreed that she should archive the data and suggested that I write to the funding agency, who might take that into consideration in their grant process – (these are the people who put up with Lonnie Thompson and they’re supposed to take it out on Linah Ababneh? C’mon). He didn’t seem to think that the university had any responsibilities in the matter. He was quite pleasant, and, as I mentioned above, Meko himself has an excellent archiving record. But what a typical climate science circus. Someone goes out and updates the critical Sheep Mountain data. It doesn’t show a Hockey Stick. Instead of using the updated version, Hughes uses the old version with a HS. (Doesn’t this sound like Jacoby and D’Arrigo at Gaspé, where they withheld an update that didn’t have a HS and refused to give me the update when I learned that they were sitting on a non-HS update?) Now the person who got the data has moved and no one at Arizona has the data. Is this the “professional” standard that Eli Rabett and Tamino are holding out for Pete H and myself? We sure plan to do better than this.
How long does it take the Big Dipper to move in the sky? What length of time is required for the Dipper to change from one position to the other?

It depends on which is the "one position" and which is the "other position". :) In the Northern Hemisphere, all of the stars appear to rotate about a point in the sky that is due north, and at an elevation equal to the latitude at which you're standing. This also happens to be the point in the sky occupied by the North Star. (The same thing happens in the Southern Hemisphere, but there things rotate about a point in the south.) This apparent rotation of the sky is actually a result of the fact that the Earth itself is rotating. So the sky does one complete rotation every 24 hours. Of course, you can't see the stars during the day, but the sky is still "rotating" then nonetheless. So you see, there aren't just two positions for a constellation like the Big Dipper. But over the course of an entire night (~12 hours), you should be able to see it move from one end of its "path around the North Star" to the opposite end.

Last modified: March 9, 2003. Ask an Astronomer is hosted by the Astronomy Department at Cornell University.
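The answer above reduces to simple arithmetic: one full turn per 24 hours is about 15 degrees per hour. A short illustrative sketch (the function name and starting angle are invented for the example):

```python
# Apparent rotation of the sky about the celestial pole: one full turn
# per day, i.e. 360/24 = 15 degrees per hour. (The true sidereal rate is
# slightly faster, ~15.04 deg/h, which is why constellations drift with
# the seasons, but 15 deg/h matches the 24-hour figure in the answer.)
DEG_PER_HOUR = 360.0 / 24.0

def dipper_position(start_angle_deg, hours):
    """Position angle (degrees) about the pole after `hours` have elapsed."""
    return (start_angle_deg + DEG_PER_HOUR * hours) % 360.0

# Over a ~12-hour night the Dipper swings to the opposite side of the pole.
print(dipper_position(0.0, 12.0))  # 180.0 -- the "other position"
```

So the two "positions" a casual observer remembers are typically the two ends of a half-turn, roughly 12 hours apart.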
Introduction to Astronomy

Was there really a Big Bang? A Very Brief History of the Universe

Notes: Giga (abbreviated G) is the international term for 1,000,000,000 or 10^9. In the U.S. this is called a billion – almost everywhere else it is a milliard. We use G to avoid confusion. M is for million. The universe is about 13G years old. eV are electron-volts; an electron has a mass-energy (E = mc^2, remember) of 0.511 MeV. rad = radiation, CBR = cosmic background radiation, BH = black hole, Msun = mass of our Sun. A.U. is astronomical units (Sun–Earth distance, = 8 light-minutes); pc is parsecs: 3.26 light-years.

How can we possibly know? Here are four proofs:
- The observed expansion of the universe – the redshifts of galaxies first described by Edwin Hubble in 1929 implied a beginning to the universe, as had been suggested by earlier philosophers.
- The observed abundances of deuterium, helium, and lithium; production of the observed quantities of hydrogen's heavier isotope and the next two heavier elements is thought to be due primarily to their synthesis in the first three minutes of the Big Bang. These elements are not produced in the required quantities in observed stellar fusion reactions.
- The thermal spectrum of the Cosmic Microwave Background Radiation (CMBR) was predicted by Big Bang theory before its observation - always a convincing argument!
- The CMBR appears hotter in distant clouds of gas. The speed of light is finite, so we are seeing these distant clouds at an earlier epoch, when the universe was denser and hotter, as expected from Big Bang theory.

The Steady State alternative

The Steady State Model of scientists Hermann Bondi and Thomas Gold (and augmented by Fred Hoyle) postulated that there was no origin to the universe, that the large-scale features of the universe are constant from one epoch to the next, and thus, to maintain the average density of galaxies in an expanding universe, whole new galaxies must be popping into existence between the previous ones. In addition, to explain the CMBR, a whole new class of 10^14 weak microwave-emitting sources must exist. This is “about 100,000 times the total number of visible galaxies” (according to Hoyle). More modern estimates place the number of galaxies at about 10^11, or "only" 1,000 times fewer. This lack of supporting evidence for the Steady State theory and the perfect match between Big Bang predictions and the later discovery of the CMBR has led to almost universal acceptance of the Big Bang theory. Theoretical work in the 1960s “showed that the universe could have had a singularity, a big bang, if the theory of relativity was correct” (Stephen Hawking). Mathematician Roger Penrose and physicist Stephen Hawking went on to prove in 1970 that there must have been a Big Bang singularity, provided only that Einstein’s theory of general relativity is correct and that the universe contains only as much matter as we observe. We will see later that this latter condition is probably not true, but for now, the Big Bang theory prevails.
Popular Science Monthly, Volume 4, page 706

ALTHOUGH not by a balloon, yet the Atlantic has been crossed in the air, and "what has been can be." There are enough well-authenticated cases of the occurrence of American wild birds on the west coast of Europe to prove that the trip can be made by birds, and it is probable that successful navigation of the air will be the fruit of careful study of that natural flying-machine, a bird's wing. Every person who has not given more than a passing thought to the mechanism of flight is confident that he understands the whole subject, and tells you, if you ask, that the bird rows through the air with its wings, and that our lack of available force and of a sufficiently strong and light material is the only difficulty in the way of a successful flying-machine. A very little study of a bird's wing and its action will show that it is not by any means simple, and that every part and every curve and angle has a use, and helps in the performance of the function of the whole, which function is not yet perfectly understood, but does not in the least resemble the action of a paddle or oar. We shall also learn that all attempts to construct flying-machines have been made with an utter disregard of every thing that a wing might have taught. To this sweeping assertion I know of only two exceptions; a boy's kite, and the little circle of cardboard which runs up the kite-string in such a mysterious way, bear a very slight resemblance to a wing, in their mode of action, and may contain the germ of a successful flying-machine. To point out some of the facts already known about flying is one of the objects of this paper; another is to show how much there is to be learned about any natural object, and the way to set about it; for he who knows all that is to be learned about a wing has a good store of useful information, but he who knows all that may be learned from a wing is a wise man. Let us examine a feather.
When I say "examine a feather," I mean, let every one take the trouble to pull a quill-feather from an old duster, or find an old quill-pen, or in some way get possession of an actual feather, to see for himself what I wish to show; for, if what I have to say is not worth this trouble, it is not worth reading at all. Having found your feather, notice, first, the great strength of the shaft, compared with its lightness, and how this is secured by placing almost all the material on the outer wall of the quill. Notice, too, that the quill, where strength is most necessary, is tubular, while the rest of the shaft has a groove on its lower surface, and tapers toward the tip,
In a data transfer statement, a simple list of items takes the following form:

The variable must not be an assumed-size array, unless one of the following appears in the last dimension: a subscript, a vector subscript, or a section subscript specifying an upper bound.

Any expression must not attempt further I/O operations on the same logical unit. For example, it must not refer to a function subprogram that performs I/O on the same logical unit.

The data transfer statement assigns values to (or transfers values from) the list items in the order in which the items appear, from left to right.

When multiple array names are used in the I/O list of an unformatted input or output statement, only one record is read or written, regardless of how many array name references appear in the list.

The following example shows a simple I/O list:

    WRITE (6,10) J, K(3), 4, (L+4)/2, N

When you use an array name reference in an I/O list, an input statement reads enough data to fill every item of the array. An output statement writes all of the values in the array. Data transfer begins with the initial item of the array and proceeds in the order of subscript progression, with the leftmost subscript varying most rapidly.

The following statement defines a two-dimensional array:

    DIMENSION ARRAY(3,3)

If the name ARRAY appears with no subscripts in a READ statement, that statement assigns values from the input record(s) to ARRAY(1,1), ARRAY(2,1), ARRAY(3,1), ARRAY(1,2), and so on through ARRAY(3,3).

The following example shows how variables in the I/O list can be used in array subscripts later in the list:

    DIMENSION ARRAY(3,3)
    ...
    READ (1,30) J, K, ARRAY(J,K)

An input record contains the following values:

When the READ statement is executed, the first input value is assigned to J and the second to K, establishing the subscript values for ARRAY(J,K). The value 721.73 is then assigned to ARRAY(1,3). Note that the variables must appear before their use as array subscripts.

Consider the following derived-type definition and structure declaration:

    TYPE EMPLOYEE
      INTEGER ID
      CHARACTER(LEN=40) NAME
    END TYPE EMPLOYEE
    ...
    TYPE(EMPLOYEE) :: CONTRACT   ! A structure of type EMPLOYEE

The following statements are equivalent:

    READ *, CONTRACT
    READ *, CONTRACT%ID, CONTRACT%NAME

For More Information: For details on the general rules for I/O lists, see Section 10.2.2.
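The subscript-progression order described above (leftmost subscript varying most rapidly) is Fortran's column-major order. A small sketch in Python, purely to illustrate the fill order (the helper function is invented for the demonstration, not part of any Fortran runtime):

```python
def subscript_progression(dims):
    """Yield 1-based index tuples in Fortran order: leftmost subscript
    varies fastest. `dims` gives each dimension's extent, e.g. (3, 3)
    for ARRAY(3,3)."""
    order = [(1,) * len(dims)]
    idx = list(order[0])
    total = 1
    for d in dims:
        total *= d
    # Increment the leftmost subscript first, carrying into the next
    # dimension whenever an extent is exceeded.
    for _ in range(total - 1):
        for pos in range(len(dims)):
            if idx[pos] < dims[pos]:
                idx[pos] += 1
                break
            idx[pos] = 1  # carry into the next dimension
        order.append(tuple(idx))
    return order

# For ARRAY(3,3) the fill order matches the text:
# (1,1), (2,1), (3,1), (1,2), ... , (3,3)
print(subscript_progression((3, 3)))
```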
PURPOSE: To aid in illustrating how a siphon works.
DESCRIPTION: A chain passes over a pulley between two beakers. When one of the beakers is raised, the chain flows from the higher into the lower beaker, just as water flows from the higher to the lower container of a siphon. This is due to the greater weight of the chain, or water, on the side of the lower container. Note that the cause of the siphon flow is not air pressure, which is greater at the surface of the lower container!
SUGGESTIONS: Ask your students if a mercury siphon would work on the moon, in the absence of an atmosphere.
REFERENCES: (PIRA 2B60.29)
EQUIPMENT: Siphon chain model, as photographed.
SETUP TIME: None.
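The "greater weight on the lower side" can be made concrete with a rough force balance (all numbers below are illustrative assumptions, not measurements of the actual apparatus):

```python
# Rough model of the chain siphon: a uniform chain over a pulley, with
# hanging lengths h_high and h_low down to the two beakers. The net
# driving force is the weight difference between the two hanging sides.
G = 9.81                 # m/s^2
LINEAR_DENSITY = 0.05    # kg per metre of chain (assumed value)

def net_driving_force(h_high, h_low):
    """Net force (N) pulling the chain toward the lower beaker.

    h_high: hanging length on the raised-beaker side (m)
    h_low:  hanging length on the lowered-beaker side (m)
    """
    return LINEAR_DENSITY * G * (h_low - h_high)

# Raising one beaker shortens that side's hanging length, so the longer
# (lower) side outweighs it and the chain flows downhill -- just as the
# taller liquid column drives a water siphon.
print(net_driving_force(0.2, 0.5))  # positive: flow toward the lower beaker
```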
Of course you never want to look at an eclipse directly. Unless you have the correct solar filters, your best bet would be to use an indirect means of observing. There are many ways to do this. The simplest way to observe a solar eclipse is to make a pinhole through some cardboard and allow the sunlight to pass through the hole onto a piece of paper. This works but makes a very small image. For today's eclipse we rigged a pair of astronomical binoculars by allowing the sunlight to pass through the large end and projected it onto a piece of white paper. We could even see sunspots using this method. Here are some pictures of what we could see from the Phoenix, Arizona area: My favorite way of observing an eclipse does not require any equipment. All you have to do is look around you. As light gets filtered through tree leaves, blinds in your home, and other small pinpoints of light, crescents appear around you. Each is an actual image of the eclipse, formed the same way the pinhole viewer works. Here is an example on my neighbor's house where the sun's crescent appeared everywhere: The blinds in my house were also allowing just the right amount of light to project multiple little eclipses on the wall.
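Why is the pinhole image so small? Its size follows from the Sun's angular diameter, which is about half a degree. A quick sketch (the function name is invented; the 0.5° figure is a standard value, not from the text):

```python
import math

SUN_ANGULAR_DIAMETER_DEG = 0.5  # the Sun subtends roughly half a degree

def pinhole_image_diameter(hole_to_screen_m):
    """Diameter (m) of the Sun's pinhole image at a given projection distance:
    image size = distance * angular size in radians (small-angle approximation)."""
    return hole_to_screen_m * math.radians(SUN_ANGULAR_DIAMETER_DEG)

# A pinhole held 1 m from the paper gives an image under 1 cm across --
# which is why the plain-cardboard method "makes a very small image",
# and why projecting through binoculars (which magnify) reveals sunspots.
print(pinhole_image_diameter(1.0))
```

The same arithmetic explains the crescents under trees: every leaf gap is a pinhole a few metres from the ground, so each crescent is a few centimetres across.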
Cosmic Rose Blooms In Star Cluster Photo

The photo depicts the star cluster NGC 371, a stellar nursery in our neighboring galaxy the Small Magellanic Cloud, a dwarf galaxy about 200,000 light-years from Earth. Such regions of ionized hydrogen--known as HII regions--are sites of recent star birth. NGC 371 is an open cluster surrounded by a nebula. The stars in open clusters all originate from the same diffuse HII region, and over time the majority of the hydrogen is used up by star formation, leaving behind a shell of hydrogen such as the one in this image, along with a cluster of hot young stars. Image credit: ESO/Manu Mejias
Is it really greener to go on the bus, or to buy local?
Practice your skills of measurement and estimation using this interactive measurement tool based around fascinating images from biology.
Andy wants to cycle from Land's End to John o'Groats. Will he be able to eat enough to keep him going?
When you change the units, do the numbers get bigger or smaller? Which units would you choose best to fit these situations?
In which Olympic event does a human travel fastest? Decide which events to include in your Alternative Record Book.
Can you choose your units so that a cube has the same numerical value for its volume, surface area and total edge length?
General remarks on the temperature of the earth and outer space

General remarks on the temperature of the earth and outer space. American Journal of Science. 32, 1-20 (1837) by Ebeneser Burgess. English translation of "Remarques générales sur les températures du globe terrestre et des espaces planétaires." Annales de Chimie et de Physique. (Paris) 2nd ser., 27, 136-67 (1824), by Jean-Baptiste Joseph Fourier.

Essay about this article

Jean Baptiste Joseph Fourier (1768-1830) is best known today as a mathematical physicist who developed Fourier analysis and studied heat transfer. His contemporaries knew him as an administrator and scientist whose fortunes rose and fell with those of Napoleon Bonaparte. In his 1824 article reproduced here, Fourier compared the heating of the atmosphere to the action of glass in a greenhouse, but made assumptions about the Earth’s heat budget that are vastly different from those of today. He described the heating of the Earth by three distinct sources: (1) solar radiation, which is unequally distributed over the year and which produces the diversity of climates; (2) the temperature communicated by interplanetary space irradiated by the light from innumerable stars; and (3) heat from the interior of the Earth remaining from its formation. Examining each of these three sources and the phenomena they produce, Fourier concluded that the temperature of the Earth can be augmented by the interposition of the atmosphere, “because heat in the state of light finds less resistance in penetrating the air, than in repassing into the air when converted into non-luminous heat.” For Fourier both the atmosphere and the ocean resisted the free exchange of heat.
"The transparency of the waters appears to concur with that of the air in augmenting the degree of heat already acquired, because luminous heat flowing in, penetrates, with little difficulty, the interior of the mass, and non-luminous heat has more difficulty in finding its way out in a contrary direction." Fourier compared the atmosphere to a giant heliothermometer, a thermometer encased in a box with glass panes that was used by mountain climbers to register solar intensity at high altitudes. In Fourier’s analogy the atmosphere was sandwiched between the surface of the Earth and an imaginary cap provided by the finite temperature of interstellar space. He also foresaw humanity’s inevitable modification of the Earth’s heat budget. He pointed out that “the establishment and progress of human society…may in extensive regions produce remarkable changes in the state of the surface, distribution of waters, and the great movements of air. Such effects, in the course of some centuries,” Fourier continued, “must produce variations in the mean temperature for such places.” This work was subsequently cited by Pouillet, Tyndall, Arrhenius, and many others, but note that Fourier was not necessarily the “first” to examine what we today call the greenhouse effect. In 1681, Edme Mariotte wrote that although the sun's light and heat easily passed through glass and other transparent materials, heat from other sources ("chaleur de feu") did not. For further reading see Fleming, J.R., Historical Perspectives on Climate Change. Oxford: Oxford University Press (1998), Chapter 5.

a. Who was Jean Baptiste Joseph Fourier and what else, in addition to this article, can you find out about his scientific interests and accomplishments?
b. Can you find relations between his work on terrestrial temperatures and his other work?
c. What assumptions did Fourier make about the Earth’s heat budget that are vastly different from those of today?
What does this say about the course of scientific discovery?

Articles citing this paper:
- Washington, W.M. (2006) "Computer Modeling the Twentieth- and Twenty-First-Century Climate." Proceedings of the American Philosophical Society, vol. 150, no. 3, pp. 414-427.
- Pierrehumbert, R.T. (2004) "Warming the world." Nature 432, p. 677.
- Bard, E. (2004) "Greenhouse effect and ice ages: historical perspective" [Effet de serre et glaciations, une perspective historique]. Comptes Rendus - Geoscience, vol. 336, nos. 7-8, pp. 603-638.
- van der Veen, C.J. (2000) "Fourier and the 'greenhouse effect'." Polar Geography, vol. 24, no. 2, pp. 132-152.
- Fleming, J.R. (1999) "Joseph Fourier, the 'greenhouse effect', and the quest for a universal theory of terrestrial temperatures." Endeavour, vol. 23, no. 2, pp. 72-75.
- Fleming, J.R. (1998) "Charles Lyell and climatic change: speculation and certainty." Geological Society Special Publication, vol. 143, pp. 161-169.
- Dolan, B.P. (1998) "Representing novelty: Charles Babbage, Charles Lyell, and experiments in early Victorian geology." History of Science, vol. 36, no. 3, pp. 299-327.
Dynamics of Channel Erosion The physical processes that create eroded channels and drainage networks are recreated to study the physics of channelization. The depth of channels as a function of time is measured using a specialized laser-aided topography technique. Water is fed from a reservoir and the pressure is maintained at a constant level. The water seeps through a porous medium (in this case, "sand" consisting of monodisperse glass beads) and creates channels on the slope. This project is funded by the Department of Energy. Collaborators include Braunen Smith and Arshad Kudrolli (Clark University), and Alex Lobkovsky and Dan Rothman (Massachusetts Institute of Technology). This image was displayed at the APS March Meeting 2008, New Orleans, as part of the Topical Group on Statistical and Nonlinear Physics Gallery of Nonlinear Images. Image credit: Braunen Smith and Arshad Kudrolli, Clark University, Worcester, MA
Launch Vehicles - Aero High

Aero High was a sounding rocket used to study the upper atmosphere. Its main purpose was to investigate chemiluminescent reactions. Twenty-one (21) launches were made from 1964 to 1972, when the program was concluded. The Aero High was superseded by the Corella.

|Period of Use: |1964 - 1972|
|Payload Mass: |120 lb|

|Stage |Ignition |Motor |Total Impulse |Burn Time|
|Stage 1 |T+0 |Gosling 4 |100,000 lb·s |4.5 s|
|Stage 2 |T+14 |VELA |30,000 lb·s |5.0 s|
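Assuming the two numeric columns of the stage table are total impulse and burn time (an assumption based on their units, lb·s and s), the average thrust of each motor follows directly:

```python
def average_thrust(total_impulse_lb_s, burn_time_s):
    """Average thrust (lbf) = total impulse / burn time."""
    return total_impulse_lb_s / burn_time_s

# Stage 1 (Gosling 4): 100,000 lb-s over 4.5 s
print(round(average_thrust(100_000, 4.5)))   # ~22,222 lbf
# Stage 2 (VELA): 30,000 lb-s over 5.0 s
print(average_thrust(30_000, 5.0))           # 6,000 lbf
```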
How would you change the course of an Earth-threatening asteroid? One idea - a massive spacecraft that uses gravity as a towline - is illustrated in this dramatic artist's view of a gravitational tractor in action. In the hypothetical scenario worked out in 2005 by Edward Lu and Stanley Love at NASA's Space Center, a 20 ton spacecraft tows a 200 meter diameter asteroid by simply hovering near the asteroid. The spacecraft's thrusters are canted away from the surface. The steady thrust would gradually and predictably alter the course of the tug and asteroid, coupled by their mutual gravitational attraction. While it sounds like the stuff of science fiction, ion drives already power working spacecraft, and a gravitational tractor would work regardless of the asteroid's structure or surface properties. Credit & Copyright: Dan Durda (FIAAA,
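The scale of the gravitational "towline" can be sketched from the figures in the caption (20 tons, 200 m diameter); the asteroid's density and the hover distance below are assumptions added for the illustration:

```python
import math

G = 6.674e-11                        # gravitational constant, m^3 kg^-1 s^-2

# From the caption: 20 ton spacecraft, 200 m diameter asteroid.
# Density and hover distance are illustrative assumptions.
m_spacecraft = 20_000.0              # kg (20 metric tons)
radius = 100.0                       # m (200 m diameter)
density = 2000.0                     # kg/m^3, assumed rocky body
m_asteroid = density * (4.0 / 3.0) * math.pi * radius**3

hover_r = 200.0                      # m from the asteroid's centre, assumed

# Gravitational towline force, and the asteroid's resulting acceleration
force = G * m_spacecraft * m_asteroid / hover_r**2
accel = force / m_asteroid

year = 3.156e7                       # seconds in a year
delta_v = accel * year               # velocity change after one year of towing
print(force, delta_v)                # a fraction of a newton; ~1 mm/s per year
```

A fraction of a newton sounds feeble, but applied for years, well in advance of a predicted impact, a millimetre-per-second nudge can shift an arrival time enough to miss the Earth.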
Concerns about rising wildfire fuels may be overblown Many fear that fire suppression efforts in the U.S. have made forests denser, increasing the risk of catastrophic wildfires. But a study in Forest Ecology and Management suggests that this assumption doesn’t hold true for all forests. Fuel-reduction schemes, which aim to thin out forests by removing trees and conducting controlled burns, operate on the premise that forests are denser today than they used to be. Researchers analyzed aerial photographs of Colorado Front Range forests from 1938 and 1940 and compared them to images taken in 1999. They found that average tree cover increased by only 4 percent, with high-elevation conifer forests showing no significant change over six decades. Density did go up substantially in historically open areas such as low-elevation ponderosa pine forests, the team notes, but these made up only 11 percent of the total region. They suggest that the results could help forest managers target areas that have changed the most. – Roberta Kwok Source: Platt, R.V. and T. Schoennagel. 2009. An object-oriented approach to assessing changes in tree cover in the Colorado Front Range 1938-1999. Forest Ecology and Management DOI: 10.1016/j.foreco.2009.06.039 Image © Mychko Alezander, iStockPhoto.com
the process of combining two light nuclei to form a heavy nucleus is known as nuclear fusion. When two light nuclei (having low binding energy per nucleon) are combined to form a heavy nucleus, there occurs a small mass defect which gets converted into energy. The process requires very high temperature for occurrence. Why is such a high temperature necessary? Ans: This is because when positively charged nuclei come close to each other for fusion, they require very high energy to counter the repulsive force between them. [illustration] Two deuterium nuclei fuse to form a tritium nucleus and a proton as a by-product. Compute the energy released. Given: mass of deuterium = 2.01410 u, mass of tritium nucleus = 3.01605 u and mass of proton = 1.00782 u. Mass of reactants = 2.01410 x 2 = 4.02820 u. Mass of products = 3.01605 + 1.00782 = 4.02387 u. Mass defect = 4.02820 - 4.02387 = 0.00433 u. Energy released = 0.00433 x 931 = 4.03 MeV. How has the factor 931 come into the computation? Ans: From E = mc^2: if a mass of 1 amu is converted entirely into energy by this relation, then E = 931 MeV. A mass defect of 1 u is thus equivalent to 931 MeV of energy produced. Radioactivity: the phenomenon of spontaneous emission of radiation from a radioactive substance is known as radioactivity. This is exhibited naturally by certain heavy elements like uranium, radium, thorium etc. The main radioactive decays are: - α decay: in this case an α particle (helium nucleus) is ejected and the parent nucleus loses two protons and two neutrons. - β decay: in this process the charge Ze of the nucleus changes but the number of nucleons remains unchanged. This happens if the nucleus emits an electron (β⁻ decay) or a positron (β⁺ decay). In this process either a proton is converted into a neutron or vice versa, and the decay is also accompanied by the release of a chargeless and massless particle called the neutrino. - γ decay: a gamma-ray emission affects neither the charge nor the number of nucleons. In a typical decay process, the unstable nucleus (parent) decays and results in the formation of a daughter nucleus. Thus effectively the number
of parent nuclei decreases in the process. If initially there were N0 unstable nuclei, then after time t the number of parent nuclei N remaining is given by N = N0 e^(-λt), where λ is the decay constant. This law is followed by all radioactive decays. • The time required to get the number of parent nuclei halved (the half-life) is given by T(1/2) = 0.693/λ. • Another quantity that measures the rapidity of decay is the average or mean life time of a nucleus, t(av) = 1/λ. Average life = 1.44 times the half life.
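The arithmetic in the worked example above, and the 1.44 relation between mean life and half-life, can be checked with a short Python sketch (the masses and the rounded 931 MeV/u factor are taken from the text; the decay constant is an arbitrary illustrative value):

```python
import math

# Energy released in D + D -> T + p, from the mass defect.
U_TO_MEV = 931.0              # 1 u of mass defect ~ 931 MeV (from E = mc^2)

m_deuterium = 2.01410         # u
m_tritium   = 3.01605         # u
m_proton    = 1.00782         # u

mass_defect = 2 * m_deuterium - (m_tritium + m_proton)
energy_mev  = mass_defect * U_TO_MEV
print(round(energy_mev, 2))   # ~4.03 MeV, as in the text

# Decay law N = N0 * exp(-lam * t): the mean life (1/lam) is 1.44 half-lives.
lam = 0.25                    # arbitrary decay constant, per second
half_life = math.log(2) / lam
mean_life = 1.0 / lam
print(round(mean_life / half_life, 2))   # 1.44
```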
<urn:uuid:20cc0774-24a6-4479-808c-1d2f54be561d>
4.125
614
Q&A Forum
Science & Tech.
53.466538
Maximum Likelihood is a method for the inference of phylogeny. It evaluates a hypothesis about evolutionary history in terms of the probability that the proposed model and the hypothesized history would give rise to the observed data set. The supposition is that a history with a higher probability of reaching the observed state is preferred to a history with a lower probability. The method searches for the tree with the highest probability or likelihood. The Maximum Likelihood method of inference is available for both nucleic acid and protein data. The following programs are available from the web: Advantages and disadvantages of maximum likelihood methods: Maximum likelihood evaluates the probability that the chosen evolutionary model will have generated the observed sequences. Phylogenies are then inferred by finding those trees that yield the highest likelihood. Assume that we have the aligned nucleotide sequences for four taxa, with sites numbered 1 ... j ... N: (1) A G G C U C C A A ... A (2) A G G U U C G A A ... A (3) A G C C C A G A A ... A (4) A U U U C G G A A ... C and we want to evaluate the likelihood of the unrooted tree represented by the nucleotides of site j in the sequence and shown below: (1) (2) \ / \ / ------ / \ / \ (3) (4) What is the probability that this tree would have generated the data presented in the sequence under the chosen model? Since most of the models currently used are time-reversible, the likelihood of the tree is generally independent of the position of the root. Therefore it is convenient to root the tree at an arbitrary internal node, as done in the figure below, C C A G \ / | / \/ | / A | / \ | / \ | / A Under the assumption that nucleotide sites evolve independently (the Markovian model of evolution), we can calculate the likelihood for each site separately and combine the likelihoods into a total value towards the end.
To calculate the likelihood for site j, we have to consider all the possible scenarios by which the nucleotides present at the tips of the tree could have evolved. So the likelihood for a particular site is the summation of the probabilities of every possible reconstruction of ancestral states, given some model of base substitution. In this specific case any of the nucleotides A, G, C, and T could occupy nodes (5) and (6), giving 4 x 4 = 16 possibilities: _ _ | C C A G | | \ / | / | | \/ | / | L(j) = Sum(Prob | (5) | / |) | \ | / | | \ | / | |_ (6) _| In the case of protein sequences each site may occupy 20 states (those of the 20 amino acids) and thus 400 possibilities have to be considered. Since any one of these scenarios could have led to the nucleotide configuration at the tip of the tree, we must calculate the probability of each and sum them to obtain the total probability for each site j. The likelihood for the full tree then is the product of the likelihoods at each site: N L = L(1) x L(2) x ... x L(N) = PROD L(j) j=1 Since the individual likelihoods are extremely small numbers, it is convenient to sum the log likelihoods at each site and report the likelihood of the entire tree as the log likelihood: N ln L = ln L(1) + ln L(2) + ... + ln L(N) = SUM ln L(j) j=1 The model of evolution that attributes to each possible nucleotide or amino-acid substitution a certain probability is essential to obtain the correct tree. In the case of protein sequences the simplest model is the Poisson model, which assumes that all changes between amino acids occur at the same rate. This assumption is clearly unreasonable for protein sequence data. Therefore, the PROTML program in the MOLPHY package (Adachi and Hasegawa, 1992), as well as the PUZZLE program by Strimmer and von Haeseler (1995), have implemented an instantaneous rate matrix derived from the Dayhoff empirical substitution matrix. This has been called the Dayhoff model.
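The bookkeeping described above, multiplying tiny per-site likelihoods by instead summing their logs, can be sketched in a few lines of Python (the site likelihood values below are made-up illustrative numbers, not from any real data set):

```python
import math

# Hypothetical per-site likelihoods L(j) for a 4-site alignment.
site_likelihoods = [1e-3, 5e-4, 2e-3, 1e-4]

# Multiplying such small numbers quickly underflows for long alignments,
# so sum the log likelihoods instead: ln L = sum_j ln L(j).
log_L = sum(math.log(L) for L in site_likelihoods)
print(log_L)

# exp(ln L) recovers the product L(1) x L(2) x ... x L(N).
print(math.exp(log_L))
```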
Recently a model called the JTT model of evolution, based upon the updated empirical substitution matrix of Jones et al. (1992), has been developed and implemented in these programs. The maximum likelihood tree The above procedure is then repeated for all possible topologies (or for all possible trees). The tree with the highest probability is the maximum likelihood tree. created by: Fred Opperdoes Last updated: 8 August 1997.
<urn:uuid:6e523db7-9de9-4c13-ad46-c9525c516ece>
3.796875
1,020
Knowledge Article
Science & Tech.
54.128067
Arene substitution patterns are part of organic chemistry IUPAC nomenclature and pinpoint the position of substituents other than hydrogen in relation to each other on an aromatic hydrocarbon. Ortho, meta, and para substitution - In ortho-substitution, two substituents occupy positions next to each other, which may be numbered 1 and 2. In the diagram, these positions are marked R and ortho. - In meta-substitution the substituents occupy positions 1 and 3 (corresponding to R and meta in the diagram). - In para-substitution, the substituents occupy the opposite ends of the ring (positions 1 and 4). The toluidines serve as an example for these three types of substitution. Ipso, meso, and peri substitution - Ipso-substitution describes two substituents sharing the same ring position in an intermediate compound in an electrophilic aromatic substitution. - Meso-substitution refers to substituents occupying a benzylic position. It is observed in compounds such as calixarenes and acridines. - Peri-substitution occurs specifically in naphthalenes for substituents at the 1 and 8 positions. Cine and tele substitution - In cine-substitution, the entering group takes up a position adjacent to that occupied by the leaving group. For example, cine-substitution is observed in aryne chemistry. - Tele-substitution occurs when the new position is more than one atom away on the ring. The prefixes ortho, meta and para are all derived from Greek, respectively meaning "straight or correct", "following or after", and "akin to or similar". The relationship to the current meaning is perhaps not obvious. The ortho description was historically used to designate the original compound, and an isomer was often called the meta compound. For instance, the trivial names orthophosphoric acid and trimetaphosphoric acid have nothing to do with aromatics at all. Likewise the description para was reserved for just closely related compounds.
Thus Berzelius originally called the racemic form of aspartic acid paraaspartic acid (another obsolete term: racemic acid) in 1830. The use of the descriptors ortho, meta and para for multiply substituted aromatic rings begins with Wilhelm Körner in the period 1866–1874, although he chose to reserve the ortho prefix for the 1,4-isomer and the meta prefix for the 1,2-isomer. The current nomenclature (different again from that of Körner) was introduced by the Chemical Society in 1879. Examples of the use of this nomenclature are given below. Catechol, resorcinol and hydroquinone are isomers also: Phthalic acid has two isomers, the meta isomer isophthalic acid and the para isomer terephthalic acid:
<urn:uuid:d1c9ac86-cbd2-4f1f-a5c4-484fd6b809b1>
3.40625
642
Knowledge Article
Science & Tech.
21.968448
Apr. 12, 2008 As useful as nanotubes may be, the process of making them may have unintentional and potentially harmful impacts on the environment. Carbon nanotubes are 10,000 times thinner than a human hair, yet stronger than steel and more durable than diamonds. They conduct heat and electricity with efficiency that rivals copper wires and silicon chips, with possible uses in everything from concrete and clothes to bicycle parts and electronics. They have been hailed as the next "wonder material" for what could become a multi-billion dollar manufacturing industry in the 21st century. But as useful as nanotubes may be, the process of making them may have unintentional and potentially harmful impacts on the environment. MIT/WHOI graduate student Desirée Plata and her mentors, chemists Phil Gschwend of the Massachusetts Institute of Technology and Chris Reddy of the Woods Hole Oceanographic Institution, recently analyzed ten commercially made carbon nanotubes to identify the chemical byproducts of the manufacturing process and to help track them in the environment. Plata found that the ten different carbon nanotubes had vastly different compositions; most previous toxicity studies have generally assumed that all nanotubes are the same. This diversity of chemical signatures will make it harder to trace the impacts of carbon nanotubes in the environment. In previous work (first presented last fall), Plata and colleagues found that the process of nanotube manufacturing produced emissions of at least 15 aromatic hydrocarbons, including four different kinds of toxic polycyclic aromatic hydrocarbons (PAHs) similar to those found in cigarette smoke and automobile tailpipe emissions. They also found that the process was largely inefficient: much of the raw carbon went unconsumed and was vented into the atmosphere. The new research by Plata et al. was published April 3 on the web site of the journal Nanotechnology.
In the next phase of Plata's work, she will collect real-time data from a European nanotube manufacturing facility that is poised to let her set up the same monitors she used in the MIT lab. "It is the indiscriminate use of poorly understood chemicals that causes environmental and public health costs," Plata said. "We want to work proactively with the carbon nanotube industry to avoid repeating environmental mistakes of the past. Instead of reacting to problems, we hope to preclude them altogether." Note: Materials may be edited for content and length. For further information, please contact the source cited above. Note: If no author is given, the source is cited instead.
<urn:uuid:fa0931ed-e83e-4dcd-ab43-b82d08677286>
3.8125
538
Truncated
Science & Tech.
25.829402
Most people connect twisters with tornadoes, but tropical twisters actually come from hurricanes. Hurricanes are what scientists call strong tropical cyclones. They form when large areas of the ocean are heated and the air pressure above those areas drops. This causes thunderstorms and strong surface winds. Tropical cyclones develop over warm tropical or subtropical water (e.g., in the Atlantic off the coast of Africa or in the Pacific). As they travel long distances, gathering energy from the ocean, they are likely to be classified as strong tropical cyclones. If the winds of a tropical storm reach 74 mph, the storm is classified as a hurricane.
<urn:uuid:77d5c3e2-8384-4f7d-9130-eba21fa7b158>
3.484375
144
Listicle
Science & Tech.
49.175317
In certain calculations in mathematics and related sciences, it is necessary to perform operations with numbers unlike any mentioned thus far in this course. These numbers, unfortunately called "imaginary" numbers by early mathematicians, are quite useful and have a very real meaning in the physical sense. The number system which consists of ordinary numbers and imaginary numbers is called the COMPLEX NUMBER system. Complex numbers are composed of a "real" part and an "imaginary" part. This chapter is designed to explain imaginary numbers and to show how they can be combined with the numbers we already know. The concept of number, as has been noted in previous chapters, has developed gradually. At one time the idea of number was limited to positive whole numbers. The concept was broadened to include positive fractions; numbers that lie between the whole numbers. At first, fractions included only those numbers which could be expressed with terms that were integers. Since any fraction may be considered as a ratio, this gave rise to the term RATIONAL NUMBER, which is defined as any number which can be expressed as the ratio of two integers. (Remember that any whole number is an integer.) It soon became apparent that these numbers were not enough to complete the positive number range. The ratio, π, of the circumference of a circle to its diameter, did not fit the concept of number thus far advanced, nor did certain square roots. Although decimal values are often assigned to these numbers, they are only approximations. That is, π is not exactly equal to 22/7 or to 3.142. Such numbers are called IRRATIONAL to distinguish them from the other numbers of the system. With rational and irrational numbers, the positive number system includes all the numbers from zero to infinity in a positive direction. Since the number system was not complete with only positive numbers, the system was expanded to include negative numbers.
The idea of negative rational and irrational numbers to minus infinity was an easy extension of the system. Rational and irrational numbers, positive and negative to ± infinity as they have been presented in this course, comprise the REAL NUMBER system. The real number system is pictured in figure 15-1. As shown in a previous chapter, the plus sign in an expression such as 5 + 3 can stand for either of two separate things: It indicates the positive number 3, or it indicates that +3 is to be added to 5; that is, it indicates the operation to be performed on +3. Likewise, in the problem 5 - 3, the minus sign may indicate the negative number -3, in which case the operation would be addition; that is, 5 + (-3). On the other hand, it may indicate the sign of operation, in which case +3 is to be subtracted from 5; that is, 5 - (+3). Thus, plus and minus signs may indicate positive and negative numbers, or they may indicate operations to be performed. Figure 15-1. - The real number system. The number line pictured in figure 15-1 represents all positive and negative numbers from plus infinity to minus infinity. However, there is a type of number which does not fit into the picture. Such a number occurs when we try to solve the following equation: x² = -4, whose solutions are x = ±√(-4). Notice the distinction between this use of the radical sign and the manner in which it was used in chapter 7. Here, the ± symbol is included with the radical sign to emphasize the fact that two values of x exist. Although both roots exist, only the positive one is usually given. This is in accordance with usual mathematical convention. The equation raises an interesting question: What number multiplied by itself yields -4? The square of -2 is +4. Likewise, the square of +2 is +4. There is no number in the system of real numbers that is the square root of a negative number.
The square root of a negative number came to be called an IMAGINARY NUMBER. When this name was assigned to the square roots of negative numbers, it was natural to refer to the other known numbers as the REAL numbers.
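The idea can be made concrete with Python's built-in complex-number support (a modern illustration, not part of the original course text): the square root of -4 is the imaginary number 2i, written 2j in Python.

```python
import cmath

# Principal square root of a negative number lives outside the reals.
x = cmath.sqrt(-4)
print(x)            # 2j

# Both +2j and -2j square to -4, matching the ± discussion above.
assert x * x == -4
assert (-x) * (-x) == -4
```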
<urn:uuid:926848d7-f03f-47dc-ba29-6ddc5c8ceab0>
4.1875
844
Knowledge Article
Science & Tech.
43.112368
A microscope equivalent to the Hubble telescope has been installed at the Canadian Centre for Electron Microscopy at McMaster. It is said to be the most powerful and advanced electron microscope. It is called the Titan 80-300 and has already been put to use. The university is the first university with a microscope of this caliber. It is said that there are many applications for it in the life sciences. It should lead to many new discoveries and new cures for different diseases. A test was conducted when they looked at an aluminum alloy. They observed it at 14-million times magnification. I think this is a very important thing for biology and science in general. It will allow us to see things that we maybe have missed in the past or let us make new discoveries. It could advance medicine, how products are made, and life in general. To learn more about this amazing microscope, Click Here.
<urn:uuid:08a4f9a4-e0d3-4a92-8e0f-7675de854fc4>
2.875
185
Personal Blog
Science & Tech.
50.257948
Gyre, without getting into a big long explanation, the source of the energy is generally referred to as "zero point energy". There are various claims for its density. One claim is that "if all the energy contained in 1 cubic centimeter was changed into matter, it would produce more matter than exists in the observable universe." There are other claims: "Nobel Laureate Richard Feynman and one of Einstein's protégés, John Wheeler, calculated that there is more than enough energy in the volume of a coffee cup to evaporate all the world's oceans." The Nobel prizes were long ago awarded for proving the existence of ZPE. The work of Tesla on toroid transformers was followed up by Bob Boyce. It was also demonstrated in the TPU. Stan Meyers proved that an IC engine could be run on water. The current efforts are centered on reducing water to HHO rather than directly igniting it. Stan Meyers was poisoned for his efforts. He was directly igniting it. The HHO efforts of Boyce, et al. are using a parallel-plate device to produce the HHO from water. Hundreds of researchers have long ago exceeded the "Faraday Limit" at producing HHO gas. The most interesting part of all this is the fact that we have almost NO understanding of water. It has been claimed over and over that the HHO gas resulting from a certain production method [Brown's gas] can be used to remove the radioactivity from radioactive elements and isotopes. Obviously, this contradicts much of what we hold true in our beliefs of radioactivity. The other interesting area is transmutation. Researchers put pure palladium electrodes in pure water and apply current: cold-fusion experiments. Under later analysis, they find dozens of elements present. There is no explanation except for low energy transmutation. This too flies in the face of much of our dearly held theories. Science understands VERY little of the properties of water. I don't post things because I believe that they are the absolute truth.
I post them because I believe that they should be considered.
<urn:uuid:ed7017d9-f8e2-4315-8984-cce6e6e15a87>
2.703125
434
Comment Section
Science & Tech.
47.873753
Choose any three by three square of dates on a calendar page. Circle any number on the top row, then put a line through the other numbers that are in the same row and column as your circled number. Repeat this for a number of your choice from the second row. You should now have just one number left on the bottom row; circle it. Find the total of the three circled numbers. Compare this total with the number in the centre of the square. What do you find? Can you explain why this happens? Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some other possibilities for yourself! What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle? It is always good to start simply. What are the possible combinations of guesses? What possible scores can you be given, and what could they mean? A score will narrow your possibilities for the next move but will not necessarily remove all choice.
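Before reading an explanation, the calendar trick can be checked by brute force (a quick sketch; the choice of the 17th as the centre date is arbitrary). Circling one number per row without reusing a column amounts to choosing a permutation of the three columns:

```python
from itertools import permutations

def calendar_block(center):
    """3x3 block of dates: rows are 7 apart, columns 1 apart."""
    return [[center + 7 * dr + dc for dc in (-1, 0, 1)] for dr in (-1, 0, 1)]

center = 17                      # e.g. a square centred on the 17th
rows = calendar_block(center)

# Every valid way of circling gives the same total: three times the centre.
for cols in permutations(range(3)):
    total = sum(rows[r][cols[r]] for r in range(3))
    assert total == 3 * center

print("the circled numbers always total", 3 * center)
```

The offsets explain why: each circled value is centre + 7·(row offset) + (column offset), and over the three picks the row offsets (-1, 0, 1) and the column offsets (a permutation of -1, 0, 1) both cancel to zero.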
<urn:uuid:47fb2bc3-92a1-4591-aee5-baaa20f7ba36>
3.890625
254
Tutorial
Science & Tech.
71.394341
A simple accelerometer You tape one end of a piece of string to the ceiling light of your car and hang a key with mass m from the other end (Figure 5.7). A protractor taped to the light allows you to measure the angle the string makes with the vertical. Your friend drives the car while you make measurements. When the car has a constant acceleration with magnitude a toward the right, the string hangs at rest (relative to the car), making an angle $B$ with the vertical. (a) Derive an expression for the acceleration $a$ in terms of the mass m and the measured angle $B$. (b) In particular, what is $a$ when $B$ = 45°? When $B$ = 0? I don't care about the answers; the important thing is the following: The book says The string and the key are at rest with respect to the car, but car, string, and key are all accelerating in the +x direction. Thus, there must be a horizontal component of force acting on the key. That's the reason the book decided to consider a force in the $+x$ direction, but I'm looking for a better explanation: how would I detect the force in the $+x$ direction in another way? To me, when I draw the free body diagram of the string, there looks to be no force acting in the $+x$ direction! I understand it starts with noticing that the string is attached to the ceiling of the car, and that the car has a force causing acceleration in one direction, but I don't know how to go further than that.
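For reference, the standard textbook result for part (a) (this is the usual pendulum-accelerometer analysis, not quoted from the book's solution): balancing the vertical component of the string tension against gravity and the horizontal component against m·a gives a = g·tan(B), independent of the mass m. A quick numeric sketch:

```python
import math

g = 9.8  # m/s^2

def acceleration(B_degrees):
    """Acceleration inferred from the string's angle B to the vertical."""
    return g * math.tan(math.radians(B_degrees))

print(acceleration(45))   # approximately g (tan 45° = 1)
print(acceleration(0))    # 0: string hangs straight down at constant velocity
```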
<urn:uuid:3474766e-7995-4e03-9e99-8bb3285fa050>
3.859375
337
Q&A Forum
Science & Tech.
67.761758
Nobel Scientists | Interviews with Nobel Prize winning scientists The 20th Century was a time that saw significant advances in all branches of science. In the 1980s, the BBC decided to preserve a record of some of the great minds behind these remarkable developments and recorded a series of in-depth interviews with Nobel Prize-winning scientists in conversation with biologist and broadcaster Lewis Wolpert. Never before broadcast, this footage was discovered within the BBC Archive and made available in partnership with BBC World Service and Wellcome Collection. It presents a unique record of key figures from the world of science in the recent past. Working with John Cockcroft to split the atom. How playing with iron filings led to a Nobel Prize in Chemistry. An interview with the scientist who helped unlock the genetic code. Discovering how our immune systems protect us. Understanding disordered systems. Discovering how nerves transmit messages around the body. Radio telescopes, pulsars and why stars 'twinkle'.
<urn:uuid:4ff1e2a6-f620-427c-ba02-030e2f3e89f3>
3.1875
272
Content Listing
Science & Tech.
48.215676
Creating a dynamically named verb is rather simple: just add the verb like you would a new object, supplying the name as an argument of new(). Now instead of the verb being named 'Activate' it will be called 'One', but it'll still execute Activate() when used. Pretty handy, huh? The new() proc for verbs can take three arguments: the first is the atom you're giving the verb to, the second is the name of the verb, and the third is the description of the verb. Unfortunately things like dynamic settings for categories and other 'set' stuff aren't available. Maybe some day! Let's take a look at a more open-ended example of the above that'll make use of the third argument of new(). This will handle all of the verb naming and addition inside of the object's New() proc so that you only have to create the object to gain access to the verbs. You'll end up with a unique verb for each object, each with its own verb description. Using this method you'll make it easier for people to create macros and keep their stuff organized, as opposed to having a long odd-looking command input when you need to specify which object to use the Activate() verb on. Removing verbs can be done in a similar fashion; you wouldn't think new() would be used for deleting something, but it is! verbs -= new/obj/myobjs/proc/Activate(usr,"One") will remove the command named 'One' from the list. That's about all there is to dynamic verbs; I'm sure you guys can find new and creative ways of using them. I'd love to hear about them, so don't hesitate to page me :). Next time: look-up and look-down type operators, what are they for and why doesn't anybody use them?
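A minimal sketch of the pattern described above (the /obj/myobjs path, the verb name "One", and the description text are illustrative assumptions, not code from the article):

```dm
obj/myobjs/New()
    // Give each new object its own dynamically named verb.
    // new() args: the atom receiving the verb, its name, its description.
    verbs += new/obj/myobjs/proc/Activate(src, "One", "Runs Activate() on this object.")
    ..()

obj/myobjs/proc/Activate()
    usr << "You activate [src]!"
```

Since the naming happens in New(), simply creating the object gives its owner the uniquely named verb, and the matching `verbs -= new ...` call shown above takes it away again.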
<urn:uuid:fd66cae2-e8b4-480e-b140-1b07efe5b99a>
2.765625
387
Comment Section
Software Dev.
62.729068
|Example of JL Component Composition| We illustrate an example of JL component composition using the TransportIfc interface. We now define three layers that use this interface. The TCP layer exports TransportIfc and provides data transport using TCP. The Secure layer provides data encryption/decryption and the KeepAlive layer automatically exchanges liveness notifications between communicating peers. The Secure and KeepAlive layers import and export the TransportIfc interface. These three layers are defined below as classes in JL. The TCP class is not parameterized, but the Secure and KeepAlive classes are mixins. In the latter two classes, the type parameter, T, is constrained by TransportIfc: any instantiation of Secure or KeepAlive requires an actual type parameter that implements the TransportIfc interface. The type equation below defines a new type, Trans, by composing our three layers. We say that Trans, which implements a secure TCP transport with the automatic keep-alive feature, is generated by its type equation. Trans implements TransportIfc because that's the interface exported by the leftmost, or top, layer in the composition stack. Compositions are seen as stacks when viewed graphically, as in Figure 1. Figure 1 - Trans Layer Composition Layers in a composition can be thought of as stacked virtual machines that perform feature-specific processing. Though we haven't shown method implementations, we can walk through a hypothetical invocation of the send() method to illustrate this idea of virtual machines. When a Trans client invokes send(), the KeepAlive layer at the top of the stack gets control first. KeepAlive's send() simply calls the Secure layer's send(). Next, the Secure layer encrypts the message and then invokes the TCP layer's send() to transmit the encrypted data. The ordering of layers is important in this scheme: if the KeepAlive and Secure layers were reversed, then liveness messages would be sent in the clear rather than encrypted.
The ability to mix and match layers to generate a new type with the precise set of desired features gives JL compositional flexibility. Every time a new layer is added into a composition, the resulting type is refined with a new characteristic or behavior. This stepwise refinement of code characterizes JL programming methodology.
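The JL source itself is not reproduced here, but the stacking behavior can be sketched with Python mixins (all class bodies below are illustrative stand-ins, not JL code; the "encryption" is a placeholder string transform):

```python
class TCP:
    """Bottom layer: stands in for real TCP transport."""
    def send(self, data: str) -> str:
        return f"tcp({data})"

class Secure:
    """Mixin layer: 'encrypts' the payload, then delegates down the stack."""
    def send(self, data: str) -> str:
        return super().send(f"enc({data})")

class KeepAlive:
    """Mixin layer: would piggyback liveness messages; here it just delegates."""
    def send(self, data: str) -> str:
        return super().send(data)

# Trans = KeepAlive<Secure<TCP>>: the leftmost base gets control first,
# mirroring the top of the composition stack in Figure 1.
class Trans(KeepAlive, Secure, TCP):
    pass

print(Trans().send("hello"))   # tcp(enc(hello))
```

As in the article, ordering matters: swapping KeepAlive and Secure in the base-class list would route data through the layers in a different order.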
<urn:uuid:927e45cb-de33-4c84-8b61-25c1890997da>
2.859375
466
Documentation
Software Dev.
31.991081
From our childhood days in elementary school science class we have been taught that matter exists in three states: namely solids, liquids and gases. Recently an additional state known as plasma has been recognized, but it is of more theoretical than practical importance at present. H2O can exist in any of the three common states as steam, ice or water and, when the conditions are right, it can be found in all three simultaneously. The set of conditions under which this phenomenon occurs is known as the triple point. There are a few substances for which the pressure at the triple point is so high that under normal conditions these solids pass directly into the gaseous state from the solid. Examples of these are solid carbon dioxide (CO2) or "dry ice" and naphthalene (C10H8), commonly encountered as the active ingredient in traditional moth balls. Under certain conditions these materials can pass from the solid state into the gaseous state by means of a process known as sublimation. If water, steam and ice share the same molecular formula, namely H2O, then how do they form three distinct states of matter? The answer lies in what is known as the Kinetic Molecular Theory (KMT), which basically states that all molecules existing at temperatures above absolute zero are in motion and must perforce possess kinetic energy, the energy of motion manifested as force. If force equals mass times acceleration (F = MA) and the mass is constant due to the molecular structure, then the only variable is acceleration, which translates into motion and thence to velocity. In the solid state the molecules merely vibrate within a rigid matrix or lattice. As the temperature is raised, the increase in energy causes the magnitude of this vibration to increase, ultimately to the point at which the molecules are able to break out of the matrix and begin to move freely within the mass of material.
The solid has become a liquid and the temperature at which this change occurs is known as the melting point (or freezing point if the temperature is decreasing and the liquid becomes solid). Beyond this point, it becomes necessary to confine the material in a container such as a flask, drum or tank. Every molecule within the body of the liquid is acted upon by forces from every direction and these cancel each other out. However, a certain number of molecules form the surface of the liquid, and these are acted upon by the attractive forces from within the liquid without having any opposite forces to counteract them. Hence the liquid tends to pull into itself those molecules in the surface. The result is the assumption of spherically shaped drops when a liquid is unrestrained, since this geometric configuration provides the maximum possible volume within the smallest surface area. In the case of a liquid contained in a vessel, such as a bowl, the unbalanced forces acting on the surface molecules create what is known as the surface tension membrane (STM). This is an actual membrane and, though it is fragile, it is self-sustaining; its existence can easily be confirmed by the following demonstration: Water is placed in a tall glass container, filling it two or three inches. Next, a like amount of salad oil is added down the side of the container, so as not to disturb the water. The container is allowed to stand until any bubbles or globules of oil have disappeared. Sprinkle a small amount of coarse ground pepper onto the top of the salad oil. The pepper will slowly sink to the bottom of the oil layer until it comes to rest on the STM, the interface between the oil and the water, where it remains suspended. If one now takes a stirring rod and breaks up the STM, the pepper sinks until it reaches the bottom of the container. Meanwhile, the STM recreates itself and the demonstration can be repeated.
Molecules within a liquid are in motion and have velocity that can be measured and calculated. However, the result gives a value for the velocity of the average molecule. Some molecules move at a slower rate and some "speed demons" exceed it. They will move randomly in all directions until they collide with other molecules, the sides of the container or the STM. Some of these "speed demons" move so fast that their momentum causes them to break through the STM and become free floating particles of gas or vapor in the atmosphere near the liquid. As they leave the liquid, the fast-moving molecules carry their energy with them, thereby reducing the energy content of the remaining liquid. When the energy content is reduced, the temperature is lowered and the number of molecules leaving the liquid is reduced, a phenomenon known as "auto refrigeration". People think of vapors as coming from liquids, as with steam coming from hot water or vapor coming from liquefied petroleum gas (LPG). People think of chlorine, carbon dioxide and oxygen as gases. In actuality, these are one and the same. Steam is water vapor or gaseous water. Propane gas is really the vapor arising from the liquid when the pressure is reduced. Chlorine "gas" is the vapor arising from liquid chlorine, as are carbon dioxide and oxygen. For the purposes of this discussion the term "gas" includes those materials commonly thought of as "vapors." Whether a substance exists as a gas (vapor) or a liquid (and occasionally a solid) depends upon the ambient conditions. Two parameters determine whether we have a gas (vapor) or a liquid: temperature and pressure. Charles' Law tells us that as the absolute (Kelvin) temperature of a gas rises, the volume increases proportionately, assuming constant pressure. Boyle's Law states that as pressure increases the volume decreases, this time assuming constant temperature.
Since in the practical world both temperature and pressure are usually variables, both of these laws are, for computational purposes, combined into what is known as the Universal Gas Law, which multiplies the original volume (V1) by a temperature factor (T2 / T1) and then by a pressure factor (P1 / P2) to arrive at the final volume of the gas. The working formula for this law is: V2 = V1 × (T2 / T1) × (P1 / P2). All temperatures must be in kelvin (°C + 273), while pressures must be stated in a single consistent unit such as millimeters of mercury (mmHg), atmospheres or bars. According to the formula, as the absolute temperature approaches zero, the volume of the gas or vapor also approaches zero. In other words, the matter would simply cease to exist. Obviously this doesn’t happen. Instead, the gas or vapor undergoes a phase change and becomes a liquid. For each gas or vapor there is a unique temperature above which the material cannot exist in the liquid phase no matter how much pressure is applied to the system. This is known as the critical temperature, and the pressure required to liquefy the gas at this temperature is known as the critical pressure. In short, a substance cannot exist as a liquid above this point. All of the properties of gases discussed here have a major bearing on the safety of those responding to an incident involving significant quantities of gaseous material, as well as on the outcome of their response effort. Because of this, there are a number of things that should be kept in mind when responding. 1. Gases may be involved in any incident due to a chemical reaction between a commodity and the environment or other involved materials. A case in point: the lading of a car carrying calcium carbide (CaC2) appears to be inert, but when brought in contact with water it gives off large quantities of acetylene (C2H2), a highly flammable gas with an extremely wide flammable range.
Acids such as hydrochloric (HCl) may react with roadway material to give off CO2 and/or other gases such as hydrogen sulfide (H2S) or hydrogen cyanide (HCN), both of which are highly toxic. Therefore, all incident sites must be checked constantly for the presence of unsuspected gases or vapors. One cannot depend on one’s nose. A properly fitted SCBA protects against toxic vapors but blocks the sense of smell and does not protect against flammables. Another source of gaseous atmospheric contaminants is the products of incomplete combustion. Any fire is sure to involve some type of carbonaceous material. If the temperature of the fire is high enough and there is no shortage of oxygen, combustion of carbon will be complete, giving rise to carbon dioxide, CO2 (C + O2 → CO2). However, suppression efforts decrease the combustion temperature and possibly cause a shortage of oxygen. In this case, the combustion is incomplete, resulting in the production of carbon monoxide (CO) according to the following equation: 2C + O2 → 2CO. This is the old producer gas reaction, which has been used to produce fuel for internal combustion engines. The presence of nitrogen, sulfur or chlorine in the combustion mixture raises the possibility of the generation of ammonia (NH3), hydrogen cyanide (HCN), phosgene (COCl2) and/or hydrogen sulfide (H2S), all of which are toxic. If there is plenty of oxygen and the temperature of combustion is high enough, all of these will be consumed by the fire. Therefore, there may be times when allowing a fire to burn itself out is a valid option. 2. Gases can, and will, move. A cloud of gas may be invisible, but it is there and it can cause damage. Constant monitoring of the atmosphere is a must, as is accurate knowledge of the weather conditions on site, not at the weather service 30 miles away. Wind shifts must be considered as to their effect on the conditions at the work site. 3. Gases have mass; some are heavier than air and some are lighter.
Natural gas, which is made up chiefly of methane (CH4), is slightly lighter than air and will therefore rise from ground level. LPG or propane (C3H8) is heavier than air and tends to sink into low places and remain close to the release point. This makes a big difference in case of a gas leak in proximity to a storm sewer, underpass or other low-lying appurtenance. Gases can cause death by displacing the atmosphere and its supply of oxygen. For this reason even a relatively harmless gas such as CO2 or nitrogen (N2) can become deadly if an unsuspecting worker walks into a cloud of it and cannot retreat. The gas will not kill him, but the lack of oxygen certainly can. 4. The possibility of a gas being present makes it mandatory that monitoring activities be initiated upon arrival at the incident site and carried out continuously until the response and final cleanup are complete. This monitoring must cover the entire site — not just the point of entry. We must know what gas is present and, if possible, how much. The monitoring must be specific. If an instrument that is specific for a particular gas is not available, find a retired chemist and ask him for a “quick, cheap and dirty” on-site test. An example of this is the bottle of lead acetate that sewer workers carried in bygone days. Before entering a manhole, they would soak a piece of paper towel in the lead acetate solution and lower it into the manhole on a string equipped with a clothespin. When they pulled it back up they looked for the tell-tale black stain on the paper. If it was there, they knew they were dealing with hydrogen sulfide (H2S) and strict precautions were mandatory.
A rag soaked with ammonia water emits a white cloud of ammonium chloride (NH4Cl) in the presence of hydrochloric acid (HCl), and the reverse is also true in the presence of ammonia (NH3). There are literally dozens of such tests that were quite common in the pre-electronic days, and they are still useful in cases where an expensive and maintenance-intensive instrument is not “cost effective” or is unavailable. 5. Gases are temperature sensitive. As a gas leaves the containment system through a leak or rupture, it carries energy with it. The result is a cooling of the system through auto-refrigeration. This effect may be great enough to reduce the flow of vapor and facilitate plugging of the leak. Lower temperatures, such as those encountered in northern latitudes during the winter months, will reduce the pressure of the gas within the containment vessel. This author once encountered a student from the Alaska Railroad who recounted an incident where LPG had been transferred by means of a trash pump. His fellow students thought he was “blowing smoke” until he revealed that the temperature at the time of the incident was fifty degrees below zero (−50 °F). At that temperature, one could quite easily carry LPG in a bucket. As gases cool, their density increases. They become heavier. As a result they may hug the ground until they absorb enough heat to rise. Again, constant and continuous monitoring is crucial to the safety of all concerned. The converse is also true, and gases normally heavier than air become lighter if they are heated in the course of the incident. 6. Finally, one must not forget aerosols, those clouds of finely divided particles of liquid that are suspended in the atmosphere and act much like true gases. We see these in consumer products such as spray paints, household deodorants, insect sprays and hair spray. The nefarious mustard gas (C4H8Cl2S) used in World War I was, in reality, an aerosol. As a weapon it was very effective.
The liquid adhered to the skin and continued to corrode rather than dissipate as would a true gas. The after-effects of exposure were horrendous. Aerosols can be inadvertently produced when materials are forced through a small opening, such as a leak in a containment vessel, and the possibility of their presence must not be overlooked. Aerosols tend to clog analytical instruments and render them insensitive unless adequate but inert filtration is provided for the sampling stream. This need for filtration carries with it the risk that the filtering agent may become contaminated with the aerosol and produce false positives. To prevent this, one should introduce a clean sample of air (perhaps from an SCBA) into the instrument after obtaining a positive reading. If the instrument is still showing a positive reading, the filter should be changed and the test repeated. It is also possible that the tubes, sensors and other components within the instrument could become contaminated. Should this happen, it requires complete disassembly, cleaning and purging of the instrument, an expensive and time-consuming procedure normally requiring return of the instrument to a qualified service center or the manufacturer. To prevent such an instrument failure, the filter should be located as near to the intake of the sampling line as possible. In the event of contamination, only a short length of hose needs to be replaced. The line between vapors and aerosols can become blurred. An example is the carburetor on a gasoline engine. Is the fuel fed to the cylinders a true gas or an aerosol? In actuality it can be both, and this makes the selection of a filtering agent difficult. The use of an agent designed for the particular instrument in question and provided by the manufacturer is highly recommended. In short, gases, vapors and aerosols can be dangerous. They can “sneak up” on the unwary responder when they are least expected and cause great harm.
The best defense is a good offense, and eternal vigilance is the price of safety.
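The combined (universal) gas law worked through earlier in this article lends itself to a quick numerical sketch. Here is a minimal Python example; the function name is illustrative, and 273.15 is used for the Celsius-to-kelvin offset rather than the rounded 273 in the text:

```python
def combined_gas_law_volume(v1, t1_c, t2_c, p1, p2):
    """Return V2 = V1 * (T2 / T1) * (P1 / P2).

    Temperatures are given in degrees Celsius and converted to
    kelvin (absolute) internally. Pressures may be in any single
    consistent unit (mmHg, atmospheres or bars), since only their
    ratio matters.
    """
    t1_k = t1_c + 273.15
    t2_k = t2_c + 273.15
    return v1 * (t2_k / t1_k) * (p1 / p2)

# A gas occupying 10.0 L at 25 °C and 1.0 atm, heated to 50 °C
# while the pressure rises to 2.0 atm: the temperature factor
# expands it slightly, the pressure factor halves it.
v2 = combined_gas_law_volume(10.0, 25.0, 50.0, 1.0, 2.0)
```

Note that the temperatures must be absolute: using Celsius directly would make the temperature factor wildly wrong near 0 °C.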
Probability (part 1): What probability is.
- Good morning or evening or whatever it is where you are, - wherever you're happening to watch this movie. - Anyway, I've been requested to do a playlist on probability, - and I think that's an excellent idea, so I will start doing - a playlist on probability. - So let's do a playlist on probability. - It's a good place to start probability. - I don't do videos on spelling. - Probability: so what is it? - And I think all of us have kind of a sense of - it, very informally. - And as far as I can tell there actually isn't a formal - definition of what a probability is. - There are several almost formal competing definitions. - So just in our everyday life, you know if the weather man - says there's a 50% chance of rain the next - day, he's essentially giving a probability. - He's saying that-- well, there's a couple of ways - that you could interpret the 50% probability. - It could be that if 100% was that he is sure there's rain - tomorrow and 0% is that he is sure that there's no rain - tomorrow, that 50% kind of means well, he's kind of - neutral between those two possibilities. - So one definition could be how strongly you believe. - Actually, there's a whole school of probability where - they view probability like this, and it's called the - Bayesian, and we'll go into that more. - Actually, when we do easy problems, all of these things - kind of are the same thing. - But later on we'll see what the difference is. - Another way of interpreting this and this is kind of the - frequentist school of thought, is if I were to have the data - that this-- or if the weather man had this data that he has - right now as far as where the clouds are and what the - barometer reads and where the moon is and all of the data.
- Given all of the data that he has, when he has that same - exact data a hundred times, fifty of those times or 50% of those - times there will be rain. - So you can almost view it as, given the data that he has, if - you had that data a hundred times or if you were able to - run this experiment a hundred times-- although that's very - unlikely that you would have that exact same number - of data points. - You know, whether Mars is in the right place and the - sun is flaring and all. - It's very unlikely you have those exact same-- you know, - the butterfly effect. - One butterfly can affect the wind patterns across the ocean. - So it's very unlikely that you could perform that experiment a - hundred times, but what the weather man could be saying is - well, if I did have data identical to this, a hundred - times, 50 of those times or 50% of the time we would - have rain the next day. - That's 50% of experiments with same-- I guess you could say - measurable initial conditions-- I'm kind of doing this on the - fly so don't take this as gospel. - But I think it'll give you the sense. - With the same initial conditions, 50% would result in rain. - They're almost the same thing, but we'll see later that this - frequentist-- I tend to view the world kind of like this, - but there are a lot of circumstances where you - really-- it's hard to say that you could perform that same - exact experiment over again. - For example, if someone said in 2003 there's a 50% chance or - there's an 80% chance that Saddam Hussein has weapons of - mass destruction, that I think would-- and that would - be a probability. - You know, you'd have these CIA analysts who aren't being - influenced by their bosses saying hey, after all the data - we see, we can't be sure, but we think there's an 80% chance. - They would be in this camp, right? - Because you really couldn't perform that experiment - a hundred times.
- There haven't been a hundred times or a thousand times or a - large number of times where you had that exact same set of - circumstances where you had a guy with the big mustache in the - Middle East kind of giving the run-around to - weapons inspectors. - Anyway, so let's move on. - This is very subtle, but it gives you the difference - between these two things. - It's quite subtle, but I think it gives you a nice framework - for what probability is. - So let's just do a little bit of notation. - I actually looked it up on Wikipedia and they had one - definition-- maybe it wasn't Wikipedia, it was - maybe another website. - And actually, I think you do see this definition a - lot where they say the probability-- sometimes it's - written as probability of a. - Sometimes it's just written as P of a. - So the probability of a occurring is equal to the - events in which a is true over total number of events. - And this, for the most part, can be a good definition, but - I'll show you one place where I think it's a little - bit more squirmy. - So if I told you that I'm going to flip a coin and-- - actually, even better. - Let's say, let's roll a dice. - And let's say I say the probability-- I'm going - straight to more difficult things. - So say the probability of an even number. - Well, let's use this definition that they gave. - Well, what's the probability that this event is true? - Well, let's see, what are all the numbers I could get? - I could get a 1, 2, 3, 4, 5, 6. - This is just a normal die, it's not one of those Dungeons - and Dragons dice. - So what are the number of events where we get an even - number, where this is true, where even is true? - Let's see. - 2, 4, 6. - Those are all the situations where we get even as true. - So there are 3 where even is true. - And then, what is the total number of events? - Well, we could get 1 of 6 numbers, so there are 6 total. - And that equals 1/2. - And that also equals 50%, right?
- We know how to convert fractions to percentages. - And this is right, this is completely right. - But the only time where you can really apply this and most of - what you'll do in school and things you can apply this, but - this assumes that all of the events are equally - likely to occur. - You could have had a dice or a die-- I forgot how to say - the plural or the singular. - You could have that situation where maybe the six-sided is - weighted a little bit more. - You know, someone's handed it down so it's more likely - to have a 3 or something. - And in that case, you wouldn't be able to use this definition. - So I'm going to modify this definition, although - I don't know if it's traditionally modified. - This is one possible, events in which a is true divided - by-- well, let's say, equally probable. - Equally probable events in which a is true divided by - equally probable total events. - So in order for this to hold true each of these six - circumstances have to have exact equal chance - of occurring. - And we're going to do maybe in this video, actually - probably not in this, I only have 3 minutes left. - But in this series I'll show you situations where we'll - have an unfair dice or die or we'll have a set of - circumstances where all of-- each of the total number of events - they're not equally probable. - So that's why I want you to become a little bit - wary of this situation. - So with that said, let's do a couple of probability problems - that maybe give you a little bit more intuition for-- - whoops-- for what's going on here in the world - of probability. - So if I'm flipping a coin and I said, well, what's the - probability of heads? - That's pretty easy. - We could use that definition and it's a completely - fair coin. - We could use that definition and say, well, what are the - total number of events where I could get heads - or tails, right? - So there's 2 total events. - And the probability of getting heads, that's - one of the events.
- So there's a 1/2 probability. - The way I like to think of it so we don't have to use that - previous definition is, if I were to conduct this experiment - a hundred times, what percentage of those times am - I likely to get heads? - And then I would say, well, there's 50% of the time - I would get heads. - And the reason why, you know, I could make a symmetry argument - that it's just as likely to go on heads as it is to tails. - There's no reason why I would expect 51 heads or - 49 tails, although that could happen. - But there's no reason I could expect it. - Heads and tails are equally likely. - They're just different words for different sides of a - coin that's equally likely to fall on either side. - Anyway, let's say I'm going to now flip a coin twice. - And it's the same coin. - So I'm going to flip it, and then I'm going to pick it up, - and I'm going to flip it again. - And so what's the probability that I get-- I'll - call it heads, heads. - So that's the probability that I get heads on the first flip - and then heads on the second flip. - Well, look at it this way. - If on the first flip we already know that we have a 50% chance - or 1/2 chance on the first flip, right? - So let's think of it from the frequentist philosophy. - So if I were to do this a hundred times, 50 of the - times I would get heads. - Let's call that on the first flip. - Then of course, 50 of the times I would get tails - on the first flip, right? - Now we're at this state of the universe and now we do the - experiment over again. - So of these 50 times, what percentage of the times is - the next flip going to be heads again? - Well, we could say it's going to be another 50% chance, or - you could say, well, in 50 tries the first one was heads, - and then of those 50, 50% are going to be heads again. - So we get 25%. - I just multiplied these two numbers. - And of course, to get heads and then tails would be 25% chance.
- Heads Heads, heads tails, and then this is tails - heads, tails heads. - I'm getting confused. - Tail heads is 25%. - And then tails tails is 25%. - Anyway, I'm rushing it because I'm 25 seconds over. - I'll continue this in the next video.
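The two answers worked out in the transcript, P(even) = 3/6 = 1/2 for a fair die and P(heads, heads) = 1/4 for two fair coin flips, can be verified by enumerating the equally likely outcomes. A short Python sketch, not part of the original video:

```python
from fractions import Fraction
from itertools import product

# P(even) on a fair six-sided die: count the equally probable
# outcomes in which the event is true, over the total outcomes.
die = [1, 2, 3, 4, 5, 6]
p_even = Fraction(sum(1 for n in die if n % 2 == 0), len(die))

# P(heads then heads) on two fair coin flips: enumerate the four
# equally probable outcomes HH, HT, TH, TT.
flips = list(product("HT", repeat=2))
p_hh = Fraction(sum(1 for f in flips if f == ("H", "H")), len(flips))
```

As the video stresses, this counting definition only works because every outcome listed is equally probable; a weighted die would break it.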
A team of ceramists pinpoints a hitherto unknown internal phase transition. Why does a solid metal that is engineered for ductility become brittle in the presence of certain liquid metal impurities? The phenomenon, known as liquid metal embrittlement, or LME, has baffled metallurgists for a century. Now, ceramics researchers from Lehigh and Clemson University have shed light on LME by obtaining atomic-scale images of unprecedented resolution of the grain boundaries, or internal interfaces, where LME occurs. In doing so, says Martin Harmer, professor of materials science and engineering at Lehigh, the researchers have achieved the first direct observation in a metal system of a bilayer grain boundary phase transition. The study suggests that interior interfaces can undergo transitions similar to the solid-to-liquid and liquid-to-gas phase transitions that occur in larger, “bulk” materials. It also paves the way for scientists to prevent LME by strengthening the chemical bonds of the materials present at grain boundaries. “This gives us a much clearer understanding of the atomic mechanism of LME,” says Harmer, who directs Lehigh’s Center for Advanced Materials and Nanotechnology. “It promises to improve our ability to control and fine-tune the properties of metals and other materials during fabrication.” The researchers reported their findings Sept. 23 in Science. Their study was funded by the U.S. Navy. The group is continuing its work, with a focus on rectifying LME-related problems in metals. The critical grain boundary Harmer became interested in LME after his group in 2006 identified six grain-boundary “complexions,” each with a distinct rate of grain growth, in alumina. The discovery prompted him to seek insight into the embrittlement of metals. Using Lehigh’s JEOL 2200FS aberration-corrected scanning transmission electron microscope (STEM), which has unparalleled imaging capabilities, the group examined a nickel-bismuth alloy. 
They employed high-angle annular dark-field imaging (HAADF), which focuses a beam of electrons only 1 angstrom (0.1 nm) wide on a sample. Previous studies had revealed the existence of four interfacial phases at grain boundaries (GB) in metals. Harmer’s group found two more – a bilayer and a trilayer. “A bilayer had been seen before in a ceramic system,” says Harmer, “but no one had seen such examples of bi- and trilayers in metals.” The aberration-corrected STEM pinpointed a bilayer of bismuth atoms at the grain boundary as the source of a weak atomic-scale bond in the nickel-bismuth alloy. “There is a very strong bond between bismuth and nickel,” says Harmer, “so it had never been clear why the alloy is prone to embrittlement. But the bonds between bismuth atoms are weak. We are the first group to see the formation of the bismuth bilayer that weakens this material.” A comprehensive study Harmer’s group examined 12 independent interfaces and excluded “imaging artifacts” introduced by experimental error or by technology. To avoid distortions that result from projecting a 3-D image onto a 2-D film, they took images at different depths on the sample. “By looking sequentially at these images and their structural thickness,” says Harmer, “we were able to rule out artifacts that give the illusion of a bilayer.” In contrast with previous studies that looked at synthetic bi-crystals, the group examined polycrystalline nickel which resembles industrial materials. “Real grain boundaries are typically less symmetrical and have higher energy than synthetic bicrystals,” says Harmer. The group plans next to experiment with the chemistry of nickel-bismuth GBs to try to produce a more ductile behavior. ”Perhaps combining the bismuth with other elements that bond at the interface will prove effective,” says Harmer.
GREENLAND lost 1500 cubic kilometres of ice between 2000 and 2008, making it responsible for one-sixth of global sea-level rise. Even worse, there are signs that the rate of ice loss is increasing. Michiel van den Broeke of Utrecht University in the Netherlands and colleagues began by modelling the difference in annual snowfall and snowmelt in Greenland between 2003 and 2008 to reveal the net ice loss for each year. They then compared each year's loss with that calculated from readings by the GRACE satellite, which "weighs" the ice sheet by measuring its gravity. The team found that results from the two methods roughly matched and showed that Greenland is losing enough ice to contribute on average 0.46 millimetres per year to global sea-level rise. The loss may be accelerating: since 2006, warm summers have caused levels to rise by 0.75 millimetres per year, though van den Broeke says we ...
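The 0.46 mm/yr figure can be roughly sanity-checked from the 1500 km³ total. A back-of-envelope Python sketch; the ice density and ocean surface area are standard reference values, not taken from the article, and an 8-year span for 2000 to 2008 is assumed:

```python
ICE_DENSITY = 917.0      # kg/m^3, typical glacial ice
WATER_DENSITY = 1000.0   # kg/m^3 (fresh water; sea water is ~1025)
OCEAN_AREA_KM2 = 3.61e8  # global ocean surface area, km^2

ice_km3 = 1500.0  # Greenland's loss over 2000-2008, per the article
water_km3 = ice_km3 * ICE_DENSITY / WATER_DENSITY

# km^3 spread over km^2 gives a depth in km; 1 km = 1e6 mm.
rise_mm = water_km3 / OCEAN_AREA_KM2 * 1e6
rise_mm_per_year = rise_mm / 8.0
```

The result lands within a few hundredths of a millimetre of the quoted 0.46 mm/yr, which is as close as a calculation this crude can be expected to get.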
Date: Saturday, April 20, 2002
How do I tell the sex of my frog or toad? It depends on the species. Males of many species have an enlarged thumb or forefinger, some have enlarged pads on those limbs, or harder spots, used for grasping the female during mating. Other species have differences in the throat sac. There is a slight difference in the sides of the head of bullfrogs. The Handbook of Frogs and Toads, by Wright and Wright, has an excellent page of photographs, if you can find the book.
Update: June 2012
Physical chemist whose investigations of dipole moments, X rays, and light scattering in gases brought him the 1936 Nobel Prize for Chemistry. Debye was a dominant figure in physical chemistry and chemical physics during the first half of the 20th century. Debye's first important research, his dipole moment studies, advanced knowledge of the arrangement of atoms in molecules and of the distances between the atoms. In 1916 he showed that solid substances could be used in powdered form for X-ray study of their crystal structures, thus eliminating the difficult step of first preparing good crystals. Two of his most significant achievements came in 1923, when he and Erich Hückel extended Svante Arrhenius' theory of the dissociation of the positively and negatively charged atoms (ions) of salts in solution, proving that the ionization is complete, not partial. That same year he described the Compton effect, which the American physicist Arthur Holly Compton had discovered shortly before.
Sep 20, 2012, 06:59 PM | #1
Control of RH and Temp of Air
I am trying to provide a humid air stream along some ¼” tubing, where I can roughly control the relative humidity and temperature; this doesn't need to be exact. I already have a compressed air tank, regulator and flow meter to provide the correct flow rate. I need to find out whether bubbling air through water will result in 100% relative humidity, and whether or not this depends on the amount of water it is bubbled through. If so, I can heat the water to get the right amount of moisture into the air at 100% humidity and then heat the air further in the outlet pipe in order to lower the humidity to the level I require. Please excuse my ignorance, as I do not have an extensive knowledge of this subject; any help or guidance you can give me will be greatly appreciated.
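In practice a bubbler only approaches saturation (how close depends on bubble size, water depth and contact time), but assuming the air does leave at roughly 100 % RH at the water temperature, the RH after reheating the stream can be estimated from the ratio of saturation vapor pressures. A sketch in Python using the Magnus approximation; the coefficients are the common Magnus-Tetens values for water, and this is an estimate, not a calibration:

```python
import math

def saturation_vapor_pressure_hpa(temp_c):
    """Magnus-Tetens approximation for water, roughly 0-60 degrees C."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def rh_after_heating(sat_temp_c, outlet_temp_c):
    """RH (%) of air saturated at sat_temp_c, then heated to outlet_temp_c.

    The absolute moisture content is fixed once the air leaves the
    bubbler, so the RH scales with the ratio of saturation pressures
    at the two temperatures.
    """
    e_actual = saturation_vapor_pressure_hpa(sat_temp_c)
    e_sat = saturation_vapor_pressure_hpa(outlet_temp_c)
    return 100.0 * e_actual / e_sat

# Saturate at 30 C in the bubbler, then heat the outlet line to 40 C:
rh = rh_after_heating(30.0, 40.0)
```

This matches the approach described in the post: pick the water temperature to set the absolute moisture content, then reheat the line to dial the RH down to the target.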
To be sure the science is extremely exciting. The Higgs was first proposed in the 1960s and is thought to be the remnant of a ubiquitous interaction common to all objects with mass. The Higgs discovery is solidly grounded in concrete, observable phenomena. It took a colossal new scientific instrument, the Large Hadron Collider, or LHC, at CERN — the European Organization for Nuclear Research in Switzerland — to produce the few hundred examples of this new object thought to be the Higgs. The LHC is an engineering tour de force. Made up of 17 miles of super-conducting magnets, it is designed to capture and accelerate counter-rotating bunches of protons that collide at several locations around the ring. It took two detectors the size of apartment buildings, each with about 100 million sensor elements, to detect the remnants of the Higgs, and to distinguish it from the billions of background processes that could mask it. The human capital needed to plan and build the LHC and the two detectors, ATLAS and CMS, is measured in tens of thousands of person-years. This much-anticipated discovery is at once the culmination of a huge intellectual effort and the beginning of a new field of research. The Higgs is unstable and quickly decays to daughter particles. The study of how and what it decays into will give scientists a glimpse at the way nature works at much higher energies than we have yet to probe. As exciting as this discovery is, and as meaningful as it is to the field of physics, the broader lessons of this human endeavor should not be lost on us. This discovery is the result of a truly worldwide effort. The funding was also global. These days few people are willing to extol the glories and virtues of taxation, but this discovery would not have happened without the taxpayer. In fact, the vast majority of the planet’s taxpayers had skin in the game. The Higgs discovery also represents a triumph of human curiosity. 
Mix in communication challenges from cultural differences, language barriers and the need to work across 24 time zones, and you have a recipe for failure. But somehow even without the profit motive or the need to survive – things that usually cause humans to pull together – the CERN teams succeeded. The Higgs could well be the first science discovery brought about by all of us in the broadest sense, the planet-wide human community. It seems fitting that nature’s secrets are unwrapped by all of us, that we own and enjoy the discovery corporately. Let us hope that this is the first of many such endeavors.
Paul Tipton is a professor of physics at Yale University. He wrote this for the Los Angeles Times.
Solar Storm Dumps Gigawatts into Earth's Upper Atmosphere

March 22, 2012

A recent flurry of eruptions on the sun did more than spark pretty auroras around the poles. NASA-funded researchers say the solar storms of March 8th through 10th dumped enough energy in Earth's upper atmosphere to power every residence in New York City for two years. "This was the biggest dose of heat we've received from a solar storm since 2005," says Martin Mlynczak of NASA Langley Research Center. "It was a big event, and shows how solar activity can directly affect our planet."

Mlynczak is the associate principal investigator for the SABER instrument onboard NASA's TIMED satellite. SABER monitors infrared emissions from Earth's upper atmosphere, in particular from carbon dioxide (CO2) and nitric oxide (NO), two substances that play a key role in the energy balance of air hundreds of km above our planet's surface. "Carbon dioxide and nitric oxide are natural thermostats," explains James Russell of Hampton University, SABER's principal investigator. "When the upper atmosphere (or 'thermosphere') heats up, these molecules try as hard as they can to shed that heat back into space."

That's what happened on March 8th when a coronal mass ejection (CME), propelled in our direction by an X5-class solar flare, hit Earth's magnetic field. (On the "Richter Scale of Solar Flares," X-class flares are the most powerful kind.) Energetic particles rained down on the upper atmosphere, depositing their energy where they hit. The action produced spectacular auroras around the poles and significant upper atmospheric heating all around the globe. "The thermosphere lit up like a Christmas tree," says Russell. "It began to glow intensely at infrared wavelengths as the thermostat effect kicked in." For the three-day period, March 8th through 10th, the thermosphere absorbed 26 billion kWh of energy.
Infrared radiation from CO2 and NO, the two most efficient coolants in the thermosphere, re-radiated 95% of that total back into space. A surge of infrared radiation from nitric oxide molecules on March 8-10, 2012, signals the biggest upper-atmospheric heating event in seven years.

In human terms, this is a lot of energy. According to the New York City mayor's office, an average NY household consumes just under 4700 kWh annually. This means the geomagnetic storm dumped enough energy into the atmosphere to power every home in the Big Apple for two years. "Unfortunately, there's no practical way to harness this kind of energy," says Mlynczak. "It's so diffuse and out of reach high above Earth's surface. Plus, the majority of it has been sent back into space by the action of CO2 and NO."

During the heating impulse, the thermosphere puffed up like a marshmallow held over a campfire, temporarily increasing the drag on low-orbiting satellites. This is both good and bad. On the one hand, extra drag helps clear space junk out of Earth orbit. On the other hand, it decreases the lifetime of useful satellites by bringing them closer to the day of re-entry.

The storm is over now, but Russell and Mlynczak expect more to come. "We're just emerging from a deep solar minimum," says Russell. "The solar cycle is gaining strength with a maximum expected in 2013." More sunspots flinging more CMEs toward Earth adds up to more opportunities for SABER to study the heating effect of solar storms. "This is a new frontier in the sun-Earth connection," says Mlynczak, "and the data we're collecting are unprecedented."
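The household arithmetic quoted above is easy to sanity-check. The article gives the storm energy (26 billion kWh) and the per-household consumption (4700 kWh/year); the number of New York City households is not stated in the article, so the figure of roughly 3 million used below is an outside assumption:

```python
# Energy deposited in the thermosphere, March 8-10, 2012 (from the article).
storm_energy_kwh = 26e9
# Average annual consumption of a NYC household (from the article).
household_kwh_per_year = 4700
# Approximate number of NYC households -- an assumption, not from the article.
nyc_households = 3.0e6

household_years = storm_energy_kwh / household_kwh_per_year
years_of_nyc_power = household_years / nyc_households
print(round(years_of_nyc_power, 1))  # → 1.8, i.e. roughly the "two years" claimed
```

With those inputs the storm works out to about 5.5 million household-years of electricity, consistent with the article's "every home in the Big Apple for two years."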
Carolyn Porco

Carolyn, leader of the imaging science team on the Cassini mission, speaking at Google Tech Talks on May 23, 2007.

A glistening spaceship, with seven lonely years and billions of miles behind it, glides into orbit around a ringed, softly-hued planet. A flying-saucer shaped machine descends through a hazy atmosphere and lands on the surface of an alien moon, ten times farther from the Sun than the Earth. Fantastic though they seem, these visions are not a dream. For seven years, the Cassini spacecraft and its Huygens probe traveled invisible interplanetary roads to the place we call Saturn. Their successful entry into orbit a thousand days ago, the mythic landing of Huygens on the cold, dark equatorial plains of Titan, and Cassini's subsequent explorations of the saturnian environment are already the stuff of legend. What they have shown us thus far, and the images they have collected, are being closely examined in the pursuit of precise scientific information on the nature of this very alien planetary system. This presentation will highlight the findings returned by these emissaries from Earth to the enchanting realm of Saturn.
Last year, astronomers saw the violent death throes of a star as it was literally torn apart by a black hole (see here, and links within). And now, they've seen it again: observations across the electromagnetic spectrum caught another star that wandered too close to a supermassive black hole, and suffered the ultimate fate.

These observations show the before-and-after (left versus right) of the event. The top two are from GALEX, a satellite that observes the skies in the ultraviolet, and the bottom two from Pan-STARRS1, a powerful telescope (located on which mountain, you ask? Why, Haleakala in Hawaii, of course) that scans the entire night sky looking for transients, things that change brightness. The light from the star's violent demise reached us in June of 2010.

The event happened in the heart of a distant galaxy, 2.7 billion light years away. At the center of that galaxy is a black hole with millions of times the Sun's mass, comparable to the black hole in the center of our own Milky Way galaxy. The star apparently orbited the black hole in an elliptical orbit. Over millions or billions of years, the star evolved, and turned into a red giant. Over time, its orbit tightened, and one day it got too close. The enormous tides of the black hole tore the star apart.

The flare happened when the stellar material spiraled into the hole. It formed a flattened disk right before the Ultimate Plunge, which got very hot and blasted out high-energy light — the ultraviolet light from this galaxy flared 350 times brighter than it was before! Some of the material from the star was also flung away into space. Astronomers put together a nifty video simulating what happened:
If you're a long-time reader of this site, you're probably already aware of the existence of tardigrades or water bears, microscopic stumpy-legged invertebrates. Previous posts on Tardigrada have given an overview of the main subgroups of tardigrades, and suggested how you might find your own specimens. The next logical step, I suppose, would be to say a few things about tardigrade ecology, and for that I shall draw heavily from the excellent reviews of Nelson & Marley (2000) and Nelson (2002). Tardigrades may live in salt water, fresh water or terrestrially among mosses and leaf litter. However, because all tardigrades require at least a film of water to live in, the boundary between freshwater and terrestrial species is a trifle blurry and many species can be found in both. Tardigrades feed on plants and algae; their mouthparts have a piercing stylus through which they suck the cytoplasm out of cells. Different techniques are used for collecting marine and limno-terrestrial species, and I mention that solely because it gives me an opportunity to note that one of the methods for collecting marine tardigrades (and other sand-dwelling meiofauna) involves sieving material through a fine mesh net referred to as "Higgins' mermaid bra" (or, depending on author, "Gwen's mermaid bra", as it was Mrs Higgins who invented the tool used by her husband). In one of my earlier posts, I referred to the well-known ability of tardigrades to form resistant tuns when exposed to unfavorable conditions, a process called cryptobiosis. What I did not explain at that time was that five different types of cryptobiosis have been identified in tardigrades: encystment (production of a dormant phase without significant water loss), anoxybiosis (resistance to low oxygen levels), cryobiosis (resistance to freezing temperatures), osmobiosis (resistance to elevated salinity) and anhydrobiosis (resistance to desiccation). 
Not all tardigrades share all five resistances - for instance, anhydrobiosis (the best-known form) is only found among terrestrial tardigrades - and different species will have different degrees of resistance. Much has been made of the resilience of at least some tardigrade tuns, such as their ability to survive immersion for up to eight hours in liquid helium at -272°C (Rebecchi et al., 2007; for comparison, absolute zero is calculated to be -273.15°C) and even to survive exposure to the vacuum of space (Jönsson et al., 2008). However, the often-repeated claim that tardigrade tuns can survive for more than one hundred years seems to be unsupported (Jönsson & Bertolani, 2001, reviewed the 1948 report generally cited in support of this claim and found that the tuns tested in that report in fact failed to revive); tuns have not yet been definitely shown to survive for more than ten years. Cryobiosis, the ability to withstand freezing, allows tardigrades to inhabit cryoconite holes like the one shown above in a photo from here. Cryoconite holes develop when darkly-coloured dust accumulates in patches on a sheet of ice; the increased heat absorption by the dark dust melts the surrounding ice, forming a small patch of liquid water. This water may then become home to bacteria, algae and other microscopic organisms released by the melting ice - a self-contained microscopic ecosystem where a nematode may be the most fearsome predator in town. The cryoconite hole may freeze up again when the winter comes, of course, but its inhabitants can wait in the ice for the sun to come again. Jönsson, K. I., & R. Bertolani. 2001. Facts and fiction about long-term survival in tardigrades. Journal of Zoology 255 (1): 121-123. Jönsson, K. I., E. Rabbow, R. O. Schill, M. Harms-Ringdahl & P. Rettberg. 2008. Tardigrades survive exposure to space in low Earth orbit. Current Biology 18 (17): R729-R731. Nelson, D. R. 2002. Current status of the Tardigrada: evolution and ecology. 
Integrative and Comparative Biology 42 (3): 652-659.
Nelson, D. R., & N. J. Marley. 2000. The biology and ecology of lotic Tardigrada. Freshwater Biology 44 (1): 93-108.
Rebecchi, L., T. Altiero & R. Guidetti. 2007. Anhydrobiosis: the extreme limit of desiccation tolerance. Invertebrate Survival Journal 4: 65-81.
Sieve of Eratosthenes

In mathematics, the sieve of Eratosthenes (Greek: κόσκινον Ἐρατοσθένους), one of a number of prime number sieves, is a simple, ancient algorithm for finding all prime numbers up to any given limit. It does so by iteratively marking as composite (i.e. not prime) the multiples of each prime, starting with the multiples of 2. The multiples of a given prime are generated starting from that prime, as a sequence of numbers with the same difference, equal to that prime, between consecutive numbers. This is the sieve's key distinction from using trial division to sequentially test each candidate number for divisibility by each prime.

The sieve of Eratosthenes is one of the most efficient ways to find all of the smaller primes (below 10 million or so). It is named after Eratosthenes of Cyrene, a Greek mathematician; although none of his works have survived, the sieve was described and attributed to Eratosthenes in the Introduction to Arithmetic by Nicomachus.

To find all the prime numbers less than or equal to a given integer n by Eratosthenes' method:
- Create a list of consecutive integers from 2 to n: (2, 3, 4, ..., n).
- Initially, let p equal 2, the first prime number.
- Starting from p, count up in increments of p and mark each of these numbers greater than p itself in the list. These will be multiples of p: 2p, 3p, 4p, etc.; note that some of them may have already been marked.
- Find the first number greater than p in the list that is not marked. If there was no such number, stop. Otherwise, let p now equal this number (which is the next prime), and repeat from step 3.

When the algorithm terminates, all the numbers in the list that are not marked are prime. The main idea here is that every value for p is prime, because we have already marked all the multiples of the numbers less than p. As a refinement, it is sufficient to mark the numbers in step 3 starting from p², as all the smaller multiples of p will have already been marked at that point.
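The steps above, including the p² refinement, translate directly into a short program. This is a minimal Python sketch, not code from the original article:

```python
def sieve(n):
    """Return all primes <= n using the sieve of Eratosthenes."""
    if n < 2:
        return []
    # A[i] stays True while i is still presumed prime (steps 1-2).
    A = [True] * (n + 1)
    A[0] = A[1] = False
    p = 2
    while p * p <= n:  # once p^2 > n, all composites are already marked
        if A[p]:
            # Step 3 with the refinement: start marking at p*p, since
            # smaller multiples of p were marked by smaller primes.
            for j in range(p * p, n + 1, p):
                A[j] = False
        p += 1  # step 4: advance to the next unmarked number
    return [i for i, is_prime in enumerate(A) if is_prime]

print(sieve(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```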
This means that the algorithm is allowed to terminate in step 4 when p² is greater than n. Another refinement is to initially list odd numbers only, (3, 5, ..., n), and count up using an increment of 2p in step 3, thus marking only odd multiples of p greater than p itself. This actually appears in the original algorithm. This can be generalized with wheel factorization, forming the initial list only from numbers coprime with the first few primes and not just from odds, i.e. numbers coprime with 2.

An incremental formulation of the sieve generates primes indefinitely (i.e. without an upper bound) by interleaving the generation of primes with the generation of their multiples (so that primes can be found in gaps between the multiples), where the multiples of each prime p are generated directly, by counting up from the square of the prime in increments of p (or 2p for odd primes).

Trial division can be used to produce primes by filtering out the composites found by testing each candidate number for divisibility by its preceding primes. It is often confused with the sieve of Eratosthenes, although the latter directly generates the composites instead of testing for them. Trial division has worse theoretical complexity than that of the sieve of Eratosthenes in generating ranges of primes. When testing each candidate number, the optimal trial division algorithm uses just those prime numbers not exceeding its square root. The widely known 1975 functional code by David Turner is often presented as an example of the sieve of Eratosthenes but is actually a sub-optimal trial division algorithm.

To find all the prime numbers less than or equal to 30, proceed as follows. First generate a list of integers from 2 to 30:

2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30

The first number in the list is 2; cross out every 2nd number in the list after it (by counting up in increments of 2), i.e. all the multiples of 2, leaving:

2 3 5 7 9 11 13 15 17 19 21 23 25 27 29

The next number in the list after 2 is 3; cross out every 3rd number in the list after it (by counting up in increments of 3), i.e. all the multiples of 3, leaving:

2 3 5 7 11 13 17 19 23 25 29

The next number not yet crossed out after 3 is 5; cross out every 5th number in the list after it (by counting up in increments of 5), i.e. all the multiples of 5, leaving:

2 3 5 7 11 13 17 19 23 29

The next number not yet crossed out after 5 is 7; the next step would be to cross out every 7th number in the list after it, but they are all already crossed out at this point, as these numbers (14, 21, 28) are also multiples of smaller primes; since 7 × 7 = 49 is greater than 30, the algorithm can stop. The numbers left not crossed out in the list at this point are all the prime numbers below 30:

2 3 5 7 11 13 17 19 23 29

The segmented version of the sieve of Eratosthenes, with basic optimizations, uses O(n log log n) operations and O(√n) bits of memory. In pseudocode:

Input: an integer n > 1
Let A be an array of Boolean values, indexed by integers 2 to n, initially all set to true.
for i = 2, 3, 4, ..., √n:
    if A[i] is true:
        for j = i², i²+i, i²+2i, ..., n:
            A[j] := false
Now all i such that A[i] is true are prime.

Large ranges may not fit entirely in memory. In these cases it is necessary to use a segmented sieve where only portions of the range are sieved at a time. For ranges with upper limit n so large that the sieving primes below √n, as required by the sieve of Eratosthenes, cannot fit in memory, a slower but much more space-efficient sieve like that of Sorenson can be used instead.

Euler's proof of the zeta product formula contains a version of the sieve of Eratosthenes in which each composite number is eliminated exactly once. It, too, starts with a list of numbers from 2 to n in order.
On each step the first element is identified as the next prime and the results of multiplying this prime with each element of the list are marked in the list for subsequent deletion. The initial element and the marked elements are then removed from the working sequence, and the process is repeated:

(3) 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 ...
(5) 7 11 13 17 19 23 25 29 31 35 37 41 43 47 49 53 55 59 61 65 67 71 73 77 79 ...
(7) 11 13 17 19 23 29 31 37 41 43 47 49 53 59 61 67 71 73 77 79 ...
(11) 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 ...
[...]

Here the example is shown starting from odds, after the first step of the algorithm. Thus on the kth step all the remaining multiples of the kth prime are removed from the list, which will thereafter contain only numbers coprime with the first k primes (cf. wheel factorization), so that the list will start with the next prime, and all the numbers in it below the square of its first element will be prime too.

Thus when generating a bounded sequence of primes, when the next identified prime exceeds the square root of the upper limit, all the remaining numbers in the list are prime. In the example given above that is achieved on identifying 11 as the next prime, giving a list of all primes less than or equal to 80. Note that numbers that will be discarded by some step are still used while marking the multiples, e.g. for the multiples of 3 it is 3 · 3 = 9, 3 · 5 = 15, 3 · 7 = 21, 3 · 9 = 27, ..., 3 · 15 = 45, ... .

- Horsley, Rev. Samuel, F. R. S., "Κόσκινον Ἐρατοσθένους or, The Sieve of Eratosthenes. Being an account of his method of finding all the Prime Numbers", Philosophical Transactions (1683–1775), Vol. 62 (1772), pp. 327–347.
- O'Neill, Melissa E., "The Genuine Sieve of Eratosthenes", Journal of Functional Programming, published online by Cambridge University Press 9 October 2008, doi:10.1017/S0956796808007004, pp. 10, 11 (contains two incremental sieves in Haskell: a priority-queue–based one by O'Neill and a list-based one by Richard Bird).
- The Prime Glossary: "The Sieve of Eratosthenes", http://primes.utm.edu/glossary/page.php?sort=SieveOfEratosthenes, retrieved 16 November 2008.
- Nicomachus, Introduction to Arithmetic, I, 13.
- Clocksin, William F., Christopher S. Mellish, Programming in Prolog, 1981, p. 174. ISBN 3-540-11046-1.
- Merritt, Doug (December 14, 2008). "Sieve Of Eratosthenes". Retrieved 2009-03-26.
- Nykänen, Matti (October 26, 2007). "An Introduction to Functional Programming with the Programming Language Haskell". Retrieved 2009-03-26.
- Runciman, Colin, "Functional pearl: Lazy wheel sieves and spirals of primes", Journal of Functional Programming 7 (2), March 1997.
- Turner, David A. SASL language manual. Tech. rept. CS/75/1. Department of Computational Science, University of St. Andrews, 1975. (sieve (p:xs) = p : sieve [x | x <- xs, rem x p > 0]; primes = sieve [2..])
- Pritchard, Paul, "Linear prime-number sieves: a family tree", Sci. Comput. Programming 9:1 (1987), pp. 17–35.
- Atkin, A. O. L., and D. J. Bernstein, "Prime sieves using binary quadratic forms", Mathematics of Computation 73 (2004), pp. 1023–1030.
- Sedgewick, Robert (1992). Algorithms in C++. Addison-Wesley. ISBN 0-201-51059-6, p. 16.
- Sorenson, Jonathan, An Introduction to Prime Number Sieves, Computer Sciences Technical Report #909, Department of Computer Sciences, University of Wisconsin-Madison, January 2, 1990 (shows the optimization of starting from squares, and thus using only the numbers whose square is below the upper limit).
- Crandall & Pomerance, Prime Numbers: A Computational Perspective, second edition, Springer: 2005, pp. 121–24.
- Sorenson, J., The pseudosquares prime sieve, Proceedings of the 7th International Symposium on Algorithmic Number Theory (ANTS-VII, 2006).
- Morehead, J. C., "Extension of the Sieve of Eratosthenes to arithmetical progressions and applications", Annals of Mathematics, Second Series 10:2 (1909), pp. 88–104.
- Eratosthenes, sieve of, at Encyclopaedia of Mathematics.
- Sieve of Eratosthenes by George Beck, Wolfram Demonstrations Project.
- Sieve of Eratosthenes in Haskell.
- Sieve of Eratosthenes algorithm illustrated and explained. Java and C++ implementations.
- A related sieve written in x86 assembly language.
- A highly optimized Sieve of Eratosthenes in C.
- A parallel implementation in C#.
- SieveOfEratosthenesInManyProgrammingLanguages, c2 wiki page.
- The Art of Prime Sieving: Sieve of Eratosthenes in C from 1998, with nice features and algorithmic tricks explained.
Climate change threatens endangered freshwater turtle

The Mary river turtle (Elusor macrurus), which is restricted to only one river system in Australia, will suffer from multiple problems if temperatures predicted under climate change are reached, researchers from the University of Queensland have shown. The scientists, who are presenting their work at the Society for Experimental Biology Annual conference in Glasgow on 3rd July 2011, incubated turtle eggs at 26, 29 and 32°C. Young turtles which developed under the highest temperature showed reduced swimming ability and a preference for shallower waters. This combination of physiological and behavioural effects can have dual consequences for survival chances.

"Deeper water not only provides the young turtles with protection from predators but is also where their food supply is found," explains PhD researcher Mariana Micheli-Campbell. "Young turtles with poor swimming abilities which linger near the surface are unable to feed and are very likely to get picked off by birds. These results are worrying as climate change predictions for the area suggest that nest temperatures of 32°C are likely to be reached in the coming decades."

The Mary river turtle is already listed as endangered by the IUCN Red List and the population has suffered a large decline over the past decades. Some factors known to have affected the population include collection of the eggs for the pet trade and introduced predators such as foxes and dogs. "Whether climate change has already contributed to the decline is not clear," says Ms. Micheli-Campbell. "But these results show it may be a danger to this species in the future."
These findings may be shared by other species of turtle, but the outcome is likely to be more extreme in the Mary River turtle as climatic warming is particularly pronounced for this area and the relatively shallow nests of freshwater turtles are more susceptible to changes in ambient temperature than the deeper nests of sea turtles. Further research is needed to understand the effects of climate change on incubation in other turtles.

Source: Society for Experimental Biology
Need HELP on THIS PLEASE!!!!!! I'm stuck and I hate this. Someone help me out please.

Write a graphics program that asks the user to specify the radii of two circles. The first circle has center (100, 200), and the second circle has center (200, 100). Draw the circles. If they intersect, then display a message "Circles intersect." Otherwise, display "Circles don't intersect." Hint: Compute the distance between the centers and compare it to the radii. Your program should draw nothing if the user enters a negative radius.

I need a Circle.java file and a CircleIntersectApplet.java file. Here is what I have so far:

import java.applet.Applet;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.geom.Ellipse2D;
import javax.swing.JOptionPane;

public class CircleIntersectApplet extends Applet
{
    public void init()
    {
        String input1 = JOptionPane.showInputDialog("Enter the Radius of the first circle");
        init_radius1 = Integer.parseInt(input1);
        String input2 = JOptionPane.showInputDialog("Enter the Radius of the second circle");
        init_radius2 = Integer.parseInt(input2);
    }

    public void paint(Graphics g)
    {
        Graphics2D g2 = (Graphics2D) g;
        final double x1Center = 100; // Fixed Center Point
        final double y1Center = 200; // Fixed Center Point
        final double x2Center = 200; // Fixed Center Point
        final double y2Center = 100; // Fixed Center Point
        double radius1 = init_radius1;
        double radius2 = init_radius2;

        // Draws the first circle
        Ellipse2D circle1 = new Ellipse2D.Double(x1Center - radius1, y1Center - radius1, 2 * radius1, 2 * radius1);
        g2.draw(circle1);

        // Draws the second circle
        Ellipse2D circle2 = new Ellipse2D.Double(x2Center - radius2, y2Center - radius2, 2 * radius2, 2 * radius2);
        g2.draw(circle2);
    }

    private double init_radius1;
    private double init_radius2;
}

When this is run in an appletviewer, it asks the user to input the first radius, then the second radius. After they are entered, it draws two circles with those radii. Now, I still need the if statements to compute the distance between the two centers = d, and compare it with the two radii = r1 and r2, something like: if d > r1 + r2, then they don't intersect. I don't know how to write if-else statements and don't know where I should plug them in.
Also, I have to submit in 2 files... not 1. So I don't know what should be in the first file and the second one. Lastly, if they do intersect, it needs to say "Circles intersect." and if they don't intersect, it needs to say "Circles don't intersect." in the applet with the 2 circles showing. Please help. I've been on my *** tryin to do this for hours now...
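The comparison the hint asks for is just the distance formula against the sum of the radii, and it can be sanity-checked separately from the applet code. A Python sketch of that logic (the `intersect_message` helper is hypothetical, not part of the assignment; the Java version is a direct transliteration using Math.sqrt and an if-else in paint):

```python
import math

def intersect_message(r1, r2, c1=(100, 200), c2=(200, 100)):
    """Return the message the applet should display for radii r1, r2."""
    # Distance between the two centers: sqrt(dx^2 + dy^2).
    d = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    if d > r1 + r2:
        return "Circles don't intersect."
    return "Circles intersect."

# The fixed centers are sqrt(100^2 + 100^2) ~ 141.4 apart.
print(intersect_message(50, 50))    # radii sum to 100 < 141.4
print(intersect_message(100, 100))  # radii sum to 200 > 141.4
```

Note the hint only covers the d > r1 + r2 case; strictly speaking, two circle outlines also fail to touch when one circle lies entirely inside the other (d < |r1 - r2|), though the assignment as stated does not ask for that case.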
Measuring Edge Effects on Nest Predation in Forest Fragments: Do Finch and Quail Eggs Tell Different Stories?

American Midland Naturalist

Experiments assessing rates of avian nest predation often find that nests near forest edges are at high risk of predation, suggesting the importance of forest fragmentation in recent population declines of ground-nesting passerines. However, the use of quail (Coturnix spp.) eggs in nest predation experiments may confound conclusions about edge effects because only large-mouthed predators are able to consume these relatively large eggs, but both large- and small-mouthed predators consume smaller passerine eggs. We directly compared predation rates on artificial nests baited with quail eggs or with zebra finch (Poephila guttata) eggs; the latter are similar in size to the eggs of many neotropical passerines. In 1998 and 1999 we placed 392 artificial ground nests at edge and interior locations in two east-central Iowa forest fragments. Predation on these nests varied with egg type (quail or finch) and location (edge or interior), and there was a significant interaction between egg type and location: predation on quail eggs was greater at edges than in the interior, whereas finch egg predation was high in both edge and interior locations. Based on tooth imprints in clay eggs, we determined that large-mouthed predators were six times more active at edges, whereas activity of small-mouthed nest predators was evenly distributed between edge and interior locations. We suggest that the use of only quail eggs can exaggerate edge effects and that finch eggs or clay eggs used in conjunction with quail eggs in artificial nests can be used to estimate relative predation rates by large- and small-mouthed predators.

Published Article/Book Citation: American Midland Naturalist, 149:2 (2003), pp. 335-343.
There is more in Mersenne than in all the universities together. In G. Simmons, Calculus Gems, New York: McGraw-Hill Inc., 1992.

A Euclidean Approach to the FTC

This article is dedicated to all those students for whom the limit is their most bitter memory of calculus. I grant that the limit really is the heart of calculus and one of the most powerful ideas in mathematics today. I also grant that the story of how the limit evolved from being a logically-suspect proving tool in the hands of analysts like Newton and Leibniz in the 17th century into a highly-polished mathematical definition articulated by Cauchy in the 19th century is one of the more interesting stories in the history of mathematics. But it turns out that many results from calculus--including the pivotal fundamental theorem of calculus--can be proved without any notion of the limit whatsoever. In fact, my aim is to present a proof of the FTC using only mathematical tools that were available to Euclid nearly 2000 years before mathematicians began wrestling seriously with the idea of the limit.

What's especially interesting about this proof is that it is not new. It's part of an often-overlooked chapter in the history of mathematics in which mathematicians endeavored to answer questions our students first see in calculus using the well-worn proving techniques of Euclidean geometry instead of the analytic techniques developed in the 17th century. Years before Newton and Leibniz published the results that eventually grew into the calculus we learn today, the proof I will present appeared in slightly modified form buried in the proof of a proposition found in The Universal Part of Geometry, a geometry book published in 1668 by the Scottish mathematician James Gregory.