The y-axis is power; the x-axis is time. For a finite energy resource, the product of power and time (total energy) is a constant, which appears as a diagonal line on this log-log plot. Such a line represents, for a given civilization power consumption (y-axis), the time scale over which it is maintainable (x-axis). For renewable sources the limiting factor is power, shown as a flat horizontal line.
The units are watts of thermal power equivalent. For some sources (e.g. wind), this does not directly apply, so I've multiplied their output by a factor of three. This puts everything on roughly equal footing: one-third is a typical thermal power plant efficiency. Vehicle engines are less efficient, and natural gas heating is more efficient (essentially 100%), so the exact factor is debatable. Luckily, this plot is logarithmic.
For the nuclear reactions, I use the following energy densities. For D+T fusion, 17.6 MeV over 4 amu = 424 TJ/kg. (Why 4 amu, not 5? Because this is really D+D fusion, breeding tritium from its own neutron flux.) For proton-proton fusion, 26.73 MeV over 4 amu (four protons consumed) = 645 TJ/kg. For the fission breeding cycles, 190 MeV over 238 amu = 77 TJ/kg. For the once-through fuel cycle (the current method), we are limited to the fissile isotope U-235, and even then the limiting factor is not energy potential but the mechanical lifetime of the fuel rods. Assuming enrichment from 0.7% to 3% U-235 and a burnup of 60 gigawatt-days per metric ton of uranium, this works out to 1.2 TJ/kg of the original natural uranium.
For the fusion reactions, I use the mass of the earth's hydrosphere, 1.4×10^21 kg, as the supply of hydrogen fuel. One ninth of that mass is hydrogen, and of that, 0.03% is deuterium. The exception is the "Jupiter" scenario, which assumes total fusion of the entire 1.9×10^27 kg mass of Jupiter (mostly hydrogen).
There are three solar scenarios. One is the energy potential of the entire surface of the earth. Another is a Dyson ring, 10 kilometers in width at an orbital radius of 10 million km. The last is a complete Dyson sphere, which is Type II on the Kardashev scale for extraterrestrial civilizations.
For the fission fuel supply, there are six lines, reflecting the wide range of figures found in media sources, including the alarmist 'peak uranium' stories, which use the lowest figure (it overlaps natural gas). They are the permutations of three grades of ore (high-grade conventional, phosphate rocks, and seawater) with closed versus once-through fuel cycles (as above). From the IAEA Red Book, the first two figures are an estimated 4.7 million and 35 million metric tons of uranium metal, respectively. The seawater figure is 4.5 billion tons, from the concentration of 3.2 ppb and the mass of the earth's oceans (given earlier). Note that the seawater line flattens horizontally at a certain level: this represents the rate at which rivers erode fresh uranium into the sea, 32,000 tons/year, as referenced by Bernard Cohen in an Am. J. Phys. article. In this sense, fission is indeed a fully renewable resource, although at a far lower potential than wind or solar.
For wind, wave, and geothermal, I simply took figures from Wikipedia. The wind potential figure is slightly pragmatic, in that it ignores wind over the open ocean (unlike the solar figure). Same with the wave figure: apparently it only includes near-shore waves (but this makes little difference, since the "build-up" length for ocean waves is thousands of miles anyway; a mid-ocean wave farm would leave a wake thousands of miles long). The geothermal figure is the geological rate of heat dissipation out of the earth's crust, which is probably a huge overestimate. (On the other hand, I'm ignoring near-surface pockets of heat, which can be exploited faster than natural conduction rates, but non-renewably so.)
I couldn't find a hydroelectric potential figure, so I made a quick estimate. From an elevation histogram, I eyeballed an estimate of 500 m for the average elevation of the earth's terrain above sea level. Likewise, I eyeballed an average rainfall of 100 cm/year. Assuming the two are uncorrelated, this represents a hydroelectric potential of 0.15 W/m^2, or an upper limit of 20 TW for the entire earth. This is the blue line between geothermal and ocean uranium breeders (they are all squished together).
Don't forget, I multiplied the hydro, wind, and wave potentials by 3 to convert to thermal power equivalents. So the 20 TW of hydropower electricity (60 TW thermal equivalent) come out slightly greater than the 40 TW of geothermal heat.
The fossil fuel figures (coal/oil/gas) are from BP's statistical review. They are proven reserves, so they are underestimates.
Finally, the biomass figure is my own estimate, based on the IPCC figure for the carbon throughput of photosynthesis on earth (120 billion tons/year). This is the entire biosphere, not just anthropogenic farming.
The demand figures are (i) current world (thermal energy) consumption, and (ii) my guess at a near-future stabilization point: a population of 10 billion with the same per-capita energy consumption as the present-day USA.
Suggest improvements in the comments, and I may include them.
Romeo Gacad / AFP - Getty Images
Rock formations are seen in Kabui Bay in Raja Ampat, eastern Indonesia's Papua region, in October 2011.
RAJA AMPAT, Indonesia — Throughout time, explorers have combed the farthest reaches of the world for that one shot at discovering new life.
Dr. Mark Erdmann has taken that shot 89 times.
Since coming to Indonesia in 1992 as a young Ph.D. student from the University of California, Berkeley, Dr. Erdmann has been deeply immersed in the exploration and conservation of the underwater worlds of Indonesia and Southeast Asia, helping to discover 89 species across the region.
His interest in Raja Ampat — an archipelago of over 1,500 small islands in Western Papua — started while living in a small fishing community in South Sulawesi, where his local fishermen neighbors regularly came back from fishing trips speaking of reefs teeming with fish and sharks.
Conservation International works with Indonesian children to help them learn how to protect the most diverse underwater region in the world. NBC News' Richard Engel reports.
In 2002, he finally got his chance to visit Raja Ampat, when he was sent to assess the marine biodiversity of this mysterious region and determine whether it was worth conserving.
What he found floored him.
Romeo Gacad / AFP - Getty Images
Starfish on a bed of sea grass in the waters of Raja Ampat's Mansuar Island. Called the last paradise on earth, Raja Ampat's largely pristine environment is considered as one of the most important sites of marine biodiversity in the world.
With more than 600 species of coral, 42 fish species native to the region and an astounding record of 374 fish species identified on just one dive, Raja Ampat was a veritable gold mine of exciting new marine life.
Earlier this year, NBC News joined Dr. Erdmann, now the senior advisor to Conservation International’s Indonesia marine program, as he plunged into the waters of Raja Ampat to discover his 89th species — a local snapper — and to survey the stunning seascape many have dubbed an “Underwater Eden.”
He took time to answer questions about the scientific significance of Raja Ampat, his experiences as a marine biologist in the region and modern conservation strategies.
Q: Why is Raja Ampat so ecologically important?
A: I’d say that anyone that dives here recognizes immediately after just a couple days that there is a tremendous variety of habitats here. Every dive site looks different, every habitat has its own unique suite of species and that makes this just such a unique place.
It is the global epicenter of marine diversity in the world. This region has over 600 species of coral. By comparison the entire Caribbean Sea has only 58 species. So you are looking at 10 times the number of species in a much smaller area. Raja Ampat has 1,669 species of fish recorded to date and that total keeps rising every couple weeks. That number is far greater than the Great Barrier Reef, which is also a much larger area.
There is simply nowhere else on the planet that has this many species, so that's certainly one very important aspect. But another factor that we think is also very important is that our research here has shown this coral is also pre-adapted to climate change. The corals are regularly subjected to temperature swings from 19 to 36 degrees Celsius, a 17-degree range that, by any textbook, no coral should survive.
But if you look at the coral here, they are obviously quite happy. That says to us that the coral here is naturally adapted to massive fluctuations in temperature that are far higher than the ones predicted by climatologists over the next 50 years.
As such, we look at Raja Ampat as a coral bank from which we anticipate we will one day be able to reseed reefs in the surrounding regions that aren't quite as adaptable and eventually succumb to climate change.
Q: Why should people outside of Raja Ampat and scuba enthusiasts care about this place?
A: As the epicenter of marine biodiversity, Raja Ampat is essentially a giant repository for the raw material needed for adaptation to global change, so it’s actually really important. We have coral here that will survive climate change and they will be able to reseed coral areas that are not as lucky and don’t adapt to the coming changes in climate.
We have sponges, coral and other marine organisms that may very well hold the cure to anything from AIDS to malaria to tuberculosis. The biomedical potential here is tremendous and totally untapped. The thought that you would allow that to go extinct or be decimated before we have seen what it's all worth is not a prudent way forward.
This is absolutely a global priority from that perspective. By simply protecting Raja Ampat, you protect 75 percent of the coral species. You can’t do that from anywhere else in the world.
Q: You’ve been in this area for 21 years; do you still feel like there is something new to be discovered? Is the best yet to come?
A: The number of new discoveries here has definitely stabilized. If we started to push deeper, the number of new species would start to increase again. Also if we started to expand into other regions around Raja Ampat and Eastern Indonesia that have not been surveyed as well, I think we would absolutely pick up a number of new species there too.
Q: Can you talk about some of the discoveries you've made here?
A: The snapper we found on this trip is No. 89 in terms of new fish species I’ve discovered in Southeast Asia, many of them in collaboration with Dr. Gerry Allen. In Western Papua (where Raja Ampat is located) alone, I discovered 56 of those species.
My favorite discovery here was a tilefish I found in 2006 that I still remember fondly. This tilefish was a beautiful deep-water species that builds these massive rubble mounds that can be up to a meter high and 2.5 meters across. I remember well it was a deep fish, living at about 60 meters.
I saw the fish and knew it was a new species, but I didn't have any way to bring proof to the surface because I didn't have a camera with me. So I found Gerry Allen at the surface and said to him, "I found this beautiful tilefish with tiger stripes!" He looked at me very skeptically and replied, "I think you're imagining these stripes; sometimes they look like that underwater." I told him there were definitely stripes, and he basically responded that he wouldn't believe me until I speared one.
We were only in this area for one day and I really didn’t want to make another dive. But I wanted that fish, so I went back down and speared it, which isn’t easy because they are quite small. The problem though was that as I was coming up to do my recompression stop, I looked down at the fish and it was dying, making its stripes and colors disappear.
Without the stripes, it looks like a more common species of tilefish that Gerry had mentioned.
So there I was, trying to keep this fish alive so that the stripes wouldn’t go away before I got to the surface. I finally made it, Gerry saw the stripes and we decided to name the fish after me.
Q: Is Raja Ampat under threat? By what?
A: It is absolutely under threat. The main threats used to be marine-based — cyanide and bomb fishing — but increasingly as we have brought those problems under control, the threats are coming from land-based developments, including coastal mining (predominantly nickel) and irresponsible construction of “roads to nowhere” that hug the coastline with no buffer.
For example, if the local government is building a road and they come across a little stream, they don't build a bridge, they just plough over it. That generates a lot of mud that gets dumped into the ocean when it rains. They also build these roads on impossibly steep slopes, which often, even when finished, a motorcycle can't get over.
The roads and mines create an incredible amount of sediment that gets into the ocean and smothers coral reefs, killing them. Once you kill this coral, it’s very hard to bring it back. It would literally take multiple massive storms to clear the sediment from affected areas.
As far as marine-based threats, there is still some bomb fishing going on. Though the shark sanctuary created here has largely been successful in revitalizing the shark population in Raja Ampat, it has also turned this area into an increasingly hot target.
Right now there are more sharks here than anywhere else in eastern Indonesia, so Raja Ampat is where people want to go shark finning.
Q: Conservation International is involved in a number of conservation programs here in the Raja Ampat area to deal with such issues and to educate the local population. Can you talk about your presence here and what you do?
A: We’ve been working intensively in Raja Ampat since 2004 and currently have just over 100 staff members based here. They are strongly focused on setting up and running this network of marine parks around Raja Ampat. They are predominantly ethnic Papuans that we have recruited from the local population here and we have done our best to train them to become professional conservationists and marine park rangers.
The vast majority of our efforts go into maintaining these parks that include the community patrols and a number of economic livelihood programs such as helping villages transition from sea turtle catching to raising pigs.
Another important aspect of our program is the Kalabia marine conservation education program. The Kalabia is a floating education center that travels from village to village around Raja Ampat to basically educate the elementary school children in this area on marine conservation issues.
In the class we teach the kids lessons like why bomb fishing is such a horrible thing, why shark finning is bad for the ecology, how badly designed roads kill coral, and how to properly dispose of trash in areas where there is no governmental trash disposal system.
We also do engagement with the tourism sector to promote the expansion of sustainable tourism in Raja Ampat.
Q: Helping fishermen transition from turtle hunters to pig farmers, educating Raja Ampat’s youth — to a certain extent aside from your role as a marine biologist and conservationist, do you also view yourself as a social engineer?
A: When we talk about conservation, the public frequently thinks it’s about saving species, but in reality conservation is about changing people’s behavior. So unquestionably, if you are going to successfully do conservation, you have to be a social engineer.
The threat to these species has always been human based, so you need to focus on the humans. You need to understand what’s important for these people and then try to design a program that will change their behavior but one they will be happy with.
Absolutely, livelihoods are an extremely important element of what we do. We need to be concerned about the state of the local population's economy, health care and food security, because assisting with these factors is absolutely critical to gaining the support of locals for conservation.
So whatever we do, we need to address those aspects that most concern the local communities. It's only by addressing those issues that we are going to get conservation going.
Q: Is there room for another young aspiring Mark Erdmann in Raja Ampat?
A: Absolutely! It's time for another one. It's good to come to a program like Conservation International's with a good marine science background. But you need to realize that if you really want to do conservation, it's more and more about real social engagement.
We urgently need people who have a strong scientific background and understanding, but at the same time are interested in working with the local communities to help them better manage their natural resources like reefs and forests.
Science Fair Project Encyclopedia
Quercitron is a yellow dye obtained from the bark of the black oak (Quercus velutina), a fine forest tree indigenous to North America. The name is a shortened form of quercicitron, from Latin quercus, oak, and citron, lemon, and was invented by Dr Edward Bancroft (1744-1821), who by act of parliament in 1785 was granted special privileges in regard to the importation and use of the substance. The dyestuff is prepared by grinding the bark in mills after it has been freed from its black epidermal layer, and sifting the product to separate the fibrous matter; the fine yellow powder which remains forms the quercitron of commerce. The ruddy-orange decoction of quercitron contains quercitannic acid, whence its use in tanning, and an active dyeing principle, quercitrin, C21H22O12. The latter substance is a glucoside, and in aqueous solution under the influence of mineral acids it yields quercetin, C15H10O7, which is precipitated, and the sugar rhamnose. Quercetin is a crystalline powder of a brilliant citron-yellow color, entirely insoluble in cold water and dissolving only sparingly in hot water, but quite soluble in alcohol. Either by itself or in some form of its glucoside quercitrin, quercetin is found in several vegetable substances, among others in cutch, in Persian berries (Rhamnus catharticus), buckwheat leaves (Fagopyrum esculentum), Zante fustic wood (Rhus cotinus), and in rose petals. Quercitron was first introduced as a yellow dye in 1775, but it is principally used in the form of flavin, the precipitate thrown down from a boiling decoction of quercitron by sulphuric acid. Chemically, quercetin is a member of a fairly extensive class of natural coloring matters derived from 2-phenyl benzo-γ-pyrone, or flavone, whose constitution followed from the researches of S. von Kostanecki, A. G. Perkin, Herzig, Goldschmidt and others. Among the related coloring matters are: chrysin from poplar buds, apigenin from parsley, luteolin from weld and dyer's broom, fisetin from young fustic and yellow cypress, galangin from galanga root, and myricetin from Nageia nagi.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Managing database operations using ADO and C++, Part 1: Introduction to SQL
by Patrick Mancier
For this article we will be using some very simple SQL tables and procedures. This is only a brief encapsulation of SQL; more advanced concepts are beyond the scope of this article. Transact-SQL (T-SQL) will be used, to keep the syntax of the procedures as generic as possible. For a complete T-SQL reference, see the MSDN library.
The T-SQL format used here is not the standard one from the T-SQL documentation; for the purposes of this article it is more compact and direct. The syntax shown does not take into account constraints or foreign-key relationships, since we will not need them here. For ease of creating these tables and procedures, it is recommended to use a management tool such as Microsoft SQL Server Management Studio.
Tables

To create a table in SQL, the following syntax is used:
CREATE TABLE <schema>.<table name>
(
    <columnname identity value> INT IDENTITY(seed, increment) NOT NULL,
    <columnname 1> <type> [size] NULL,
    <columnname 2> <type> [size] NULL,
    .
    .
    <columnname n> <type> [size] NULL
)

The IDENTITY(seed, increment) statement creates an auto-incrementing record ID column, which is incremented every time a new record is added to the table. In a database schema design it is vital that each record in a table be unique in some way, and using an auto-increment column is the easiest way to ensure this. A more advanced way would be to use a key that encompasses two or more columns that together make the record unique.
A column definition must include a column name, a column type, and the size in bytes of the column if applicable. The following is only a small sample of possible datatypes: int, bigint, float, bit, varchar(n), nvarchar(n), and datetime.
Example of a scripted table:
CREATE TABLE [dbo].[People]
(
    [idRecord] [int] IDENTITY(1,1) NOT NULL,
    [PersonName] [varchar](50) NULL,
    [PersonAge] [int] NULL
) ON [PRIMARY]
Stored Procedures

To create a stored procedure in SQL, the following syntax is used:
CREATE PROCEDURE <schema>.<procedure name>
    [Parameter1 name] [type] [size] [= default value],
    [Parameter2 name] [type] [size] [= default value],
    .
    .
    [Parameter(n) name] [type] [size] [= default value]
AS
BEGIN
    [DECLARE <local variable name> <type> <size>]
    <procedure logic>
END
Parameters

Many stored procedures use parameters, which are similar to the parameters of a C++ function: they have a type, a size and a variable name.
The parameter list contains a set of variables that are passed into the procedure from the caller. The format of this list is very similar to the table definition in that it contains a name, variable type and size in bytes if applicable. However, each variable must be preceded by an "at" symbol (@). In MS-SQL, an @ symbol precedes all variable names, whether they are parameters or local variables.
Return parameters

A stored procedure can optionally have a return parameter, a specialized value that is set upon exit of the procedure. ADO can be set up to retrieve this return value after executing a stored procedure: in the stored procedure, simply put the desired return value after the RETURN keyword, and at the application level read this value from the returned status after the procedure executes.
Example of a stored procedure:
CREATE PROCEDURE dbo.People_Select
    @idRecord int = 0
AS
BEGIN
    SET NOCOUNT ON;
    if (@idRecord > 0)
    begin
        SELECT PersonName, PersonAge from People where idRecord = @idRecord;
        return;
    end
    else
        SELECT PersonName, PersonAge from People;
END
Basic SQL operations

The following descriptions of SQL syntax are not meant to be comprehensive; rather, they describe the basic operations. These statements can get rather advanced depending on the needs of the database schema, but for the purposes of designing our ADO management class they will be kept simple.
Delete syntax

DELETE <table name>
[WHERE [column = condition] [AND, OR] [column = condition]]

Insert syntax
INSERT INTO <table name> VALUES (@value1, @value2, . . . @valuen)

Select syntax
SELECT [*, column names] FROM <table name>
[WHERE [column = condition] [AND, OR] [column = condition]]

Update syntax
UPDATE <table name>
SET [Column1] = @value1,
    [Column2] = @value2,
    . . .
    [Columnn] = @valuen
[WHERE [column = condition]];

Here are some examples of using these operations. Please refer to the previous table example for the names of the tables and columns used here.
DELETE People where idRecord = @idRecord;
INSERT INTO People VALUES ('John Smith', 23);
SELECT * FROM People where PersonAge = 23;
UPDATE People SET PersonAge = 40 WHERE idRecord = @idRecord;
Naming convention for the database manager class design

To design our ADO management class correctly, it is worth settling on a standard naming convention for the tables and stored procedures in the database. Generally, the design will require that all operations on the database take place in stored procedures. However, it is rather tedious to remember every single procedure name for every type of operation and to manually configure the ADO manager class to call all these different procedures.
It can generally be stated that the operations of DELETE, INSERT, SELECT and UPDATE are considered to be the four basic operations that take place on tables in a database schema. Therefore it makes sense to modularize these operations as procedures.
Given a table name of TestTable, the four basic procedures against it could be written as:

TestTable_Delete
TestTable_Insert
TestTable_Select
TestTable_Update
With this naming scheme in place it will be very easy to design the ADO manager class to operate against this and all other tables in the database schema.
Database Schema

This is a very simple database schema, offered as an example to illustrate the concepts of managing database operations with C++. We will use it later when we build our ADO database manager class.
List of procedures:

People_Delete
People_Insert
People_Select
People_Update
The database installation script that creates the above schema is included with this article. In the People_Insert procedure, notice that the term @@IDENTITY is used as the return parameter. This is useful for returning the last inserted identity value of a table, in order to do further processing on the record that was just inserted. It can also be used to reference that record, for example as a node on a tree list control.
The script was generated with MS-SQL Management Studio from an existing database. It will wipe out any existing database called ADOTest and replace it with a new one. If there are any connections or "locks" on the database when the script is run (i.e., you have a window open in Management Studio or an active ADO connection to the database), the script will fail, because it will be unable to drop the existing database until all connections are severed.
Continue to part 2 of the tutorial
Back to the ADO tutorial index
Without a doubt, the Curiosity rover’s landing—specifically, the EDL or Entry Descent Landing phase—was a nail-biter. Geeks everywhere, especially NASA JPL engineers in California, watched and waited to learn the fate of the Curiosity on the surface of Mars.
In fact, the world had to wait roughly 14 minutes—the time it took for communication between the Earth and Curiosity—to learn the Curiosity rover’s fate. Fourteen minutes doesn’t sound like much in the retelling, but it was a harrowing, stressful time for some—especially NASA. Scientists anxiously awaited the first signals from Curiosity, and to learn whether the craft had landed safely or had been utterly destroyed during the EDL phase.
NASA and the U.S. had a great deal on the line, after all. Whereas its predecessors, the Spirit and Opportunity twin rovers, cost a total of $1 billion to design, develop, and operate, the higher-tech Mars Science Laboratory Curiosity rover carries a price tag of roughly $2.5 billion.
The mil/aero community is not new to such controversy, however. The President’s Budget is scrutinized annually, especially in the areas of aerospace and defense. No matter the level of investment, aerospace and defense funding (whether “too high” or “too low”) are often the subject of public outcry.
Aerospace organizations and other proponents argue that space exploration, and specifically Mars exploration, is critical: to understanding the history of the Earth and universe, to reinforcing the view of the U.S. as a space technology leader, and to motivating today's youth to become engineers, thereby advancing and ensuring the health of the U.S.-based science, technology, engineering, and mathematics (STEM) fields. This geek wants to know your thoughts on the subject; weigh in by posting a comment!
Aug 6, 2012, 10:36 AM (#1)
Effect of moment of inertia on rolling distance
Hi everyone, good day. this might be a simple question, but I need someone to check my answer.
A disk and a hoop, of the same mass and diameter, are each given the same torque for the same period of time; the torque is then removed, and they roll freely without slipping. The question is: which one will roll further?
My answer to the question is that both will roll the same distance. However, the disk will arrive at the end point first, while the hoop will arrive later. In addition, the disk will achieve a higher maximum linear velocity in the process.
Is my answer correct? My reasoning is that they are given the same amount of energy, so the distance they can roll should be the same.
Also, due to the difference in mass moment of inertia, the ratio of rotational kinetic energy to translational kinetic energy differs between the disk and the hoop. I am curious what this ratio implies. A higher ratio means more of the given energy is converted to rotational KE, so what does higher rotational KE imply? It has to have an effect on something, like the traveling time or the distance traveled. Can anyone relate these for me? Thank you very much for your time!
Aug 9, 2012, 09:56 AM (#2)
Hi everyone. First of all, I am very sorry that I misplaced this post.

Next, I noticed that the question has been viewed by 135 people but has no replies. Am I not putting enough effort into solving my question? I think I did attempt an answer first, right?

I am new here, so please tell me if I have overlooked any rules.
I really wish to know the answer to the questions.
Aug 9, 2012, 10:17 AM (#3)
If the only force acting on the rolling objects is friction with the surface they roll upon, then I think your answer is correct. However, if you take drag into consideration, things may be different.
For a slick, supple mouthfeel, there’s nothing like a suspension of fine droplets of oil in water (or vice versa)—what scientists call an emulsion. Cream, butter and chocolate are emulsions, as are gravy, vinaigrette and cheese. But when an emulsion breaks, the results can get ugly: a layer of clear fat floating on top of the gravy boat, a salad dressing that comes out of the bottle all oil and no vinegar, a plate of nachos covered in greasy goo.
Making one means overcoming some powerful forces of nature. The repulsion between water and oil is electric. A water molecule is unbalanced, electrically speaking, in such a way that a polar charge develops among its atoms. As a result, groups of water molecules form exclusive cliques, aka droplets. Oil molecules, in contrast, are nonpolar and hydrophobic. It takes a surprising amount of force to persuade a polar liquid to mingle with a nonpolar one at an intimate level.
A blender is not always up to the job. The human tongue can detect particles (including liquid droplets) that are just seven to 10 microns across, but blenders generally cannot do better than 10 to 12 microns. When the cooks in our research kitchen were working out a recipe for eggless mayonnaise, they relied on a rotor-stator homogenizer instead. This countertop machine spins a small blade (the rotor) at up to 20,000 rpm within a slotted metal sheath (the stator). Tremendous shear forces rip the droplets down to just a few microns.
For another challenging recipe—a kosher, dairy-free veal “cream”—we tried even bigger iron: an ultrahigh-pressure homogenizer. Our model, which is about the size of a large sink, pressurizes the mixture to as much as 25,000 psi, then slams it into a metal wall to smash it to submicron bits. The result is delicious.
In the finest emulsions, the particles are just a few nanometers in diameter—so tiny the emulsion turns clear. Mountain Dew is a nanoemulsion, for example. To make a transparent nanoemulsion of essential oils from thyme and bay leaf for a chilled chicken soup, our cooks needed a handheld tool because the quantity of liquid was so small.
The solution was an ultrasonic homogenizer, which transforms several hundred watts of power into high-frequency sound waves that induce minuscule bubbles to form in the liquid. These cavitation bubbles then implode, tearing droplets apart as they do. The high-pitched tool gives new meaning to whine and dine.
This article was originally published with the title "Making Liquids Go Bipolar."
Another "supermoon" is in the offing. The perigee full moon on May 5, 2012 will be as much as 14 percent bigger and 30 percent brighter than other full moons of 2012.
Dr. James Garvin, chief scientist at NASA's Goddard Space Flight Center, answered questions regarding the supermoon phenomenon in a 2011 interview.
Question: What is the definition of a supermoon and why is it called that?
'Supermoon' is a situation when the moon is slightly closer to Earth in its orbit than on average, and this effect is most noticeable when it occurs at the same time as a full moon. So, the moon may seem bigger although the difference in its distance from Earth is only a few percent at such times.
It is called a supermoon because this is a very noticeable alignment that at first glance would seem to have an effect. The 'super' in supermoon is really just the appearance of being closer, but unless we were measuring the Earth-Moon distance by laser rangefinders (as we do to track the LRO [Lunar Reconnaissance Orbiter] spacecraft in low lunar orbit and to watch the Earth-Moon distance over years), there is really no difference. The supermoon really attests to the wonderful new wealth of data NASA's LRO mission has returned for the Moon, making several key science questions about our nearest neighbor all the more important.
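The quoted "14 percent bigger and 30 percent brighter" follows directly from the inverse scaling of angular size with distance. A quick sketch, using typical perigee and apogee distances (assumed round figures, not values from the interview):

```python
# Back-of-envelope check of the "14% bigger, 30% brighter" claim,
# comparing a perigee full moon with an apogee full moon.
PERIGEE = 356_500   # km, typical close full moon (assumed)
APOGEE  = 406_700   # km, typical distant full moon (assumed)

size_gain = APOGEE / PERIGEE - 1             # angular diameter scales as 1/distance
brightness_gain = (APOGEE / PERIGEE)**2 - 1  # flux scales as 1/distance^2

print(f"apparent size:  +{size_gain:.0%}")        # ~ +14%
print(f"brightness:     +{brightness_gain:.0%}")  # ~ +30%
```

Relative to the *average* distance rather than apogee, the gain is only about half as large, which is why a supermoon is hard to notice by eye.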
Are there any adverse effects on Earth because of the close proximity of the moon?
The effects on Earth from a supermoon are minor, and according to the most detailed studies by terrestrial seismologists and volcanologists, the combination of the moon being at its closest to Earth in its orbit, and being in its 'full moon' configuration (relative to the Earth and sun), should not affect the internal energy balance of the Earth since there are lunar tides every day. The Earth has stored a tremendous amount of internal energy within its thin outer shell or crust, and the small differences in the tidal forces exerted by the moon (and sun) are not enough to fundamentally overcome the much larger forces within the planet due to convection (and other aspects of the internal energy balance that drives plate tectonics). Nonetheless, these supermoon times remind us of the effect of our 'Africa-sized' nearest neighbor on our lives, affecting ocean tides and contributing to many cultural aspects of our lives (as a visible aspect of how our planet is part of the solar system and space).
A superconducting body will repel a nearby magnet. The repulsion is due to the perfect diamagnetism resulting from the Meissner effect. A small magnet will float above a superconducting disk at an equilibrium position over the disk center, stable against lateral displacements. It is not intuitively obvious why the potential energy of the magnet over a flat disk should have a minimum at the center, rather than a maximum. We have measured the properties of the attractive potential well of a YBa2Cu3O7 disk by two experiments. In the first, we use a low‐frequency magnetic field, 0–100 Hz, to excite oscillations of a small, freely levitating bar magnet about its equilibrium position. We find sharp resonances, corresponding to longitudinal, transverse, and torsional modes of oscillation. The frequencies of these resonances define the properties near the bottom of the potential well. In the second experiment, we attach the magnet to a vertical glass fiber of known stiffness. The magnet is suspended horizontally a small known distance, z, above the superconducting disk. By moving the magnet from the center of the disk to the edge and measuring the bending of the support fiber as a function of position we determine the shape of the potential curve for large displacements and the total energy needed to escape from the well.
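Near the bottom of the well, each resonance frequency fixes an effective stiffness through the harmonic-oscillator relation f = (1/2π)√(k/m). A minimal sketch of that conversion, with an illustrative magnet mass and frequency (assumed values, not the paper's data):

```python
import math

# Effective stiffness of the levitation potential well from a measured
# resonance frequency, via f = (1/(2*pi)) * sqrt(k/m) for a harmonic well.
# Both numbers below are illustrative assumptions, not measured values.
m = 1.0e-3      # magnet mass, kg (assumed 1 g)
f = 10.0        # resonance frequency, Hz (assumed)

k = m * (2 * math.pi * f) ** 2   # N/m, curvature of the well at its minimum
print(f"effective stiffness k = {k:.3f} N/m")
```

Measuring the longitudinal, transverse, and torsional modes separately gives one such stiffness (or torsional constant) for each direction of displacement.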
Advancing the Science of Climate Change
to dominate coastal ice losses. Numerical modeling (Nick et al., 2009) further supports this conclusion and suggests that tidewater outlet glaciers adjust rapidly to changing boundary conditions at the calving terminus. Expanded monitoring of both air and sea temperatures at high latitudes and an improved understanding of ice sheet dynamics will be needed to improve scientific knowledge of these processes.
Mountain Glaciers, Ice Caps, and Other Contributors to Sea Level Rise
The world’s glaciers and ice caps contain the water equivalent of up to 2.4 feet (0.72 meters) of sea level (Dyurgerov and Meier, 2005). They have consistently been contributing about one quarter of the total sea level rise over the past 50 years, staying roughly proportional to the overall rate of sea level rise (Bindoff et al., 2007). Mountain glaciers are expected to continue to be a significant contributor to sea level rise during this century, and their retreat poses significant risks to populations that depend on glacial runoff as a water source (see Chapter 8). However, unlike the Greenland and Antarctic ice sheets, mountain glaciers are relatively small and do not carry the potential for large and sudden contributions to sea level rise.
There are additional contributions to sea level rise from other human activities such as wetland loss, deforestation, and the extraction of groundwater for irrigation and industrial use. While estimates of the size of these sources are somewhat uncertain, they are believed to be small relative to land ice melting and may be partially offset by the increased storage of water behind dams and in other surface reservoirs over the past century and a half (e.g., Chao et al., 2008). Moreover, the observed recent sea level rise rate of over 0.12 inches (3.3 ± 0.4 millimeters) per year (Cazenave et al., 2010) is consistent with what would be expected from the combination of thermal expansion of the oceans and melting of ice on land (Bindoff et al., 2007). Hence, the overall contribution of other land-based sources to global sea level rise is thought to be small. Nonetheless, small glaciers and ice caps remain important contributors to sea level rise, and their respective contributions need to be better understood.
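The unit conversions in the figures quoted above can be checked quickly; a small sketch using only standard conversion factors:

```python
# Consistency check on the sea level figures quoted in the text.
MM_PER_INCH = 25.4
M_PER_FOOT = 0.3048

rate_mm_yr = 3.3                       # observed rise, mm/yr (Cazenave et al., 2010)
rate_in_yr = rate_mm_yr / MM_PER_INCH  # ~0.13 in/yr, i.e. "over 0.12 inches"

glacier_equiv_m = 2.4 * M_PER_FOOT     # 2.4 ft of sea level equivalent ~ 0.73 m

print(f"{rate_in_yr:.3f} in/yr; glaciers hold {glacier_equiv_m:.2f} m equivalent")
```

Both results match the paired metric/imperial values in the text to rounding precision.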
PROJECTIONS OF FUTURE SEA LEVEL RISE
The Intergovernmental Panel on Climate Change (IPCC) estimated that sea level would rise by an additional 0.6 to 1.9 feet (0.18 to 0.59 meters) by 2100 (Meehl et al., 2007a). However, this projection was based only on current rates of change and was accompanied by a major caveat regarding the potential for substantial increases in the rate of sea level rise. The 2007 IPCC projections are conservative and may underestimate future sea level rise.
But what are the implications of exceeding a CO2 concentration of 400 ppm?
Some climate change skeptics will be quick to highlight the scientific evidence showing that atmospheric CO2 has been at this level and higher in the past. They are absolutely correct. Somewhere in the region of three to five million years ago, similar levels were present and the world was, on average, three or four degrees Celsius warmer, with some areas up to ten degrees warmer. Life survived and flourished, and it is likely to have influenced the evolutionary path that led to Homo sapiens, but humanity in its current form has never experienced the like, and as far as we can tell, the changes in CO2 levels were never as rapid as over the last 150 years.
We will not wake to a radically different world tomorrow. Climate change does not work like that. If we maintain CO2 levels at 400 ppm, without any further increase, it will take decades or even centuries for a new "normal" climate to become established. Some areas will dry out, others will become wetter, some will cool and others warm, all of which will change natural vegetation and agricultural practices, which will in turn feed back into local and global weather systems. The biggest threat to the human population will be the increased frequency of crop failures. Coping with the still-growing human population, combined with climatic changes, will require more wilderness to be tamed for agriculture. Perhaps the tundra will be drained for fields once the permafrost melts.
Longer-term effects will be the thawing of glaciers and ice caps, which will raise sea levels and affect ocean currents and hence the climate. Some estimates indicate that when the world last saw 400 ppm CO2, sea levels were 40 metres higher than now. While a rise in sea level of that magnitude may not happen for a century or more, it would be catastrophic for the world as we know it. It wouldn't just be island states such as the Maldives that would disappear but large parts of low-lying countries, such as Bangladesh and the Netherlands. Our ever more crowded cities will be badly affected too. The first few floors of the newly topped-out One World Trade Center in New York will be submerged, as will many parts of the Eternal City, which has been inhabited for two and a half millennia. Some of the world's most densely populated cities, such as Tokyo, Cairo and London, will be much depleted.
This will lead to hundreds of millions, perhaps billions, of people being displaced by rising sea levels, again putting pressure on the wilderness. The new homes and infrastructure that will be required on a scale never before experienced will demand vast quantities of energy and natural resources. It won't be the end of the human race, but it may suspend our humanity, as wars, famine and pestilence of biblical proportions are likely as greater numbers compete for limited resources.
Such a dark future is by no means certain. We still have a little time to act, but not long. The more we procrastinate, the harder it will be to head off the worst extremes coming our way. If we continue with "business as usual" we are on target for 450 ppm by the middle of the century, accompanied by temperature rises of six degrees Celsius. That doesn't bear thinking about.
Can you fit polynomials through these points?
Charlie likes tablecloths that use as many colours as possible, but insists that his tablecloths have some symmetry. Can you work out how many colours he needs for different tablecloth designs?
Many numbers can be expressed as the difference of two perfect
squares. What do you notice about the numbers you CANNOT make?
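One way to explore the difference-of-two-squares question is brute force; the pattern that emerges is explained by the factoring a² − b² = (a − b)(a + b), whose two factors always have the same parity. (This sketch assumes 0 is allowed as one of the squares.)

```python
# Which numbers n can be written as a^2 - b^2 (with 0 <= b < a)?
# Since a^2 - b^2 = (a - b)(a + b) and those factors share parity,
# n must be odd or divisible by 4; numbers of the form 4k + 2 are impossible.
LIMIT = 30
makeable = {a*a - b*b
            for a in range(1, LIMIT + 1)
            for b in range(a)
            if 0 < a*a - b*b <= LIMIT}

impossible = set(range(1, LIMIT + 1)) - makeable
print(sorted(impossible))   # [2, 6, 10, 14, 18, 22, 26, 30]
```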
There are lots of different methods to find out what the shapes are worth - how many can you find?
In a three-dimensional version of noughts and crosses, how many winning lines can you make?
Watch these videos to see how Phoebe, Alice and Luke chose to draw 7 squares. How would they draw 100?
Which of these triangular jigsaws are impossible to finish?
Can you describe this route to infinity? Where will the arrows take you next?
Are these statistical statements sometimes, always or never true?
Or is it impossible to say?
These proofs are wrong. Can you see why?
Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him?
This group task allows you to search for arithmetic progressions
in the prime numbers. How many of the challenges will you discover?
Is this a fair game? How many ways are there of creating a fair
game by adding odd and even numbers?
Explore the properties of some groups such as: The set of all real
numbers excluding -1 together with the operation x*y = xy + x + y.
Find the identity and the inverse of the element x.
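For the operation x*y = xy + x + y, a short check confirms that the identity is 0 and the inverse of x is −x/(x + 1). The neat observation behind it: (x*y) + 1 = (x + 1)(y + 1), so the group is ordinary multiplication shifted by 1, which is exactly why −1 (the shift of 0) must be excluded.

```python
# The group (R \ {-1}, *) with x*y = x*y + x + y.
# Since (x*y) + 1 = (x+1)(y+1), this is multiplication on R \ {0}
# shifted by 1, which is why -1 must be excluded.
def op(x, y):
    return x*y + x + y

def inverse(x):
    return -x / (x + 1)   # solves op(x, y) = 0, where 0 is the identity

for x in [2.0, -0.5, 7.0]:
    assert op(x, 0) == x                      # 0 acts as the identity
    assert abs(op(x, inverse(x))) < 1e-12     # x combined with its inverse gives 0
print("identity 0 and inverses check out")
```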
Can you make a square from these triangles?
Match the charts of these functions to the charts of their integrals.
RSS (Environment and Nature Dr Karl web feed) Dr Karl's Great Moments In Science › Environment and Nature
Wednesday, 1 May 2013
Explore more Environment and Nature
Tuesday, 27 November 2012 28
Rapid loss of Arctic sea ice - 80 per cent has disappeared since 1980 - is not caused by natural cycles such as changes in the Earth's orbit around the Sun, says Dr Karl.
Tuesday, 20 November 2012 32
We're told to drink eight glasses of water a day, but what's the evidence behind this advice? Dr Karl hoses down a popular myth.
Tuesday, 13 November 2012 10
We waste about 40 per cent of our food on its journey from the farm to our fork. Dr Karl dishes out some advice about how to cut down food waste.
Tuesday, 31 July 2012 8
Rubber dust is an environmental and health hazard. Dr Karl shares some dirty facts about particle pollution.
Tuesday, 10 July 2012 29
Breastfeeding nutures and protects babies and their mums in many ways, but Dr Karl discovers breast milk still has plenty of secrets to spill.
Tuesday, 22 May 2012 15
The famous Grand Canyon in Arizona isn't as grand as it is cracked up to be. Dr Karl travels the world in search of the grandest canyon of all.
Tuesday, 1 May 2012 19
Not too round, not too square ... Dr Karl cracks the reasons why eggs are exactly the right shape to fit snugly in nests and cartons.
Tuesday, 6 March 2012 4
The more reliant we become on electronic gadgetry, the more vulnerable we are to a solar superstorm. Dr Karl tries to look on the sunny side.
Tuesday, 1 November 2011 7
Computer espionage has taken on a newer and more sinister meaning this century. Dr Karl looks closely at the dirty work done by a computer worm, Stuxnet, as it attacked a uranium processing plant in Iran.
Wednesday, 26 October 2011 6
A politically motivated cyber-attack was recently successful in penetrating the computer defences of governments. Dr Karl pursues the path taken by the computer worm, Stuxnet, as it attacks a uranium processing plant in Iran.
Science Fair Project Encyclopedia
The Magellan spacecraft carried out a mission from 1989-1994, orbiting Venus from 1990-1994.
It was named after the sixteenth-century Portuguese explorer Ferdinand Magellan. Magellan was the first planetary spacecraft to be launched by a space shuttle when it was carried aloft by the shuttle Atlantis from Kennedy Space Center in Florida on May 4, 1989, on a mission designated STS-30. Atlantis took Magellan into low Earth orbit, where it was released from the shuttle's cargo bay. A solid-fuel motor called the Inertial Upper Stage (IUS) then fired, sending Magellan on a 15-month cruise looping around the Sun 1-1/2 times before it arrived at its orbit around Venus on August 10, 1990. A solid-fuel motor on Magellan then fired, placing the spacecraft in orbit around Venus. In 1994 it plunged to the surface as planned and partly vaporized; some sections are thought to have hit the planet's surface.
Magellan's initial orbit was highly elliptical, taking it as close as 294 kilometres (182 miles) from Venus and as far away as 8,543 km (5,296 mi). The orbit was a polar one, meaning that the spacecraft moved from south to north or vice versa during each looping pass, flying over Venus' north and south poles. Magellan completed one orbit every 3 hours, 15 minutes.
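As a sanity check, Kepler's third law applied to the quoted 294 × 8,543 km orbit reproduces the stated period. The Venus radius and gravitational parameter below are standard reference values, assumed here rather than taken from the article:

```python
import math

# Does a 294 x 8,543 km altitude orbit around Venus really take ~3 h 15 min?
R_VENUS = 6051.8        # km, mean radius of Venus (standard value, assumed)
MU_VENUS = 3.24859e5    # km^3/s^2, GM of Venus (standard value, assumed)

a = (2 * R_VENUS + 294 + 8543) / 2            # semi-major axis, km
T = 2 * math.pi * math.sqrt(a**3 / MU_VENUS)  # Kepler's third law, seconds

h, m = divmod(T / 60, 60)
print(f"period ≈ {int(h)} h {m:.0f} min")     # ≈ 3 h 17 min, close to the quoted value
```

The small remaining difference is consistent with rounding in the quoted apsis altitudes.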
During the part of its orbit closest to Venus, Magellan's radar mapper imaged a swath of the planet's surface approximately 17 to 28 km (10 to 17 mi) wide. At the end of each orbit, the spacecraft radioed back to Earth a map of a long ribbon-like strip of the planet's surface captured on that orbit. Venus itself rotates once every 243 Earth days. As the planet rotated under the spacecraft, Magellan collected strip after strip of radar image data, eventually covering the entire globe at the end of the 243-day orbital cycle.
By the end of its first such eight-month orbital cycle between September 1990 and May 1991, Magellan had sent to Earth detailed images of 84 percent of Venus' surface. The spacecraft then conducted radar mapping on two more eight-month cycles from May 1991 to September 1992. This allowed it to capture detailed maps of 98 percent of the planet's surface. The follow-on cycles also allowed scientists to look for any changes in the surface from one year to the next. In addition, because the "look angle" of the radar was slightly different from one cycle to the next, scientists could construct three-dimensional views of Venus' surface.
During Magellan's fourth eight-month orbital cycle at Venus from September 1992 to May 1993, the spacecraft collected data on the planet's gravity field. During this cycle, Magellan did not use its radar mapper but instead transmitted a constant radio signal to Earth. If it passed over an area of Venus with higher than normal gravity, the spacecraft would slightly speed up in its orbit. This would cause the frequency of Magellan's radio signal to change very slightly due to the Doppler effect -- much like the pitch of a siren changes as an ambulance passes. Thanks to the ability of radio receivers in the NASA/JPL Deep Space Network to measure frequencies extremely accurately, scientists could build up a detailed gravity map of Venus.
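The frequency changes involved in this gravity mapping are tiny: the non-relativistic Doppler shift is Δf = f·v/c. A rough sketch of the scale, assuming an S-band downlink near 2.3 GHz (a typical planetary-mission frequency; the article does not state Magellan's carrier):

```python
# Scale of gravity-induced Doppler shifts on a deep-space radio link.
C = 299_792_458.0        # m/s, speed of light
f0 = 2.3e9               # Hz, assumed S-band carrier frequency

for dv in [1.0, 0.001]:  # line-of-sight speed changes of 1 m/s and 1 mm/s
    df = f0 * dv / C     # non-relativistic Doppler shift
    print(f"dv = {dv:6.3f} m/s  ->  df = {df:.4f} Hz")
```

Even a millimetre-per-second speed change shifts the carrier by only a few millihertz, which is why the extremely accurate frequency measurement of the Deep Space Network mattered.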
At the end of Magellan's fourth orbital cycle in May 1993, flight controllers lowered the spacecraft's orbit using a then-untried technique called aerobraking. This maneuver sent Magellan dipping into Venus' atmosphere once every orbit; the atmospheric drag on the spacecraft slowed down Magellan and lowered its orbit. After the aerobraking was completed between May 25 and August 3, 1993, Magellan's orbit then took it as close as 180 km (112 mi) from Venus and as far away as 541 km (336 mi). Magellan also circled Venus more quickly, completing an orbit once every 94 minutes. This new, more circularized orbit allowed Magellan to collect better gravity data in the higher northern and southern latitudes near Venus' poles.
After the end of that fifth orbital cycle in April 1994, Magellan began a sixth and final orbital cycle, collecting more gravity data and conducting radar and radio science experiments. By the end of the mission, Magellan had captured high-resolution gravity data for an estimated 95 percent of the planet's surface.
In September 1994, Magellan's orbit was lowered once more in another test called a "windmill experiment". In this test, the spacecraft's solar panels were turned to a configuration resembling the blades of a windmill, and Magellan's orbit was lowered into the thin outer reaches of Venus' dense atmosphere. Flight controllers then measured the amount of torque control required to maintain Magellan's orientation and keep it from spinning. This experiment gave scientists data on the behaviour of molecules in Venus' upper atmosphere, and lent engineers new information useful in designing spacecraft.
On October 11, 1994, Magellan's orbit was lowered a final time and radio contact was lost the next day. Within two days after that maneuver, the spacecraft became caught in the atmosphere and plunged to the surface. Although much of Magellan was vaporized, some sections are thought to have hit the planet's surface intact.
Built partially with spare parts from other missions, the Magellan spacecraft was 4.6 metres (15.4 feet) long, topped with a 3.7 m (12 ft) high-gain antenna. Mated to its retrorocket and fully tanked with propellants, the spacecraft weighed a total of 3,460 kilograms (7,612 pounds) at launch.
The high-gain antenna, used for both communication and radar imaging, was a spare from the NASA/JPL Voyager mission to the outer planets, as were Magellan's 10-sided main structure and a set of thrusters. The command data computer system, attitude control computer and power distribution units are spares from the Galileo mission to Jupiter. Magellan's medium-gain antenna is from the NASA/JPL Mariner 9 project. Martin Marietta Corp. was the prime contractor for the Magellan spacecraft, while Hughes Aircraft Co. was the prime contractor for the radar system. It is widely thought that Magellan is based upon the NRO's Lacrosse terrestrial radar imaging reconnaissance satellite, as they have the same manufacturer, and are from the same facility.
Magellan was powered by two square solar panels, each measuring 2.5 m (8.2 ft) on a side; together they supplied 1,200 watts of power (100 watt per m²). Over the course of the mission the solar panels gradually degraded, as expected; by the end of the mission in the fall of 1994 it was necessary to manage power usage carefully to keep the spacecraft operating.
Because Venus is shrouded by a dense, opaque atmosphere, conventional optical cameras cannot be used to image its surface. Instead, Magellan's imaging radar uses bursts of microwave energy somewhat like a camera flash to illuminate the planet's surface.
Magellan's high-gain antenna sends out millions of pulses each second toward the planet; the antenna then collects the echoes returned to the spacecraft when the radar pulses bounce off Venus' surface. The radar pulses are not sent directly downward but rather at a slight angle to the side of the spacecraft, the radar is thus sometimes called "side-looking radar". In addition, special processing techniques are used on the radar data to result in higher resolution as if the radar had a larger antenna, or "aperture"; the technique is thus often called "synthetic aperture radar", or SAR.
Synthetic aperture radar was first used by NASA on JPL's Seasat oceanographic satellite in 1978; it was later developed more extensively on the Spaceborne Imaging Radar (SIR) missions on the space shuttle in 1981, 1984 and 1994. An imaging radar is also planned as part of the NASA/JPL Cassini mission to Saturn in 1997 to map the surface of the ringed planet's major moon Titan.
Besides its use in imaging, Magellan's radar system was also used to collect altimetry data showing the elevations of various surface features. In this mode, pulses were sent directly downward and Magellan measured the time it took a radar pulse to reach Venus and return in order to determine the distance between the spacecraft and the planet.
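The altimetry measurement described above is a simple time-of-flight range: the one-way distance is d = c·t/2. A minimal sketch (the 290 km range is illustrative, near the low point of the mapping orbit):

```python
# Radar altimetry: pulse round-trip time for a given spacecraft-to-surface range.
C_KM_S = 299_792.458     # speed of light, km/s

def round_trip_time(range_km):
    """Two-way travel time in seconds for a radar pulse, t = 2d/c."""
    return 2 * range_km / C_KM_S

t = round_trip_time(290.0)           # illustrative range, km
print(f"round trip: {t*1e3:.2f} ms")  # ≈ 1.93 ms
```

Timing that echo precisely, orbit after orbit, is what builds up the elevation map.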
Study of the Magellan high-resolution global images is providing evidence to understand the role of impacts, volcanism, and tectonism in the formation of Venusian surface structures. The surface of Venus is mostly covered by volcanic materials. Volcanic surface features, such as vast lava plains, fields of small lava domes, and large shield volcanoes are common. There are few impact craters on Venus, suggesting that the surface is, in general, geologically young - less than 800 million years old. The presence of lava channels over 6,000 kilometers long suggests river-like flows of extremely low-viscosity lava that probably erupted at a high rate. Large pancake-shaped volcanic domes suggest the presence of a type of lava produced by extensive evolution of crustal rocks.
The typical signs of terrestrial plate tectonics - continental drift and basin floor spreading - are not in evidence on Venus. The planet's tectonics is dominated by a system of global rift zones and numerous broad, low domical structures called coronae, produced by the upwelling and subsidence of magma from the mantle.
Although Venus has a dense atmosphere, the surface reveals no evidence of substantial wind erosion, and only evidence of limited wind transport of dust and sand. This contrasts with Mars, where there is a thin atmosphere, but substantial evidence of wind erosion and transport of dust and sand.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Submitted by kfesenma on Fri, 2012-01-13 08:00
A team of researchers at Caltech has devised a new method for making complex molecules. The reaction they have come up with should enable chemists to synthesize new varieties of a whole subclass of organic compounds called nitrogen-containing heterocycles, thus opening up new avenues for the development of novel pharmaceuticals and natural products ranging from chemotherapeutic compounds to bioactive plant materials such as morphine.
Submitted by mwoo on Thu, 2012-01-12 15:00
Astronomers from the California Institute of Technology (Caltech) and the University of Arizona have released the largest data set ever collected that documents the brightening and dimming of stars and other celestial objects—two hundred million in total.
Submitted by katien on Thu, 2012-01-12 08:00
Scientists have long seen evidence of social behavior among many species of animals. Dolphins frolic together, lions live in packs, and hornets construct nests that can house a large number of the insects. And, right under our feet, it appears that roundworms are having their own little gatherings in the soil. Until recently, it was unknown how the worms communicate to one another when it's time to come together. Now, however, researchers from Caltech have identified, for the first time, the chemical signals that promote aggregation.
Submitted by mwoo on Wed, 2012-01-04 18:00
Saturn's largest moon, Titan, is an intriguing, alien world that's covered in a thick atmosphere with abundant methane. Titan boasts methane clouds and fog, as well as rainstorms and plentiful lakes of liquid methane. The origins of many of these features, however, remain puzzling to scientists. Now, Caltech researchers have developed a computer model of Titan's atmosphere and methane cycle that, for the first time, explains many of these phenomena in a relatively simple and coherent way.
Submitted by katien on Tue, 2011-12-20 08:00
Identifying the composition of the earth's core is key to understanding how our planet formed and the current behavior of its interior. While it has been known for many years that iron is the main element in the core, many questions have remained about just how iron behaves under the conditions found deep in the earth. Now, a team led by mineral-physics researchers at Caltech has honed in on those behaviors by conducting extremely high-pressure experiments on the element.
Submitted by mwoo on Wed, 2011-12-14 18:00
It was the brightest and closest stellar explosion seen from Earth in 25 years, dazzling professional and backyard astronomers alike. Now, thanks to this rare discovery—which some have called the "supernova of a generation"—astronomers have the most detailed picture yet of how this kind of explosion happens. Known as a Type Ia supernova, this type of blast is an essential tool that allows scientists to measure the expansion of the universe and understand the very nature of the cosmos.
Submitted by mwoo on Wed, 2011-12-14 08:00
Physicists have announced that the Large Hadron Collider (LHC) has produced yet more tantalizing hints of the existence of the Higgs boson. At the European Center for Nuclear Research in Geneva, the international team of thousands of scientists—including many from Caltech—unveiled for the first time all the data taken over the last year from the two main detectors at the LHC, the Compact Muon Solenoid and ATLAS. The results represent the largest amount of data ever presented for the Higgs search.
Submitted by mwoo on Tue, 2011-12-13 08:00
Researchers have set a new world record for data transfer, helping to usher in the next generation of high-speed network technology. The international team was able to transfer data in opposite directions at a combined rate of 186 gigabits per second (Gbps) in a wide-area network circuit. The rate is equivalent to moving two million gigabytes per day, fast enough to transfer nearly 100,000 full Blu-ray disks—each with a complete movie and all the extras—in a day.
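The quoted equivalences are easy to verify; the sketch below assumes decimal units (1 GB = 10⁹ bytes) and roughly 20-25 GB of content per Blu-ray disk, since the article does not state either convention:

```python
# Checking the quoted equivalences for a 186 Gbps transfer rate.
bits_per_s = 186e9
bytes_per_day = bits_per_s / 8 * 86_400   # 86,400 seconds per day
gb_per_day = bytes_per_day / 1e9          # decimal gigabytes (assumed convention)

disks = gb_per_day / 25                   # assuming ~25 GB per single-layer Blu-ray
print(f"{gb_per_day:,.0f} GB/day, ~{disks:,.0f} Blu-ray disks")
```

This gives about two million GB per day, as quoted; at roughly 20 GB of movie-plus-extras per disk, the disk count approaches the article's 100,000 figure.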
Submitted by lorio on Mon, 2011-12-12 08:00
Caltech president Jean-Lou Chameau was in Paris on Monday, December 12, to announce the launch of Analytical Pixels.
Submitted by admin on Tue, 2011-12-06 08:00
We've all heard that no two snowflakes are alike. Caltech professor of physics Kenneth Libbrecht will tell you that this has to do with the ever-changing conditions in the clouds where snow crystals form. Now Libbrecht, widely known as the snowflake guru, has shed some light on a grand puzzle in snowflake science: why the canonical, six-armed "stellar" snowflakes wind up so thin and flat.
Problem Set #12 | Answers | February 6, 1998
Carboxylic Acid Derivatives
Ege has reams of exciting problems at the end of chapters 14 and 15. Try tackling, for example, the following.
14.41, 14.39
15.21, 15.22, 15.25, 15.32, 15.34, 15.43, 15.44.
1. In addition, you need to practice writing mechanisms. Take the transformations shown on the bottom of page 643 and top of page 644 (ethyl 5-chloropentanoate to the lactone of 5-hydroxypentanoic acid) and show the mechanism of each step.
2. How would you make the lactone of 5-hydroxypentanoic acid from cyclopentene?
3. This problem should be of interest to the bio-types in the course, the environmentally concerned as well as car buffs. It involves the formation of a synthetic lubricant component from a natural oil.
Oleic acid can be derived from natural oils. In what form does it occur, and what reaction is required to liberate oleic acid?
Oleic acid can be reacted to form the two products A and B shown below. What reagent would you use for this? What are the names of the two products?
The two products A and B are separated and then converted to the acids C and D, respectively. How would you carry out these transformations. Name C and D.
Diacid D can be reacted with 1-tridecanol to form the diester. Similarly, C can be reacted with E to form a tetra ester. How would you carry out these transformations?
These esters are used as components in synthetic lubricants. They are mixed with other components to form a lubricant which is said to give better engine wear. They can also be recycled easily.
Step 1: Ozone followed by reductive workup using Zn or CH3SCH3
B: 9-oxo-nonanoic acid
Step 2: oxidation; a variety of oxidizing agents can be used, e.g., Cr2O7^2-
C: nonanoic acid
D: 1,9-nonanedioic acid (azelaic acid)
Step 3: Ester formation; use acid catalyst
4. This question is intended for review and to help you study the material
Transformations between the oxygen containing functional groups
Fill in the reagents and conditions needed to effect the various transformations
JUST FOR REVIEW - FILL IN ON YOUR OWN
06 feb 98; jp
Suppose you collect a sample of 11-locus Y-chromosome haplotypes from 100 men and every haplotype in the sample is different. We know that the various loci are linked, so it would not be valid to estimate the population frequency of a haplotype by assuming the "multiplication rule"; most haplotypes figure to be more common than the product of the frequencies of the alleles at each locus.
Since multiplication is out, what is a good guess as to
the frequency of each haplotype? 1/100?
No, 1/100 is a crazy guess. If I guess that every haplotype has a
frequency of about 1/100, then I am guessing that almost all the men
in the population are one of these 100 types. That's possible, but if
it were so, what is the chance to obtain a database such as I have, with each haplotype seen only once? It is vanishingly small (about 100!/100^100, on the order of 10^-42).
Do I really believe I am that lucky? The frequency of each
haplotype has to be something like 1/100^2 in order to make it reasonably likely that 100 random men will have no duplications. In fact, I think it is reasonable to guess that any particular haplotype
is at least as rare as 1/5000 (and the sky's the limit).
Suppose instead that you sample 100 men, determine 9-locus Y-chromosomal haplotypes, and there are some duplicates. However, 70 haplotypes occur exactly once.
Pick any one of the unique haplotypes at random. What is the
probability that the next man you see has the same haplotype?
About (ln(100/70))/99, or 1/278, is a reasonable answer in the sense of
First remark: This is an unpublished, and as far as I know, original result.
Please give a citation if you use it.
Second remark: Can you prove it? (I have a mildly implausible derivation,
but not a proof.)
Third remark (April, 2002): I can prove it.
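Both back-of-the-envelope figures above can be checked numerically. The sketch below is my own illustration, not from the original page: `prob_all_distinct` computes the chance that 100 draws from 100 equally likely types contain no duplicates, and `match_probability` simply encodes the (ln(n/k))/(n - 1) estimate quoted above.

```python
import math

def prob_all_distinct(n_men: int, n_types: int) -> float:
    """Chance that n_men independent draws from n_types equally likely
    haplotypes contain no duplicates. Computed in log space via lgamma
    to avoid floating-point underflow."""
    log_p = (math.lgamma(n_types + 1)
             - math.lgamma(n_types - n_men + 1)
             - n_men * math.log(n_types))
    return math.exp(log_p)

def match_probability(n: int, k: int) -> float:
    """The (ln(n/k))/(n - 1) estimate: n men sampled, k haplotypes
    seen exactly once; chance the next man matches a chosen singleton."""
    return math.log(n / k) / (n - 1)

# 100 men, 100 equally likely types: an all-distinct database is
# astronomically unlikely, so 1/100 is indeed a crazy guess.
p_lucky = prob_all_distinct(100, 100)   # ~1e-42

# 100 men, 70 singleton haplotypes: roughly 1/278.
p_match = match_probability(100, 70)
```

With two men instead of a hundred, the same function gives the familiar 99/100 chance of no collision, which is a quick sanity check on the log-space arithmetic.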
Comments? Questions? Disputes?
PHP is a scripting language that has to be interpreted; this is done through server software. If you wish to use PHP on a Windows XP machine, you'll have to ensure that some form of server software is running (IIS is the most common for Windows). Then you'll have to install the PHP interpreter, put the files in the localhost directory, and access the web page through an internet browser. You can get the required downloads (PHP, not IIS) at php.net, which also has excellent documentation.
Operating systems do not really impact PHP programming per se. No matter what operating system you run, you will have to install a local server at a minimum, and view the resulting web pages through an internet browser. PHP is a server-side scripting language, and therefore has to be interpreted through server functionality. If you use Windows, you can install IIS; if you use Linux, you can install Apache. Either way, server software (or at least software that emulates a server) is necessary.
[Haskell-cafe] The simple programming language,
juventino_85 at hotmail.com
Sun Mar 26 09:17:41 EST 2006
If somebody can help me with this I will be really thankful.
Finally, we come to the program. The simple programming language,
which includes loops, conditionals and sequential composition will be
a datatype in Haskell.
data Prg = Skip
         | PrintS String
         | PrintE Exp
         | Declare Variable
         | Variable := Exp
         | Prg :> Prg
         | IfThenElse Exp (Prg, Prg)
         | While Exp Prg
The program Skip does nothing. The program PrintS ‘prints’ the given
string to the output. Similarly, PrintE prints the value of the given
expression to the output.
Declare declares a variable, assigning its initial value to 0. The :=
operator performs variable assignment.
The :> operator is sequential composition, while IfThenElse and While
are conditional and while-loops respectively. The condition in both cases
resolves to true if the value of the expression is not zero.
A factorial program can be programmed in this small language as follows:
counter = "counter"
result = "result"
factorial n =
     Declare counter
  :> Declare result
  :> result := Val 1
  :> counter := Val n
  :> While (Var counter)
     (  result := (Var result) :*: (Var counter)
     :> counter := (Var counter) :-: (Val 1)
     )
  :> PrintS ("Factorial of "++show n++" is:")
  :> PrintE (Var result)
(a) The simulate function takes a Store and a program, and returns an
updated store and list of output strings:
simulate :: Store -> Prg -> (Store, [String])
Using pattern matching, define simulate for Skip, PrintS and variable declaration.
(b) Add the cases for PrintE and variable assignment.
(c) Sequential composition can be defined by simulating the two instructions
in sequence. Complete the following case:
simulate store (p1 :> p2) =
let (store', outs1) = simulate store p1
    (store'', outs2) = simulate store' p2
(d) Define simulate for conditionals.
(e) We can ‘cheat’ by simulating while-loops in terms of conditionals and sequential composition:
simulate store (While condition program) =
  simulate store (IfThenElse condition
    ( program :> While condition program, Skip ))
Using this definition, test your program with the factorial program.
View this message in context: http://www.nabble.com/The-simple-programming-language%2C-t1344638.html#a3596582
Sent from the Haskell - Haskell-Cafe forum at Nabble.com.
More information about the Haskell-Cafe mailing list
Tuesday, September 2, 2008 01:12 AM
SOAP: Simple Object Access Protocol is a protocol for communicating with a web service. SOAP is XML that travels over HTTP. A SOAP XML message can be a request or a response.
WSDL: Web Service Description Language. Again, this is XML, and it describes the web service: what kinds of methods the web service exposes, what parameters need to be passed for each method, and what the return value is. If any of the methods expects a complex type (a non-standard Java type), the WSDL contains the schema for the complex type. It also contains information about where the web service is running.
Both of the above are XML and are in a standard format given by the W3C. Because web services maintain the same standard, web services are language-independent.
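A minimal sketch of what a SOAP request looks like on the wire may help. This example is my own illustration, not from the original post: the service namespace, method name (GetQuote), and parameter are all hypothetical. It uses only Python's standard library to build a SOAP 1.1 envelope and re-parse it the way a receiving service would.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(method: str, params: dict, service_ns: str) -> bytes:
    """Build a minimal SOAP 1.1 request envelope as XML bytes."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{service_ns}}}{method}")
    for name, value in params.items():
        arg = ET.SubElement(call, f"{{{service_ns}}}{name}")
        arg.text = str(value)
    return ET.tostring(envelope)

# Hypothetical service and method, for illustration only:
xml_bytes = build_request("GetQuote", {"symbol": "IBM"},
                          "http://example.com/stockservice")

# The receiving side parses the same XML back out of the Body element:
root = ET.fromstring(xml_bytes)
method_elem = root.find(f"{{{SOAP_NS}}}Body")[0]
```

The point of the sketch is the shape, not the library: because both sides agree on this XML structure, the client and server can be written in any language.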
With all the talk about global warming, you'd think that summer in Southern California would be hotter than ever. But anyone who lives within 10 miles of the ocean knows that every summer brings "June Gloom" and its incessant, miserable gray clouds that blow in from the Pacific and stick around until late in the afternoon (and which, despite their name, often arrive in April or May and last until July or August). The crappy weather means that summer usually lasts from late August until early October, which makes no sense. We asked Charlie Zender, an associate professor of earth systems science at UC Irvine, for some answers.
OC Weekly: What is June Gloom?
Charlie Zender: It's part of the natural cycle that affects Southern California around the Catalinas every year. It's fairly predictable and fits a pattern driven by a combination of ocean, atmospheric and land interaction. If you look at the globe from outer space, Southern California is clear and desert-like, but extending off the coast to Hawaii is a constant deck of stratospheric clouds. For much of the year there is a deck of stratocumulus, and during May and June, the winds and ocean circulation undergo a rearrangement that causes those clouds to intensify and be drawn over the land during the morning, usually burning off in the afternoon.
Where else on the planet does this happen?
There are three other places around the world that have these extensive stratocumulus decks: Peru, Namibia, along the Skeleton Coast, and off the West Coast of Australia. They are all arid areas. The key thing for Southern California is that our coast doesn't really go from North to South. It goes from North-West to South-East, and in May and June, the wind blows straight along the coast. This intensifies upwelling of cold water along the coast: the surface cools down, it intensifies the marine layer. Meanwhile, over the land here in Orange County during the summer, we are warming up because the sun is directly overhead, the desert is heating up, and daytime highs are increasing. The desert regions generate a huge thermal, which sucks cold marine air in to replace the air that is rising. The power for the sea breeze is that summertime heat cooking the desert, and you get a sea breeze on steroids in the summer that brings in a lot of clouds.
How will global warming affect summer weather in Orange County?
We aren't sure, but we have predictions consistent with all the observations so far. In the long run, in the next few decades, the offshore Pacific Ocean will probably get colder off the coast. This isn't part of global warming: it's part of a natural cycle. And the temperature will only go down by less than a degree. You're not going to see huge icebergs bumping into Catalina Island. But the effect of that is that it increases the change in temperature from the ocean to the land. If you strengthen that sea breeze conveyor belt, you can expect the June Gloom to penetrate deeper and last longer. Global warming could actually cause June Gloom to intensify even more. If both ocean and desert temperatures go up, there's no net increase in June Gloom, but with global warming, you increase temperatures over land more than over the ocean. It takes much longer to warm up the ocean. There is no disagreement that the land will warm more than the ocean. That will speed up the merry-go-round of the sea breeze.
Are you sure global warming is real? The Bush administration isn't so sure.
Global climate change is one of the saddest aspects of industrialization. You see all this great improvement in quality of life, and everyone wants that, and the bottom line is we can't currently industrialize and energize economies without fossil fuels. But there is no legitimate scientific dispute with the fact that greenhouse gases warm the planet. It is increasingly clear that without any compunction for future generations, we are going to blow through the available oil and inexorably damage the planet. You can already see the effect of this on tree frogs and polar bears. But I'm glad about the fact that the so-called debate about whether global warming is real has died down. People that just want to be provocative and debate global warming are quieting down, and now we can at least have a discussion about what we can do about global warming. The good news is that it turns out that you can preserve many of the services the climate provides by just reducing the rate of burning all the fossil fuels. It has important consequences for preserving the ocean cycles we have. I think we are closer to making the right decisions than we were 10 years ago.
Forget species extinction. Let's talk about summer. Are you saying that global warming could actually make our summers colder, cloudier and crappier than they already are?
Yes, but we'll be glad about that because we will feel the heat much less than areas further inland, like Bakersfield or the Inland Empire, which will see very large temperature increases over the coming decades. And you can save some money on sunscreen. There's always a silver lining.
Only 29 percent of the world's surface is land. The rest is ocean, home to marine lifeforms. The oceans average four kilometers in depth and are fringed with coastlines that run for nearly 380,000 kilometres.
This animation uses Earth science data from a variety of sensors on NASA Earth observing satellites to measure physical oceanography parameters such as ocean currents, ocean winds, sea surface height and sea surface temperature.
MarineBio is deeply committed to marine conservation and founded on the concept that, by sharing the wonders of the ocean and marine life, people will be inspired to protect it. We hope you will consider becoming a MarineBio Conservation Society member to help us bring the ocean and the conservation message to as many people as possible. There are many other organizations working on marine conservation and other environmental issues such as biodiversity and global warming.
Interesting Ocean Facts. Area: about 140 million square miles (362 million sq km), or nearly 71% of the Earth's surface. Average depth: 12,200 feet (3,720 m).
The surface of the planet is approximately 71% water and contains (5) five oceans, including the Arctic, Atlantic, Indian, Pacific and Southern. Their borders are indicated on the world image (right) in varied shades of blue.
Six of the seven species of sea turtles in the world are found on the Reef: Green, Leatherback, Hawksbill, Loggerhead, Flatback and Olive Ridley.
A coral reef is home to many sea animals and plants.
Seahorses are truly unique, and not just because of their unusual equine shape.
Numerous species of krill inhabit the world’s oceans. (One world ocean: http://oceanservice.noaa.gov/education/literacy/ocean_literacy.pdf.) One particular species, Antarctic krill, has made headlines lately because of its alarming decline. What’s driving this decline?
Climate Control Is Coming (Apr, 1958)
The catalog of techniques on the third page just looks like a list of environmental disasters nowadays.
Climate Control Is Coming
If Spain could have subdued the devastating storm that swept its Armada from the English Channel in July 1588, would all the Americas be speaking Spanish today?
If Napoleon’s proud legions could have neutralized Russia’s secret ally, “General Snow” how would the map of Europe look now?
If the Nazis could have ordered gales to batter Gen. Eisenhower’s vast invasion force off Normandy on June 6, 1944, what would historians now be writing about World War II?
Armchair strategists have long debated the tantalizing "ifs" introduced into history by the vagaries of weather. In military operations, weather is usually a potent foe or a mighty ally.
Up to now, man—at war and in peace—has remained at the mercy of nature. But there is mounting evidence that this will change. U.S., Russian, and other meteorologists are engaged in a critical race to impose their wills on the winds to create weather—even climate—to their liking. Or, conversely, to harass an enemy with storms or droughts.
Indeed, the question is no longer: “Can man modify the weather and control the climate?” but “Which nation will do it first, the United States or the Soviet Union?”
One of those working to tame the elements for the West is Capt. Howard T. Orville, U.S.N. (ret.), who for four years has headed President Eisenhower's Advisory Committee on Weather Control. In submitting his committee's final report, Orville said: "If an unfriendly nation gets into a position to control the large-scale weather patterns before we can, the results could even be more disastrous than nuclear warfare."
One of Orville’s consultants, Dr. Bernard Vonnegut, a pioneer weather-control researcher, has compiled a separate report which lists some of the astonishing possibilities for weather control now being explored both in America and Russia. His study, soon to be made public, ticks off uses of weather as a weapon and in long-range economic rivalry.
Cloud-seeding techniques might be used to open large holes in cloud formations to increase visibility for air raiders, Vonnegut states. The same principles might also be employed to increase cloud cover over enemy territory — perhaps eventually to hang a long-lasting curtain over a given area, blotting out all sunlight.
Doctor Edward Teller, the hydrogen-bomb scientist, recently described the potentialities of such a fair-weather monopoly, “Please imagine,” he told the Senate Preparedness subcommittee, “a world . . . where (the Soviets) can change the rainfall in our country in an adverse manner. They will say, ‘we are sorry if we hurt you. We are merely trying to do what we need to do in order to let our people live.’ ”
To this warning Prof. Henry G. Houghton, Massachusetts Institute of Technology meteorologist, added: “I shudder to think of the consequences of a prior Russian discovery of a feasible method of weather control … an unfavorable modification of our climate in the guise of a peaceful effort to improve Russia’s climate could seriously weaken our economy and our ability to resist.”
The meteorologists’ growing understanding of how and where weather is born is allowing man to intervene more and more with the elements. Earth’s weather is brewed in the comparatively thin (8 miles deep) layer of the lower atmosphere by an exquisite balance of cosmic and terrestrial forces.
Life-giving solar radiation pours down on the earth's surface; some heats the ground, some is reflected back to heat the air, and some evaporates water in the world's oceans, lakes, and seas. Overhead, like the glass roof of a giant greenhouse, the atmosphere imprisons the heat of the day, preventing it from radiating away into space at night. This heat balance, together with the rotation of the earth, propels the mighty ocean currents and the great rivers of air which determine what kind of a day it is today, and how it might change tomorrow.
Man is experimenting with this basic knowledge in new, ingenious ways. For example, both the U.S. and the Soviet Union are trying to put the free energy from the sun to work for them. One plan to reclaim frozen areas involves sprinkling sunlight-absorbing soot over snow-covered lands. They hope the resulting thaw will eventually permit productive agricultural use of such plateaus.
In a world where water is becoming the most precious mineral, control of the moisture balance between air, land, and sea becomes more and more important. The U.S. Geological Survey’s experimental laboratory in Denver, Colo., is using a harmless, tasteless chemical film (hexadecanol, a substance also found in ladies’ lipstick) that actually can seal in bodies of water to reduce evaporation.
If it could be done on a large scale, this would deprive adjacent land areas of rain. Other chemicals might be used for the opposite effect: By speeding evaporation, rainfall could be increased.
There has been much speculation about using hydrogen bombs to break up hurricanes. But the weather experts now think they have better ways to fight the fury of the winds. Sometime during the hurricane season this coming summer, the U. S. Weather Bureau may attempt to divert a hurricane away from the southeastern U. S. coast by using the heat updraft from massive patches of burning fuel oil poured on the sea at crucial points.
As for H-bombs, they may someday prove valuable in trimming mountaintops to redirect wind patterns. Atomic Energy Commission officials have hinted at such mammoth landscaping tasks for the radiation-free bombs it is trying to perfect. One early beneficiary of such a project might be smog-ridden Los Angeles; if science could trim the surrounding mountains, a new wind pattern would sweep the smog away.
Some of today's most spectacular weather-taming plans involve the Arctic and Antarctic iceboxes, principal breeding areas of the world's cold fronts. Changes in the size and shape of the polar icecaps would have profound effects on the rest of the world. In the ultimate remodeling—say, the thawing of the north polar region—ocean levels would rise an estimated 40-100 feet, inundating New York, London, Le Havre, and other near sea-level ports.
Two methods to alter the polar packs have been discussed by would-be weather controllers: First, using scores of nuclear bombs to thaw some of the deep-ice areas in the Antarctic and, second, redirecting warm ocean currents—by dams, channels, or jetties—to reduce the Arctic’s ice fields.
The Russians have long been interested in the Arctic for strategic reasons and because so much of their territory borders the Arctic Circle. Dr. Harry Wexler, chief of research for the U.S. Weather Bureau and a frequent polar visitor, gives this assessment of the Soviet efforts there to date: "They have been conducting big arctic expeditions since 1937. Literally they have covered the whole arctic basin within 100 miles of the North American continent. They make our own efforts look puny by comparison. They have done excellent work in climatology, and in basic cloud physics, and have much greater facilities for studying weather."
Aware of this challenge, Capt. Orville’s presidential committee recently urged more vigorous government support of basic meteorological research. Specifically, the committee suggested research in solar effects on weather, global air circulation, dynamics of cloud motion, and origin and movement of large-scale storms.
A confirmed believer in the feasibility of large-scale weather control perhaps in 20 or possibly fewer years, Orville says it “is essential to have some international cooperation in this field, possibly through the U.N.” Pending such agreement, however, he wants the U. S. second to none in weather knowledge.
U. S. Weather Bureau chief Francis W. Reichelderfer is all in favor of more money for such basic research but he also is convinced that a “crash” effort “will not give us the basic knowledge we need for a real weather program.”
Reichelderfer is supported in this warning by many meteorologists. Forecasting, for all the new rocket probes, radar plots, and electronic calculators, is still an imprecise science. Before man intervenes, for example, to increase solar radiation intake by blacking snow and speeding water evaporation, he must be sure what the over-all effects will be.
With imperfect knowledge, it is possible weather changes will boomerang on man, and his massive efforts to harness climate might instead initiate the return of the glaciers and a new Ice Age.
Despite this warning, the race to master weather—to make it a weapon—accelerates in the U.S. and the U.S.S.R. | <urn:uuid:91000d42-d9c8-45ef-941d-1039783d1bc1> | 2.890625 | 1,919 | Personal Blog | Science & Tech. | 47.900006 |
Types of Flies
Small-headed flies (Acroceridae family) go against the grain of stereotypical fly behavior.
Unlike many other fly families, most adult small-headed flies are flower feeders. They are known for their extended proboscis. The above picture partially shows the proboscis reaching to the bottom of the flower for nectar.
Juvenile or larvae small-headed flies display equally remarkable behavior. They are carnivores, of spiders no less. Adults usually lay eggs near spider habitat. Young larvae hop on passing spiders, burrow inside, and proceed to feed on the spider.
© 2006 Patricia A. Michaels
Happy weekend, dear readers. My math classes are currently studying surface area and volume of solids and we came across the conundrum of cylinder volume. How oddly counter-intuitive is it that a short, seemingly smaller cylinder can actually hold more capacity than a taller, skinnier cylinder? Hence the video above, in which I try a classic conservation of quantity experiment out on Zombie Teacher's 4-year-old son Ethan (who is not currently a zombie).
Piaget would tell you that the conservation of continuous quantity is a developmental skill. By the time we are six years old, most of us have some understanding that equal amounts of liquid or solid don't change quantity simply by putting them in taller or fatter containers. But it's quite amusing to try out the experiment on younger children! And occasionally on older kids and grown-ups, just to see how spatially advanced they are.
For example, these two cylinders do NOT have the same volume. But can you guess which one holds more liquid? Think it's the taller one? You're WRONG! We are studying the math in class right now and it's as simple as stacking coins. Prism volume is equivalent to the area of its base, (big B) multiplied by its height. In the case of a cylinder, the area of the bottom "coin" or base, is equal to volume of a 1-coin cylinder (except in units cubed instead of squared). The "height" of the cylinder then becomes how many coins are stacked; thus, increasing the volume of the first "layer" by its height/layer factor.
The volume of the shorter, "horizontally gifted" cylinder, then, is 7 x 7 x 7 x pi = 343 pi
The volume of the taller, "vertically gifted" cylinder, then, is 18 x 4 x 4 x pi = 288 pi
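The arithmetic is easy to double-check in a few lines. A quick sketch (my own, reading the products above as pi x radius² x height: radius 7, height 7 for the short cylinder; radius 4, height 18 for the tall one):

```python
import math

def cylinder_volume(radius: float, height: float) -> float:
    """V = (area of the base "coin") x (number of coins stacked) = pi * r^2 * h."""
    return math.pi * radius ** 2 * height

short_fat = cylinder_volume(7, 7)    # 343*pi -- the "horizontally gifted" one
tall_thin = cylinder_volume(4, 18)   # 288*pi -- the "vertically gifted" one
```

So the shorter cylinder really does hold more: 343 pi (about 1078 cubic units) versus 288 pi (about 905).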
I like to show Ethan's conservation video after having taught the formula, and then to challenge my students to go home and try the experiment with family members (hopefully younger ones). AS LONG AS they make an attempt to explain not only the purpose of the experiment, but the math behind it, to the subject of their experiment. This forces them to internalize the formula in an effort to explain it to someone else. Because we all know the best way to really learn material is to have to teach it yourself!!
|Yes, those are match box cars playing with the solids.|
Other cool tricks that kids can try, or you can demo before the class, include the pyramid/prism and cone/cylinder volume conundrum. Did you know that there is a proportional relationship between the volume of a cylinder, and the volume of a cone? and a sphere too? There is a similar relationship between square- or rectangular- based prisms and pyramids of equal base sizes. It just doesn't visually make sense.
I like to survey my class first, either with fingers or on paper, to write down how many times they think the pyramid solids will fit into their prism counterparts. I tell them they can give me "half a knuckle" if they think it needs a decimal answer (it doesn't). Then I show them either with rice, or on this absolutely fantastic interactive web page from CMP2. Make sure you have the sound on because it has fantastic sound effects! The little sink "fills" the solids, and the drain "empties" them out.
First, fill the cone and pour it into the cylinder. It's not full. Do it again. It's still not full. Do it again. It's full! The cone fills the cylinder 3 times. Hence, the pi x radius squared x height formula is proportionally true for a cone; it's just 1/3 the answer.
Try filling the cone and dumping it into the sphere. The cone fills the sphere TWICE!
And the creepiest, coolest one of all, is to fill the cone and sphere, and empty them both into the cylinder to fill it perfectly! The cone fills up all the gaps between the round parts of the sphere, like melting ice cream from the cone back into the tub.
The 1/3 relationship also works with a rectangular- or square-based pyramid and its equal-based prism counterpart, LxWxH (x1/3 for the pyramid). And you do NOT need to spend any money at all to use the online app. You just need either a computer bay for students to try it themselves, or a Smart board to demo it in front of the class.
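The filling demo's ratios fall straight out of the standard volume formulas. A short sketch of my own, assuming (as the demo appears to) that all three solids share the same radius and that the cone and cylinder have height equal to the sphere's diameter:

```python
import math

def cylinder(r: float, h: float) -> float:
    return math.pi * r ** 2 * h

def cone(r: float, h: float) -> float:
    return math.pi * r ** 2 * h / 3

def sphere(r: float) -> float:
    return 4 / 3 * math.pi * r ** 3

r = 5.0       # any radius works; the ratios don't depend on it
h = 2 * r     # cone and cylinder height equal to the sphere's diameter

cone_fills_cylinder = cylinder(r, h) / cone(r, h)     # 3 pours
cone_fills_sphere = sphere(r) / cone(r, h)            # 2 pours
leftover = cylinder(r, h) - (cone(r, h) + sphere(r))  # 0: cone + sphere fill it exactly
```

Try changing r: the 3-to-1 and 2-to-1 ratios never budge, which is exactly why the rice demo works with any matched set of solids.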
I am lucky enough to have both... so I show it on the Smart board and with the rice first, and then give the kids a chance to try it themselves. You'd think it was a preschool party, with how many 7th graders try to swarm the rice "sensory" table to play with my solids sets. And oh, how nasty the floor gets. I get down on my hands and knees after school to scrape up as much of the rice mess as I can, so the custodians don't report me for destruction of the carpet! And I feed them a lot of cookies at Christmas time :o)
Try it out. Play. Experiment. See!? Math is fun. | <urn:uuid:bdb8709a-7c90-4ef6-839a-daf199ea3237> | 3.515625 | 1,090 | Personal Blog | Science & Tech. | 68.521617 |
Two complementary radar sounder instruments work together to discover hidden Martian secrets. They are the Mars Advanced Radar for Subsurface and Ionospheric Sounding (MARSIS) on the European Space Agency's Mars Express orbiter and the Shallow Subsurface Radar (SHARAD) on NASA's Mars Reconnaissance Orbiter.
MARSIS was designed to penetrate deep and it has delivered on its promise. This figure shows the base of Mars' south polar layered deposits at the deepest recorded point of 3.7 kilometer (2.3 miles).
In contrast, SHARAD, which was designed as a high-resolution radar with a maximum penetration of 1 kilometer (0.6 mile), has difficulty detecting the base of these layered deposits.
MARSIS was funded by NASA and the Italian Space Agency and developed by the University of Rome, Italy, in partnership with NASA's Jet Propulsion Laboratory, Pasadena, Calif. Italy provided the instrument's digital processing system and integrated the parts. The University of Iowa, Iowa City, built the transmitter for the instrument, JPL built the receiver and Astro Aerospace, Carpinteria, Calif., built the antenna. JPL is a division of the California Institute of Technology in Pasadena. Additional information about Mars Express is at www.esa.int/marsexpress.
SHARAD was provided by the Italian Space Agency (ASI). Its operations are led by the University of Rome and its data are analyzed by a joint U.S.-Italian science team. JPL, a division of the California Institute of Technology, Pasadena, manages the Mars Reconnaissance Orbiter for the NASA Science Mission Directorate, Washington.
I have the theory that: if you drive a vehicle into a static, unbreakable wall, you will feel the same G-force and get the same injuries as if you would drive into your exact copy but mirrored (same car, weight, velocity, angle) head to head.
Everybody [says] there would be more energy transferred to the drivers and more injuries ... but I don't agree. I think the effect should be exactly the same when the energy is divided between the bodies no matter if it's a wall or your mirror image/clone.
Force - Colliding With a Wall
Consider case A, in which car A collides with a static, unbreakable wall. The situation begins with car A traveling at a velocity v and it ends with a velocity of 0. The force of this situation is defined by Newton's second law of motion. Force equals mass times acceleration. In this case, the acceleration is (v - 0)/t, where t is whatever time it takes car A to come to a stop.
The car exerts this force in the direction of the wall, but the wall (which is static and unbreakable) exerts an equal force back on the car, per Newton's third law of motion. It is this equal force which causes cars to accordion up during collisions.
It is important to note that this is an idealized model. In case A, the car slams into the wall and comes to an immediate stop, which is a perfectly inelastic collision. Since the wall doesn't break or move at all, the full force of the car into the wall has to go somewhere. Either the wall is so massive that it accelerates/moves an imperceptible amount or it doesn't move at all, in which case the force of the collision actually acts on the entire planet - which is, obviously, so massive that the effects are negligible.
Force - Colliding With a Car
In case B, where car A collides with car B, we have some different force considerations. Assuming that car A and car B are complete mirrors of each other (again, this is a highly idealized situation), they would collide with each other going at precisely the same speed (but in opposite directions). From conservation of momentum, we know that they must both come to rest. The mass is the same. Therefore, the force experienced by car A and car B is identical, and is identical to that acting on the car in case A.
This explains the force of the collision, but there is a second part of Anton's question - the energy considerations of the collision.
Now For Energy...
Force is a vector quantity while kinetic energy is a scalar quantity, calculated with the formula K = 0.5mv².
In each case, therefore, each car has kinetic energy K directly before the collision. At the end of the collision, both cars are at rest, and the total kinetic energy of the system is 0.
Since these are inelastic collisions, the kinetic energy is not conserved, but total energy is always conserved, so the kinetic energy "lost" in the collision has to convert into some other form - heat, sound, etc.
In case A, there is only one car moving, so the energy released during the collision is K. In case B, however, there are two cars moving, so the total energy released during the collision is 2K. So the crash in case B is clearly more energetic than the case A crash, which brings us to Anton's next point...
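To make the energy comparison concrete, here is a minimal Python sketch. The car's mass and speed are assumed illustrative numbers, not figures from the discussion:

```python
def kinetic_energy(mass_kg, speed_m_s):
    """K = 0.5 * m * v**2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

# Assumed illustrative numbers: a 1500 kg car moving at 20 m/s (~72 km/h).
m, v = 1500.0, 20.0
K = kinetic_energy(m, v)

energy_case_a = K        # case A: one moving car, so K is released into the wall
energy_case_b = 2 * K    # case B: two moving cars, so 2K is released in total

print(energy_case_a)  # 300000.0 J
print(energy_case_b)  # 600000.0 J -- twice the energy, same force per car
```

Note that doubling the released energy does not change what each individual car experiences; the extra energy comes from there being a second moving car in the system.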
From Cars to Particles
Anton's letter goes on to say:
A couple of people bring up the particle accelerator and tell me that there's a reason why they accelerate two particles against each other. "It will create more energy, thus damaging the particles more, as with the cars and their drivers."
But I think that only shatters the atoms more, like throwing two glass bottles really hard and they shatter all over more than just throwing a glass at a wall. Cars don't shatter like that so I don't think it applies when the bodies come to a stop.
First, it's important to consider the major differences between the two situations. At the quantum level of particles, energy and matter can basically swap between states. The physics of a car collision will never, no matter how energetic, emit a completely new car!
The car would experience exactly the same force in both cases. The only force that acts on the car is the sudden deceleration from v to 0 velocity in a brief period of time, due to the collision with another object.
However, when viewing the total system, the collision in case B releases twice as much energy as the case A collision. It's louder, hotter, and likely messier. In all likelihood, the cars have fused into each other, pieces flying off in random directions.
And this is why colliding two beams of particles is useful: in particle collisions we don't really care about the force of the particles (which we never even really measure); we care instead about the energy of the particles.
A particle accelerator speeds particles up but does so with a very real speed limitation (dictated by the speed of light barrier from Einstein's theory of relativity). To squeeze some extra energy out of the collisions, instead of colliding a beam of near-lightspeed particles with a stationary object, it's better to collide it with another beam of near-lightspeed particles going the opposite direction.
NOTE: Kinetic energy considerations in the relativistic situation within particle accelerators, where the particles are traveling near the speed of light, are not quite as straightforward as the classical case involving cars ... but using relativity and Lorentz transformations, analogous equations come out fairly easily that correct for the relativistic changes. When these are taken into account, colliding opposite beams still yields more energy output than colliding a beam with stationary matter.
From the particle's standpoint I don't know that it would so much "shatter more," but definitely when the two particles collide more energy is released. In collisions of particles, this energy can take the form of other particles, and the more energy you pull out of the collision, the more exotic the particles are.
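The collider advantage can be sketched with the standard relativistic invariant-mass formulas (units where c = 1, energies in GeV). The beam and rest energies below are illustrative numbers, not values from the text:

```python
import math

def sqrt_s_collider(beam_energy):
    """Center-of-mass energy for two identical head-on beams of energy E:
    sqrt(s) = 2E (particle masses negligible at high energy)."""
    return 2.0 * beam_energy

def sqrt_s_fixed_target(beam_energy, rest_energy):
    """Center-of-mass energy for a beam hitting an identical particle at
    rest: s = 2*E*m + 2*m**2, with m the rest energy (units where c = 1)."""
    return math.sqrt(2.0 * beam_energy * rest_energy + 2.0 * rest_energy ** 2)

# Illustrative numbers: protons (rest energy ~0.938 GeV) at a 6500 GeV beam.
E, m = 6500.0, 0.938
print(sqrt_s_collider(E))                    # 13000.0 GeV available head-on
print(round(sqrt_s_fixed_target(E, m), 1))   # ~110.4 GeV against a fixed target
```

The fixed-target energy grows only with the square root of the beam energy, which is why opposing beams are so much more effective at producing exotic particles.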
Conclusion
In answer to Anton's original theory, therefore, I believe that he's basically correct. His hypothetical passenger would not be able to tell any difference whether he was colliding with a static, unbreakable wall or with his exact mirror twin.
His friends are also right that the particle accelerator beams get more energy out of the collision if the particles are going in opposite directions, but they get more energy out of the total system - each individual particle can only give up so much energy, because it only contains so much energy.
>I am fasinated with whales. I have got lots of information about whales
>but I haven' t got any about what whales evolved from. I have been
>watching walking with beasts and it told us a little about the earliest
>whales in programme 1 but I can't remember the information about them.
>Please send me all the information you know about the earliest whales.
>Alex Lucas (age 10)
I too am fascinated with whales, so I'm happy to discuss them with
you. Your question is very timely because there has been some new
information about whale evolution just recently. Below is the abstract of a
breakthrough development in the study of whale evolution. It's a wonderful
demonstration of how science progresses by the rules of empirical
methodology, and old assumptions are overturned by new evidence. I know
some of the words will be new for you, but I'll summarize the results, and
maybe your parents or teacher can help you with the big words.
A few years ago it was believed the whales evolved from a dog-like
animal, a carnivore with sharp teeth. Then some DNA studies showed that
sperm whales, which have very large teeth, were more closely related to
baleen whales, which have no teeth, than they were to the other toothed
whales. That didn't make sense at the time. But now this new discovery shows
that the earliest whales are actually descended from some kind of
artiodactyl (grazing animals), although they don't know what it looked
like. So the first whales were not meat-eating predators, but were more
inclined to graze through concentrations of plankton, like copepods and
krill, and so they probably first evolved some kind of filter feeding
ability. That means the toothed whales evolved in several branches from the
baleen whales, which explains why sperm whales are more closely related to
baleen whales than they are to toothed whales like orcas.
There are new discoveries all the time in whale research, and
there are plenty of important questions still unanswered for future whale researchers.
Thewissen, J. G. M.; E. M. Williams; L. J. Roe and S. T. Hussain. (2001).
Skeletons of terrestrial cetaceans and the relationship of whales to
artiodactyls. Nature (London) 413(6853):277-281. 2001.
Abstract: Modern members of the mammalian order Cetacea (whales, dolphins
and porpoises) are obligate aquatic swimmers that are highly distinctive in
morphology, lacking hair and hind limbs, and having flippers, flukes, and a
streamlined body. Eocene fossils document much of cetaceans' land-to-water
transition, but, until now, the most primitive representative for which a
skeleton was known was clearly amphibious and lived in coastal
environments. Here we report on the skeletons of two early Eocene pakicetid
cetaceans, the fox-sized Ichthyolestes pinfoldi, and the wolf-sized
Pakicetus attocki. Their skeletons also elucidate the relationships of
cetaceans to other mammals. Morphological cladistic analyses have shown
cetaceans to be most closely related to one or more mesonychians, a group
of extinct, archaic ungulates, but molecular analyses have indicated that
they are the sister group to hippopotamids. Our cladistic analysis
indicates that cetaceans are more closely related to artiodactyls than to
any mesonychian. Cetaceans are not the sister group to (any) mesonychians,
nor to hippopotamids. Our analysis stops short of identifying any
particular artiodactyl family as the cetacean sister group and supports
monophyly of artiodactyls.
This archive was generated by hypermail 2b30 : Mon Feb 25 2002 - 21:06:00 EST
The beetle order embraces more species than any other group in the animal
kingdom. At least 250,000 species are known, more than one-quarter of all
animal species. About 160 families exist; although some contain only 1
or 2 species, others, such as the weevil (q.v.), contain 30,000 species,
far more, for example, than all mammal species. The classification of this
enormous number of forms is extremely difficult. The order is generally
divided into four suborders, namely, Archostemata, Adephaga, Myxophaga,
and Polyphaga; these are, in turn, subdivided into series and super families.
The families are further divided into subfamilies, and these are subdivided
into tribes and genera. Some coleopterists find still other groupings necessary
to indicate the many relationships and differences among beetles.
Beetles vary widely in their habits and are found under the most diverse conditions. A few live in salt water, more in fresh water, and a small number breed in hot springs.
Some beetles live under the bark of living and dead trees.
Numerous beetles feed on the roots, wood, leaves, flowers, and fruit of living plants, causing great economic damage. Some beetles, such as the ladybird beetle, prey on pest insects and thus are important in biological control.
[Figure: magnification ×4.0]
Others are scavengers, living on dung or dead animals. Some are parasitic and live in the nests of ants, bees, or termites, existing on food brought into the nest by the hosts or on the hosts themselves. Virtually every product of the animal or vegetable kingdom supplies some beetle, including the bookworm, with food.
[Figure: magnification ×1.0]
Common name for many species of insect in the order Orthoptera, which also includes grasshoppers and katydids. The species often called true crickets, such as the field cricket Gryllus assimilis of the Americas, make up the subfamily Gryllinae of the family Gryllidae. Some are cave or house dwellers. These insects have long antennae and hind legs adapted for jumping; their hearing organs are located on the front legs. Cricket species are characterized by the chirping call of the male, produced by rubbing a grooved ridge on the underside of one of the front wings against the sharp edge of the other front wing. The solitary animals remain by day in crevices or shallow burrows dug in the soil, emerging at night to feed on vegetation and on aphids and other insects. During breeding season the male attracts a female with its call, sometimes driving off other males that intrude on its territory. The female uses its long, spear-like ovipositor to insert eggs into the soil or plant stems. The young, called nymphs, resemble the adults and reach full size after 6 to 12 molts; as adults, they live 6 to 8 weeks.
Many other orthopterans are called crickets, such as the burrowing mole crickets, which have strong front claws for digging and hind legs that are not adapted for jumping. Some are only distantly related to true crickets.
CAS Lectures Spring 2013
The CAS Lecture Series is an opportunity for the general public to learn more about astronomy than the weekly observing sessions permit. The series consists of several talks given by CAS members or outside speakers on a variety of topics in astronomy or space exploration. In the event that the lecture series is not filled, it is complemented with episodes of the Cosmos series hosted by Carl Sagan. The lectures are held on Fridays throughout the fall and spring semesters at 7:00 PM in the first floor classroom at Fuertes. Little or no astronomy background is required. If weather permits, the observatory will be open for public viewing after the talk.
Most of the universe is invisible, almost literally. All of the cloudy nights we have in Ithaca that prevent us from seeing the night sky do not compare to the impossibility of trying to see dark matter with your eyes, a telescope, a microscope, or any other type of scope. Dark matter is not hiding. It makes up over 75% of the mass in the universe and yet we cannot see it. I will talk about what dark matter is, why we think so much dark matter exists, why it is impossible to "see," and the new state-of-the-art attempts to detect dark matter --- something that could happen possibly even in the next few years.
This is NOT Your Parents' Solar System...
Dwarf planets, Kuiper Belt Objects, chaotic orbits, the Crater of Doom, and now hot Jupiters around other stars... Many people long for the Good Old Days(TM) when the solar system was simple: nine planets going around the Sun like clockwork, and just a few comets and asteroi-DUCK! being untidy. Well, how about the REALLY Good Old Days, when there were only seven movable things and a bunch of stars all going around the Earth? The history of planetary astronomy reflects the scientific process: as we get new data, we have to revise how we think about things--and our place in the universe. Join us for a tour of how we learned about the solar system, why the last 50 years has truly been the Golden Age of planetary science, and a bit about current research. As a lab practical (assuming the cloud gods permit us!), we'll observe a few of the planets and asteroids currently visible.
Heaven and Hell
Carl Sagan (in DVD form)
Almost everyone knows the name of famed astronomer and science popularizer Carl Sagan. A Cornell professor from 1971 until his death in 1996, one of his personal projects was the filming of a television series called Cosmos, which originally ran on public television in 1980 and held the record for most widely-watched series on PBS for the next decade. It addressed the history of science, and especially astronomy, throughout the ages, our findings about the universe, and the impact of this knowledge on humanity's past and future. We will be showing the fourth episode of the series, Heaven and Hell, instead of our ordinary Public Lecture this week. This series is really a classic, and well worth watching!
Blues for a Red Planet
Carl Sagan (in DVD form)
Almost everyone knows the name of famed astronomer and science popularizer Carl Sagan. A Cornell professor from 1971 until his death in 1996, one of his personal projects was the filming of a television series called Cosmos, which originally ran on public television in 1980 and held the record for most widely-watched series on PBS for the next decade. It addressed the history of science, and especially astronomy, throughout the ages, our findings about the universe, and the impact of this knowledge on humanity's past and future. We will be showing the fifth episode of the series, Blues for a Red Planet, instead of our ordinary Public Lecture this week. The episode focuses on the planet Mars, from our earliest visual observations through its current (as of the 1990s) exploration and the future possibility of colonization. This series is really a classic, and well worth watching!
Observations on Literature & the Night Sky from a Late-Blooming Star-Geezer
Wallace Watson, a CAS member and retired English professor and dean (Duquesne University, Pittsburgh PA,) will discuss, and show examples of, astronomical references in European and American literature, as well as his own recent ventures into amateur astronomy.
The Development of Rocket Technology
Don Barry, an astronomer working and teaching here at Cornell and a frequent visitor to Fuertes, will be giving a historical overview of the development of rocket technology as a whole, starting with Tsiolkovsky and Goddard, focusing some detail on the 1950s and 1960s, and in particular discussing citizen participation in that early period, when hundreds of people were involved in amateur observations to help in the determination of orbits of those first spacecraft.
This was also CAS's First Annual Yuri's Night Celebration!
CAS member Brecken Blackburn will be giving a lecture this week on asteroid impacts, a perennial favorite of science fiction novels and apocalyptic prophecies alike. But how much do we know about these silent killers? She will be talking about what asteroids actually are, how they have shaped Earth's development, and the technologies that could find and protect us from death from the skies.
ET Phones Home?: Astrobiological Thoughts
CAS President Adrian Poniatowski will be giving a lecture on the possibility of extraterrestrial life, reasonable considerations one can make about such beings, recently discovered exoplanets that could be host to ET, and finally ending with a debate about the philosophical implications of our place in the Universe, be it teeming with civilized life or devoid of it.
P. Philip Thomson
John I. Dunlop
Dept. of Appl. Phys., School of Phys., Univ. of New South Wales, Sydney 2052, Australia
Mathematical models for characterizing the propagation of acoustic waves in shallow water require knowledge of such acoustic properties as dilatational velocity, attenuation constant, shear velocity, and attenuation of the seafloor. In situ measurement of these properties is difficult due to the remoteness of the sea bottom. There are uncertainties in predicting these properties from geological features such as porosity, grain size, density, etc., and there is a need for direct measurements. This paper outlines some exploratory work on the laboratory measurement of core samples taken from the North West Shelf of Australia and subsequent mathematical modeling to predict general propagation characteristics. The sound-speed ratio and attenuation constant were measured by timing a high-frequency wave packet through a length of sediment core. Shear wave measurements were made using a similar measurement frame with piezoceramic bender disk transducers of 1- to 2-kHz resonance frequency. Before making measurements, the samples were individually evacuated in a mild vacuum for a short period and then slowly infused with seawater at room temperature. Measurements were made at three different positions in the cores corresponding to different depths.
The term climate change refers to variation in the Earth’s global and regional climate. The changes may come from internal processes, external forces, or human activities. Anthropogenic climate change is climate change caused by human activity. In the context of environmental policy, “climate change” describes the ongoing changes in the modern climate, including the process known as global warming. The average temperature of the Earth’s atmosphere is rising due to human activities. The primary source of this is the increased volume of carbon dioxide and other greenhouse gases released by the burning of fossil fuels, agricultural activity, and deforestation.
Italy's high court has overturned government and local approval for Italian utility Enel's plans to convert its oil-fired Porto Tolle power plant into a coal technology plant. Approximately 250 MW of the plant's capacity was planned with carbon dioxide capture and storage technology.
The Polish carbon dioxide capture and storage project in Bełchatów has been granted 137 million euros from Norway Grants, which backs environmental, social, and economic development programmes and projects in priority areas agreed with each of the Grants’ 12 beneficiary countries in Europe.
Alstom has launched a detailed study of its 13 pilot and demonstration projects with carbon dioxide capture and storage (CCS). It shows that the electricity costs from CCS-equipped coal-fired power plants will be competitive with electricity generated from renewable sources.
China leads wind power development efforts to date. The country finished off 2010 with a whopping total of 41.8 GW of capacity and is hoping to reach 200 GW by 2020.
The Washington Invasive Species Council evaluated more than 700 invasive species in and around Washington to analyze which posed the greatest threat to the state’s environment, economy, and human health. The council selected 50 priority species for action in the short term.
To do the analysis, the council developed an assessment tool that evaluated each of the 50 species on their impact and ability to be prevented. The scores were plotted on the invasive species management priorities grid, which is being used as a management tool to guide council action.
It is important to note that there are many other groups doing important work with invasive species lists of their own. Some include only plants or aquatic species, some are specific to a region. The council’s statewide list represents the top threats from all categories of species – plants, animals, insects, algae, and pathogens.
In most microscopy applications, the sample is a thin slice or surface through a three-dimensional specimen, and the proper interpretation of the structure measurements is based on stereological rules. Modern stereological procedures emphasize the efficient and unbiased sampling of the specimen, and make use of relatively simple measurement or counting procedures, often using grids placed on the sample, to obtain the desired results. The metric properties of a structure such as the volume fraction, surface area, length, and curvature, can all be determined by the examination of representative section planes. Topological properties such as the number of discrete objects and the connectivity of networks require at a minimum the comparison of two parallel sections.
The Volume Fraction Measurement interactive Java tutorial illustrates procedures for measuring the volume fraction, in this case of the dark-stained organelles seen in TEM images. Thresholding the digitized images does not delineate the structures well, but a morphological opening and closing correct this. The total area of the organelles can be determined by counting pixels. Assuming that the images are representative of the specimen, the volume fraction is measured by the area fraction.
However, a preferred method for estimating the volume fraction is carried out by placing a sparse grid of points on the image, as shown, and counting the fraction of those points that “hit” or fall on the structures of interest. This might seem like a less accurate measurement, but it has several advantages. The most important is inherent in changing the procedure from one of measurement to one of counting. The statistics of counting independent events provides a direct estimate of the measurement precision.
Counting a few points on multiple fields of view is very quick, and by repeating the procedure until (for instance) the total number of hits reaches 400, an overall precision of 5 percent would be obtained. This is because the square root of 400 is 20, which is 5 percent of 400. For a precision of 10 percent only 100 hits would be needed, while 1000 hits would produce a precision of 3 percent.
Note in the example that the estimates of volume fraction for each field of view obtained by the area measurement and the grid point count are not too dissimilar, while the variation from field to field is considerable. It is important to examine enough fields of view to obtain a representative sample of the specimen, and the point count method generally insures this. Also note that while the area is measured by counting pixels, this is not the same as the use of the grid because the pixels are close together. For the square root of the number of hits to be a valid estimate of the precision of the count, the points must be independent samples of the structure, meaning that the grid must be sparse enough that the points rarely fall on the same feature.
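As a toy illustration of why the square root of the hit count sets the precision, the following Python sketch drops sparse, independent grid points on a phase that truly occupies 30% of the volume. The 30% figure and the point counts are assumed for the demonstration, not data from the tutorial:

```python
import random

random.seed(1)

TRUE_VV = 0.30   # assumed "true" volume fraction of the toy specimen

def point_count_estimate(n_points):
    """Drop n sparse, independent grid points and count the hits."""
    hits = sum(random.random() < TRUE_VV for _ in range(n_points))
    return hits, hits / n_points

for n in (100, 400, 1000):
    hits, vv = point_count_estimate(n)
    # The precision of a count of N independent events is about sqrt(N),
    # i.e. a relative precision of 1/sqrt(N): 400 hits -> ~5%.
    rel = hits ** -0.5 if hits else float("inf")
    print(f"{n} points: {hits} hits, V_V ~ {vv:.3f}, precision ~{100 * rel:.0f}%")
```

Because each point is an independent sample, the counting statistics directly report how trustworthy the estimate is, which a pixel-area measurement cannot do.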
The surface area of contact between different structures is also an important measure in many instances. In section images, the surfaces present in three dimensions appear as boundary lines. In the example, the boundaries of the white phase in the metal are delineated as contour lines. Measuring the length of the contour lines allows the calculation of the surface area per unit volume of the sample, according to the relationship shown in Equation 1. Note that unlike the volume fraction measurement above, which produces a dimensionless number, the surface area per unit volume measurement requires knowing the image magnification. In this case the area of the region in each image is 567 square µm.
As for the volume fraction, using a counting procedure is preferred to a measurement. In this case a grid of lines is drawn on the images. For the case of random section placement and orientation in the structure, any grid of lines can be used and the square grid is convenient. In the Surface Area Measurements interactive Java tutorial, the total length of the grid lines is 114 µm. Counting the number of “hits” that the contour lines make with the grid lines also allows calculation of the surface area per unit volume.
As for the volume fraction measurement, note that variation from one field of view to another is much greater than the differences between the results from the grid count and contour length measurements for each region.
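Assuming the standard stereological relation S_V = 2 × (intersections per unit length of test line), which is consistent with the counting procedure described here, the arithmetic is a one-liner. The grid-line length comes from the text; the intersection counts below are invented for illustration:

```python
GRID_LINE_LENGTH_UM = 114.0   # total test-line length per field (from the text)

def surface_area_per_volume(intersections, line_length_um):
    """S_V = 2 * P_L, where P_L is intersections per unit test-line length.
    Result is in um^2 of surface per um^3 of volume (units of 1/um)."""
    return 2.0 * intersections / line_length_um

# Invented intersection counts for three fields of view:
for field, count in enumerate([23, 31, 27], start=1):
    sv = surface_area_per_volume(count, GRID_LINE_LENGTH_UM)
    print(f"field {field}: S_V = {sv:.3f} per um")
```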
The length of structures can also be measured by a counting procedure. The Length Measurement interactive Java tutorial shows a transmission light microscope image of dendritic processes viewed in a section of known thickness. Measuring the length of the structures seen in the image would not take into account their three-dimensional wanderings up and down in the section. However, any grid lines drawn on the image represent surfaces that extend vertically downwards through the section, and counting the intersections made by the structures with those lines can provide a correct measurement of the total length per unit volume as shown in Equation 2.
In this example the sections were cut using a specific orienting protocol which, together with the cycloid-shaped grids shown, produces unbiased isotropic sampling of the structure even if the structure itself has preferred orientation. The total length of the grid lines in the example is 326 µm and the section thickness is 4 µm.
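A sketch of the corresponding calculation, assuming the standard relation L_V = 2·Q_A with the test-surface area equal to grid length times section thickness. The grid length and thickness come from the text; the intersection count is invented:

```python
GRID_LENGTH_UM = 326.0        # total grid-line length (from the text)
SECTION_THICKNESS_UM = 4.0    # section thickness (from the text)

def length_per_volume(intersections, grid_len_um, thickness_um):
    """L_V = 2 * Q_A, where Q_A is intersections per unit area of the
    vertical test surface swept by the grid lines through the section."""
    test_surface_um2 = grid_len_um * thickness_um
    return 2.0 * intersections / test_surface_um2   # um of length per um^3

# Invented count of 42 intersections in one field:
print(round(length_per_volume(42, GRID_LENGTH_UM, SECTION_THICKNESS_UM), 4))
# -> 0.0644 (um of length per um^3, i.e. units of 1/um^2)
```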
In all of these examples, the generation of the appropriate grids of points or lines is conveniently performed by the computer. Counting may be performed manually, but in most cases combining the grid lines with the binary image representing the structure using a Boolean AND, and then automatically counting the number of hits, provides a more efficient way to handle the multiple fields of view that must be measured to obtain meaningful results.
The most straightforward way to count the number of features per unit volume requires the comparison of two section planes a known distance apart (and close enough together that features cannot “hide” in the space between them). Aligning images from serial sections is difficult, but the confocal light microscope simplifies this procedure by obtaining optical sections. Any feature that is seen in the bottom section and is absent in the upper section must have a unique uppermost point within the volume between the sections. Counting those tops provides an unambiguous and unbiased value for the number in the volume defined by the image area and the section spacing.
In the Counting Features Per Unit Volume interactive Java tutorial, fluorescence images of oil droplets from two sections are thresholded, and the features separated with a watershed. Placing the resulting images into the red and green channels of a color image shows yellow wherever the two colors overlap. Since features can change size from one section to the other, the presence of yellow within a feature means that it continues through the two sections, and therefore should not be counted. Those features that are entirely green are ones that appear in the lower section but not the upper, and are isolated and counted to obtain the result.
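A minimal sketch of the tops-counting arithmetic described above. The field area, section spacing, and count below are assumed values for illustration, not the tutorial's data:

```python
IMAGE_AREA_UM2 = 567.0   # assumed field area for this sketch
SPACING_UM = 2.0         # assumed distance between the two optical sections

def number_per_volume(tops, area_um2, spacing_um):
    """Each 'top' marks one feature whose uppermost point lies in the
    slab between the sections, so N_V = tops / (area * spacing)."""
    return tops / (area_um2 * spacing_um)

# Invented count: 9 features appear in the lower section but not the upper.
print(round(number_per_volume(9, IMAGE_AREA_UM2, SPACING_UM), 5))  # 0.00794 per um^3
```

Because each feature has exactly one uppermost point, this count is unbiased regardless of feature size or shape.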
John C. Russ - Materials Science and Engineering Dept., North Carolina State University, Raleigh, North Carolina, 27695.
Matthew Parry-Hill and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
© 1998-2009 by Michael W. Davidson, John Russ, Olympus America Inc., and The Florida State University. All Rights Reserved. No images, graphics, scripts, or applets may be reproduced or used in any manner without permission from the copyright holders. Use of this website means you agree to all of the Legal Terms and Conditions set forth by the owners.
Newton's Second Law of Motion
As learned earlier in Lesson 3 (as well as in Lesson 2), the net force is the vector sum of all the individual forces. In Lesson 2, we learned how to determine the net force if the magnitudes of all the individual forces are known. In this lesson, we will learn how to determine the acceleration of an object if the magnitudes of all the individual forces are known. The three major equations that will be useful are the equation for net force (Fnet = m•a), the equation for gravitational force (Fgrav = m•g), and the equation for frictional force (Ffrict = μ•Fnorm).
The process of determining the acceleration of an object demands that the mass and the net force are known. If mass (m) and net force (Fnet) are known, then the acceleration is determined by use of the equation a = Fnet / m.
Thus, the task involves using the above equations, the given information, and your understanding of Newton's laws to determine the acceleration. To gain a feel for how this method is applied, try the following practice problems. Once you have solved the problems, click the button to check your answers.
An applied force of 50 N is used to accelerate an object to the right across a frictional surface. The object encounters 10 N of friction. Use the diagram to determine the normal force, the net force, the mass, and the acceleration of the object. (Neglect air resistance.)
An applied force of 20 N is used to accelerate an object to the right across a frictional surface. The object encounters 10 N of friction. Use the diagram to determine the normal force, the net force, the coefficient of friction (μ) between the object and the surface, the mass, and the acceleration of the object. (Neglect air resistance.)
A 5-kg object is sliding to the right and encountering a friction force that slows it down. The coefficient of friction (μ) between the object and the surface is 0.1. Determine the force of gravity, the normal force, the force of friction, the net force, and the acceleration. (Neglect air resistance.)
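As a check on the method, the third problem above can be worked through in a short sketch (this assumes the standard g = 9.8 m/s² and a level surface; the class and method names are illustrative):

```java
// A sketch checking problem 3 above: m = 5 kg, mu = 0.1, sliding on a level surface.
public class FrictionProblem {
    static final double G = 9.8; // free-fall acceleration, m/s^2 (assumed)

    // Deceleration of a sliding object whose only unbalanced force is friction.
    static double frictionDeceleration(double mass, double mu) {
        double fGrav = mass * G;     // force of gravity (N), downward
        double fNorm = fGrav;        // normal force balances gravity (no vertical acceleration)
        double fFrict = mu * fNorm;  // Ffrict = mu * Fnorm, opposing the motion
        double fNet = fFrict;        // the only unbalanced horizontal force
        return fNet / mass;          // Newton's second law: a = Fnet / m
    }

    public static void main(String[] args) {
        // Fgrav = Fnorm = 49 N, Ffrict = Fnet = 4.9 N, a is about 0.98 m/s^2
        System.out.println(frictionDeceleration(5.0, 0.1));
    }
}
```

Note how each quantity in the chain (Fgrav, Fnorm, Ffrict, Fnet) corresponds to one step of the reasoning asked for in the problem.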
A couple more practice problems are provided below. You should make an effort to solve as many problems as you can without the assistance of notes, solutions, teachers, and other students. Commit yourself to individually solving the problems. In the meantime, an important caution is worth mentioning:
Avoid forcing a problem into the form of a previously solved problem. Problems in physics will seldom look the same. Instead of solving problems by rote or by mimicry of a previously solved problem, utilize your conceptual understanding of Newton's laws to work towards solutions. Use your understanding of weight and mass to find the m or the Fgrav in a problem. Use your conceptual understanding of net force (the vector sum of all the forces) to find the value of Fnet or the value of an individual force. Do not divorce the solving of physics problems from your understanding of physics concepts. If you are unable to solve physics problems like those above, it does not necessarily mean that you are having math difficulties; it is more likely that you are having difficulty with the physics concepts.
1. Edwardo applies a 4.25-N rightward force to a 0.765-kg book to accelerate it across a tabletop. The coefficient of friction between the book and the tabletop is 0.410. Determine the acceleration of the book.
2. In a physics lab, Kate and Rob use a hanging mass and pulley system to exert a 2.45 N rightward force on a 0.500-kg cart to accelerate it across a low-friction track. If the total resistance force to the motion of the cart is 0.72 N, then what is the cart's acceleration?
Skepticism About Lower Atmosphere Temperature Data
Posted on 8 January 2012 by dana1981
Note: This article was submitted to Forbes as a correction to the op-ed by James Taylor in question, but Forbes declined to publish it, so instead we're posting it here.
Forbes recently published an op-ed written by James Taylor of the Heartland Institute on the subject of the University of Alabama at Huntsville (UAH) atmospheric temperature measurements on the record's 33rd anniversary. Unfortunately, the article contained a litany of errors which completely undermine its conclusions, and exhibited a distinct lack of true skepticism.
The main subject of the article was the fact that according to climate models, the Earth's lower atmosphere should warm approximately 20% faster than the surface, whereas UAH estimates place the lower atmosphere warming at about 20% less than surface temperature measurements. A true skeptic would acknowledge that there are three possible explanations for this discrepancy:
- The models are incorrect and the lower atmosphere should not warm faster than the surface.
- The surface temperature estimates are biased high, showing more warming than is actually occurring.
- The UAH lower atmosphere temperature estimates are biased low, showing less warming than is actually occurring.
Because the climate model expectation of greater lower atmosphere warming is based on solid fundamental atmospheric physics, and the accuracy of the surface temperature record was recently independently confirmed by Richard Muller and the Berkeley Earth Surface Temperature (BEST) project, the third possible explanation appears to be the most likely. This possibility is further supported by the fact that other groups have estimated greater atmospheric warming than UAH, and measurements by radiosondes (instruments on weather balloons) also show greater atmospheric warming than UAH.
It is certainly a possibility that is worth considering, and yet it was notably absent from the three possible explanations for the model-data discrepancy provided by James Taylor in his article. In fact, every one of the three possible explanations offered by Taylor involved the man-made global warming theory being either exaggerated or incorrect. Refusing to consider a possibility which is inconvenient for one's pre-conceived notions and/or biases reveals a distinct lack of true skepticism.
Taylor's article contained a number of additional errors. For example, he reported that the UAH temperature data "seem to show warming closer to 0.3 degrees over the 33 year period, or 0.09 degrees Celsius per decade," as opposed to the UAH-reported 0.14°C per decade warming. This is false. John Christy reported that if the influences of volcanic eruptions (which have a temporary cooling effect by releasing particulates into the atmosphere which block sunlight) are filtered out of the UAH record, the warming trend is reduced to 0.09°C per decade. However, in order to make an apples-to-apples comparison, the volcanic influence must also be removed from the climate models, which neither Christy nor Taylor did.
Additionally, a recent study by Foster and Rahmstorf filtered out the effects of not just volcanic eruptions, but also the El Niño Southern Oscillation (ENSO) and solar activity, which can also have significant short-term impacts on global temperatures. They confirmed Christy's finding that removing volcanic effects decreases the warming trend over the past three decades, but additionally removing ENSO and solar influences increases the trend over that same period. In other words, by only removing the influence of volcanoes, Christy and Taylor cherrypicked the effect which would minimize the observed warming trend. This again exhibits a distinct lack of true skepticism.
Taylor also implied that unlike surface temperature measurements, the UAH satellite data do not "require guesswork corrections." In reality, the UAH record requires a great number of corrections, because the satellite instruments do not even directly measure atmospheric temperatures. Rather, they measure the intensity of microwave radiation given off by oxygen molecules in the atmosphere, from which the scientists estimate the temperature. The satellites' sensors face down toward the Earth, so radiation reaches them having travelled upwards through a warming lower atmosphere and a cooling upper atmosphere. This influences any warming signal received by the satellites and, because the lower atmosphere is what is being measured, creates a cooling bias that has to be accounted for. But it doesn't end there; bias also exists between the various instrument sensors on each satellite, and the satellite orbits decay over time. These and a number of other obstacles mean a lot of careful and painstaking analysis is required. As a result of all this complexity and data correction, there's much that can go wrong.
Considering these challenges, it's not a surprise that there have been a number of major corrections to the satellite temperature data over the years. Groups outside of UAH identified two major errors in the UAH analysis, both of which had caused Spencer and Christy to significantly underestimate the atmospheric warming. Despite the difficulties in the available data, and the numerous adjustments made to their analysis, Spencer and Christy have all along insisted that their data set is correct, and they (and James Taylor) continue with this overconfidence today. However, the most likely explanation for UAH showing less warming than models and atmospheric physics predict is that UAH is biased low.
Taylor's error-riddled article demonstrates that when it comes to climate science, we should listen to climate scientists, who are true skeptics, rather than a law and policy expert from a fossil fuel-funded think tank.
Rather than correct the errors by publishing this article, Forbes compounded the problem by publishing a very similarly erroneous post from serial misinformer Patrick Michaels (who admits that, like Taylor, he is also heavily fossil fuel-funded). Ironically, Forbes recently published Peter Gleick's 2011 Climate B.S.* of the Year Awards. If Forbes continues with this trend of publishing and compounding misinformation while ignoring corrections, perhaps it will make a run for the 2012 award!
This section provides a brief overview of processes and properties associated with global climate change. The general concepts found in this section are:
This section includes seven classroom activities.
The climate of planet Earth is unstable. Our evolutionary origins lie in the warm, relatively benign climate of equatorial Africa, but our ancestors battled the cold, harsh, and unforgiving climate of the last ice age in order to spread across the planet.
Some 10,000 years ago, however, the ice age ended. We developed agriculture, civilization, industry, and technology generally in a global climate that was warm, pleasant, and mostly predictable. Regional climates have changed, sometimes drastically and disastrously for local human populations, but by and large the global climate has not dealt any significant long-term blows to the spread and development of human civilization.
One of the most significant accomplishments of our species is the discovery of fossil fuels and the means of turning the energy trapped within them into heat, transportation, and the basis for manufacturing and construction.
Present Climates and Human Activity
This discovery and the global industrial revolution that followed changed the world forever for our species. In general, fossil fuels are a legacy bequeathed to us by the biosphere of the distant past. On an ancient warmer Earth with a high concentration of carbon dioxide (CO2) in the atmosphere, photosynthetic organisms (algae and higher plants) absorbed the CO2 and used it to produce abundant organic material. When these organisms died, they were buried deep within the earth and slowly turned into coal and oil.
Since the 1800s, we've been burning vast quantities of these fossil fuels to power our developing technological and global civilization. As a result, we've been releasing the CO2 trapped in the fuels in the form of energy-rich organic molecules back into the atmosphere, increasing the atmospheric concentration of CO2. By itself, this is not a concern. Carbon dioxide comprises a very small proportion of the atmosphere, and no projected increase would affect our breathing. But CO2 has another significant property. As we explored in the Greenhouse Effect section, carbon dioxide absorbs heat. The other major component gases of Earth's atmosphere, oxygen (O2) and nitrogen (N2), do not.
Since the 1800s, CO2 concentrations worldwide have increased from approximately 280 ppm (or 0.028%) to around 365 ppm (0.0365%). The increase seems trivial, but it also means that some 3 gigatons (3 billion metric tons) of CO2 are being added to the atmosphere every year. Because CO2 is a powerful greenhouse gas, we can reasonably conclude that the earth's temperature should go up as concentrations increase. In fact, climatologists have detected a steady but small increase in global average temperatures over the last few decades, based on weather data collected all around the world. Six of the last ten years were among the hottest on record.
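The arithmetic behind those "trivial-seeming" numbers is worth making explicit; this sketch uses only the 280 ppm and 365 ppm figures quoted above:

```java
// Checking the ppm figures above: ppm to percent, and the relative rise since the 1800s.
public class Co2Arithmetic {
    // 1 ppm = one part per million, so divide by 10^6 and scale to percent.
    static double ppmToPercent(double ppm) {
        return ppm / 1_000_000.0 * 100.0;
    }

    public static void main(String[] args) {
        double before = 280.0; // ppm, pre-industrial (from the text)
        double now = 365.0;    // ppm, at the time of writing (from the text)

        System.out.println(ppmToPercent(now));       // CO2 as a percent of the atmosphere
        System.out.println((now - before) / before); // relative increase: roughly 30%
    }
}
```

So while CO2 remains a tiny fraction of the air (about 0.0365%), its concentration has risen by roughly 30% relative to the pre-industrial level.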
Regardless of the cause of the warming, we understand enough about global climate to predict that as the temperature goes up, the entire global climate system powered by heat energy should also change, although the magnitude and direction of the changes are uncertain.
Future Climates: The Great Uncertainty
Are we seeing the end of the long period of benign climate since the last ice age? Will the climate change for the worse because of our actions? In fact, no one knows for sure. Most atmospheric scientists believe that the global climate is warming at least partially because of a build-up of CO2 from fossil fuel use, but what that means to humans and natural ecosystems is largely unknown. The climate is vastly complex and strongly influenced by many factors other than greenhouse gas concentrations. (Some of these factors are explored in the Introduction to Climate Section.) This makes it extremely difficult to link any climatic events or characteristics to a single cause. As a result, controversy exists as to the magnitude and danger of global warming induced by greenhouse gases. Many scientists take the issue very seriously and support efforts to slow or reverse the build-up of atmospheric CO2 with the expectation that global warming will slow as a result. Others, however, contend that CO2 may not be affecting the climate and that the changes are part of natural, long-term climatic cycles. They suggest that efforts to reduce CO2 emissions are unnecessary and dangerous to economic growth and development. While the controversy rages, researchers around the world continue to gather atmospheric data, develop and refine predictive computer models, and try to reduce the uncertainty in our understanding of the earth's climate.
In this unit, you will explore the critical issues in climate change: sources and sinks (or reservoirs) of CO2, the nature of climate change and predictions of future changes, and the elements of the scientific and political debates that will ultimately determine how we respond to climate change.
We know that the earth's climate has changed over time. Throughout the earth's history, there have been periods of glaciation followed by warming trends in which the glaciers retreated toward higher altitudes and latitudes. Today's concerns focus on the current and projected rate of climate change based, in large part, on human activities. By going through this section, students should be able to answer the following questions:
The following activities will help your students better understand the concepts covered in this section.
Kentucky Office of the State Entomologist
If you are moving to Kentucky from an area that has gypsy moth please check out this website (YourMoveGypsyMothFree.com) to be sure you are compliant with Federal laws.
Why be so concerned with this
The European Gypsy Moth was deliberately introduced from Europe at Medford, Mass., in 1868 or 1869 by Trouvelot (he made a living as an artist, painting mostly portraits, but he had an amateur interest in entomology). Trouvelot hoped to raise this moth for silk production. Unfortunately, some of his moths escaped. Trouvelot understood the potential magnitude of this accident and notified local entomologists, but no action was taken. By 1889 the Gypsy Moth was doing heavy damage in certain parts of the Boston area; it is now a serious pest throughout much of the Northeast and is expanding its range.
As a caterpillar, the gypsy moth has a voracious appetite and has been known to completely defoliate forests. The caterpillars feed on about 500 different species of plants. The most preferred host is oak, followed by apple, cherry, hawthorn, hickory, maples, sassafras, sweet gum and willow. Only the caterpillar stage feeds. When fully grown, the caterpillar is about 2 inches long, very hairy, and has five pairs of blue dots followed by six pairs of red dots along its back. The larval stage lasts about seven weeks.
Gypsy moths are spread in two different ways. Natural spread occurs when newly hatched larvae are dispersed by blowing wind. Over the past 10-15 years, gypsy moths have moved long distances on outdoor household articles such as cars and recreational vehicles, firewood and other items. It has been estimated that 85% of new infestations have been through the movement of outdoor household articles. Once established, gypsy moth numbers can fluctuate widely from year to year. Seasons with light damage can be followed by seasons with severe damage. In periods of heavy outbreaks, gypsy moth caterpillars crawl on walls, across roads, over outdoor furniture, and sometimes will come inside homes.
The gypsy moth has four different life stages: egg, caterpillar, pupa and adult moth. The female moth lays eggs in masses which will contain between 500 and 1,000 eggs and will have a fuzzy tan appearance. The eggs hatch in early spring, coinciding with the bud break of most hardwood trees.
Want to know more about gypsy moth? Go to:
In the early 70s — when the media rarely addressed the far-out notion of climate change (or if they did, they put quotes around phrases like “the greenhouse effect”) — scientists at Boulder’s National Center for Atmospheric Research were beginning to realize that people (insignificant though they generally seemed) might be able to impact the global climate.
A 1972 article in the Daily Camera “NCAR, Others Will Study Man’s Effects on Shaky Equilibrium of Earth Climate” appears to be one of the first in the Boulder newspaper to tackle the idea that humans might be able to drive the world to some sort of climatic tipping point.
NCAR scientist William Kellogg said this in the article:
There are obviously stabilizing factors that are strong enough to keep our global climate within reasonably narrow bounds, permitting ice ages to come and go, but damping out any large fluctuations.
But, now, man has entered the scene, and we must ask whether he can reach any of the lever points on this gigantic environmental mechanism and influence it. If there are any lever points that he can reach, history has shown that he will probably be tempted to tamper with them.
The article didn't talk much about greenhouse gases, other than to mention a growing "carbon dioxide blanket" that had the potential to warm the Earth.
Wild chilies make their seeds and fruits to propagate themselves. But chili seeds eaten by mammals tend to be chewed up, and even if they are not, they don't germinate successfully after passing through our digestive systems, and even if they did, mammals don't tend to transport seeds as far as birds would. So it is well worth their while to produce a compound that dissuades mammalian seed predators and simultaneously encourages avian seed dispersers. Chili seeds pass though the guts of most seed eating birds perfectly primed to germinate, and get a ride far from their parent plant.
But in my bird feeder, the seeds are an enticement to the birds I want to attract and a strong discouragement to the squirrels. And because it is whole seed, it won't wash away as chili powder would, and pouring whole seed does not induce as much coughing.
I bought a very cheap bag of over-ripe Thai chilies from my local grocery store, dried them in the sun and then banged them around in the food processor until all the seeds had come loose and settled to the bottom. I then mixed them with commercial bird seed before filling the feeder. The squirrel, to my knowledge, visited the feeder only once.
For each solar eclipse, an orthographic projection map of Earth shows the path of penumbral (partial) and umbral (total or annular) eclipse. North is to the top in all cases and the daylight terminator is plotted for the instant of greatest eclipse. The sub-solar point on Earth is indicated by a star shaped symbol.
The limits of the Moon's penumbral shadow delineate the region of visibility of the partial solar eclipse. This irregular or saddle shaped region often covers more than half of the daylight hemisphere of Earth and consists of several distinct zones or limits. At the northern and/or southern boundaries lie the limits of the penumbra's path. Partial eclipses have only one of these limits, as do central eclipses when the Moon's shadow axis falls no closer than about 0.45 radii from Earth's center. Great loops at the western and eastern extremes of the penumbra's path identify the areas where the eclipse begins/ends at sunrise and sunset, respectively. If the penumbra has both a northern and southern limit, the rising and setting curves form two separate, closed loops. Otherwise, the curves are connected in a distorted figure eight. Bisecting the 'eclipse begins/ends at sunrise and sunset' loops is the curve of maximum eclipse at sunrise (western loop) and sunset (eastern loop). The points P1 and P4 mark the coordinates where the penumbral shadow first contacts (partial eclipse begins) and last contacts (partial eclipse ends) Earth's surface. If the penumbral path has both a northern and southern limit, then points P2 and P3 are also plotted. These correspond to the coordinates where the penumbral shadow cone becomes internally tangent to Earth's disk.
A curve of maximum eclipse is the locus of all points where the eclipse is at maximum at a given time. Curves of maximum eclipse are plotted at each half hour Universal Time. They generally run between the penumbral limits in the north/south direction, or from the 'maximum eclipse at sunrise and sunset' curves to one of the limits. If the eclipse is central (i.e. total or annular), the curves of maximum eclipse run through the outlines of the umbral shadow, which are plotted at ten minute intervals. The curves of constant eclipse magnitude delineate the locus of all points where the magnitude at maximum eclipse is constant. These curves run exclusively between the curves of maximum eclipse at sunrise and sunset. Furthermore, they're parallel to the northern/southern penumbral limits and the umbral paths of central eclipses. In fact, the northern and southern limits of the penumbra can be thought of as curves of constant magnitude of 0.0. The adjacent curves are for magnitudes of 0.2, 0.4, 0.6 and 0.8 (i.e. - 20%, 40%, 60% and 80%). For total eclipses, the northern and southern limits of the umbra are curves of constant magnitude of 1.0. Umbral path limits for annular eclipses are curves of maximum eclipse magnitude.
Greatest eclipse is defined as the instant when the axis of the Moon's shadow passes closest to Earth's center. Although greatest eclipse differs slightly from the instants of greatest magnitude and greatest duration (for total eclipses), the differences are usually negligible. The point on Earth's surface nearest to the axis at greatest eclipse is marked by an asterisk symbol. For partial eclipses, the shadow axis misses Earth entirely. Therefore, the point of greatest eclipse lies on the day/night terminator and the Sun appears on the horizon.
Data pertinent to the eclipse appear with each map. At the top are listed the instant of conjunction of the Sun and Moon in right ascension and of the instant of greatest eclipse, expressed as both Universal Times and Julian Dates. The eclipse magnitude is defined as the fraction of the Sun's diameter obscured by the Moon at greatest eclipse. For central eclipses (total or annular), the magnitude is replaced by the geocentric ratio of diameters of the Moon and the Sun. Gamma is the minimum distance of the Moon's shadow axis from Earth's center in Earth radii at greatest eclipse. The Saros series of the eclipse is listed, followed by a pair of numbers. The first number identifies the sequence position of the eclipse in the Saros, while the second is the total number of eclipses in the series.
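The total-versus-annular distinction implied by the ratio of diameters can be sketched as follows (the semi-diameter values below are hypothetical illustration numbers, not data for any real eclipse):

```java
// Sketch of the central-eclipse classification described above: for a total or
// annular eclipse, the geocentric ratio of apparent diameters (Moon/Sun) decides
// the type. Moon at least as large as the Sun -> total; smaller -> annular.
public class CentralEclipseKind {
    static String classify(double moonSemiDiameterDeg, double sunSemiDiameterDeg) {
        double ratio = moonSemiDiameterDeg / sunSemiDiameterDeg;
        return ratio >= 1.0 ? "total" : "annular";
    }

    public static void main(String[] args) {
        System.out.println(classify(0.2740, 0.2667)); // Moon appears larger  -> total
        System.out.println(classify(0.2620, 0.2667)); // Moon appears smaller -> annular
    }
}
```

This is why the data tables replace "eclipse magnitude" with the ratio of diameters for central eclipses: the ratio both classifies the eclipse and measures how deep the totality or annularity is.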
In the upper left and right corners are the geocentric coordinates of the Sun and the Moon, respectively, at the instant of greatest eclipse. They are:
R.A. - Right Ascension
Dec. - Declination
S.D. - Apparent Semi-Diameter
H.P. - Horizontal Parallax
To the lower left are exterior/interior contact times of the Moon's penumbral shadow with Earth which are defined:
P1 - Instant of first external tangency of Penumbra with Earth's limb. (Partial Eclipse Begins)
P2 - Instant of first internal tangency of Penumbra with Earth's limb.
P3 - Instant of last internal tangency of Penumbra with Earth's limb.
P4 - Instant of last external tangency of Penumbra with Earth's limb. (Partial Eclipse Ends)
Not all eclipses have P2 and P3 penumbral contacts. They are only present in cases where the penumbral shadow falls completely within Earth's disk. For central eclipses, the lower right corner lists exterior/interior contact times of the Moon's umbral shadow with Earth's limb which are defined as follows:
U1 - Instant of first external tangency of Umbra with Earth's limb. (Umbral [Total/Annular] Eclipse Begins)
U2 - Instant of first internal tangency of Umbra with Earth's limb.
U3 - Instant of last internal tangency of Umbra with Earth's limb.
U4 - Instant of last external tangency of Umbra with Earth's limb. (Umbral [Total/Annular] Eclipse Ends)
At bottom center are the geographic coordinates of the position of greatest eclipse along with the local circumstances at that location (i.e. - Sun altitude, Sun azimuth, path width and duration of totality/annularity). At bottom left are a list of parameters used in the eclipse predictions while bottom right gives the Moon's geocentric libration (optical + physical) at greatest eclipse. The value for ΔT (the difference between Terrestrial Dynamical Time and Universal Time) is extrapolated from pre-1996 observations.
The eclipse predictions were generated on a Macintosh iMac using algorithms developed from the Explanatory Supplement with additional algorithms from Meeus, Grosjean, and Vanderleen. The solar and lunar ephemerides were generated from Newcomb and the ILE, respectively. The author uses a smaller value of k (=0.272281) for total and annular calculations than the one adopted by the 1982 IAU General Assembly. This results in a better approximation to the Moon's minimum diameter and consequently a shorter total or longer annular eclipse. The IAU value for k (=0.2725076) is retained for partial phases. All predictions are with respect to the Moon's center of mass; no corrections have been made for the center of figure.
All eclipse calculations are by Fred Espenak, and he assumes full responsibility for their accuracy. Some of the information presented on this web site is based on data originally published in:
Permission is freely granted to reproduce this data when accompanied by an acknowledgment:
"Eclipse Predictions by Fred Espenak, NASA's GSFC"
For more information, see: NASA Copyright Information
Comprehensive Description
Biology
Inhabit outer reef slopes to depths of at least 50 m, inner reef flats and lagoons. Juveniles common in weedy areas of estuaries (Ref. 4919). Also found in coastal bays and estuaries, usually near rocky reef or on sand-stretches between reefs with low algae-rubble reef to about 20 meters depth, or in shallows with sparse seagrass growth (Ref. 48637). Benthopelagic (Ref. 58302). Usually solitary and territorial on sandy to rubble areas. Feed on fleshy, calcareous, or coralline algae, detritus, mollusks, tunicates, sponges, corals, zoanthid anemones, crabs, tube worms and echinoderms (Ref. 1602).
You can read through the entire section from the top or jump directly to a particular item by clicking on a list entry:
Imagine a very small sphere (the nucleus, shown in yellow) at the center of another sphere about 100,000 times larger in diameter. The nucleus contains the protons and neutrons in a very small volume. The electrons are distributed throughout the much larger volume of space, shown in gray.
A chemical element is a bunch or collection of atoms --all of the same type-- having different chemical and physical properties from any other collection of like atoms. There are about 109 known elements at present; some occur in nature, while others are created in nuclear reactors and exist but a short time. The naturally existing elements also vary widely in their abundance or occurrence in nature. Hydrogen, oxygen, silicon, and carbon are very common on earth, but gold, platinum, and palladium are much rarer (and hence more expensive!)
Each element has a name and is represented by an abbreviated symbol. Usually the symbol is the first or first and second letter of the name. In a few cases, the symbol does not match the English language name, because the element was named long ago in Latin (or some other language). Examples of some common element names and symbols are:
|Element Name||Element Symbol|
|Hydrogen||H|
|Oxygen||O|
|Silicon||Si|
|Carbon||C|
|Gold||Au|
|Platinum||Pt|
|Palladium||Pd|
The table below compares the charge, mass and location of the subatomic particles of interest to chemists.
Because the mass of each particle is so small, a new unit of mass is defined: the atomic mass unit (amu), equal to the mass of 1 proton, which is very nearly the mass of 1 neutron as well.
Notice how small the mass of an electron is. It would take 1835 electrons to weigh the same as 1 proton!
|Particle Name||Charge||Mass in kg||Mass in amu||Location|
|Proton||+1||1.673 x 10^-27||1||Nucleus|
|Neutron||0||1.675 x 10^-27||1||Nucleus|
|Electron||-1||9.109 x 10^-31||1/1835||Outside nucleus|
The two heavy particles, the proton and neutron, are found in the nucleus only. For practical purposes, this means the entire mass of the atom is concentrated in the very small volume occupied by the protons and neutrons. The lighter electrons are outside the nucleus, and contribute essentially no mass to the total weight of the atom, but occupy an enormously large volume of space.
Since like-charged particles repel each other the repulsive forces among the protons are very large. You can think of the neutrons as "spacers" added to the nucleus to reduce how close the + charged protons get to each other. This helps lower the total repulsion energy.
The atomic number of any element = number of protons in the nucleus. Since an atom is electrically neutral, the number of protons must equal the number of electrons. So we can also say that an element's atomic number = number of electrons. The atomic number uniquely characterizes each element. Any two atoms of the same element have the same atomic number.
Protons and neutrons have practically the same mass, and each one is almost 2000 times heavier than an electron. Hence, an element's atomic mass in amu = the number of protons + the number of neutrons.
The number of neutrons a particular atom possesses can vary, and is not readily predicted. Two atoms of the same atomic number that have different numbers of neutrons are called isotopes. Some elements occur only as a single isotope, while others may have several. For example, all atoms of Lithium have atomic number 3 (= 3 protons in nucleus, 3 electrons outside nucleus). However, if we examine a sample of Lithium, out of every 100 Lithium atoms we find about 93 atoms have 4 neutrons and about 7 of 100 have 3 neutrons. Both Lithium isotopes have the identical chemical properties of Lithium. They differ only in atomic mass: one has a mass of 7 amu (= 3 protons + 4 neutrons), while the other has atomic mass of 6 amu (= 3 protons + 3 neutrons).
|Isotope Symbol||Atomic Number||Number of Neutrons||Atomic Mass|
|Li-7||3||4||7 amu|
|Li-6||3||3||6 amu|
Based on what we have just seen, the number of neutrons present in a particular isotope can be readily calculated as: Number of Neutrons = Atomic Mass (in amu) - Atomic Number.
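This arithmetic is easy to check mechanically. A short Python sketch (not part of the original text; the function name is ours):

```python
def neutron_count(atomic_mass, atomic_number):
    """Number of neutrons = atomic mass (in amu) - atomic number."""
    return atomic_mass - atomic_number

# The two lithium isotopes discussed above:
print(neutron_count(7, 3))  # Li-7 -> 4 neutrons
print(neutron_count(6, 3))  # Li-6 -> 3 neutrons
```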
|Back to Top|
The protons and neutrons of an atom do not participate in ordinary chemical reactions -- they remain uninvolved unless nuclear fission or fusion processes occur. Instead, the reactions of interest to most chemists leave the nucleus unchanged and involve electrons only.
An atom is electrically neutral because it possesses equal numbers of protons (+ charge) and electrons (- charge). Atoms may lose or gain electrons to form ions. An ion has an unequal number of protons and electrons.
Cations are positively charged ions.
Anions are negatively charged ions. If an atom loses electrons, it becomes a cation, since its nucleus now has more + charged protons than there are - charged electrons outside the nucleus. Each electron lost increases the + charge by 1 unit. Thus,
Atom ---> Cation+1 + electron-. For example,
Li ---> Li+ + electron-.
Mg ---> Mg+2 + 2 electron-.
Al ---> Al+3 + 3 electron-. As we will see later, it is possible to predict how many electrons an element may lose (or gain). Some elements can lose a variable number of electrons as well, forming differently charged cations. Iron (symbol Fe) can lose either 2 or 3 electrons, thus becoming Fe+2 or Fe+3, respectively.
Atom + electron- ---> Anion-1. For example,
Cl + electron- ---> Cl-.
O + 2 electron- ---> O-2.
N + 3 electron- ---> N-3.
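The same bookkeeping works for any ion: the net charge is the proton count minus the electron count. A small illustrative Python sketch (the helper name is made up):

```python
def ion_charge(protons, electrons):
    """Net charge of an ion: protons (+1 each) minus electrons (-1 each)."""
    return protons - electrons

print(ion_charge(3, 2))   # Li+ : lithium (3 protons) after losing 1 electron -> 1
print(ion_charge(8, 10))  # O-2 : oxygen (8 protons) after gaining 2 electrons -> -2
```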
|Back to Top| | <urn:uuid:e9f22baa-12bb-47ec-a96e-28b123c7a7a3> | 4.3125 | 1,197 | Knowledge Article | Science & Tech. | 46.64047 |
Instantiating an Object
An Object is an instance of a Class. After creating an Object, you can access the member variables and methods of the object and assign values to them. An Object is declared in the same way that a variable of a primitive type is declared.
String sampleStr; // This declares a variable that can refer to a String object.
You can define a variable that will be used to refer to an object or an instance of a class.
When you create an object, you need to assign memory to the object. This is done using the new operator. The new operator is followed by the class name and parentheses. The syntax for creating an object is displayed below:
class_name variable_name = new class_name();
Example: Test aTest = new Test();
This code creates an instance of the Test class. The variable aTest is declared. Notice that this is an arbitrary variable name. This variable is used to refer to the instance of the Test class. You can also say that the name of this particular Test instance is aTest.
When multiple instances are created, each instance maintains a separate copy of the member variables of the class.
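As a sketch of the ideas above (the Test class follows the earlier example; the field name count is our own addition), creating two instances shows that each object keeps its own copy of a member variable:

```java
// Test is the hypothetical class from the example above.
class Test {
    int count; // member variable; each new instance starts at the default 0
}

class InstantiationDemo {
    public static void main(String[] args) {
        Test aTest = new Test(); // 'new' allocates memory for the object
        Test bTest = new Test(); // a second, independent instance

        aTest.count = 5;         // changes aTest's copy only
        System.out.println(aTest.count + " " + bTest.count); // prints "5 0"
    }
}
```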
Member Access – Dot Notation
The member variables declared in a class are accessed to assign values or to manipulate the values stored in them. Public member variables may be accessed from any object using dot notation. In addition, member variables that are declared without a modifier have package scope; these variables may be accessed using dot notation from any object defined in the same package. Private members, however, cannot be accessed using dot notation. Instead, we define public methods: getters to read private members and setters to assign values to private member variables.
HINT: Getters and Setters
A getter is a method that gets the value of a specific property. A setter is a method that sets the value of a specific property.
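A minimal sketch of the getter/setter pattern (the Account class and its balance field are hypothetical, used only to illustrate the rule above):

```java
// Private members are reached through public getters and setters,
// not through dot notation.
class Account {
    private double balance; // not accessible as acct.balance from outside

    public double getBalance() {           // getter
        return balance;
    }

    public void setBalance(double value) { // setter
        balance = value;
    }
}

class GetterSetterDemo {
    public static void main(String[] args) {
        Account acct = new Account();
        acct.setBalance(100.0);
        // acct.balance = 100.0;  // would not compile: balance is private
        System.out.println(acct.getBalance()); // prints "100.0"
    }
}
```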
A scope defines the logical boundary for accessing a variable.
HINT: Java allows you to use the same identifier for multiple variables as long as they are declared in a different scope.
A variable is in member scope if it is accessible to the methods of the class. Such a variable is declared as a member variable of the class, i.e., directly after the class declaration.
A variable is in local scope if it is declared within any block of code or if it is declared inside a loop condition. A block is a group of statements enclosed within braces.
HINT: How does the compiler resolve a variable?
The Java compiler checks variable definitions in sequence to determine which definition an identifier refers to. It first checks for the variable in the current block; the current scope depends on the statement being executed. If the compiler is unable to find the variable in the current block, it searches for the variable in the current class. Finally, the compiler searches the superclasses of the current class.
Variable Access – ‘this’ Keyword
The ‘this’ keyword can be used to access the member variables of the current object. These member variables of an object are also known as instance variables.
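A short sketch of 'this' resolving a name clash between a constructor parameter and a member variable (the Point class is hypothetical):

```java
// The constructor parameter 'x' shadows the member variable 'x';
// 'this.x' names the instance variable explicitly.
class Point {
    private int x; // instance variable (member scope)

    Point(int x) {  // this 'x' is in local scope
        this.x = x; // left side: the member; right side: the parameter
    }

    int getX() {
        return this.x;
    }
}

class ThisDemo {
    public static void main(String[] args) {
        Point p = new Point(7);
        System.out.println(p.getX()); // prints "7"
    }
}
```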
What are Instance Variables?
When a number of objects are created from the same class blueprint, they each have their own distinct copies of instance variables.
What are Class Variables?
Fields that have the static modifier in their declaration are called static fields or class variables. They are associated with the class, rather than with any object. Every instance of the class shares a class variable, which is in one fixed location in memory. Any object can change the value of a class variable, but class variables can also be manipulated without creating an instance of the class.
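A sketch contrasting a class (static) variable with instance variables (the Counter class is hypothetical):

```java
// 'created' is a class variable shared by every instance;
// 'id' is an ordinary instance variable with one copy per object.
class Counter {
    static int created = 0; // one copy, shared by all instances

    int id;                 // one copy per object

    Counter() {
        created++;          // every construction updates the shared copy
        id = created;
    }
}

class StaticDemo {
    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        System.out.println(Counter.created);   // accessed via the class name
        System.out.println(a.id + " " + b.id); // each object keeps its own id
    }
}
```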
Benefits of Static Variables | <urn:uuid:3f323f34-5c08-4a5a-a865-51ddc8aa58fa> | 4.0625 | 783 | Documentation | Software Dev. | 45.613663 |
Even though the frequency of bunch crossings at the interaction point inside LHCb is 40MHz, only about 10MHz of events will have some particles from the proton-proton collision inside the acceptance of the detector.
The rate of events with all the particles from a B decay contained in LHCb corresponds to about 15kHz. However, the rate of the specific B meson decays that are interesting for physics analysis is a small fraction of that, amounting to a total of a few Hz.
The event rate that can be recorded is limited by the offline computing capacity to about 2kHz. The LHCb trigger aims to provide the highest efficiency for interesting B decays (and some control decays like those of the J/psi) within the allowed rate of 2kHz. It is organized in two levels.
1. Level Zero (L0)
The Level-0 trigger is implemented in custom electronics, and it reduces the rate to 1MHz. It makes use of the fact that particles from a B decay have a higher transverse momentum with respect to the particle beam axis (pT) than particles coming directly from the primary proton-proton interaction.
L0 makes use of those sub-detectors in which high-pT particles can be selected at the high rate required: the calorimeters and the muon system. In addition, it uses two dedicated silicon layers of the VELO to perform a simplified vertex reconstruction, which allows events with multiple proton-proton interactions to be rejected, for which it is especially difficult to reconstruct and analyze B meson decays.
2. High Level Trigger (HLT)
The HLT algorithm runs in a farm of 1000 16-core computers, and it has access to the full detector information. It is divided into two sub-levels: HLT1, with an output rate of a few tens of kHz, and HLT2, which outputs the 2kHz that are recorded.
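Putting the rates quoted above side by side makes the rejection factors explicit (an illustrative Python tally, using only numbers from the text):

```python
bunch_crossing_rate = 40e6  # Hz: bunch crossings at the interaction point
l0_output_rate = 1e6        # Hz: after the Level-0 hardware trigger
hlt2_output_rate = 2e3      # Hz: rate finally written to storage

l0_rejection = bunch_crossing_rate / l0_output_rate       # factor of 40
hlt_rejection = l0_output_rate / hlt2_output_rate         # factor of 500
total_rejection = bunch_crossing_rate / hlt2_output_rate  # factor of 20000

print(l0_rejection, hlt_rejection, total_rejection)  # 40.0 500.0 20000.0
```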
HLT1 is based on the concept of regions of interest: it confirms the high-pT L0 candidate particles with the addition of information from other detectors, using only the regions around the candidate direction when possible. In particular, information from the tracking stations and VELO is added. This allows particles to be selected according to another property that characterize particles from B decays: their high impact parameter to the proton-proton interaction vertex. This is due to the relatively long life-time of B mesons: they typically fly 1cm away from the proton-proton interaction before they decay. As soon as a candidate is not confirmed in a sub-detector, the event is discarded.
At the rate that HLT2 is executed, it is possible to run a complete reconstruction of the events, by using tracks in the VELO as seeds for the rest of the tracking. Displaced vertices away from the primary proton-proton interaction are searched for, as indications of B decays. Two types of selections are applied: inclusive and exclusive. Inclusive selections aim to collect decays of resonances which are useful for calibration and likely to have been produced in a B decay (D*, J/psi, etc). Exclusive selections are specifically designed to provide the highest possible efficiency for fully-reconstructed B decays of interest, using all available information, including the mass and vertex quality and separation for the B candidate and the intermediate resonances. | <urn:uuid:e5447a5b-4290-4cf5-b196-1446f43de983> | 2.78125 | 708 | Documentation | Science & Tech. | 39.644849 |
There is no need to create a file on the filesystem to get started with openpyxl. Just import the Workbook class and start using it
>>> from openpyxl import Workbook
>>> wb = Workbook()
A workbook is always created with at least one worksheet. You can get it by using the openpyxl.workbook.Workbook.get_active_sheet() method
>>> ws = wb.get_active_sheet()
This function uses the _active_sheet_index property, set to 0 by default. Unless you modify its value, you will always get the first worksheet by using this method.
You can also create new worksheets by using the openpyxl.workbook.Workbook.create_sheet() method
>>> ws1 = wb.create_sheet() # insert at the end (default)
# or
>>> ws2 = wb.create_sheet(0) # insert at first position
Sheets are given a name automatically when they are created. They are numbered in sequence (Sheet, Sheet1, Sheet2, ...). You can change this name at any time with the title property:
ws.title = "New Title"
Once you gave a worksheet a name, you can get it using the openpyxl.workbook.Workbook.get_sheet_by_name() method
>>> ws3 = wb.get_sheet_by_name("New Title")
>>> ws is ws3
True
You can review the names of all worksheets of the workbook with the openpyxl.workbook.Workbook.get_sheet_names() method
>>> print wb.get_sheet_names()
['Sheet2', 'New Title', 'Sheet1']
Now we know how to access a worksheet, we can start modifying cells content.
To access a cell, use the openpyxl.worksheet.Worksheet.cell() method:
>>> c = ws.cell('A4')
You can also access a cell using row and column notation:
>>> d = ws.cell(row = 4, column = 2)
When a worksheet is created in memory, it contains no cells. They are created when first accessed. This way we don’t create objects that would never be accessed, thus reducing the memory footprint.
Because of this feature, scrolling through cells instead of accessing them directly will create them all in memory, even if you don’t assign them a value.
>>> for i in xrange(0,100):
...     for j in xrange(0,100):
...         ws.cell(row = i, column = j)
will create 100x100 cells in memory, for nothing.
However, there is a way to clean all those unwanted cells, we’ll see that later.
If you want to access a range, which is a two-dimensional array of cells, you can use the openpyxl.worksheet.Worksheet.range() method:
>>> ws.range('A1:C2')
((<Cell Sheet1.A1>, <Cell Sheet1.B1>, <Cell Sheet1.C1>),
 (<Cell Sheet1.A2>, <Cell Sheet1.B2>, <Cell Sheet1.C2>))
>>> for row in ws.range('A1:C2'):
...     for cell in row:
...         print cell
<Cell Sheet1.A1>
<Cell Sheet1.B1>
<Cell Sheet1.C1>
<Cell Sheet1.A2>
<Cell Sheet1.B2>
<Cell Sheet1.C2>
If you need to iterate through all the rows or columns of a file, you can instead use the openpyxl.worksheet.Worksheet.rows() property:
>>> ws = wb.get_active_sheet()
>>> ws.cell('C9').value = 'hello world'
>>> ws.rows
((<Cell Sheet.A1>, <Cell Sheet.B1>, <Cell Sheet.C1>),
 (<Cell Sheet.A2>, <Cell Sheet.B2>, <Cell Sheet.C2>),
 (<Cell Sheet.A3>, <Cell Sheet.B3>, <Cell Sheet.C3>),
 (<Cell Sheet.A4>, <Cell Sheet.B4>, <Cell Sheet.C4>),
 (<Cell Sheet.A5>, <Cell Sheet.B5>, <Cell Sheet.C5>),
 (<Cell Sheet.A6>, <Cell Sheet.B6>, <Cell Sheet.C6>),
 (<Cell Sheet.A7>, <Cell Sheet.B7>, <Cell Sheet.C7>),
 (<Cell Sheet.A8>, <Cell Sheet.B8>, <Cell Sheet.C8>),
 (<Cell Sheet.A9>, <Cell Sheet.B9>, <Cell Sheet.C9>))
or the openpyxl.worksheet.Worksheet.columns() property:
>>> ws.columns
((<Cell Sheet.A1>, <Cell Sheet.A2>, <Cell Sheet.A3>, <Cell Sheet.A4>, <Cell Sheet.A5>, <Cell Sheet.A6>, ...
 <Cell Sheet.B7>, <Cell Sheet.B8>, <Cell Sheet.B9>),
 (<Cell Sheet.C1>, <Cell Sheet.C2>, <Cell Sheet.C3>, <Cell Sheet.C4>, <Cell Sheet.C5>, <Cell Sheet.C6>, <Cell Sheet.C7>, <Cell Sheet.C8>, <Cell Sheet.C9>))
Once we have a openpyxl.cell.Cell, we can assign it a value:
>>> c.value = 'hello, world'
>>> print c.value
'hello, world'
>>> d.value = 3.14
>>> print d.value
3.14
There is also a neat format detection feature that converts data on the fly:
>>> c.value = '12%'
>>> print c.value
0.12
>>> import datetime
>>> d.value = datetime.datetime.now()
>>> print d.value
datetime.datetime(2010, 9, 10, 22, 25, 18)
>>> c.value = '31.50'
>>> print c.value
31.5
>>> wb = Workbook()
>>> wb.save('balances.xlsx')
This operation will overwrite existing files without warning.
Extension is not forced to be xlsx or xlsm, although you might have some trouble opening it directly with another application if you don’t use an official extension.
As OOXML files are basically ZIP files, you can also end the filename with .zip and open it with your favourite ZIP archive manager.
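Because OOXML files are ZIP archives, the standard-library zipfile module is enough to peek inside one. The sketch below builds a tiny stand-in archive rather than assuming a real workbook exists on disk (the file and member names are made up):

```python
import zipfile

# Build a minimal stand-in archive (a real .xlsx contains
# [Content_Types].xml, xl/workbook.xml, worksheet parts, and so on).
with zipfile.ZipFile('demo.xlsx', 'w') as z:
    z.writestr('xl/workbook.xml', '<workbook/>')

# zipfile -- like any ZIP tool -- can list and read the members.
with zipfile.ZipFile('demo.xlsx') as z:
    print(z.namelist())               # ['xl/workbook.xml']
    print(z.read('xl/workbook.xml'))  # b'<workbook/>'
```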
In the same way as writing, you can use openpyxl.load_workbook() to open an existing workbook:
>>> from openpyxl import load_workbook
>>> wb2 = load_workbook('test.xlsx')
>>> print wb2.get_sheet_names()
['Sheet2', 'New Title', 'Sheet1']
This ends the tutorial for now, you can proceed to the Simple usage section | <urn:uuid:5d27c986-3239-4af7-b106-07e44dc204c0> | 3.09375 | 1,565 | Tutorial | Software Dev. | 90.33311 |
Archive for the ‘Science Experiments at Home’ Category
Question: Why does the carbonation in coke affect the height of the fizz when added with mentos?
I am doing a science experiment and need this explained for a paper.
Answer: All details are on their home page
Mentos And Coke Experiment
Question: physics vday dorky!?
ok….my bf and i met in physics lab and had physics 1 & 2 together and were lab partners where we developed a great friendship…we’re both science majors and really dorky at heart. anyway, i figured the best gift for valentines day would be one that includes a physics experiement or something (yea, dorky, i know!) i dont even know if this is at all possible! but i figured it would be super cute to incorporate a simple physics experiment for v-day…like, i know how to make slime/gack and stuff like that and i work for the physics dept at school and have access to everything i could ever need. but if theres a way to make a heart or do something physics-y and cool to a picture or make something artistic somehow, please let me know! i dont know how realistic this request is, but if you are dorky enough to have an idea, please share!
Answer: haha – I love it! I’m a huge dork…a bit more in a linguistics/psychology type of way, but I do love physics. What about something with…IDK…ok, yeah I have no idea. The only thing I can think of is building a series of simple machines and moving a ball or something that ends with a message/love note! Ha! That’s the first thing that came to mind.
Lame, I know – but I do wish you the best of luck!
How to Make SLIME!!! (Science Experiment)
Question: Why was there boiled liver in our digestion experiment in science?
In science, we did an experiment on enzymes. We were given 3 boiling tubes with H2O2 in them and a few drops of washing up liquid. We were instructed to put a small piece of raw liver in one boiling tube, one small piece of boiled liver in another boiling tube, and a small piece of carrot into the last. We then had to observe for 5 minutes or so and measure how tall the bubbles were in each tube; the raw liver got the tallest bubbles. But then our teacher asked us all why there was boiled liver in this experiment, and no one could answer.
So i was wondering, why is there boiled liver in our experiment?
Answer: Raw liver has catalase, which degrades H2O2 to form oxygen (the bubbles you see).
The boiled liver serves as a control; its catalase has been denatured, so no bubbles form.
carrots also have catalase.you would also get the same result with crushed beans. | <urn:uuid:f1b2f265-2d0e-4b07-9dd7-48ab9f09fe71> | 2.78125 | 626 | Comment Section | Science & Tech. | 59.709146 |
What's the Difference Between Hibernation and Sleep?
Biologists love to argue about how to classify things, and hibernation is no different. A common definition of hibernation is a long-term state in which body temperature is significantly decreased, metabolism slows drastically and the animal enters a comalike condition that takes some time to recover from. By this definition, bears don't hibernate, because their body temperature drops only slightly and they awake relatively easily. Not everyone accepts this narrow definition, however. For the purposes of this article, we'll use the term hibernation to describe any long-term reduction in body temperature (hypothermia) and metabolism during winter months.
When an animal enters a hibernationlike state during the summer, it's known as estivation. It's much less common than hibernation. Hibernation in reptiles is sometimes called brumation. It differs from mammalian hibernation because reptiles are cold-blooded -- they can't control their own body temperature, so they need to spend the winter in a place that will stay warm enough.
Torpor is another word that causes some confusion. It's sometimes used as an umbrella term to describe all the various types of temperature- and metabolism-reducing functions. More commonly, it's used to describe short-term periods of reduced temperature that occur as often as every day and only for a few hours at a time. This is the usage we will use to avoid confusion.
Tom J. Ulrich/Visuals Unlimited/Getty Images
So is hibernation basically a really long nap? No. These animals aren't just sleeping, they're undergoing physiological changes that can be very drastic. The most significant element of hibernation is a drop in body temperature, sometimes as much as 63 degrees F. We'll get into the details shortly, but for now it's sufficient to say that a hibernating animal's vital signs are very different from the vital signs of an awake animal.
Sleep, by contrast, is a mostly mental change. There are physiological aspects of sleep that are similar to hibernation, such as a reduced heart and breathing rate and lowered body temperature, but these changes are very slight compared to hibernation. Sleep is also pretty easy to break out of -- if you're awakened from even your deepest sleep, you can be fully awake within several minutes. Sleep is primarily characterized by changes in brain activity. In fact, the brain waves of hibernating animals closely resemble their wakeful brain wave patterns, though they're somewhat suppressed. When an animal awakes from hibernation, it exhibits many signs of sleep deprivation and needs to sleep a lot over the next few days to recover. | <urn:uuid:5c8f8592-3bfc-4610-8616-847ed1247b77> | 3.265625 | 543 | Knowledge Article | Science & Tech. | 34.854731 |
this is a view of saturn’s north polar region, taken by cassini’s imaging science subsystem (ISS) on february 26, 2013. you can see the rings in the top of this image as well as its mysterious hexagon.
Before sunrise on March 27th, sky watchers up and down the eastern seaboard of the United States witnessed a strange apparition. A quintet of milky-white plumes appeared in the night sky, twisting in the winds at the edge of space. The plumes were chemical tracers (trimethyl aluminum) deposited in the upper reaches of Earth’s atmosphere by five rockets launched rapid-fire from NASA’s Wallops Flight Facility in Virginia. The goal of the experiment, named ATREX (Anomalous Transport Rocket Experiment), is to study 3D turbulence in the thermosphere. | <urn:uuid:4856c665-2b2a-410e-ada5-0b9daf7cf2f9> | 3 | 199 | Content Listing | Science & Tech. | 46.378387 |
Science Fair Project Encyclopedia
- For the Bernstein polynomial in D-module theory, see Bernstein-Sato polynomial.
In the mathematical subfield of numerical analysis, a Bernstein polynomial, named after Sergei Natanovich Bernstein, is a polynomial in the Bernstein form, that is a linear combination of Bernstein basis polynomials.
Polynomials in Bernstein form were first used by Bernstein in a constructive proof for the Stone-Weierstrass approximation theorem. With the advent of computer graphics, Bernstein polynomials, restricted to the interval [0,1], became important in the form of Bézier curves.
The n + 1 Bernstein basis polynomials of degree n are defined as
bν,n(x) = C(n, ν) x^ν (1 − x)^(n − ν),  for ν = 0, 1, ..., n,
where C(n, ν) is the binomial coefficient "n choose ν".
A linear combination of Bernstein basis polynomials
B(x) = β0 b0,n(x) + β1 b1,n(x) + ... + βn bn,n(x)
is called a Bernstein polynomial or polynomial in Bernstein form of degree n. The coefficients βν are called Bernstein coefficients or Bézier coefficients.
The Bernstein basis polynomials have the following properties:
- bν,n(x) has a root with multiplicity ν at point x = 0
- bν,n(x) has a root with multiplicity n − ν at point x = 1
- bν,n(x) ≥ 0 if x in [0,1]
- bν,n(x) has a global maximum at x = ν/n
- b′ν,n(x) = n [bν-1,n-1(x) - bν,n-1(x)]
- bν,n(x) = 0, if ν < 0 or ν > n
The Bernstein basis polynomials of degree n form a partition of unity:
b0,n(x) + b1,n(x) + ... + bn,n(x) = 1 for all x.
The first few Bernstein basis polynomials are
b0,0(x) = 1
b0,1(x) = 1 − x,  b1,1(x) = x
b0,2(x) = (1 − x)^2,  b1,2(x) = 2x(1 − x),  b2,2(x) = x^2
Approximating continuous functions
Let f(x) be a continuous function on the interval [0, 1]. Consider the Bernstein polynomial
Bn(f, x) = Σ f(ν/n) bν,n(x), where the sum runs over ν = 0, 1, ..., n.
It can be shown that
lim (n → ∞) Bn(f, x) = f(x)
uniformly on the interval [0, 1]. This is a stronger statement than the proposition that the limit holds for each value of x separately; that would be pointwise convergence rather than uniform convergence. Specifically, the word uniformly signifies that
lim (n → ∞) sup { |f(x) − Bn(f, x)| : 0 ≤ x ≤ 1 } = 0.
Bernstein polynomials thus afford one way to prove the Stone-Weierstrass approximation theorem that every real-valued continuous function on a real interval [a,b] can be uniformly approximated by polynomial functions over R.
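As a numerical sanity check (not part of the article), Bn(f, x) can be evaluated directly from the definition; for f(x) = x^2 the approximation error at a fixed x shrinks roughly like 1/n:

```python
from math import comb

def bernstein(f, n, x):
    """Evaluate B_n(f, x) = sum over v of f(v/n) * C(n, v) * x**v * (1 - x)**(n - v)."""
    return sum(f(v / n) * comb(n, v) * x**v * (1 - x)**(n - v)
               for v in range(n + 1))

f = lambda t: t * t  # a continuous function on [0, 1]
for n in (10, 100, 1000):
    print(n, abs(bernstein(f, n, 0.5) - f(0.5)))  # error shrinks with n
```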
Suppose K is a random variable distributed as the number of successes in n independent Bernoulli trials with probability x of success on each trial; in other words, K has a binomial distribution with parameters n and x. Then we have the expected value E(K/n) = x.
Because f, being continuous on a closed bounded interval, must be uniformly continuous on that interval, we can infer a statement of the form: for every ε > 0 there is a δ > 0 such that |f(x) − f(y)| < ε whenever |x − y| < δ.
And so the second probability above approaches 0 as n grows. But the second probability is either 0 or 1, since the only thing that is random is K, and that appears within the scope of the expectation operator E. Finally, observe that E(f(K/n)) is just the Bernstein polynomial Bn(f,x).
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:6f92f870-00f6-481d-ba9e-d7d9d7b3bc70> | 3.515625 | 707 | Knowledge Article | Science & Tech. | 47.361257 |
The WRITE_SRF procedure writes an image and its color table vectors to a Sun Raster File (SRF).
WRITE_SRF only writes 32-, 24-, and 8-bit-deep rasterfiles of type RT_STANDARD. Use the UNIX command
to convert these files to 1-bit deep files. See the file
for the structure of Sun rasterfiles.
This routine is written in the IDL language. Its source code can be found in the file
subdirectory of the IDL distribution.
The array to be written to the SRF. If Image has dimensions (3, n, m), a 24-bit SRF is written. If Image is omitted, the entire current graphics window is read into an array and written to the SRF file. Image should be of byte type, and in top to bottom scan line order.
Set this keyword to write the image from the top down instead of from the bottom up. This setting is only necessary when writing a file from the current IDL graphics window; it is ignored when writing a file from a data array passed as a parameter. | <urn:uuid:41940187-97c7-40ce-b74b-a7e4831a5dbb> | 2.953125 | 234 | Documentation | Software Dev. | 64.567652 |
Effects of Suspended Sediment and Burial on Scleractinian Corals From West Central Florida Patch Reefs
Authors: Rice, Stanley A.; Hunter, Cynthia L.
Source: Bulletin of Marine Science, Volume 51, Number 3, November 1992 , pp. 429-442(14)
Abstract: The distribution and abundance of eight species of endemic scleractinian corals were determined for patch reefs off west central Florida in the depth range of 7-18 m. Phyllangia americana and Cladocora arbuscula were the most abundant species encountered. Five of the eight species censused were absent from the shallowest reef (7 m). Seven species of corals were exposed to increasing levels of suspended sediment in the laboratory and seven species were subjected to prolonged burial in sediment. Survival rates were not affected by 10-day exposures to suspended sediment concentrations of 49, 101, 165, or 199 mg·liter−1. Combined mean growth rates for six species were significantly different between control and experimental treatments at 165 mg·liter−1 suspended sediment. Coral burial experiments produced survival LT50 values of 7 days for Scolymia lacera, 7.2 days for Isophyllia sinuosa, 10 days for Manicina aereolata, 13.6 days for Siderastrea radians, 15 days for C. arbuscula, 16.2 days for Stephanocoenia michelinii, and 15+ days for Solenastrea hyades. The results of these experiments indicate that the species tested are among the most resistant corals in the Caribbean region to the effects of suspended sediment and physical burial. These findings are consistent with the fact that west central Florida patch reefs are exposed to more severe environmental conditions, such as high turbidity and low light penetration (in addition to a broader range of temperatures) than more tropical reefs to the south.
Document Type: Research article
Publication date: 1992-11-01
- ingentaconnect is not responsible for the content or availability of external websites | <urn:uuid:ce455e11-c7cc-46c8-98be-2da2d7f9ef54> | 3.21875 | 520 | Academic Writing | Science & Tech. | 27.144573 |
Posted by Phil on Saturday, January 9, 2010 at 8:27pm.
Well, downward forces are W, and upward forces are his Pull, and tension.
So his pull + Tension=W on the tire side.
On the other side, tension=Pulling force
do the math.
Ok, I think I got it. Are these free body diagrams correct? (ignore the dots)
.T.....(This tension is down)
T.F....(This tension is up)
2F=W so F=(1/2)W
Now I'm confused. Why does changing the direction of a force double it? I'm also confused about whether the tension is a reaction to the weight of the tire and the boy or the force that the boy pulls down on the rope with. Also if the force in the free body diagram of the system were another person pulling the tire and the boy up, why would the force not be half the weight in that case?
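The force balance worked out above can be written as a tiny numeric sketch (assuming a massless rope and frictionless pulley; the weight value is made up):

```python
def hand_force(total_weight):
    """Force the person must pull with.

    Tire side: pull + tension = W; the rope carries a single tension T and
    the person's pull equals T, so 2*T = W and therefore T = W / 2.
    """
    return total_weight / 2

W = 600.0  # N: combined weight of boy and tire (made-up value)
print(hand_force(W))  # 300.0 -- the movable pulley halves the required force
```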
For Further Reading | <urn:uuid:89522b7c-b335-4ac0-9b37-0526ba921ae2> | 3.734375 | 433 | Comment Section | Science & Tech. | 92.380028 |
First Use of Mars Rover Curiosity's Dust Removal Tool - This image from the Mars Hand Lens Imager (MAHLI) on NASA's Mars rover Curiosity shows the patch of rock cleaned by the first use of the rover's Dust Removal Tool (DRT).
The tool is a motorized, wire-bristle brush on the turret at the end of the rover's arm. Its first use was on the 150th Martian day, or sol, of the mission (Jan. 6, 2013). MAHLI took this image from a distance of about 10 inches (25 centimeters) after the brushing was completed on this rock target called "Ekwir_1." The patch of the rock from which dust has been brushed away is about 1.85 inches by 2.44 inches (47 millimeters by 62 millimeters). The scale bar at bottom right is 1 centimeter (0.39 inch). Honeybee Robotics, New York, N.Y., built the DRT for Curiosity. Malin Space Science Systems, San Diego, built the MAHLI. | <urn:uuid:5b9b7b94-3598-4de9-9b43-404ab7f50d1d> | 3.546875 | 217 | Truncated | Science & Tech. | 80.610947 |
Researchers are developing a solar collector to turn roads and parking lots into cheap sources of electricity and hot water. "Asphalt has a lot of advantages as a solar collector," says Rajib Mallick of Worcester Polytechnic Institute. "For one, blacktop stays hot and could continue to generate energy after the sun goes down, unlike traditional solar-electric cells.
Plus there's already ginormous acreage of installed roads and parking lots. They're resurfaced every 10 to 12 years. The solar retrofit could be built into that cycle. No need to transform other landscapes into solar farms. Or maybe not as many.
Furthermore, extracting heat from asphalt would cool the urban heat-island effect, cooling the planet a wee bit. Finally, solar collectors in roads and parking lots would be invisible, unlike those on roofs. Cuz we all know how attractive roads are. | <urn:uuid:5bf5738d-cd35-4e76-809f-4b7043654bfc> | 2.859375 | 180 | Personal Blog | Science & Tech. | 47.171709 |
Nature Bulletin No. 661-A January 7, 1978
Forest Preserve District of Cook County
George W. Dunne, President
Roland F. Eisenbeis, Supt. of Conservation
There was a time when ice, cut on frozen ponds and lakes, was
transported by fast clipper ships from New England to New Orleans
where it was worth its weight in gold. Nowadays this cold brittle
colorless substance is commonplace everywhere. Few people, however,
know that ice is one of the strangest of all solids; and that, because of
its unique properties, life on earth is what it is.
Those properties are due to the distinctive structure of a molecule of
water, formed of three elemental particles or atoms -- two of hydrogen
and one of oxygen -- expressed by the familiar symbol, H2O. The three
atoms are held together by two chemical bonds expressed by another
symbol, H-O-H. Briefly, the unique properties of water, water vapor,
and ice arise from that bonding and the arrangement of electron pairs
around the oxygen atom.
The strangest and perhaps the most important property is that water
expands as it freezes and a cubic foot of water increases almost 10
percent in volume. Consequently, whereas a cubic foot of water weighs
about 62.4 lbs. at ordinary temperatures, a cubic foot of ice weighs only
57.2 and it floats. The blanket of ice that forms and floats on a pond in
winter makes it possible for aquatic plants and animals (fish, etc.) to
remain alive in the water underneath.
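A quick calculation (illustrative Python, using the bulletin's own figures) confirms the numbers are consistent with each other:

```python
water_density = 62.4  # lb per cubic foot at ordinary temperatures
expansion = 0.09      # water expands about 9% ("almost 10 percent") on freezing

ice_density = water_density / (1 + expansion)
print(round(ice_density, 1))        # close to the 57.2 lb quoted in the text
print(ice_density < water_density)  # True -- which is why ice floats
```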
If, like all other substances except bismuth, water contracted and
became denser as it solidified, ice would be heavier than water and sink
to the bottom. More ice would form on the surface until the pond was
frozen solid. Only the top would be melted into shallow slush by the
heat in spring and summer; the ice below would never thaw. In the
cooler parts of the world the rivers, ponds, lakes, and even the oceans
would all be permanently frozen.
Water has a far greater capacity for absorbing and storing heat than
other substances. It gives up that heat as it cools -- in autumn, for
instance -- and continues to do so as it freezes. Conversely, as ice melts,
the ice water absorbs heat from the air or objects around it, and that is
the basic principle of refrigeration. Also due to that capacity, a large
body of water such as Lake Michigan tends to moderate the climate of
the surrounding region.
The temperature at which water freezes varies with the amount of
pressure upon it and whether or not it contains anything in solution.
Chemically pure water under atmospheric pressure at sea level freezes
at 32 degrees Fahrenheit. Sea water does not freeze until its temperature drops
to between 29 and 28 F; and the freezing point of brine -- water
saturated with salt -- is 7 degrees below zero.
When the pressure upon water is increased, its freezing point is
lowered. If a heavy weight is suspended from a loop of wire passing
around a block of ice, the wire slowly cuts all the way through it,
leaving the block perfectly solid. The pressure of the wire melts a
pathway which freezes again as soon as the pressure is removed.
Likewise, in skating, pressure of the skate blade melts a thin slippery
film of water. By subjecting ordinary ice to enormous pressures, other
kinds of ice can be produced and the freezing point of one of them is 40
degrees below zero! If it were not for ice we would all be Eskimos.
Update: June 2012
Feb. 22, 2008: Concerned that energy system transformations are proceeding too slowly to avoid risks from dangerous human-induced climate change, many scientists are wondering whether geoengineering (the deliberate change of the Earth's climate) may help counteract global warming.
Sulfate aerosols, commonly released by volcanoes, serve to scatter incoming solar energy in the stratosphere, preventing it from reaching the surface. To investigate the feasibility of deliberately mimicking the effect of volcanic aerosols, Rasch et al. explore scenarios in which aerosol properties are varied to assess interactions with the climate system.
Through model simulations, they discover that, because stratosphere-troposphere exchange processes change with increasing levels of aerosols, about 50 percent more aerosols would have to be injected into the atmosphere than in the scenario where such processes stayed constant.
Further, almost double the level of aerosol loading is required to counteract greenhouse warming if aerosol particles are as large as those seen during volcanic eruptions. The authors caution that geoengineering methods to mask global warming may have serious environmental consequences that must be explored before any action is taken.
Journal reference: Exploring the geoengineering of climate using stratospheric sulfate aerosols: The role of particle size. Geophysical Research Letters (GRL) paper 10.1029/2007GL032179, 2008; http://dx.doi.org/10.1029/2007GL032179
Authors: Philip J. Rasch and Danielle B. Coleman: National Center for Atmospheric Research, Boulder, Colorado, U.S.A.; Paul J. Crutzen: Max Planck Institute for Chemistry, Mainz, Germany; also at Scripps Institution of Oceanography, University of California, San Diego, La Jolla, California, U.S.A.
"Each time following an eruption of Iceland’s Eyjafjallajokull volcano, its mighty neighbor, Katla, has erupted shortly afterward. Eyjafjallajokull and Katla are separated by 27 km (17 mi) and are thought to have interconnecting magma channels. Eyjafjallajokull erupted on April 14, 2010.
Katla (named after an Icelandic witch) is known to have erupted 16 times since 930, the last time during 1918. Since then, Katla has been quiet for the longest duration on record. It is overdue, and now that its little sister Eyjafjallajokull has erupted, it’s just a matter of time.
Katla itself is 30 km (19 mi) in diameter reaching a height of 1,500 meters (4,900 feet), while the 10 km (6 mi) crater of the volcano lies up to 500 meters (1,600 feet) beneath the Myrdalsjokull glacier on the southern edge of Iceland. Iceland sits directly on top of a split in the earth’s crust of two tectonic plates on the Mid-Atlantic ridge and is a hot spot for volcanic activity with 35 volcanoes around the island.
An eruption of Katla would likely be 10 times stronger than the recent eruption of Eyjafjallajokull and could be disastrous to Iceland with raging floods from the melting Myrdalsjokull glacier, immense depths of volcanic ash, and climate change to regions of the world.
If the eruption is long enough and high enough, ash could be blasted 20 km (12 mi) into the stratosphere and circle the globe blotting out part of the sun from penetrating to earth, and reduce temperatures worldwide. The big question of course is how big would the eruption be and to what extent the global climate change.
We know that when Katla erupted in 1700, the Mississippi River froze just north of New Orleans for example. When Mount Pinatubo erupted in 1991 for 2 days, it dropped temperatures 4 degrees worldwide for a year. Katla on average erupts for 50 days, although the cumulative severity over that time period depends on the force of the eruptions lifting ash high into the atmosphere. We won’t know until it happens.
Although the magnitude of disaster would not be that of a super volcano such as Wyoming’s Yellowstone, the potential is there for a global catastrophe from a worldwide extended deep freeze. Huge crop failures would translate to starvation for some and very high food prices for others. A ripple effect would occur through the already teetering economies of the world.
Since the potential exists for a major Katla eruption, we should prepare ourselves as best we can, knowing how modern society is so very fragile from disruptions (just look at what happened to worldwide air travel and the economic impact from the small eruption of Eyjafjallajokull)." http://beforeitsnews.com/news/41/354/Wi ... _Next.html
The following attributes apply to all VPython objects:
visible If False, object is not displayed; e.g. ball.visible = False
frame Place this object into a specified frame, as in ball = sphere(frame = f1)
display When you start a VPython program, for convenience Visual creates a display window and names it scene. By default, objects you create go into that display window. You can choose to put an object in a different display like this:
    scene2 = display( title = "Act IV, Scene 2" )
make_trail You can specify that a trail be left behind a moving arrow, box, cone, cylinder, ellipsoid, pyramid, ring, or sphere object. For details, see Leaving a Trail.
Executing myscene = display.get_selected() returns a reference to the display in which objects are currently being created. Given a specific display named scene2, scene2.select() makes scene2 be the "selected display", so that objects will be drawn into scene2 by default.
There is a rotate function for all objects other than the "array objects" curve, convex, extrusion, faces, text, and points (which can be put into a frame and the frame rotated).
__class__ Name of the class of object. For example, ball.__class__ is sphere is true if ball is a sphere object. There are two underscores before and after the word class. In a list of visible objects provided by scene.objects, if obj is in this list you can determine the class of the object with obj.__class__. You can check for a specific kind of object by using a standard Python function: isinstance(obj, sphere) is true if "obj" is a sphere object.
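The isinstance pattern works the same way outside of Visual. The stand-in classes below are hypothetical, defined only so the pattern can be shown without the visual module installed; in a real VPython session sphere and box would come from visual, and the list would be scene.objects:

```python
class sphere:          # stand-in for Visual's sphere class
    def __init__(self):
        self.visible = True

class box:             # stand-in for Visual's box class
    def __init__(self):
        self.visible = True

objects = [sphere(), box(), sphere()]   # plays the role of scene.objects

for obj in objects:
    if isinstance(obj, sphere):         # the same test the docs describe
        obj.visible = False             # hide only the spheres

print([type(o).__name__ for o in objects if o.visible])  # ['box']
```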
__copy__() Makes a copy of an object. There are two underscores before and after copy. Without any arguments, this results in creating a second object in the exact same position as the first, which is probably not what you want. The __copy__() function takes a list of keyword=value argument pairs which are applied to the new object before making it visible. For example, to clone an object from one display to another, you would execute: new_object = old_object.__copy__( display=new_display). Restriction: If the original object is within a frame, and the new object is on a different display, you must supply both a new display and a new frame for the new object (the new frame may be None). This is due to the restriction that an object may not be located within a frame that is in a separate display.
Here is an example that uses the __copy__() function. The following routine copies all of the Visual objects currently existing in one display into a previously defined second display, as long as there are no nested frames (frames within frames):
def clone_universe( new_display, old_display):
    # The body of the routine was not included in this text. A minimal
    # sketch, assuming no objects are inside frames:
    for obj in old_display.objects:
        obj.__copy__( display = new_display, frame = None )
See Controlling One or More Visual Display Windows for more information on creating and manipulating display objects.
Sockets provide one of the pieces you need to write network servers or client applications. For many services, such as HTTP or FTP, third party servers are readily available. Some are even bundled with the operating system, so that there is no need to write one yourself. However, when you want more control over the way the service is implemented, a tighter integration between your application and the network communication, or when no server is available for the particular service you need, then you may want to create your own server or client application. For example, when working with distributed data sets, you may want to write a layer to communicate with databases on other systems.
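As a minimal illustration of the request/response pattern such a service layer implements, here is a sketch using Python's standard socket module. A connected socket pair stands in for a real TCP server's bind/listen/accept loop; the message contents are made up for the example:

```python
import socket

# A connected pair of stream sockets: one end plays the server,
# the other the client.
server, client = socket.socketpair()

client.sendall(b"GET /data")            # client sends a request ...
request = server.recv(1024)             # ... the server reads it ...
server.sendall(b"OK: " + request[4:])   # ... and sends back a reply
reply = client.recv(1024)

client.close()
server.close()
print(reply.decode())                   # OK: /data
```

A real service would differ mainly in setup (a listening socket accepting remote connections) and in a protocol for framing messages, but the send/receive core is the same.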
To implement or use a service using sockets, you must understand
Meteoroids are the smallest particles orbiting the sun, and most are no larger than grains of sand. From years of studying the evolution of meteor streams, astronomers have concluded that clouds of meteoroids orbiting the sun were produced by comets. Meteoroids cannot be observed moving through space because of their small size. Over the years, numerous man-made satellites recovered by manned spacecraft have shown pits in their metal skins caused by the impact of meteoroids. Meteoroids become visible to observers on Earth when they enter Earth's atmosphere; they are then referred to as meteors. They become visible as a result of friction caused by air molecules slamming against the surface of the high-velocity particle. The friction typically causes meteors to glow blue or white, although other colors have been reported. Most meteors completely burn up in the atmosphere at altitudes of between 60 and 80 miles. They are rarely seen for periods of more than a few seconds.
Geometry, difficulty level 3. Two congruent circles are drawn, and four congruent chords are drawn, two in each circle, all perpendicular to the diameter through both circles. The distance between the two furthest chords is 20, and the distance between two chords of the same circle is 8. What's the area of one of the circles?
© 1994-2012 Drexel University. All rights reserved.
The Math Forum is a research and educational enterprise of the Drexel University School of Education.
P-Value / One- or Two-Tailed
Having a tough time with my homework on p-values. If anyone could help solve this?
According to an article in The New York Times, 19.3% of New York City adults
smoked in 2003. Suppose that a survey is conducted this year. 8 out of 80 randomly chosen New York City residents reply that they smoke. At a significance level of 0.05, test the claim that the rate is still 19.3%.
1. Identify H0, Ha.
A) H0: p = 0.193 vs. Ha: p > 0.193
B) H0: p = 0.1 vs. Ha: p < 0.1
C) H0: p = 0.1 vs. Ha: p > 0.1
D) H0: p = 0.193 vs. Ha: p < 0.193
E) H0: p = 0.193 vs. Ha: p ≠ 0.193
2. Compute the test statistic. (2 decimal places)
A. 1.08 B. – 1.08 C. – 2.11 D. 2.11 E. None of the others
3. Compute the P-value. (4 decimal places)
A. 0.9826 B. 0.0174 C. 0.9890 D. 0.0348 E. None of the others
4. At a significance level of 0.05, does the sample provide sufficient evidence to reject that the percent of smokers in New York City is still 19.3%?
A. yes B. no C. more information is needed to determine
5. If the actual rate is 23.1%, what type of error does it introduce?
A. Type I Error B. Type II Error C. No Error
Re: P-Value/ One-or Two Tailed.
Well, both D and E are defensible, but given question 5, I guess you are expected to choose E.
Originally Posted by mathwiz1
Now what have you done and what problems are you having? Are those numbers in 2, 3, and 4 your computations, ..
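For reference, the two-tailed computation the homework asks for (a one-sample z-test for a proportion) can be reproduced with only the Python standard library. This sketch is not from the original thread; it just carries out the arithmetic under H0: p = 0.193:

```python
from math import sqrt, erf

p0, n, x = 0.193, 80, 8
p_hat = x / n                         # sample proportion, 0.1
se = sqrt(p0 * (1 - p0) / n)          # standard error under H0
z = (p_hat - p0) / se                 # test statistic, about -2.11

def norm_cdf(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(t / sqrt(2)))

p_value = 2 * norm_cdf(-abs(z))       # two-tailed p-value, about 0.035
print(round(z, 2), round(p_value, 4))
```

Since the two-tailed p-value falls below the 0.05 significance level, the sample provides sufficient evidence to reject the claim that the rate is still 19.3%.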
Resource Management Issues:
Overview of the issue:
The Sanctuary provides many opportunities for observation of nature, including
whale watching, bird watching, pinniped pupping and haulout activity,
and viewing of sea otters. Rocky shorelines provide pedestrians opportunities
to view the flora and fauna associated with the intertidal habitat, and
kayaks and partyboats are used for nearshore and offshore tours. With
the multitude of opportunities for observation come the potential for
wildlife disturbance that may result in flushing birds from their nesting
roosts, harassment or even death of pinnipeds or sea otters, as well as
trampling and excess collecting of intertidal organisms. Other sources
of wildlife disturbance include low-flying aircraft, fireworks displays
that can flush seabirds and marine mammals, marine debris, acoustic impacts, and motorized watercraft.
The Sanctuary has one of the most diverse and abundant assemblages of marine animals in the world, including six species of pinniped, twenty-seven species of cetacean, four species of sea turtles, ninety-four species of seabirds and one species of sea otter (fissiped). Nearly all of the mammal and turtle species, and many birds, are protected under the Endangered Species Act, Marine Mammal Protection Act or Migratory Bird Treaty Act.
How is the Sanctuary involved?
MBNMS addresses wildlife disturbance through a mix of educational outreach,
regulations and enforcement. Sanctuary regulations explicitly prohibit
harassment of marine mammals, as defined under the Marine Mammal Protection Act.
The Sanctuary is mandated to approach resource protection from a broad, ecosystem based perspective. This requires consideration of a complex array of habitats, species, and interconnected processes and their relationship to human activities.
Public awareness is necessary to effectively address wildlife disturbance issues since most people who choose to view marine wildlife do not intend to place the animals or themselves at risk. While it has been well established that it is harmful and dangerous to closely approach, handle or feed terrestrial wildlife (e.g., bears, deer, raccoons, nesting birds, etc.), many people do not yet seem to understand that these concerns also apply to marine wildlife.
Potential Disturbance Activities within the MBNMS
Over the last twenty years, increasing numbers of people have been seeking opportunities to view and experience marine wildlife. For the most part, wildlife viewing has resulted in many positive benefits including new economic opportunities for local communities, and increased public awareness and stewardship for marine resources. However, there is growing evidence that marine wildlife can be disturbed and/or injured when viewing activities are conducted inappropriately. Disturbance or injury also occurs through commercial harvest activities.
Frequent disturbance can adversely affect marine species. The effects of disturbance can be especially critical during sensitive time periods, such as feeding, breeding, resting, or nesting. Disturbance is likely to cause avoidance reactions and may result in interruptions of social behavior of animals and is capable of leading to long-term changes in distribution.
Types of Wildlife in the MBNMS
The MBNMS is known both nationally and internationally as a veritable 'hot spot' for viewing marine life. Of the more than 116 federally listed threatened or endangered species (55 percent of all species nationwide) in California, twenty-six reside within the Sanctuary. There is significant interest and public participation in activities found in the region that offer wildlife viewing. Following is a description of wildlife species present in the MBNMS which are subject to disturbance.
Of the twenty-seven species of cetaceans seen in the Monterey Bay area, about one-third occur with frequency. Of the twenty-seven species of whales, five are listed as endangered: the blue, fin, humpback, right, and sperm. The highest concentration areas of cetaceans are within the central and southern portions of the MBNMS. The upwelling of the cold submarine canyon waters causes the bay to teem with microscopic life and krill. This in turn provides an abundance of nutrition for several species along the food chain. The Monterey Bay serves as a kind of all-you-can-eat buffet for these species as they migrate along the west coast.
There are a total of nine rookeries/colonies in the MBNMS. The five species of pinnipeds considered common in the Monterey Bay area include California sea lions, steller sea lions, northern elephant seals, northern fur seals, and Pacific harbor seals. An additional species, the Guadalupe fur seal, has been reported from records of sick animals stranded on the beach.
The California or Southern sea otter is a threatened species that is found throughout
the shallow waters of Monterey Bay National Marine Sanctuary, with its broader range stretching from the Gaviota Coast in Santa Barbara County to Half Moon Bay in San Mateo County. Sea otters inhabit a narrow zone of coastal waters, normally staying within one mile from shore. They forage in both rocky and soft-sediment communities as well as in the kelp understory and canopy. They seldom are found in open waters deeper than 30m, preferring instead the kelp beds, which serve as vital resting, foraging, and nursery sites. Otters are an important part of the marine ecosystem. By foraging on kelp-eating macroinvertebrates (especially sea urchins) sea otters can influence the abundance and species composition of kelp assemblages and animals within nearshore communities.
Seabirds and Shorebirds
Sanctuary waters are among the most heavily used by seabirds worldwide. Ninety-four species of seabird are known to occur regularly within and in the vicinity of the Sanctuary, and approximately ninety species of tidal and wetland birds occur on the shores, marshes, and estuaries bordering Sanctuary waters. Several environmental features are responsible for the diverse assemblage of birds in the area. The Monterey Bay is located on the "Pacific Flyway", allowing migratory birds a place to stopover during both north and south migrations between southern wintering grounds and northern breeding sites. The upwelling of nutrient-rich waters support highly productive food webs which provide abundant seabird prey, as well as the diversity of habitat types along the shore which increases the variety of bird species utilizing the MBNMS. Thus, many birds found in Sanctuary waters have come to feed, some from as far as New Zealand.
The MBNMS is home to four species of sea turtles that frequent its waters — the Green, Pacific Ridley, Leatherback and Loggerhead sea turtles. The Leatherback is the most common. It is the largest turtle in the world and has the widest geographic range of any reptile. It is found in all of the world’s major oceans and has been observed from the Arctic Circle to the edges of the Antarctic convergence zone. Leatherbacks are also one of the deepest diving animals known, descending to depths in excess of 1,300m. Leatherback turtle populations in the Pacific Ocean are declining at a disastrous rate. Since 1980 populations have dropped by more than 90%, and the accidental killing of leatherbacks by high seas commercial fishing fleets is a major contributor to that decline.
How the MBNMS Currently Addresses Wildlife Disturbance Issues
MBNMS addresses wildlife disturbance through a mix of educational outreach, regulations and enforcement. Sanctuary regulations explicitly prohibit harassment of marine mammals (as defined under the Marine Mammal Protection Act), sea turtles and birds.
The Watchable Wildlife program is a unique partnership of federal and state wildlife agencies and non-profit organizations working to educate the public and commercial operators about safe and responsible wildlife viewing practices. The program has three immediate goals: (1) enhance public wildlife viewing opportunities; (2) provide education about wildlife and its needs; and (3) promote active support of wildlife conservation. Within NOAA, the National Ocean Service (through the National Marine Sanctuary Program) and the National Marine Fisheries Service (through the Office of Protected Resources) have been working together with the Watchable Wildlife program partners over the past five years to develop a “Watchable Wildlife” program specifically for marine species and habitats. The main purpose of the program is to provide the public with information about appropriate wildlife viewing practices for the marine environment that are consistent with wildlife protection laws and conservation efforts.
Despite the efforts outlined above, many species in the Sanctuary warrant further protection via outreach, education, enforcement or other strategies designed to inform the public and specific user groups of the need to prevent wildlife disturbance within the MBNMS.
The SIMON site marine mammal overview:
The MBNMS site characterization: http://montereybay.nos.noaa.gov/sitechar/welcome.html
NOAA's Ocean Etiquette Program: http://www.sanctuaries.noaa.gov/oceanetiquette/
NOAA Fisheries- Office of Protected Resources:
Marine Mammal Center: http://www.tmmc.org/what_we_do/rescue/monterey_bay.asp
Multiples of Nine, Digit Sum
Why is it that when a number is multiplied by 9, and the digits of the
answer are added repeatedly until a single digit is found, that digit
is 9? E.g. 538 x 9 = 4842, 4+8+4+2=18, 1+8=9. And why does this work
only for the number 9 and not for other numbers?
Think of the number 531, for example. You can write 531 as
500 + 30 + 1 = 5X100 + 3X10 + 1. Since 100 = 99 + 1 and 10 = 9 + 1,
531 can be written as (5X99 + 5X1) + (3X9 + 3X1) + 1X1 = (5X99 +
3X9) + (5+3+1). Since 9 divides evenly into 99 and 9, it surely
goes into the 5X99 + 3X9 part. So 9 goes into the entire number if
and only if it goes into the (5+3+1) part.
This works for any whole number expressed in decimal
notation. For example, 8347 = (8X999 + 3X99 + 4X9) + (8 + 3 + 4 +
7). Since 9 goes into the first part but not into the second (the
digit sum), it cannot go into the entire number.
By the way, the same rule works for divisibility by 3. For
example, with the number 8347, 3 goes into the (8X999 + 3X99 +
4X9) part, so 3 goes into 8347 if and only if it goes into the
(8+3+4+7) part. Since 8+3+4+7 = 22 is not a multiple of 3, the
number 8347 is not divisible by 3.
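The rule can be checked mechanically. This short sketch (plain Python, not part of the original answer) verifies that 9 and 3 divide 8347 exactly when they divide its digit sum:

```python
def digit_sum(n):
    """Sum of the decimal digits of a non-negative integer."""
    return sum(int(d) for d in str(n))

# 9 (and 3) divide a number exactly when they divide its digit sum
for m in (3, 9):
    assert (8347 % m == 0) == (digit_sum(8347) % m == 0)

print(digit_sum(8347))  # 22, not a multiple of 3 or 9
```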
You can use other bases too. For example, in base 5, the
same trick would work for 4 and 2. For example, the number 1020304
is a base 5 number with digit sum 1+2+3+4=20 (base 5). This would be
expanded as (all numbers in base 5)
1020304=(4+3+2+1)+3X(44)+2X4444+1X444444. Since the second part is
clearly divisible by 4 ( and 2), the number is divisible by 4 or 2
if and only if (4+3+2+1) is. (You need to be careful working with
bases that are not prime.)
If you iterate the process until a single digit is
obtained, then the original number is divisible by 9 if and only if
the single digit is 9 (or 0). For example, 8347 --> 22 --> 4. So,
8347 is divisible by 9 if and only if 22 is, and 22 is divisible by
9 if and only if 4 is.
In the base 5 example, 1020304 is divisible by 4 (or 2) if
and only if 20 is. And 20 is divisible by 4 (or 2) if and only if 2
is. (1020304 --> 20 --> 2) So 1020304 is divisible by 2 but not 4.
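Both the base-10 and base-5 claims can be verified with a short digit-sum routine. This is an illustrative sketch, not part of the original answer:

```python
def digit_sum(n, base=10):
    """Sum of the digits of n written in the given base."""
    s = 0
    while n:
        s += n % base
        n //= base
    return s

def digital_root(n, base=10):
    """Iterate digit sums until a single digit remains."""
    while n >= base:
        n = digit_sum(n, base)
    return n

assert digital_root(538 * 9) == 9     # 4842 -> 18 -> 9

n = int("1020304", 5)                 # the base-5 example, 16954 in decimal
assert digit_sum(n, base=5) == 10     # 10 in decimal is 20 in base 5
assert n % 2 == 0 and n % 4 != 0      # divisible by 2 but not by 4
```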
Every integer multiple of nine has digits that add up to a multiple
of nine, which then reduces again until you have the single digit nine,
because of how carrying works. When you carry, you essentially subtract
ten from a column and add one to the next column. This is like
subtracting nine from the total. When carrying happens because you have
added nine, then really the total of digits hasn't changed. For any
other single digit, the digit total changes when you carry. When adding
nine, the digit total only changes when the last digit is a zero. In
that case, the digit total increases by nine, but the digit total is
still a multiple of nine.
When nine is added to a number with a digit total of nine, the digit
total stays a multiple of nine even when you have to carry. Starting
with nine, which does have a digit total of nine, you can thus add nine
to it as many times as you like and keep the digit total a multiple of
nine. Doing this over and over will take you through every multiple of
nine. This digit total, which might be 27, is itself a smaller multiple of nine.
Its digits can then be reduced to an even smaller multiple of nine,
eventually reaching nine itself.
Dr. Ken Mellendorf
Illinois Central College
It has to do with remainders modulo 9. I offer you some further
examples:
537 x 9 = 4833, 4 + 8 + 3 + 3 = 18, 1 + 8 = 9
536 x 9 = 4824, 4 + 8 + 2 + 4 = 18, 1 + 8 = 9
539 x 9 = 4851, 4 + 8 + 5 + 1 = 18, 1 + 8 = 9
Also notice that for each of the multiples of 9 above, the sum of the
last two digits is 6, the sum of the last three digits is 14, and the
sum of all four digits is 18. This is explained by the concept of
modular arithmetic. Your math teacher may be able to dig into this with you.
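Those digit patterns hold for all four of the neighboring multiples of 9 cited in this thread, which a quick check (not from the original reply) confirms:

```python
for k in (536, 537, 538, 539):
    digits = [int(d) for d in str(k * 9)]
    assert sum(digits[-2:]) == 6     # last two digits sum to 6
    assert sum(digits[-3:]) == 14    # last three digits sum to 14
    assert sum(digits) == 18         # all four digits sum to 18
print("all checks pass")
```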
A radio one ten-thousandth of the diameter of a human hair is now picking up local radio stations.
Researchers at the University of California, Berkeley, have built a radio out of a single carbon nanotube, which is 100 billion times smaller than the first commercial radios. The nanotube radio requires only a battery and earphones to tune in to a station.
The nanoradio is currently configured as a receiver but could also work as a transmitter. Scientists say it could be used in any number of applications — from cell phones to microscopic devices that sense the environment and relay information via radio signals.
Nanotubes are rolled-up sheets of interlocked carbon atoms that form incredibly strong tubes. They detect radio signals by vibrating thousands to millions of times per second in tune with the radio wave. Reception on the nanoradios is scratchy, but scientists say they're working on improving the quality.
by Dan Lashof, via NRDC’s Switchboard
We dump billions of tons of carbon pollution into the atmosphere each year. As a result, the concentration of carbon dioxide has increased by 40%. Excess carbon dioxide traps excess heat in the atmosphere. Excess heat causes extreme heat waves, droughts, and storms.
And that’s what we have been seeing. In June alone, 170 all-time high temperature records were broken or tied in the United States, and more than 24,000 daily high temperature records have been broken so far this year. If the climate weren’t changing, we would expect to see about the same number of record highs and record lows set each year due to random fluctuations. That’s what we were seeing fifty years ago, but during the last decade there were twice as many record highs as record lows. So far this year the ratio has been 10 to 1.
This year’s extreme weather follows last year’s. The last twelve months were the hottest on record for the United States. Texas saw its hottest and driest summer on record in 2011 by a wide margin, and research published this week shows that carbon pollution dramatically increased the probability of such extreme heat and drought.
Faced with similar information about the carcinogens in cigarette smoke, the mechanism by which these carcinogens cause genetic mutations, and the statistical relationship between smoking and cancer, the Surgeon General says that smoking causes cancer. Of course that doesn’t mean that every individual case of cancer experienced by a smoker can be definitively attributed to smoking. But the Surgeon General does not feel compelled to say that every time she says that smoking causes cancer. And journalists don’t feel compelled to include that caveat every time they write an article about the health toll of smoking.
The Surgeon General’s warning hasn’t always been this clear. In 1966, when cigarette packages were first required to carry a warning, the package said “Cigarette Smoking May be Hazardous to Your Health.” A few years ago a similarly tepid warning may have been appropriate for carbon pollution. Not anymore.
The data are in. It’s time for scientists and journalists to just say it: Carbon pollution causes extreme weather.
Dan Lashof is the director of the National Resources Defense Council’s climate and clean air program. This piece was originally published at NRDC’s Switchboard and was reprinted with permission.
- What Is Causing The Climate To Unravel? Answer: One trillion tons of carbon pollution.
- Every Network Gets Extreme Weather Story Right, ‘Now’s The Time We Start Limiting Manmade Greenhouse Gases’ — ABC
Number 213 (Story #2), February 7, 1995, by Phillip F. Schewe and Ben Stein
INDIRECT EVIDENCE FOR NEUTRINO MASS comes from a Los Alamos experiment in which muon antineutrinos are perhaps transmuting into electron antineutrinos in a process called "neutrino oscillation." Los Alamos uses a proton beam to produce pions whose decays result in streams of various daughter particles, including muon antineutrinos. The pion decay process does not produce any electron antineutrinos, so any that turn up further downstream must, the researchers believe, come from the metamorphosis of another neutrino type, probably muon antineutrinos. Neutrinos, regardless of their type, interact very feebly. During the five months of data taking, the Los Alamos scientists looked for rare interactions in which the newly minted electron antineutrino enters the reaction vessel (filled with 180 tons of mineral oil) and collides with a proton, creating a positron and a neutron. The apparatus is designed to search for characteristic light (Cerenkov radiation) from the positron; meanwhile, the 2-MeV neutron eventually combines with a proton to make a deuteron and a gamma ray. From the sample size one can calculate the oscillation rate. From that, one can infer not a value for neutrino mass directly but rather the difference of the squares of the masses for the two neutrino species. Current theoretical models hold that if oscillation is occurring, at least one of the neutrino types has mass. According to D. Hywel White and William Louis of Los Alamos, the observed rate of electron antineutrino interactions suggests a neutrino mass between 0.5 and 5 eV. The results are not statistically sufficient to settle the issue of neutrino mass and more tests are needed. The issue is important for particle physicists and for cosmologists, who suspect that neutrinos with even a very small mass may play a role in organizing matter into galaxies.
Nacreous clouds are very high, 9–16 miles (15–25 km) up in the stratosphere. They catch sunlight long before ground level sunrise and after sunset to glow eerily with unbelievably bright electric colours. They twist, stretch and curl majestically as lower dark tropospheric clouds hurry beneath.
Their tiny ice crystals diffract sunlight to give the iridescent hues. They need an unusually cold stratosphere (less than -85 Celsius) and are therefore rare winter occurrences. They tend to form in very windy weather and downwind of mountains. The resulting tropospheric disturbances possibly loft necessary water vapour into the lower stratosphere. They are a sub-class of polar stratospheric clouds, PSCs. The top image perhaps shows other less iridescent PSC types dotting the sky.
Search before sunrise or after sunset late December to February in the Northern Hemisphere. Norway, Sweden, Iceland and Finland are favoured locations but they are very occasionally seen further south.
How to distinguish them from “ordinary” iridescent clouds? Nacreous clouds glow in the sky up to an hour before sunrise and after sunset. They are slow moving. They are filmy wave clouds that over minutes twist, stretch and change. They are very brightly coloured and lower iridescent clouds are pallid ghosts by comparison. Once seen you will never forget!
Everyone programs console applications at one time or another. This
kind of programming is most prevalent when people are learning to program, especially
while learning DOS-based C/C++ programming. However, when one migrates to
Windows programming, console application development takes a back seat. But
Win32 console development holds an important place, especially since the Win32
API contains a good number of functions dedicated to console application development.
If you have noticed, even VC++ and newer development technologies like C#
support console project development. Console applications are good
candidates for testing the core functionality of your Windows application
without the unnecessary overhead of a GUI.
But there's always been a sense of helplessness in regard to how to know when
certain system-related events have occurred, like when the user is logging off,
the system is being shut down, or handling CTRL+BREAK or CTRL+C keyboard
events, etc. For a Windows-based application, getting to know when such events
occur is no problem, since such applications have a message queue assigned to them that
is polled, and assuming that the concerned event is programmed for, it can be
handled pretty easily. But this isn't the case with a console application, which
has no concept of a message queue.
This article intends to discuss how you can handle all kinds of console-based
events in any console application. Once you have gone through it, you will see
for yourself how trivial this seemingly helpless task is :)
Setting Console Traps
The first step in handling console application events is to set up an event
trap, technically referred to as installing an event handler. For this purpose,
we utilize the SetConsoleCtrlHandler Win32 API, which is prototyped as shown below:

BOOL WINAPI SetConsoleCtrlHandler( PHANDLER_ROUTINE HandlerRoutine, BOOL Add );

The HandlerRoutine parameter is a pointer to a function that has the following prototype:

BOOL WINAPI HandlerRoutine( DWORD dwCtrlType );

The only parameter HandlerRoutine takes is a DWORD that tells what console
event has taken place. The parameter can take the following values:
CTRL_C_EVENT - occurs when the user presses CTRL+C, or when the event is sent
programmatically via the GenerateConsoleCtrlEvent API.
CTRL_BREAK_EVENT - occurs when the user presses CTRL+BREAK, or when the event is
sent via the GenerateConsoleCtrlEvent API.
CTRL_CLOSE_EVENT - occurs when an attempt is made to close the console, i.e. when
the system sends the close signal to all processes associated with a given
console.
CTRL_LOGOFF_EVENT - occurs when the user is logging off. One cannot
determine, however, which user is logging off.
CTRL_SHUTDOWN_EVENT - occurs when the system is being shut down, and is
typically sent to services.
Upon receiving the event, the
HandlerRoutine can either choose to do some
processing, or ignore the event. If the routine chooses not to handle the event,
it should return
FALSE, and the system will then proceed to the next installed
handler. But in case the routine does handle the event, it should return
TRUE after doing all the processing it requires. The
CTRL_CLOSE_EVENT, CTRL_LOGOFF_EVENT, and CTRL_SHUTDOWN_EVENT handlers are
typically used to perform any cleanup that is required by the application
before it terminates.
The system has some timeouts associated with these three events: 5 seconds for
CTRL_CLOSE_EVENT, and 20 seconds for the other two. If the
process doesn't respond within the timeout period, Windows proceeds to
display the End Task dialog box to the user. If the user then ends the
task, the application will not have any opportunity to perform cleanup.
Thus, any cleanup that is required should complete well within the timeout
period. Below is an exemplification of the handler routine:
BOOL WINAPI ConsoleHandler(DWORD CEvent)
{
    switch(CEvent)
    {
    case CTRL_CLOSE_EVENT:
        MessageBox(NULL, "Program being closed!", "CEvent", MB_OK);
        break;
    case CTRL_LOGOFF_EVENT:
        MessageBox(NULL, "User is logging off!", "CEvent", MB_OK);
        break;
    case CTRL_SHUTDOWN_EVENT:
        MessageBox(NULL, "User is logging off!", "CEvent", MB_OK);
        break;
    }
    return TRUE;
}
Now that we have seen how the handler routine works, let's see how to install
the handler. To do so, as mentioned earlier in the article, we use the
SetConsoleCtrlHandler API as shown below:

if (SetConsoleCtrlHandler((PHANDLER_ROUTINE)ConsoleHandler, TRUE) == FALSE)
{
    printf("Unable to install handler!\n");
    return -1;
}

The first parameter is a function pointer of the type PHANDLER_ROUTINE, whose
prototype has been discussed earlier. The second parameter, if set to TRUE,
tries installing the handler, and if set to FALSE, attempts the un-installation.
If either attempt is successful, the return value is nonzero (TRUE).
So, that's all there is to handling console application events. After the
handler is installed, your application will receive the events as and when they
come, and when execution is about to terminate, the handler may be
un-installed. Pretty easy, eh :) ?
I hold Early Achiever status in MCSE 2000, MCSE NT 4.0, and MCP+I, and am actively involved in programming using C/C++, the .NET framework, C#, the Win32 API, VB, ASP and MFC.
I also have various publications to my credit at MSDN Online Peer Journal, Windows Developer Journal (http://www.wdj.com/), Developer 2.0 (http://www.developer2.com/), and PC Quest (http://www.pcquest.com/).
On Friday July 24, 2009, multiple significant hail storms moved southeastward across northeast Iowa, southwest Wisconsin, and northwest Illinois. These hail storms produced extremely large hail, and copious amounts of hail, which led to some concentrated swaths of damage to vegetation. In some areas, most of the crops were severely damaged or destroyed. For a complete write-up on the situation, click here.
With a relatively clear day today, some of the scarring is visible on satellite images. First, the MODIS Vegetation Index which is a 1km resolution product designed to pick up on areas of greenness in the vegetation:
A minimum of about 28% greenness is evident just south-southeast of Belmont, which is not surprising given that is where some of the worst crop damage was observed. Corn stalks were completely stripped and sheared off to a height of less than 2 feet. These damaged areas of vegetation now absorb more radiation from the sun, thereby allowing the surface to heat faster. This phenomenon is evident in the MODIS 250m resolution satellite image from below. Cumulus clouds fired in greater abundance on the Wisconsin hail swaths, which makes them less distinguishable than the Iowa hail swath.
The below image is from a few days later, a little earlier in the day so fewer cumulus clouds. The hail scars are more clearly visible over southwest Wisconsin as well as in northeast Iowa.
In western Clayton County, the surface skin temperature (the temperature of the Earth's surface, rather than the air temperature), as estimated by MODIS 1km resolution satellite imagery, had a differential of about 22 degrees between the hail swath and the green vegetation remaining outside of the swath at 11:58 AM CDT. In northwest Lafayette County, the differential was around 15 degrees. This demonstrates that areas with vegetation damaged by hail tend to heat up much more quickly than surrounding areas, and it also explains why the diurnal cumulus would develop there first.
M E R C U R Y
Characteristics: Mercury is the first planet of our Solar System and the closest to the Sun, at only 0.387 astronomical units (AU), with an orbital eccentricity of 0.20. It is one of the rocky inner planets, and its diameter is only 4,879 km. Being so small and so close to the Sun, it is difficult to locate from the Earth.
I first located Mercury on 7 June 1987, when I was 19 years old, from the Majorcan locality of Cán Picafort (Majorca, Spain). It was in the constellation of Gemini, and observing from the Northern Hemisphere it was easy to locate because the inclination of the ecliptic in Gemini is almost vertical with respect to the horizon, so the planet could be observed in the first hours of the night.
First planet of our Solar System.
It really is difficult to locate Mercury because it never moves more than about 28° away from the Sun, since it is an inner planet, that is, it lies between the Sun and the Earth. It has a rotation period of 58.7 days, which curiously corresponds to 2/3 of its period of revolution around the Sun. We find such synchronizations in several planets and satellites of our Solar System.
In mythology, Mercury represented the messenger of the gods; you can find more information on its mythology on the Web.
Mercury is a planet with a very tenuous atmosphere; therefore, the latest images from the space probes that have flown past the planet show a surface marked by the immense number of craters covering it.
|Average dist. from the Sun||0.387 AU|
|Average orbital radius||57,910,000 km|
|Orbital period (sidereal)||87d 23.3h|
|Orbital period (synodic)||115.88 days|
|Average orbital speed||47.8725 km/s|
|Number of satellites||0|
|Equatorial diameter||4,879.4 km|
|Surface area||7.5 × 10⁷ km²|
|Average density||5.43 g/cm³|
|Surface gravity||2.78 m/s²|
|Rotation period||58d 15.5088h|
|Escape velocity||4.25 km/s|
|Average surface temp. (day)||623 K|
|Average surface temp. (night)||103 K|
On the Web page you can obtain more data on each of the planets of our Solar System and on the dwarf planets defined by the IAU.
Masm (last update 2006-08-28)
About half the American population receives its drinking water from lakes and rivers. Use this interactive map to find out where your water comes from.
Variable Rate Irrigation (VRI) technology lets farmers avoid waste water with the push of a button.
Healthy rivers are ever-changing, rising and falling as seasons come and go. Seasonal flow patterns are a river's heartbeat — they orchestrate plant and animal life cycles and sustain complex natural processes.
Find out how the Conservancy is working to revolutionize the value of water by creating water funds for people and nature.
Join The Nature Conservancy on a journey down the Mississippi River. Hear the stories of people who depend on this mighty river for their livelihoods and inspiration.
Explore this fun graphic to see how rivers work for you.
The following example identifies two of the <th> elements as headers for columns, and two of the <td> elements as headers for rows:
The scope attribute has no visual effect in ordinary web browsers, but can be used by screen readers.
The <td> scope attribute is not supported in HTML5.
The scope attribute defines a way to associate header cells and data cells in a table.
The scope attribute identifies whether a cell is a header for a column, row, or group of columns or rows.
|col||Specifies that the cell is a header for a column|
|row||Specifies that the cell is a header for a row|
|colgroup||Specifies that the cell is a header for a group of columns|
|rowgroup||Specifies that the cell is a header for a group of rows|
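Putting the attribute values together, here is a minimal sketch of the pattern described above — two `<th>` elements marked as column headers and two `<td>` elements marked as row headers (the month and savings values are illustrative):

```html
<table>
  <tr>
    <th scope="col">Month</th>
    <th scope="col">Savings</th>
  </tr>
  <tr>
    <td scope="row">January</td>
    <td>$100</td>
  </tr>
  <tr>
    <td scope="row">February</td>
    <td>$80</td>
  </tr>
</table>
```

The markup renders identically with or without the scope attributes; the benefit is that a screen reader can announce "January" when reading the "$100" cell.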
Tapanila, L., and E. M. Roberts. 2012. The earliest evidence of holometabolan insect pupation in conifer wood. PLoS ONE 7(2): e31668. doi:10.1371/journal.pone.0031668.
The pre-Jurassic record of terrestrial wood borings is poorly resolved, despite body fossil evidence of insect diversification among xylophilic clades starting in the late Paleozoic. Detailed analysis of borings in petrified wood provides direct evidence of wood utilization by invertebrate animals, which typically comprises feeding behaviors.
We describe a U-shaped boring in petrified wood from the Late Triassic Chinle Formation of southern Utah that demonstrates a strong linkage between insect ontogeny and conifer wood resources. Xylokrypta durossi new ichnogenus and ichnospecies is a large excavation in wood that is backfilled with partially digested xylem, creating a secluded chamber. The tracemaker exited the chamber by way of a small vertical shaft. This sequence of behaviors is most consistent with the entrance of a larva followed by pupal quiescence and adult emergence — hallmarks of holometabolous insect ontogeny. Among the known body fossil record of Triassic insects, cupedid beetles (Coleoptera: Archostemata) are deemed the most plausible tracemakers of Xylokrypta, based on their body size and modern xylobiotic lifestyle.
This oldest record of pupation in fossil wood provides an alternative interpretation to borings once regarded as evidence for Triassic bees. Instead Xylokrypta suggests that early archostematan beetles were leaders in exploiting wood substrates well before modern clades of xylophages arose in the late Mesozoic.
Wednesday, January 28, 2009
A term often used in meteorological circles is "dirty ridge." No, this is not some kind of meteorological pornography. Rather, it's when there is a "ridge" or area of high pressure that is not strong enough to keep us dry and cloud free. Weather disturbances with sufficient amplitude can inject clouds and rain into the northern portions of the ridge...that is the dirty part. Take a look at the upper level pattern for Thursday at 4 PM (see graphic). This represents the heights of the 500-mb pressure surface above sea level, which is roughly at 18,000 ft (sea level pressure is typically around 1012 mb). The ridge is obvious.
I have included the 24h precipitation for the next two days...you see some precipitation over the northern portion of the domain...particularly over the mountains. Lots of rain shadowing. This is typical for dirty ridges since the flow tends to have a strong westerly component (from the west), which produces good rain shadowing (and strong orographic enhancement).
By the way, when do you think we typically get the lowest temperatures of the day? 9 PM, midnight, 3 AM, 6 AM, or 8 AM...or perhaps some other time? Will give the answer in the next blog.
Posted by Cliff Mass at 7:33 PM
There has been much confusion of late over the definition of what a Domain Event is. I was writing some stuff that will go into both the course manual and the book and figured that it might be timely to put it up on the blog as well.
An event is something that has happened in the past.
All events should be represented as verbs in the past tense such as CustomerRelocated, CargoShipped, or InventoryLossageRecorded. For those who speak French, it should be Passé Composé, they are things that have completed in the past. There are interesting examples in the English language where one may be tempted to use nouns as opposed to verbs in the past tense, an example of this would be “Earthquake” or “Capsize”, as a congressman recently worried about Guam, but avoid the temptation to use names like this for Domain Events and stick with the usage of verbs in the past tense when creating Domain Events. These nouns tend to match up with “Transaction Objects” discussed later from Streamlined Object Modelling. It is imperative that events always be verbs in the past tense as they are part of the Ubiquitous Language.
Consider the differences in the Ubiquitous Language when we discuss the side effects from relocating a customer, the event makes the concept explicit where as previously the changes that would occur within an aggregate or between multiple aggregates were left as an implicit concept that needed to be explored and defined. As an example, in most systems the fact that a side effect occurred is simply found by a tool such as Hibernate or Entity Framework, if there is a change to the side effects of a use case, it is an implicit concept. The introduction of the event makes the concept explicit and part of the Ubiquitous Language; relocating a customer does not just change some stuff, relocating a customer produces a CustomerRelocatedEvent which is explicitly defined within the language.
In terms of code, an event is simply a data holding structure as can be seen in Listing 1.
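The original Listing 1 did not survive extraction. As a stand-in, here is a minimal sketch of an event as a pure data-holding structure (written in TypeScript for illustration; the class and field names are my own, not from the original listing):

```typescript
// A Domain Event: named as a verb in the past tense, and immutable —
// it records something that has already happened.
class CustomerRelocatedEvent {
  constructor(
    public readonly customerId: string,
    public readonly newAddress: string,
  ) {}
}
```

As the text goes on to note, such a structure carries only data; what distinguishes it from a Command is significance and intent, not shape.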
The code listing looks very similar to the one that was provided for a Command; the main differences lie in significance and intent.
Other Definitions and Discussion
There is a concept related to a Domain Event in this description that is defined in Streamlined Object Modeling (SOM). Many people use the term "Domain Event" in SOM when discussing "The Event Principle":
Model the event of people interacting at a place with a thing with a transaction object. Model a point-in-time interaction as a transaction with a single timestamp; model a time-interval interaction as a transaction with multiple timestamps. (Jill Nicola, 2001, p. 23)
Although many people use the terminology of a Domain Event to describe this concept, the term does not have the same definition as a Domain Event in the context of this document. SOM uses other terminology for the concept that better describes what the object is: a Transaction. The concept of a transaction object is an important one in a domain and absolutely deserves to have a name. An example of such a transaction might be a player swinging a bat; this is an action that occurred at a given point in time and should be modeled as such in the domain. This is not, however, the same as a Domain Event.
This also differs from Martin Fowler’s definition of what a Domain Event is.
Example: I go to Babur’s for a meal on Tuesday, and pay by credit card. This might be modeled as an event, whose type is “Make Purchase”, whose subject is my credit card, and whose occurred date is Tuesday. If Babur’s uses an old manual system and doesn’t transmit the transaction until Friday, then the noticed date would be Friday. (Fowler)
By funneling inputs of a system into streams of Domain Events you can keep a record of all the inputs to a system. This helps you to organize your processing logic, and also allows you to keep an audit log of the system (Fowler)
The astute reader may pick up on the fact that what Martin is actually describing here is a Command as was discussed previously when discussing Task Based UIs. The language of “Make Purchase” is wrong. A purchase was made. It makes far more sense to introduce a PurchaseMade event. Martin did actually make a purchase at the location, they did actually charge his credit card, and he likely ate and enjoyed his food. All of these things are in the past tense.
An example such as the sales example given here also tends to lead towards a secondary problem when applied within a system. The problem is that the domain may be responsible for filling in parts of the event. Consider a system where the sale is processed by the domain itself, how much is the sales tax? Often the domain would be calculating this as part of its calculations. This leads to a dual definition of the event, there is the event as is sent from the client without the sales tax then the domain would receive that and add in the sales tax, it causes the event to have multiple definitions, as well as forcing mutability on some attributes. One can bypass this by having dual events (one for the client with just what it provides and another for the domain including what it has enriched the event from the client with) but this is basically the command event model and the linguistic problems still exist.
A further example of the linguistic problems involved can be shown in error conditions. How should the domain handle the fact that a client told it to do something that it cannot? This condition can exist for many reasons but let’s imagine a simple one of the client simply not having enough information to be able to source the event in a known correct way. Linguistically the command/event separation makes much more sense here as the command arrives in the imperative “Place Sale” while the event is in the past tense “SaleCompleted”. It is quite natural for the domain to reject a client attempting to “Place a sale”, it is not natural for the domain to tell the client that something in the past tense no longer happened. Consider the discussion with a domain expert, does the domain have a time machine? Parallel realities are far too complex and costly to model in most business systems.
These are exactly the problems that have led to the separation of the concepts of Commands and Events. This separation makes the language much clearer and although subtle it tends to lead developers towards a clearer understanding of context based solely on the language being used. Anytime one ends up with dual definitions of a concept there is a weight placed on the developer to recognize and distinguish context, this weight can translate into both ramp up time for new developers on a project and another thing a member of the team needs to “remember”. Anytime a team member needs to remember something to distinguish context there is a higher probability that it will be overlooked or mistook for another context. Being explicit in the language and avoiding dual definitions helps make things clearer both for domain experts, the developers, and anyone who may be consuming the API.
Fowler, M. (n.d.). Domain Event. Retrieved from EAA Dev: http://martinfowler.com/eeaDev/DomainEvent.html
Jill Nicola, M. M. (2001). Streamlined Object Modelling. Prentice Hall.
The south polar layered deposits are a stack of layered ice up to 3000 meters (9800 feet) thick which is similar to terrestrial ice sheets. In places, this stack extends up to 1100 kilometers (680 miles) from the pole and many of the impact craters surrounding this ice-sheet appear to be filled with mounds of similar icy material and also sand dunes.
This image shows the material within one of these near-polar craters. The crater is about 44 kilometers (27 miles) across and contains a mound of material about 23 kilometers (14 miles) across and 300 meters (1000 feet) thick on its northern (south facing) wall. The dark material at the top (north) of the image shows the northern wall of the crater, the bright material that begins near the image top and extends toward the bottom is the surface of the mound.
This surface is covered with sand dunes that appear bright as they are still covered by seasonal carbon dioxide frost. Smaller dunes and ripples can be seen on the surfaces of the larger linear dunes. In the low lying areas between dunes, one can see a network of cracks that are reminiscent of the surface of the polar layered deposits, indicating that this mound is probably mostly ice with a thinner and incomplete covering of dunes.
The dark spots in the frost cover are characteristic of how this terrain defrosts, and are commonly observed in these locations during this season.
Pierre Bezukhov writes "Emissions from thawing permafrost may contribute more to global warming than deforestation this century, according to commentary in the journal Nature. Arctic warming of 7.5 degrees Celsius (13.5 degrees Fahrenheit) this century may unlock the equivalent of 380 billion tons of carbon dioxide as soils thaw, allowing carbon to escape as CO2 and methane, University of Florida and University of Alaska biologists wrote today in Nature. Two degrees of warming would release a third of that, they said. The Arctic is an important harbinger of climate change because the United Nations calculates it's warming at almost twice the average rate for the planet. The study adds to pressure on United Nations climate treaty negotiators from more than 190 countries attending two weeks of talks in Durban, South Africa that began Nov. 28."
Author: John D. Barrow
What is dark energy?
What's the mysterious stuff that makes up 70% of our Universe?
Outer space: Are the constants of nature really constant?
Are the unchanging features of the Universe really unchanging?
What happened before the Big Bang?
Did the Big Bang mark the beginning of time? Not if we live in a bubble multiverse!
Outer space: When errors snowball
Why length matters
Outer space: Another Christmas Carol
What Dickens thought about statistics
Outer space: Pretty mean prices
How to keep inflation down
Outer space: Venn you can't use Venn
When the famous diagram fails
Outer space: How to rig an election
It's easier than you think
Outer space: Blowin' in the wind
Getting the most from the air
Outer space: Emergence
How does complexity arise from simplicity?
Science fiction, science fact: reports from the frontiers of physics
What is time? What is space? Are there parallel universes? Join Plus and FQXi on a journey exploring these...
Folding the future: From origami to engineering
Back in the days before smart phones with GPS functions became ubiquitous we had maps. Remember how hard it...
Make a difference to mathematics
Researching the unknown
Science is much stranger than fiction. It suggests that our Universe may just be one of infinitely many which...
Putting Turing on stage
Alan Turing was a mathematician and WWII code breaker who was convicted of homosexuality in the 1950s,...
Space & Tech News: Predicting Meteorite Impacts
© 2013 National Geographic; Video & photos courtesy NASA
Predicting Meteorite Impacts
February 15, 2013—Astrobiologist and National Geographic Emerging Explorer Kevin Hand explains how Asteroid DA-14 could be only the beginning of what we can expect from space. With more than a million such objects out there, what can be done to prevent a meteor strike like the one that crashed into Russia this morning?
Science Fair Project Encyclopedia
The kilowatt-hour (symbol: kW·h) is a unit for measuring energy. It corresponds to one kilowatt (kW) of power being used over a period of one hour. Since a watt is a rate of energy transfer (one joule per second), multiplying power by a timespan gives an amount of energy (1 J/s × 1 s = 1 J).
The kilowatt-hour is commonly used for electrical energy, since it may be easier to understand in a practical context than the proper SI unit for energy, the joule, which is a watt-second (W·s). The joule is a comparatively small unit, making numbers quite large.
1 kW·h = 3,600,000 J
1 W·h = 3,600 J
1 W·s = 1 J
1 W = 1 J/s
A 60 W light bulb consumes 60 W of power. This is the same as 60 J/s or 216,000 J/h or 60 W·s per second or 60 W·h per hour.
The relationship between power (P), energy (E) and time (t) is given by the formula:
- E = P · t
If we use SI units,
- P is measured in watts
- t is measured in seconds
- E is measured in joules
Alternatively, if:
- P is given in kilowatts
- t is given in hours
- E is in kilowatt-hours
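The relationship E = P · t can be checked numerically; a small sketch (the function names are my own, not from the article):

```python
# Energy from power and time: E = P * t.
# With P in kilowatts and t in hours, E comes out in kilowatt-hours;
# one kW·h equals 3,600,000 J.

def energy_kwh(power_kw: float, hours: float) -> float:
    """Energy in kilowatt-hours from power in kW and time in hours."""
    return power_kw * hours

def kwh_to_joules(kwh: float) -> float:
    """Convert kilowatt-hours to joules: 1 kW·h = 3,600,000 J."""
    return kwh * 3_600_000

# A 60 W (0.06 kW) bulb running for one hour:
e = energy_kwh(0.06, 1.0)    # 0.06 kW·h
j = kwh_to_joules(e)         # ≈ 216,000 J, matching the light bulb example above
```

This reproduces the figures given earlier: 1 kW·h = 3,600,000 J, and a 60 W bulb consumes about 216,000 J per hour.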
Usage in commerce
A kilowatt-hour meter (electricity meter) is installed in most electrically supplied buildings in the world.
In the UK, the kilowatt-hour is commonly referred to by consumers and electricity retailers as the unit (as in "we were billed for 300 units of electricity"). The Board of Trade Unit or B.O.T.U., defined by a former department of the UK government, is an alias. The B.O.T.U. should not be confused with the British thermal unit or BTU, which is a much smaller quantity of thermal energy.
The kilowatt-hour is sometimes used for billing of natural gas supply. As most gas meters measure the volume of gas supplied, the calorific value (energy content) of the gas must be accounted for in the conversion to kilowatt-hours.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Revealing the Red Planet's secrets
The High Resolution Imaging Science Experiment onboard the Mars Reconnaissance Orbiter continues to amaze scientists with its detailed images.
December 19, 2007
Since it reached its final orbit around Mars in late 2006, the Mars Reconnaissance Orbiter (MRO) spacecraft has been studying the Red Planet in unprecedented detail. It has found that water may not have been quite as ubiquitous on ancient Mars as many scientists previously thought. Images of the boulder-strewn northern lowlands argue against the possibility that a vast ocean once existed there. And several other locations show more signs of flowing lava than flowing water.
Copyright © 2009 Elsevier Ltd All rights reserved.
Current Biology, Volume 19, Issue 19, R892-R893, 13 October 2009
Correspondence
Herbivory in a spider through exploitation of an ant–plant mutualism
1 Department of Biology, Villanova University, Villanova, PA 19085, USA
2 Heller School for Social Policy and Management, Brandeis University, Waltham, MA 02454, USA
3 Department of Forensic Science, Trent University, Peterborough, ON K9J 7B8, Canada
4 Department of Geological Sciences and Geological Engineering, Queen's University, Kingston, ON K7L 3N6, Canada
- Spiders are thought to be strict predators. We describe a novel exception: Bagheera kiplingi, a Neotropical jumping spider (Salticidae) that exploits a well-studied ant–plant mutualism, is predominantly herbivorous. From behavioral field observations and stable-isotope analyses, we show that the main diet of this host-specific spider comprises specialized leaf tips (Beltian food bodies; Figure 1A) from Vachellia spp. ant-acacias (formerly Acacia spp.), structures traded for protection in the plant's coevolved mutualism with Pseudomyrmex spp. ants that inhabit its hollow thorns. This is the first report of a spider that feeds primarily and deliberately on plants.
Jumping spiders use advanced color-vision, agility, and cognitive skills to prey upon invertebrates. The Salticidae is the largest family of spiders (>5,000 species), and members of this diverse group employ a broad range of foraging strategies. However, departures from carnivory in salticids — or in any of the 40,000 described spiders — are rare: several cursorial spiders imbibe nectar as an occasional supplement to animal prey, and some juvenile orb-weavers incidentally ingest pollen when recycling their webs.
We discovered herbivory in B. kiplingi during field studies in southeastern Mexico (Quintana Roo, involving V. collinsii acacias inhabited by P. peperi ants) and northwestern Costa Rica (Guanacaste Province, involving V. collinsii and V. cornigera inhabited by P. spinicola, P. flavicornis, or P. nigrocincta). Between 2001 and 2008, we systematically observed individual B. kiplingi in these two regions to study foraging behavior. We supported direct observations of spiders in Mexico with high-definition videography.
Individuals at both sites fed predominantly on Beltian bodies, which represented nearly the full diet of spiders in Mexico (91% of items consumed) but relatively less in Costa Rica (60%; χ² = 14.2, df = 3, P < 0.05; Figure 1B). Spiders occasionally supplemented Beltian bodies with extrafloral nectar, another resource central to the ant–acacia mutualism. They also preyed on acacia-ant larvae, small nectar-feeding flies, and (rarely) smaller conspecifics.
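The between-site χ² comparison of diet composition can be illustrated with a contingency-table test. The counts below are hypothetical (the per-site sample sizes are not given here, and the paper's test used four diet categories, hence df = 3, while this two-category sketch has df = 1); only the reported proportions (91% vs. 60% Beltian bodies) are taken from the text:

```python
def chi2_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical item counts matching the reported proportions:
# rows = sites (Mexico, Costa Rica); columns = (Beltian bodies, other items)
chi2 = chi2_statistic([[91, 9], [60, 40]])
```

A statistic well above the 5% critical value (3.84 for df = 1) would, as in the paper, indicate that diet composition differs between sites.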
We observed focal B. kiplingi circumventing the well-known defenses of the acacia's Pseudomyrmex ant-inhabitants, which keep the plant free of most herbivores and encroaching vegetation. These spiders occur almost exclusively on ant-occupied acacias, where they breed year-round and generally build their nests at the distal tips of older leaves (86%; N = 110) that have low rates of ant patrol (see Supplemental Data available on-line). Foraging B. kiplingi actively avoid ant-guards and exhibit situation-specific strategies (for example, changing targets if approached by ants) when harvesting Beltian bodies and when taking nectar or ant larvae (Supplemental Movies S1–S5).
Stable-isotope analyses confirmed B. kiplingi herbivory (Figure 1C; see also Supplemental Data). Our results are consistent with other food-web studies: the tissues of herbivores tend to have lower 15N:14N ratios (expressed as δ15N) relative to carnivores, whereas consumers tend to match the 13C:12C ratios (δ13C) of their food. Mexican B. kiplingi specimens had δ15N profiles averaging 4.8‰ lower than those of other jumping spiders from surrounding vegetation, but only 2.1‰ and 2.9‰ higher than ant workers and Beltian bodies, respectively. B. kiplingi spiders and ant workers at this site had δ13C signatures virtually identical to those of Beltian bodies, whereas other spiders exhibited δ13C values that did not match those of Beltian bodies.
Using dietary mixing models (see Supplemental Data), we estimate that B. kiplingi in Mexico (N = 50) derive >95% of assimilated C and N from ant-acacias, including 89 ± 13.2% (mean ± SE) directly from plant tissue and 8 ± 7.9% indirectly from acacia-ant larvae. Individuals of all age-sex classes had similar diets, suggesting that spiders in this population are near-total vegetarians throughout their lives. Analyses of Costa Rican specimens (N = 11) indicated a larger contribution of other animal prey to the diet of spiders there (Supplemental Figure S1), consistent with feeding patterns observed in the field (Figure 1B).
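The dietary mixing estimate follows the logic of a linear isotope mixing model: the consumer's signature is treated as a weighted average of its food sources. A minimal two-source sketch, with purely illustrative δ13C values (not the measured ones, and ignoring trophic fractionation), looks like this:

```python
def mixing_fraction(delta_consumer, delta_source_a, delta_source_b):
    """Fraction of assimilated material derived from source A, assuming the
    consumer's isotope signature is a linear mix of its two food sources."""
    return (delta_consumer - delta_source_b) / (delta_source_a - delta_source_b)

# Illustrative d13C values (per mil): spider tissue, Beltian bodies, animal prey
f_plant = mixing_fraction(-28.5, -29.0, -24.0)  # -> 0.9 under these assumptions
```

A consumer signature close to the plant end-member yields a fraction near 1, which is the qualitative pattern the δ13C data above show.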
A widespread and intimate distributional association exists between B. kiplingi and myrmecophytic Vachellia spp. The spider's known geographic range coincides with that of ant–acacia systems throughout Mesoamerica (Figure S2). Hundreds of individual B. kiplingi may inhabit a single ant-acacia, yet during a seven-year inventory of all salticids at our Costa Rican site (N = 1174 individuals from 48 species), we observed only two B. kiplingi individuals on plants other than Vachellia spp. In both Mexico and Costa Rica, we also found nests of B. kiplingi (N > 200) only on ant-acacias or adjacent foliage. Reports of host-plant specificity in a spider are rare, and no spider has been shown previously to exploit the specific resources exchanged in any mutualism.
Consumption of Beltian bodies by B. kiplingi may derive from foraging on other static food sources, such as acacia-ant larvae or eggs of other insects. However, while enriched in sugar and protein, these low-fat food bodies are 80% structural fiber and are thus poor surrogates for animal prey. Given that no other spider is known to feed on vegetation, the digestive physiology of B. kiplingi may be specialized to process such a fibrous, nitrogen-poor material. Year-round availability of ant-plant food, combined with indirect defensive benefits possibly conferred by the acacia-ants, may also help explain how the spider's carnivorous ancestor transitioned to herbivory.
The host-specific natural history of B. kiplingi demonstrates that commodities modified for trade in a pairwise mutualism can, in turn, shape the ecology and evolutionary trajectory of other organisms that intercept these resources. Here, one species within an ancient lineage of carnivorous arthropods — the spiders — has achieved herbivory by exploiting plant goods exchanged for animal services. While the advanced sensory-cognitive functions of salticids may have pre-adapted B. kiplingi for harvesting Beltian bodies, this spider's unprecedented trophic shift was contingent upon the seemingly unrelated coevolution between an ant and a plant.
We thank K. Arakawa, M. Milton and J. LaPergola (field assistance); G.S. Bodner and W.K. Maddison, and P.S. Ward (specimen identification); R. Michener, K. Klassen, and A. Vuletich (lab assistance); Centro Ecológico Akumal and Guanacaste Conservation Area (logistical support); and J.L. Bronstein, N.K. Whiteman, D.J. Kronauer, and N.E. Pierce (manuscript comments). This research was funded in part by the Animal Behavior Society, Sigma Xi, Villanova University, and the Earthwatch Institute.
- Document S1. Supplemental Experimental Procedures, Supplemental Results, Supplemental References and Two Figures (PDF 360 kb)
Australia State of the Environment Report 2001 (Theme Report)
Prepared by: Jonas Ball, Sinclair Knight Merz Pty Limited, Authors
Published by CSIRO on behalf of the Department of the Environment and Heritage, 2001
ISBN 0 643 06750 7
The pressures on inland waters can be divided into two main categories. The first category includes pressures from the extraction of surface water and groundwater for human uses such as agriculture, drinking water and industry. The effects of water extraction include:
- reducing river flows and groundwater to levels that are not sustainable for dependent aquatic ecosystems
- the alteration of natural flow patterns in rivers, streams and wetlands
- the construction of instream dams and weirs which provide a barrier to fish movement, but ideal conditions for algal bloom development.
The second category includes pressures arising from activities in the catchment such as land clearing, agriculture and urbanisation. These activities can result in dryland salinity, increased soil erosion and the associated transport of nutrients into waterways, and localised pollution of surface and groundwaters with chemical and biological contaminants. Because of the intrinsic link between inland waters and their catchments, the effects of land use and management must be considered. Other pressures on inland waters include the introduction and spread of exotic plant and animal species, and climate change.
The major pressures on the inland waters are introduced in more detail in the following sections.
The key findings on inland waters in the Australia: State of the Environment (SoE) 1996 report (State of the Environment Advisory Council 1996) are presented below. These key findings are assessed against the latest information to determine whether there has been an improvement, no change or deterioration in condition. Other key findings are also presented.
Australia's inland waters are increasingly being consumed, diverted, polluted and degraded, particularly by population centres and intense land use areas, although many good quality rivers and aquifers remain, mainly in the north.
Drinking water quality - is generally high in large cities, but is less satisfactory in many rural and remote communities.
Water quality - sediments from land erosion and increased salt from rising water tables continue to load inland waters; sedimentation and salinisation adversely affect aquatic biota, increase the cost of water treatment, reduce options for using water, and reduce the storage capacity of dams and reservoirs.
Algal blooms - nutrient levels remain too high and, combined with reduced stream flow, lead to frequent and extensive blooms of blue-green algae which are often toxic. Main nutrient sources are land run-off, erosion and sewage outfalls.
Pollutants - localised problems occur from contaminants e.g. oils, metals, pesticides, acid, chemicals and bacteria; sources include industry, mining, agriculture, forestry, urban development and sewage effluent.
Wetlands - large areas have been destroyed or seriously degraded; the banks of many rivers have been damaged. Drainage, changes to water regimes, and increases in sediment run-off and nutrient inputs are the main causes of wetland deterioration.
Threats to aquatic biota - pollution, over-allocation of water, changed flow regimes and exotic and displaced species are all affecting native species. Many species of aquatic animals are endangered, in decline or extinct.
Household water use - has increased because of increasing populations and rising consumption per person.
Over-allocation - river regulation and damming (mainly to provide a buffer against droughts) have drastically altered seasonal flow regimes in developed regions, particularly in the south-east.
Irrigation - uses the greatest amount of water; irrigation demands contribute to over-allocation of water, and inappropriate practices lead to problems with waterlogging, salinisation of soils, and nutrient and pesticide pollution of inland waters.
Groundwater mining - reserves are generally being used much faster than they are being replenished, e.g. in the Great Artesian Basin (GAB), Pioneer and Namoi valleys and Burnett Basin. A program to cap running bores is under way for the GAB.
Data issues - Australia lacks nationally coordinated basic data on water quality and catchment characteristics. Initiatives such as the National River Health Program, Waterwatch, and other community-based groups can address this need.
Integrated catchment management - most inland water issues require whole catchment management scale solutions.
Australia has only limited surface and groundwater resources, and in many areas large volumes of water are extracted for human uses. The volume of water that can be extracted from a river or groundwater resource while maintaining sufficient water to protect and maintain the aquatic environment is called the sustainable yield. Current estimates of sustainable yields suggest that, on average, only 20% of total run-off can be sustainably captured for human use. In some catchments the current water use is close to or exceeds the sustainable yield and the aquatic environment is under considerable pressure.
As well as reducing flows, the harnessing of rivers and streams to supply water for drinking and agriculture has also fundamentally changed the natural regime of low and high flows that many of Australia's unique ecosystems have adapted to. For example, many inland wetlands in Australia are decreasing in area because the size and variability of minor flooding events has decreased significantly due to the regulation of flows by dams (Kingsford 2000). The lower flows have also provided ideal conditions for stratification of water storages (which leads to the leaching of nutrients and pollutants from sediments) and the development of blue-green algal blooms.
Water storages such as dams, weirs and barrages have a secondary effect on Australia's native fish populations by providing a barrier to fish movement. Many of Australia's native fish migrate upstream as part of their reproductive cycle. By preventing fish movement both up and downstream, barriers also prevent access to important fish habitat. There are over 1700 barriers to fish movement in inland New South Wales alone.
Although groundwater resources are widespread, not all aquifers contain groundwater of sufficient yield and quality for human uses. In addition, with the cap on surface water use in the Murray-Darling Basin and increased cost of surface water, the development of groundwater resources has increased. For example, total groundwater use increased by 90% between 1983/84 and 1996/97. The volume of groundwater extracted from aquifers in some areas exceeds their sustainable yield.
Once an aquifer is exhausted it can take decades or even centuries for it to recover. It is also increasingly recognised that many terrestrial ecosystems are dependent on groundwater and that groundwater inflow provides the baseflow in many rivers and streams. There are also many unique ecosystems (e.g. wetlands) that are directly dependent on groundwater.
The impacts of indiscriminate land clearing are affecting Australia's inland aquatic environments. The replacement of deep-rooted perennial vegetation with shallow-rooted short-lived pastures and crops has increased the 'leakage' of water into the groundwater. This has caused groundwater tables to rise in many areas resulting in land salinisation, water logging and higher salinity in rivers and streams. Over the last five years the threat posed by dryland salinity has been quantified with up to 5.7 million hectares already affected (primarily in south-western Australia, western Victoria and South Australia) and another 7.5 million hectares at risk. In some areas, river and stream salinities are predicted to increase substantially over the next 50 years resulting in stress on flora and fauna, salinisation of freshwater wetlands, damage to riparian vegetation and fundamental changes to water chemistry. These impacts are already being seen in Western Australia and western Victoria. Drinking-water supplies for many inland towns in New South Wales and Victoria and much of South Australia are also threatened by increasing river and stream salinity.
Land clearing and agriculture have also contributed to nutrient enrichment of many inland waters. Soil erosion from grazing is the largest source of nutrients in many catchments. High nutrient levels, combined with the increased periods of low flow due to river regulation and water extraction, have caused blue-green algal blooms to become a persistent problem in some dams, wetlands and lakes. Point sources of nutrients such as wastewater discharges from sewage treatment plants or intensive livestock enterprises (e.g. cattle feedlots) can also contribute to nutrient enrichment in some rivers and streams.
The clearing and grazing of riparian vegetation and salinisation can have particularly severe effects, and many inland waters have severely degraded riparian vegetation. Riparian vegetation creates an important buffer between polluting land uses and rivers and streams. The loss of riparian vegetation leads to a reduction in buffering capacity, leaf litter deposition, streambank stability and habitat, and has the potential to alter stream metabolism.
Modern agriculture, and especially irrigated agriculture, relies on pesticides to maintain crop and pasture health and productivity. As in most other countries, pesticide use by agriculture in Australia has increased substantially over the past 20 years, with close to $1 billion spent annually. Although new pesticides have been developed and the management and application of pesticides has improved, high concentrations of pesticides have been measured in water bodies and groundwater in agricultural areas. Pesticides have been implicated in at least 20 fish kills in New South Wales rivers, streams and dams alone since 1990.
Other activities in catchments such as urbanisation, mining and industry can also result in the pollution of surface water, groundwater and sediments. The introduction and invasion of exotic animals and plants have displaced some native species, reduced biodiversity and caused other problems such as changes in water quality.
Unstable stream banks resulting from clearing of riparian vegetation contributing to river silt loads.
Source: Robert Simpson.
Pi Day is 3/14, and I’m sure you’re planning to celebrate it in your classroom.
I’ve got a collection of links for you, all of which can be used spontaneously, without prep, in some little snippet of time left over during the day. Then, when you’re home with the family, you can pull out all the stops, bake pi(e)s, spend happy hours calculating, wear your special Pi Day outfits, take the annual family Pi Day photos… you know, the usual.
Here are some ways to celebrate:
- Visit the official website.
- NCTM also has a Pi Day page.
- Check out the Exploratorium’s Pi Day page, with poetry yet!
- Rebecca Rupp’s Pi Day Page includes links to pi issues in higher mathematics, including the pi vs. tau controversy.
- While considering pi poetry, don’t miss the Piku.
- Laura Smoller at UALR has a pi history page. Add the dates to your classroom timeline.
- Visit teachpi.org. This is one of the best places to track down the infamous PiRap.
- Bake a pi cake.
- Use food to calculate pi.
- Sing Pi Day songs. The link here isn’t necessarily the best Pi Day music, but it gets my vote because it refers to its songs as “carols,” and I totally love the idea of Pi Day carols.
- My favorite Pi Day song site.
- Pidayinternational has the coolest little video, and a virtual party for you to join.
- A bunch more Pi Day songs, of various kinds and qualities.
Having sung and possibly also taught about pi, we can celebrate Albert Einstein’s birthday. Pi Day happens to be on Einstein’s birthday, or perhaps we should say that Einstein happened to be born on Pi Day. Pi begins with 3.14, making 3/14 Pi Day. But Einstein’s birthday is also worth celebrating.
Here are some of our favorite Einstein links:
- Einstein’s Big Idea This Nova page has links to lots of background information and experiments.
- Here is a PhysicsQuest assignment on Einstein. It’s all complete — just send the kids to the computer.
- The AIP Center for the History of Physics exhibit on Einstein includes a brief version if you’re in a hurry.
- Here is a PBS interactive site on Einstein.
It would be a great day to eat pie for Pi day and cake for Einstein’s birthday, check out a few of these sites on the classroom computer, sing some pi songs, and send the kids on their way inspired and refreshed by some of the world’s Big Ideas.
- Muon detectors
- Underground muon experiments
- Extensive Air Shower arrays
- Cherenkov detectors
- Balloon detectors
- Further reading
The muon component of the atmospheric cascade is measured by muon detectors. It is important to note that only primary cosmic ray (CR) nucleons >4 GeV have sufficient energy to generate muons that can penetrate the atmosphere. The muons are detected with, for example, Geiger-Müller counters or scintillation counters. The Geiger-Müller counters require high voltage, which creates a very high electric field near the anode of the detectors. When a CR particle enters a detector, it strips off some electrons from the counting gas and from the counter tube wall. These electrons are accelerated towards the positively charged wire and gain enough energy to strip more electrons from the counter gas molecules. In turn, these electrons are also accelerated and strip off more and more electrons. This electric avalanche, consisting of more than a billion negative charges, rains down on the positively charged wire, causing a current that flows into a simple detection circuit.
As a single Geiger counter is sensitive to particles coming from any direction, such a detector assembly does not permit the selection of specific orientations and of the particle family. The use of two or more Geiger detectors with the coincidence technique (simultaneous count signal in two or more counter tubes) offers the possibility of carrying out more sophisticated experiments, e.g. discriminate muons and determine the direction of incidence. It also allows to exclude the detection of terrestrial radiation.
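The coincidence technique described above can be sketched in a few lines: a hit in one counter is counted only if a hit in a second counter falls within a short time window, which suppresses uncorrelated background such as local radioactivity. All times and the window size below are illustrative:

```python
import bisect

def count_coincidences(hits_a, hits_b, window):
    """Count hits in counter A that have a partner hit in counter B within
    +/- window seconds. A through-going muon fires both counters nearly
    simultaneously; uncorrelated background usually fires only one."""
    hits_b = sorted(hits_b)
    n = 0
    for t in hits_a:
        i = bisect.bisect_left(hits_b, t - window)
        if i < len(hits_b) and hits_b[i] <= t + window:
            n += 1
    return n

# One genuine coincidence near t = 1.0 s; the other hits are uncorrelated noise.
n_coinc = count_coincidences([1.0, 5.0, 9.0], [1.00001, 7.0], window=1e-3)
```

Stacking more counters in coincidence along a line additionally selects the direction of incidence, as the text notes.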
The high-energy part of the muon component is studied by underground detectors. These detectors use the good penetration capability of muons in matter to easily distinguish muons from other CR components (except for neutrinos). The underground muon detector may be either a single detector or a small array. (Note that atmospheric, solar and cosmic neutrinos can also be studied deep underground. However, the size of the detector must be very large in order to compensate for the small cross section of neutrinos).
Extensive air showers are detected with different kinds of particle detectors. Most common are scintillation counters that allow to measure the time of arrival with high accuracy. Further used devices are water Cherenkov counters, drift chambers, streamer tube detectors, and Geiger-Müller tubes. Position-sensitive devices allow to measure the incidence direction of the particles.
To detect extensive air showers, coincidences of several particle detectors of an array of tens or hundreds of detectors separated by 10-30 meters are required. For the very large showers with billions of particles, the detectors have to be placed in a network with a mesh size of typically one kilometer. Therefore, the size of an air shower array varies from hundreds of meters to tens of kilometers. Such arrays make it possible to study primary CRs with energies in the range 10¹²-10²¹ eV.
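The direction measurement mentioned above can be illustrated for the simplest case: a plane shower front reaches a second detector a distance d away with a delay Δt = (d/c)·sin θ, so timing gives the zenith angle. The spacing and delay below are made-up numbers:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def zenith_angle_deg(delta_t_s, baseline_m):
    """Zenith angle of a plane shower front from the arrival-time difference
    between two detectors along the shower's ground projection."""
    s = C * delta_t_s / baseline_m
    if abs(s) > 1.0:
        raise ValueError("delay too large for a front moving at light speed")
    return math.degrees(math.asin(s))

theta = zenith_angle_deg(50e-9, 30.0)  # ~30 degrees for a 50 ns delay over 30 m
```

Real arrays fit a plane (or curved) front to many detectors at once, but the two-detector case captures the principle.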
Relativistic electrons and positrons produced in the atmospheric cascade emit Cherenkov light in the visible range when they travel faster than the speed of light in the medium. The Cherenkov array collects these light pulses from a large volume (a thousand cubic kilometers). A similar technique is also used to study neutrinos, where the Cherenkov light pulses are produced under water (e.g. the Deep Underwater Muon And Neutrino Detector, DUMAND) or in ice (e.g. the IceCube Neutrino Observatory or the Antarctic Muon And Neutrino Detector Array, AMANDA).
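The Cherenkov condition v > c/n translates into an energy threshold: a particle radiates only above the Lorentz factor γ = 1/√(1 − 1/n²). As a worked check, for electrons in water (n ≈ 1.33) this gives a kinetic threshold of roughly 0.26 MeV:

```python
import math

ELECTRON_REST_MEV = 0.511  # electron rest energy, MeV

def cherenkov_threshold_kinetic_mev(n, rest_energy_mev):
    """Kinetic energy above which a charged particle emits Cherenkov light
    in a medium of refractive index n (condition: v > c/n)."""
    gamma_threshold = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return (gamma_threshold - 1.0) * rest_energy_mev

t_water = cherenkov_threshold_kinetic_mev(1.33, ELECTRON_REST_MEV)
```

The low threshold in water and ice is what makes these media practical for the large-volume neutrino detectors named above.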
Modern balloons bring detectors up to altitudes of 40-70 km. Earlier, rather small and simple detectors were flown on balloons. However, today rather large and complicated telescopes such as the BESS (Balloon Borne Experiment with Superconducting Solenoidal Spectrometer) detectors are flown on balloons. At these high altitudes, the atmosphere above the balloon is negligible for CR, and therefore the balloon borne detectors observe directly primary CR particles. In this sense they are like low-orbit satellites, only much cheaper and easier to operate.
The geomagnetic rigidity cutoff is still a significant effect for balloon observations. Moreover, the atmospheric albedo particles (particles reflected or scattered back into space from the atmosphere) are also measured by balloon detectors and therefore have to be taken into account. The main disadvantage of balloon-borne experiments is that they are campaign-like experiments, operating only for a short time interval.
M.L. Duldig, "Muon observations", Space Science Reviews, vol. 93, pp. 207-226, 2000
|Dec21-12, 09:36 PM||#1|
every reaction requires its reactants to have enough activation energy in order to start the chemical reaction. sometimes, extra energy is provided in the form of heat. so the reactants take in heat from the surroundings and have enough energy to break old bonds and form new bonds. when they break old bonds they take in the heat energy provided right? then when they form new bonds they let out energy that is less than the heat energy provided hence some of the heat energy is actually taken in by the reactants right? that causes the temp drop as shown in the endothermic temp graph.
just to clarify if my line of thought is correct, in this case the activation energy that the reactants possess come from the heat energy at room temperature as well as the heat energy from the heat source right? and that the amount of heat energy that reactants let out is much lesser than this energy combined, hence causing temp to drop to a minimum, am i right?
the purpose of typing out this paragraph is to check if my theory has anything wrong in it, so please feel free to correct me if i am wrong!Thank You!
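The bookkeeping described in the question (energy absorbed breaking bonds minus energy released forming bonds) can be sketched numerically. The bond enthalpies below are rough textbook averages and vary by source; a positive result means the reaction is endothermic overall:

```python
# Approximate mean bond enthalpies in kJ/mol (rough textbook values)
BOND_KJ_MOL = {"N#N": 945, "O=O": 498, "N=O": 607}

def reaction_enthalpy(bonds_broken, bonds_formed):
    """dH ~ (energy in to break bonds) - (energy out forming bonds).
    Positive -> endothermic: net heat is drawn from the surroundings,
    so the temperature of the mixture drops, as described above."""
    absorbed = sum(BOND_KJ_MOL[b] * n for b, n in bonds_broken.items())
    released = sum(BOND_KJ_MOL[b] * n for b, n in bonds_formed.items())
    return absorbed - released

# Example: N2 + O2 -> 2 NO, a classic endothermic reaction
dH = reaction_enthalpy({"N#N": 1, "O=O": 1}, {"N=O": 2})  # positive (kJ/mol)
```

With these rough values the estimate comes out a couple of hundred kJ/mol endothermic, the right sign and order of magnitude for this reaction.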
|Dec22-12, 03:11 AM||#2|
I don't see anything obviously wrong in what you wrote (apart from the fact phrases should start with a capital letter), but:
1. I don't see the plot you refer to, I am only guessing its content.
2. Seems to me like you first refer to the system without heating and then miraculously there is a heat from a heat source. That makes me wonder what is the experimental setup you are talking about.
High Energy Cosmic Rays and the Origin of Life
|Sep12-12, 08:59 AM||#1|
High Energy Cosmic Rays and the Origin of Life
The advocates of Darwin's theory of evolution find it difficult for the normal rate of evolution alone to account for the species diversity we see today. What factor(s) accelerated the origin of life and species diversity?
Novae, especially supernovae, would generate high-energy cosmic rays that could accelerate DNA mutation during such a period and thus create many new species.
The best supernova rate estimate we can offer indicates that one or more supernova explosions are likely to have occurred within 10 pc or so of the Earth during the Phanerozoic era, i.e., during the last 570 million years since the sudden biological diversification at the start of the Cambrian. (CERN)
CERN-TH.6805/93, John Ellis et al.
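The quoted rate estimate can be checked with a crude Fermi calculation. Every number below is an order-of-magnitude assumption (a uniform-disk Galaxy, ~2 supernovae per century); under these assumptions the expected number of supernovae within ~10 pc over the Phanerozoic comes out of order 0.1-1, broadly compatible with the CERN estimate once local density enhancements and rate uncertainties are allowed for:

```python
import math

# Order-of-magnitude assumptions (all debatable):
SN_RATE_PER_YR = 0.02        # ~2 Galactic supernovae per century
DISK_RADIUS_PC = 15_000.0    # uniform-disk model of the Galaxy
DISK_HEIGHT_PC = 300.0
NEARBY_RADIUS_PC = 10.0
PHANEROZOIC_YR = 5.7e8

disk_volume = math.pi * DISK_RADIUS_PC**2 * DISK_HEIGHT_PC
nearby_volume = (4.0 / 3.0) * math.pi * NEARBY_RADIUS_PC**3

# Expected nearby supernovae, assuming they are spread uniformly over the disk
expected = SN_RATE_PER_YR * PHANEROZOIC_YR * nearby_volume / disk_volume
p_at_least_one = 1.0 - math.exp(-expected)  # Poisson probability of >= 1
```

The point is only that the nearby-supernova scenario is not absurd on rate grounds, not that this toy model reproduces the published number.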
|Sep12-12, 10:03 AM||#2|
Blog Entries: 1
For a start there is no "normal evolution rate": species can remain stable for millions of years whilst others can speciate in decades. In addition, the biodiversity of Earth has risen and fallen multiple times across its long history.
|Sep12-12, 12:32 PM||#3|
High energy cosmic rays can cause an increase in UV radiation. However, UV radiation does not cause mutation in large organisms. The high energy cosmic radiation could itself cause mutation in large organisms. I conjecture that you are referring to mutations caused by high energy cosmic rays on large animals. However, high energy cosmic rays that hit the ground would have left a large amount of radioactive contamination.
The changes in the rate of evolution are not usually caused by changes in mutation rate. Changes in the rate of speciation can be caused by a change in the environment. The punctuations in the evolutionary record seem correlated more with catastrophes than with radiation.
According to the punctuated theory, sudden catastrophes of any type can cause a discontinuity in the fossil record by causing the extinction of species. The extinction of common species allows rare species to multiply. They take over the ranges of the common species. The mutation rate does not have to change. The development of new species can occur on a much longer time scale.
“Transitional forms are generally lacking at the species level, but they are abundant between larger groups." Although there exist some debate over how long the punctuations last, supporters of punctuated equilibrium generally place the figure between 50,000 and 100,000 years.”
A supernova would not cause a flux of radiation that would last 50 KY. It would be more like decade, or maybe a few decades. The pattern of mutations caused by an influx of radiation would cause gene deletions that would be apparent even in extant organisms.
Radiation tends to cause point mutations that delete a gene. Thus, a sudden influx of radiation would result in a sudden decrease of genes. Most of those deletions would result in a saltation, which would be immediately eliminated. There would be differences in the sizes of the genome that would appear in closely related species. That does not appear to happen. There are few cases of gene deletion in the genomes of extant organisms.
“Two classes of mutations are spontaneous mutations (molecular decay) and induced mutations caused by mutagens.
<Big list of causes of mutation.>
Two nucleotide bases in DNA – cytosine and thymine – are most vulnerable to radiation that can change their properties. UV light can induce adjacent pyrimidine bases in a DNA strand to become covalently joined as a pyrimidine dimer. UV radiation, particularly longer-wave UVA, can also cause oxidative damage to DNA.”
I don’t know how common pyrimidine dimers are in the genomes of modern organisms. However, I haven’t found much in the literature on these mutations. Radiation-induced mutation sounds like an easily reproduced laboratory experiment, but it doesn’t appear to have left much of a record in nature. Of course, there are other causes of mutation that have left a larger record in the genomes of extant organisms.
<Bigger list of causes of mutation.>
In order to form a new species, there has to be a large series of mutations that have very small effects. Saltations are preferentially lethal, as opposed to beneficial. However, small changes have a significant chance of being beneficial. Deleting a gene tends to cause saltations. If a regulatory gene is deleted by the radiation, there may be smaller mutations. However, the effect of a series of such mutations would be a smaller genome. There are other mechanisms for mutation that would not decrease the number of genes. These are more likely involved in evolution.
“Saltation does not fit into contemporary evolutionary theory, but there are some prominent proponents, including Carl Woese. Woese, and colleagues, suggested that the absence of RNA signature continuum between domains of bacteria, archaea, and eukarya constitutes a primary indication that the three primary organismal lineages materialized via one or more major evolutionary saltations from some universal ancestral state involving dramatic change in cellular organization that was significant early in the evolution of life, but in complex organisms gave way to the generally accepted Darwinian mechanisms.”
Obviously, the mutations caused by supernovas aren’t necessary to explain speciation in all cases. However, supernovas can trigger rapid speciation events by changing the environment. For example, the end-Ordovician mass extinction may have been caused by a supernova. If so, the mutation rate wasn’t the driving factor for the rapidity of the speciation: the extinction of large numbers of organisms changed the environment, and that set the survivors evolving. Even then, there is no evidence that the rate of speciation increased.
I present end-Ordovician extinction as the exception that proves the rule. The supernova event in this case, if it happened at all, did more killing than mutating.
“A nearby gamma ray burst (less than 6000 light years away) would be powerful enough to destroy the Earth's ozone layer, leaving organisms vulnerable to ultraviolet radiation from the sun. Gamma ray bursts are fairly rare, occurring only a few times in a given galaxy per million years. It has been suggested that a supernova or gamma ray burst caused the End-Ordovician extinction.”
Please note that UV radiation can only cause mutations in transparent organisms that live near or above the surface of the water. UV radiation can’t penetrate animal skin, so it can’t reach the gametes; shining on animals, it can only cause sunburn and cancer, not heritable mutation.
The increase in UV radiation could only kill large animals and plants. Any increase in the rate of evolution caused by UV radiation would be primarily caused by extinction, not mutation. Extinctions change the environment.
There are many cases of rapid speciation that have been studied in the field. However, none of these cases have involved a flux of radiation. In each case observed, the speciation event was triggered by a change in environment. The mutations involved were extremely small. There was no increase in the number of genes deleted in their genome. The mechanisms of these mutations are still unknown, but the changes in environment are very well documented. Therefore, scientists tend to classify different types of speciation in terms of changes in environment.
“There are four geographic modes of speciation in nature, based on the extent to which speciating populations are isolated from one another: allopatric, peripatric, parapatric, and sympatric. Speciation may also be induced artificially, through animal husbandry, agriculture, or laboratory experiments. Observed examples of each kind of speciation are provided throughout.”
Here is a field study example of a speciation event that happened rapidly with no radiation. There are a few such events, but this is a really interesting one.
Rhagoletis pomonella is a case of rapid speciation that did not involve radiation. The apple maggot emerged from the hawthorn maggot over a time interval only a few decades long, and a related blueberry maggot race has arisen as well. There was no supernova event correlated with this speciation.
"Rhagoletis pomonella is significant evolutionarily in that the race of this species that feeds on apples spontaneously emerged from the hawthorn feeding race in the 1800 - 1850 CE time frame after apples were introduced into North America. The apple feeding race does not now normally feed on hawthorns and the hawthorn feeding race does not now normally feed on apples. This constitutes a possible example of an early step towards the emergence of a new species, a case of sympatric speciation.
The emergence of the apple race of Rhagoletis pomonella also appears to have driven formation of new races among its parasites."
There is no supernova recorded in the 1800-1850 time span. In fact, there was no change in the number of sunspots either. There is no evidence that the radiation illuminating the earth changed at all in the 1800-1850 time span. Rhagoletis came to the land of opportunity and literally branched out. So did its waspish parasites.
So the theory of evolution doesn't need supernovas to be consistent with the data. Evolution usually occurs without supernovas by mechanisms that are fairly well known.
|Sep12-12, 02:32 PM||#4|
High Energy Cosmic Rays and the Origin of Life
It may be useful to consider the exception that proves the rule. The end-Ordovician extinction may have been caused by a nearby supernova. Most scientists don’t think so, but a small minority are entertaining the idea, and it is interesting to see why.
Please note that there were no land animals or land plants until the Silurian; land life continued to develop for millions of years after the end-Ordovician event. So when I say that there was selectivity in the extinction of sea animals at the end of the Ordovician, I am saying that there was selectivity in the extinction of animals generally.
The extinctions that occurred at the end of the Ordovician were rather selective. Most of the organisms that died off were microscopic, small enough for UV to shine through them. Some of the larger animals that died off lived at the surface of the ocean; the large animals that survived lived deep underwater.
The trilobites, a class of arthropods, are a particularly good example. The trilobites that died off lived near the surface of the water; trilobites that lived deep underwater did not die off. Since the anatomy of trilobites is broadly similar across the class, the differential extinction is hypothesized to have an environmental cause. It cannot have been mutation, since trilobites are too big to let UV radiation through their bodies. The UV radiation could cause their deaths by irradiating cells on their surface, but it could not have increased their mutation rate. If evolution were driven by a larger mutation rate, the trilobites at the surface would have differentiated rather than gone extinct. So I conclude that neither UV nor cosmic ray radiation caused mutation in trilobites.
Furthermore, there is a latitudinal variation in extinction rates that is not consistent with the other catastrophes occurring at that time. It is because of this selectivity that some scientists are considering the supernova hypothesis.
Note that other extinctions do not show this type of selectivity. There are at least four other extinctions of comparable extent, and none of them shows this pattern. Furthermore, there is speciation and evolution over the entire Phanerozoic. As your reference points out, the likelihood is small that there were a large number of nearby supernovas during this time, yet producing the constant turnover of species in the Phanerozoic would have required a constant battery of nearby supernova events.
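The frequency argument can be made concrete with a back-of-envelope sketch. Every input below is an illustrative assumption (a galactic rate of a few bursts per million years, as quoted earlier in the thread; a rough disk radius; a guessed beaming fraction), not a measured value:

```python
# Back-of-envelope estimate of how many nearby gamma-ray bursts (GRBs)
# Earth could have experienced over the Phanerozoic.
# All inputs are assumptions for illustration, not measured values.

galactic_rate = 3e-6            # GRBs per year in the whole galaxy ("a few per million years")
disk_radius_ly = 50_000         # rough Milky Way disk radius
danger_radius_ly = 6_000        # ozone-stripping distance quoted above
phanerozoic_yr = 5.4e8          # length of the Phanerozoic in years
beaming_fraction = 0.01         # guessed chance a burst's narrow jet points at us

# If bursts are spread uniformly over the disk, the fraction landing
# within the danger radius of Earth scales with area, i.e. (r / R)**2.
near_fraction = (danger_radius_ly / disk_radius_ly) ** 2

nearby_bursts = galactic_rate * phanerozoic_yr * near_fraction
expected_hits = nearby_bursts * beaming_fraction

print(f"nearby bursts: {nearby_bursts:.1f}, aimed at Earth: {expected_hits:.2f}")
```

Even with these generous inputs, the expected number of bursts actually aimed at Earth over the whole Phanerozoic comes out of order one. That is consistent with "one or more events" as a killer, but nowhere near the constant battery that would be needed to drive mutation rates throughout the eon.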
So the influence of a supernova at best can explain the initial stages of the end-Ordovician extinction. Supernovas can't explain the rest of evolution.
Here are two links on the end-Ordovician extinction.
“At the time, most complex multicellular organisms lived in the sea, and around 100 marine families became extinct, covering about 49% of faunal genera (a more reliable estimate than species). The brachiopods and bryozoans were decimated, along with many of the trilobite, conodont and graptolite families.
Statistical analysis of marine losses at this time suggests that the decrease in diversity was mainly caused by a sharp increase in extinctions, rather than a decrease in speciation.
These extinctions are currently being intensively studied. The pulses appear to correspond to the beginning and end of the most severe ice age of the Phanerozoic, which marked the end of a longer cooling trend in the Hirnantian faunal stage towards the end of the Ordovician, which had more typically experienced greenhouse conditions.
The event was preceded by a fall in atmospheric CO2, which selectively affected the shallow seas where most organisms lived. As the southern supercontinent Gondwana drifted over the South Pole, ice caps formed on it.
A small minority of scientists have suggested that the initial extinctions could have been caused by a gamma ray burst originating from a hypernova within 6,000 light years of Earth (in a nearby arm of the Milky Way Galaxy). A ten-second burst would have stripped the Earth's atmosphere of half of its ozone almost immediately, exposing surface-dwelling organisms, including those responsible for planetary photosynthesis, to high levels of ultraviolet radiation. Although the hypothesis is consistent with patterns at the onset of extinction, there is no unambiguous evidence that such a nearby gamma ray burst ever happened.”
“Based on the intensity and rates of various kinds of intense ionizing radiation events such as supernovae and gamma-ray bursts, it is likely that the Earth has been subjected to one or more events of potential mass extinction level intensity during the Phanerozoic. These induce changes in atmospheric chemistry so that the level of Solar ultraviolet-B radiation reaching the surface and near-surface waters may be approximately doubled for up to one decade.
We previously proposed that the late Ordovician extinction is a plausible candidate for a contribution from an ionizing radiation event, based on environmental selectivity in trilobites.”
National Institutes of Health researchers have used the popular anti-wrinkle agent Botox to discover a new and important role for a group of molecules that nerve cells use to quickly send messages. This novel role for the molecules, called SNAREs, may be a missing piece that scientists have been searching for to fully understand how brain cells communicate under normal and disease conditions.
Agilent Technologies Inc. this week announced that the Broad Institute in Cambridge,...
A new method of manufacturing short, single-stranded DNA molecules uses enzymatic...
In recently published research, St. Louis Univ. researchers describe a technology that can detect new, previously unknown viruses. The technique offers the potential to screen patients for viruses even when doctors have not identified a particular virus as the likely source of an infection. In the new approach, scientists use blood serum as a biological source to categorize and discover viruses.
Mannitol, a sugar alcohol produced by fungi, bacteria and algae, is a common component of sugar-free gum and candy. The sweetener is also used in the medical field. Now a team from Tel Aviv Univ. has found that mannitol also prevents clumps of a protein from forming in the brain—a process that is characteristic of Parkinson's disease.
The Agriculture Dept. says it has no indications that genetically modified wheat found in the western state of Oregon last month has spread beyond the field in which it was found. No genetically engineered wheat has been approved for U.S. farming, and the department is investigating how the engineered wheat got in the field.
The U.S. Supreme Court ruled Thursday that companies cannot patent parts of naturally-occurring human genes, a decision with the potential to profoundly affect the emerging and lucrative medical and biotechnology industries. The high court's unanimous judgment reverses three decades of patent awards by government officials.
What would you do with a camera that can take a picture of something and tell you how new it is? If you’re a Lawrence Berkeley National Laboratory scientist, you use it to gain a better understanding of the ever-changing world of metabolites. A team of researchers has developed a mass spectrometry imaging technique that not only maps the whereabouts of individual metabolites in a biological sample, but how new the metabolites are too.
A Cornell Univ. study offers further proof that the divergence of humans from chimpanzees some 4 to 6 million years ago was profoundly influenced by mutations to DNA sequences that play roles in turning genes on and off. The study provides evidence for a 40-year-old hypothesis that regulation of genes must play an important role in evolution since there is little difference between humans and chimps in the proteins produced by genes.
AB SCIEX has unveiled three new solutions for biological researchers to improve identification and quantitation of proteins, peptides, metabolites and lipids. The company extended the applicability of SelexION technology, SWATH Acquisition and ProteinPilot software for academic research in the field of systems biology.
For more than a decade, scientists have suspected that hairpin-shaped chains of micro-RNA regulate wood formation inside plant cells. Now, scientists at North Carolina State Univ. have found the first example and mapped out key relationships that control the process. The research describes how one strand of micro-RNA reduced the formation of lignin, which gives wood its strength, by more than 20%.
Bacteria in the gut that are under attack by antibiotics have allies no one had anticipated, a team of Harvard Univ. Wyss Institute scientists has found. Gut viruses that usually commandeer the bacteria, it turns out, enable them to survive the antibiotic onslaught, most likely by handing them genes that help them withstand the drug.
Cartilage injuries have ended many athletes’ career, and the general wear-and-tear of the joint-cushioning tissue is something that almost everyone will endure as they age. Unfortunately, repairing cartilage remains difficult. Bioengineers are interested in finding innovative ways to grow new cartilage from a patient’s own stem cells. A new study from the Univ. of Pennsylvania brings such a treatment one step closer to reality.
Using data derived from nuclear weapons testing of the 1950s and '60s, Lawrence Livermore National Laboratory scientists have found that a small portion of the human brain involved in memory makes new neurons well into adulthood. The research may have profound impacts on human behavior and mental health.
By activating a brain circuit that controls compulsive behavior, Massachusetts Institute of Technology neuroscientists have shown that they can block a compulsive behavior in mice—a result that could help researchers develop new treatments for diseases such as obsessive-compulsive disorder (OCD) and Tourette’s syndrome.
Zebrafish with very weak muscles helped scientists decode the elusive genetic mutation responsible for Native American myopathy, a rare, hereditary muscle disease that afflicts Native Americans in North Carolina. Scientists originally identified the gene in mutant zebrafish that exhibited severe muscle weakness. The responsible gene encodes for a muscle protein called Stac3.
The genetic malady known as Fragile X syndrome is the most common cause of inherited autism and intellectual disability. Brain scientists know the gene defect that causes the syndrome and understand the damage it does in misshaping the brain's synapses—the connections between neurons. But how this abnormal shaping of synapses translates into abnormal behavior is unclear. Now, researchers believe they know.
JPK Instruments reports on the Yan Jie single-molecule biophysics research group at the Mechanobiology Institute (MBI) of the National Univ. of Singapore (NUS) and their use of optical tweezers. The MBI of the NUS was created through joint funding by the National Research Foundation and the Ministry of Education with the goal of creating a new research center in mechanobiology to benefit both the discipline and Singapore.
On any given day, Jason Atkins and Mohit Patel can be found toiling away inside a chemical biology laboratory at the University of Missouri–St. Louis. And they love every minute of it. The researchers recently developed new technology to transfer DNA into cells. The development is an inexpensive and non-toxic method to help DNA cross the cell membrane so that cells can be modified.
Structural biologists from Rice University and Baylor College of Medicine have captured the first 3-D crystalline snapshot of a critical but fleeting process that takes place thousands of times per second in each human cell. The research could prove useful in the study of cancer and other diseases.
Human scabs have become the model for development of an advanced wound dressing material that shows promise for speeding the healing process, scientists are reporting. The team explains that scabs are a perfect natural dressing material for wounds. In addition to preventing further bleeding, scabs protect against infection and recruit the new cells needed for healing.
Virus particles of the same type had been thought to have identical structures, like a mass-produced toy, but a new visualization technique developed by a Purdue University researcher revealed otherwise. It was found that an important viral substructure consisted of a collection of components that could be assembled in different ways, creating differences from particle to particle.
Transplantation of human stem cells in an experiment conducted at the University of Wisconsin-Madison improved survival and muscle function in rats used to model ALS (amyotrophic lateral sclerosis), a nerve disease that destroys nerve control of muscles, causing death by respiratory failure.
For the first time, researchers have found a particular kind of molecular switch in the food-poisoning bacteria Salmonella Typhimurium under infection-like conditions. This switch, using a process called S-thiolation, appears to be used by the bacteria to respond to changes in the environment during infection and might protect it from harm, researchers report.
The structures of most of the two million proteins in the human body are still unknown. A new algorithm developed by Lawrence Berkeley National Laboratory scientists solves the convoluted shapes of large molecules by using images of numerous individual samples, all caught simultaneously in a split-second flash of X-rays from a free-electron laser.
Striking a blow at foodborne diseases, the 100K Pathogen Genome Project at the University of California, Davis today announced that it has sequenced the genomes of its first 10 infectious microorganisms, including strains of Salmonella and Listeria.
Cells in the human body do not function in isolation. Living cells rely on communication with their environment—neighboring cells and the surrounding matrix—to activate a wide range of cellular functions. This cellular communication occurs on the molecular level and it is reciprocal. Now, for the first time, researchers have measured the molecular force required to mechanically transmit function-regulating signals within a cell.
For decades, people have been getting rid of cockroaches by setting out bait mixed with poison. But in the late 1980s, in an apartment test kitchen in Florida, something went very wrong. A killer product stopped working. Cockroach populations there kept rising. Mystified researchers tested and discarded theory after theory until they finally hit on the explanation.
Sep 23, 2009 | 73
Population growth, now at roughly 78 million extra people per year, is the don't-go-there zone of modern environmentalism and political discourse.
But let's go there for the moment: The biodiversity crisis. The water crisis. The climate crisis. Lurking behind all these crises is at least one shared factor: human population. Species extinction? Think land clearing for agriculture to feed a growing population of 6.8 billion people. Water? The majority of water goes directly to growing that same food supply. And giving a helping hand to all these other crises as a result of all the fossil fuel burning needed to power our lives and lift billions out of poverty: anthropogenic climate change.
Apr 28, 2009 | 16
Temperatures on the Eastern seaboard have risen to the high 80s and low 90s in recent days, 20 to 30 degrees Fahrenheit above normal for April in the region. Here in New York City, where Scientific American's offices are located, we may break the record high of 90 degrees Fahrenheit on this date set back in 1990. But as the temperature climbs in the Northeast and summer wilt sets in before trees have even budded out, it's worth remembering that weather is not climate.
Weather is the day-to-day temperature, humidity or precipitation that determines whether you'll wear your spring coat or strip down for summer. Climate is the overall combination of all these elements over a long period of time.
Temperature records kept since the 19th century reveal that global average temperatures are inexorably creeping up, a phenomenon dubbed climate change. The cause? Increasing levels of greenhouse gases, most commonly carbon dioxide, in the atmosphere, which trap heat that would otherwise radiate back out to space, like a smothering blanket.
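The weather-versus-climate distinction above can be illustrated with a toy simulation: daily readings swing wildly, but long-run averages expose a slow trend. The trend and noise magnitudes below are invented for illustration, not real data:

```python
import random

random.seed(42)  # deterministic for reproducibility

def daily_temp(day, trend_per_year=0.02, noise=8.0):
    """One day's temperature anomaly (degrees C): large random
    weather swings on top of a tiny long-term climate trend."""
    year = day / 365.0
    return trend_per_year * year + random.gauss(0.0, noise)

# 60 years of simulated "weather"
temps = [daily_temp(d) for d in range(60 * 365)]

# "Climate" = the average over a long period
first_decade = sum(temps[:10 * 365]) / (10 * 365)
last_decade = sum(temps[-10 * 365:]) / (10 * 365)

# Any single day can be 20+ degrees off, yet the decade averages
# still reveal the underlying warming.
print(round(last_decade - first_decade, 2))
```

A single hot April week tells you nothing about the trend; only the averaged signal does, which is why record highs in New York are weather, not climate.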
Apr 22, 2009 | 3
Today marks the 39th annual Earth Day, an idea hatched by Wisconsin Senator Gaylord Nelson in 1969 to "shake up the political establishment and force this issue into the national agenda," according to the Earth Day Network, a nonprofit that helps organize the day.
But way back before global warming was a household term and canvas totes were a fashionable alternative to shopping bags, environmental supporters started with the basics: recycling, energy use, pesticides and population growth, to name a few.
So how much have actions and attitudes about saving the earth changed since then? Mark Fischetti, managing editor of Scientific American Earth 3.0 magazine, reflects that, "Back in the '70s, Earth Day was kind of this quirky, one-day grassroots event. It raised a little awareness, but the next day it was gone… Now it's on the radar every single day, it's in the headlines every single day."
Apr 3, 2009 | 13
There are some 82,000 chemicals used commercially in the U.S., but only a fraction have been tested to make sure they're safe and just five are regulated by the U.S. Environmental Protection Agency (EPA), according to congressional investigators. But a government scientist says there's no guarantee testing actually rules out health risks anyway.
The basic premise of safety testing for chemicals is that anything can kill you in high enough doses (even too much water too fast can be lethal). The goal is to find safe levels that cause no harm. But new research suggests that some chemicals may be more dangerous than previously believed at low levels when acting in concert with other chemicals.
"Some chemicals may act in an additive fashion," Linda Birnbaum said this week at a conference held at the Columbia Center for Children's Environmental Health at Columbia University. "When we look one compound at a time, we may miss the boat."
Jan 27, 2009 | 6
The fields of space and climate science are growing ever more closely entwined: Japan launched a new satellite to monitor greenhouse gases late last week, and NASA is set to launch its own Orbiting Carbon Observatory next month. But what about all the nasty fumes and gases spewed by the boosters needed to shoot those climate watchdogs into orbit?
A California company has a solution to shrink the ecological footprint of space exploration, but it remains to be seen whether it can or will be applied to real spaceflight: biodiesel-powered rockets. Flometrics, based in Carlsbad, Calif., earlier this month conducted a ground rocket-engine test of biodiesel (the "same stuff people put in their cars," according to company founder Steve Harrington) alongside RP-1, a standard rocket-grade kerosene fuel, and found them of almost equal fortitude. (The biodiesel delivered about 3 percent less thrust than the RP-1, according to Flometrics.) Biodiesel, a liquid fuel derived from vegetable oil or animal fat, has already been used to power a cross-country jet flight.
Dec 19, 2008 | 11
Reproductive health and enviro activists are fuming over two more last-minute rule changes by the outgoing Bush administration: a new reg that allows healthcare workers to nix treatments to which they have moral objections, and another that bars regulators from taking into consideration a power company's climate change–causing greenhouse gas emissions when it applies for a license to build new coal-fired plants.
Both rules are set to take effect a month from now—just hours before Pres. Bush vacates the White House and President-elect Barack Obama is sworn in to office on Jan. 20.
Dec 18, 2008 | 6
Data from the U.S. Centers for Disease Control and Prevention (CDC) indicates that humans carry phthalates—chemicals used as softeners in plastics and found in everything from pill coatings to nail polish—around in their bodies. A growing number of studies, primarily in rats, show that phthalates cause male reproductive problems—infertility, decreased sperm count, malformation—and can cross the placenta. As a result, the European Union has banned some of them and consumer advocate and environmental groups have called for the U.S. government to do the same.
Today, an advisory panel of scientists, commissioned by the Environmental Protection Agency (EPA), released a report recommending that the chemicals be assessed as a group for potential risks as soon as possible.
Dec 12, 2008 | 21
Is your neighborhood using? Researchers from Oregon State University and the University of Washington have devised technology that analyzes what’s been flushed down the toilet to measure how many speed freaks and coke heads you’ve got living down the street.
A report published in the Dec. 15 edition of the American Chemical Society journal Environmental Science & Technology describes a new test that uses standard chemical analytical methods to look at what stuff makes its way through the municipal sewer systems to wastewater treatment plants. There, the test can measure levels of drugs including illegal substances like crystal methamphetamine. Unlike previous methods, the technique does not require expensive and time-consuming sample preparation, making it practical for comparing drug use in different regions.
Nov 5, 2008 | 9
Among the many pressing issues that President-elect Barack Obama will face when he takes office in January is climate change, which he has called an “immediate threat” and warned has made Earth a “planet in peril.” In an effort to prevent and reverse the problem, he supports a so-called cap-and-trade scheme similar to one now in effect in the U.S. Northeast and the European Union.
Under such a plan, the government sets an overall limit on the amount of pollution allowed and polluters, such as power companies, are sold or given permits to pollute. Those who emit less pollution thanks to a new wind farm, for example, can then sell their excess pollution permits to other companies struggling to meet their quotas. That ensures that the industry stays within the overall emission limit, which declines over time.
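The permit mechanics described above can be sketched in a few lines. The firm names and quantities are invented for illustration only:

```python
# Minimal sketch of one year of the cap-and-trade scheme described above.
# Firm names and all numbers are hypothetical.

cap = 100  # total tonnes of emissions the government allows this year

# Each firm starts with half the permits; one under-emits, one over-emits.
firms = {"WindCo": {"permits": 50, "emissions": 30},
         "CoalCo": {"permits": 50, "emissions": 65}}

surplus = firms["WindCo"]["permits"] - firms["WindCo"]["emissions"]    # unused permits
shortfall = firms["CoalCo"]["emissions"] - firms["CoalCo"]["permits"]  # permits needed

# The clean firm sells the dirty firm exactly the permits it needs;
# the overall cap still binds on the industry as a whole.
traded = min(surplus, shortfall)
firms["WindCo"]["permits"] -= traded
firms["CoalCo"]["permits"] += traded

total_emissions = sum(f["emissions"] for f in firms.values())
assert total_emissions <= cap  # industry stays within the overall limit
assert all(f["emissions"] <= f["permits"] for f in firms.values())
```

Lowering `cap` each year is what drives emissions down over time: as permits become scarcer their price rises, rewarding firms that cut pollution and penalizing those that don't.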
Sep 11, 2008 | 1
The long-term effects of the 9/11 attacks aren’t merely existential. Whether the collapse of the Twin Towers and exposure to the stew of dust and chemicals caused disease, and the emotional toll it took on witnesses, are scientific questions, too.
New estimates suggest that of the more than 400,000 people who were directly exposed to the strikes, 35,000 to 70,000 developed post-traumatic stress disorder (PTSD), and 9,700 to 2,000 people experienced serious psychological distress. Some 3,800 to 12,600 people may have developed asthma, New York City epidemiologists report in this month's Journal of Urban Health.
Migratory Fish in Trouble
The Problem: dams and industrial practices have blocked spawning habitat and decimated migratory fish populations. Flows at main stem hydro-electric dams and canals, as well as industrial effluents and heated plant discharges into the river make this situation worse. Determined action is needed if the Connecticut’s fish runs are to survive.
The Solution: require working fish passage at all main stem dams. Immediately discontinue any recent industrial practices that may be injuring migratory fish runs. Undertake adequate testing before making changes to main stem discharges and flow regimes to prove they will do no harm. Continue to remove or create passage at tributary dams to increase available spawning habitat and success.
All fish are mobile, but none on the Connecticut River make longer journeys than the suite of migratory fish moving upriver from the Atlantic Ocean: blueback herring, alewives, American shad, American eels, Atlantic salmon and sea lampreys. These migrations have been taking place for thousands of years. The journeys of these species may take them through thousands of ocean miles annually, and nearly 200 miles upriver.
- Anadromous fish: Shad, lamprey, salmon, blueback herring and alewives are anadromous fish—they are born in freshwater, swim to the sea to feed and mature, then return to the rivers of their birth to spawn. Though some members of each species die after spawning, only the sea lamprey spawns as the final act in its long life cycle. All other species may survive, return to the ocean, and then return to the river to spawn again.
- Catadromous fish: The American eel is a different fish. It is a catadromous species, growing and maturing in rivers and estuaries and then returning to the ocean to spawn and die. After years of feeding and maturing in rivers and estuaries, American eels head to the Sargasso Sea, a weed-covered expanse in the Bermuda Triangle, where they mate along that sprawling sargassum algae mat in close proximity to their counterparts, European eels. This seaweed expanse has also been found to be the protective ocean habitat that young loggerhead sea turtles journey to after hatching on sandy shores and skittering into the sea.
Main stem and tributary dams have been the major human-induced contributor to declining migratory fish populations on the Connecticut River. Fish passage facilities are in place at most main stem dams, but changes in operations and discharges at main stem sites, as well as failing fish passage facilities, are further harming surviving fish runs. At some critical sites, the fishways themselves are hindering fish: they simply are not fulfilling their required roles, through poor design or operation. These failures further threaten migratory fish runs on the Connecticut.
Critical fish passage and dam-removal work is also taking place on many tributaries and is in the works for others. The Watershed Council has been a leading advocate for working fish passage at main stem and tributary dams. We have successfully helped create fish passage, restore habitat, and remove unneeded dams at dozens of watershed sites. In doing so we’ve opened up nearly 50 miles of migratory fish habitat.
American shad have been among the most common migrants on the Connecticut each spring for centuries. Shad runs never became extinct, but their numbers, once in the millions, are now in steep decline after good initial fish passage restoration success. In recent years shad numbers have dropped 80–90% in the area above Turners Falls dam and into southern Vermont and New Hampshire. Since 2000 the Connecticut’s river-wide shad run has declined by 17%.
Recent developments are deeply troubling:
Heated Effluent. Owners of the Vermont Yankee nuclear plant in Vernon, VT are continuing to by-pass their plant’s cooling towers to save money, releasing their heated effluent directly into the River. The Vermont Supreme Court recently upheld Entergy’s application to increase the heat of that effluent, allowing them to raise the river’s temperature downstream of the Yankee nuclear plant by yet another full degree Fahrenheit.
CRWC, partnering with Vermont Law School, challenged that temperature increase on behalf of its effects on migratory fish, in particular spawning-run shad who use the mainstem—downstream and up beyond that heated discharge, to migrate and spawn. Eggs and young of shad depend on cool river habitats to develop and feed. The court’s ruling proved largely in favor of Entergy, adding yet another thermal insult to the river, the shad, and other migratory and resident aquatic species.
For nearly two decades before, the Vernon nuclear plant had been permitted to raise the River’s temperature up to 13 degrees during winter months and up to 5 degrees in the summer and fall. That heated plume is shown to extend at least 50 miles downstream to Holyoke, MA. CRWC continues its work to reverse this situation.
99% Decline of Shad. Shad numbers on the river overall, and upstream success beyond Turners Falls and then Vernon, have dropped steadily since 1992. In the past decade, annual shad passage success at Turners Falls has hovered at around 1%. The presence of American shad upstream in the “Vernon pool” section of the river has dropped by 99% since the early 1990s. This ongoing upstream loss of the Connecticut’s ancient links to the sea is a tragedy.
39 Blueback Herring. Blueback herring once rivaled the shad in their great mass heading upstream to spawn as far as southern Vermont and New Hampshire. Hundreds of thousands of these "baitfish" were tallied annually at Holyoke just 15 years back. They spawn in quick-water shallows of the main stem river and its tributaries. Today their runs up the Connecticut beyond Holyoke are nearly gone--just a few dozen now return beyond there annually. Thirty-nine blueback herring were tallied at Holyoke in 2009, compared with 410,000 in 1991.
Dams and failing main stem passage have taken their tolls on herring runs. But overfishing, plus long-standing management problems causing fluctuations in populations of predacious fish, are also likely part of this story.
Alewife Runs Lost. Alewives are close relatives of blueback herring. The two species have collectively been lumped under the heading “river herring.” Alewives are lighter in color and have larger eyes than bluebacks. Their spring runs were once massive annual blooms along the Atlantic Coast from Nova Scotia to the Carolinas.
Alewife runs, which typically move up coastal streams, have been severely impacted by tributary dams and overfishing. Many runs have been lost altogether; some have been restored.
Sea Lamprey Slow Decline. Sea lampreys have changed little since the age of the dinosaurs. Though once more numerous, the sea lamprey population spawning upstream on the Connecticut appears quite healthy, reaching New Hampshire and Vermont waters. Built for the ages, tens of thousands continue to return to spawn in the river annually. Nineteen thousand lamprey were counted at Holyoke Dam in 2009, with 66,000 counted in 2008, and 100,000 tallied in 1998.
75 Salmon. The Connecticut River strain of Atlantic salmon disappeared from the river nearly two centuries back. The Connecticut was the southernmost of the large rivers in this coldwater species’ historic migratory footprint. Atlantic salmon feed and mature off the coast of Greenland. There are no good estimates on how many Atlantic salmon once returned to the river annually, but construction of the first main stem Connecticut River dam at Turners Falls in 1798 is believed to have helped extinguish the last run in 1809.
Attempts to create a replacement for the Connecticut’s extinct strain have been ongoing since 1967. Returns average between 100 and 200 fish river-wide annually. Seventy-five salmon returned in 2009.
Atlantic salmon have no problem passing Turners Falls dam—the ten salmon that entered the fish passage facilities there were able to continue upstream in 2008. Salmon that are released upstream can reach and pass upstream dams at Vernon, Bellows Falls, and even Wilder with relative ease.
Shortnose sturgeons are the only federally endangered migratory fish on the Connecticut River. They evolved in the age of the dinosaurs and are toothless and primitive looking—with bony plates instead of fish scales. Shortnose sturgeon are between 2 – 4 feet long, and weigh up to 14 lbs. They mature slowly and don’t spawn until they reach 8 – 12 years old. Federal fines up to $20,000 can be levied for harming a shortnose sturgeon. The total river population is estimated at 1,200 fish.
Shortnose sturgeon live in the Connecticut from below Turners Falls dam to the estuary at Long Island Sound. They typically migrate from salt water into rivers to spawn. However main stem dams impede this species’ movements on the Connecticut and there are now two distinct populations on the river--one is partially landlocked above the Holyoke dam. Unfortunately, only the population living above Holyoke dam is known to spawn successfully.
American eels enter the Connecticut River as tiny, transparent, glass eels. They are born in the Sargasso Sea; then migrate to rivers and estuaries to mature. American eels will spend from 8 – 23 years feeding in the sediment and growing into 2 – 4 foot, silver-bronze adults before heading to the ocean to spawn. Spawning eels congregate in the weed-choked expanse of ocean south of Bermuda called the Sargasso Sea.
Each adult female produces upwards of 15 million eggs. It is presumed that all adults die after spawning beneath that thick algal mat. American eel populations are declining. Overfishing and dams have hurt these migrants. The species was considered for federal endangered species status in 2007, but was not listed. Eels can travel successfully for short distances on land, particularly during damp weather. Eelways have been constructed at some tributary dams to improve their migratory success.
American shad are currently the most numerous migratory fish on the Connecticut. Adult shad are green-gold, nearly two feet long, and can weigh up to 5 lbs. Females (roes) are larger than males (bucks.) Peak migration occurs during May, but the run continues through late-June. American shad spawn in East Coast Rivers from central Florida to Newfoundland. Shad enter the Connecticut each spring beginning in mid-April. The vanguard of their upstream migration corresponds roughly with the blooming of the shadbush.
The spawning peak is reached when river temperatures reach 67 degrees F.—at which point the fish stop their upstream migrations to spawn. Once river temperatures hit 70 degrees, upstream migration ceases altogether. Only half of the migrating Connecticut River shad die upon spawning. Many head back to the sea and will return to spawn up to three times. During midsummer the entire East Coast shad population migrates to the Bay of Fundy to feed.
Shad numbers have experienced steep declines on the Connecticut in the past decade. From a high count of 720,000 fish passing Holyoke dam in 1992, the average for the seasons 2005 – 2007, was 143,000 fish. Historically, shad have spawned as far inland as Bellows Falls, Vermont, 173 miles from the Atlantic.
Blueback herring are sleek, metallic-blue fish, under a foot in length. Blueback herring return to the main stem of the Connecticut from mid-April through June. Considered “bait fish”, these migrants travel as far upstream as Vermont’s Vernon dam, 134 miles from Long Island Sound. They spawn in quick, shallow currents of the main stem river and its tributaries; then feed in the Connecticut’s currents until fall, when they return to the sea.
In 2007 just 69 blueback herring were tallied passing the Holyoke dam. In 1991 there were 410,000 counted at Holyoke, and 630,000 blueback herring were counted there in 1985.
Alewives are close relatives of blueback herring, and difficult to distinguish from them. Alewives are lighter in color and have larger eyes than bluebacks. They migrate into the Connecticut River and its lower tributaries each spring, moving to the slow waters and ponds where they will spawn between March and June. Their upstream migration does not reach into Massachusetts waters. Like their blueback counterparts, alewives are experiencing steep declines. Though dam removals, fishways, and other restoration projects have opened up some of their historic spawning habitat, alewife populations have been damaged by overfishing, pollution, and spikes in predacious fish populations.
Atlantic salmon are over two feet long and weigh about 8 pounds when they return from the Atlantic to spawn for the first time. Born in freshwater rivers, they spend the first two years of their lives growing and feeding there. This coldwater species then heads to the sea to spend several years feeding off the Greenland coast before returning to the freshwater rivers of their birth to spawn. The Connecticut River strain of Atlantic salmon became extinct in the early 1800’s. In 1967 an effort to create a new strain of migratory salmon on the Connecticut began. Current returns average between 100 – 200 fish annually.
Sea lamprey are mottled brown, 2 – 3 foot long, eel-like fish that are born in freshwater rivers and spend five years in freshwater before heading to the ocean to feed and mature. Once in the ocean sea lampreys become parasitic, locking onto ocean fish with their jawless, sucking mouths, and draining nutrition from them. After spending two years as ocean parasites, sea lamprey head back to the rivers of their birth, cease feeding and—though now blind and toothless, migrate upstream to their natal sites where they spawn and die. Sea lamprey evolved during the age of the dinosaurs.
Photo credits (above): CRWC Staff
Image Credits at Right - Illustrations: Bill Singleton; Photos: CRWC Staff. | <urn:uuid:5ae813f9-4f17-4875-b9da-36608c81be30> | 3.703125 | 2,931 | Knowledge Article | Science & Tech. | 47.41513 |
Narendra, Ajay and Ramachandra, TV (2009) Remote detection and distinction of ants using nest-site specific LISS-derived Normalised Difference Vegetation Index. In: Asian Myrmecology, 2 . pp. 51-62.
Remote_detection.pdf - Published Version
Restricted to Registered users only
Download (417Kb) | Request a copy
This study in Western Ghats, India, investigates the relation between nesting sites of ants and a single remotely sensed variable: the Normalised Difference Vegetation Index (NDVI). We carried out sampling in 60 plots each measuring 30 x 30 m and recorded nest sites of 13 ant species. We found that NDVI values at the nesting sites varied considerably between individual species and also between the six functional groups the ants belong to. The functional groups Cryptic Species, Tropical Climate Specialists and Specialist Predators were present in regions with high NDVI whereas Hot Climate Specialists and Opportunists were found in sites with low NDVI. As expected we found that low NDVI values were associated with scrub jungles and high NDVI values with evergreen forests. Interestingly, we found that Pachycondyla rufipes, an ant species found only in deciduous and evergreen forests, established nests only in sites with low NDVI (range = 0.015 - 0.1779). Our results show that these low NDVI values in deciduous and evergreen forests correspond to canopy gaps in otherwise closed deciduous and evergreen forests. Subsequent fieldwork confirmed the observed high prevalence of P. rufipes in these NDVI-constrained areas. We discuss the value of using NDVI for the remote detection and distinction of ant nest sites.
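As a quick illustration of the index used in the study (not part of the paper itself): NDVI is computed per pixel from near-infrared and red reflectances, and a healthy closed canopy pushes the value towards 1 while scrub or bare ground stays near 0. A minimal sketch with made-up reflectance values:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

# Illustrative reflectance values only -- not taken from the study.
closed_canopy = ndvi(nir=0.50, red=0.03)   # dense evergreen forest: high NDVI
canopy_gap    = ndvi(nir=0.25, red=0.20)   # scrub or canopy gap: low NDVI
print(round(closed_canopy, 3))  # 0.887
print(round(canopy_gap, 3))     # 0.111
```

The second value falls inside the low NDVI range (0.015 – 0.1779) reported for Pachycondyla rufipes nest sites, though the inputs here are purely illustrative.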
Item Type: Journal Article
Keywords: ants; NDVI; nest site selection; Western Ghats; canopy gap; Pachycondyla rufipes
Department/Centre: Division of Biological Sciences > Centre for Ecological Sciences
Date Deposited: 10 Oct 2011 07:33
Last Modified: 10 Oct 2011 07:33
I'll try to explain this problem. I've been trying to solve it for the last 40 minutes now.
Equation: y = x^(0.5)
The area under this curve is rotated around the x-axis and gives you the rotational volume (B).
When calculating the above-mentioned volume, integrate from (a = 0) to (b). There is a line from the point b parallel to the x-axis (called line d); that line crosses the y-axis. The region between the y-axis, this line (d), and the part "above" the y = x^0.5 graph is rotated around the y-axis. This gives you volume (A).
Find x so that these solids have the same volume (A = B).
The formula for the volume of rotation around the x-axis is:
Int: (pi)*(y^2) dx from a to b (on the x-axis, with lower limit a = 0)
The formula for the volume of rotation around the y-axis is: Int: 2*pi*x*y dx from a to b (with limits also on the x-axis).
At first I thought: never mind the rotation, just solve so the areas are equal. But that was not very clever; points farther from the axes give larger contributions to the volume than closer points do.
I keep getting the answer x = 0.625^2, but that is wrong. THE CORRECT SOLUTION IS (6.25; 2.5).
Hope I described this problem understandably.
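For anyone checking this, here is a small sketch (my own verification, not from the original poster) that works out both volumes in closed form and confirms the stated answer (6.25; 2.5): B by the disk method, A by the shell method.

```python
import math

def vol_B(b):
    # Disk method about the x-axis: B = int_0^b pi*(sqrt(x))^2 dx = pi*b^2/2
    return math.pi * b**2 / 2

def vol_A(b):
    # Shell method about the y-axis; the shell at x has height sqrt(b) - sqrt(x):
    # A = int_0^b 2*pi*x*(sqrt(b) - sqrt(x)) dx = pi*b^(5/2)/5
    return math.pi * b**2.5 / 5

# Setting A = B:  pi*b^(5/2)/5 = pi*b^2/2  =>  sqrt(b) = 5/2  =>  b = 6.25
b = (5 / 2) ** 2
print(b, math.sqrt(b))                   # 6.25 2.5
print(math.isclose(vol_A(b), vol_B(b)))  # True

# Crude numerical cross-check of the shell integral (midpoint rule)
n = 100_000
dx = b / n
s = sum(2 * math.pi * ((i + 0.5) * dx)
        * (math.sqrt(b) - math.sqrt((i + 0.5) * dx)) * dx
        for i in range(n))
print(abs(s - vol_A(b)) < 1e-3)          # True
```

So the two solids both have volume pi*6.25^2/2, about 61.36, and the point on the curve is (6.25, 2.5).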
4.2. Galactic magnetic field B and star formation SF
There is a body of literature connecting the radio synchrotron emission from spiral galaxies with their far-infrared emission, e.g., Niklas et al. (1995), Hummel et al. (1988), Dickey and Salpeter (1984). The interpretation of this tight relation often involves star formation (e.g., dust emission in the far infrared near newly born stars) and cosmic rays (e.g., radio synchrotron electrons near supernovae and in the general interstellar medium).
The presence of a magnetic field is indirectly assumed, being necessary for synchrotron emission and capable of aligning the dust particles emitting in the far infrared. But is there a direct relation between star formation (SF) and the magnetic field B?
Story 4 - 18/2/2010
Is Light Slowing Down?
The speed of light is a universal constant — or is it? Some evidence seems to suggest it might actually be slowing down. Will we soon have to revise our cosmological beliefs?
Light. Is the speed of light changing over time? What evidence supports this hypothesis, and what could the consequences be?
If light were slowing down, we would have to revise many of our astronomical beliefs: from the age of the Universe to the distances between galaxies, from dark matter to the definition of many physical constants. What a tremendous set of implications! Some evidence that this might indeed be the case is starting to pile up, as recently reported by Yves-Henri Sanejouand from the University of Nantes in France.
Of course, we must emphasize that the hypothesis that the speed of light, c, might be decreasing over time is still highly speculative. However, it has been recently shown that it might shed new light on some of the most challenging open scientific problems of today.
First of all, it was observed by Hubble at the beginning of the XX century that galaxies appear to be moving away from the Earth at a velocity that is proportional to their distance from us. The standard explanation is that galaxies are being thrown apart by the expansion of space-time. Imagine drawing some red spots on a balloon and inflating it: the spots (galaxies) would recede from each other at a speed proportional to their distance due to the dilatation of the plastic (space-time). The drawback of this hypothesis is that it needs to postulate the existence of the famous dark matter, which has never been observed and would still constitute 70% of the Universe’s mass. However, if c were decreasing over time, the Hubble effect would turn out to be a simple optical effect, eliminating the need to postulate the existence of the dark matter, as proposed by P. I. Wold back in 1935 [1].
Another one of the main open questions in modern cosmology is the so-called initial value problem: how should the Universe have begun for us nowadays to observe it as it is? In 1993, John Moffat from the University of Toronto in Canada proposed the idea of a time-varying c to tackle this problem [2,3]: "I was curious," explains Moffat, "about whether there is an alternative to the standard inflation idea for solving the initial value problems in cosmology."
Another open puzzle in the astronomical community is the so-called Pioneer anomaly. The spaceships Pioneer 10 and Pioneer 11 were sent in the 1970s by NASA to explore the outer planets and eventually left the solar system. These are regarded as highly successful missions and have brought in plenty of data, which is still keeping astronomers busy today. However, both spaceships appear to be slightly and inexplicably accelerating towards the Sun, with an acceleration that increases with the distance. Again, this can be explained once c is taken as not being constant over time, as Sanejouand recently proposed.
Even though the constancy of the speed of light is nowadays widely accepted, from a historical perspective it has not always been so. For a long time the very nature of light was only vaguely understood.
The ancient Greek philosophers were interested in light mainly as part of the vision process. It is noteworthy that the Greek word Optika referred to the science of vision and not the science of light, as it does now. The mainstream idea was that light was the vehicle carrying the objects’ colors to the eyes – and it did so instantaneously. In one version of such a theory the eyes themselves were emitting the light to touch the objects. In any case, the speed of light was tacitly assumed to be infinite.
It was not until the Renaissance that the first attempts to measure the speed of light took place. In 1676, the Danish mathematician and astronomer Ole Rømer gave the first reasonable estimate. He noted that the time elapsed between the eclipses of Jupiter with its moons became shorter as the Earth moved closer to Jupiter and became longer as the Earth and Jupiter drew farther apart. He could use this observation to estimate c.
The fact that light might have a finite speed, more than its exact value, encountered a fierce resistance in the scientific community, even though some happily endorsed it, notably Newton and Leibniz. It was only about fifty years later, and twenty years after Rømer’s death, that the British astronomer James Bradley’s measurements could definitively prove that the speed of light was indeed finite; it was 1727. More measurements were performed over the XIX century, including the ones by Fizeau (1849) and Foucault (1862). Finally, in 1879 Albert Michelson estimated a value of 299,940 km/s for the speed of light in vacuum, extremely close to the value accepted nowadays.
At that time, scientists commonly believed that light traveled in a special, not yet observed, medium: the ether. They assumed that light waves propagated through the ether just like sound waves propagate in the air. Since the Earth traveled through the ether, they also assumed that the speed of light must have differed in various directions. In 1887, Michelson and Morley set out to finally prove the existence of the still unobserved ether. To do so, their experiment aimed to show that light travels at different speeds in different directions. However, their experiment failed: light was propagating at exactly the same speed in all directions, regardless of the motion of its source or observer!
Light had managed to shock the scientific community — again! After long discussions and experimental verifications, the fact that light is a universal constant was accepted. In 1905 Albert Einstein proposed the theory of relativity: the longstanding concepts of absolute time and space were definitively abandoned in order to preserve the constancy of the speed of light independently of the motion of source or observer.
It is clear now that the speed of light must be constant regardless of the propagation direction or the motion of the source or observer — indeed this is one of the postulates of Einstein’s relativity. The constancy of the speed of light is so fundamental and accepted now that in 1983 the 17th Conférence Générale des Poids et Mesures decreed that "The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second." At this point c cannot change, by definition.
It is at this point of the story that the new question arises: What if the speed of light is not constant over time? What if it is slowing down? The very first consequence would be that our definition of the meter would change over time: one meter would be slightly shorter, and we would accordingly become a bit taller. Of course, given the high accuracy with which c has been measured, any possible variation over time should be extremely small and it would probably have no consequences on our daily life.
The evidence reported by Sanejouand points towards a possible slowing down of c of about 0.02-0.03 m/s per year. This is extremely small compared with the actual value of c: it would be like having 1 billion dollars in a bank account and losing a few cents per year. However, "the constancy of the speed of light is one of the fundamental pillars of contemporary physics," explains Sanejouand, "so the possibility that it may instead vary (even at a slow rate) has far reaching consequences (although mostly on the theoretical side)." Even though the hypothesis of the slowing down of the speed of light is still a very speculative one, "people like Barrow, Magueijo, as well as John Moffat," Sanejouand concludes, "have opened the way by showing that physically consistent theories in which the speed of light is varying in time can indeed be developed in a safe and rigorous way."
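A quick back-of-the-envelope check of the bank-account analogy, using the figures quoted in the article (the 0.025 m/s value is just the midpoint of the quoted 0.02-0.03 m/s range):

```python
c = 299_792_458.0          # m/s, the defined value of the speed of light
drift = 0.025              # m/s per year, midpoint of the quoted range
fraction = drift / c       # fractional change per year
loss = 1e9 * fraction      # yearly "loss" on a billion-dollar account, in dollars
print(f"{fraction:.1e}")   # ~8.3e-11 per year
print(f"{loss:.2f}")       # ~0.08 dollars, i.e. a few cents a year
```

So the claimed drift corresponds to roughly one part in 10^10 per year, which indeed amounts to a few cents a year on a billion dollars.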
[1] P. I. Wold, On the Redward Shift of Spectral Lines of Nebulae, Phys. Rev. 47, 217-219 (1935).
[2] J. W. Moffat, Superluminary Universe: A Possible Solution to the Initial Value Problem in Cosmology, Int. J. Mod. Phys. D 2, 351-365 (1993).
[3] J. W. Moffat, Quantum gravity, the origin of time and time's arrow, Found. Phys. 23, 411-437 (1993).
2010 © Optics & Photonics Focus
GV is currently working as a postdoctoral researcher in optics, statistical physics and soft matter at the Max Planck Institute in Stuttgart (Germany).
Yves-Henri Sanejouand, About some possible empirical evidences in favor of a cosmological time variation of the speed of light, EPL (Europhysics Letters) 88, 59002 (2009).
Faint gamma-ray bursts on the horizon
Aug 4, 2004
Astronomers may have discovered a new class of gamma-ray bursts that are more powerful than supernovae but much weaker -- and more common -- than most "ordinary" gamma-ray bursts found so far. The claims are made by two separate teams of researchers, who used the INTEGRAL satellite to study a gamma-ray burst that was observed on 3 December 2003 (Nature 430 646 and 648). The results suggest that there is a continuum of cosmic explosions ranging in energy from supernovae to gamma-ray bursts.
Gamma-ray bursts (GRBs) are the most powerful explosions in the universe, but continue to baffle scientists over 30 years after they were first discovered. Most gamma-ray bursts are even brighter than the massive supernova explosions that occur when a star dies and collapses to form a black hole. However, some astronomers believe that gamma-ray bursts and supernovae are in fact related. Although most supernovae do not have enough energy to produce gamma rays, the additional energy could be supplied by material falling into the black hole.
Now there is new evidence that gamma-ray bursts and supernovae could be related. On 3 December 2003 a gamma-ray burst lasting about 30 seconds was detected by the European Space Agency's INTEGRAL satellite in a small galaxy about 1.6 billion light years away. A record 18 seconds after the start of the burst -- named GRB 031203 -- its location on the sky was sent out by the automatic burst alert system on INTEGRAL. Although the burst initially looked like an ordinary gamma-ray burst, the astronomers later found that it had an energy of about 0.6-1.4 x 10^43 joules, which is a thousand times lower than a typical high-energy burst.
This result was even more unexpected given that GRB 031203 is only the second closest gamma-ray burst ever to be found. The previous closest burst, which was discovered in 1998, was also found to be very faint, although astronomers were not sure if it was some sort of "freak" explosion. Now, however, the two teams of researchers -- one from the US and the other from Germany and Russia -- think that both bursts belong to a completely new population of gamma-ray bursts that lie somewhere in energy between supernovae and other gamma-ray bursts.
"Our discovery of GRB 031203 suggests that there is a significant population of sub-energetic gamma-ray bursts that we do not typically see since they are below our detection thresholds," says Alicia Soderberg, a graduate student from the California Institute of Technology who is a member of the US team and lead author of one of the Nature papers. "GRB 031203 was only detectable since it was so nearby and suggests that sub-energetic gamma-ray bursts may in fact be more common than high energy counterparts." The finding also rules out the idea that all gamma-ray bursts have the same energy.
Future missions dedicated to the detection of gamma-ray bursts -- like SWIFT, which will be launched by NASA in October -- might be able to detect many more events like these. "It will be interesting to see whether low energy events are more frequent than ordinary high-energy ones and whether there is a continuous distribution of gamma-ray burst energies," adds Sergey Sazonov, a member of the Russia-Germany group at the Space Research Institute in Moscow and the Max Planck Institute in Garching.
About the author
Belle Dumé is Science Writer at PhysicsWeb | <urn:uuid:5976764a-63de-4949-972b-d9e28fb22703> | 3.9375 | 739 | Truncated | Science & Tech. | 41.804973 |
Progress in pure mathematics has its own tempo. Major questions may remain open for decades, even centuries, and once an answer has been found, it can take a collaborative effort of many mathematicians in the field to check that it is correct. The New Contexts for Stable Homotopy Theory programme, held at the Institute in 2002, is a prime example of how its research programmes can benefit researchers and lead to landmark results.
Data, data, data — 21st century life provides tons of it. It's paradise for researchers, or at least it would be if we knew how to make sense of it all. This year's AAAS annual meeting in Vancouver devoted plenty of time to the question of how to understand large amounts of data. And there's one method we particularly liked. It's based on the kind of idea that gave us the London tube map.
Topologists famously think that a doughnut is the same as a coffee cup because one can be deformed into the other without tearing or cutting. In other words, topology doesn't care about exact measurements of quantities like lengths, angles and areas. Instead, it looks only at the overall shape of an object, considering two objects to be the same as long as you can morph one into the other without breaking it. But how do you work with such a slippery concept? One useful tool is what's called the fundamental group of a shape.
This is the second in a series of two articles in which Ian Short looks at topology using topographical features of maps. Find out about Jordan curves and winding numbers with the help of hermits, lighthouses and drunken sailors.
The world we live in is strictly 3-dimensional: up/down, left/right, and forwards/backwards, these are the only ways to move. For years, scientists and science fiction writers have contemplated the possibilities of higher dimensional spaces. What would a 4- or 5-dimensional universe look like? Or might it even be true that we already inhabit such a space, that our 3-dimensional home is no more than a slice through a higher dimensional realm, just as a slice through a 3-dimensional cube produces a 2-dimensional square? | <urn:uuid:78b16dc1-c40f-46de-a1e9-8f5fdbb8887d> | 3 | 451 | Content Listing | Science & Tech. | 51.155741 |
Threats to Endangered Wildlife ~ NASA Images of Deforestation in Virunga
Gorilla CD Field Report ~ NASA recently released some photos of Virunga National Park, comparing the forests from 1999 to 2008 in the northern sector of the park. The photos show significant deforestation along the borders. The rate of forest loss shown in these two images is the highest among all national parks in the Democratic Republic of Congo (DRC), according to the study.
Satellite photos reveal forest destruction in DRC
October 2012. Flying hundreds of kilometres above the Earth, satellites rarely see the human suffering from war and poverty. But decades of unrest have left a very visible impact on the Democratic Republic of the Congo (DRC).
The DRC contains half of Africa’s tropical forest and the second largest continuous tropical forest in the world. Because of unrest and economic instability, the Democratic Republic of the Congo has mostly escaped the industrial-scale deforestation that has taken place in other tropical countries such as Brazil and Indonesia. The exception is near the country’s eastern border, around Virunga National Park.
Mountain gorillas under threat
Home to critically endangered mountain gorillas, the forests have been disappearing quickly as population growth and violence have driven people into the resource-rich forest in and around the park. Subsistence slash-and-burn agriculture and charcoal production have eaten away at the trees, transforming deep green forests into pale savannah grasslands.
In the images below, the space between the left white line and the red line shows the deforestation. The forests have been disappearing quickly as population growth and wars send people into the forests in and around the park to cut trees for charcoal or clear forests for agriculture.
1999 – 2008 deforestation images
The Landsat 5 satellite obtained the top image on February 13, 1999, and the lower image on September 1, 2008. (More recent images of the region were cloudy.) The city of Beni is tan and grey, while the forested Virunga National Park is dark green. The blue Semlike River meanders northeast through the park. The rate of forest loss shown in these two images is the highest among all national parks in the country.
NASA EARTH OBSERVATORY IMAGES BY JESSE ALLEN AND ROBERT SIMMON, USING LANDSAT DATA FROM THE USGS GLOBAL VISUALIZATION VIEWER AND PARK BOUNDARY DATA FROM PROTECTED PLANET. CAPTION BY HOLLI RIEBEEK.
Image Below: 13 February 1999
Image Below: 1 September 2008
Forest loss across the DRC
As a whole, the Democratic Republic of the Congo contains 159,529,000 hectares (615,942 square miles) of forest. Between 2000 and 2010, the country lost 3,711,800 hectares (14,331 square miles) of it, according to a recent analysis of Landsat data completed by Peter Potapov and a team of researchers from South Dakota State University, the University of Maryland, and the Observatoire Satellital des Forêts d’Afrique Centrale. The study by Potapov is the first to survey the entire country since the U.S. Geological Survey made Landsat data available for free in 2008.
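The figures above are easy to sanity-check. A minimal sketch (the conversion constant is a standard value, not from the article) that reproduces the hectare-to-square-mile conversions and computes the share of forest lost over the decade:

```python
# Forest figures from the Potapov et al. Landsat analysis (2000-2010),
# as quoted in the article above.
total_forest_ha = 159_529_000   # DRC forest cover, hectares
lost_ha = 3_711_800             # forest lost 2000-2010, hectares

# 1 square mile ~ 258.9988 hectares (standard conversion, assumed here)
HA_PER_SQMI = 258.9988

print(f"Total forest: {total_forest_ha / HA_PER_SQMI:,.0f} sq mi")   # ~615,900 sq mi
print(f"Lost:         {lost_ha / HA_PER_SQMI:,.0f} sq mi")           # ~14,330 sq mi
print(f"Share lost:   {lost_ha / total_forest_ha:.1%}")              # ~2.3% over the decade
```

The results match the article's 615,942 and 14,331 square miles to within rounding, and show that the decade's loss amounts to roughly 2.3% of the country's total forest cover.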
Understanding tropical deforestation is important because forests store vast amounts of carbon. Deforestation releases that carbon into the atmosphere and prevents the forest from taking up more. Tropical forests also sustain a wide array of plants and animals.
ARTICLE SOURCE: http://www.wildlifeextra.com/go/news/virunga-forest.html#cr
Branch of physics concerned with the forces acting on bodies passing through air and other gaseous fluids. It explains the principles of flight of aircraft, rockets, and missiles. It is also involved in the design of automobiles, trains, and ships, and even stationary structures such as bridges and tall buildings, which must withstand high winds. Aerodynamics emerged as a discipline around the time of Wilbur and Orville Wright's first powered flight in 1903. Developments in the field have led to major advances in turbulence theory and supersonic flight.
This entry comes from Encyclopædia Britannica Concise. For the full entry on aerodynamics, visit Britannica.com.