Hello there! This is my first tutorial, and I hope you enjoy it!
Let's start with the sections, shall we? Here is a basic example of what the starting layout of the HTML should look like (the spaces between the sections represent where to put information).
<html> <head> </head> <body> </body> </html>
See? Looks simple, right? Well, it is quite simple! Allow me to explain for just a moment...
The <html> tags are used at the start of the HTML page.
The <head> tags are used for non-visible code that still affects the page.
The <body> tags are used for visible content that you will see on the page.
And of course you're probably wondering... What is that / for on the 2nd tag? Well, a / symbol indicates that it is a closing tag, which is used for stopping the current section or tag!
<sarcasm>If you have stayed with me till here, good job! You have somewhat of an attention span!</sarcasm>
Now let's move on to some useful tags to use on your page!
If you want to write text, you have several options:
//For writing paragraphs. <p> </p>
//For writing small amounts of text. <text> </text>
//For writing italic text. <i> </i>
//For writing bold text. <b> </b>
//For writing underlined text. <u> </u>
//For writing strikethrough text. <s> </s>
I could go ON and ON and ON about different types but I think that should get you far enough for now...
Now you have two options for where to put it!
#1. Non-visible coding
Put it in the <head> </head> tags.
#2. Visible content
Put it in the <body> </body> tags.
Well, now we'll do one more thing before I leave you to mess around with your newfound skills!
Let's do a simple "Hello world!" HTML page.
Here is the code for it; I will explain below.
<html> <head> </head> <body> <text>Hello world! </text> </body> </html>
If you were paying attention earlier, you should already understand this, but let's recap.
Okay, all we're doing in the code is writing the text "Hello world!" in the visible content (the body section). I know, I know, you're probably thinking "This is wa-a-a-ayyy too easy of a project! When are we going to get to the fun part?". Well, with all due respect, these are the basics! It gets more fun the farther you go, but remember! *YOU GOTTA KNOW THE BASICS*.
Thanks for reading and have fun with coding!
First of all, a Gcal resource file is a plain ascii text file. This text file may be created by any text editor or by redirecting the standard output channel to a file, e.g.:
$ echo '19930217 Text' >> resource-file <RET>
A special —but simple— line structure is required so that Gcal is able to interpret its contents. Each fixed date entry in a resource file must be split into two parts, namely a date part and an optional text part, which must be separated by at least one whitespace character. It is unnecessary to give a whitespace separator character if no text part is specified.
A line must always end with a ‘\n’ (newline) character, unless it is the
last line of a resource file. The maximum length of a line is limited to
INT_MAX characters. A newline character is
automatically appended to the line if the standard output channel is
directed to a file. A newline character is appended to the typed line in a
text editor window if it is completed by pressing the <RET> key. If the
text editor in use does not generate the newline character in this way, it
should be set to this mode of operation; otherwise, the editor cannot be used
for creating Gcal resource files.
The line structure of fixed date entries is:
date part [ whitespace text part ] newline
or, more concretely:
yyyy[mm[dd|wwwn]] [ whitespace text ] newline
or, much more concretely:
19940217 Hi, I'm the text!
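As a rough illustration of this split, the following Python sketch (a hypothetical helper, not part of Gcal itself) separates a fixed date entry into its date part and optional text part:

```python
def parse_fixed_date(line):
    """Split a Gcal fixed date entry into (date_part, text_part).

    The date part and the optional text part are separated by at least
    one whitespace character; the text part may itself contain spaces.
    """
    line = line.rstrip("\n")        # every line but the last ends in '\n'
    parts = line.split(None, 1)     # split on the first run of whitespace
    date_part = parts[0] if parts else ""
    text_part = parts[1] if len(parts) > 1 else ""
    return date_part, text_part

print(parse_fixed_date("19940217 Hi, I'm the text!"))
# → ('19940217', "Hi, I'm the text!")
```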
Besides fixed date entries, a resource file may contain further entries like:
; A remarked line
; A formatted and multi-line \
remark
#include <file name>
#include "file name"
Date variable assignments and operations...
dvar=NOTHING
dvar=mmdd
dvar=mmwwwn
dvar=*dn[www]
dvar=*wn[www]
dvar=dvar[[+|-]n[www]]
dvar++
dvar--
dvar+=[+|-]n
dvar-=[+|-]n
dvar+=nwww
dvar-=nwww
Text variable assignments and operations...
tvar=[text]
tvar?[command]
tvar:[command]
tvar++
tvar--
tvar+=[+|-]n
tvar-=[+|-]n
Text variable references...
Text variable references at the beginning of a Gcal resource file line may only be used if it is ensured that they are expanded to a valid Gcal resource file line.
Enteromorpha: now Ulva
See Eur. J. Phycol. (August 2003) 38: 277-294. Linnaeus was right all along: Ulva and Enteromorpha are not distinct genera.
When Enteromorpha first begins growing, it
forms a single row of cells; this structure is monosiphonous. Soon
after the monosiphonous filament is formed, longitudinal division of
cells creates a two-layered filament. Eventually, after more cell division,
the two cell layers separate to form a tube, producing the adult morphology.
The thallus of Enteromorpha is tubular with
the wall of the tube a single cell layer thick. The thallus can be
branched or unbranched, and there is a wide variety of forms within
the genus. Enteromorpha is attached to the substrate by a
disc-like holdfast. The holdfast is formed by the basal cell dividing
into three or four holdfast cells, which elongate and undergo further division.
The cells in Enteromorpha can vary in size
and shape from species to species, and sometimes they will form regular
linear series in a frond, while other times there is an irregular arrangement
of the cells. Each cell contains a single chloroplast, varying in size
depending on the size of the cell.
There is a variety of differences in the morphology
of Enteromorpha, some of which are illustrated in the following images.
A single thallus of Enteromorpha. Note how
the tube becomes more compressed at the top of the frond.
Three separate thalli, all of these are of the species E. intestinalis.
Another example of the spiral shape of the thallus
in some species.
Enteromorpha growing on a rock in the intertidal
zone in Stillwater Cove, at Pebble Beach, CA
More Enteromorpha, also in Stillwater Cove
at Pebble Beach, CA.
A view of partially submerged Enteromorpha in
Stillwater Cove at Pebble Beach, CA. This is some of the same Enteromorpha that
was used to take pictures in the lab and under the microscope.
A different species of Enteromorpha, this
picture was taken by Judith
Connor at Elkhorn Slough. This is most likely E. prolifera,
which is widespread in Elkhorn Slough and other sheltered habitats.
More E. prolifera, this picture was also
taken by Judith Connor at
Elkhorn Slough. The white alga in this picture is also Enteromorpha;
it is just drying out, and turning white as it does so.
In the Land of Giant Frogs
Scientists strive to keep the world's largest aquatic frog off a growing global list of fleeting amphibians
FOR THREE DECADES, pictures of a lake monster living high in the Andes Mountains have haunted me. When Jacques Cousteau mounted an expedition in 1973 to explore Bolivia and Peru’s Lake Titicaca in search of submerged Inca treasure, he instead brought images of a bizarre, giant frog flooding into my childhood living room. Ever since then I have imagined what it would be like to see that frog monster in the flesh. Now I’m standing on the edge of the highest navigable lake in the world with only a thin sheet of aquarium glass standing between me and the biggest aquatic frog on the planet.
© PETE OXFORD AND RENEÉ BISH
BIG AND WRINKLY: The Titicaca frog’s large, saggy-skinned appearance has helped it adapt to the high-altitude habitat of Lake Titicaca. Larger than a salad plate, the frog absorbs oxygen directly through its superfluous skin so it can stay submerged for long periods of time, avoiding high levels of ultraviolet radiation and other hazards.
My wife, Reneé, and I have come here to a small, rustic research facility run by the Bolivian Institute of Ecology in Huatajata not only to see the Titicaca frog face-to-face, but also to consider its fate. When Cousteau explored the lake bottom in a submersible 30 years ago, he reported that the lake was brimming with "thousands of millions" of giant frogs, many weighing more than two pounds and stretching nearly 20 inches. But local fishermen say the behemoths of Cousteau’s day have long since vanished—and their smaller descendants are becoming harder and harder to find.
Some experts say Cousteau’s early estimates may have been somewhat exaggerated. Thus the Convention on International Trade in Endangered Species (CITES) lists the Titicaca frog as "vulnerable." Still, scientists worry that if current trends prevail, the Titicaca frog population will dwindle—an omen that one of the least studied ecosystems in the world is in grave danger.
Against the backdrop of snow-capped mountains, Lake Titicaca straddles 2,000 square miles of Peruvian and Bolivian landscape at 12,500 feet above sea level. Life at this altitude is adapted for extremes: high levels of ultraviolet radiation, freezing temperatures and oxygen-depleted air. Titicaca frogs survive here and nowhere else.
The key to their success is staying below the lake’s surface. The sluggish, bottom-dwelling frogs manage this feat, despite the fact that they have very small lungs, because they evolved to absorb oxygen directly from the water through their semipermeable skin, which acts like gills. Being larger than a salad plate enhances this adaptation because it gives a frog a greater surface-area-to-body-volume ratio, creating even more efficient oxygen uptake. What’s more, the ratio is maintained throughout a frog’s life span, allowing it to survive as respiration demands increase.
This explains why the saggy skin of a Titicaca frog seems to fit like the baggy trousers of a person featured in a before-and-after weight loss commercial. Even S.W. Garman, who discovered the frog during an 1876 scientific expedition with Alexander Agassiz, made a tongue-in-cheek reference to the frog’s exaggerated skin folds when he named it Telmatobius culeus, "aquatic scrotum" in Latin.
The appearance of this big, wrinkly frog is elaborated further by the variety of colors and patterns that adorn its body. Some are olive green with peach-colored stomachs. Others are entirely black with white marbling. Such variety in the Titicaca frog’s appearance has prompted some scientists to estimate that as many as seven subspecies exist in the lake. But according to Bolivian scientist Edgar Benavides, who conducted DNA studies in 1997, all of these creatures belong to a single, widely varying species.
To learn more about the gentle giants bobbing below the lake’s windblown surface, researchers spend many hours watching them in aquariums at the Bolivian Institute of Ecology. Out in the lake, the enigmatic denizens hunt mostly at night, using binocular vision and long hind legs to navigate the darkness. But here in the lab, they are also active during the day, and fairly nonselective in their diet. The frogs seem to eat almost anything that moves, including fish, snails, crustaceans, tadpoles and worms. They creep up on their prey and suck them up with a single, large gulp. If a crustacean or a fish is too large or too wriggly, the agile amphibians simply use their forefeet to help spoon the victim into their mouths.
A favorite quarry is a small, native fish, known locally as ispi (a Quechua word meaning "baby fish"). Titicaca frogs can capture and swallow whole ispi that are more than three inches long. Some research suggests the ispi might be linked to a dip in the Titicaca frog population. In March 2001, a team of Peruvian divers surveyed the lake bottom and found large areas completely devoid of frogs and ispi. One explanation: The frogs moved out of the area to follow the seasonal north-south ispi migration within the lake. Another: Frogs are impacted by the harvest of ispi for an alternative to expensive trout pellets used at fish farms. A drop in the giant frogs’ main food source could jar the population, and frogs following migrating ispi could easily end up as by-catch.
© PETE OXFORD AND RENEÉ BISH
HIGH-ALTITUDE HOME: Lake Titicaca straddles 2,000 square miles of Peruvian and Bolivian landscape at 12,500 feet above sea level. Life at this altitude is adapted for extremes: high levels of ultraviolet radiation, freezing temperatures and oxygen-depleted air. Titicaca frogs survive here and nowhere else.
The ispi harvest isn’t the only lake activity being looked at as scientists investigate the frog population, however. The frogs themselves are being harvested—for human consumption. Titicaca frog legs are popular on tourist menus around the lake, though the dish has recently become somewhat difficult to find. Still, some enterprising restaurateurs (many of whom are fishermen) retain small ponds or tanks to display live frogs in hopes of enticing adventurous gastronomes.
Bolivia has no laws in place to protect Titicaca frogs, and Peru does not protect them within the lake—but transporting the oversized amphibians is a criminal offense. During a single month in 1999, Peruvian law enforcement officials intercepted and repatriated over 4,500 live frogs on their way to city restaurants.
Lately, the demand for Titicaca frogs seems to be growing. According to a recent report on Peruvian National Television, 150 live frogs are required daily to satisfy Lima’s latest fad—frog juice, known locally as Peruvian Viagra. Live frogs are stripped of their skins like peeled bananas and dropped into a household blender. Mixed with water, maca (a local tuber) and honey, the juice’s prowess as an aphrodisiac is given great claim by locals who guzzle the concoction in a show of macho bravado. One regular user, Lima local Jorge Flores, says he has a glass of frog juice almost every night. When he plans to go out on the town, he makes it a double.
New trends of eating and drinking frogs buck old traditions. For years, Titicaca frogs have been revered as animals with special powers. Used as rainmakers during times of drought, a large frog would be carried in a ceramic pot to a hilltop where the gods would hear the frog’s distressed cries and misinterpret them as calls for rain. Eventually the rain would fall and overflow the pot, allowing the sacred frog to escape back to the lake.
© PETE OXFORD AND RENEÉ BISH
CHANGE OF HEART: Lake Titicaca resident Don Ramon Catari (back, left) once made a living illegally capturing frogs from the lake and selling them to international buyers. Alarmed by the news that the frog population might be declining, Catari has abandoned his entrepreneurial efforts and turned his talents toward helping biologists collect the motley frogs (front) for captive breeding studies.
Titicaca frogs have also been used in traditional medicine. Dried frog meat is said to cure tuberculosis. One treatment for a fracture requires tying a frog to the area as a poultice. A small live frog is swallowed whole to cure a fever. And soup made from a large frog is used to treat both anemia and female infertility.
In an effort to ensure that Titicaca frogs go on being revered—and eaten—in the future, scientists on both sides of the Bolivia–Peru border are studying the frog’s ecology and potential for captive breeding. Headed by Esther Perez on the Bolivian side and Carlos Calmet in Peru, the team is trying to judge the potential viability of a large-scale commercial frog-farming project. If Titicaca frog farming proves feasible, it could be one of the first environmentally sustainable operations in the world.
According to experts, attempts to farm frogs fail because they are labor intensive and plagued by problems with diseases that spread rapidly among concentrations of animals confined in close quarters. Forming an environmentally sustainable operation is even more daunting because it must also offer sufficient protection to the natural resource. "In some areas, farming schemes don’t work simply because people believe that wild animals have special powers, and those farmed do not," says John Behler, a herpetologist at the New York-based Wildlife Conservation Society. "In too many situations, farming is used to cover wild resource exploitation."
Victor Hutchison, a herpetology professor at the University of Oklahoma, who has studied the physiology and ecology of Titicaca frogs in Bolivia, says he has concerns about the impacts of a large-scale, commercial business on the wild population. "With frog farming, you keep harvesting animals under the guise of commercial development," he says, "But there is generally no regulation."
Still, the team of Lake Titicaca frog researchers is hopeful that working directly with indigenous people to raise animals that can be marketed as frog legs and leather will inspire the people to protect the natural resource.
Some locals are already on board. Dressed in a bright poncho and a colorful woolen hat to stave off a morning chill, Don Ramon Catari—an illegal frog fisherman turned conservationist—describes how he once caught 1,000 live frogs to fill a single order for a Japanese buyer. Now he understands that such harvests threaten the frog’s future, as well as his own. He and his son, Simar, are helping scientists at the Bolivian Institute of Ecology care for captive frogs as they try to figure out their nutritional needs. Meanwhile, the research team in Peru is focusing on breeding adult frogs in earthen ponds. Since 2000, they have successfully produced several dozen clutches of eggs.
As excitement about the project grows, more and more local people are beginning to show interest. Daily, curious Aymara Indians peer quizzically into the aquariums at the institute and get eye to eye with the frogs. If these close encounters help the people of Bolivia and Peru develop even a fraction of the affinity that I have felt for the giant amphibian ever since it first leapt into my living room all those years ago, the Titicaca frog might just have a fighting chance.
Husband and wife team Pete Oxford and Reneé Bish have traveled to more than 40 countries to capture wildlife stories in words and photographs. Their work has also appeared in National Geographic, Smithsonian and BBC Wildlife.
How does propulsion work?
To calculate the specific impulse, we first need to calculate the exhaust velocity. Since the real exhaust velocity is exceeding complex to calculate, we will be using some simplifying assumptions to make a simpler equation. Assume that the exhaust velocity follows the following formula:
Ve^2 = k Rgas Tc [1 - (pe/pc)^((k-1)/k)] / (k-1)
k = ratio of specific heats, cp/cv
pe = nozzle exit pressure
pc = combustion chamber pressure
Tc = combustion chamber temperature
Rgas = exhaust flow specific gas constant RR/MM
RR = universal gas constant
MM = exhaust gas molecular weight
Note that k is typically between 1.21 and 1.26 for a wide range of fuels and oxidizers. Also note that the subscripts (e, c) stand for exit and chamber; therefore, pc is the chamber pressure, pe is the exit pressure, and Tc is the chamber temperature.
Because we are just approximating, we can simplify the above equation. The second half of the equation:
[1 - (pe/pc)^((k-1)/k)] / (k-1)
typically ends up being a number of order one, which we will approximate as 1. Since anything multiplied by one is itself, a rough guess of the exhaust velocity is:
Ve^2 = k Rgas Tc
and since k is typically between 1.21 and 1.26, we will also approximate that term as 1, giving us:
Ve^2 = Rgas Tc
This result gives us an interesting observation: for better performance (high exhaust velocity), we want fuel-oxidizer mixtures that burn very hot and have a low molecular-mass exhaust.
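To make the effect of these approximations concrete, here is a short Python sketch that evaluates the full expression and the two simplified forms. The chamber conditions (k = 1.22, a 22 g/mol exhaust, Tc = 3500 K, pe/pc = 1/70) are illustrative assumptions, not values from the text:

```python
import math

# Assumed chamber conditions (illustrative values only)
k = 1.22                # ratio of specific heats, cp/cv
RR = 8.314              # universal gas constant [J/(mol K)]
MM = 0.022              # exhaust gas molecular weight [kg/mol]
Tc = 3500.0             # combustion chamber temperature [K]
pe_over_pc = 1 / 70.0   # nozzle exit / chamber pressure ratio

Rgas = RR / MM          # exhaust flow specific gas constant [J/(kg K)]

# Full expression: Ve^2 = k Rgas Tc [1 - (pe/pc)^((k-1)/k)] / (k-1)
bracket = (1 - pe_over_pc ** ((k - 1) / k)) / (k - 1)
ve_full = math.sqrt(k * Rgas * Tc * bracket)

# First approximation: treat the bracketed factor as 1
ve_approx1 = math.sqrt(k * Rgas * Tc)

# Second approximation: also treat k as 1
ve_approx2 = math.sqrt(Rgas * Tc)

print(f"full: {ve_full:.0f} m/s, "
      f"approx1: {ve_approx1:.0f} m/s, approx2: {ve_approx2:.0f} m/s")
```

Note that for these particular numbers the bracketed factor comes out closer to 2.4 than to 1, so the simplified forms really are only order-of-magnitude estimates.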
The specific impulse is:
Isp = ueq/ge
Isp = specific impulse
ueq = total impulse / mass of expelled propellant
ge = acceleration at Earth's surface (9.8 m/s^2)
And since we are approximating the exhaust as a gas escaping at a constant velocity, the momentum of the escaping gas is:
p = mv
p = momentum (kg m/s)
m = mass (kg)
v = velocity (m/s)
Notice how the masses cancel out, and therefore the Isp is just the velocity of the exhaust gas (Ve) divided by the gravitational acceleration at Earth's surface (ge).
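As a minimal numerical sketch, treating ueq as the exhaust velocity and using an assumed Ve of 1980 m/s (an illustrative value, not from the text):

```python
ge = 9.8        # acceleration at Earth's surface [m/s^2]
ve = 1980.0     # assumed exhaust velocity [m/s] (illustrative)

isp = ve / ge   # specific impulse [s]
print(f"Isp = {isp:.0f} s")   # → Isp = 202 s
```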
The photosphere is the visible "surface" of the Sun (left). Sunspots are often visible "on" the photosphere. A close-up view (right) shows the granulation pattern on the photosphere.
Images courtesy of SOHO/NASA/ESA and The Royal Swedish Academy of Sciences and Oddbjorn Engvold, Jun Elin Wiik, and Luc Rouppe van der Voort - University of Oslo.
The Photosphere - the "Surface" of the Sun
Most of the energy we receive from the Sun is the visible (white) light emitted from the photosphere. The photosphere is one of the coolest
regions of the Sun (6000 K), so only a small fraction (0.1%) of the gas is
ionized (in the plasma state). The photosphere is the densest
part of the solar atmosphere, but is still tenuous compared to
Earth's atmosphere (0.01% of the mass density of air at sea level).
The photosphere looks somewhat boring
at first glance: a disk with some dark spots. However, these sunspots
are sites of strong magnetic fields. The solar magnetic field is believed to drive the complex
activity seen on the Sun.
Magnetographs measure the solar magnetic field at the photosphere.
Because of the tremendous heat coming from the solar core, the solar interior below
the photosphere (the convection zone) bubbles like a pot of boiling water.
The bubbles of hot material welling up from below are seen at the photosphere
as slightly brighter regions. Darker regions occur where cooler plasma
is sinking to the interior. This constantly churning pattern of convection
is called the solar granulation pattern.
A plasma of hydrogen ions is trapped in a tokamak by balancing the pressure gradient and magnetic forces.
The equation of motion of a plasma is described by magnetohydrodynamics:

ρ (dv/dt) = j x B - grad p

where ρ is the plasma density, v is the velocity of the plasma fluid, j is the plasma current density, B is the magnetic field and p is the pressure.
When the plasma is in equilibrium the velocity will be zero and the following equation results
j x B = grad p
This is the fundamental
equation of magnetic equilibrium.
It follows that
B. grad p = 0
j. grad p = 0
These two equations imply that the magnetic surfaces
are surfaces of constant pressure and that the plasma current is a flux surface quantity (i.e. a quantity which is constant on a magnetic flux surface).
The picture that emerges in a tokamak is of toroidally nested magnetic surfaces. As the minor radius of these surfaces tends to zero, the flux toroid becomes a line known as the magnetic axis.
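As a numerical sanity check of the equilibrium condition j x B = grad p, the following Python sketch builds a one-dimensional theta-pinch equilibrium (a straight-cylinder simplification; the pressure profile and field values are assumed for illustration, not taken from the text) and verifies the radial force balance with finite differences:

```python
import math

mu0 = 4e-7 * math.pi     # vacuum permeability [H/m]
a = 0.1                  # pressure profile scale length [m] (assumed)
p0 = 1.0e4               # on-axis pressure [Pa] (assumed)
B_ext = 0.5              # external axial field [T] (assumed)

def p(r):                # assumed Gaussian pressure profile
    return p0 * math.exp(-(r / a) ** 2)

def Bz(r):               # from radial pressure balance p + Bz^2/(2 mu0) = const
    return math.sqrt(B_ext ** 2 - 2 * mu0 * p(r))

def d(f, r, h=1e-6):     # central finite difference
    return (f(r + h) - f(r - h)) / (2 * h)

# Ampere's law in this geometry gives mu0 * j_theta = -dBz/dr.
# Check the radial force balance (j x B)_r = j_theta * Bz against dp/dr.
max_rel_err = 0.0
for r in [0.02, 0.05, 0.1, 0.15]:
    j_theta = -d(Bz, r) / mu0
    lhs = j_theta * Bz(r)          # radial j x B force density
    rhs = d(p, r)                  # radial pressure gradient
    max_rel_err = max(max_rel_err, abs(lhs - rhs) / abs(rhs))

print(f"max relative force-balance error: {max_rel_err:.1e}")
```

The two sides agree to numerical precision, because the chosen Bz(r) was constructed from the pressure-balance relation in the first place.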
In the simplest scenario, the surfaces are nested circles. Each successive circle is shifted slightly outwards due to the plasma pressure (i.e. the Shafranov shift). In fact, in most modern tokamaks, the magnetic flux surfaces are warped in shape (to optimise performance). The triangularity and elongation are two quantities used to define the surface shape.
External current coils are used to impose magnetic fields that keep the plasma in place. Otherwise it would quickly hit the vessel wall and quench.
14.2. The CORBA Architecture
The CORBA architecture for distributed objects shares many features with the architecture used by Java RMI. A description of a remote object is used to generate a client stub interface and a server skeleton interface for the object. A client application invokes methods on a remote object using the client stub. The method request is transmitted through the underlying infrastructure to the remote host, where the server skeleton for the object is asked to invoke the method on the object itself. Any data resulting from the method call (return values, exceptions) is transmitted back to the client by the communication infrastructure.
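The request flow just described can be sketched with a toy, in-process analogue (plain Python, purely illustrative: real CORBA stubs and skeletons are generated from IDL and communicate across a network through ORBs):

```python
class AccountServant:
    """The actual remote object implementation on the server side."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount
        return self.balance

class Skeleton:
    """Server-side skeleton: receives a request and invokes the servant."""
    def __init__(self, servant):
        self.servant = servant

    def dispatch(self, method, args):
        try:
            return ("result", getattr(self.servant, method)(*args))
        except Exception as exc:
            return ("exception", exc)   # errors travel back to the client too

class Stub:
    """Client-side stub: turns local calls into requests on the 'wire'."""
    def __init__(self, transport):
        self.transport = transport

    def deposit(self, amount):
        kind, value = self.transport(("deposit", (amount,)))
        if kind == "exception":
            raise value
        return value

# The 'transport' here is a direct function call standing in for the
# ORB-to-ORB communication infrastructure.
skeleton = Skeleton(AccountServant())
stub = Stub(lambda request: skeleton.dispatch(*request))
print(stub.deposit(100))   # → 100
print(stub.deposit(50))    # → 150
```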
But that's where the similarities between CORBA and RMI end. CORBA was designed from the start to be a language-independent distributed object standard, so it is much more extensive and detailed in its specification than RMI is (or needs to be). For the most part, these extra details are required in CORBA because it needs to support languages that have different built-in features. Some languages, like C++, directly support objects, while others, like C, don't. The CORBA standard needs to include a detailed specification of an object model so that non-object-oriented languages can take advantage of CORBA. Java includes built-in support for communicating object interfaces and examining them abstractly (using Java bytecodes and the Java Reflection API). Many other languages don't. So the CORBA specification includes details about a Dynamic Invocation Interface and a Dynamic Skeleton Interface (DSI), which can be implemented in languages that don't have their own facilities for these operations. In languages that do have these capabilities, like Java, there needs to be a mapping between the built-in features and the features as defined by the CORBA specification.
The rest of this section provides an overview of the major components that make up the CORBA architecture: IDL, which is how CORBA interfaces are defined; the ORB and Object Adaptor, which are responsible for handling all interactions between remote objects and the applications that use them; the Naming Service, which is a standard service in CORBA that lets remote clients find remote objects on the network; and the inter-ORB communication protocol, which handles the low-level communication between processes in a CORBA context.
14.2.1. Interface Definition Language
IDL provides the primary way of describing data types in CORBA. IDL is independent of any particular programming language. Mappings, or bindings, from IDL to specific programming languages are defined and standardized as part of the CORBA specification. Standard bindings for C, C++, Smalltalk, Ada, COBOL, Lisp, Python, and Java have been approved by the OMG. Appendix G contains a complete description of IDL syntax.
The central CORBA functions, services, and facilities, such as the ORB and the Naming Service, are also specified in IDL. This means that a particular language binding also specifies the bindings for the core CORBA functions to that language. Sun's Java IDL API implements the Java IDL mapping defined by the OMG standards. This allows you to run your CORBA-based Java code in any compliant Java implementation of the CORBA standard, provided you stick to standard elements of the Java binding. Note, however, that Sun's implementation includes some nonstandard elements; they are highlighted in this chapter as appropriate.
14.2.2. The Object Request Broker and the Object Adaptor
The core of the CORBA architecture is the ORB, as shown in Figure 14-1. Each machine involved in a CORBA application must have an ORB running in order for processes on that machine to interact with CORBA objects running in remote processes. Object clients and servers make requests through their ORBs; the ORB is responsible for making the requests happen or indicating why they can't. The client ORB provides a stub for a remote object. Method requests made on the stub are transferred from the client's ORB to the ORB hosting the implementation of the target object. The request is passed on to the implementation through an object adaptor and the object's skeleton interface.
Figure 14-1. Basic CORBA architecture
The skeleton interface is specific to the type of object that is exported remotely through CORBA. Among other things, it provides a wrapper interface that the ORB and object adaptor can use to invoke methods on the object, either on behalf of the client or as part of the lifecycle management of the object. The object adaptor provides a general facility that "plugs" a server object into a particular CORBA runtime environment. Older versions of the CORBA specification and Java IDL supported a BOA interface, while newer versions (CORBA 2.3 and later, JDK 1.4 and later) support a POA interface. All server objects can use the object adaptor to interact with the core functionality of the ORB, and the ORB in turn can use the object adaptor to pass along client requests and lifecycle notifications to the server object. Typically, an IDL compiler is used to generate the skeleton interface for a particular IDL interface; this generated skeleton interface will include calls to the object adaptor that are supported by the CORBA environment in use.
14.2.3. The Naming Service
The CORBA Naming Service (sometimes abbreviated to COSNaming, from CORBA Object Services, Naming) provides a directory naming structure for remote objects. The CORBA Naming Service is one of the naming and directory services supported by JNDI, so the concepts used in its API are similar to the general model of naming services used in JNDI.
The naming tree always starts with a root node, and subnodes of the object tree can be created by an application. Actual objects are stored at the leaves of the tree. Figure 14-2 depicts an example set of objects registered within a Naming Service directory.
The fully qualified name of an object in the directory is the ordered list of all of its parent nodes, starting from the root node and including the leaf name of the object itself. So, the full name of the object labeled "Fred" is "Living thing," "Animal," "Man," "Fred," in that order.
Figure 14-2. A naming directory
Each branch in the directory tree is called a naming context, and leaf objects have bindings that associate a name with an object reference. Each node in the naming directory is represented by a NamingContext object, which can be asked to find an object within its branch of the tree by asking for the object by name, relative to that particular naming context. You can get a reference to the root context of the naming directory from an ORB using the resolve_initial_references() method. Once you have a reference to the root of the naming directory, you can perform lookups of CORBA objects, as well as register your own CORBA objects with the Naming Service. We'll see more concrete details of
the CORBA Naming Service in "Putting It in the Public Eye" later in this chapter.
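The resolution model just described, with naming contexts as branches, object bindings at the leaves, and lookups by compound name relative to a context, can be sketched independently of CORBA. The following toy Python classes (invented for illustration; this is not the real CosNaming API) rebuild the Figure 14-2 directory and resolve the fully qualified name of "Fred":

```python
# Toy model of a hierarchical naming directory. Class and method names are
# invented for illustration; the real CORBA CosNaming API differs.

class NamingContext:
    """A branch node in the naming tree: maps names to subcontexts or objects."""

    def __init__(self):
        self._bindings = {}

    def bind_new_context(self, name):
        """Create and bind a child naming context (a new branch)."""
        child = NamingContext()
        self._bindings[name] = child
        return child

    def bind(self, name, obj):
        """Bind a leaf object under this context."""
        self._bindings[name] = obj

    def resolve(self, *path):
        """Resolve a compound name relative to this context."""
        node = self
        for component in path:
            node = node._bindings[component]  # raises KeyError if unbound
        return node


# Rebuild the Figure 14-2 directory: the full name of the object "Fred"
# is the ordered list "Living thing", "Animal", "Man", "Fred".
root = NamingContext()
man = (root.bind_new_context("Living thing")
           .bind_new_context("Animal")
           .bind_new_context("Man"))
man.bind("Fred", "<remote object reference>")

print(root.resolve("Living thing", "Animal", "Man", "Fred"))
# prints: <remote object reference>
```

In real CORBA code the root context comes from the ORB rather than being constructed locally, and names are sequences of name components rather than plain strings; the lookup logic, however, follows the same branch-by-branch pattern.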
14.2.4. Inter-ORB Communication
The CORBA standard includes specifications for inter-ORB communication protocols that can transmit object requests between various ORBs running on the network. The protocols are independent of the particular ORB implementations running at either end of the communication link. An ORB implemented in Java can talk to another ORB implemented in C, as long as they're both compliant with the CORBA standard and use the same CORBA communication protocol. The inter-ORB protocol is responsible for delivering messages between two cooperating ORBs. These messages might be method requests, return types, or error messages. The inter-ORB protocol also deals with differences between the two ORB
implementations, like machine-level byte ordering and alignment. As a CORBA application developer, you shouldn't have to deal directly with the low-level communication protocol between ORBs. If you want two ORBs to talk to each other, you need to ensure that they are compatible in terms of CORBA compliance levels (do they support similar levels of the CORBA specification?) and that they both speak a common, standard inter-ORB protocol.
The Internet Inter-ORB Protocol (IIOP) is an inter-ORB protocol based on TCP/IP. TCP/IP is by far the most commonly used network protocol on the Internet, so IIOP is the most commonly used CORBA communication protocol. Other standard CORBA protocols are defined for other network environments, however. The DCE Common Inter-ORB Protocol (DCE-CIOP), for example, allows ORBs to communicate on top of DCE-RPC.
Asymmetric Chemistry and Sugar Synthesis
Professor David MacMillan
Professor of Chemistry, California Institute of Technology
The chemistry of life relies heavily on special molecules that are ideally suited for their various roles. This often involves an asymmetric design that enables their activity. However, this asymmetry poses a problem for chemists interested in replicating these biologically important molecules.
Joining us today to discuss the synthesis of natural compounds is Prof. David MacMillan. Prof. MacMillan is a professor of chemistry at the California Institute of Technology, where his interests include new reaction design, enantioselective catalysis, and natural product synthesis.
Prof. David MacMillan (DM) joins Charles Lee (CL) to discuss asymmetric catalysis and the synthesis of carbohydrates from simple starting materials.
CL: Prof. MacMillan, thank you for joining us today.
DM: Thank you for having me.
CL: It’s certainly our pleasure. It looks like you are doing some very fascinating work in the field of enantioselective catalysis. I’m curious if you could explain to our audience, what exactly is enantioselective catalysis?
DM: Well, enantioselective catalysis is basically generating single enantiomers using catalysts. The reason why you need single enantiomers, as you stated in your introduction, is that a lot of times people in chemistry and biology are interested in generating organic molecules, which are molecules that revolve around the atom carbon. Carbon, as most people who have done chemistry know, exists in a tetrahedral format, which means that it has four different substituents, and can therefore exist as two different mirror images. It turns out that being able to produce one mirror image over another one has a lot of implications for biology. Sometimes molecules that exist in one mirror image will provide benefits for therapeutic use, whereas the other mirror image might actually be harmful to biological systems. As such there is a big pressure in organic synthesis to be able to generate one of these mirror images, which are called single enantiomers, selectively. And one way that this can actually be done is to develop catalysts that allow the production of one mirror image in preference to the other one. And, that’s why the name enantioselective, or asymmetric meaning non-symmetrical, catalysis came about. And, that’s why there’s been a lot of work over the last 30 years towards developing catalysts that allow the production of one enantiomer in preference to the other one.
CL: Are there any good examples of two different enantiomers, where one is beneficial and the other harmful?
DM: Some of the most famous are birth defects that arise due to the Thalidomide drug. Thalidomide was a drug that exists as two enantiomers. It turns out that one of the mirror images actually provides the beneficial effects, which remove morning sickness whenever a woman is going through a pregnancy in her first trimester. However, the other enantiomer leads to the birth defects that have led to this disastrous phenomenon known as Thalidomide children. So, that’s one example that led to the FDA coming out with very strong guidelines that modern pharmaceuticals have to be registered in their single enantiomer format.
CL: Is most of nature constructed in this way, where specific enantiomers are important?
DM: You do find molecules in nature that exist as both mirror images. But, for the most part, most molecules in biological systems, such as proteins, DNA, and RNA, are formed around one enantiomeric series of a core molecular structure. So, really it is just based on one of two mirror images.
CL: Why is the construction of different enantiomers very difficult then?
DM: It’s very difficult because when you carry out a transformation on a molecule and generate a molecule with carbon and four different substituents, that’s called a stereogenic center. It’s called stereogenic because it can exist in two different formats. Whenever you carry out this transformation, there is a 50:50 probability that you’ll make one or the other mirror images. So, to take the same molecule and make it undergo one of the transformations that will form one mirror image is actually very difficult. It has to be carried out in a transition state, where the energy required to form one mirror image is lower than the energy to form the other. And, so for the last 30 years or so, organic chemists have been focused on trying to come up with methods to do just that. It’s an interesting situation because it’s only in the last 10 years that chemists have become very successful at it.
CL: So, what are the strategies that chemists have used to get one form preferred over the other one?
DM: In terms of catalysis, there have been several different forms. It typically revolves around the type of catalyst, and the mode of catalytic activation, which simply means the method by which the catalyst will activate the starting material in such a way that you can discriminate between the different enantiomers that are formed. There have really been three different types of catalysts which have been utilized. There have been two types of organometallic catalysts using transition metal catalyzed processes such as hydrogenation or insertion chemistry. Or, you could have another type of organometallic catalysis which is based around Lewis acid catalysis, which is simply a method by which you lower the electron density in a substrate to the point where it can now engage in a reaction with a more electron rich partner. The third method, which my group has been working on for the past 5 – 6 years, is called organic catalysis. This uses organic molecules to function as catalysts to interact with starting materials to energetically partition them between the production of one enantiomer in preference to the other one. It’s an interesting area to be involved with, because if you think about biology, in many cases biology is organic catalysis, which will often involve enzymes that allow the production of one mirror image in preference to the other one.
CL: So, has the design of chemical catalysts taken a cue from biological catalysis?
DM: It’s an interesting question, and I would answer that by saying ‘no’. Up until this point, there has not been a lot of work in organic synthesis utilizing the types of catalysis which have been learned from biology or biochemistry and trying to take those catalysis concepts and applying them to organic synthesis. Now, in the field of bio-inorganic chemistry there’s been a lot of work in trying to understand the methods by which these systems carry out catalysis as a means to develop and design catalysts that could do the same thing in a laboratory setting. But, in terms of organic synthesis, most of the methods that have been developed have not been based on what you might call the blueprints that came from biology. Most of them have been de novo catalysis concepts which have been utilized to try and partition between these two single mirror images.
CL: Is it easier to design these organic catalysts than to attempt to reconstruct biological catalysis?
DM: I think that has typically been the case, and it makes sense. Biological systems are very complex for a reason. They focus on carrying out selective reactions, but they also focus on molecular recognition, so that one specific molecule will undergo one specific reaction from a large milieu of many different types of molecules. And, in the laboratory setting, you also typically want one molecule to undergo one transformation. But, it’s much easier to focus on developing catalysts that can carry out chemical reactions on a series of substrates, but doing them one molecule and one reaction flask at a time. The focus in this case is not to go after one molecule to do one selective transformation, as much as it is to take one class of different molecules and to be able to carry out enantioselective catalysis on one class of molecules, making a more general approach to what you might call enantioselective induction. This means if you want the capacity to build one enantiomer in preference to the other one, you want to do it on a general class instead of one particular molecule.
CL: So, it’s weighing the generality and the specificity of the two methods.
CL: Recently, your group published an interesting paper in Science about synthesizing sugars in a two-step synthesis. Could you tell us about that?
DM: One of the things my group is interested in is a concept that is central to organic synthesis, which is the rapid development of molecular complexity. How can we generate very complex organic molecules in a very rapid fashion? And in doing this, how can we focus on molecular structures and architectures that are prevalently used by chemists, biologists, biochemists, but that at the moment there might not be straightforward ways to get their hands on them or utilize them? In this regard in biological systems, I would argue there are three main bioarchitectures. You have DNA or RNA, nucleic acid based architectures. You have amino acid, or protein, based systems. And, the third major bioarchitecture is carbohydrates.
Carbohydrates are actually the most prevalent form of bioarchitecture found in biological systems, and they also have a widespread role in many biological processes, such as signal transduction, cognition, as well as the immune response. The interesting thing is that for carbohydrates in their monomer forms, that is, the single-unit forms such as glucose, mannose, allose, or galactose, it is very difficult to take those carbohydrates and either selectively functionalize them or couple them to each other. The reason why it is very difficult is that each carbohydrate has five oxygens, which are basically substituent groups that are attached to the carbohydrate's central framework. And it’s very difficult to differentiate each of those hydroxyl groups from each other. For example, if I wanted to couple two carbohydrates, glucose at the anomeric position to galactose at the fourth position, it would be very difficult to do that. Chemically, you couldn’t actually perform that transformation.
The thing that we were very interested in doing was finding a method that we could come up with, a synthesis or a way to construct these carbohydrates selectively and as a single enantiomer, but at the same time differentiate all of these oxygens which reside on the periphery of the carbohydrate. The nice thing with this is that it would have to be very rapid. For example, if you were to take glucose from a bottle and try to chemically differentiate all of those oxygens, it would typically take anywhere from eight to fourteen chemical steps depending upon how you wanted to differentiate all of these oxygens. We were interested in coming up with a way to differentiate all of those oxygens in a relatively straightforward fashion.
So, the method we came up with was taking three two-carbon units, called alpha-oxygenated aldehydes, and asking ourselves the question if we could carry out two chemical reactions which would build the whole carbohydrate framework and at the same time differentiate all of those oxygens, in just a two chemical step process. And, in fact, that’s the thing that we have been able to accomplish.
CL: That’s very impressive. So, has the inability to discriminate between these oxygens limited the synthesis of carbohydrate molecules?
DM: Yes. You can look at people in glycobiology or other biological areas, where they might be interested in generating tetra-saccharides with specific linkages between all of the different carbohydrates linked at different positions. And, you may have specific substituents, such as sulfate esters, around a variety of oxygen positions to try and test for a number of biological processes. But, at the moment, it may take many chemical steps to build those types of tetra-saccharides. However, if you could put those carbohydrates together in just two chemical steps, completely differentially protected on each carbohydrate, you could couple them all together very rapidly. In theory you could build these tetra-saccharides in 6 or 7 chemical steps, instead of the 40 or 50 chemical steps that are involved at the moment in terms of the production of all the monomers and then the production of the tetra-saccharide from all of those monomers. So, it’s very important to be able to develop rapid methods where you can get your hands on these carbohydrates with all of the oxygens differentially protected. This was one of the things that we were trying to accomplish.
But, the second thing that we were trying to accomplish with the production of carbohydrates from two carbon units, is that you are no longer restricted to having just oxygens around the periphery. Now, you could start to think could you introduce other atomic systems into different positions. We call that atomic mutation. For example, in the fourth position of a carbohydrate, you may decide that you no longer want an oxygen there, maybe you want a sulfur. To be able to take a natural carbohydrate and derivatize it in such a way that you would differentiate all of those oxygens and then convert the number four position into a sulfur would basically be impossible at this moment in time using other chemical methods. The means by which you could displace that oxygen with a sulfur is not straightforward. Well, by the fact that we were actually building these carbohydrates using two carbon units, this allows us to bring in those substituents from the very outset on the two carbon units. And as such, we can actually build carbohydrates that would contain sulfur at the number four position in only two chemical steps.
The reason why it is important to be able to get our hands on these unnatural carbohydrates is for medicinal chemists. Medicinal chemists are the people who really do all of the chemistry involved in developing pharmaceuticals. And, one of the things that they have to do is take biologically active molecules and be able to fine-tune them by taking little components of those pharmaceutical agents, for example converting an oxygen to a sulfur, that’s called a structure-activity relationship. And this capacity to build unnatural carbohydrates in two steps allows you to do just that. It allows you to completely pinpoint the atom that you want to change to try and understand what effect that would have in a biological system.
CL: So, rather than derivatizing a natural molecule, you can build it up using components.
DM: Exactly. It’s a bit like saying, instead of taking a molecule that exists in nature and basically bashing away at it over 14 to 18 chemical steps to convert it into something else. Wouldn’t it be better if we could actually build it de novo in just two chemical steps by focusing upon the development of two new chemical reactions that would allow you to put it together instead of having to take the naturally occurring material and trying to convert it into something that it is not? I always tell people that it’s a bit like taking a washing machine and asking yourself can you convert it into a lawnmower? It’s not a great way of doing things. It’s usually much easier to build the lawnmower de novo, than trying to convert something that was designed with a completely different framework.
CL: We are running a little out of time, but I’m curious if any medicinal chemists have expressed an interest in using this method?
DM: That’s a great question. Very interestingly, since this paper was published in Science, we’ve really had a lot of different phone calls from biotech companies and major pharmaceutical companies who are interested in actually utilizing this technology. There’s one company that right now is actually using this in medicinal chemistry processes, but I can’t actually disclose. But, you can basically understand that this is a method to generate and carry out structure-activity relationships on carbohydrates that wasn’t possible before, but is now possible in just two chemical steps. I think from that it’s easy to appreciate how rapidly people will start to adopt the technology.
CL: Right. Well, it is very fascinating, and certainly a great advance, but we are out of time and I just want to thank you very much for joining us to discuss all of your fascinating research.
DM: Thank you very much.
Calico Scallop Dissected
Like other bivalves, the two valves, or shells, of the calico scallop are secreted by the thin tissue called the mantle. These valves are joined at the hinge by a ligament, and connected by a cylindrical muscle. By contracting this muscle, the scallop can open and close the two shells.
Calico scallops are hermaphroditic organisms with both male and female reproductive organs. In the image above the "ripe" gonad contains the orange eggs and white sperm.
Image Credit: FWC
How Best to "Weatherproof" Earth's Corals Against Warming-Induced Bleaching
Wooldridge, S.A. and Done, T.J. 2009. Improved water quality can ameliorate effects of climate change on corals. Ecological Applications 19: 1492-1499.
In a study that addresses such concerns, Wooldridge (2009a) developed a hypothesis that suggests that reduced dissolved inorganic nitrogen (DIN) content in seawater surrounding reefs could "directly benefit corals by enhancing their resistance to heat stress, i.e., raising the temperature thresholds that trigger bleaching," while Wooldridge and Done (2009) investigated the implications of this suggestion "at the scale of sea-scapes and regions on [Australia's] Great Barrier Reef [GBR]," where they say "the coastal waters are highly 'DIN enriched' as a consequence of a century and a half of European settlement of north Queensland." More specifically, they say they "used a spatially explicit Bayesian belief network (BBN) model (Pearl 1988; Wooldridge and Done, 2004) to investigate the benefits of inclusion of DIN in explaining and predicting variability in complex patterns of coral bleaching documented on the GBR in 1998 and 2002."
In conducting their study, the two researchers from the Australian Institute of Marine Science found that "corals bathed in nutrient-rich coastal waters had a decreased bleaching resistance (per degree of heating) during the 1998 and 2002 bleaching events compared to reefs in oligotrophic oceanic waters, effectively lowering the upper thermal bleaching threshold by ~1.0-1.5°C," while they report that one of them (Wooldridge, 2009b) further found that "a complementary investigation suggests these figures could be as much as 2.0-2.5°C in the most DIN-enriched locations." As a result, Wooldridge and Done state that "the new conceptual picture that emerges from this paper is of the fundamental importance of nutrient loading, in particular DIN, in defining the bleaching resistance of corals to heat stress," adding that "coral reef resilience to climate change may be improved by good local management of coral reefs, including management of water quality."
Idso, S.B., Idso, C.D. and Idso, K.E. 2000. CO2, global warming and coral reefs: Prospects for the future. Technology 75S: 71-93.
Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, San Francisco, California, USA.
Wooldridge, S.A. 2009a. A new conceptual model for the warm-water breakdown of the coral-algal endosymbiosis. Marine and Freshwater Research 60: 483-496.
Wooldridge, S.A. 2009b. Water quality and coral bleaching thresholds: formalizing the linkage for the inshore reefs of the Great Barrier Reef, Australia. Marine Pollution Bulletin 58: 745-751.
Wooldridge, S.A. and Done, T.J. 2004. Learning to predict large-scale coral bleaching from past events: A Bayesian approach using remotely sensed data, in-situ data, and environmental proxies. Coral Reefs 23: 96-108.
There are essentially 5 states of matter: solid, liquid, gas, plasma, and Bose-Einstein condensate.
Plasma is actually the most abundant state of matter in the universe, because it is the state that exists inside stars that are undergoing nuclear fusion. Ball lightning is an example of plasma that manifests on the Earth.
Phase Transitions
Substances can go through a number of transitions between their states:
- condensation - gas to liquid
- freezing (solidification) - liquid to solid
- melting - solid to liquid
- sublimation - solid to gas
- vaporization - liquid to gas
For a given substance, it is possible to make a phase diagram which outlines the changes in phase (see image to the right). Generally temperature is along the horizontal axis and pressure is along the vertical axis. It is sometimes convenient to create the diagram using Kelvin as the temperature scale, so that the origin will be 0 for both temperature & pressure, but this is obviously not required.
Curves representing the "Fusion curve" (liquid/solid barrier), the "Vaporization curve" (liquid/vapor barrier), and the "Sublimation curve" (solid/vapor barrier) can be seen in the diagram. The curve near the origin is the Sublimation curve, and it branches off to form the Fusion curve (which goes mostly upward) and the Vaporization curve (which goes mostly to the right). Along the curves, the substance would be in a state of phase equilibrium, balanced precariously between the two states on either side.
The point at which all three curves meet is called the triple point. At this precise temperature and pressure, the substance will be in a state of equilibrium between the three states, and minor variations would cause it to shift between them.
Finally, the point at which the Vaporization curve "ends" is called the critical point. The pressure at this point is called the "critical pressure" and the temperature at this point is the "critical temperature." For pressures or temperatures (or both) above these values, essentially there is a blurry line between the liquid and gaseous states. Phase transitions between them do not take place, although the properties themselves can transition between those of liquids and those of gases. They just do not do so in a clear-cut transition, but morph gradually from one to the other.
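To make the triple point and critical point concrete, here is a minimal Python sketch using water's standard reference values (273.16 K and 611.657 Pa for the triple point; 647.096 K and 22.064 MPa for the critical point). The constants and the two rules are my additions for illustration, not taken from the article:

```python
# Two consequences of the special points on a phase diagram, illustrated
# with water. Reference constants (assumed, not from the article):
T_TRIPLE, P_TRIPLE = 273.16, 611.657          # kelvin, pascal
T_CRITICAL, P_CRITICAL = 647.096, 22.064e6    # kelvin, pascal

def is_supercritical(temp_k, pressure_pa):
    """Beyond the critical point there is no distinct liquid/gas transition."""
    return temp_k > T_CRITICAL and pressure_pa > P_CRITICAL

def sublimes_only(pressure_pa):
    """Below the triple-point pressure the liquid phase cannot exist, so
    heating the solid takes it directly to gas (sublimation)."""
    return pressure_pa < P_TRIPLE

print(is_supercritical(700.0, 25e6))   # prints: True
print(sublimes_only(101325.0))         # atmospheric pressure, prints: False
```

A real phase classifier would need the full boundary curves, not just the two corner points; this sketch only captures the qualitative behavior discussed above.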
3D Phase Diagrams
In actuality, a phase diagram needs to be three dimensional to be fully complete, because the equations of state for a material are dependent upon temperature, pressure, and also on volume. These three values - temperature, pressure, and volume - are sometimes called the state coordinates of a material.
The standard, 2-dimensional phase diagram assumes that the volume remains relatively constant. You could also assume that, say, temperature remains constant, and would get a very different looking phase diagram.
Obviously, extending the phase diagram to include all three state coordinates can become rather complex and is generally not required for most analyses of state situations, especially in a non-ideal gas.
As noted in A doubt about the age of the universe, the wiki about quasars still contains the following misleading sentence:
"The highest redshift quasar known (as of June 2011) is ULAS_J1120+0641, with a redshift of 7.085, which corresponds to a proper distance of approximately 29 billion light-years from Earth."
But even if the "strange 29 billion" is replaced by the "correct 12.9 billion", the fact remains that the actual measurement is "a redshift of 7.085". The "proper distance" is only a different way to express that measurement. It's not clear to me how "accurately" this describes the distance of the quasar, because the quasar surrounds a black hole and rotates quite fast. So there are at least two additional sources for the redshift, but how significant is their contribution?
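Both headline numbers can in fact be derived from the single measured redshift, which illustrates the point that "proper distance" is just a model-dependent re-expression of z = 7.085. The sketch below assumes a flat Lambda-CDM cosmology with H0 = 70 km/s/Mpc and Omega_m = 0.3 (not necessarily the parameters behind the published figures, so the results land near, not exactly on, 29 and 12.9):

```python
import math

# Assumed cosmological parameters (flat Lambda-CDM); the published figures
# were computed with slightly different values.
H0 = 70.0                   # Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7
C_KM_S = 299792.458         # speed of light, km/s
MPC_TO_GLY = 3.2616e-3      # 1 Mpc = 3.2616 million light-years

def E(z):
    """Dimensionless expansion rate H(z)/H0."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def comoving_distance_gly(z, steps=100_000):
    """Present-day proper (comoving) distance: the '29 billion ly' quantity."""
    h = z / steps
    total = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    for i in range(1, steps):
        total += 1.0 / E(i * h)
    return (C_KM_S / H0) * total * h * MPC_TO_GLY

def lookback_time_gyr(z, steps=100_000):
    """Light-travel time: the '12.9 billion years' quantity."""
    h = z / steps
    f = lambda zp: 1.0 / ((1.0 + zp) * E(zp))
    total = 0.5 * (f(0.0) + f(z))
    for i in range(1, steps):
        total += f(i * h)
    # 1/H0 expressed in Gyr is roughly 977.8 / (H0 in km/s/Mpc)
    return (977.8 / H0) * total * h

print(round(comoving_distance_gly(7.085), 1))  # about 28 with these parameters
print(round(lookback_time_gyr(7.085), 1))      # about 12.7 with these parameters
```

With these inputs the comoving distance comes out near 28 billion light-years and the lookback time near 12.7 billion years; the quoted 29 and 12.9 follow from slightly different cosmological parameters, not from a different measurement.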
I ran across an excellent passage in one of Feynman's "extra" lectures about the need to develop physical intuition in learning physics:
Now, all these things you can feel. You don't have to feel them; you can work them out by making diagrams and calculations, but as problems get more and more difficult, and as you try to understand nature in more and more complicated situations, the more you can guess at, feel, and understand without actually calculating, the much better off you are! So that’s what you should practice doing on the various problems: when you have time somewhere, and you’re not worried about getting the answer for a quiz or something, look the problem over and see if you can understand the way it behaves, roughly, when you change some of the numbers.
Now, how to explain how to do that, I don’t know. I remember once trying to teach somebody who was having a great deal of trouble taking the physics course, even though he did well in mathematics. A good example of a problem that he found impossible to solve was this: “There’s a round table on three legs. Where should you lean on it, so the table will be the most unstable?”
The student’s solution was, “Probably on top of one of the legs, but let me see: I’ll calculate how much force will produce what lift, and so on, at different places.”
Then I said, “Never mind calculating. Can you imagine a real table?”
“But that’s not the way you’re supposed to do it!”
“Never mind how you’re supposed to do it; you’ve got a real table here with the various legs, you see? Now, where do you think you’d lean? What would happen if you pushed down directly over a leg?”
“Nothing.”
I say, “That’s right; and what happens if you push down near the edge, halfway between two of the legs?”
“It flips over!”
I say, “OK! That’s better!”
The point is that the student had not realized that these were not just mathematical problems; they described a real table with legs. Actually, it wasn’t a real table, because it was perfectly circular, the legs were straight up and down, and so on. But it nearly described, roughly speaking, a real table, and from knowing what a real table does, you can get a very good idea of what this table does without having to calculate anything—you know darn well where you have to lean to make the table flip over.
So, how to explain that, I don’t know! But once you get the idea that the problems are not mathematical problems but physical problems, it helps a lot.
This passage makes a point similar to the one in Glen Coughlin's introduction to his translation of Aristotle's Physics: that knowledge and thoughts about the physical world are prior to the abstract knowledge of modern mathematical physics:
To understand Newton's argument for universal gravitation, one must have experience of weight in things and in oneself, of the motion of the stars and planets and moons. Knowing calculus is not enough. This hybrid science [mathematical physics], then, comes after the consideration of nature through non-mathematical means.
Richard P. Feynman, Michael A. Gottlieb, Ralph Leighton, Feynman's Tips on Physics: A Problem-Solving Supplement to the Feynman Lectures on Physics (Boston: Pearson, 2006), 52-53.
Aristotle, Physics, or Natural Hearing, trans. Glen Coughlin (South Bend, IN: St. Augustine’s Press, 2005), xii.
An anonymous reader tips a piece in Australian Geographic indicating that Pluto may be in for another demotion, as researchers work to define dwarf planets more exactly. "[Australian researchers] now argue that the radius which defines a dwarf planet should instead be from 200–300 km, depending on whether the object is made of ice or rock. They base their smaller radius on the limit at which objects naturally form a spherical rather than potato-like shape because of 'self-gravity.' Icy objects less than 200 km (or rocky objects less than 300 km) across are likely to be potato shapes, while objects larger than this are spherical. ... They call this limit the 'potato radius' ... [One researcher is quoted] 'I have no problem with there being hundreds of dwarf planets eventually.'"
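The proposed limits reduce to a simple rule of thumb. A sketch (the composition labels and Pluto's approximate 1,188 km radius are my additions, not from the article):

```python
# "Potato radius" rule from the article: icy objects below ~200 km and rocky
# objects below ~300 km tend to stay potato-shaped; larger ones are pulled
# into spheres by self-gravity.
POTATO_RADIUS_KM = {"ice": 200, "rock": 300}

def likely_spherical(radius_km, composition):
    """True if self-gravity should round the body under the proposed limits."""
    return radius_km > POTATO_RADIUS_KM[composition]

print(likely_spherical(1188, "ice"))   # Pluto-sized icy body, prints: True
print(likely_spherical(100, "rock"))   # small rocky asteroid, prints: False
```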
Use the model attribute to declare a model object the view binds to. This attribute is typically used in conjunction with views that render data controls, such as forms. It enables form data binding and validation behaviors to be driven from metadata on your model object. The following example declares an enterBookingDetails state that manipulates the booking model:
<view-state id="enterBookingDetails" model="booking" />
The model may be an object in any accessible scope, such as flowScope or viewScope. Specifying a model triggers the following behavior when a view event occurs:
View-to-model binding. On view postback, user input values are bound to model object properties for you.
Model validation. After binding, if the model object requires validation, that validation logic will be invoked.
For a flow event to be generated that can drive a view state transition, model binding must complete successfully. If model binding fails, the view is re-rendered to allow the user to revise their edits.
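As a sketch of how these pieces fit together (the proceed event, the reviewBooking target state, and the closing tag are illustrative assumptions, not taken from this text), a fuller view-state might look like:

```xml
<view-state id="enterBookingDetails" model="booking">
    <!-- On postback, user input is bound to the booking object's properties,
         then any validation logic on booking runs. Only if both succeed can
         this transition fire; otherwise the view is re-rendered. -->
    <transition on="proceed" to="reviewBooking" />
</view-state>
```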
Ocean creatures face climate consequences
Sharks, blue whales and loggerhead turtles look like losers due to climate change coming to the Pacific Ocean in this century, scientists report.
Sea birds, tuna and leatherback turtles, on the other hand, look more likely to prosper as global warming shifts sea temperatures and habitats, finds the report in the journal Nature Climate Change.
"There will be winners and losers," says National Oceanic and Atmospheric Administration fisheries scientist Elliott Hazen, who led the study. The report looked at changing temperatures and habitat areas in the Pacific by 2100, under a "business as usual" scenario of increasing greenhouse gas emissions tied to fossil fuel use continuing to heat the atmosphere.
Seabirds, such as the sooty shearwater, which would see their habitat expand more than 20%, appear likely to increase in numbers, suggests the analysis. Blue whales and mako sharks see their habitat decrease due to warming ocean water and less prey, raising issues for these threatened species, Hazen says. The study suggests effects would be noticeable by 2040.
The good news is that the Pacific's California current system remains strong in the analysis, Hazen says. "That is a region of great abundance for sea life."
sensitivity to initial conditions
contribution of Lorenz
In the early 1960s Lorenz discovered that the weather exhibits a nonlinear phenomenon known as sensitive dependence on initial conditions. He constructed a weather model showing that almost any two nearby starting points, indicating the current weather, will quickly diverge onto different trajectories and will quite frequently end up in different...
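Sensitive dependence is easy to demonstrate numerically. The sketch below uses a minimal forward-Euler integration of Lorenz's 1963 system with the standard parameters (sigma = 10, rho = 28, beta = 8/3); the step size, run length, and size of the perturbation are illustrative choices, not from this article. Two trajectories differing by one part in 10^8 separate by many orders of magnitude:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one forward-Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # a nearly identical starting point

max_separation = 0.0
for _ in range(3000):        # roughly 30 time units of simulation
    a, b = lorenz_step(a), lorenz_step(b)
    d = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_separation = max(max_separation, d)

# The tiny initial difference grows until it saturates at the size
# of the attractor itself: the two "forecasts" no longer agree at all.
```

A higher-order integrator would track the true trajectories more accurately, but the qualitative divergence Lorenz observed shows up even with this crude scheme.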
Fill by Stripes
Algorithm Fill by Stripes is more complicated than the previous algorithms. Again, it starts by sorting the rectangles by height. Then it positions the tallest non-positioned rectangle and uses it to start a new horizontal stripe. The algorithm then calls the FillBoundedArea method to fill that stripe as completely as it can by using the remaining rectangles.
FillBoundedArea is a fairly tricky recursive subroutine. It starts with an area to fill (the stripe) and loops through the non-positioned rectangles, placing each rectangle in the upper-left corner of the area it is filling to see whether that placement produces a good solution.
|Figure 8. FillBoundedArea: By dividing empty space into areas determined by the edges of an already-placed rectangle, this method tries to make optimum use of remaining space.|
The subroutine then divides the area either vertically or horizontally along an edge of the newly positioned rectangle, which is in the green area labeled 1 in Figure 8. The subroutine first tries dividing the area horizontally, dividing the remaining area into the piece labeled 2 and the large horizontal area containing the regions labeled 3 and 4 (surrounded by the red lines). Subroutine FillBoundedArea calls itself to see how well it can fill these two new areas, 2 and 3 + 4.
After the recursive call to itself returns, FillBoundedArea tries dividing its area vertically. In Figure 8, it now considers the area labeled 3 and the large vertical area that includes the regions 2 and 4 (surrounded by the blue dashed lines). The subroutine calls itself recursively to see how well it can fill these two new areas, 3 and 2 + 4.
When that recursive call returns, the method evaluates whether it did a better job filling the total area by using the horizontal division or the vertical division. Finally, if the better of those solutions improves on the best solution found so far for the total area, it saves the new solution.
|Figure 9. Fill by Stripes: This heuristic recursively attempts to find the best possible solution after placing each remaining rectangle.|
Now the method goes back and tries again, starting with a new rectangle positioned in the upper left corner (area 1 in Figure 8). It iterates over all the rectangles in this manner, placing each, and then recursively filling any remaining empty area until it has tried all the rectangles. When the loop completes, it returns the best solution.
Figure 9 shows the best solution found by the Fill by Stripes algorithm. It's an even better solution than the one found by the Sort and Fill algorithms, having a total height of only 13 units.
The Recursive Division algorithm uses recursion similarly to Fill by Stripes, but where Fill by Stripes starts by positioning a rectangle and then using the FillBoundedArea subroutine to try to fill its horizontal strip as densely as it can, the Recursive Division algorithm does away with the idea of stripes altogether. Like Fill by Stripes it positions a rectangle and then considers dividing the remaining area vertically and horizontally along the rectangle's edges. Unlike Fill by Stripes, however, the lower areas that it must then fill are not always bounded below.
Look again at Figure 8. Suppose the algorithm has just positioned the very first rectangle. This algorithm doesn't use stripes so the total area to fill includes the entire piece of stock, which does not have a fixed length. Imagine that areas 3 and 4 don't have bottom edges; they extend indefinitely downward. The FillBoundedArea method can fill only bounded areas (hence the name) so it won't work on unbounded areas. However, FillUnboundedArea can fill unbounded areas.
|Figure 10. Recursive Division: The Recursive Division algorithm packs rectangles equally as well as Fill by Stripes.|
The Recursive Division algorithm starts by calling FillUnboundedArea to fill the entire stock area. That routine positions a rectangle as before, and tries to divide the area horizontally and vertically. For the horizontal division, it calls FillBoundedArea to fill area 2 and calls FillUnboundedArea to fill the combined area 3 + 4. For the vertical division, it calls FillUnboundedArea to fill area 3 and calls FillUnboundedArea to fill the combined area 2 + 4. When its recursive calls return, the subroutine considers the solutions it found and returns the best one.
Figure 10 shows the result produced by the Recursive Division heuristic. It results in a new arrangement but the same total height as the Fill by Stripes technique.
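The article's FillBoundedArea code isn't reproduced here, but the split-and-recurse idea it describes can be sketched compactly. This hypothetical Python sketch (the function name, tuple representation, and area-covered scoring are my simplifications, not the article's actual implementation) places each candidate rectangle in the upper-left corner, tries both a horizontal and a vertical guillotine cut of the leftover space, recursively fills both pieces, and keeps the best result:

```python
def fill_bounded(area, rects):
    """Recursively fill a bounded area (x, y, w, h) with rects [(w, h), ...].
    Returns (covered_area, placements, unused_rects) for the best split found."""
    x, y, w, h = area
    best = (0, [], rects)
    for i, (rw, rh) in enumerate(rects):
        if rw > w or rh > h:
            continue                      # rectangle does not fit in this area
        rest = rects[:i] + rects[i + 1:]
        for splits in (
            # Horizontal cut: area 2 (right of the rect), then 3+4 (full width below).
            ((x + rw, y, w - rw, rh), (x, y + rh, w, h - rh)),
            # Vertical cut: area 3 (below the rect), then 2+4 (full height to the right).
            ((x, y + rh, rw, h - rh), (x + rw, y, w - rw, h)),
        ):
            covered, placed, remaining = rw * rh, [(x, y, rw, rh)], rest
            for sub in splits:            # recursively fill both leftover pieces
                c, p, remaining = fill_bounded(sub, remaining)
                covered += c
                placed += p
            if covered > best[0]:
                best = (covered, placed, remaining)
    return best
```

For example, `fill_bounded((0, 0, 4, 4), [(2, 2), (2, 2), (4, 2)])` covers the whole 4×4 area with no overlaps. The search is exponential in the number of rectangles, which is why heuristics like the stripe bound matter in practice.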
Contact: Diana Lutz
Washington University in St. Louis
Caption: This artist's conception of a planetary smashup, whose debris was spotted by NASA's Spitzer Space Telescope three years ago, gives an impression of the carnage that would have been wreaked when a similar impact created Earth's Moon. A team at Washington University in St. Louis has uncovered evidence of this impact that scientists have been trying to find for more than 30 years.
Credit: NASA/JPL-Caltech
Usage Restrictions: None
Related news release: Proof at last: Moon was created in giant smashup
We are being told to reduce our carbon footprints in an attempt to reduce the catastrophic effects of global warming. Will this do any good? Until we know the past and future effects of global governmental weather manipulation on our climate, we cannot take steps to save our planet.
For the past twenty plus years, the planet has been subject to increasingly unusual weather patterns.
Governments are encouraging companies and individuals to reduce their carbon footprints to tackle “global warming” – caused by modern-day energy expenditure.
Is this the whole truth?
Weather control is not sci-fi, it is a reality, yet very little is publicised about it and the long term effects of weather manipulation are an unknown.
In the knowledge that if a country can control the weather, it can control the world, an Environmental Modification Treaty was signed or ratified by 70 of the world's governments in 1977 to prevent hostile weather manipulation.
Actions that would contravene the treaty would be: triggering earthquakes, manipulating ozone levels, alteration of the ionosphere, deforestation, provoking flood or drought, use of herbicides, setting fires, seeding clouds, introduction of invasive species, eradication of species, creation of storms, destruction of crops.
Weather control is the ultimate weapon.
Forget nuclear weaponry, bombs, rockets and guns – weather control technology is the new weapon of mass destruction, the technology is the ultimate state / terrorist weapon and / or deterrent.
To assume that in signing the treaty, governments would have completely disregarded weather control research – the ultimate weapon – would be naïve to say the least.
In June 2008, the LA Times reported that, in response to criticism of the 2008 Olympic Games being held in Beijing, China, during the rainy season, the Chinese Meteorological Bureau's weather modification department had assured the public that it had set aside 30 aeroplanes, 4,000 rocket launchers and 7,000 anti-aircraft guns to modify the weather and prevent rain.
The Chinese weather control would not be in contravention of the treaty, as it would be done without hostile intent, however, it proves that governments have invested in the research and development of weather control techniques.
The above picture is HAARP in Alaska – the High Frequency Active Auroral Research Program – one of at least seven research facilities around the globe.
These facilities were built by the US to monitor the ionosphere, the outer defence of earth's atmosphere.
The ionosphere retains the globe's warmth and protects the planet from the sun's solar flares – variations at any point of the ionosphere can severely affect the weather.
The facility can gather valuable information in respect of the earth’s atmosphere.
However, more worryingly, the facilities have an active capability to superheat specific points of the ionosphere, creating holes or incisions which then allow solar flares to enter the atmosphere – thus significantly affecting weather conditions below.
Under the remit of serious research, the facilities do not contravene the Environmental Modification Treaty; however, the technology has the potential to be used as a weapon of mass destruction on a global scale.
Details are available on http://www.haarp.alaska.edu/haarp/ion4.html
The Government website noted above confirms that the HAARP facility has been actively tested.
An environmental study was carried out on the Alaskan facility in 1992 which concluded “all of the significant environmental impacts associated with HAARP Alaska can be mitigated to an acceptable level. Some insignificant potential impacts such as loss of habitat, socioeconomic and wildlife impacts may not be mitigated.”
Further explanation of the ominous wording highlighted above, the immediate and long-term effects of the use of the facility upon the environment and weather patterns, and the military potential of the HAARP system are not listed on the website.
The Russian government has also developed Project Woodpecker – the Electron Cyclotron Resonance Heating Method.
The system was developed from experiments carried out by Nikola Tesla into electromagnetism and the potential to use the planet's natural electromagnetism as an energy source, which could be further developed into a powerful weapon.
Woodpecker transmitters emit ELF (extreme low frequency) electromagnetic pulses.
The transmitters, positioned in facilities around the globe, generate an electromagnetic grid which circumnavigates the globe using natural high-density water vapour trails as conductors (essentially five water vapour rivers which occur naturally in both the Northern and Southern Hemispheres of the Earth's atmosphere).
Disruption in their frequency acts as an early warning system for incoming missiles or aircraft.
Convergence of transmissions on the grid at any pre-determined point will disrupt the ionosphere; the release of electromagnetic energy pulses by the transmitters can be used as a weapon to disrupt weather conditions.
The Woodpecker system has the capability to disrupt the path of the Northern Hemispheres Jet Stream Flow – Which can, and may already have had cataclysmic effects upon the world’s weather patterns.
It was reported in the Wall Street Journal on 2nd October 1992, that a Russian Company called “Elite Intelligent Technologies” were selling weather control Technology with the slogan “Weather made to Order”. They claimed to be able to fine tune the weather over a 200 square mile radius for $200.00 per day.
This article only touches upon some of the technology involved in weather control / manipulation. There are many internet sites where weather manipulation and control is being discussed – Please take a look for yourself.
In conclusion, we cannot and must not trust the propaganda regarding global warming – until we are given the full facts regarding the short and long term effects of planetary weather manipulation/ weaponry.
The past and future consequences of weather manipulation can no longer be ignored or brushed off as conspiratorial sci-fi.
Until the public takes the time to research this topic for themselves, spreads awareness, loses the apathy and demands answers from those who are elected to govern on our behalf, we, our children and our planet will continue to hang on a precipice of oblivion.
Also continue to read this related story:
Methods of Artificial Weather Manipulation(AWM) help agriculture, devastate the enemy and control the world economy
Engineers and space specialists are working towards something that could change human civilization forever: methods of artificial weather modification.
The process when perfected, can help in agriculture, getting rid of droughts, floods and avoid cyclones and typhoons.
It is also the process by which the enemy can be devastated, artificial floods, cyclones and typhoons created.
It can allow controlling the world economy and agricultural commodity markets.
In ancient religions and legends, weather control – the creation of cyclones, rain, floods and droughts – was nothing new. In the modern age, scientists and technologists are busy perfecting the weather control sciences, which are shaping up as a vibrant area of research and development.
Many countries are mastering the science of weather control. As a matter of fact many experts predict that a war game is being played by major powers in the world to demonstrate their capabilities of weather control. Most of these initiatives are classified and shoved off from the public. The only way one can track these initiatives is to look at countries taking actions to shield against weather control experiments.
Recently, scientists and engineers have started unveiling the actual methods of weather control. Some primitive methods like Cloud-top seeding can confuse you and point you to a wrong direction. Cloud-top seeding is usually performed between the temperatures of -5°C and -10°C. The greatest amount of super-cooled liquid water is usually found within this range. This corresponds to an altitude range of 15,000 to 22,000 ft depending upon location. Dropping or ejecting silver iodide flares into the growing cloud turrets dispenses seeding agent. The seeding agent is placed into the super-cooled clouds where nucleation is desired, so the updrafts in these cases are relied upon only to provide a continuing source of condensate. This delivery technique requires less anticipation on the part of those directing the seeding operations and may have a more immediate effect.
The more modern methods involve artificial ionization of earth’s atmosphere between 15,000 and 30,000 ft. and above. Manipulating the ionosphere and use of controlled solar-terrestrial interactions can create much larger effects. Scientists are realizing that the earth’s weather is controlled by Sun’s natural Electromagnetic Radiation reaching the earth. The Sun’s Radiations and Ultraviolet Rays have to cross the ionosphere to reach the earth.
There are early indications that the solar radiations and flares are directly responsible for planetary weather changes. And Solar flares and levels of radiations are caused by bombardment of cosmic rays on the Sun from either a distant massive black hole or a star-cluster caused by the collapse of thousands and thousands of stars in a small space.
Computer models obviously focused on the ionosphere, which acts as a filter for the solar radiations to reach the earth. If one can manipulate and control the filter, it becomes a potential source of massive weather modification. That is what the computer simulation models found. Controlling the ionosphere potentially allows weather control. The algorithmic variation of ionosphere can create the magic in a massive scale.
There are many methods of controlling the ionosphere – that is, of artificially manipulating its ion density. A high-power transmitter and antenna array operating in the HF (high frequency) range is one of the methods. There is a lot of literature on this on the Internet and in declassified scientific research journals.
However, the recent trend is toward using superconductors in space satellites to generate high-intensity electromagnetic flux.
So much fun, it's scary!
Do crawling cockroaches give you the creeps? Are you particularly petrified on a plane? Are you feverishly frightened of falling? Explore popular fears in this heart-pounding, laughter-filled (and totally safe) exhibition about the often-dreaded emotion.
Discover fun, interactive fear challenges
Fear of animals
Can you reach inside an opaque box connected to terrariums filled with snakes and other creatures? It's easier said than done!
Fear of electric shock
Think you can handle it? Feel your heart beat faster and your muscles tense as you anticipate a mild electric shock.
Fear of falling
Can you stay cool as a cucumber as you wait to fall backwards without warning? See how fear registers on your face.
Mr. Goose Bumps
Meet this larger-than-life animated figure that comes alive to playfully illustrate how his body changes when he gets scared.
Can you outsmart a leopard? Play an immersive video game and collect fruit without being seen by the leopard. If he sees you move, he'll pounce!
Make a scary movie
Like to be scared? Experiment with different soundtracks and frightening sound effects to create your own scary movie!
Can you learn fear? Participate in a live demonstration by Science Center staff, experience fear conditioning first-hand and see how scientists measure fear in the lab.
Explore more than 20 exhibits in a safe and fun learning environment! Find out more about all of the Goose Bumps exhibits
Goose Bumps! The Science of Fear developed by the California Science Center and supported, in part, by the Informal Science Education program of the National Science Foundation under grant ESI-0515470. Opinions expressed are those of the authors and not necessarily those of the National Science Foundation.
Now, doctoral research by evolutionary biologist Kathryn Lord at the University of Massachusetts Amherst suggests the different behaviors are related to the animals’ earliest sensory experiences and the critical period of socialization. Details appear in the current issue of Ethology.
Until now, little was known about sensory development in wolf pups, and assumptions were usually extrapolated from what is known for dogs, Lord explains. This would be reasonable, except scientists already know there are significant differences in early development between wolf and dog pups, chief among them timing of the ability to walk, she adds.
To address this knowledge gap, she studied responses of seven wolf pups and 43 dogs to both familiar and new smells, sounds and visual stimuli, tested them weekly, and found they did develop their senses at the same time. But her study also revealed new information about how the two subspecies of Canis lupus experience their environment during a four-week developmental window called the critical period of socialization, and the new facts may significantly change understanding of wolf and dog development.
When the socialization window is open, wolf and dog pups begin walking and exploring without fear and will retain familiarity throughout their lives with those things they contact. Domestic dogs can be introduced to humans, horses and even cats at this stage and be comfortable with them forever. But as the period progresses, fear increases and after the window closes, new sights, sounds and smells will elicit a fear response.
Through observations, Lord confirmed that both wolf pups and dogs develop the sense of smell at age two weeks, hearing at four weeks and vision by age six weeks on average. However, these two subspecies enter the critical period of socialization at different ages. Dogs begin the period at four weeks, while wolves begin at two weeks. Therefore, how each subspecies experiences the world during that all-important month is extremely different, and likely leads to different developmental paths, she says.
Lord reports for the first time that wolf pups are still blind and deaf when they begin to walk and explore their environment at age two weeks. “No one knew this about wolves, that when they begin exploring they’re blind and deaf and rely primarily on smell at this stage, so this is very exciting,” she notes.
She adds, “When wolf pups first start to hear, they are frightened of the new sounds initially, and when they first start to see they are also initially afraid of new visual stimuli. As each sense engages, wolf pups experience a new round of sensory shocks that dog puppies do not.”
Meanwhile, dog pups only begin to explore and walk after all three senses, smell, hearing and sight, are functioning. Overall, “It’s quite startling how different dogs and wolves are from each other at that early age, given how close they are genetically. A litter of dog puppies at two weeks are just basically little puddles, unable to get up or walk around. But wolf pups are exploring actively, walking strongly with good coordination and starting to be able to climb up little steps and hills.”
These significant, development-related differences in dog and wolf pups’ experiences put them on distinctly different trajectories in relation to the ability to form interspecies social attachments, notably with humans, Lord says. This new information has implications for managing wild and captive wolf populations, she says.
Her experiments analyzed the behavior of three groups of young animals: 11 wolves from three litters and 43 dogs total. Of the dogs, 33 border collies and German shepherds were raised by their mothers and a control group of 10 German shepherd pups were hand-raised, meaning a human was introduced soon after birth.
At the gene level, she adds, “the difference may not be in the gene itself, but in when the gene is turned on. The data help to explain why, if you want to socialize a dog with a human or a horse, all you need is 90 minutes to introduce them between the ages of four and eight weeks. After that, a dog will not be afraid of humans or whatever else you introduced. Of course, to build a real relationship takes more time. But with a wolf pup, achieving even close to the same fear reduction requires 24-hour contact starting before age three weeks, and even then you won’t get the same attachment or lack of fear.”
Kathryn Lord | Source: Newswise
Further information: www.umass.edu
Extraordinary and unexpected clouds of methane – a greenhouse gas roughly 20 times more potent than carbon dioxide – have been found bubbling at the surface by scientists surveying the Arctic Ocean.
The head of the Russian team conducting the survey was surprised by the extent and quantity of the methane. The team has been surveying the seabed of the East Siberian Arctic Shelf off northern Russia for about 20 years. In an interview, Igor Semiletov of the Russian Academy of Sciences said that he had witnessed such a thing for the first time in his career, and that he had never seen methane released with such force and on such a large scale.
Case Studies in Earth & Environmental Science Journalism
Session 1: Background and Vent Animals.
Session 2: Origins of Life and Vent Mining.
Session 3: Meet at AMNH with Ed Mathez.
Read the title of the June 6, 1979 Christian Science Monitor article by Lynde McCormick. How does it affect you as a reader? Choose a few titles you particularly like from this selection of articles. What do they tell the reader? What are some titles you feel are ineffective, cliched, trite or even misleading?
Choose one of the popular articles that you could give to someone who had no knowledge of theories on the origin of life. Outline the reasons you chose that article. What might you have added (or taken out) to convey the depth of the entire issue at hand?
Look at the quotations found in the September 29, 1996 edition of the Springfield, IL State Journal-Register and the May 4, 1997 Sacramento Bee. Which quotations strike you as poignantly effective and which ones fall flat?
Which articles do a good job of defining archaebacteria and putting it into an evolutionary perspective? Do any articles leave holes in the science and the logic? What do you think might be the cause of this?
Do you think it is necessary to read the original (scientific) material in order to write a popular article? Which scientific article would you choose if you were going to write a popular piece and you had to base it on only one scientific article? Which popular articles seemed like the author had had read the original research?
What is your first impression of the May 8, 1979 Walter Sullivan NY Times article? Read the article again. Is your second impression different from the first? How did the second reading change your absorption of the material? How do you think we (as journalists) can use our writing to hit readers the first time?
Which lead do you prefer – one that is scientifically to-the-point or one that is more creative and attention-getting? Choose an example of each kind of lead and show how each is effective or ineffective.
What do you think about W. Broad's Dec. 21, 1997 article on the front page of the NY Times? Why do you think it made the front page? How does it compare with the Dec. 30, 1997 article on the cover of Science Times? Do you think W. Broad did additional research for the second article?
Press Release. (1 April 1977) pp. 5. University of California San Diego, Scripps Inst. of Oceanography. Public Affairs Office, A-033 La Jolla, CA 92093.
Press Release. (17 April 1978) pp. 3 + ill. University of California San Diego, Scripps Inst. of Oceanography. Public Affairs Office, A-033 La Jolla, CA 92093.
Project Summary. (12 July 1978) pp. 4. NSF Award #OCE7810460. University of California San Diego, Scripps Inst. of Oceanography. La Jolla, CA 92093.
Press Release. (3 Jan. 1979) pp. 4. Woods Hole Oceanographic Institution. Woods Hole, MA 02543.
Ballard, R. D. (1984) The exploits of Alvin and ANGUS: Exploring the East Pacific Rise. Oceanus. 27(3): p. 7-14.
Hayman, R. M. and K. C. Macdonald. (1985) The geology of deep-sea hot springs. American Scientist. 73: p. 441-445.
Web page. (1998) American Museum of Natural History Expeditions: Black Smokers. pp. 9. http://www.amnh.org
Van Dover, C.L. (1988) Do 'eyeless' shrimp see the light of glowing deep-sea vents? Oceanus. 31:p. 47-52.
VanDover, C. L. , E. Z. Szuts, S. C. Chamberlain, and J. R. Cann. (1989) A novel eye in 'eyeless' shrimp from hydrothermal vents of the Mid-Atlantic Ridge. Nature. 337: p. 458-460.
Pelli, D. G. and S. C. Chamberlain. (1989) The visibility of 350°C black-body radiation by the shrimp Rimicaris exoculata and man. Nature. 337: p. 460-461.
Travis, J. (1993) Probing the unsolved mysteries of the deep. Science. 259: p. 1123-1124.
Little, C. T. S., R. J. Herrington, V. V. Maslennikov, N. J. Morris and V. V. Zaykov. (1997) Silurian hydrothermal-vent community from Southern Urals, Russia. Nature. 385: p. 146-148.
Vrijenhoek, R.C. (1997) Gene flow and genetic diversity in naturally fragmented metapopulations of deep-sea hydrothermal vent animals. The Journal of Heredity. 88(4): p. 285-293.
Tunnicliffe, V., R. W. Embley, J. F. Hoden, D. A. Butterfield, G. J. Massoth, and S. K. Juniper. (1997) Biological colonization of new hydrothermal vents following an eruption on Juan de Fuca Ridge. Deep-Sea Research I. 44(9): p. 1627-1644.
Holden, C. (ed.) (1998) Farming Tubeworms. Science. 279: p. 663.
Prieur, D., S. Chamroux, P. Durand, G. Erauso, Ph. Fera, C. Jeanthon, L. Le Borgne, G. Mevel, and P. Vincent. (1990) Metabolic diversity in epibiotic microflora associated with the Pompeii worms Alvinella pompejana and A. caudata (Polychaetae: Annelida). Marine Biology. 106: p. 361-367.
Cary, S. C., T. Shank, and J. Stein. (1998) Worms bask in extreme temperatures. Nature. 391: p. 545-546.
Smith, C. (1979) Diverse marine animals: Deep sea hot springs beckon. San Diego Union. Thurs. 4 Jan. page B-5.
Anonymous. (1979) Sea hot spring probed by team. Oceanside Blade-Tribune. Sun. 28 Jan. p. 13.
Anonymous. (1979) Sea life thrives in hot springs on bottom of ocean. San Diego Daily Transcript. Mon. 22 Jan.
Corbett, B. (1979) Scripps divers find rare life on sea floor. San Diego Tribune, 30 April page B-1.
Anonymous. (1979) New deep-sea 'clambakes' found off Baja. San Diego Union. Mon. 30 April page B-1 and B-4.
Anonymous. (1979) Scripps finds sea life in 'hot springs'. Times-Advocate, Escondido, CA. Tues. 1 May.
Anonymous. (1979) Undersea spa harbors new ecosystem: Key clue was crack in crust. The Daily Guardian Science. 18 Sept. p. 43.
Childress, J. J., H. Felbeck and G. N. Somero. (1987) Symbiosis in the deep sea. Scientific American. 256: p. 114-120.
Stover, D. (1994) Web page: Creatures of the thermal vents. Ocean Planet Smithsonian: Popular Science. http://seawifs.gsfc.nasa.gov/OCEAN_PLANET/HTML/ps_vents.html
Travis, J. (1996) Live long and prosper. Science News. 150: p. 201.
Anonymous (1996) Web page: Scientists witness creation of new hydrothermal vents on seafloor. http://www.geo.nsf.gov/~develop/geo/adgeo/pr9654.htm
Meadows, R. (1996) Web page: Smoking in the dark. Zoogoer. 25(3). http://www.fonz.org/zhsmoke.htm
Anonymous. (1996) Breakthrough: Oceanography; Home on the bone. Discover. 17(10) Oct. p. 22.
Monastersky, R. (1996) The light at the bottom of the ocean. Science News. 150(10): p. 156.
Radford, T. (1996) Ex-philes: The truth is down there. The Guardian (London). 24 Oct, p. 12.
Suplee, C. (1997) Marine biology: No refuge from evolution? The Washington Post. 13 Jan. page A2.
Menon, S. (1997) Deep sea rebirth. Discover. 18(7): p. 34.
Flanagan, R. (1997) The light at the bottom of the sea. New Scientist. 13 Dec. p. 42.
Anonymous. (1998) Heat-loving Pompeii worms baffle scientists. AAP Newsfeed. 4 Feb.
Anonymous. (1998) Divers discover unique thermostable enzymes in hydrothermal vent worm symbiont. Business Wire. 4 Feb.
Freeman, K. (1998) An all-temperature worm. The New York Times. Tues. 10 Feb. page F4 col. 6.
Niiler, E. (1998) When it comes to heat, tiny worm seems hardiest creature on Earth. The San Diego Union-Tribune. Wed. 11 Feb. page E4.
Baron, D., R. Siegel, and N. Adams. (1998) Whale stepping stones. NPR: All Things Considered. 11 Feb. 8pm ET. Transcript #98021107-212.
Flam, F. (1998) Worm of the deep sea knows about being on the hot seat. Austin American-Statesman. 15 Feb. page A27.
Arnst, C. (ed.) (1998) A worm for all seasons. Business Week. 23 Feb. p. 111.
Bishop, E. M. (1998) Science beat: Hot worms. The Columbian (Vancouver WA). 4 March p. C1.
Ponnamperuma, C. and M. Hobish (1982) "The Galapagos Hydrothermal Vent Ecologies: Possibilities for Neoabiogenesis?" First Symposium on Chemical Evolution and the Origin and Evolution of Life. NASA Ames Research Center, CA. August 2-4.
Tunnicliffe, V. (1992) "Hydrothermal-Vent Communities of the Deep Sea." American Scientist. 80: 336-349.
de Ronde, C. E. J. and T. W. Ebbesen (1996) "3.2 b.y. of organic compound formation near sea-floor hot springs". Geology. 24(Sept.): 791-794.
Corbett, B. (1979) "Strange Sea Creatures - a new plan of life?" San Diego Evening Tribune. May 19.
Stadler, M. (1979) "Scripps scientists discover new environment for life" La Jolla (California) Light. Thurs. May 31, A-7.
McCormick, L. (1979) "Life where no life should be" The Christian Science Monitor. Wed. June 6, B2 and B3.
Wilford, J. N. (1983) "Bacteria Found to Thrive in Heat of Volcanic Vents on Ocean Floor" The New York Times. June 3, A14.
Flynn, P. (1988) "2 at UCSD challenge sea-vent origin-of-life theory" The San Diego Union-Tribune. Aug. 18, B1.
Anonymous. (1989) "Life without oxygen; dark secrets" The Economist. July 15, p. 82.
Rona, P. (1992) "Deep-Sea Geysers of the Atlantic". National Geographic. Oct. p. 105-109.
Dietrich, B. (1994) "Life inside Earth---Scientists wondering if that's where we originated". The Seattle Times. Jan. 24, A1.
Cone, J. (1994) "Life's Undersea Beginnings". Earth. July.
Ryan, S. (1994) "Back from the deep". Sunday Times. Times Newspapers Limited. Oct. 9.
Wasowicz, L. (1996) "Life may have begun in the dark". U.P.I., B.C. Cycle, Feb. 23.
Hawkes, N. (1996) "Germs that Time Forgot". The Times. (Times Newspapers Limited.) Aug. 26.
Blum, D. (1996) "Sludge Factor: Life theory aims at ocean depths". State Journal-Register (Springfield, IL). Sept. 29, p. 52.
Sawyer, K. (1997) "Cosmic Revelations from Deep within the Earth". Sacramento Bee. May 4, F1.
Dorminey, B. (1997) "Monitoring Secrets of the Deep: Oceanographers worldwide probe volcanic vent systems for clues about the origins of life". The Financial Post. Oct. 23, p. 69.
Rona, P. (1973) "Plate Tectonics and Mineral Resources". Scientific American. Continents Adrift and Continents Aground- readings from Scientific American. Intro by J. Tuzo Wilson, July.
Haymon, R. M. and M. Kastner. (1981) "Hot spring deposits on the East Pacific Rise at 21 degrees North: preliminary description of mineralogy and genesis". Earth and Planetary Science Letters. 53:363-381.
Edmond, J. M. (1984) "The Geochemistry of Ridge Crest Hot Springs". Oceanus. 27(Fall): 15-19.
Rona, P. (1986) "Mineral Deposits from Sea-Floor Hot Springs". Scientific American. 254(Jan.): 84-92.
Herzig, P. M. and M. D. Hannington. (1995) "Hydrothermal activity, vent fauna, and submarine gold mineralization at alkaline fore-arc seamounts near Lihir Island, Papua New Guinea". Proceedings of the 1995 PACRIM Congress at the Australian Institute of Mining and Metallurgy. 9: 279-284.
Humphris, S. E. et al. (1995) "The internal structure of an active sea-floor massive sulphide deposit". Nature. 377:713-716.
Koski, R. (1995) "The Making of Metal Deposits". Nature. (News and Views). 377:679-680.
Sullivan, W. (1979) "Sea-Floor Geysers May be Key to Ore Deposits". New York Times. May 8, C1 and C2.
Perlman, D. (1979) "New Theory of Ocean Chemistry". San Francisco Chronicle. Nov. 10, p. 4.
Anonymous. (1983) "Mining impact on Gorda Ridge questioned". U.P.I. May 23.
Anonymous. (1985) "Mineral deposits discovered". U.P.I. Oct. 4.
A.P. (1987) "Pacific mineral find reported". Journal of Commerce. May 19.
Rona, P. (1988) "Metal Factories of the Deep Sea". Natural History. Jan. 97:52-57.
Anonymous. (1989) "Valuable minerals found on undersea ridge". U.P.I. Feb. 15.
Highfield, R. (1993) "Mile-down lab to study deep sea 'smokers'". The Daily Telegraph. Aug. 18, p. 16.
Matthews, R. (1995) "Gold rush begins beneath the waves". Sunday Telegraph. March 26.
Furukawa, T. (1996) "Japanese find seabed metals". American Metal Market. Sept. 10.
Smith, M. (1997) "Undersea harvesting of metals". U.P.I. Feb. 16.
Broad, W. J. (1997) "First Move Made to Mine Mineral Riches of Seabed". New York Times. Dec. 21, A1.
Anonymous. (1997) "PAC: scientists strike gold beneath sea off New Zealand". AAP Newsfeed. Dec. 22.
Broad, W. J. (1997) "Undersea Treasure, and Its Odd Guardians". New York Times. Dec. 30, F1.
Anonymous. (1998) "Seafloor Massive Sulfides". The Mining Journal. Feb. 13.
Fig. 1: An image of the solar photosphere (the surface of the solar disk seen
in visible light), showing the structures responsible for the
total solar irradiance (TSI) variations. The granules, covering most of
the area, are the convective flows carrying energy from the interior.
They contribute the steady component of the irradiance received by the Earth.
Magnetic structures are dark (sunspots) or bright (the small bright points
called faculae) and contribute a component that varies with the sunspot cycle.
Faculae show up especially towards the limb of the solar disk (to upper right
in this image). Their contribution dominates over the dark spots, so that the
Sun is slightly brighter at sunspot maximum. Length of the bar is 1000 km
(Copyright of image: Swedish 1-m Solar Telescope/B. de Pontieu).
Fig. 2: Variation of the Sun's brightness, as measured by radiometers on
spacecraft since 1978. The total solar irradiance (TSI) increases around
the maxima of sunspot number that occurred near 1980, 1990 and 2001. The
rapid variations are caused by the changing projected areas of spots and
faculae on the solar disk as the Sun rotates on its axis in approximately
27 days.
The Earth's temperature is determined mainly by the Sun's energy
output, that is, its brightness. Sunspots are dark; they reduce the Sun's
brightness (MPEG movie, 7.5MB).
If sunspots were the only kind of blemish on the Sun's surface, the
increased spottiness over the past centuries would have caused the climate
to become cooler, not warmer, so the answer would have been a simple
no. But in addition to spots there are also bright patches on the Sun
called faculae. They are quite small but there are very many of
them (MPEG movie, 7.5MB).
Their number is largest at times when there are many sunspots
(around the years 1991 and 2002 for example).
The Sun's brightness has been measured accurately since 1978. It turns
out to be about 0.07% higher at times of sunspot maximum than at
minimum (Fig. 2). This is because the faculae, though less obvious
because of their small size, actually have a bigger net effect than
the dark spots. Are these brightness changes enough to explain
historical variations in the Earth's climate such as `global warming'?
Sufficiently accurate measurements of the Sun's energy output exist
only for the past 30 years, but observations of sunspot activity for
the past 300 years can be used to extend these data. A theory for the
connection between spot activity and brightness is needed to do
this. The structure of the Sun is known from well-tested theory. This
theory makes a simple statement: apart from the brightness changes due
to spots and faculae, there are no additional, `hidden' brightness
changes. In this way the brightness of the Sun can be reconstructed
since the 17th century. With these brightness variations as input,
computer simulations of the Earth's climate can be made.
With such simulations of the climate the researchers could show that
the effect of spots and faculae is about four times too low to explain
the observed climate variations. The results imply that, over the
past century, climate change due to human influences must far outweigh
the effects of changes in the Sun's brightness.
P.V. Foukal, C. Fröhlich, H.C. Spruit, T. Wigley:
Variations in solar luminosity and its effect on the Earth's climate,
Nature (14 September 2006)
The last sentence of the article on missing dark matter says it all: "Even if MACHOs exist, astronomers will still have to look for other as yet undetected particles to explain all of dark matter" (16 April, p 10).
The problem with all dark matter theories is that they require all kinds of as yet undiscovered and exotic particles, and exclude the possibility that dark matter is not required.
For example, in a paper from 1998 ("Is the missing mass really missing?", Astronomical and Astrophysical Transactions, vol 16, p 3), a theory is proposed to explain the rotation curves of galaxies based on minor deviations of the vacuum energy density, with no need for dark matter at all. It could well be that the whole search for missing matter is in vain.
Inheritance diagram for PSmartPointer:
Public Member Functions
PSmartPointer(PSmartObject *obj = NULL)
PSmartPointer(const PSmartPointer &ptr)
PSmartPointer &operator=(const PSmartPointer &ptr)
Overrides from class PObject
virtual Comparison Compare(const PObject &obj) const
Pointer access functions
PBoolean IsNULL() const
PSmartObject *GetObject() const

Object the smart pointer points to.
A PSmartPointer carries the pointer to a PSmartObject instance which contains a reference count. Assigning or copying instances of smart pointers will automatically increment and decrement the reference count. When the last instance that references a PSmartObject instance is destroyed or overwritten, the PSmartObject is deleted.
A NULL value is possible for a smart pointer. It can be detected via the IsNULL() function.
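The reference-counting scheme described above can be sketched in a few lines. The following Python mock-up is not the PTLib API (the class and method names here are ours, and the real class is C++); it only mirrors the documented semantics: creating a pointer to an object increments the object's count, reassignment decrements the old target's count, and the object is "deleted" when the count reaches zero.

```python
class SmartObject:
    """Stands in for PSmartObject: an object carrying a reference count."""
    def __init__(self):
        self.ref_count = 0
        self.deleted = False

class SmartPointer:
    """Stands in for PSmartPointer: copying/assigning maintains the count."""
    def __init__(self, obj=None):
        self.obj = obj
        if obj is not None:
            obj.ref_count += 1

    def assign(self, other):
        """Mirrors operator=: decrement the old target, increment the new."""
        if self.obj is not None:
            self.obj.ref_count -= 1
            if self.obj.ref_count == 0:
                self.obj.deleted = True  # no more references: delete it
        self.obj = other.obj
        if self.obj is not None:
            self.obj.ref_count += 1

    def is_null(self):
        """Mirrors IsNULL(): has the pointer been set to a real object?"""
        return self.obj is None

shared = SmartObject()
p = SmartPointer(shared)   # ref_count -> 1
q = SmartPointer(shared)   # ref_count -> 2
p.assign(SmartPointer())   # p is now NULL, ref_count -> 1
```

Only when `q` is also reassigned or destroyed does the count reach zero and the object get deleted.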
PSmartPointer::PSmartPointer(PSmartObject *obj = NULL)
Create a new smart pointer instance and have it point to the specified PSmartObject instance.
obj: Smart object to point to.
PSmartPointer::PSmartPointer(const PSmartPointer &ptr)
Create a new smart pointer and point it at the data pointed to by the ptr parameter. The reference count for the object being pointed at is incremented.
ptr: Smart pointer to make a copy of.
PSmartPointer::~PSmartPointer()
Destroy the smart pointer and decrement the reference count on the object being pointed to. If there are no more references then the object is deleted.
Determine the relative rank of the pointers. This is identical to determining the relative rank of the integer values represented by the memory pointers.
Reimplemented from PObject.
PSmartObject *PSmartPointer::GetObject() const
Get the current value of the internal smart object pointer.
PBoolean PSmartPointer::IsNULL() const
Determine if the smart pointer has been set to point to an actual object instance.
PSmartPointer &PSmartPointer::operator=(const PSmartPointer &ptr)
Assign this pointer to the value specified in the ptr parameter.
The previous object being pointed to has its reference count decremented as this will no longer point to it. If there are no more references then the object is deleted.
The new object being pointed to after the assignment has its reference count incremented.
ptr: Smart pointer to assign.
Mechanics: Vectors and Projectiles
Vectors and Projectiles: Problem Set Overview
This set of 34 problems targets your ability to perform basic vector operations such as vector addition and vector resolution, to use right angle trigonometry and vector addition principles to analyze physical situations involving displacement vectors, and to combine a conceptual understanding of projectile motion with an ability to use kinematic equations in order to solve horizontally and non-horizontally launched projectile problems. Problems range in difficulty from the very easy and straightforward to the very difficult and complex. The more difficult problems are color-coded as blue problems.
Direction: The Counter-Clockwise From East Convention
A vector is a quantity which has magnitude and direction. The direction can be described as being east, west, north, or south using the typical map convention. Most of us are familiar with the map convention for the direction of a vector. On a map, up on the page is usually in the direction of North and to the right on the page is usually in the direction of east. In Physics, we utilize the map convention to express the direction of a vector. When a vector is neither north or south or east or west, an additional convention must be used. One convention commonly used for expressing the direction of vectors is the counter-clockwise from east convention (CCW). The direction of a vector is represented as the counter-clockwise angle of rotation which the vector makes with due East.
Often times a motion involves several segments or legs. For instance, a person in a maze makes several individual displacements in order to finish some distance out of place from the starting position. Such individual displacement vectors can be added using a head-to-tail method of vector addition. If adding vector B to vector A, then vector A should first be drawn; then vector B should be added to it by drawing it so that the tail of vector B starts at the location that the head of vector A ends. The resultant vector is then drawn from the tail of A (starting point) to the head of B (finishing point). The resultant is equivalent to the sum of the individual vectors. In this set of problems, you will have to be able to read the word story problem and sketch an appropriate vector addition diagram.
Adding Right Angle Vectors
Two vectors which are added at right angles to each other will sum to a resultant vector which is the hypotenuse of a right triangle. The Pythagorean theorem can be used to relate the magnitude of the hypotenuse to the magnitudes of the other two sides of the triangle. The angles within the right triangle can be determined from knowledge of the length of the sides using trigonometric functions. The mnemonic SOH CAH TOA can help one remember how the lengths of the opposite, adjacent and hypotenuse sides of the right triangle are related to the angle value.
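The Pythagorean/SOH-CAH-TOA procedure above can be sketched numerically. In the snippet below the function name and the sample values are illustrative, not taken from the problem set:

```python
import math

def resultant_right_angle(east, north):
    """Add two perpendicular vectors: the Pythagorean theorem gives the
    magnitude of the hypotenuse, and TOA (tan = opposite/adjacent) gives
    the angle, reported counter-clockwise (CCW) from east."""
    magnitude = math.hypot(east, north)            # sqrt(east**2 + north**2)
    angle = math.degrees(math.atan2(north, east))  # CCW from east
    return magnitude, angle

# 3.0 m east added to 4.0 m north:
r, theta = resultant_right_angle(3.0, 4.0)  # 5.0 m at about 53.1 degrees CCW from east
```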
Resolving an Angled Vector into Right Angle Components
If one of the vectors to be added is not directed due east, west, north or south, then vector resolution can be employed in order to simplify the addition process. Any vector which makes an angle to one of the axes can be projected onto the axes to determine its components. Trigonometric functions (remembered by SOH CAH TOA) can be used to resolve such a vector and to determine the magnitudes of its x- and y- components. By resolving an angled vector into x- and y-components, the components of the vector can be substituted for the actual vector itself and used in solving a vector addition diagram. The resolution of angled vectors into x- and y-components allows a student to determine the magnitude of the sides of the resultant vector by summing up all the east-west and north-south components.
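The same trigonometry runs in the other direction for resolution. A minimal sketch (the function name and the sample vector are illustrative):

```python
import math

def resolve(magnitude, angle_deg):
    """Resolve an angled vector into x- and y-components using SOH CAH TOA:
    x = r*cos(theta), y = r*sin(theta), theta measured CCW from east."""
    theta = math.radians(angle_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

# A 60.0-unit vector directed at 30 degrees above the +x axis:
x, y = resolve(60.0, 30.0)  # x is about 52.0, y is about 30.0
```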
Relative Velocity Situations
Often times an object is moving within a medium which is moving relative to its surroundings. For instance, a plane moves through air which (due to winds) is moving relative to the land below. And a boat moves through water which (due to currents) is moving relative to the land on the shore. In such situations, an observer on land will observe the plane or the boat to move at a different velocity than an observer in the boat or the plane would observe. It's a matter of reference frame. One's perception of a motion is dependent upon one's reference frame - whether the person is in the boat, the plane or on land.
In a relative velocity problem, information is typically stated about the motion of the plane relative to the air (plane velocity) or the motion of the boat relative to the water (boat velocity). Information about the motion of the air relative to the ground (wind velocity or air velocity) or the motion of the water relative to the shore (water velocity or river velocity) is also typically stated. The problem centers around relating these two components of the plane or boat motion to the resulting velocity. The resulting velocity of the plane or boat relative to the land is simply the vector sum of the plane or boat velocity and the wind or river velocity.
The approach to such problems demands a careful reading (and re-reading) of the problem statement and a careful sketch of the physical situation. Efforts must be made to avoid mis-interpreting the physical situation. Once properly set up, the algebraic manipulations become relatively simple and straightforward. The crux of the problem is typically associated with the reading, interpreting and understanding of the problem statement.
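The classic riverboat case can serve as a concrete illustration (the numbers and function name below are ours, not from the problem set): a boat heads straight across while the current carries it downstream, and the two velocities add as perpendicular vectors.

```python
import math

def resultant_velocity(boat_speed, current_speed):
    """Vector sum of a boat's heading velocity and a perpendicular current:
    the observer on shore sees the hypotenuse of the two."""
    speed = math.hypot(boat_speed, current_speed)
    drift_angle = math.degrees(math.atan2(current_speed, boat_speed))
    return speed, drift_angle

# A 4.0 m/s boat heading across a river with a 3.0 m/s current:
v, phi = resultant_velocity(4.0, 3.0)  # 5.0 m/s, about 36.9 degrees off the heading
```

An observer in the boat measures 4.0 m/s; an observer on shore measures 5.0 m/s. The difference is purely one of reference frame.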
A projectile is an object upon which the only force of influence is the force of gravity. As a projectile moves through the air, its trajectory is affected by the force of gravity; air resistance is assumed to have a negligible effect upon the motion. Because gravity is the only force, the acceleration of a projectile is the acceleration of gravity - 9.8 m/s/s, down. As such, projectiles travel along their trajectory with a constant horizontal velocity and a changing vertical velocity. The vertical velocity changes by -9.8 m/s each second. (Here the - sign indicates that an upward velocity value would be decreasing and a downward velocity value would be increasing.)
A projectile has a motion which is both horizontal and vertical at the same time. These two components of motion can be described by kinematic equations. Since perpendicular components of motion are independent of each other, any motion in the horizontal direction is unaffected by a motion in a vertical direction (and vice versa). As such, two separate sets of equations are used to describe the horizontal and the vertical components of a projectile's motion. These equations are described below.
The VoxVoy Equations
Projectile problems in this set of problems can be divided into two types - those which are launched in a strictly horizontal direction and those which are launched at an angle to the horizontal. A horizontally launched projectile has an original velocity which is directed only horizontally; there is no vertical component to the original velocity. It is sometimes said that voy = 0 m/s for such problems. (The voy is the y-component of the original velocity.)
A non-horizontally launched projectile (or angle-launched projectile) is a projectile which is launched at an angle to the horizontal. Such a projectile has both a horizontal and a vertical component to its original velocity. The magnitudes of the horizontal and vertical components of the original velocity can be calculated from knowledge of the original velocity and the angle of launch (theta or θ) using trigonometric functions. The equations for such calculations are

vox = vo • cos(θ)
voy = vo • sin(θ)

The quantities vox and voy are the x- and y-components of the original velocity. The values of vox and voy are related to the original velocity (vo) and the angle of launch (θ). Here the angle of launch is defined as the angle with respect to the horizontal.
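The component calculation described above (vox from the cosine of the launch angle, voy from the sine) can be checked numerically. A small sketch, with an illustrative launch of our own choosing:

```python
import math

def launch_components(v0, theta_deg):
    """vox = vo*cos(theta), voy = vo*sin(theta); theta is the launch angle
    measured with respect to the horizontal."""
    theta = math.radians(theta_deg)
    return v0 * math.cos(theta), v0 * math.sin(theta)

# A projectile launched at 25.0 m/s at 60 degrees above the horizontal:
vox, voy = launch_components(25.0, 60.0)  # vox is 12.5 m/s, voy is about 21.7 m/s
```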
The Known and Unknown Variables
It is suggested that you utilize an x-y table to organize your known and unknown information. An x-y table lists kinematic quantities in terms of horizontal and vertical components of motion. The horizontal displacement, initial horizontal velocity, and horizontal acceleration are all listed in the same column. A separate column is used for the vertical components of displacement, initial velocity and acceleration. In this problem set, you will have to give attention to the following kinematic quantities and their corresponding symbols.
horizontal displacement: x or dx          vertical displacement: y or dy
original horizontal velocity: vox         original vertical velocity: voy
horizontal acceleration: ax               vertical acceleration: ay
final horizontal velocity: vfx            final vertical velocity: vfy
Given these symbols for the basic kinematic quantities, an x-y table for a projectile problem would have the following form:
x = __________________
vox = __________________
ax = __________________
vfx = __________________
t = __________________
y = __________________
voy = __________________
ay = __________________
vfy = __________________
t = __________________
Of the nine quantities listed above, eight are vectors which have a specific direction associated with them. Time is the only quantity which is a scalar. As a scalar, time can be listed in an x-y table in either the horizontal or the vertical column. In a sense, time is the one quantity which bridges the gap between the two columns. While horizontal and vertical components of motion are independent of each other, both types of quantities are dependent upon time. This is best illustrated by inspecting the kinematic equations which are used to solve projectile motion problems:

x = vox • t + 0.5 • ax • t^2
vfx = vox + ax • t
vfx^2 = vox^2 + 2 • ax • x

y = voy • t + 0.5 • ay • t^2
vfy = voy + ay • t
vfy^2 = voy^2 + 2 • ay • y

If the understanding that a projectile is an object upon which the only force is gravity is applied to these projectile situations, then it is clear that there is no horizontal acceleration. Gravity only accelerates projectiles vertically, so the horizontal acceleration is 0 m/s/s. Any term containing the ax variable thus cancels, and the three horizontal equations reduce to x = vox • t and vfx = vox.
Trajectory Diagram and Characteristics
Non-horizontally launched projectiles (or angle-launched projectiles) move horizontally above the ground as they move upward and downward through the air. One special case is a projectile which is launched from ground level, moves upwards towards a peak position, and subsequently falls from the peak position back to the ground. A trajectory diagram is often used to depict the motion of such a projectile. The diagram below depicts the path of the projectile and also displays the components of its velocity at regular time intervals.
The vx and vy vectors in the diagram represent the horizontal and vertical components of the velocity at each instant during the trajectory. A careful inspection shows that the vx values remain constant throughout the trajectory. The vy values decrease as the projectile rises from its initial location towards the peak position. As the projectile falls from its peak position back to the ground, the vy values increase. In other words, the projectile slows down as it rises upward and speeds up as it falls downward. This information is consistent with the definition of a projectile - an object whose motion is influenced solely by the force of gravity; such an object will experience a vertical acceleration only.
At least three other principles are observed in the trajectory diagram which apply to this special case of an angle-launched projectile problem.
The time for a projectile to rise to the peak is equal to the time for it to fall from the peak back to the launch height. The total time (ttotal) is thus the time up to the peak (tup) multiplied by two:
ttotal = 2 • tup
At the peak of the trajectory, there is no vertical velocity for a projectile. The equation vfy = voy + ay • t can be applied to the first half of the trajectory of the projectile. In such a case, t represents tup and the vfy at this instant in time is 0 m/s. By substituting and re-arranging, the following derivation is performed.
vfy = voy + ay • t
0 m/s = voy + (-9.8 m/s/s) • tup
tup = voy / (9.8 m/s/s)
The projectile strikes the ground with a vertical velocity which is equal in magnitude to the vertical velocity with which it left the ground. That is,
vfy = -voy (equal in magnitude, opposite in direction)
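The three relations above can be bundled into a short calculation. A sketch (the function name and the 19.6 m/s sample value are ours, not from the problem set):

```python
def peak_relations(voy, g=9.8):
    """For a projectile launched from and returning to ground level:
    t_up = voy / g, t_total = 2 * t_up, and it lands with the same
    vertical speed it launched with, directed downward."""
    t_up = voy / g                   # from 0 = voy + (-g) * t_up
    t_total = 2 * t_up               # symmetric rise and fall times
    vfy_landing = voy - g * t_total  # vfy = voy + ay*t with ay = -g
    return t_up, t_total, vfy_landing

t_up, t_total, vfy = peak_relations(19.6)  # 2.0 s up, 4.0 s total, lands at -19.6 m/s
```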
The Basic Strategy
The basic approach to solving projectile problems involves reading the problem carefully and visualizing the physical situation. A well-constructed diagram is often a useful means of visualizing the situation. Then list and organize all known and unknown information in terms of the symbols used in the projectile motion equations. An x-y table is a useful organizing scheme for listing such information. Inspect all known quantities, looking for either three pieces of horizontal information or three pieces of vertical information. Since all kinematic equations list four variables, knowledge of three variables allows you to determine the value of a fourth variable. For instance, if three pieces of vertical information are known, then the vertical equations can be used to determine a fourth (and a fifth) piece of vertical information. Often times, the fourth piece of information is the time. In such instances, the time can then be combined with two pieces of horizontal information to calculate another horizontal variable using the horizontal kinematic equations.
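For a horizontally launched projectile, this strategy reduces to a few lines: three vertical knowns give the time, and the time then bridges to the horizontal column. A sketch (function name and sample values are illustrative, not from the problem set):

```python
import math

def horizontal_launch(vox, height, g=9.8):
    """Horizontally launched projectile: the vertical column (fall distance
    = height, voy = 0, ay = -g) yields the time; the time then feeds the
    horizontal column, where ax = 0."""
    t = math.sqrt(2 * height / g)  # from y = voy*t + 0.5*ay*t**2 with voy = 0
    x = vox * t                    # horizontal displacement (constant vox)
    vfy = -g * t                   # final vertical velocity (downward)
    return t, x, vfy

# Launched horizontally at 8.0 m/s from a 19.6 m high cliff:
t, x, vfy = horizontal_launch(8.0, 19.6)  # t = 2.0 s, x = 16.0 m, vfy = -19.6 m/s
```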
Habits of an Effective Problem-Solver
An effective problem solver approaches a physics problem in a manner that reflects a collection of disciplined habits. While not every effective problem solver employs the same approach, they all share certain habits in common. These habits are described briefly here. An effective problem-solver...
- ...reads the problem carefully and develops a mental picture of the physical situation. If needed, they sketch a simple diagram of the physical situation to help visualize it.
- ...identifies the known and unknown quantities in an organized manner, often times recording them on the diagram itself. They equate given values to the symbols used to represent the corresponding quantity (e.g., vox = 12.4 m/s, voy = 0.0 m/s, dx = 32.7 m, dy = ???).
- ...plots a strategy for solving for the unknown quantity; the strategy will typically center around the use of physics equations and be heavily dependent upon an understanding of physics principles.
- ...identifies the appropriate formula(s) to use, often times writing them down. Where needed, they perform the needed conversion of quantities into the proper unit.
- ...performs substitutions and algebraic manipulations in order to solve for the unknown quantity.
Additional Readings/Study Aids:
The following pages from The Physics Classroom tutorial may serve to be useful in assisting you in the understanding of the concepts and mathematics associated with these problems.
- Vectors and Direction
- Vector Addition
- Vector Components
- Vector Resolution
- Relative Velocity and Riverboat Problems
- Independence of Perpendicular Components
- What is a Projectile?
- Characteristics of a Projectile's Trajectory
- Horizontal and Vertical Velocity Components
- Horizontal and Vertical Displacement
- Calculating Initial Velocity Components
- Horizontally Launched Projectiles Problems
- Non-Horizontally Launched Projectiles Problems
Problem Sets and Audio Guided Solutions
Vectors and Projectiles Problem Set
Vectors and Projectiles Audio Guided Solutions
View the audio guided solution for problem:
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34
In 1999, Yoseph Bar-Cohen of NASA's Jet Propulsion Laboratory challenged the engineering world to an arm-wrestling contest. Sort of, anyway. He doesn't plan on participating himself, and the arm, which will face off against a human opponent of middling strength, has to be robotic. The catch is, he's not asking for your standard metallic appendage–this robotic arm must be built with electroactive polymers (EAPs). These materials, which are often referred to as artificial muscles, bend, stretch, twist, or contract under the influence of an electrical charge, behaving much like real muscle fibers. So far, they've been used to make a swimming toy fish, drug release capsules and a miniature windshield wiper, but Bar-Cohen sees far more in store for EAPs, including more human-like robots.
Before issuing his arm-wrestling challenge, Bar-Cohen, who heads the Nondestructive Evaluators and Advanced Actuators Laboratory at JPL, did pioneering work himself with the multifunctional materials. He developed an EAP-driven contraption that resembles a miniature windshield wiper for the optical/infrared window of the 2.2-pound, palm-sized Nanorover. The rover, which was scheduled to fly as part of 199TK's MUSES-CN mission to asteroid 1998SF36, was eventually left behind due to mass and budget restrictions. Bar-Cohen's wiper would have been used to clear away microscopic debris from the rover's lens.
Bar-Cohen, who doubles as an unofficial EAP spokesperson, sees their initial inclusion in the mission as a step in the right direction. "Just the fact that [NASA] suddenly treated this idea as something that has potential means they're going to fly with it one way or another."
Another major milestone for EAPs, he says, is the first commercial product. Eamex, a Japanese corporation, has made a small splash in the toy industry with the introduction of an EAP-propelled fish. An EAP strip is attached between the body of the fish and its tail fin; electrical signals coax the polymer strip into alternately expanding and contracting, which moves the fin back and forth, propelling the fish forward.
Other groups have grander plans for EAPs. Marc Madou, a professor at University of California at Irvine, developed what he calls "smart pills"—capsules that would be implanted into the body to release doses of medication. The capsules are about the size of a small matchstick and come equipped with a sensor and a battery, and they're covered with a series of EAP valves. When the sensor detects a certain chemical change, it signals the battery, which emits an electrical charge. This charge activates the polymer valves, causing them to flap open and expose tiny perforations on the capsule surface. Medication stored in the capsule then seeps through the perforations until the sensor determines that a sufficient amount has been released. The sensor signals the battery again, which triggers the polymer flaps to close; the perforations are covered and the flow of medication stops. Madou, who has co-founded a company, ChipRx, to develop the device, expects it to be on the market within five to ten years.
To move toward the goal of building more lifelike robots, however, a strong, EAP-driven robotic arm is critical. That's why Bar-Cohen is excited about a claim by California-based SRI International that, with sufficient funding, they may be ready to wrestle.
SRI's progress on raising the necessary capital is unclear, but Bar-Cohen hopes the company will be ready in time for the next SPIE Electroactive Polymer Actuators and Devices conference in March 2005. And although he is anxious to finally witness the event, the spokesman in him sees it more as an opportunity to spark interest in artificial muscles than as a real test of their potential. For that reason, he wants to make sure that the robot's competition isn't too fierce. "We would like to go against a high school student, who will be selected not for force capability but for intelligence," he says, adding that the student will be asked to write an essay about the experience. "Hopefully he won't be too strong, so we'll give the arm a chance to win."
The incredible innovations, like drone swarms and perpetual flight, bringing aviation into the world of tomorrow. Plus: today's greatest sci-fi writers predict the future, the science behind the summer's biggest blockbusters, a Doctor Who-themed DIY 'bot, the organs you can do without, and much more.
A robotic sensor that won an R&D 100 Award in 2009 has been put to use by Woods Hole Oceanographic Institution (WHOI) in Gulf of Maine coastal waters to monitor the way red tides behave. These harmful algal blooms, which generate a potentially fatal toxin, can be a challenge to track or predict. The Environmental Sample Processors have been remotely deployed and should simplify and enhance this effort.
A recent study is the first to show that corals are not able to fully acclimate to low...
A new study on the feeding habits of ocean microbes calls into question the potential...
Surprisingly large amounts of discarded trash end up in the ocean. A recent paper by...
Stromatolites (“layered rocks”) are structures made of calcium carbonate and shaped by the actions of photosynthetic cyanobacteria and other microbes that trapped and bound grains of coastal sediment into fine layers. According to recent research, the widespread and mysterious disappearance of stromatolites may have been driven by single-celled organisms called foraminifera.
All forms of life that breathe oxygen—even ones that can't be seen with the naked eye, such as bacteria—must fight oxidants to live. These same oxidants also exist in the environment. But neutralizing environmental oxidants such as superoxide was a worry only for organisms that dwell in sunlight—in habitats that cover a mere 5% of the planet. Now researchers have discovered the first light-independent source of superoxide.
The ability to determine the fate of charcoal is critical to knowledge of the global carbon budget, which in turn can help understand and mitigate climate change. However, until now, researchers only had scientific guesses about what happens to charcoal once it's incorporated into soil. They believed it stayed there. Surprisingly, the findings of a new study show that most of these researchers were wrong.
A comprehensive marine biodiversity observation network could be established with modest funding within five years, according to a recently published assessment from a team led by J. Emmett Duffy of the Virginia Institute of Marine Science. Such a network, they say, would fill major gaps in scientists' understanding of the global distribution of marine organisms.
Variations in nutrient availability in the world's oceans could be a vital component of future environmental change, according to a research team. Their research reviews what we know about ocean nutrient patterns and interactions, and how they might be influenced by future climate change and other man-made factors. The authors also highlight how nutrient cycles influence climate by fuelling biological production.
In 2011, Lake Erie experienced a record-breaking algae bloom that began in the lake's Western region in mid-July and eventually covered an area of 230 square miles. At its peak in October, the bloom had expanded to more than 1,930 square miles, three times greater than any other bloom on record. According to recent research, the bloom was triggered by long-term agricultural practices coupled with extreme precipitation, followed by weak lake circulation and warm temperatures.
Rusted pieces of two Apollo-era rocket engines that helped boost astronauts to the moon have been fished out of the murky depths of the Atlantic by Amazon.com CEO Jeff Bezos. A privately funded expedition led by Bezos raised the main engine parts during three weeks at sea, about 360 miles from Cape Canaveral. The engine parts were resting nearly 3 miles deep in the Atlantic.
Large chunks of an ancient tectonic plate that slid under North America millions of years ago are still present under parts of central California and Mexico, according to new research led by Brown University geophysicists. The Isabella anomaly—a large mass of cool, dehydrated material about 100 km beneath central California—is in fact a surviving slab of the ancient Farallon oceanic plate, driven deep into the Earth’s mantle about 100 million years ago.
According to new research, models of carbon dioxide in the world’s oceans need to be revised. Trillions of plankton near the surface of warm waters are far more carbon-rich than has long been thought; global marine temperature fluctuations could mean that tiny microbes digest double the carbon previously calculated.
A new form of microbial life has been found in water samples taken from a giant freshwater lake hidden under kilometers of Antarctic ice, Russian scientists said Monday. In a prepared statement, the researchers said that the "unidentified and unclassified" bacterium has no relation to any of the existing bacterial types. They touched the lake water Sunday at a depth of 12,366 feet (3,769 m), about 800 miles (1,300 km) east of the South Pole in the central part of the continent.
All living organisms rely on iron as an essential nutrient. In the ocean, iron’s abundance or scarcity means all the difference as it fuels the growth of plankton. A new study from the Woods Hole Oceanographic Institution identifies an unexpectedly large source of iron to the North Atlantic—meltwater from glaciers and ice sheets, which may stimulate plankton growth. This source is likely to increase as melting of the Greenland ice sheet escalates under a warming climate.
With data from 73 ice and sediment core monitoring sites around the world, scientists have recently reconstructed Earth's temperature history back to the end of the last Ice Age. The analysis reveals that the planet today is warmer than it's been during 70 to 80% of the last 11,300 years.
Like the extraterrestrial creature in the movie Alien, the "extremophile" red alga Galdieria sulphuraria can survive brutal heat and resist the effects of toxins. Scientists were previously unsure of how a one-celled alga acquired such flexibility and resilience. But recently they made an unexpected discovery: Galdieria's genome shows clear signs of borrowing genes from its neighbors.
A continental-scale chemical survey in the waters of the eastern U.S. and Gulf of Mexico is helping researchers determine how distinct bodies of water will resist changes in acidity. The study measures varying levels of carbon dioxide and other forms of carbon in the ocean. According to the survey, different regions of coastal ocean will respond to an influx of carbon dioxide in different ways.
Researchers steering a remote-controlled submarine around the world's deepest known hydrothermal vents have collected numerous samples from depths reaching more than 3 miles below the sea's surface between the Cayman Islands and Jamaica. They believe that laboratory analysis in the coming months will reveal some new life forms that have evolved in the pitch-black vent areas of the Cayman Trough, where mineral-rich fluid gushes from volcanic chimneys.
When migrating, sockeye salmon typically swim up to 4,000 miles into the ocean and then, years later, navigate back to the upstream reaches of the rivers in which they were born to spawn their young. Scientists have long wondered how salmon find their way to their home rivers over such epic distances. A new study suggests that salmon find their home rivers by sensing the rivers' unique magnetic signature.
Researchers recently found that nitrogen entering the ocean—whether through natural processes or pollution—boosts the growth and toxicity of a group of phytoplankton that can cause the human illness “amnesic shellfish poisoning”. Commonly found in marine waters off the North American West Coast, these diatoms produce a potent toxin called domoic acid. When these phytoplankton grow rapidly into massive blooms, high concentrations of domoic acid put human health at risk if it accumulates in shellfish.
British researchers have unveiled a futuristic Antarctic research base that can move, sliding across the frozen surface to beat the shifting ice and pounding snow that doomed its predecessors. Its builders hope that the Halley VI Research Station, the sixth facility to occupy the site on the Brunt Ice Shelf, can adapt to the unpredictable ice conditions.
In another blow to the "Everything is Everywhere" tenet of bacterial distribution in the ocean, scientists at the Marine Biological Laboratory have found "bipolar" species of bacteria that occur in the Arctic and Antarctic, but nowhere else. And, surprisingly, they found even fewer bipolar species than would turn up by chance if marine bacteria were randomly distributed everywhere.
After years of searching, scientists and broadcasters say they have captured video images of a giant squid in its natural habitat deep in the ocean for the first time. Japanese public broadcaster NHK released photographs of the giant squid this week ahead of Sunday's show about the encounter. The Discovery Channel will air its program on Jan. 27.
The rapid retreat of sea ice in the Arctic has attracted the attention of top naval officials who have recently held an Arctic Summit at the Office of Naval Research to discuss their response to what will likely be an increased volume of human activity in the region. Although the meeting did not discuss policy, it did highlight the many potential areas of impact, from oil drilling to tourism.
Vast amounts of methane are stored under the ocean floor, and anaerobic oxidation of methane coupled to sulfate respiration prevents the release of this gas. Though discovered decades ago, the mechanism for how microorganisms performed this reaction has remained a mystery. According to recent findings, a single microorganism can do this on its own, and does not need to be carried out in collaboration with a bacterium as previously thought.
To keep cellular systems running, all cells need fuel. For certain ocean-dwelling microorganisms, methane can be such a fuel. But researchers studying these creatures had previously assumed that the methane they consumed was used as a carbon source. Recent studies have surprisingly shown that this is not the case, a finding that will force scientists to reevaluate the microorganisms’ role in inactivating environmental methane.
In the future, warmer waters could significantly change ocean distribution of populations of phytoplankton, tiny organisms that could have a major effect on climate change. Researchers have recently shown that by the end of the 21st century, warmer oceans will cause populations of these marine microorganisms to thrive near the poles and shrink in equatorial waters.
A new NASA study shows that from 1978 to 2010 the total extent of sea ice surrounding Antarctica in the Southern Ocean grew by roughly 6,600 square miles every year, an area larger than the state of Connecticut. However, this growth rate is not nearly as large as the decrease in the Arctic, which has scientists questioning the reasons for the growth. Atmospheric circulation may be one cause.
Mathematicians Through History
- Grades: PreK–K, 1–2, 3–5
Mathematicians have been changing the history of the world since the Babylonians invented the abacus in the fourth century B.C. The Internet offers great resources to learn about famous mathematicians and math inventors. Start with the MacTutor History of Mathematics Archive. Here you'll find an overview of math history from 2000 B.C. to modern mathematical thinking. The site also includes a detailed chronology of math throughout history, as well as biographies of math pioneers and inventors.
You can also learn about the inventors who pioneered and perfected calculating machines. One such inventor was Ada Byron Lovelace, a pioneering female mathematician whose work was vital to Charles Babbage's development of the first "analytical engine" — the forerunner of the modern-day computer. You can read about Ada and other pioneering female mathematicians at the Women Mathematicians site, courtesy of Agnes Scott College.
Vertebrates are any animals that have a backbone or spinal column. These animals are so named because nearly all adults have vertebrae, bone or segments of cartilage forming the spinal column. The five main classes of vertebrates are fish, amphibians, birds, reptiles, and mammals.
Vertebrates are the most complex of Earth's animal life-forms. The earliest vertebrates were marine, jawless, fishlike creatures with poorly developed fins. First appearing on Earth more than 500 million years ago, they probably fed on algae (single-celled or multicellular plants and plant-like animals), small animals, and decaying organic matter. The evolution of jaws, limbs, internal reproduction organs, and other anatomical changes over millions of years allowed vertebrates to move from ocean habitats to those on land.
All vertebrates have an internal skeleton of bone and cartilage or just cartilage alone. In addition to a bony spinal column, all have a bony cranium surrounding the brain. Vertebrates have a heart with two to four chambers, a liver, pancreas, kidneys, and a number of other internal organs. Most have two pairs of appendages that have formed as either fins, limbs, or wings.
It is a mystery that has stymied astrophysicists for decades: how do black holes produce so many high-power X-rays?
It won't come as much of a surprise to most dog owners, but new research has shown that man's best friend is a lot more likely to steal food when nobody's looking, suggesting for the first time that dogs can understand a human's point of view.
Astronomers have measured the light from the very earliest stars, working out for the first time the total amount of light from all the stars that have ever shone.
NASA scientists have, for the first time, seen the light from a planet outside our solar system that's a similar size to the Earth.
Researchers at the National Institute of Standards and Technology (NIST) have worked out a way to produce light pulses that - in a way - travel faster than the speed of light.
For innumerable decades, drivers have attempted to set their vehicles apart from the rest by customizing their ride with everything from wheels to paint.
BASF and Philips have developed an OLED car roof that can flip between acting as a window and a light source, giving light when switched on but becoming transparent when turned off.
MIT's created a camera that can capture a trillion frames a second - the ultimate in slow motion, say its developers.
Scientists at Chalmers University of Technology have succeeded in creating light from vacuum.
Researchers say they've been able to control the brains and muscles of small organisms such as worms, controlling them like tiny robots.
IBM has developed a new chip technology that integrates electrical and optical devices on the same piece of silicon, enabling data transfer using pulses of light.
Physicists from the University of Bonn have developed a completely new source of light, previously thought to be impossible - a so-called Bose-Einstein condensate consisting of photons.
Cities could one day be lit by glowing trees, thanks to a discovery at Taiwan's National Cheng Kung University (NCKU).
A team of international researchers has designed a photonic chip that operates on the principle of light, rather than electricity.
An Australian team says it's developed the most efficient quantum memory for light ever, and has used it to create read-once holograms.
"Inspect this code. Here we're creating a new OrderedCollection, and asking it to add 1, then add 2, then add 3, then finally return yourself"
(OrderedCollection new) add: 1; add: 2; add: 3; yourself.
"This is normally not written on one line, but is written like this:"
OrderedCollection new
	add: 1;
	add: 2;
	add: 3;
	yourself.
Here, we're asking the class (remember, this is a blueprint for creating objects) OrderedCollection to create a new collection. Then we're asking the new collection to add 1 to itself. Then we're asking the same new collection to add 2 to itself, then 3, and then finally we're asking the collection to return itself. You normally don't have to send that last message to an object, as the default return is the object itself (we call this the receiver of the messages), but the message add: returns the parameter you're passing, so in this case, if we want to see the OrderedCollection that we're creating, we need to ask it to return itself as the last message send. This may be a little confusing; I showed the above snippet as it explicitly creates a new object. You could get the same results by inspecting the two snippets below, which don't explicitly create a new object - this is done implicitly from the message sent:
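For readers coming from other languages, here is a rough analogy in Python (an assumption for comparison only — Python has no cascade syntax) showing why the explicit "return yourself" step is needed:

```python
# Rough Python analogy of the cascade above: several messages are sent to
# ONE receiver, and then the receiver itself is returned -- the job that
# Smalltalk's `yourself` does.  list.append returns None (where Smalltalk's
# add: returns its argument), so neither would hand back the collection.
def make_collection():
    receiver = []          # "OrderedCollection new"
    receiver.append(1)     # "add: 1"
    receiver.append(2)     # "add: 2"
    receiver.append(3)     # "add: 3"
    return receiver        # "yourself"


print(make_collection())  # → [1, 2, 3]
```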
"Here, we're asking the OrderedCollection class to give us a new OrderedCollection object with the values 1, 2, and 3 in it."
OrderedCollection with: 1 with: 2 with: 3
"Here, we're asking the OrderedCollection class to give us a new OrderedCollection object with all of the values 1, 2, and 3"
OrderedCollection withAll: #(1 2 3)
Now, if you print it to the above code, you'll see an ASCII representation of the object: OrderedCollection (1 2 3 ). When you inspect the above code, and click on self you will see:
There are several ways that we could make an ordered collection with a fourth Integer in it; here's a neat way. Say that you have this OrderedCollection with only the first 3 integers in it, and realize 'whoops, I actually wanted 4 integers in it.' You don't have to go back to the code you typed in above and redo it, you can just ask the object you're inspecting to add a fourth integer to itself:
Highlight the code entered in the bottom pane, and do it. Here, you're asking the object itself (self) to add 4 to itself. If you have self highlighted, you'll notice that it is updated (if you don't have self highlighted, then click on it to see the updates). You'll see:
This is an illustration of being able to view and manipulate objects in real time, which is Immensely Powerful. If you're coding along and something isn't quite working right, you can stop execution, grab the troublesome object and see exactly what is going on. If you want to simulate certain conditions, you can just change the object directly. For example, say you realized that you shouldn't have the integer 4, but rather the string 'four': you can click on the fourth element, delete 4, and type in 'four', then middle click>accept. The fourth element in this collection is now the string 'four'. By clicking on the 3rd element, then back to the fourth element to confirm this, you'll see:
...and remember, we did all this without the hassle of compiling, linking, and running the compiled program! Ok, now that we have an idea about how to create a collection, we're going to do something with this collection: let's add up the integers in the collection. To do this, you can use the following snippet:
| anOrderedCollection aSum |
aSum := 0.
anOrderedCollection := OrderedCollection withAll: #(1 2 3).
anOrderedCollection do: [:anElement | aSum := aSum + anElement].
aSum inspect.
Here, the lines of code mean:
1) declare temporary variables
2) initialize the sum
3) create a new ordered collection, assign it to one of the temporary variables
4) ask the ordered collection to do something for each element. For each element, we're asking the sum to add the element to itself.
5) here, we're asking the sum to open an inspector on itself (yeah, you can do this programatically - cool eh?)
For the folks with programming experience, you'll note that we didn't have to worry about bounds checking, or the size of the collection, or declaring temporary variables to index the collection - this is all handled by the collection. Very nice and it helps to reduce errors. We very naturally just asked the ordered collection to do something with each element.
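The same element-by-element iteration, with no bounds checking or index variables, looks like this in Python (shown only as a cross-language comparison):

```python
# Analogous sketch in Python: like Smalltalk's do:, the for loop asks the
# collection itself to hand over each element -- no index variable, no
# size check, no possibility of an off-by-one bounds error.
an_ordered_collection = [1, 2, 3]

a_sum = 0
for an_element in an_ordered_collection:  # mirrors do: [:anElement | ...]
    a_sum = a_sum + an_element

print(a_sum)  # → 6
```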
Back to inheritance now; as the name suggests, OrderedCollection is a type of Collection, and inherits methods and instance variables from Collection. To be more precise, it inherits from a class called SequenceableCollection, which in turn inherits from Collection. Now, I could use UML, or any number of other industry software modeling diagrams here, but I want to save time so I'm going to use a textual shorthand for outlining class relationships - I'll denote inheritance by tabbing, so indicating the above inheritance looks like this:
You can think of this as: OrderedCollection is a type of SequenceableCollection, which is a type of Collection. For example, a creation method we used - withAll: - is inherited from Collection. I'll show this class method by:
Both Collection and SequenceableCollection are what we call abstract classes - classes that would never instantiate an object themselves, but serve as good logical building points. Here, it doesn't matter if we have an OrderedCollection, a SortedCollection, or a Bag (an unordered collection), or whatever - we'd want all of them to know how to respond to withAll:. Here's the sweet thing: we implement the method that all these classes should respond to in one spot, and reuse it. So, if you need to change withAll: for these classes, then there's only one spot to change.
If you need to have an exception to the rule, say you have a Heap class that needs to implement the withAll: method differently, then you can do what is called overriding the method in Heap. Adding Heap to our outline, and indicating abstract classes in italics, gives us:
Note: when we send the withAll: message to Heap or to OrderedCollection, these two classes have different implementations of the same message - this is known as polymorphism. Polymorphism is another one of those esoteric terms that really means something pretty simple.
The corollary of polymorphism is a very powerful one though: it allows you to get out of a decision-making frame of mind, and get into a commanding frame of mind. This allows us to get away from a common procedural programming trait - having lots of code that is checking stuff and conditionally doing stuff (if it's an OrderedCollection, do this, if it's a Heap, do this, if it's a Bag, do this, etc) - and lets us just do stuff. It doesn't matter what type of collection it is, when we ask it to do withAll:, it will do the right thing!
Finally, if we also add the above mentioned SortedCollection and Bag (<groan> here's where this month's title pun comes from ;-), we get:
It's easy to see how there are lots of opportunities for reuse here; it's generally a good thing when you can code something in one spot, and have many objects reuse that one implementation. That way, when you have to make an update, you only update that one spot and don't have to worry about tracking down many different spots and keeping the update in sync.
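The ideas above - implementing withAll: once in an abstract spot, overriding behavior in a subclass, and dispatching polymorphically without type checks - can be sketched in Python for comparison (the class and method names here are illustrative analogies, not Squeak's actual classes):

```python
# Hypothetical Python analogy: with_all is defined in ONE spot on the
# abstract base and reused by every subclass; Heap overrides add to keep
# itself sorted, the "exception to the rule".
class Collection:
    def __init__(self):
        self.elements = []

    @classmethod
    def with_all(cls, values):      # implemented once, reused by all
        new = cls()
        for v in values:
            new.add(v)
        return new

    def add(self, value):
        self.elements.append(value)


class OrderedCollection(Collection):
    pass                            # inherits with_all and add unchanged


class Heap(Collection):
    def add(self, value):           # override: a Heap keeps itself sorted
        self.elements.append(value)
        self.elements.sort()


# Polymorphism: the caller never checks the type -- it just sends the
# message, and each class does the right thing.
for klass in (OrderedCollection, Heap):
    print(klass.__name__, klass.with_all([3, 1, 2]).elements)
# OrderedCollection [3, 1, 2]
# Heap [1, 2, 3]
```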
Now we're going to start getting to the question of how we know what objects are where and how to use them. As with other topics in this series, I'm introducing this one a bit at a time as well. A common problem for Smalltalk beginners is that they're overwhelmed with the rich class library as there are thousands of objects you can use. To help reduce this problem, I've extended one of the Smalltalk browsers and made a ScopedBrowser. This is a good example of the reflectiveness we mentioned earlier - I was able to extend or alter the behavior of the IDE to suit my needs. This ScopedBrowser will only show you the classes we need to concentrate on for this article. My intent is to add to the scope that is being browsed over time as more objects are introduced. For this time, I've included all the above mentioned collections objects as well as a couple more collections objects for those interested (a total of 9 classes). To open this browser, you first need to file in the MakingSmalltalk-Article3.st code to your image (see article 2 on how to do this). Then open the browser by doing the snippet:
For the read-along folks, this is what you'll see after navigating to the withAll: method:
(Note: I set my browser colour to purple - the default colour is green, I'll come back to customization in a future article)
To find the withAll: method, click on the class button, then Collections-Abstract>Collection>instanceCreation>withAll:
This browser has 5 panes and 3 buttons, from left to right and top to bottom:
pane 1: shows categories - these are collections of classes (pun intended)
pane 2: shows classes
pane 3: shows categories - these are collections of methods
pane 4: shows methods
pane 5: shows Smalltalk code
button 1: toggles the browser to show the instance methods of the object
button 2: toggles the browser to show the class comments
button 3: toggles the browser to show the class methods of the object
Now, if we step back a little bit, and click on Collections-Sequenceable>OrderedCollection, you'll see:
Note that the code pane shows who OrderedCollection inherits from, as well as their instance variables, if you then go back to the abstract classes and click on SequenceableCollection, you'll notice that it inherits from Collection just as we discussed. Take some time poking around these classes and get comfortable with navigating in this browser. Look for the classes and methods we discussed above.
Finally, I'm going to introduce one more browser - the hierarchy browser. This one is good when you're concentrating on hierarchies and inheritance when you're coding. To open it, first click on OrderedCollection again, then middle-click>spawn hierarchy. You'll see:
Note, that this browser hasn't been scoped, and shows the full hierarchy. Notice that Collection inherits from an object called Object - no surprise here, most things about Smalltalk are just what you would expect. Finally, the topmost object is ProtoObject, which implements some really fundamental methods. The question naturally arises: "What does ProtoObject inherit from?". The answer is nothing, or nil to be more precise.
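Python's class hierarchy ends in an analogous way, which you can verify by walking the chain of superclasses (a cross-language sketch, not Squeak code):

```python
# Walk a class's inheritance chain.  In CPython the chain ends at object,
# whose base is None -- much as Squeak's chain ends at ProtoObject, whose
# superclass is nil.
cls = list
chain = []
while cls is not None:
    chain.append(cls.__name__)
    cls = cls.__base__          # object.__base__ is None, ending the loop

print(" -> ".join(chain))  # → list -> object
```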
Project thumbnailFromUrl: 'http://www.squeak.org/Squeak2.0/2.7segments/SqueakEasy.extSeg'
For the read-along folks, you'll see a simple turtle game project, and when you enter the project you can direct the turtle by entering Smalltalk code:
Q: How compatible with [VisualWorks, VisualAge, Smalltalk/X, Dolphin, etc] Smalltalk will the code examples be?
A: Though I'm not writing these articles with code portability in mind, and I'm not doing any portability testing, much of the basic code should be compatible. By basic code, I mean things like how collections are used, how classes are declared, instance variable use, etc. Traditionally where the different flavours of Smalltalk differ most is in GUI code. With Squeak specifically, some of the cool stuff we're going to look at isn't portable to other flavours, for example: the halo stuff, morphic stuff, and downloading projects.
What I'll start doing though, is any code that I a priori suspect is Squeak specific, I'll tag with [Squeak-only-suspected]. NOTE: this will only indicate my suspicion - I don't plan on spending time on testing it in different flavours, or searching for ways to accomplish the same task in a different manner.
This would be a great use of the Linux Gazette's talkback sections - if other Smalltalkers note what does and doesn't work in other flavours, they can post this info. Also starting with this article, I'll start indexing the examples so they're easier to refer to for this purpose (ie: ex1, ex2). I haven't done this yet, as I wanted to keep the series informal, but I expect enumerating examples will make it easier/clearer to post talkbacks. If you don't like the enumerating - post a talkback.
Photographic survey of the impacts of Hurricane Katrina on the barrier islands, barrier shoreline, and the Mississippi River Delta along the Louisiana coastline. Primary focus is on the ecosystems such as fish, rookeries, and seagrass beds.
Landscapes of interwoven wetlands and uplands offer a rich set of ecosystem goods and services. Changes in climate and land use can affect the value of those services. We study these areas to understand how they may be changing.
Will salt marshes survive if sea level rises quickly? The answer depends on whether the areas surrounding them can allow salt marsh fauna and flora to migrate there. Local topography, both natural and manmade, is the main factor limiting this migration.
The North American Amphibian Monitoring Program (NAAMP) is a long-term monitoring program designed to track the status and trends of frog and toad populations with links to data access, protocol, and how to volunteer as an observer.
Locations of survey points, a photographic record of each site, field observations of vegetation cover and descriptions of oil coverage in the water and on plants, including measurements of the distance of oil penetration from the shoreline.
Portal of the South Florida Information Access (SOFIA) system providing multiple links to projects, products, information, and data for research, decision-making, and resource management of the South Florida ecosystem restoration effort.
A pictorial overview for general audiences of key landscapes and ecosystems in South Florida; includes extensive references and links to past and current research activities relating to the South Florida ecosystem restoration effort.
Interactive Mapping Service (IMS) is an Internet based Geographic Information System designed to provide users with online mapping capability of habitats, land use and land cover, and seagrass for areas of Tampa Bay.
Resonance is a relatively large selective response of an object or a system that vibrates in step, or phase, with an externally applied oscillatory force. Resonance was first investigated in acoustical systems such as musical instruments and the human voice. An example of acoustical resonance is the vibration induced in a violin or piano string of a given pitch when a musical note of the same pitch is sung or played nearby.
The concept of resonance has been extended by analogy to certain mechanical and electrical phenomena. Mechanical resonance, such as that produced in bridges by wind or by marching soldiers, is known to have built up to proportions large enough to be destructive, as in the case of the destruction of the Tacoma Narrows Bridge in 1940. Spacecraft, aircraft, and surface vehicles must be designed so that the vibrations caused by their engines or by their movement through air are kept to a safe minimum.
Excerpt from the Encyclopedia Britannica without permission.
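The idea can be made concrete with the driven, damped harmonic oscillator, the standard mathematical model of resonance. Here is a short sketch (Python; the parameter values are arbitrary illustrative numbers, not taken from the excerpt above):

```python
import math

def steady_state_amplitude(omega, omega0=2.0, gamma=0.1, f0=1.0):
    """Steady-state amplitude of x'' + gamma*x' + omega0**2 * x = f0*cos(omega*t).

    omega  : driving (externally applied) frequency
    omega0 : natural frequency of the object or system
    gamma  : damping coefficient
    f0     : strength of the applied oscillatory force
    """
    return f0 / math.sqrt((omega0**2 - omega**2) ** 2 + (gamma * omega) ** 2)

# The response is modest far from the natural frequency...
print(steady_state_amplitude(0.5))   # drive well below omega0
# ...and much larger when the drive is "in step" with the system:
print(steady_state_amplitude(2.0))   # drive exactly at omega0
```

Driving exactly at the natural frequency makes the first term under the square root vanish, so only damping limits the response; with weak damping (a bridge in a steady wind, or a piano string and a sung note of the same pitch) the amplitude can grow very large.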
On 5 May 1961, three weeks after Yuri Gagarin had orbited Earth to become the world’s first spaceman, American astronaut Alan B. Shepard was shot into space atop a Redstone rocket. His Mercury spacecraft, which he named ‘Freedom 7’, did not orbit Earth. It was a sub-orbital flight. Launched from Cape Canaveral, the spacecraft reached a maximum altitude of 187 kilometres and then dropped back to Earth to splash down in the Atlantic, 490 kilometres away.
The whole flight took just over 15 minutes, but it lifted America into space and Shepard became a hero.
Elated by the success of the mission, the country’s new president, John F. Kennedy, set a new goal of “landing a man on the moon and returning him safely to Earth” before the decade was out.
Mercury was a single-seat spacecraft like Gagarin’s Vostok, but unlike the spherical Vostok, Mercury was conical in shape, and smaller and lighter, weighing only 1.3 tonnes to Vostok’s 2.4 tonnes.
The craft re-entered Earth’s atmosphere bottom end down. The shield on the bottom protected the rest of the spacecraft and the astronaut from the heat that was generated as the spacecraft plunged to Earth.
On 21 July, the Americans sent another astronaut, Virgil Grissom, into space. Grissom’s flight too was sub-orbital. His craft, ‘Liberty Bell 7’, shot out into space, went a little higher than Shepard’s, and then fell back to Earth. The hatch of the spacecraft accidentally blew off after splashdown, and the Liberty Bell sank, but Grissom was rescued by helicopter.
Five sounding rockets streaked into the pre-dawn sky on March 27, 2012, leaving trails of milky white clouds in a little understood part of the atmosphere. The first rocket was launched to the cusp of space at 4:58 a.m. Eastern Daylight Time, and the subsequent launches occurred at 80 second intervals. The goal of the Anomalous Transport Rocket Experiment (ATREX) was to improve understanding of the process that drives fast-moving winds high in the thermosphere.
Fiery trails from four of the five sounding rockets are clearly visible in this time-lapse photograph (top) of the launch. The second image shows two of the clouds left in the wake of the experiment; the rockets released trimethyl aluminum, a substance that burns spontaneously in the presence of oxygen. The harmless by-products of this glowing reaction were visible to the naked eye as far south as Wilmington, North Carolina; west to Charleston, West Virginia; and north to Buffalo, New York. Both photographs were taken near the launch site at NASA’s Wallops Flight Facility in Virginia.
Throughout the experiment, researchers used specialized cameras in North Carolina, Virginia, and New Jersey—as well as temperature and pressure instruments on two of the rockets—to monitor the clouds. By measuring how quickly the clouds move away from each other and integrating that information into atmospheric models, they hope to improve their understanding of the 320 to 480 kilometer (200 to 300 mile) per hour winds in the thermosphere.
First noticed by scientists in the 1960s, the winds are thought to be part of a high-altitude jet stream that’s distinct from the one lower in the troposphere, where commercial aircraft fly. Observing the turbulence produced by these winds should make it possible to determine what’s driving them.
An improved understanding of the upper jet stream will make it easier to model the electromagnetic regions of space that can damage satellites and disrupt communications systems. The experiment will also help explain how the effects of atmospheric disturbances in one part of the globe can be transported to other parts of the globe in a mere day or two.
The launches are part of a broader sounding rocket program at NASA that conducts approximately 20 flights a year from launch sites around the world.
- NASA (2012, March 7). Jet Stream Study Will Light Up the Night Sky. Accessed March 28, 2012.
Photographs by Brea Reeves and Chris Perry from NASA's Wallops Flight Facility. Caption by Adam Voiland and Karen Fox.
Water held in soil plays an important role in the climate system. The dataset released by ESA is the first remote-sensing soil moisture data record spanning the period 1978 to 2010 – a predecessor of the data now being provided by ESA’s SMOS mission.
- Soil moisture and ocean salinity satellite ready for launch (Thu, 29 Oct 2009, 10:16:20 EDT)
- Iowa State researchers developing wireless soil sensors to improve farming (Fri, 10 Oct 2008, 13:15:26 EDT)
- Mars air once had moisture, new soil analysis says (Wed, 25 Jun 2008, 10:43:05 EDT)
- Sophisticated soil analysis for improved land use (Fri, 30 May 2008, 11:28:54 EDT)
- Restoring coastal wetlands? Check the soil (Tue, 7 Sep 2010, 15:14:43 EDT)
Click on the Probability worksheet set you wish to view below.
What is probability?
Probability is the branch of mathematics that deals with random events and chance. It is related to our common sense notion of chance (like when we say, "She will probably go to the park.") but uses exact numbers.
Probability is expressed mathematically on a scale from zero (can't happen) to one (will certainly happen), so a probability is often written as a percentage or a fraction. For example, a die has six faces, numbered one through six. When the die is thrown, any of the six numbers can show. The chance of any particular number being face up is 1/6 (sometimes read as "1 out of 6"). Six things could happen (and no others), so if we add up all the possibilities, we should get one: 1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6 = 1.
It's important to recognize that we don't have to know which number will show up, the numbers are random, but even so, we know the chances of getting each number.
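If you have a computer handy, you can check this with a short simulation (written here in Python, purely for illustration): roll a virtual die many times and see how often each face comes up.

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the run is repeatable
rolls = [random.randint(1, 6) for _ in range(60_000)]
counts = Counter(rolls)

# Relative frequency of each face: all six should be close to 1/6 ≈ 0.167.
freqs = {face: counts[face] / len(rolls) for face in range(1, 7)}
for face in range(1, 7):
    print(face, round(freqs[face], 3))

# The six outcomes are the only possibilities, so the frequencies add up to 1.
print(sum(freqs.values()))
```

Even though we can't predict any single roll, the long-run frequencies settle right where the probabilities say they should.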
Who invented probability?
Games of chance were first investigated formally by Gerolamo Cardano, an Italian physician who was also a wonderful mathematician. He worked on the mathematics of gambling and probability in the 16th century.
How is probability used in the real world?
Mathematical probabilities are all around us. Batting averages, for instance, tell us an estimate of how likely it is that any particular baseball player will get a hit. The numbers are used to decide who is a better batter and how much they are worth when it comes time to trade them or negotiate a new contract.
Lotteries are based on probability, and whoever is running the lottery depends on knowing how many tickets they have to sell and the chances someone will win. They use this to decide how much money to pay out (and how much profit they will keep).
Earthquakes, sunspots and weather are all random appearing processes, but all can be analyzed using probability.
A basic problem in probability.
Suppose you have three aces in a five-card hand of poker (A, A, A, X, Y, where X and Y are other, non-ace cards). What is the probability that if you discard X and draw a new card, you will get another ace?
Since there are 52 cards in a deck, and you have 5 already, there are a total of 52 - 5 = 47 cards you might draw. Only one of those is an ace.
The chances of getting the single ace out of 47 cards total is 1/47.
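Here is the same calculation as code, plus a simulation that checks it (Python, just to illustrate the reasoning above).

```python
import random
from fractions import Fraction

# Exact answer, exactly as reasoned above: 52 cards minus the 5 in your hand
# leaves 47 unseen cards, and exactly 1 of them is the remaining ace.
p_exact = Fraction(1, 52 - 5)
print(p_exact, "=", float(p_exact))

# Simulation: draw one card at random from the 47 unseen cards, many times.
random.seed(0)
unseen = ["ace"] + ["other"] * 46
trials = 100_000
hits = sum(random.choice(unseen) == "ace" for _ in range(trials))
print(hits / trials)  # should hover near 1/47 ≈ 0.021
```

The simulated frequency and the exact fraction agree, which is a nice sanity check on the counting argument.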
An interesting fact about probability.
Sometimes, events we think are very improbable can turn out to be likely when we understand the probability in the mathematical sense.
One example is the probability of getting the exact same card dealt simultaneously from two different decks. Take one deck of cards and give a partner another deck. Each of you then turns over one card at a time from the top of your decks. You both continue until all the cards are gone.
What are the chances that you will both deal the exact same card at the same time? To count as a match, the cards have to be identical: same value, same suit.
It turns out that the probability of not getting a match is about 36%, which means the chance of getting at least one exact match is about 64% - so almost 2 out of 3 times, you will get at least one match. This is a very surprising result!
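You don't have to take that on faith: the card-matching experiment is easy to simulate (Python again, purely for illustration). Mathematically this is the classic "derangement" problem, where the chance of no match is very close to 1/e, about 36.8%.

```python
import random

def at_least_one_match(n_cards=52, trials=20_000, seed=42):
    """Deal two shuffled decks card by card; return the fraction of trials
    in which at least one position shows the exact same card."""
    rng = random.Random(seed)
    deck = list(range(n_cards))  # 52 distinct cards, labelled 0..51
    hits = 0
    for _ in range(trials):
        other = deck[:]
        rng.shuffle(other)
        # Only the *relative* order matters, so one deck can stay unshuffled.
        if any(a == b for a, b in zip(deck, other)):
            hits += 1
    return hits / trials

p = at_least_one_match()
print(p)  # close to 1 - 1/e ≈ 0.632, i.e. about 64%
```

Twenty thousand simulated games land within a percent or so of the theoretical value.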
Mission Type: Orbiter
Launch Vehicle: N1 (no. 15005)
Launch Site: NIIP-5 / launch site 110P
Spacecraft Mass: about 6900 kg
Spacecraft Instruments: Unknown
Deep Space Chronicle: A Chronology of Deep Space and Planetary Probes 1958-2000, Monographs in Aerospace History No. 24, by Asif A. Siddiqi
National Space Science Data Center, http://nssdc.gsfc.nasa.gov/
Solar System Log by Andrew Wilson, published 1987 by Jane's Publishing Co. Ltd.
This was the second attempt to launch the giant N1 rocket. As with its predecessor, its payload consisted of a basic 7K-L1 ("Zond") spacecraft equipped with additional instrumentation and an attitude-control block to enable operations in lunar orbit. Moments after launch, the first stage of the booster exploded in a massive inferno that engulfed the entire launch pad and damaged nearby buildings and structures for several kilometers around the area. Amazingly, the payload's launch-escape system operated without fault, and the Zond descent apparatus (or descent module) was recovered safely 2 kilometers from the pad.
An investigation commission traced the cause of the failure to the entry of a foreign object into the oxidizer pump of one of the first-stage engines at T-0.25 seconds. The ensuing explosion started a fire that began to engulf the first stage. The control system shut down all engines except one by T+10.15 seconds. The booster lifted about 200 meters off the pad and then came crashing down in a massive explosion.
Science subject and location tags
Articles, documents and multimedia from ABC Science
Thursday, 15 July 2010
Researchers have unearthed a treasure trove of fossils that contains individuals of an ancient marsupial species ranging from birth to adulthood.
Friday, 9 July 2010
Australian scientists say they have unearthed the remains of a bizarre, prehistoric, carnivore in an ancient former rainforest, where specimens stretch back 25 million years.
Tuesday, 6 July 2010
Great Moments in Science From top to bottom, T-rex had many, ahem, revealing secrets about how and what it ate. Dr Karl snaps on the rubber gloves to examine the entrails of one fascinating dinosaur.
Tuesday, 29 June 2010
Great Moments in Science We know that the dinosaurs came to a dreadful end. But Dr Karl thinks that there was at least one beast that ruled them all.
Wednesday, 9 June 2010
A study comparing how carnivorous dinosaurs tore through their meat has found meat eaters munched their meals using at least four distinct biting methods.
Tuesday, 1 June 2010
Great Moments in Science What is known about dinosaurs is increasing all the time. Dr Karl has been digging around to understand their rise and ultimate decline.
Tuesday, 25 May 2010
Great Moments in Science The family connection between birds and dinosaurs is amazing. So much so you could knock Dr Karl over with a feather.
Tuesday, 18 May 2010
Great Moments in Science There are amazing family connections between the birds we see today and the dinosaurs of the distant past. We know this because a little birdie told Dr Karl.
Wednesday, 12 May 2010
Researchers have located chemical remains of the oldest known bird from fossils recovered 150 years ago, a new study claims.
Tuesday, 11 May 2010
Great Moments in Science You may be familiar with how dinosaurs came to a nasty end. But Dr Karl is puzzled by how they began in the first place.
Friday, 7 May 2010
Modern humans most likely interbred with Neanderthals, according to a landmark genome analysis that sheds light on how we evolved differently from our prehistoric cousins.
Tuesday, 20 April 2010
A new species of dinosaur found in Texas featured flanges on the side of its skull that may have allowed its skull bones to mesh like gears, say researchers.
Friday, 9 April 2010
Remains of a new species of early human have been found in South Africa, at the base of what was once a network of underground caves, described by scientists as a "death trap".
Friday, 26 March 2010
Australian scientists say they have discovered the first evidence that an ancestor of the mighty Tyrannosaurus rex once roamed across Australia.
Tuesday, 23 March 2010
It took a volcanic eruption and the loss of half of Earth's plant life 200 million years ago to tip the scales in favour of the dinosaurs over crocodiles, say researchers.
A super-powered neutrino generator could in theory be used to instantly destroy nuclear weapons anywhere on the planet, according to a team of Japanese scientists.
If it was ever built, a state could use the device to obliterate the nuclear arsenal of its enemy by firing a beam of neutrinos straight through the Earth. But the generator would need to be more than a hundred times more powerful than any existing particle accelerator and over 1000 kilometres wide.
"It is really quite futuristic," Alfons Weber, a neutrino scientist at Oxford University, UK, told New Scientist. "But the maths and physics seems to be right."
John Cobb, another researcher at Oxford University, cautions: "It might be technically feasible, given massive investment, but there are still unsolved problems."
A dynamo is a mechanism for converting flow energy to magnetic energy. The magnetic fields of most stars and galaxies are probably sustained by dynamos. The flow energy has an ordered component - the large scale differential rotation, and a turbulent component. Both are important.
The solar dynamo, and some stellar dynamos, are cyclic in time, with the cycle period of the solar dynamo being 22 years, and other cycles being observed on many different stars. One way to study stellar dynamos is through numerical simulations. The accompanying figure shows a "magnetic wreath" produced by one of Ben Brown's dynamo simulations. The magnetic field is stretched by rotation into a ring in a plane perpendicular to the rotation axis. Eventually we would like to understand the relationship between a star's rotation, luminosity, and the length and vigor of its magnetic cycles.
Dynamos can also be studied in the laboratory. There are two dynamo experiments at UW-Madison. The Madison Dynamo Experiment is a ball of liquid sodium that can be mechanically stirred. The Madison Plasma Dynamo Experiment will be a ball of plasma that can be used to study a host of dynamo problems, including some related to the growth of magnetic fields in the early universe.
Copyright © 2011 Elsevier Ltd All rights reserved.
Current Biology, Volume 21, Issue 21, R883-R884, 8 November 2011
Correspondence
Changing expectations about speed alters perceived motion direction
Our perceptions are fundamentally altered by our knowledge of the world. When cloud-gazing, for example, we tend spontaneously to recognize known objects in the random configurations of evaporated moisture. How our brains acquire such knowledge and how it impacts our perceptions is a matter of heated discussion. A topic of recent debate has concerned the hypothesis that our visual system ‘assumes’ that objects are static or move slowly rather than more quickly [1,2,3]. This hypothesis, or ‘prior on slow speeds’, was postulated because it could elegantly explain a number of perceptual biases observed in situations of uncertainty. Interestingly, those biases affect not only the perception of speed, but also the direction of motion. For example, the direction of a line whose endpoints are hidden (as in the ‘aperture problem’) or poorly visible (for example, at low contrast or for short presentations) is more often perceived as being perpendicular to the line than it really is — an illusion consistent with expecting that the line moves more slowly than it really does. How this ‘prior on slow speeds’ is shaped by experience and whether it remains malleable in adults is unclear. Here, we show that systematic exposure to high-speed stimuli can lead to a reversal of this direction illusion. This suggests that the shaping of the brain's prior expectations of even the most basic properties of the environment is a continuous process.
We tested two groups of six participants, across five consecutive days, on their ability to report the motion direction of a field of parallel lines oriented at 70 degrees from horizontal (Figure 1A) that either moved perpendicular (‘up’ and to the right, 20 degrees) to the lines (50% of trials) or oblique (‘down’ and to the right, –20 degrees) to the lines (in the other 50%), as we varied stimulus contrast (high = 53% or low = 8%) and duration (133, 266 or 532 ms). Each session contained a short test block (216 trials), a long ‘training’ block (720 trials) and a final test block (216 trials). The test blocks were always conducted with low stimulus speeds (4 deg/s). The training block differed across groups, with a high-speed group performing the task with stimuli moving at 8 deg/s (16 times the previously estimated prior speed) and a low-speed group at 4 deg/s. We reasoned that exposure to such stimuli might lead the observers to implicitly update their expectations towards faster speeds, leading to a decrease in the direction bias in all conditions, and possibly a reversal of the illusion for the high-speed group when tested with lower speeds (Figure 1B).
Consistent with previous findings, we found that initial perception of motion direction was accurate for both groups at high contrast (see Figure S1 in the Supplemental Information), and biased towards perpendicular judgments at low contrast (Figure 1C). The low-speed group showed a small within-session effect (p = 0.046, corresponding to the vertical displacement between dashed and solid lines in Figure 1C); however, the illusion was unaltered across sessions (p = 0.52). For the high-speed group, the initial perpendicular bias gradually diminished until the illusion reversed and the motion direction was most often perceived as being more oblique. Interestingly, this group exhibited both a fast (within-session; p = 0.0047) and a slow (across-sessions; p < 0.001) learning component. The fast component is a type of perceptual adaptation in which the perceptual system adapts to current perceptual conditions (for example) and then is reset. The slow component resembles perceptual learning, where the lack of a significant effect in the low-speed group is consistent with the need for a learning threshold to be exceeded for perceptual learning to occur.
We modeled our results using a Bayesian model, which suggests that motion perception can be described as an optimal estimation of object velocities under the assumption of local measurement noise and an a priori preference for slower speeds. This model satisfactorily fits our results by assuming that the speed prior shifts towards higher speeds with exposure (Figure 1D and Supplemental Figure S2), with a mean of 0 deg/s at the start of the first experimental session and 6.2 deg/s by the end of the last session for the high-speed group. The prior of the low-speed group started at 0 deg/s and showed little change — achieving only 0.63 deg/s by the end of the last session. As such, our results support the previously proposed Bayesian models of motion perception and provide the first experimental evidence that our knowledge of the speed of stimuli plays a causal role in direction discrimination.
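The Bayesian intuition here can be illustrated with a toy one-dimensional sketch (Python; the noise values are arbitrary illustrative choices, not the authors' fitted model): with Gaussian measurement noise and a Gaussian prior centred on slow speeds, the posterior mean is a precision-weighted average, so noisier (low-contrast) measurements are pulled harder toward the prior.

```python
def posterior_speed(v_measured, sigma_meas, v_prior=0.0, sigma_prior=1.0):
    """Posterior mean speed for a Gaussian likelihood N(v_measured, sigma_meas)
    combined with a Gaussian prior N(v_prior, sigma_prior)."""
    w_meas = 1.0 / sigma_meas ** 2      # precision of the measurement
    w_prior = 1.0 / sigma_prior ** 2    # precision of the prior
    return (w_meas * v_measured + w_prior * v_prior) / (w_meas + w_prior)

true_speed = 4.0  # deg/s, the speed used in the test blocks

# High contrast: low measurement noise, so the estimate stays near the truth.
print(posterior_speed(true_speed, sigma_meas=0.3))
# Low contrast: high measurement noise, so the estimate shrinks toward the
# slow prior, mimicking the perpendicular (slow-direction) bias.
print(posterior_speed(true_speed, sigma_meas=2.0))
# Training on fast stimuli is modelled as shifting the prior mean upward
# (6.2 deg/s is the end-of-training value reported for the high-speed group),
# which removes or even reverses the shrinkage.
print(posterior_speed(true_speed, sigma_meas=2.0, v_prior=6.2))
```

The sketch reproduces the qualitative pattern in the data: bias appears only under uncertainty, and shifting the prior flips its sign.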
Overall, our results are consistent with the hypotheses, first, that naive subjects expect to see slow speeds, and thus perceive the direction that corresponds to the slowest speed under conditions of uncertainty, and second, that subjects exposed to high-speed stimuli gradually shift their expectations towards higher speeds and thus perceive directions consistent with faster speeds more often.
Generally, our results show that expectations that are thought to result from a lifetime of sensory inputs remain plastic. Previous studies found that expectations that ‘light comes from above’ can be reset in the short term, particularly if they conflict with inputs from other modalities (such as tactile inputs). Here, we found that even basic aspects of motion processing, such as perceived direction, can by changed in a long-lasting manner. Moreover, they occur through an implicit statistical learning [7,8,9,10] procedure where no guidance was provided to subjects regarding the stimuli's motion-directions. This suggests that the brain is constantly revising even its most basic assumptions about the environment even without explicit information regarding the true properties of the stimuli in the world.
- Document S1. Two Figures, Supplemental Results, Supplemental References and Supplemental Experimental Procedures (PDF 509 kb)
- Movie S1. The stimulus used in our experiments (MOV 449 kb)
- The stimulus consists of a field of parallel lines, translating rigidly and coherently along one of two directions: either perpendicular to the lines (‘up’), or oblique (‘down’). The task of the subject is to report whether motion is ‘up’ or ‘down’. Note that the duration and contrast of the stimulus are here much higher than used in the actual experiment.
Not all that long ago we assumed habitable planets needed a star like our Sun to thrive, but that view has continued to evolve. M-class red dwarfs may account for as many as 80 percent of the stars in our galaxy, making habitable worlds potentially more numerous around them than anywhere. And let’s extend our notion of habitability to what Luca Fossati (The Open University, UK) and colleagues call a Continuous Habitable Zone (CHZ). Now things really get interesting, for a red dwarf evolves slowly, so planets could have a CHZ with surface water for billions of years.
But what about white dwarfs? Stellar evolution seems to rule out habitable worlds around them because we normally think of stars entering their red giant phase and destroying their inner planets en route to becoming a white dwarf. But can a new planetary system emerge from the wreckage? We’ve already found planets orbiting close to the exposed core of a red giant (KOI 55.01 and KOI 55.02), showing that the end of main sequence evolution isn’t necessarily the end of planetary survival. We’ve also found evidence in the metallic lines in the spectra of white dwarfs for rocky bodies close to such stars, a kind of ‘pollution’ thought to be caused by the accretion of small, rocky worlds or perhaps planetesimals (see Planetary Annihilation Around White Dwarfs for more).
Image: Almost all small and medium-size stars will end up as white dwarfs, after all the hydrogen they contain is fused into helium. Near the end of its nuclear burning stage, such a star goes through a red giant phase and then expels most of its outer material (creating a planetary nebula) until only the hot (T > 100,000 K) core remains, which then settles down to become a young white dwarf which shines from residual heat. Credit: Jonathan Saurine/Science Vault.
The conditions on planets orbiting close to a cool white dwarf might be relatively benign. What Fossati and team show is that the cooling process in these stars slows down as their effective temperature approaches 6000 K, producing a habitable zone that can endure up to eight billion years. And it turns out that white dwarfs offer advantages M-dwarfs do not, providing a stable luminosity source without the flare activity we associate with younger M-class stars. As you would expect, a cool white dwarf has a habitable zone close to the star, ten times closer than for M-dwarfs. One recent study has used this to argue that a Mars-sized planet in the white dwarf CHZ would be detectable with today’s ground-based observatories even for faint stars.
But there are other options including polarized light that may be used to detect a planet with an atmosphere around a white dwarf. Normally, starlight is unpolarized, but when light reflects off a planetary atmosphere, the interactions between the light waves and the molecules in the atmosphere cause the light to become polarized. The paper notes that the polarization due to a terrestrial planet in the CHZ of a cool white dwarf would be larger than the polarization signal of a comparable planet in the habitable zone of any other type of star except brown dwarfs. Analyzing polarization is thus a viable way to detect close-in rocky planets around white dwarfs.
Would the ultraviolet radiation put out by a white dwarf disrupt the formation of DNA molecules? Fossati and company created a computer model to study the DNA dose expected for an Earth-like planet in the white dwarf habitable zone using an Earth atmosphere model. The result:
The DNA-weighted UV dose encountered at the surface of an Earth-like planet in the white dwarf CHZ becomes comparable to that of an exoplanet in the habitable zone of a main sequence star at approximately 5000 K. Interestingly, present-day solar conditions produce an average dose on Earth a factor of only 1.65 less than that for a white dwarf with solar Teff [effective temperature]. Varying terrestrial atmospheric conditions at times produce DNA-weighted doses on Earth as high as that on a CWD planet… [T]he DNA-weighted dose for a hypothetical Earth-like planet around a CWD is remarkably benign from an astrobiological perspective, for an extremely long period of time.
So white dwarfs, the remnants of those stars not massive enough to become a neutron star, may provide us with interesting venues for life. We can even imagine a typical star going through its pre-red giant phase with a planetary system nurturing life and then, after the red giant phase is complete, beginning a post Main Sequence astrobiological phase with a planetary system in a new configuration. The notion is plausible on the strength of this paper, though the researchers point out that 10 percent of all white dwarfs host magnetic fields that could be problematic for life. This paper assumes the hosting white dwarf is a non- or only weakly magnetic star.
The paper is Fossati et al., “The habitability and detection of Earth-like planets orbiting cool white dwarfs,” accepted for publication in Astrophysical Journal Letters (abstract). Thanks to Adam Crowl for the pointer. For more on planets around red giants, see Planets Survive Red Giant Expansion.
Imagining an Antarctica with trees and ferns and without ice may be difficult, but about 35 million years ago, as the supercontinent Pangea was breaking up, the now-frozen polar continent had a much more temperate climate. What exactly caused its shift to a deep freeze has long puzzled paleoclimatologists. New data and models implicating carbon dioxide in the atmosphere are challenging the idea that tectonics were to blame.
Even though Antarctica was at the south pole around 35 million years ago, it was warm and relatively ice free; then it abruptly froze. Early data showed that as Australia moved north and the Tasmanian Gap opened, cold counterclockwise currents replaced warm ones, isolating and chilling the continent. But new fossil data show the opposite, shown here, which may have had little effect on Antarctica's growing ice sheets. Courtesy of Matthew Huber; modified by Mark Shaver.
During the shift from the warm Eocene to the cold Oligocene, permanent ice sheets finally established themselves on Antarctica, around 33.5 million years ago. About that time, the Tasmanian Gateway, a shallow sea that deepened and widened as Australia slowly drifted away from Antarctica, opened. Climate researchers hypothesized that the new notch in the ocean floor brought in cold currents that refrigerated the region.
James Kennett and his co-workers established that idea in 1975, using fossil evidence in cores from Deep Sea Drilling Project leg 29 in the Tasmanian region that supported a shift from warm to cold currents surrounding and isolating the Antarctic, generally flowing counterclockwise. Kennett, now at the University of California, Santa Barbara, and Neville Exon of Geoscience Australia extended the idea last December in an American Geophysical Union monograph, using past studies and new data from a more recent cruise, Ocean Drilling Program leg 189. But data from that same cruise also supports an opposite view, according to researchers who analyzed the regions first four nearly continuous cores, covering the Eocene-Oligocene boundary.
"The original hypothesis rested on microfossils with calcium-carbonate skeletons," says Catherine Stickley, a biostratigrapher at Cardiff University in the United Kingdom, "not the silica-based and organic ones that are abundant throughout the new cores." Focusing on dinoflagellate cysts and diatoms (organic-walled and silica-containing microalgae), Stickley and her co-workers published a detailed assessment in the December Paleoceanography that tells a very different story.
The fossils date the gateway's first deepening to 35.5 million years ago, about 2 million years before the Antarctic's permanent glaciation. "In this instance, this is a sufficiently long time gap," Stickley says, "to lessen the linkage between the deepening and the permanent Antarctic glaciers." They also found that instead of moving from warm to cold currents, a cold clockwise current was characteristic of the region. And by the time the gateway finished deepening, around 30.2 million years ago, a warm clockwise current had moved into the region from the subtropics, after the continent-wide ice was established.
As reported in a companion paper by a team led by Matthew Huber of Purdue University, a co-author on Stickley and colleagues' paper, models show that early warm currents in the region did not keep the continent warm, as they were flowing in the wrong direction. The deepening of the gateway and the timing of glaciation were coincidental, they hypothesize. Instead, they say, a serendipitous combination of the planet's orbital cycles, in which seasons remained cool enough to maintain ice at the pole year-round, and changes in greenhouse gases in the atmosphere seem to have collaborated to cool the region and promote the growth of the Antarctic ice sheet.
The idea that atmospheric carbon could be the culprit was first put forward several years ago, most notably in a model by Rob DeConto of the University of Massachusetts, Amherst, and Dave Pollard of Pennsylvania State University, in 2003. Although Huber's team's new model focuses on oceans, and DeConto and Pollard's on ice, "we're converging on the same conclusion," DeConto says. The opening of the southern ocean gateways seems not to have had as big an effect on Antarctica. Although the deepening could have been a trigger, glaciation probably would have happened anyway, he says.
Kennett says that the microfossil data do not agree with Huber and colleagues' model. Kennett notes that some of the open-ocean microfossils and closer-to-shore assemblages indicate different water temperatures. The opening and closing of gateways has an effect, he says.
"People often believe that, intuitively, if you have an extreme and rapid change, there must be a rapid forcing," says Paul Wilson of the Southampton Oceanography Centre in the United Kingdom, "but more gradual changes can also bring systems to the brink." Wilson, who favors the carbon-loaded atmosphere as the agent of change at a time when Earth's orbit was just right, says that determining the cause of the switching point in ancient Antarctica, from small or nonexistent ice sheets to a continent-wide glacier, may have important implications for carbon-loading in today's Antarctic atmosphere.
Lasers come in many shapes and sizes, and perform a myriad of functions ranging from surgery to video recording. In this document, we'll explain how laser light is produced, why it's so useful, and some of its most common applications.
In 1917, Albert Einstein introduced the field of physics to the concept of the laser, which stands for Light Amplification by the Stimulated Emission of Radiation.
What's So Special About Laser Light?
Laser light has several properties that, together, make lasers useful.
Laser Light is Highly Monochromatic
Light from the sun, or a light bulb, is generally seen as "white", and contains many wavelengths of light (seen as different colors when white light is put through a prism). Laser light, on the other hand, is generally monochromatic, meaning that it contains one specific wavelength of light. This wavelength of light can be seen as one single, intense color (red, blue, green, or yellow, etc., depending on the laser) or invisible (ultraviolet or infrared). Lasers can, and do, produce more than one color, but these colors are discrete individual wavelengths of light, as opposed to the broad spectrum of sunlight or fluorescent light.
Laser Light is Highly Coherent
Laser light wavelengths can be thought of as "organized". The photons of laser light all "move in step" with one another. Light from a light bulb, for instance, has wavelengths that are not nearly as organized, with most photons' waves traveling chaotically and interfering with one another. It's the coherent, organized property of laser light that makes it capable of delivering a high amount of energy in a small beam. In the case of visible lasers, this makes the laser beam very bright and intense.
Laser Light is Highly Directional
Because of the way laser light is produced (described below), beams of laser light are very small, tight, and bright. Photons in a laser beam are traveling almost exactly parallel to each other. For instance, if a flashlight and a laser beam were shone on a building across the street from your home, the flashlight beam would appear several feet wide, while the laser beam would be only inches across.
Laser Light Can Be Sharply Focused
Due to the laser light's parallelism, it can be focused very efficiently compared with other types of light. Focused laser beams can deliver very high amounts of energy over a very tiny space.
How does it work?
There are many ways to produce laser light. There are lasers that operate with gas, crystals, and diodes; lasers can be as small as a pinhead, or large enough to fill an entire room. However, they all operate on the same general principle: Light Amplification (generating more light) by the Stimulated Emission of Radiation (by stimulating atoms with radiation -- that is, light). We'll explain the operation of one common type of laser, the gas ion laser, which is used in science, in industry, and in Laser Fantasy laser shows.
Gas Ion Lasers
Gas ion lasers use a tube filled with a gas. Often, this gas is a noble (or inert) gas (such as Neon, Argon, or Krypton, or a combination of noble gases).
A high-voltage electric current is applied to this tube and travels down its length. This discharge creates collisions between the electrons from the electricity and the atoms of gas in the laser tube.
The interaction between the electrons and the atoms of gas affects the gas atoms; the gas atoms become ionized, and some gas ions that interact with more electrons are excited to a higher energy state.
The atoms quickly return back to a lower state of energy, but in going from the excited state to a lower energy state, a photon of light is generated. This is the general principle behind a neon light. Lasers, however, go one step further.
The photon that is released can then interact with other atoms of gas. If the atom happens to be excited, a second photon is generated when the atom returns to its "ground" state of energy. This second photon is in every way identical to the original photon in direction, polarization and energy.
A "chain reaction" takes place, where photons continually collide with gas atoms, generating more photons, and therefore more light. The direction of this light is random, with some photons going up, down, and just about every way possible. To produce a single concentrated beam, a mirror is placed at each end of the laser tube. Photons that happen to travel in the direction along the mirrors will reflect back down to the other mirror, and so on. During these reflections, the photons interact with more atoms in the process described above, creating more photons traveling between the mirrors.
Soon, many atoms along the mirror axis are emitting light through this stimulated emission of photons, while the electricity keeps the gas atoms primed and ready to emit. The space between the mirrors, with light bouncing back and forth at the speed of light, forms an optical resonant cavity that, like an organ pipe for sound waves, can be tuned to resonate at one or more wavelengths (or colors) of light.
The mirror at one end of the laser tube is only partially reflective, letting a tiny part of the laser light out. This tiny amount of light is the laser beam that can be used for scanning a bag of groceries, reading and writing audio and video to or from a CD, performing delicate surgery, and much more.
Lasers themselves don't magically perform surgery, read compact discs, or weave laser light concerts. Lasers only produce a unique source of light and energy for these various applications. A variety of optics, mechanical motors, electronics and optical detectors, and good engineering (people!) come together to produce all of these amazing feats with this unique light. Lasers are the heart of these applications, making them possible. Here are just a few of the many users of lasers:
The small, intense bright beam of a laser can be focused with lenses and other optics to provide a point of energy intense enough to burn through living tissue. Because "laser scalpels" are so small, they can very delicately reach difficult places. The burning action of laser surgery also instantly clots the incision, reducing bleeding dramatically. Reattaching detached retinas and using fiber optics to burn away ulcers in the stomach are just a couple of the medical uses of lasers. Lasers used in surgery include Nd:YAG crystal lasers (Neodymium and yttrium aluminum garnet), argon gas ion lasers, and excimer lasers.
Laser Welding, Cutting & Blasting
Once again, the laser's intense energy when focused makes it ideal for providing concentrated welding and cutting. Laser cutting and welding can be extremely precise. Clothing manufacturers can use lasers to cut precise fabric patterns. Laser welding can allow the easy welding of two different kinds of metals and alloys, making the resulting product significantly stronger than other techniques. Many car manufacturers use laser welding performed by industrial robots to assemble vehicles.
The intense color of laser light has opened up a whole new world for laser artists to weave a new kind of art. Laser shows are usually performed in planetarium domes, and set to music ranging from new age to rock and roll. Laser shows generally use gas ion lasers, including Argon, Krypton-Argon, and Helium-Neon lasers. Sets of high-speed vibrating mirrors called scanners move the laser beams in different patterns. Abstract imagery or full-motion animation can be displayed in laser shows. Colors can also be changed by using multi-wavelength lasers (such as Argons or Krypton-Argons) and sending the laser through crystals which vibrate with sound waves (AOMs, or Acousto-Optic Modulators), providing full-color imagery.
Laser-powered fusion holds hope of generating tremendous amounts of electricity through the use of lasers. Highly focused, powerful lasers "zap" tiny fuel pellets from all sides, triggering thermonuclear fusion. In experiments at the Lawrence Livermore National Laboratory, laser pulses deliver close to 200 kJ (kilojoules) of energy to each pellet in less than a nanosecond. This single pulse delivers approximately 2 X 10^14 W - about 100 times the world's installed electric power! The feasibility of a working reactor is still the subject of ongoing research.
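The power figure above follows from simple division of energy by time. A quick back-of-the-envelope check (sketched in PHP to match the code conventions used elsewhere on this page; the numbers are the article's, the code is ours):

```php
<?php
// Back-of-the-envelope check: 200 kJ delivered in about one nanosecond.
$energy_joules = 200e3; // 200 kilojoules per pulse
$time_seconds  = 1e-9;  // one nanosecond

// Power is energy delivered per unit time
$power_watts = $energy_joules / $time_seconds;

echo $power_watts; // about 2 x 10^14 W, as the article states
?>
```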
Lasers are at the heart of some of the fastest methods of information transfer yet devised. Using fiber optic bundles to carry them, modulated laser beams can transfer huge amounts of information. The internet is just one information technology taking advantage of laser fiber optics. In fact, the words you are reading now were most likely transferred most of the way to your computer via lasers in this manner. Lasers in compact disc players and video discs players read tiny reflections on CDs and laserdiscs to play back audio and video. Soon, your home may be fitted with fiber optics to carry cable TV and phone service.
The August 2010 issue of NANO Magazine, highlighting nanoscale research expected to have a positive impact on the developing world, included articles focused on energy generation, disease prevention and water purification. The articles reflect a now-familiar pattern: a presentation of the horrific scope of the current problem (e.g., unclean water responsible for 6,000 deaths every day) followed by a report on promising nanotech research that would seem to address the problem (e.g., electrostatically charged nanoscale particles that remove contaminants from water). Readers are expected to connect the dots along the way to the logical and inevitable conclusion: Who would say ‘no’ to nano?
Indeed, the 19 member countries of the Common Market for Eastern and Southern Africa (COMESA) closed their recent summit, ‘Harnessing science and technology for development’, by urging the promotion and utilisation of nanotechnology and science, ‘given its application in various key areas such as medical treatment’. That wasn’t the first time experts have committed to pursue nanotechnology as a way to solve the global South’s most pressing problems of course. Back in 2005, the UN Millennium Project’s Task Force on Science, Technology and Innovation had already identified nanotechnology as an important tool for addressing poverty and achieving the Millennium Development Goals.
Early in 2010, however, participants at a regional awareness-raising workshop on issues related to nanotechnologies in Côte d’Ivoire were insisting that countries have the right to accept or reject the import and use of manufactured nanomaterials to minimise their risks. They also urged that attention be paid to the critical role of precaution and to nanotechnology’s ethical and social risks, in addition to benefits, especially in developing countries and countries with economies in transition. Here was a group of experts in Africa questioning the received wisdom of nanotechnology’s central role in solving the problems of the developing world, even going so far as to suggest that in some cases it may make sense to ‘say no to nano’.
WHAT IS NANOTECHNOLOGY AND WHAT ARE ITS RISKS?
Nanotechnology is a suite of techniques used to manipulate matter on the scale of atoms and molecules. Nanotechnology speaks solely to scale: Nano refers to a measurement, not an object. A nanometre (nm) equals one-billionth of a metre. Ten atoms of hydrogen lined up side-by-side equal one nanometre. A DNA molecule is about 2.5nm wide (which makes DNA a nanoscale material). A red blood cell is enormous in comparison: about 5,000nm in diameter. Everything on the nanoscale is invisible to the unaided eye and even to all but the most powerful microscopes. Only in the last quarter of a century has it been possible to intentionally modify matter at the nanoscale.
Key to understanding the potential of nanotech is that, at the nanoscale, a material’s properties can change dramatically; the changes are called ‘quantum effects’. With only a reduction in size (to around 300nm or smaller in at least one dimension) and no change in substance, materials can exhibit new characteristics – such as electrical conductivity, increased bioavailability, elasticity, greater strength or reactivity – properties that the very same substances may not exhibit at larger scales. For example, carbon in the form of graphite (like pencil ‘lead’) is soft and malleable; at the nanoscale carbon can be stronger than steel and is six times lighter; nanoscale copper is elastic at room temperature, able to stretch to 50 times its original length without breaking.
Researchers are exploiting nanoscale property changes to create new materials and modify existing ones. Governments around the world have already invested more than US$50 billion on nano-science and nanotechnology research. One market analyst firm expects the private sector to invest a staggering US$41 billion just this year. Companies now manufacture engineered nanoparticles that are used in thousands of commercial products, including textiles, paints, cosmetics and even foods.
Because nanoscale manipulations are now possible and, because the basic components of both living and non-living matter exist on the nanoscale (e.g., atoms, molecules and DNA), it is now possible to converge technologies to an unprecedented degree. Technological convergence, enabled by nanotechnology and its tools, can involve biology, biotechnology and synthetic biology, physics, material sciences, chemistry, cognitive sciences, informatics, geoengineering, electronics and robotics, among others. At the nanoscale there is no qualitative difference between living and non-living matter. (ETC Group uses the term BANG to describe technological convergence: bits, atoms, neurons and genes – the stuff that can come together when various technologies converge.)
The most direct impact of new designer materials created using nanotechnology is multiple raw-material options for industrial manufacturers, which could mean major disruptions to traditional commodity markets. It is too early to predict with certainty which commodities or workers will be affected and how quickly. However, if a new nano-engineered material outperforms a conventional material and can be produced at a comparable cost, it is likely to replace the conventional commodity. History shows that there will be a push to replace commodities such as cotton and strategic minerals – both heavily sourced in Africa and critical export earners – with cheaper raw materials that can be sourced or manufactured by new processes closer to home. Worker-displacement brought on by commodity-obsolescence will hurt the poorest and most vulnerable, particularly those workers who don’t have the economic flexibility to respond to sudden demands for new skills or different raw materials.
(In the face of perennially low and volatile prices for primary export commodities, and the persistent poverty experienced by many workers who produce commodities, few would argue in favour of preserving the status quo. Preservation of the status quo is not the issue. The immediate and most pressing issue is that nanotechnologies are likely to bring huge socio-economic disruptions for which society is not prepared.)
The beneficiaries of sudden shifts in market demand will be those in a position to see the changes coming, while the ‘losers’ will be the producers of primary commodities who are unaware of the imminent changes and/or those who could not make rapid adjustments in the face of new demands.
South Africa has had its eye on nanotech for the better part of the last decade for this very reason, paying particular attention to the impact new nanomaterials could have on minerals markets. In 2005, the country’s then-Minister of Science and Technology Mosibudi Mangena warned, ‘With the increased investment in nanotechnology research and innovation, most traditional materials … will … be replaced by cheaper, functionally rich and stronger [materials]. It is important to assure that our natural resources do not become redundant, especially because our economy is still very much dependent on them.’ The government launched its National Nanotechnology Strategy the same year, funding research & development (R&D) through the Department of Science and Technology, whose overall budget for 2009–10 neared US$600 million. South Africa is also a player in a cooperative nanotech R&D programme under the India-Brazil-South Africa Dialogue Forum (IBSA). Nanotech is one area of science collaboration, led by India, funded by a US$3 million trilateral research pool.
Despite supposedly self-evident claims to its ability to solve social and health problems in Africa, developments in nanotechnology should be met with serious critical reflection, writes Kathy Jo Wetter. In a discussion of what nanotechnology is and the risks associated with it, Wetter underlines that the technology offers new opportunities of monopoly control ‘over both animate and inanimate matter’, while government regulations worldwide remain completely inadequate to address its unique risks.
HEALTH AND ENVIRONMENTAL IMPACTS
The qualities that make nanomaterials so attractive to industry across a wide range of fields, particularly pharmaceuticals – their mobility and small size, on the same scale as biological processes, and their unusual properties – turn out to be the same qualities that may make them harmful to the environment and to human health. Human cells are generally larger than nanoscale – on the order of 10-20 microns in diameter (10,000-20,000 nm) – which means that nanoscale materials and devices can easily enter most cells, often without triggering any kind of immune response. While there is great uncertainty about the toxicity of nanoparticles, hundreds of published studies now exist that show manufactured nanoparticles, currently in widespread commercial use (including zinc, zinc oxide, silver and titanium dioxide) can be toxic. Some nanoparticles can cross the placenta, posing significant risks to developing embryos. Workers who experience routine occupational exposure to nanoparticles will likely be most at risk.
Back in 2002, ETC Group called for a moratorium on the commercialisation of new nano products until they could be shown to be safe, to protect workers as well as consumers. In 2007, a broad coalition of civil society, public interest, environmental and labour organisations from across the globe worked out a set of Principles for the Oversight of Nanotechnologies and Nanomaterials grounded in the Precautionary Principle. With the exception of the occasional reporting requirement, no government regulations yet exist that address the unique risks posed by nanoscale materials, and the commercialisation of nanotech products continues unhindered.
While no one knows how many workers are exposed to manufactured nanomaterials currently, the number of workers involved in nanotech is predicted to reach as high as 10 million worldwide within five years. Given the uncertainties regarding exposure and health effects, the international trade union IUF (Uniting Food, Farm and Hotel Workers World-Wide) has called for a moratorium on commercial uses of nanotechnology in food and agriculture. The Côte d’Ivoire conference participants made the sane recommendation that workers be involved in developing occupational health and safety programmes and measures in relation to manufactured nanomaterials, and countries were encouraged to set up and enforce legal provisions to ensure safe practices with regard to production, use, transport and disposal of nanoparticles and nanomaterials.
WHO’S IN CONTROL?
Nanotechnology provides new opportunities for sweeping monopoly control over both animate and inanimate matter. In essence, patenting at the nanoscale could mean monopolising the basic building blocks that make life possible. Whereas biotechnology patents make claims on biological products and processes, nanotechnology patents may literally stake claim to chemical elements, as well as the compounds and the devices that incorporate them. With nanoscale technologies, the issue is not just patents on life – but on all of nature – opening up new avenues for biopiracy (see Oduor Ong’wen’s contribution in this special issue). Control and ownership of nanotechnology is a vital issue for all governments because a single nanoscale innovation can be relevant for widely divergent applications across many industry sectors.
Many who envision nanotech bringing benefits to Africa ignore the realities of technology transfer and intellectual property. Intellectual property is being driven by the North and promotes the interests of dominant economic groups, both North and South. A 2006 study reported that Africa accounts for just 0.4 per cent of all patents granted throughout the world, while the United States and Europe together account for 81.8 per cent.
More than 12,000 patents in the field of nanotechnology have been awarded, granted over three decades (1976–2006) by the three offices responsible for most of the world’s nanotech patenting – the US Patent & Trademark Office (USPTO), the European Patent Office and the Japan Patent Office. As of March 2010, close to 6,000 nanotech patents had been granted by the USPTO and a further 5,184 applications were waiting in the queue. Multinational corporations, universities and nanotech start-ups (primarily in the OECD countries) have secured ‘foundational patents’ on nanotech tools, materials and processes – that is, seminal inventions upon which later innovations are built – and nanotech ‘patent thickets’ are already causing concern in the US and Europe.
Meanwhile, African governments are under pressure to enact tougher intellectual property laws that recognise the rights of patent owners. In June, the US government, reportedly spending millions of dollars campaigning for an Anti-Counterfeits Trade Agreement (ACTA), hosted a three-day regional workshop in Kampala, where the East African Community was encouraged to take the lead – in the interest of public safety! – in developing enforcement procedures and regional standards.
Researchers in the global South are likely to find that participation in the proprietary ‘nanotech revolution’ is highly restricted by patent tollbooths, obliging them to pay royalties and licensing fees to gain access – which is not to suggest that nanotech, unencumbered by patents, will provide solutions for the South’s most pressing needs. On the contrary, a technological fix can never bring about equity.
Ultimately, however, nanotech will profoundly affect Africa’s economy, regardless of its level of direct participation or its handling of intellectual property. It is crucial that commodity-dependent developing countries in Africa gain a fuller understanding of the direction and impacts of nanotechnology-induced technological transformations, and participate in determining how converging technologies could affect their futures. Innovative approaches are needed to monitor and assess the introduction of new technologies. Early-warning and early-listening strategies must be developed to keep pace with technological change. The recommendations put forward by the participants in the regional workshop in Côte d’Ivoire are a strong start. ETC Group has called for the creation of a broadly inclusive International Convention for the Evaluation of New Technologies (ICENT) at the United Nations.
BROUGHT TO YOU BY PAMBAZUKA NEWS
* Kathy Jo Wetter has worked in ETC Group’s Carrboro, NC office as a researcher and as the Assistant to the Research Director.
* Please send comments to email@example.com or comment online at Pambazuka News.
NOTES Anon. ‘Africa: Nineteen countries pledge to promote science,’ University World News, Issue 139, 12 September 2010, http://www.universityworldnews.com/article.php?story=20100911201707964, with link to Summit Communiqué.
Resolution on nanotechnologies and manufactured nanomaterials by participants in the African regional meeting on implementation of the Strategic Approach to International Chemicals Management, Abidjan, Côte D’Ivoire, 25 – 29 January 2010. The event was one in a series of regional awareness raising workshops, and was organized by the UN Institute for Training and Research (UNITAR) and the OECD.
Opening Address by Mr. Mosibudi Mangena, Minister of Science and Technology at a Project Autek Progress Report Function, Cape Town International Convention Centre, 8 February 2005.
The Principles are available online at http://www.nanoaction.org/nanoaction/page.cfm?id=223
Sikoyo, G., Nyukuri, E., Wakhungu, J. (2006) Intellectual Property Protection in Africa: Status of Laws, Research and Policy Analysis in Ghana, Kenya, Nigeria, South Africa and Uganda. Ecopolicy Series. ACTS Press
Hsinchun, C. et al., ‘Trends in nanotechnology patents,’ Nature Nanotechnology, Vol. 3, March 2008, pp. 123-125.
Wambi Michael, ‘U.S. Intensifies Anti-Counterfeit Drive in East Africa,’ Inter Press Service, 19 July 2010: http://ipsnews.net/news.asp?idnews=52228
This article covers basic PHP syntax, including variable usage, variable types, and several ways of printing variables to the web browser.
Embedded code blocks
PHP is an embedded web development language with many similarities to commercial packages such as Microsoft's Active Server Pages (ASP) or Cold Fusion. One of the similarities between PHP and these packages (especially ASP) is the ability to jump between PHP and HTML code quickly and easily. The basic syntax to jump in and out of PHP follows:
<html>
<head>
<title>My first PHP page</title>
</head>
<body>
This is normal HTML code
<?php
// PHP code goes here
?>
Back into normal HTML
</body>
</html>
In this example, we see that PHP code is signified by the use of the
<?php to begin the PHP block and
?> to signify the end of the code block. Although this is a completely acceptable method of encapsulating your PHP code, there are many other ways that are all, as far as syntax is concerned, correct. Here are some other ways to mark PHP code blocks.
|Valid syntax to indicate a PHP code block|
|<?php ... ?> |Standard PHP syntax|
|<% ... %> |ASP-style PHP syntax|
|<script language="PHP"> ... </script> |Standard script syntax|
So, for example, let's say that you are a web developer who uses a third-party software package to do the actual layout of your web pages. Under normal conditions, PHP's standard syntax would cause unpredictable results in your layout software. To remedy this, you could use the standard HTML script syntax instead of the PHP standard:
<html>
<head>
<title>My first PHP page</title>
</head>
<body>
This is normal HTML code
<script language="PHP">
// PHP code goes here
// This code block syntax won't break
// graphical web layout software
</script>
Back into normal HTML
</body>
</html>
Before we discuss variables, we should begin with some general PHP syntax rules. First, all single-line statements must conclude with a semicolon. In addition, statements that exceed a single line (such as most conditionals) must be surrounded by { and } characters. Finally, the double forward slash (//) represents a comment; everything past those characters until the end of the line will be ignored by PHP. Now, on to the variables in PHP!
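All three rules can be seen together in one short fragment (a sketch of our own, not from the original article; the variable name is arbitrary):

```php
<?php
// Single-line statements end with a semicolon
$count = 3;

// Statements that exceed a single line, such as this conditional,
// are surrounded by { and } characters
if ($count > 2) {
    echo "count is greater than two"; // text after // is ignored by PHP
}
?>
```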
Variables in PHP
PHP denotes all of its variables by using the $ sign followed by any combination of characters, as long as these rules are followed:
- The variable name starts with a letter or an underscore (_)
- It is followed by any combination of letters, numbers, and underscores
Note: A letter is defined as the lowercase and uppercase characters "a" through "z" as well as any character with an ASCII value between 127 and 255
To define a variable, simply assign it a value; PHP creates the variable the first time it is assigned. (Note that the var keyword is only valid for declaring properties inside a class definition, not for standalone variables.) Example:

<?php
$myvar = "foo";
$my_second_var = null; // The variable contains nothing
?>
In this example, the variable $myvar is assigned the string value "foo", while the second variable we defined, $my_second_var, is empty and contains nothing. For comparison, here are some examples of invalid PHP variables:
<?php
// Incorrect: Must have a '$' sign in front
myvar = "bar";

// Incorrect: Variable name starts with a number
$2myvar = "cat";

// Incorrect: Variable name has invalid characters
$my(third)var = "dog";

// Correct syntax
$_myvarnumber4 = "mouse";
?>
Now that we have learned the proper syntax for our variables, we can move on to variable types and functions.
Types of variables
PHP is what is known as a "loosely typed" language. What that means is any given variable can be an integer, floating-point number, string, object, or an array. In this article, we will only be discussing the first three types.
Type 1: The integer
An integer is the basic mathematical datatype and represents any whole number and usually can be any value from minus 2 billion to 2 billion. When assigning an integer value, three different types of notation can be used: decimal (regular base 10), hexadecimal (base 16), or octal (base 8). Usually, only the normal decimal notation is used but there are special cases when a hexadecimal or octal notation makes life easier for the developer.
<?php
// All of the following are numerically equivalent
$myint = 83;   // Normal decimal notation
$myint = 0123; // Octal notation for the # 83
$myint = 0x53; // Hexadecimal notation for # 83
?>
Type 2: The floating-point number
Floating-point numbers are the second mathematical datatype PHP provides. A floating-point number represents any value that contains a decimal point. Floating-point numbers are somewhat unreliable in the sense that the value stored in them is not always the exact value the developer expects, but for now we will ignore that. Instead, we will focus on the notation used to assign a variable a floating-point value. Floating-point numbers can be expressed in two different types of notation: decimal and scientific.
<?php
// Both of the following are equivalent to 1.234
$myfloat = 1.234;      // Standard decimal notation
$myfloat = .001234e3;  // Scientific notation
?>
Type 3: The string
A string is the datatype we first used in our original examples to assign values to variables. A string can be any combination of letters, numbers, or special symbols, as long as consideration is given to characters that have special functions in PHP. Before we consider special cases, let's first discuss the difference between the two string notations: the single and the double quote. In every case where you want to assign a string value to a variable, the value itself must begin and end with a matched pair of either single (') or double (") quotes.
<?php
// This string begins and ends with single quotes
$mystring = 'single quoted string';
// This string begins and ends with double quotes
$mystring = "double quoted string";
?>
In this example, both variables would simply be assigned a value within the single or double quotes. However, when double quotes are used, PHP will first look inside the string for any references to variables that may exist. If any references are found, they are replaced with values before being assigned to the designated variable. Conversely, when dealing with single-quoted strings, PHP simply takes the string as-is and assigns it to the designated variable.
<?php
// Assign the variable $myint the value 10
$myint = 10;
$string_one = 'The value of myint is $myint';
$string_two = "The value of myint is $myint";
?>
Consider the above example. In the first line, we simply assign the integer value 10 to the variable $myint. Then, we assign two more variables, $string_one and $string_two. These are identical except that $string_one is stored using single quotes and $string_two is stored using double quotes. In this example, the values within the two strings are as follows:
- $string_one = The value of myint is $myint
- $string_two = The value of myint is 10
Notice that, when the value of $string_two is displayed, the variable $myint was replaced with the value "10". In the single-quoted string, however, the literal text $myint was stored.
Next, we will discuss "special" characters. Here's a common scenario: you are developing a web site and find yourself needing to store the double-quote character itself (") within a string. You can't simply place the double quote within a set of double quotes, because it will cause an error in PHP. To overcome this dilemma, a method called an "escape" is used to allow developers to store this and other special characters in strings. To escape a character, simply prefix it with the backslash character (\). So, to store the double-quote character in a string, we tell PHP to store \" instead. Here is an example:
<?php
// The following will cause an error in PHP
$mybadstring = "Do you know what an "escape" is?";
// The same string properly escaped
$mystring = "Do you know what an \"escape\" is?";
?>
The first attempt is incorrect due to the use of "un-escaped" double quotes within the string itself. The proper syntax for storing this string is illustrated in the second example where the quotes surrounding the word "escape" are properly coded. Below is a table of other special characters that require a backslash.
Note: Although attempting to store an un-escaped double-quote within a string will cause an error, storing a single-quote within a double-quoted string will not throw an error and is completely acceptable.
|Valid back-slashed characters|
|\n|linefeed (LF or 0x0A in ASCII)|
|\r|carriage return (CR or 0x0D in ASCII)|
|\t|horizontal tab (HT or 0x09 in ASCII)|
|\[0-7]{1,3}|the sequence of characters matching the regular expression is a character in octal notation|
|\x[0-9A-Fa-f]{1,2}|the sequence of characters matching the regular expression is a character in hexadecimal notation|
John Coggeshall is a PHP consultant and author who started losing sleep over PHP around five years ago.
Read more PHP Foundations columns.
Discuss this article in the O'Reilly Network PHP Forum.
Return to the PHP DevCenter.
Methanogens, anaerobic bacteria that generate methane from hydrogen and carbon dioxide, make up the largest group of archaebacteria identified so far. Four genera of methanogens that differ widely in size and morphology are seen here in scanning electron micrographs made by Alexander J. B. Zehnder of the Swiss Federal Institute of Technology. Shown here is Methanosarcina. The cells are shown enlarged 2,500 diameters. The methanogens are found only in oxygen-free environments. Image: Scientific American
Editor's Note: Microbiologist Carl R. Woese, a recipient of the Crafoord Prize, Leeuwenhoek Medal, and a National Medal of Science, died December 30, 2012, at the age of 84. This story was originally published in the June 1981 issue of Scientific American.
Early natural philosophers held that life on the earth is fundamentally dichotomous: all living things are either animals or plants. When microorganisms were discovered, they were divided in the same way. The large and motile ones were considered to be animals and the ones that appeared not to move, including the bacteria, were considered to be plants. As understanding of the microscopic world advanced it became apparent that a simple twofold classification would not suffice, and so additional categories were introduced: fungi, protozoa and bacteria. Ultimately, however, a new simplification took hold. It seemed that life might be dichotomous after all, but at a deeper level, namely in the structure of the living cell. All cells appeared to belong to one or the other of two groups: the eukaryotes, which are cells with a well-formed nucleus, and the prokaryotes, which do not have such a nucleus. Multicellular plants and animals are eukaryotic and so are many unicellular organisms. The only prokaryotes are the bacteria (including the cyanobacteria, which were formerly called blue-green algae).
Have you ever wondered why leaves change from green to an amazing array of yellow, orange and red during the fall? Leaves get their brilliant colors from pigments made up of various size, color-creating molecules.
During the warm, sunny months, plants use their leaves to turn sunlight into food energy, a process called photosynthesis. This primarily uses a pigment that reflects green light, which gives the leaves their characteristic color.
In autumn, when colder, shorter days arrive, many kinds of trees no longer make food energy with their leaves and, consequently, no longer need the green pigment. The leaves' other pigments, some of which were already there during summer, become visible. Uncover these hidden colors of fall by separating plant pigments with a process called paper chromatography. What colors will you see?
There are many types of pigments in plant leaves. Chlorophyll makes them green and helps carry out photosynthesis during warm, sunny months. As fall arrives and the green, food-making color fades, other pigments such as yellow, orange and red ones become more visible.
Xanthophylls are yellow pigments, and carotenoids give leaves an orange color. Photosynthesis also uses these pigments during the summer, but chlorophyll, a stronger pigment, overpowers them. These pigments take more time to break down than chlorophyll does, so you see them become visible in fall leaves. They're also found in carrots, daffodils, bananas and other plants that have these vibrant colors. There are also anthocyanins, intense red pigments that aren't made during the summer, only appearing with the final group of the fall colors. These molecules also give the red hue to apples, cranberries, strawberries and more.
Although a leaf is a mixture of these pigments, you can separate the colors using a method called paper chromatography. This process dissolves the pigments and allows them to be absorbed by a strip of paper. Larger molecules have a harder time moving in the woven paper and get trapped in the paper first, whereas smaller ones travel farther along the paper. This process separates the mixture of pigments by molecular size—and by color.
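Although the activity above doesn't require it, chromatography results are often recorded as retention factors (Rf): the distance a pigment traveled divided by the distance the solvent front traveled, so smaller, faster-moving molecules get higher Rf values. A short sketch with made-up measurements:

```python
def retention_factor(pigment_distance_cm, solvent_front_distance_cm):
    """Rf = distance traveled by the pigment / distance traveled by the solvent front."""
    return pigment_distance_cm / solvent_front_distance_cm

# Hypothetical measurements from one strip, in cm above the pencil line
solvent_front = 8.0
pigments = {
    "carotenoid (orange)": 7.2,   # small molecule, travels far
    "xanthophyll (yellow)": 5.6,
    "chlorophyll (green)": 3.2,   # larger molecule, trapped sooner
}

for name, dist in pigments.items():
    print(f"{name}: Rf = {retention_factor(dist, solvent_front):.2f}")
```

Comparing Rf values lets you tell whether a band on two different strips is likely the same pigment.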
• Leaves at different stages of turning colors (the more the better—about 10 of each color is best)
• Strong, sturdy drinking glasses (three to four)
• Rubbing alcohol (isopropyl alcohol)
• Wooden spoon or another wooden utensil with a blunt end for crushing leaves
• Very small bowls or tea-light candleholders (three to four)
• Strong, white, heavyweight, ultra-absorbent paper towels
• Plate (or other surface to protect working area from stains)
• Tall glass jars, such as mason jars (three to four)
• Clothespins or large paper clips (nine to twelve)
• Collect some leaves that are at different stages of color change during the fall, preferably from the same tree.
• Separate your leaves into distinct groups arranged by color, with about 10 large leaves per group. Separating them into green, yellow and red piles may be easiest.
• Prepare paper towel strips, making three to four strips for each group of leaves. Cut up a strong, thick paper towel into long, one-inch-wide strips. They should be long enough to touch the bottom of the tall glass jars or mason jars and still extend over the top. With a pencil, gently draw a line one inch from the bottom of each strip.
• Cut the leaves into small pieces with scissors. Put each group of leaves into the bottom of a drinking glass.
• Add one tablespoon of rubbing alcohol to each glass.
• Crush the leaves into the rubbing alcohol using the blunt end of a wooden spoon for about five minutes, until the solution is dark. How has the color of the alcohol changed?
• Let the solution sit for 30 minutes in a dark place indoors.
• Use a fork to remove any leaf pieces from the solutions and discard these, while leaving the liquid in the glass.
• Pour each solution into a very small bowl, and leave it in a dark place indoors to allow more of the alcohol to evaporate. You will be ready for the next step when you stir your solutions with a toothpick and they seem thicker.
• Thoroughly stir each colored solution with a toothpick, using a different toothpick for each solution so as not to mix the colors.
• Using a toothpick for each color, smoothly and evenly "paint" some of each solution across a paper towel strip on the pencil line you drew. Because some plant pigments can stain, you should do this on a plate so that the color will not stain your work surface. For each color, do this using a total of three to four strips.
• Allow the strips to dry.
• While the strips are drying, pour enough rubbing alcohol into each glass jar to just cover the bottom. Prepare one jar for each color solution.
• With the dry strips, carefully put the pigmented end into the jar until the strip just touches the alcohol. Drape the top of the strip over the jar's opening and secure it with a clothespin. Make sure that each strip is not touching the jar's sides, but only contacts the jar where it is secured. Place and secure strips from the same solution into the same jar, but keep them from touching each other.
• Let the glasses sit for 30 minutes and watch the paper strips. What is happening to the color of the paper strips?
• When one of the colors reaches the top of a strip, remove all strips and let them dry.
• Look at the different dried strips. How are the colors in the strips different? Do strips from different color solutions have unique colors, shared colors or both?
• Look at the order in which the colors appear on the different strips. Is the same color on the same place in different strips or is it in a different place? Do the colors appear in the same separation order or in different orders on each strip?
• Tip: If your chromatography comes out pale, try using more leaves, cutting them up into smaller pieces and/or "painting" more of your solution onto the pencil line on the paper towel.
• Extra: You can use this same procedure to compare the color molecules in many different plant sources. For example, you could try red cabbage, blueberries, carrots, beets, spinach, flowers or other intensely colored plants. How do their mixtures of color molecules compare?
• Extra: If you find a tree with a wide range of colors, you can repeat this procedure using leaves at more intermediate stages of change. An especially good source of a wide variety of leaf colors is aspen trees.
Consider the case where you retrieve a list of many objects which have a status code attached. There may be many different status codes, and you need to fish through the list of codes and display the object differently, based on the priority. So for instance, an object may have many tasks attached, each of which has an associated priority. You need to display the object in a certain manner depending on the outstanding task with the highest priority.
A combination of enums and PriorityQueues make this easy. Firstly, we can create an enum to represent the status codes for an outstanding item:
Next, we generate a random list of Status instances and add them to a PriorityQueue instance:
There are a couple of features that make this interesting:
- Java’s PriorityQueue implementation sorts by the results of compareTo(). Hence, your candidate objects must implement Comparable, and you would normally need to write boilerplate comparison code. Except…
- …When you use an enum. Enums implement Comparable directly, and their comparison order is based on the order of declaration of the enumeration instances. Thus, you just need to declare your enumeration in order of priority (highest first), and let the queue handle the rest.
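A minimal sketch of the enum and queue described above (the class name and status constants are illustrative, not from the original post):

```java
import java.util.PriorityQueue;

public class TaskPriorityDemo {
    // Declared highest-priority first: enum constants compare by declaration
    // order, so CRITICAL.compareTo(LOW) < 0 and CRITICAL sorts to the head.
    enum Status { CRITICAL, HIGH, MEDIUM, LOW }

    public static void main(String[] args) {
        PriorityQueue<Status> queue = new PriorityQueue<>();
        queue.add(Status.LOW);
        queue.add(Status.CRITICAL);
        queue.add(Status.MEDIUM);

        // The head of the queue is the highest-priority outstanding status.
        System.out.println(queue.peek()); // prints CRITICAL
    }
}
```

No Comparator and no hand-written compareTo(): the declaration order of the enum does all the work.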
The global fishing fleet is currently 2.5 times larger than what the oceans can sustainably support, and as many as 90% of all the ocean’s large fish have been fished out. Unless the current situation improves, stocks of all species currently fished for food are predicted to collapse by 2050.
52% of the world’s fisheries are fully exploited. Unintended destruction caused by the use of non-selective fishing gear, such as trawl nets, long-lines and gillnets, kills over 300,000 seabirds each year, while the annual global by-catch mortality of small whales, dolphins and porpoises alone is estimated at more than 300,000 individuals. 100 million sharks are killed each year.
So far this year, more than 2.1 million acres have burned in wildfires, more than 113 million people in the U.S. were in areas under extreme heat advisories last Friday, two-thirds of the country is experiencing drought, and earlier in June, deluges flooded Minnesota and Florida.
"This is what global warming looks like at the regional or personal level," said Jonathan Overpeck, professor of geosciences and atmospheric sciences at the University of Arizona. "The extra heat increases the odds of worse heat waves, droughts, storms and wildfire. This is certainly what I and many other climate scientists have been warning about."
Kevin Trenberth, head of climate analysis at the National Center for Atmospheric Research in fire-charred Colorado, said these are the very record-breaking conditions he has said would happen, but many people wouldn't listen. So it's I told-you-so time, he said.
As recently as March, a special report on extreme events and disasters by the Nobel Prize-winning Intergovernmental Panel on Climate Change warned of "unprecedented extreme weather and climate events." Its lead author, Chris Field of the Carnegie Institution and Stanford University, said Monday, "It's really dramatic how many of the patterns that we've talked about as the expression of the extremes are hitting the U.S. right now."
"What we're seeing really is a window into what global warming really looks like," said Princeton University geosciences and international affairs professor Michael Oppenheimer. "It looks like heat. It looks like fires. It looks like this kind of environmental disasters."
And the bottom line is after nearly 25 years of warnings that we've ignored, we now face the very real prospect that it's now too late to save coastal cities from rising oceans. The window is all but closed, folks...and we've lost. Now it's time to pay the piper.
"Even with aggressive mitigation measures that limit global warming to less than 2ºC above pre-industrial values by 2100, and with decreases of global temperature in the 22nd and 23rd centuries ... sea level continues to rise after 2100," they said in the journal Nature Climate Change on Sunday.
This is because as warmer temperatures penetrate deep into the sea, the water warms and expands as the heat mixes through different ocean regions.
Even if global average temperatures fall and the surface layer of the sea cools, heat would still be mixed down into the deeper layers of the ocean, causing continued rises in sea levels.
If global average temperatures continue to rise, the melting of ice sheets and glaciers would only add to the problem.
Now the choice is between limiting the damage to the coasts and full-blown catastrophe. That's the political battle we'll be fighting for the next generation. That and who will suffer the most due to climate change.
Get used to triple digits, mega-cell storms like the derechos that flattened DC and West Chicago and deadly wildfires in June, folks. It's here to stay.
Blazing Wings recently posted a list and great photos of twelve solar powered airplanes. It is worth a click just to look at all the different ways people solve the same set of problems. One of the 12 solar airplanes is the Solar Impulse, brainchild of aero pioneer Bertrand Piccard who flew the Orbiter 3 balloon around the world in 1999. His goal now is to fly non-stop around the world on only solar power in the Solar Impulse.
All of the solar airplanes at Blazing Wings look more like gliders than fighters. They have long wings so they can collect enough solar power, and narrow ones so they don't add unnecessary drag. Let's face it, the laws of physics are against solar flight: we get only about 92 watts per square foot of energy from the sun, and even the best solar panels are only 40% efficient.
The Solar Impulse will sport an incredible 2,700 square feet of panels on a 231 foot wingspan (the same span as a 747-400) giving it about 65 hp. Piccard will use the prevailing winds at high altitudes to assist propulsion, not unlike Orbiter 3.
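As a rough sanity check on those figures (the 20% cell efficiency below is an assumption for production panels, since the 40% quoted earlier is a best-case laboratory number):

```python
# Rough power budget for the Solar Impulse wing, using the article's numbers
panel_area_sqft = 2700        # article: ~2,700 square feet of panels
insolation_w_per_sqft = 92    # article: ~92 W per square foot from the sun
cell_efficiency = 0.20        # assumed: typical production cells, not the 40% lab best

electrical_watts = panel_area_sqft * insolation_w_per_sqft * cell_efficiency
horsepower = electrical_watts / 745.7  # 1 hp = 745.7 W

print(f"{electrical_watts / 1000:.1f} kW is about {horsepower:.0f} hp")
```

At an assumed 20% efficiency this comes out near 50 kW, or roughly 67 hp, consistent with the article's "about 65 hp" figure.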
The Solar Impulse is limited in part by the energy density
of the batteries. Piccard plans to
overcome this limitation by gaining altitude in daylight and slowly
gliding to a lower altitude at night. Basically, the Earth's gravity will be his power-storage device.
Advanced engineering of the energy systems and aerodynamics for the Solar Impulse should lead to better photovoltaic systems for terrestrial use as well as more fuel efficient airplanes.
Via: Blazing Wings
The inclination is one of the six orbital parameters describing the shape and orientation of a celestial orbit. It is the angular distance of the orbital plane from the plane of reference (usually the primary's equator or the ecliptic), normally stated in degrees.
In the Solar System, the inclination of the orbit of a planet is defined as the angle between the plane of the orbit of the planet and the ecliptic — which is the plane containing Earth's orbital path. It could be measured with respect to another plane, such as the Sun's equator or even Jupiter's orbital plane, but the ecliptic is more practical for Earth-bound observers. Most planetary orbits in the Solar System have relatively small inclinations, both in relation to each other and to the Sun's equator. There are notable exceptions in the dwarf planets Pluto and Eris, which have inclinations to the ecliptic of 17 degrees and 44 degrees respectively, and the large asteroid Pallas, which is inclined at 34 degrees.
Natural and artificial satellites
The inclination of orbits of natural or artificial satellites is measured relative to the equatorial plane of the body they orbit if they do so close enough. The equatorial plane is the plane perpendicular to the axis of rotation of the central body.
- an inclination of 0 degrees means the orbiting body orbits the planet in its equatorial plane, in the same direction as the planet rotates;
- an inclination greater than -90° and less than 90° is a prograde orbit.
- an inclination greater than 90° and less than 270° is a retrograde orbit.
- an inclination of exactly 90° is a polar orbit, in which the spacecraft passes over the north and south poles of the planet; and
- an inclination of exactly 180° is a retrograde equatorial orbit.
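The categories above can be collected into a small function; the boundary handling follows the list (a sketch, assuming inclination is given in degrees relative to the primary's equatorial plane):

```python
def classify_orbit(inclination_deg):
    """Classify a satellite orbit by its inclination to the primary's
    equatorial plane, following the categories listed above."""
    i = inclination_deg % 360  # fold negative angles into [0, 360)
    if i == 0:
        return "equatorial prograde"
    if i == 90:
        return "polar"
    if i == 180:
        return "equatorial retrograde"
    if i < 90 or i > 270:
        return "prograde"
    return "retrograde"

print(classify_orbit(51.6))  # an ISS-like orbit: prograde
print(classify_orbit(98.7))  # a sun-synchronous orbit: retrograde
```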
For the Moon, measuring its inclination with respect to Earth's equatorial plane leads to a rapidly varying quantity, so it makes more sense to measure it with respect to the ecliptic (i.e. the plane of the orbit that Earth and Moon track together around the Sun), a fairly constant quantity.
Exoplanets and multiple star systems
- an inclination of 0° is a prograde face-on orbit, meaning the plane of its orbit is parallel to the sky;
- an inclination greater than 0° and less than 90° orbits in the same direction as the rotation of its star;
- an inclination of exactly 90° is an edge-on orbit regardless if the orbit is prograde or retrograde, meaning the plane of its orbit is perpendicular to the sky;
- an inclination greater than 90° and less than 180° orbits in the opposite direction as the rotation of its star; and
- an inclination of 180° is a retrograde face-on orbit, meaning the plane of its orbit is parallel to the sky.
Because the radial velocity method detects planets with edge-on orbits more easily, most exoplanets have inclinations between 45° and 135°, even though the inclination is unknown for most exoplanets. Correspondingly, most exoplanets have true masses no more than 70% greater than their minimum masses. If the orbit is almost edge-on, the planet can be seen transiting its star. If the orbit is almost face-on, especially for superjovians detected by radial velocity, those objects may actually be brown dwarfs or even red dwarfs. One particular example is HD 33636 B, which has a true mass of 142 MJ, corresponding to an M6V star, while its minimum mass was 9.28 MJ. The inclinations, and hence true masses, for almost all exoplanets will eventually be measured by space observatories such as the Gaia mission, the Space Interferometry Mission, and the James Webb Space Telescope.
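The true mass of a radial-velocity planet follows from its minimum mass as m_true = m_min / sin(i). A sketch using the HD 33636 B figures quoted above (the 3.75° inclination below is chosen to reproduce the ~142 MJ true mass for illustration, not quoted from the article):

```python
import math

def true_mass(minimum_mass, inclination_deg):
    """True mass from a radial-velocity minimum mass: m_true = m_min / sin(i)."""
    return minimum_mass / math.sin(math.radians(inclination_deg))

# HD 33636 B: minimum mass 9.28 Jupiter masses (from the article).
# A nearly face-on orbit (a few degrees) inflates it to roughly 142 MJ.
print(f"{true_mass(9.28, 3.75):.0f} MJ")

# An edge-on orbit (i = 90 deg) leaves the mass equal to the minimum mass.
print(f"{true_mass(9.28, 90):.2f} MJ")
```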
Other meanings
- For planets and other rotating celestial bodies, the angle of the axis of rotation with respect to the normal to plane of the orbit is sometimes also called inclination or axial inclination, but to avoid ambiguity can be called axial tilt or obliquity.
- In geology, the magnetic inclination is the angle made by a compass needle with respect to the horizontal surface of the Earth at a given latitude.
The mutual inclination of two orbits may be calculated from their inclinations to another plane, together with their longitudes of ascending node in that plane, using the spherical law of cosines for angles.
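Concretely, for two orbits with inclinations i1, i2 and longitudes of ascending node Omega1, Omega2 measured in the same reference plane, the spherical law of cosines gives cos(i_m) = cos i1 cos i2 + sin i1 sin i2 cos(Omega1 - Omega2). A sketch:

```python
import math

def mutual_inclination(i1_deg, i2_deg, node1_deg, node2_deg):
    """Mutual inclination of two orbits from their inclinations (i) and
    longitudes of ascending node (Omega) in a shared reference plane:
    cos(i_m) = cos(i1)cos(i2) + sin(i1)sin(i2)cos(Omega1 - Omega2)."""
    i1, i2 = math.radians(i1_deg), math.radians(i2_deg)
    dnode = math.radians(node1_deg - node2_deg)
    cos_im = (math.cos(i1) * math.cos(i2)
              + math.sin(i1) * math.sin(i2) * math.cos(dnode))
    # Clamp to [-1, 1] to guard against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_im))))

# Shared ascending node: the mutual inclination is just the difference.
print(round(mutual_inclination(10, 4, 0, 0), 6))    # 6.0
# Nodes 180 degrees apart: the two tilts add.
print(round(mutual_inclination(10, 4, 0, 180), 6))  # 14.0
```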
See also
- Altitude (astronomy)
- Axial tilt
- Beta Angle
- Kepler orbits
- Kozai effect
- Orbital inclination change
Section 1: C Programming Lab
Question 1: Write an interactive program in C language to manage the Clinic Information with menu options like Patient’s details, Doctor’s details, Doctor’s and Patient’s visits, Laboratory details, Bills, Payments etc. using the file handling concepts. The application should be designed user-friendly.
Note: You must execute the program and submit the program logic, sample input and output along with the necessary documentation for this question. Assumptions can be made wherever necessary.
Section 2: Assembly Language Programming Lab
(a) Write a program in assembly language to sort an array of signed integers and search for the presence of an item in the sorted array using linear search
(b) Develop and execute an assembly language program to find the HCF of two unsigned 16-bit numbers.
(c) Write a program in assembly language for finding the largest number in an array of 10 elements.
(d) Given a string of characters terminated by 00H, write an assembly language program to determine if it is a palindrome. If ‘Yes’, output the message “The given string is a palindrome”. If ‘No’, output the message “No, it is not a palindrome”.
Energy in 3 consecutive forms: potential, kinetic, internal (Photo credit: Wikipedia)
A rock has potential energy (PE) localized in it when you lift it up above the ground. The rock is the system; everything else it encounters is the surroundings. Drop the rock and its PE changes to kinetic energy (energy of movement, KE), pushing air aside as it falls (therefore spreading out the rock’s KE a bit) before it hits the ground, dispersing a tiny bit of sound energy (compressed air) and causing a little heating (molecular motion energy) of the ground it hits and in the rock itself. The rock is unchanged (after a minute when it disperses to the air the small amount of heat it got from hitting the ground). But the potential energy that your muscles localized in by lifting it up is now totally spread out and dispersed all over in a little air movement and a little heating of the air and ground.
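The rock's energy bookkeeping is easy to check numerically (the mass and height below are made-up illustrative values, not from the text):

```python
import math

g = 9.81  # m/s^2, gravitational acceleration near Earth's surface

mass_kg, height_m = 2.0, 1.5  # illustrative values

# Potential energy localized in the lifted rock: PE = m * g * h
pe_joules = mass_kg * g * height_m

# Ignoring air drag, all of it is kinetic at the moment of impact:
# (1/2) m v^2 = m g h  =>  v = sqrt(2 g h)
impact_speed = math.sqrt(2 * g * height_m)
ke_joules = 0.5 * mass_kg * impact_speed**2

print(f"PE = {pe_joules:.2f} J, KE at impact = {ke_joules:.2f} J")
```

After the impact, that same ~29 J ends up dispersed as a little sound and heating, exactly as the paragraph describes.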
A hot frying pan? The iron atoms in a hot frying pan (system) in a room (surroundings) are vibrating very rapidly, like fast "dancing in place". Therefore, considering both the pan and the room, the motion energy in the hot pan is localized. That motion energy will disperse—if it is not hindered, according to the second law. Whenever the less rapidly moving molecules in the cooler air of the room hit the hot pan, the fast-vibrating iron atoms transfer some of their energy to the air molecules. The pan’s localized energy thus becomes dispersed, spread out more widely to molecules in the room air.
by Nate Jones, Vertebrate Ecology Lab
(still in the Bering Sea) … Of course the bad weather I’ve been writing about was nothing compared to what happens on the Bering during the months of February or March, and the Gold Rush fishes regularly during that time of year, so I had complete faith in the seaworthiness of the ship and the judgment and skill of the crew. I took comfort in that thought, and stumbled down to my bunk for what became a grueling 72 hours of bumps, rolls, and queasy stomachs. During this stormy time the crew exchanged watches at the helm, keeping the ship pointed into the fury.
We all hoped for the best, but by the time the seas had calmed to (a more manageable?) 8-10’, the hungry ocean had damaged and ripped off much of our scientific equipment, snapping several ¼” steel bolts and ripping welds clean apart!
The Gold Rush itself weathered this storm in fine shape (wish we could say the same of our scientific equipment!), and there were no major injuries to anyone on board. It really was quite a minor event in the context of the Bering Sea; just another blowy, bumpy day or two out on the water.
But, it impressed me and I couldn’t help contemplating darker scenarios – what happens when there is a true emergency? What if someone had been swept overboard, or, worse yet, what if the ship itself had been damaged or taken on water and started to go down? Such things do happen, although not as frequently now as they have in the past (coast guard regulations and improvements in technology and crew training have contributed to much increased safety).
In my next post I’ll put up some images from training exercises that are routinely undertaken to help prepare crew and passengers (scientists) for emergencies at sea…
Brought to you by the Organic Reactions Wiki, the online collection of organic reactions
[4+3] Cycloaddition is the annulation of an allyl or oxyallyl cation with a four-atom pi system to form a seven-membered ring. It represents one of the relatively few synthetic methods available to form seven-membered rings stereoselectively in high yield.
Symmetry-allowed [4+3] cycloaddition is an attractive method for the formation of historically difficult-to-access seven-membered rings. Neutral dienes and cationic allyl systems (most commonly oxyallyl cations) may react in a concerted or stepwise fashion to give seven-membered rings. A number of dienes have been employed in the reaction, although cyclic, electron-rich dienes such as those found in the pyrrole and furan ring systems are the best 4π systems for this process. Intramolecular variants are also efficient.(1)
Recent developments have focused on expanding the scope of enantioselective [4+3] cycloadditions and the range of conditions available for generating the key oxyallyl cation intermediate.
Mechanism and Stereochemistry
Oxyallyl cations may be generated under reductive, mildly basic, or photolytic conditions. Reduction of α,α'-dihaloketones is a very popular method for generating symmetric oxyallyl cations. After formation of a metal enolate, dissociation of halide generates a positively charged oxyallyl intermediate. This electron-deficient 2π component reacts with electron-rich dienes to give cycloheptenones. Cyclic dienes fare better than the corresponding acyclic dienes because in order to react, the diene must be in the s-cis conformation in the presence of the short-lived oxyallyl cation—cyclic dienes are locked in this reactive conformation.(2)
Substituents at the 1 and 3 positions are usually required to stabilize the oxyallyl cation and prevent isomerization to cyclopropanones and allene oxides. In most cases, an excess of the diene is employed to prevent isomerization of the oxyallyl cation intermediate. Increasing the covalent character of the metal-oxygen bond (by, for instance, employing iron carbonyl reducing agents instead of sodium) also stabilizes the oxyallyl cation, leading to cleaner reactions. Strongly electrophilic allyl cations tend to give products of electrophilic substitution rather than cycloaddition.
The cycloaddition itself may be either concerted or stepwise, depending on the nature of the oxyallyl intermediate and the reaction conditions. Concerted reactions taking place under reductive conditions usually exhibit low regioselectivity due to somewhat indiscriminate frontier orbital control; however, stepwise (or at least asynchronous) reactions under basic conditions do exhibit moderate regioselectivity (attributed to initial formation of a bond between the less sterically hindered ends of the pi systems).
Stereochemical control in the [4+3] cycloaddition is not as strict as in the Diels-Alder reaction, because the former often proceeds through stepwise, polar pathways. Even when the reaction is concerted, complications may arise due to conformational dynamics in the oxyallyl component, which can exist in "W," "U," or "sickle" forms. Generally, however, the "W" form dominates. Even so, two stereochemically distinct transition states are possible: a chair-like, "extended" TS which leads to a cis relationship between the bridging atom and oxyallyl substituents, and a boat-like, "compact" TS which leads to a trans relationship.(3)
Which transition state is favored depends on both the 4π and 2π reacting partners. Reactions of cyclic dienes tend to favor the compact over the extended TS (this is particularly true for furan). In addition, the electrophilicity of the oxyallyl cation is related to the favorability of the extended transition state—more electrophilic cations (which possess more covalent metal-oxygen bonds) tend to favor the extended transition state, while less electrophilic cations favor the compact transition state.
Scope and Limitations
Reduction of α,α'-dihaloketones is an effective method for the generation of oxyallyl cations for cycloaddition. Reducing agents used include copper-bronze, iron carbonyl complexes, and zinc/copper couple. As mentioned previously, products exhibiting trans stereochemistry between the bridging atom and the oxyallyl substituents (resulting from the compact transition state) are generally favored.(4)
α-Haloketones with hydrogens at the α' position can also be transformed into oxyallyl cations under basic conditions. This usually requires highly polar media, and the use of a halophilic Lewis acid (such as Ag+) is sometimes necessary.(5)
Photochemical routes to oxyallyl cations generally result in the formation of a new covalent bond before the cycloaddition itself takes place. These reactions thus may lead to the formation of three new carbon-carbon bonds in a single operation.(6)
Intramolecular [4+3] cycloadditions are also possible, and oftentimes lead to interesting bridged architectures that are difficult to access by other methods. The product below, for instance, features a rare trans-bridging ketone.(7)
A synthesis of Prelog-Djerassi lactone illustrates how stereocenters set during a [4+3] cyclization may be used later for stereochemical control. The oxabicyclo[3.2.1]octane products of cycloadditions involving furan may be opened using a variety of methods.(8)
Comparison with Other Methods
Compared to annulations that form five- and six-membered rings, annulations that form seven-membered rings are relatively rare. "Classical" methods that clip linear substrates together through the formation of a single carbon-carbon bond form seven-membered rings efficiently in some cases (cf. the acid-mediated olefin cyclization below). Transition-metal-catalyzed cycloadditions of vinylcyclopropanes are also useful for the formation of seven-membered rings.(9)
Experimental Conditions and Procedure
Cycloadditions carried out under reductive conditions can generally be effected with commercially available reducing agents, although a few reducing agents require special preparation. Reductive reactions employing iron carbonyl complexes should be carried out in a well-ventilated fume hood, as free carbon monoxide may be released. The optimal conditions for base-mediated cycloadditions vary somewhat, although polar media tend to give higher yields—fluorinated solvents are more effective than their non-fluorinated analogues, and alkoxide or amine bases work better than others.
Anthracene (3 g, 16.9 mmol) was dissolved in benzene (30 mL) at 80°. Zinc dust (2 g, 32 mg-atom) and copper(I) chloride (0.32 g, 3.2 mmol) were added through a powder funnel, and the mixture was stirred for several minutes. Chlorotrimethylsilane (4.9 g, 36 mmol) was added, followed by 2,4-dibromopentan-3-one (7.45 g, 30 mmol) in benzene (5 mL). A second portion of zinc (2 g, 32 mg-atom) and copper(I) chloride (0.32 g, 3.2 mmol) was added and the mixture maintained at 80° for 4 hours. The hot reaction mixture was filtered to remove the Zn/Cu couple and the flask was rinsed with several portions of dichloromethane. On cooling, a solid mass precipitated and additional dichloromethane was added. The resulting solution was washed twice with saturated aqueous ammonium chloride solution, once with water, and once with saturated sodium chloride solution. The aqueous layers were washed with dichloromethane and the combined organic phase was dried over MgSO4. The solvent was removed in vacuo and the crude product was chromatographed on silica gel (CH2Cl2 as eluent). This afforded 4.1 g (93%) of product as a mixture of epimers: 1H NMR(CDCl3)δ 1.14 (d, J = 7 Hz, 6 H), 2.74 (d, q, J = 2.3, 7 Hz, 2 H), 3.80 (d, J = 7.3 Hz, 2 H), 7.22 (m, 8 H).
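The reported 93% yield can be checked with back-of-the-envelope stoichiometry. This is my own arithmetic, not part of the original procedure; it assumes anthracene is the limiting reagent and that the product is the 1:1 cycloadduct of anthracene (C14H10) with the C5H8O oxyallyl unit, i.e. C19H18O.

```python
# Hedged yield check: assumes anthracene (C14H10, 178.23 g/mol) is
# limiting and the product is the 1:1 adduct C19H18O.
mw = {"C": 12.011, "H": 1.008, "O": 15.999}
mw_adduct = 19 * mw["C"] + 18 * mw["H"] + mw["O"]  # ~262.35 g/mol
mol_limiting = 3.0 / 178.23                        # ~16.8 mmol anthracene
theoretical_g = mol_limiting * mw_adduct           # ~4.42 g theoretical
pct_yield = 100 * 4.1 / theoretical_g              # ~93%, matching the report
```

Under these assumptions, 4.1 g of product corresponds to roughly 93% of theory, consistent with the stated figure.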
- ↑ Rigby, J. H.; Pigge, F. C. Org. React. 1997, 51, 351. doi:10.1002/0471264180.or051.03
- ↑ Harmata, M.; Elahmad, S.; Barnes, C. L. Tetrahedron Lett. 1995, 36, 1397.
- ↑ a b Bingham, R. C.; Dewar, M. J. S.; Lo, D. H. J. Am. Chem. Soc. 1975, 97, 1302.
- ↑ Henning, R.; Hoffmann, H. M. R. Tetrahedron Lett. 1982, 23, 2305.
- ↑ Hill, A. E.; Hoffmann, H. M. R. J. Am. Chem. Soc. 1974, 96, 4597.
- ↑ a b Hoffmann, H. M. R. Angew. Chem., Int. Ed. Engl. 1984, 23, 1.
- ↑ Takaya, H.; Makino, S.; Hayakawa, Y.; Noyori, R. J. Am. Chem. Soc. 1978, 100, 1765.
- ↑ Giguere, R. J.; Rawson, D. I.; Hoffmann, H. M. R. Synthesis 1978, 902.
- ↑ Mann, J.; Wilde, P. D.; Finch, M. W. J. Chem. Soc., Chem. Commun. 1985, 1543.
- ↑ West, F. G.; Hartke-Karger, C.; Koch, D. J.; Kuehn, C. E.; Arif, A. M. J. Org. Chem. 1993, 58, 6795.
- ↑ Harmata, M.; Elomari, S.; Barnes, C. L. J. Am. Chem. Soc. 1996, 118, 2860.
- ↑ White, J. D.; Fukuyama, Y. J. Am. Chem. Soc. 1979, 101, 226.
- ↑ Sato, T.; Watanabe, M.; Noyori, R. Tetrahedron Lett. 1978, 4403.
- ↑ Marshall, J. A.; Anderson, N. H.; Johnson, P. C. J. Org. Chem. 1970, 35, 186.
- ↑ Huffman, M. A.; Liebeskind, L. S. J. Am. Chem. Soc. 1993, 115, 4895.
- ↑ Giguere, R. J.; Rawson, D. I.; Hoffmann, H. M. R. Synthesis 1978, 902.
Scientists have solved the mystery of why the world's highest mountains sit near the equator - colder climates are better at eroding peaks than had previously been realised.
Mountains are built by the collisions between continental plates that force land upwards. The fastest mountain growth is around 10mm a year in places such as New Zealand and parts of the Himalayas, but more commonly peaks grow at around 2-3mm per year.
In a study published today in Nature, David Egholm of Aarhus University in Denmark showed that mountain height depends more on ice and glacier coverage than tectonic forces. ... At cold locations far from the equator, he found, erosion by snow and ice easily matched any growth due to the Earth's plates crunching together.
At low latitudes, the atmosphere is warm and the snowline is high. "Around the equator, the snowline is about 5,500m at its highest so mountains get up to 7,000m," said Egholm. "There are a few exceptions [that are higher], such as Everest, but extremely few. When you then go to Canada or Chile, the snowline altitude is around 1,000m, so the mountains are around 2.5km." ...
Thursday, August 13, 2009
I'm always interested in new "facts" that explain the world to me. I just ran across this article by Alok Jha in the Guardian:
Writing about climate research has revealed a generation gap within my own family, made obvious this weekend by wine-fuelled dinner-table debates. Visiting my nearest and dearest over the Easter national holiday for the first time since I started Simple Climate, they grilled me on the research I’ve covered.
In the UK daffodils are often associated with spring and Easter, but have been late to appear this year thanks to an especially cold winter. Both my parents and my girlfriend’s strongly dislike the cold. Therefore, like so many people across Europe and the US, they currently find it hard to accept that the world is warming overall. However, when I pointed out that climate researchers assess average temperatures across the planet, my stepmum did recall her friends in Australia complaining about January to March being exceptionally warm.
Simple Climate has published graphs over the past three months showing how temperatures have risen over the past decade – with 2009 being the second warmest on NASA’s records – and century that offer a wider perspective. Nevertheless, it was tough to try and argue against our families’ first-hand evidence. So, I told them about Louis Codispoti, the University of Maryland scientist who’s been visiting the Arctic for 47 years and has also seen the disappearance of ice there first hand. For other evidence, both Jane Ferrigno of the US Geological Survey and Huw Griffiths of the British Antarctic Survey pointed to maps of retreating glaciers in the Antarctic.
“So, even if the world is actually warming up,” my girlfriend’s parents wondered, “could this not be part of some natural process, rather than being caused by people?” I recalled the rising atmospheric carbon dioxide (CO2) levels that both Potsdam Institute of Climate Research’s Georg Feulner and US Oak Ridge National Laboratory’s Paul Hanson had mentioned. CO2 is known to trap heat that would otherwise escape into space. “We know how much fossil fuel we burn and these rates are in agreement with the measured rise in atmospheric CO2 concentrations,” Feulner points out.
Much of that fuel has been burnt by people during my parents’ lifetime, a time of great prosperity in the developed world. They have worked hard to contribute to this prosperity, and grown accustomed to the way of life that it has brought. Yet this way of life has inadvertently set in motion a chain of events that seems likely to cause environmental upheaval if action is not taken. Having worked hard, my family, and many other people, are understandably reluctant to make any more sacrifices than necessary. As a result, they question whether the scientists professionally conducting climate change research really have their facts right. However, the science that I have covered since beginning Simple Climate has overwhelmingly been in agreement that they do. As fellow retiree Louis Codispoti points out to the likes of my parents, “If I visited 100 doctors and 95 of them said that I needed an operation, would it make sense to ignore this advice?”
It is my generation and the generations after that will have to deal with the consequences of the fossil fuels that are burnt. The purpose of speaking with these scientists over the past three months has been to produce an accurate but brief explanation of climate change, and what it might mean for the planet’s future. Feulner’s explanation, previewed above, seemed especially clear to me, so I intend to replace the attempt I made in January with his, barring some small edits. That explanation is:
Every day humans burn large amounts of fossil fuels like coal, gas and oil to produce energy and goods. These fossil fuels contain carbon atoms which are converted to carbon dioxide (CO2) when burnt. This CO2 is released to the atmosphere where it acts as a ‘greenhouse’ gas: CO2 traps outgoing radiation and leads to a warming of the atmosphere.
Since humans began to use fossil fuels to power industry, measurements show that CO2 concentrations in the atmosphere have increased by about a third, and that global temperatures have increased by about 0.8°C. If CO2 emissions continue to rise as in the past, temperatures could be several degrees higher in the year 2100, which would negatively impact the environment and human societies. We know how much fossil fuel we burn and these rates are in agreement with the measured rise in atmospheric CO2 concentrations, once ocean uptake is considered. Because we know the greenhouse-gas effect of CO2, we know this increasing CO2 concentration will cause global warming.
Does this seem clear to you? Feel free to comment using the tools at the end of the post.
Science Fair Project Encyclopedia
In fluid dynamics, head refers to the constant right hand side in the incompressible steady version of Bernoulli's equation. It is possible to express head in either units of height (e.g. meters) or in units of pressure such as pascals (the SI unit).
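For reference, the constant in question can be written out explicitly. The following is the standard textbook form of the steady, incompressible Bernoulli equation expressed as head (not quoted from this article):

```latex
\underbrace{\frac{p}{\rho g}}_{\text{pressure head}}
+ \underbrace{\frac{v^{2}}{2g}}_{\text{velocity head}}
+ \underbrace{z}_{\text{elevation head}}
= h \quad (\text{constant along a streamline})
```

A head $h$ in metres converts to pressure via $p = \rho g h$; for water ($\rho \approx 1000\ \mathrm{kg/m^3}$), 10 m of head is roughly 98 kPa.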
This is best understood by considering a waterwheel: the head is the vertical distance from the top of the waterwheel to the free surface of the millpond.
More generally, when considering a flow, one says that head is lost if energy is dissipated, usually through turbulence. In the context of steam trains, one talks of a good head of steam, referring to the pressure in the boiler.
The static head of a pump is the maximum height (pressure) it can deliver. The capacity of the pump can be read from its Q-H curve (flow vs height).
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Deflagration

A deflagration is a relatively slow explosion, generating only subsonic pressure waves. This sort of explosion is usually produced by rapid chemical combustion reactions, for instance of gunpowder in a firearm, or fuel in an internal combustion engine. Contrast detonation, where the pressure waves are supersonic.
Deflagrations are easier to control than detonations, and better suited when the goal is to move an object (a bullet in a gun, or a piston in an engine) with the force of the expanding gas.
A rare meteor shower predicted to hit Earth on 1 September should give astronomers only their second chance to study an ancient comet's crust. It could also help them develop a warning system against an otherwise insidious threat - a comet aimed at Earth from the dark fringes of the solar system.
September's shower, called the alpha Aurigids, has only been seen three times before, in 1935, 1986 and 1994. The reason for this elusiveness is the shower's unusual origin.
Most meteor showers are caused by short-period comets, dirty iceballs that loop around the inner solar system on orbits lasting less than 200 years, shedding debris each time they approach the Sun's heat. This debris builds up into a broad band along the comet's orbit. Every year, when we pass through, it burns up in the atmosphere and appears as shooting stars.
The Aurigids come from a comet that takes 2000 years to orbit the Sun. With such infrequent visits, Comet Kiess can't build up a broad dust band; it only generates a narrow trail of debris each time.
The showers happen when Earth passes through one of these dust trails in particular, which was thrown off by the comet in 83 BC. "It is only a very narrow trail, and it is only once in a while that it crosses Earth's path," says Peter Jenniskens of NASA's Ames Research Center in Moffett Field, California, US.
He thinks the gravity of Jupiter and Saturn controls the path of the dust trail, waving it around like a garden hose, occasionally aiming it at Earth. Along with his colleague Jérémie Vaubaillon at Caltech, US, Jenniskens has calculated that the hose should be pointed at us again this year.
Several teams of astronomers will be watching the shower, both from the ground and from two aircraft following the Earth's shadow.
They are hoping to see fragments of the ancient crust of Comet Kiess. For 4.5 billion years before some gravitational accident nudged it towards the inner solar system, Kiess was drifting among a vast swarm of icy bodies called the Oort cloud lying far beyond the planets.
All that time, high-energy particles called cosmic rays bombarded the comet, and astronomers suspect that created a hard crust by blasting out some of its more volatile substances.
Only once before have astronomers knowingly seen a shower from a long-period comet, when Jenniskens predicted an appearance of the alpha Monocerotids in 1995. They penetrated unusually far into the atmosphere, suggesting that they were made of relatively tough material, perhaps from such a cosmic-ray-produced crust.
This time, astronomers will be looking at the spectral signature of evaporating meteors to test this theory. "Now we are better prepared, we can do more in-depth studies to understand the properties of the material," Jenniskens told New Scientist.
He also wants to know whether meteor showers such as this could warn of planetary peril. At present, astronomers can only spot a long-period comet a few years before it arrives in the inner solar system, leaving little time to deflect it if it were pointed right at Earth.
But if it had visited the inner solar system before, the resulting meteor shower might be used to trace the comet's orbit and get a much earlier warning. The size and number of Aurigid meteors will tell the researchers how debris has spread along the orbit and how these showers evolve.
They are keen for amateurs to contribute their observations. "We're interested to know what is the brightest, biggest Aurigid," says Jenniskens. "Somebody is going to capture that, and it's probably not going to be us."
The best view of the meteors will be from the west coast of North America, before dawn on 1 September. Based on past showers, there should be up to 200 bright meteors visible per hour, and they may have an unusual blue-green colour.
The shower probably won't return for at least 50 years, according to Jenniskens' calculations. "It's a once in a lifetime event."
Comets - Learn more about the threat to human civilisation in our special report.
Journal reference: EOS (7 August 2007, vol 88, no 32)
Have your say
Sat Nov 17 07:16:33 GMT 2007 by Cristi Sheets
The questions that I have: What about the black hole? Is it getting bigger? And all these comets and the meteor showers, what's really going on up there? Can you explain that, not to mention the global warming? Is the global warming causing all these changes in the solar system? And I know that there is no fixing that, none of it. We have to admit that the end of the world is getting near and there is nothing we are able to do; not even the most intelligent person will be able to find a solution. I don't want to sound like a freak or anything like that, but it's the truth. Thanks
Sat May 23 20:40:40 BST 2009 by Cruise
Really Cristi, global warming causing changes to our solar system? If anything, changes in our solar system might influence global warming, but that is not the case. At least I don't think so. As for the end of the world, well, this will probably not happen within your lifetime. As of now there is no prediction of such an event, but even I have to admit that we don't know everything, so who can tell? So we can cross out the word truth from your statement for now.
23rd May 2009 Meteor Showers
Sat May 23 21:00:13 BST 2009 by Cruise
Hi there team,
I do however have a relevant question. I reside in the Canadian capital, Ottawa (WGS84 45° 25′ 15″ N, 75° 41′ 24″ W), and I witnessed three entries on the 23rd between 2 and 4 am. I wasn't out there for long (the time to smoke a cigarette), but these observations were made on two separate instances, leading me to believe and ask: are these part of the eta Aquarids shower of 5-6 May 2009? Could these be part of Halley's trail? Any and all comments would be appreciated. I just want confirmation that there were unexpected showers last night. Oh, I was looking at the western sky at the time of the sightings.
Thanks in advance for your time and attention in this matter.
23rd May 2009 Meteor Showers
Sat May 23 22:32:46 BST 2009 by Cruise
My Apologies - those times are Eastern Standard Times -5 GMT
CARBON, the element that is essential to all life, has sprung a baffling surprise on chemists. An analysis of the 7 million or so known organic compounds has shown that a significantly higher proportion contain even numbers of carbon atoms than odd numbers. Yet carbon chemistry is so diverse that the number of compounds with odd and even numbers should be roughly equal.
"We found this completely by accident," says Gautam Desiraju, professor of chemistry at the University of Hyderabad in India. Desiraju was studying the phenomenon of crystal polymorphism, in which individual organic compounds adopt a variety of crystal structures. While he was screening a database of 150 000 carbon-based crystal structures, he realised that it contained about 15 per cent more compounds with an even number of carbons than odd. "I realised this was something serious," he says."
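To see how such a screen might work in miniature, here is a hedged Python sketch. The regex and the toy formula list are my own illustration, not the database or software Desiraju used:

```python
import re

def carbon_count(formula):
    """Total carbon atoms in a molecular formula string such as 'C6H12O6'.

    A 'C' counts only when not followed by a lowercase letter, so that
    element symbols like Cl, Cu and Ca are not mistaken for carbon.
    """
    groups = re.findall(r"C(?![a-z])(\d*)", formula)
    return sum(int(g or 1) for g in groups)

# Toy stand-in for a database of ~150,000 crystal structures.
formulas = ["C6H12O6", "C7H8", "C10H16O", "C5H5N", "CCl4", "C2H6O"]
even = sum(carbon_count(f) % 2 == 0 for f in formulas)
odd = len(formulas) - even
```

Run over a real structure database, comparing `even` and `odd` would reveal the roughly 15 per cent excess of even-carbon compounds the article describes.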
When Desiraju told Jack Dunitz of the Swiss Federal Institute ...
Rachael Porter for NPR
NASA engineer Adam Steltzner led the team that designed a crazy new approach to landing on Mars.
It's called the seven minutes of terror. In just seven minutes, NASA's latest mission to Mars, a new six-wheeled rover called Curiosity, must go from 13,000 mph as it enters the Martian atmosphere to a dead stop on the surface.
During those seven minutes, the rover is on its own. Earth is too far away for radio signals to make it to Mars in time for ground controllers to do anything. Everything in the system known as EDL — for Entry, Descent and Landing — must work perfectly, or Curiosity will not so much land as go splat.
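For a sense of scale, here is my own rough arithmetic (not from the article), treating the braking as uniform over the full seven minutes:

```python
# Average deceleration implied by "13,000 mph to a dead stop in 7 minutes".
MPH_TO_MS = 0.44704              # exact conversion: 1 mph in metres/second
v0 = 13_000 * MPH_TO_MS          # entry speed, ~5,812 m/s
t = 7 * 60                       # seven minutes, in seconds
a = v0 / t                       # ~13.8 m/s^2 average deceleration
g_load = a / 9.81                # ~1.4 g expressed in Earth gravities
```

The average works out to only about 1.4 g; the violence comes from the fact that the deceleration is anything but uniform, with most of it concentrated in the early atmospheric-entry phase.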
The team that invented the EDL system has spent nearly 10 years together, designing, building, testing, tweaking, retesting and retweaking. Now all they can do is sit and wait to see if their design works.
Because the new Mars rover is five times heavier than its predecessors, NASA had to come up with a totally new landing system. Here's a step-by-step look at how it is supposed to work.
The Mars Science Laboratory spacecraft will approach Mars at 13,000 mph. The entry, descent and landing process has to guide it to a soft landing.
This artist's concept shows thrusters firing to steer the spacecraft as it enters Martian atmosphere. The Curiosity rover has traveled for more than seven months inside the spacecraft.
Friction with the Martian atmosphere helps slow the spacecraft as it descends. It also heats the heat shield. Friction alone accounts for almost all of the deceleration needed for landing.
A parachute more than 50 feet across pops out, adding a bit more braking as the craft sinks into Mars's lower atmosphere.
With the heat shield jettisoned, the rover can be seen tucked into the backshell of the spacecraft.
Rocket thrusters provide the last little bit of deceleration. At the same time, radar clicks on, giving the craft information about its speed and distance from the surface.
Here's where things get crazy. A new "sky crane" lowers the rover on three cables while hovering above the surface.
Once the sky crane senses that it's no longer supporting the rover, it releases the cables and flies off to crash-land a safe distance away. Curiosity is now free to explore its new home.
So you won't be surprised to learn that this is a rather nerve-wracking time for Adam Steltzner, the EDL team leader.
"The product of nine years of my life will be put to the test Sunday evening," Steltzner told me when I visited him at NASA's Jet Propulsion Laboratory in Pasadena, Calif., in late July. "And so that is personally anxiety provoking."
I don't know about you, but I tend to think of engineers as serious buttoned-down types. Steltzner is anything but.
He has pierced ears, wears snakeskin boots and sports an Elvis haircut. He's quick to laugh and curious about everything. Steltzner's laid-back style makes team meetings a jolly affair. I stopped by one of those meetings during my visit. The jollity was still there, but it was clear that the prelanding tension was rising.
"We are 19 days from landing," he told his team. "Is that freaky or what? Freaks me out to no end. Every time I say that, my back gets tight."
Steltzner had some advice for his colleagues.
"If any of you are sharing any of the emotional experience I am, keeping ourselves, like, chill, and focused and not freaking, is a good thing to do," he said.
Watch Steltzner and others at NASA explain the hair-raising sequence of events that must go perfectly right in order for Curiosity to land safely on Mars on Sunday night.
From Rock Star Dreams To Rocket Science
Steltzner's path to becoming team leader for this new Mars lander was hardly direct. Unlike many successful engineers, he struggled at school. An elementary school principal told him he wasn't very bright. His high school experience seemed to confirm that.
"I passed my geometry class the second time with an F plus, because the teacher just didn't want to see me again," he says.
His father told him he'd never amount to anything but a ditch digger, a remark he still carries with him years later.
Maybe that's because school wasn't a priority, particularly with the distractions of the flower-power era in the Bay Area.
"I was sort of studying sex, drugs and rock and roll in high school," says Steltzner. It wasn't just the long hair. "I liked to wear this strange Air Force jump suit. And my first car was a '69 Cadillac hearse. I put a bed in the back."
Talk about a night to remember. "Well, I was younger. It was a different time," says Steltzner.
After high school, the plan was to be a rock star. While he waited for stardom, Steltzner played bass guitar in Bay Area bands, watching his friends graduate and go off to college.
Finding Purpose In The Stars
But then something happened. As Steltzner tells it, he was on his way home from playing music at a club one night when he became fascinated with the stars, especially the constellation of Orion.
"The fact that it was in a different place in the sky at night when I returned home from playing a gig, than it had been when I'd driven out to the gig," he said. "And I had only some vague recollection from my high school time that something was moving with respect to something else, but that was it."
As crazy as it sounds, that experience was enough to motivate him to take a physics course at the local community college. That did it. He was hooked.
The fog of sex, drugs, and rock and roll lifted. He had to know all about the laws that govern the universe. The rocker wound up with a doctoral degree in engineering physics.
"I was totally turned on by this idea of understanding my world," Steltzner said. "Engineering gave me an opportunity to be gainfully employed [and] really understanding my world with these laws and equations that governed it."
After years of being somewhat aimless, he was glad to be involved in something more practical, a career that produced something tangible at the end of the day.
"With music, how your band is thought of has to do with how you dress, and who you open for, or who opens for you," he said. "That ephemeral, not really able to get a solid understanding of good and bad was tough for me, and the thing that engineering and physics gave me was this idea that there was a right answer, and I could get to it."
I asked Steltzner whether he would have been just as happy getting to the right answer while designing waste-treatment facilities. Did it have to be something as glamorous as designing a landing system for a Mars probe? He thought for a minute before he answered.
"I grew up in an era where space was revered," he said. "So I think there's a kind of natural ego drive to be involved in something so sexy. And I came from rock 'n' roll, and there's a lot of sexy in rock 'n' roll. So in terms of, really, just what I would need to measure myself, it could have been waste treatment, but I also needed a little bit of sexy."
Steltzner and his colleagues considered several options before hitting upon the 'sky crane' concept.
'Rover On A Rope': Crazy. Sexy. Cool.
He's got the sexy, but Steltzner has added a dash of crazy to the mix, especially when it comes to the design he and his team invented for the landing system.
A totally new Mars landing system was needed because other systems, including the airbags used on earlier rovers, were considered too wimpy to land Curiosity safely. The craft is the biggest rover yet, weighing in at more than 2,000 pounds — about five times as heavy as the Spirit and Opportunity rovers sent to Mars in 2003. Then there's the pesky Martian atmosphere. It's too thin to make parachutes alone effective, and too thick to make rocket brakes enough.
So Steltzner's team came up with a kind of rocket-powered platform that hovers over the Martian surface and lowers Curiosity down on a cable — a system that was once derisively referred to as "rover on a rope."
Crazy, but to an engineer, crazy smart.
"It ends up being we've come to really love this system," he said.
And as Steltzner will be the first to tell you, he didn't invent it all by himself.
"This is way bigger than any one person, way bigger than any five, 10, 20, 100. At one point, there were almost 2,000 people working on this project," he said. "So to bring all those people together takes some teaming. And also, I like people. So bringing that sense of togetherness together is important for me."
We'll know on Sunday night California time whether all that teamwork invented a landing system able to withstand the hazards Mars can throw at it.
Produced for broadcast by Rebecca Davis.
Using PHP to Interpret Forms
10/12/2000
The Internet, as we know it today, is primarily the result of two seminal events:
- The February 1993 introduction of Mosaic, the first graphical Web browser (see references in the sidebar to the right), and,
- The 1994 addition of forms to the HTML specification.
Mosaic, with its multimedia capability, added entertainment value to what had previously been a bland character-based medium. Forms created the capability for dynamic Web sites tailored to user requests. Without either of these two developments, we would all probably be watching TV right now.
In view of the above, we'll start this column with a series on forms. Before getting down to the nitty-gritty, let's review our programming level assumptions and take a look at where we are going.
What I'm assuming
This column is not directed toward beginning programmers, although they are more than welcome to come along for the ride. I'm assuming our readers have a reasonable degree of programming experience. PHP experience is not assumed, but a background in a Unix-based language such as C or Perl would be very helpful. Whenever PHP deviates from typical programming conventions, I'll stop the show and take time to explain in detail. PHP arrays immediately come to mind.
Since embedding programming code segments in HTML documents is the main strength of PHP, it is also assumed you know HTML on a first-name basis.
To get the most out of these sessions you will need access to PHP. Paraphrasing the old saying: "If your ISP doesn't offer PHP, move." If you need a low-cost PHP test bed, drop me a note at the address at the end of this article. PHP4 code is utilized in programming examples. I'll point out the differences between PHP3 and PHP4 when necessary. However, PHP4 has many advanced features and should be employed if at all possible. PHP is free, so there is little reason for your ISP not to upgrade.
Where we are going
The major goal of this series is to give you tools (program, function, or object) in each session that are immediately usable. When I do a series on a theme, as I'm about to do with forms, I'll build upon and expand on the material from previous sessions. My motto is "Start simply, and then dazzle them with my footwork!"
Add a little form to your life
When people start to get interested in Web-related programming, the first thing they typically want to do is a form. A form can be one simple input box on a search engine front end or a multi-page questionnaire.
The program form-one.php is the demonstration PHP program for our first exercise. Display the code here and cut and paste into your favorite editor, upload to your Web server, and run the script. If you have not uploaded PHP documents to your friendly ISP, check with your administrator to ascertain the extension required, typically php or php4.
Before going into code details, a few notes: PHP3 requires function declarations before they are called, so put the functions at the top of the script if you're trapped into PHP3. Also, the here document construct starting on line 44 is PHP4-specific, so replace it with a multi-line quoted string if you're stuck with PHP3. PHP documentation calls the here document construct here print, but I prefer the Perl terminology since it's more descriptive. Notice I've used the here print construct to assign the big string to a variable, $HTML in this case. Some subsequent assignments are concatenated to $HTML and then printed. This minimizes I/O operations, which in turn increases program efficiency.

The here document construct is essentially a multiple-line, double-quoted string. The new line character, \n, is automatically added to the end of every line. Perl programmers beware: there are three structural differences between the Perl and PHP versions of this construct. The PHP version starts with three "<" characters, there is no semicolon termination after the starting label, and there is a semicolon after the terminating label. Anything that would normally appear within a double-quoted string may appear in this construct, including variables that will be expanded. The here document construct is a very convenient way of including a large group of HTML within a PHP block. Create a form with your favorite HTML editor and cut and paste it into a here document block as I've done in the example.
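The same pattern — build one large string with variables expanded, accumulate onto it, print once to minimize I/O — can be sketched in Python with a triple-quoted f-string, as a rough analogy to the PHP here document (this is my own illustration, not code from the article; the form names are hypothetical):

```python
# Rough Python analogy to PHP's "here document": a triple-quoted
# f-string is a multi-line string in which embedded variables expand.
title = "Feedback Form"
action_url = "form-one.php"  # hypothetical target script, as in the article

# Assemble the whole page into one variable (like $HTML in the article).
html = f"""<html>
<head><title>{title}</title></head>
<body>
<form method="post" action="{action_url}">
<input type="text" name="user">
<input type="submit" value="Send">
</form>
</body>
</html>"""

# Later additions are concatenated, and the result is printed once,
# which keeps the number of I/O operations down.
html += "\n<!-- generated in one shot -->"
print(html)
```

The point is the same as in PHP: one big assignment plus concatenation beats dozens of small print statements.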
Metallic Glass Stronger Than Steel
Researchers have created a new kind of damage-tolerant metallic glass that may be tougher than any other known material. The new metallic glass is a microalloy featuring palladium, a metal with a high "bulk-to-shear" stiffness ratio that counteracts the intrinsic brittleness of glassy materials.
"Because of the high bulk-to-shear modulus ratio of palladium-containing material, the energy needed to form shear bands is much lower than the energy required to turn these shear bands into cracks," Ritchie says. "The result is that glass undergoes extensive plasticity in response to stress, allowing it to bend rather than crack."
(Damage-tolerant metallic glass)
This micrograph of a deformed notch in palladium-based metallic glass shows extensive plastic shielding of an initially sharp crack. The inset is a magnified view of a shear offset (arrow) developed during plastic sliding before the crack opened.
"These results mark the first use of a new strategy for metallic glass fabrication and we believe we can use it to make glass that will be even stronger and more tough," says Robert Ritchie, a materials scientist who led the Berkeley contribution to the research.
"Our game now is to try and extend this approach of inducing extensive plasticity prior to fracture to other metallic glasses through changes in composition," Ritchie says. "The addition of the palladium provides our amorphous material with an unusual capacity for extensive plastic shielding ahead of an opening crack. This promotes a fracture toughness comparable to those of the toughest materials known. The rare combination of toughness and strength, or damage tolerance, extends beyond the benchmark ranges established by the toughest and strongest materials known."
Science fiction fans recall the metalloglass buildings in the comic strip version of Buck Rogers in the 25th Century. As far as I know, there is no reference to "metalloglass" in Philip Nowlan's 1928 novel Armageddon: 2419, which introduced Buck Rogers to the world.
I also thought of the tower of glass from Robert Silverberg's 1970 eponymous story.
Compare to the glassite seen in Edmond Hamilton (and probably other) stories; it is used as a material for transparent bubble-style helmets for space-suits, and as a building material:
The Terra Hotel stood in a garden at the edge of town, fronting the moonlit immensity of the desert. This glittering glass block, especially built to cater to the tourist trade from Earth, was Earth-conditioned inside...
The place had glassite walls and ceiling, and was designed to give an impression of the navigating bridge of a space-ship.
(From The World with a Thousand Moons by Edmond Hamilton )
Via Lab Spaces; thanks to Winchell Chung for the tip and the reference for this story.
(Story submitted 1/12/2011)
Ancient rainforests resilient to climate change
Climate change wreaked havoc on the Earth’s first rainforests but they quickly bounced back, scientists reveal today. The findings of the research team, led by Dr Howard Falcon-Lang from Royal Holloway, University of London, are based on spectacular discoveries of 300-million-year-old rainforests in coal mines in Illinois, USA. http://www.physorg.com/news173641201.html
I am not so sure what they are saying is true; climate change isn't the problem, it is clear-cutting like they are doing in Brazil and the Amazon.
Use of the <style> element in an HTML document:
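A minimal illustration of this usage (my own sketch, standing in for the example the original page showed here):

```html
<!DOCTYPE html>
<html>
<head>
<style>
h1 {color: red;}
p  {color: blue;}
</style>
</head>
<body>
<h1>A heading</h1>
<p>A paragraph.</p>
</body>
</html>
```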
The <style> tag is supported in all major browsers.
The <style> tag is used to define style information for an HTML document.
Inside the <style> element you specify how HTML elements should render in a browser.
Each HTML document can contain multiple <style> tags.
Tip: To link to an external style sheet, use the <link> tag.
Tip: To learn more about style sheets, please read our CSS Tutorial.
Note: If the "scoped" attribute is not used, each <style> tag must be located in the head section.
The "scoped" attribute is new in HTML5; it allows you to define styles for a specified section of the document. If the "scoped" attribute is present, the styles only apply to the style element's parent element and that element's child elements.
| Attribute | Value | Description |
| --- | --- | --- |
| media | media_query | Specifies what media/device the media resource is optimized for |
| scoped (New in HTML5) | scoped | Specifies that the styles only apply to this element's parent element and that element's child elements |
| type | text/css | Specifies the MIME type of the style sheet |
The <style> tag also supports the Global Attributes in HTML.
The <style> tag also supports the Event Attributes in HTML.
HTML tutorial: HTML CSS
HTML DOM reference: Style object
Carybdea marsupialis (Linnaeus, 1758): Box Jelly
Phylum Cnidaria / Class Cubozoa / Family Carybdeidae
Our West Coast waters are not well endowed with box jellies. The one species that does enter California waters is Carybdea marsupialis (pictured here). Although primarily a warm-water species, it visits nearshore habitats off Santa Barbara and other southern California areas from August through November. This species may have a bell of up to 4 cm high, with numerous nematocyst containing nodules on the outer part. The bell also usually is marked by light tan specks. Four distinctive spade-like structures (the pedalia) are aligned with the 4 tentacles and the septa that separate the gastric pouches. Each tentacle is capable of extending more than 10 times the height of the bell. Carybdea tends to swim most of the time while seeking crustaceans and small fishes. When visiting southern California waters, this box jelly favors shallow sandy habitats inshore of the kelp beds. Fortunately for bathers in the area, this species lacks a potent stinging punch. In addition to southern California, Carybdea marsupialis ranges farther south into Mexico, and also is known from the Atlantic Ocean and Mediterranean Sea.
All photographs © David Wrobel and may not be used or copied without permission!
To understand why gravity modification is not yet a reality, let's analyze other fundamental discoveries/inventions that changed our civilization, or at least substantially changed the process of discovery. Several come to mind: the atomic bomb, heavier-than-air manned flight, the light bulb, personal computers, and protein folding. There are many other examples, but these are sufficient to illustrate what it takes. Before we start, we have to understand four important and related concepts.
(1) Clusters or business clusters, first proposed by Harvard prof. Michael Porter, “a business cluster is a geographic concentration of interconnected businesses, suppliers, and associated institutions in a particular field. Clusters are considered to increase the productivity with which companies can compete, nationally and globally”. Toyota City which predates Porter’s proposal, comes to mind. China’s 12 new cities come to mind, and yes there are pro and cons.
(2) Hot housing: a place offering ideal conditions for the growth of an idea, activity, etc. (3) Crowdsourcing: a process that involves outsourcing tasks to a distributed group of people. This process can occur both online and offline. Crowdsourcing differs from ordinary outsourcing in that the task or problem is outsourced to an undefined public rather than a specific body. (4) Groundswell: a strong public feeling or opinion that is detectable even though not openly expressed.
I first read about the fascinating story of the making of the atom bomb from Stephane Groueff's The Manhattan Project - the Making of the Atomic Bomb, in the 1970s. We get a clear idea why this worked. Under the direction of Major General Leslie Groves and J. Robert Oppenheimer, the US, UK & Canada hot-housed scientists, engineers, and staff to invent the physics, engineering, and manufacturing capabilities needed to produce the atomic bomb. Today we term this key driver of success 'hot housing': bringing together a group of experts to identify avenues for further research, to brainstorm potential solutions, and to test, falsify, and validate research paths, focused on a specific desired outcome. The threat of losing out to the Axis powers intensified this hot housing effect. This is much like what the Aspen Center for Physics is doing (video here).
In the case of the invention of the light bulb, the airplane, and the personal computer, there was a groundswell of public opinion that these inventions could be possible. This led potential inventors with the necessary basic skills to attempt to solve these problems. In the case of the incandescent light bulb, this process took about 70 years from Humphrey Davy in 1809, to Thomas A. Edison and Joseph Wilson Swan in 1879. The groundswell started with Humphrey and had included many by the time of Edison in 1879.
In the case of the airplane the Wright brothers reviewed other researchers’ findings (the groundswell had begun much earlier), and then invented several new tools & skills, flight control, model testing techniques, test pilot skills, light weight motors and new propeller designs.
The invention of the personal computer had the same groundswell effect (see Homebrew Computer Club & PBS TV transcripts). Ed Roberts, Gordon French, Fred Moore, Bob Harsh, George Morrow, Adam Osborne, Lee Felsenstein, Steve Jobs, Steve Wozniak, John Draper, Jerry Lawson, Ron Jones and Bill Gates all knew each other before many of them became wealthy and famous. Bill Gates wrote the first personal computer language, while the others invented various versions of the microcomputer, later to be known as the personal computer, and peripherals required. They invented the products and the tools necessary for the PC industry to take off.
With protein folding, Seth Cooper, game designer, developed Fold It, the tool that would make the investigation into protein folding accessible to an undefined public. Today we describe this ‘crowdsourcing’. Notice that here it wasn’t a specialized set of team that was hot housed, but the reverse, the general public, were given the tools to make crowdsourcing a viable means to solving a problem.
Thus four key elements are required to foster innovation, basic skills, groundswell, hothouse or crowdsourcing, and new tools.
So why hasn’t this happened with gravity modification? Some form of the groundswell is there. In his book The Hunt for Zero Point, Nick Cook (an editor of the esteemed Jane’s Defense Weekly) describes a history that goes back to World War II, and Nazi Germany. It is fund reading but Kurt Kleiner of Salon provides a sober review of The Hunt for Zero Point.
There are three primary reasons for this not having happened with gravity modification. First, over the last 50 years or so, there have only been about 50 to 100 people (outside of black projects) who have investigated this in a scientific manner. That is, the groundswell of researchers with the necessary basic skills has not reached a critical mass to take off. For example, protein folding needed at least 40,000 participants, today Fold It has 280,000 registered participants.
Second, pseudoscience has crept into the field previously known as ‘antigravity’. In respectable scientific circles the term used is gravity modification. Pseudoscience, has clouded the field, confused the public’s perception and chased away the talent – the 3 C’s of pseudoscience. Take for example, plutonium bomb propulsion (written by a non-scientist/non-engineer), basic investigation shows that this is neither feasible nor legal, but it still keeps being written up as a ‘real’ proposition. The correct term for plutonium bomb propulsion is pseudoscience.
Third reason: per the definition of gravity modification, we cannot use existing theories to propose new tools, because all our current status quo theories require mass. Therefore, short of my 12-year study, no new tools are forthcoming.
Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative.
Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.
I realise I posted this in the wrong forum sorry
this is grade 10
the problem is:
a silversmith has alloys that contain 40% silver and others that have 50% silver. A custom order requires 150g of 44% silver. How much of each alloy should be melted together to make the bracelet?
so far i got:
Let x represent the 40% alloy
Let y represent the 50% alloy.
I rearranged the first equation to be y = -x + 150.
Then subbed it into the second to be 0.4x+0.5(-x+150)=0.44
But when I solved this, I got 745.6. Where did I go wrong?
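A quick check of the arithmetic (my own sketch, not part of the original post): the slip is on the right-hand side of the second equation. The bracelet needs 44% of 150 g of silver, which is 0.44 × 150 = 66 g, not 0.44. With that correction the substitution works out cleanly:

```python
# System: x + y = 150 (total mass in grams)
#         0.4x + 0.5y = 0.44 * 150 (grams of pure silver)
total = 150
silver_needed = 0.44 * total  # 66 g of silver, not 0.44!

# Substitute y = -x + total into 0.4x + 0.5y = silver_needed:
# 0.4x + 0.5*(total - x) = silver_needed  =>  -0.1x = silver_needed - 0.5*total
x = (silver_needed - 0.5 * total) / -0.1  # grams of the 40% alloy
y = total - x                             # grams of the 50% alloy

print(round(x, 6), round(y, 6))  # 90.0 grams of 40% alloy, 60.0 grams of 50% alloy
```

Using 0.44 instead of 66 is exactly what produces the stray 745.6.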
Current Location in a Wire
Does DC current travel more through the center of the conductor or more on the surface of the conductor (wire)?
As the frequency of electrical current goes up, it tends to travel more on the surface of a conductor. DC is the ultimate low frequency; it would tend to move through the entire conductor. So, to answer your question, percentage-wise more is traveling through the center.

In some applications - such as military aircraft - 400 Hz power is sometimes used (compared to 60 Hz for household current). In doing this they are able to use hollow tubing as a conductor, giving them lighter weight. The center of the conductor, which is not used at the higher frequency, is simply eliminated.

A lightning bolt (real DC!) travels deeply in trees or persons struck by the lightning. The damage is done in the portion of the object most conductive to the current... where the most heat is generated by the current flow.
DC current will use the wire's whole cross-section evenly. Imagine a single solid wire divided by invisibly-thin barriers into parallel strands of equal thickness and shape. Only at the ends are they joined together, metal-to-metal. In this picture, each strand has equal resistance and equal voltage from end-to-end, so the current in each is equal.

Only AC has a preference for a particular depth. It prefers to be shallow, staying towards the outside. This is a consequence of changing magnetic fields caused by the changing current. DC is not changing, so the magnetic field is steady and has no effect on the DC current density. DC current only cares about resistance, not inductance or magnetism.
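The frequency dependence described above can be made quantitative with the standard skin-depth formula, δ = sqrt(2ρ/(ωμ)). The numbers below are my own sketch (copper resistivity and free-space permeability are textbook values, not figures from the original answer):

```python
import math

def skin_depth(freq_hz, resistivity=1.68e-8, mu=4 * math.pi * 1e-7):
    """Skin depth in meters: delta = sqrt(2*rho / (omega*mu)).
    Defaults are copper's resistivity (ohm*m) and the permeability
    of free space (H/m)."""
    omega = 2 * math.pi * freq_hz  # angular frequency
    return math.sqrt(2 * resistivity / (omega * mu))

# At 60 Hz, copper's skin depth is roughly 8-9 mm, so household wires
# conduct through essentially their whole cross-section. At 400 Hz
# (aircraft power) it shrinks to roughly 3 mm, which is why hollow
# tubing can serve as a conductor there.
print(f"60 Hz:  {skin_depth(60) * 1000:.1f} mm")
print(f"400 Hz: {skin_depth(400) * 1000:.1f} mm")
```

As frequency rises, δ shrinks like 1/sqrt(f), so ever less of the wire's interior carries current; in the DC limit δ diverges and the whole cross-section is used.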
Weird but moot minor point: the steady field around a wire with DC current may cause a small voltage difference between the outside and the inside. However, the difference at one end cancels out the difference at the other. Due to electromagnetism, parallel currents attract. So if the metal conducts electrons, then they are squeezed inwards, and the interior would be slightly more negative than the outside. Contact at the starting end is made to the wire's outside, and so is contact at the finishing end. So an electron travelling the wire-center route may go up a small potential step at the start, then go down the same amount at the end. These two steps cancel each other out. The end-to-end voltage in the wire interior is the same as along the wire surface, so the current densities are the same too.

Nobody even thinks about those last two paragraphs. They do not need to. Except maybe physicists doing high-power plasma sparks with Z-pinch. Z-pinch is when the glow of the current in an ionized gas spontaneously squeezes itself into a very intense, sharp, narrow strand, even though it started out wide and diffuse. It only does that if the current is very high, and because a gas can be compressed. In a solid metal the mobile electrons (charge -1) are forced to keep a uniform density by the need to keep charge neutrality with the hard-packed metal ions (charge +1) they wander amidst.
Update: June 2012
Global warming is sneaky. For more than a century it has been hiding large amounts of excess heat in the world’s deep seas. Now that heat is coming to the surface again in one of the worst possible places: Antarctica.
For obvious reasons this should be regarded as alarming enough a story for the MSM to report on -- maybe even on their front pages. Antarctica is disintegrating much faster than almost anybody imagined. Not only is this happening more than 90 years ahead of schedule; one of the reasons for this underestimation is that many of the climate models being referenced in discussions about climate change didn't include the vast amounts of methane the cryosphere and oceans will release as the warming gathers momentum.
So while global warming has continued its fitful warming of the temperature on Earth's surface, the planet is warming from human-caused greenhouse gases just where climate science said it would — the oceans, which is where more than 90% of the warming was projected to end up.
type SetMap k a = Map k (Set a)

invertSetMap :: (Ord a, Ord b) => SetMap a b -> SetMap b a

The resulting map should contain a key for each value of type b occurring in any set in the original map. The new values (of type Set a) are all those original keys for which the new key occurred in the value set. Intuitively, if the original map was representing arrows from values of type a to values of type b, this function should reverse all arrows. We can easily implement this function using two nested folds.

invertSetMap sm = M.foldWithKey (\k as r -> S.fold (\a r' -> M.insertWith S.union a (S.singleton k) r') r as) M.empty sm

That's not pretty at all! I had written quite a bit of this kind of code (and hated it each time), until I finally remembered a fundamental Haskell lesson. Haskell uses lists to simulate iteration and specify other kinds of control flow. In particular, list comprehensions are often extremely cheap, since the compiler can automatically remove many or all intermediate lists and generate very efficient code. So let's try again.

invertSetMap sm = M.fromListWith S.union
  [ (a, S.singleton k)
  | (k, as) <- M.assocs sm
  , a <- S.toList as
  ]

So much more readable! A quick benchmark also shows that it's slightly faster (a few percent for a very big map). Lesson to take home: if your folds get incomprehensible, consider list comprehensions.
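For readers who don't speak Haskell, the same arrow-reversing idea can be sketched in Python (my own translation, not from the original post). A dict of sets plays the role of SetMap:

```python
def invert_set_map(sm):
    """Reverse every arrow in a dict-of-sets: each value b becomes a
    key whose value set holds all original keys that pointed at b."""
    inverted = {}
    for k, values in sm.items():      # outer "fold" over keys
        for b in values:              # inner "fold" over each value set
            inverted.setdefault(b, set()).add(k)
    return inverted

# 'x' points at {1, 2} and 'y' points at {2, 3}; after inversion,
# 2 points back at both 'x' and 'y'.
print(invert_set_map({"x": {1, 2}, "y": {2, 3}}))
```

Python lacks a direct fromListWith, so the loop-with-setdefault version is the idiomatic spelling here; the structural shape still mirrors the two nested folds.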
Lost City Macrofauna
Woods Hole Oceanographic Institution
WHOI/MIT Joint Program in Oceanography
When the Lost City Hydrothermal Field first emerged in the lights of a submersible in 2001 the venting spires did not appear to teem with life like most other vent fields around the world. There were no large red tubeworms, fields of clams or mussels typical of eastern Pacific vents or massive swarms of shrimp common to vents along the Mid-Atlantic Ridge. Yet in 2003, recovered pieces of Lost City chimney spires revealed numerous animals less than an eighth of an inch long. To date, over 70 potential species have been identified from Lost City, a surprisingly high biodiversity which is more than double the number of species found on vent chimneys typical of the Mid-Atlantic Ridge.
The Lost City vent site may look barren at first glance, but look a little deeper, into the cracks and crevices of the carbonate through a microscope and the animals become obvious. Tiny invertebrates dwell within the cracks of these highly sculpted and actively venting porous carbonate structures. Gastropod snails, shrimp-like crustaceans (including amphipods that migrate daily from the upper surface waters to the Lost City area), and numerous polychaete worms (of which there are at least five new species) all live here. Nematode worms, flea-like ostracods, and small bivalves can also be found on these amazing structures. The biomass (net weight of animals) is small compared to a typical hydrothermal vent on the Mid-Atlantic Ridge. The biggest contributors to the biomass at Lost City are the larger more mobile megafauna, including the (grouper-like) wreckfish; cut-throat eels; and large red geryonid crabs, all readily visible around the spires.
Hydrothermal vent sites around the world host animals that are endemic, animals that are found living only in these vent areas. A pattern exists where typical vent areas exclude almost all other general deep-sea animals for some distance from hydrothermal activity. Non-venting habitats less than a few meters away (e.g., the sides of inactive solidified carbonate structures, sedimented areas, and breccia cap rock just to the north of the field) are dominated by hard corals (Lophelia pertusa and Desmophyllum), octocorals (gorgonians), galatheid crabs, turrid gastropods, foraminifera, pteropods, urchins, asteroids, ophiuroids, and typical deep-sea barnacles. Vent and non-vent habitats are strongly segregated at the Lost City.
This wreckfish, swimming between carbonate chimneys, is just over 1 meter in length. They are common near the summit of the massif and within the Lost City Field at a depth of ~750-800 m. They routinely followed the submersible Alvin during many dives in 2003. Click image for larger view and image credit. (HR)
This crab was recovered from the edge of the Lost City field in 2003 at a water depth of ~ 750 m. Animals of this size are rare within the field. Although total biomass is small the diversity of fauna is as high or higher than that of black smoker sites along the Mid-Atlantic Ridge. Click image for larger view and image credit. (HR) | <urn:uuid:8ccbff49-f5e5-4142-a610-ac49f9cd1b30> | 3.375 | 713 | Knowledge Article | Science & Tech. | 39.104476 |
(a) In a charge current, spin “up” and “down” electrons flow together. In a spin current, up and down electrons flow in opposite directions. (b) A schematic of the spin Hall effect. Spin-orbit coupling induces an orbital motion opposite in direction to the electron spin, deflecting up- and down-spin electrons in opposite directions. The net effect is a conversion of charge into spin currents. | <urn:uuid:e4770dfe-e174-4b7f-9fdf-11ab7de87df9> | 3.140625 | 90 | Knowledge Article | Science & Tech. | 51.155182 |
Is there an official specification for the round function in Haskell? In GHCi version 7.0.3 I see the following behaviour:

ghci> round (0.5 :: Double)
0
ghci> round (1.5 :: Double)
2
Since both 0.5 and 1.5 are representable exactly as floating point numbers, I expected to see the same behaviour as in Python:

>>> round(0.5)
1.0
>>> round(1.5)
2.0
Is there a rationale for the difference, or is it a quirk of GHCi?
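For context (an addition of mine, not part of the original question): Haskell's behaviour is in fact specified — the Haskell Report defines round as round-half-to-even ("banker's rounding"), and Python 3 later switched its built-in round() to the same rule, so the Python 2 session above no longer matches modern Python either:

```python
# Round-half-to-even ("banker's rounding"): exact halves go to the
# nearest even integer, avoiding a systematic upward bias. This is
# what Haskell's `round` does, and what Python 3's round() does too.
halves = [0.5, 1.5, 2.5, 3.5]
print([round(h) for h in halves])  # [0, 2, 2, 4]
```

Each of these halves is exactly representable as a binary float, so the ties really are ties and the even-neighbor rule decides them.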
Science Fair Project Encyclopedia
Ladybirds (Commonwealth English), also known as ladybugs (American English, Canadian English) or lady beetles (some scientists favor this) are a family, Coccinellidae ("little sphere"), of beetles; the name is thought to allude to the Blessed Virgin Mary in the Catholic faith. Ladybirds are found worldwide, with over 4,500 species described, more than 450 native to North America alone. Ladybirds are small insects, ranging from 1 mm to 10 mm (0.04 to 0.4 inches), and are usually yellow, orange, or red with small black spots on their carapace, and black legs, head and feelers. As the family name suggests, they are usually quite round in shape.
Because they are useful, colorful, and harmless to humans, ladybugs are typically considered cute even by people who hate most insects.
Ladybirds are beneficial to organic gardeners because most species are insectivores, consuming aphids, fruit flies, thrips, and other tiny plant-sucking insects that damage crops. In fact, their name is derived from "Beetle of Our Lady", recognizing their role in saving crops from destruction. Today they are commercially available from a variety of suppliers.
Adult ladybirds are able to reflex-bleed from their leg joints. The blood is yellow, with a strong repellent smell, and is quite obvious when one handles a ladybird roughly.
Ladybirds are and have for very many years been favourite insects of children, who are reputed to regard them tenderly. The insects had many regional names (now mostly disused) such as the lady-cow, May-bug, golden-knop, golden-bugs (Suffolk); and variations on Bishop-Barnaby (Barney, Burney) Barnabee, Burnabee, and the Bishop-that-burneth.
The ladybird is immortalized in the children's nursery rhyme extant:
- Ladybird, ladybird, fly away home
- Your house is on fire and your children are gone
- All except one, and that's Little Anne
- For she has crept under the warming pan.
- Ladybird, ladybird, fly away home
and ancient (recounted in an 1851 publication):
- Dowdy-cow, dowdy-cow, ride away heame,
- Thy house is burnt, and thy bairns are tean,
- And if thou means to save thy bairns
- Take thy wings and flee away!
The name which the insect bears in the various languages of Europe is clearly mythic. In this, as in other cases, the Blessed Virgin Mary has supplanted Freya, the fertility goddess of Norse mythology; so that Freyjuhaena and Frouehenge have been changed into Marienvoglein, which corresponds with Our Lady's Bird. There can, therefore, be little doubt that the esteem with which the lady-bird, or Our Lady's cow, is still regarded is a relic of ancient beliefs.
The ladybird is the symbol of the Dutch Foundation Against Senseless Violence.
Note that not all individuals show the number of spots suggested by their names:
- Seven-spotted lady beetle (Coccinella septempunctata)
- Two-spotted lady beetle (Adalia bipunctata)
- Convergent lady beetle (Hippodamia convergens)
- Spotted lady beetle (Coleomegilla maculata)
- Twice-stabbed lady beetle (Chilocurus stigma)
- Mexican bean beetle (Epilachna varivestis Mulsant)
- Asian Lady Beetle (Harmonia axyridis)
For a complete list of genera, see list of Coccinellidae genera.
- I. Hodek & A. Honek, Ecology of Coccinellidae (Dordrecht: Kluwer, 1996)
- Taxonomy of Coccinellids
- Report sightings of the Harlequin Ladybird in the British Isles
- Discussion of Bishop-Barnaby in Notes and Queries, November 24 1849 at Project Gutenberg
- Bishop-Barnaby discussion continued in Notes and Queries, December 29 1849 at Project Gutenberg
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
I Sense Dinner Is Ready
We should all be lucky enough to have the sense of a bat, at least their auditory sense. If we did, we'd never lose glasses, keys, or anything else. All we'd have to do is stand in the middle of a room and hum or click our tongues to locate the things we misplace.
Of all the bats in the world, a sub-order of the species called microchiroptera, have a special skill known as echolocation. This sound/audio sense is used by more than 800 varieties of bats in the sub-order.
Each variety issues its own peculiar vocal signals that we will never hear, because they are in a kHz range beyond the human ear. Depending on the species, the bat's auditory system may be geared to a specific range, such as the 60-61 kHz bracket used by the moustached bat.
Like the human ear, the bat's ear has a basilar membrane in the cochlea that vibrates when receiving sounds and transforms the vibrations into neural signals. But unlike in humans, that membrane is thickened in exactly the areas that best perceive a given frequency range. Researchers have even discovered that ganglion cells in the brain can also be hyper-developed to receive signals of a specific frequency.
Bats use their echolocation to gather information on many things, primarily the insects that make up their diet. By issuing a sound and receiving back signals created by the sound bouncing off an insect, a bat can tell the insect's location, size, how fast it is moving, and whether it is fluttering.
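The ranging arithmetic behind this is straightforward: the echo's round-trip delay, times the speed of sound, halved, gives the distance to the target. A small illustrative sketch (the delay value is hypothetical, not a measurement from this article):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_distance(round_trip_delay_s: float) -> float:
    """Distance to a target, given the round-trip delay of its echo.

    The sound travels out and back, hence the division by two."""
    return SPEED_OF_SOUND * round_trip_delay_s / 2.0

# A 10 ms round trip puts an insect about 1.7 m away.
print(round(echo_distance(0.010), 3))  # 1.715
```

Changes in the echo's frequency and amplitude over successive pulses are what carry the speed and fluttering information.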
American solar satellite. One launch, 1980.02.14. The Solar Maximum Mission (SMM) was intended primarily to study solar flares and related phenomena.
Launched during a period of maximum solar activity, SMM observed more than 12,000 flares and over 1,200 coronal mass ejections during its 10 year lifetime.
SMM provided measurements of total solar radiative output, transition region magnetic field strengths, storage and release of flare energy, particle accelerations, the formation of hot plasma, and coronal mass ejection. The payload also observed the short-wavelength and coronal manifestations of flares.
Observations from SMM were coordinated with in situ measurements of flare particle emissions made by the ISEE 3 satellite. SMM was the first satellite to be retrieved, repaired and redeployed in orbit. In 1984, the STS-41C Shuttle crew restored the spacecraft's malfunctioning attitude control system and replaced a failed electronics box for the coronagraph/polarimeter. SMM collected data until 24 November 1989, and re-entered on 2 December 1989.
The solar payload instruments and the sun-sensor system were contained in the instrument module occupying the top 2.3 meters of the craft. The Multimission Modular Spacecraft (MMS), below the instrument module, contained the systems for attitude control, power, communication, and data handling. Two fixed solar panels, located between the instrument module and the MMS supplied 1500-3000 W. The fine-pointing Sun-sensor system had a precision of 1 arcsec along all 3 axes. The payload included:
- Active Cavity Radiometer Irradiance Monitor (ACRIM) - measured total solar irradiance.
- Gamma Ray Spectrometer (GRS) - studied the composition of solar and interstellar gamma ray emissions.
- Hard X-ray Burst Spectrometer (HXRBS) - studied hard X-ray spectra of solar flares in 15 energy channels between 20-260 keV.
- Soft X-ray Polychromator (XRP) - monitored soft X-ray emissions.
- Hard X-ray Imaging Spectrometer (HXIS)
- Ultraviolet Spectrometer and Polarimeter (UVSP) - a raster imager providing 0.04 Å spectral resolution.
- Coronograph/Polarimeter - studied the faint solar corona between 2 and 5 solar radii with a 6.4 arcsec resolution.
AKA: Solar Maximum Mission.
More... - Chronology...
Gross mass: 2,315 kg (5,103 lb).
Height: 4.00 m (13.10 ft).
First Launch: 1980.02.14.
Number: 1 .
Delta The Delta launch vehicle was America's longest-lived, most reliable, and lowest-cost space launch vehicle. Development began in 1955 and it continued in service in the 21st Century despite numerous candidate replacements. More...
Associated Launch Vehicles
Delta American orbital launch vehicle. The Delta launch vehicle was America's longest-lived, most reliable, and lowest-cost space launch vehicle. Delta began as Thor, a crash December 1955 program to produce an intermediate range ballistic missile using existing components, which flew thirteen months after go-ahead. Fifteen months after that, a space launch version flew, using an existing upper stage. The addition of solid rocket boosters allowed the Thor core and Able/Delta upper stages to be stretched. Costs were kept down by using first and second-stage rocket engines surplus to the Apollo program in the 1970's. Continuous introduction of new 'existing' technology over the years resulted in an incredible evolution - the payload into a geosynchronous transfer orbit increasing from 68 kg in 1962 to 3810 kg by 2002. Delta survived innumerable attempts to kill the program and replace it with 'more rationale' alternatives. By 2008 nearly 1,000 boosters had flown over a fifty-year career, and cancellation was again announced. More...
Delta 3910 American orbital launch vehicle. Three stage vehicle consisting of 9 x Castor 4 + 1 x ELT Thor/RS-27 + 1 x Delta P /TR-201 More...
Delta 3000 American orbital launch vehicle. The Delta 3000 series upgraded the boosters to Castor 4 solid propellant strap-ons, while retaining the Extended Long Tank core with RS-27 engine. The 3910 series used the TRW Lunar Module engine in the second stage, while the 3920 series reintroduced the Aerojet AJ110 Delta engine. More...
Associated Manufacturers and Agencies
NASA American agency overseeing development of rockets and spacecraft. National Aeronautics and Space Administration, USA, USA. More...
Fairchild American manufacturer of rockets, spacecraft, and rocket engines. Fairchild, USA. More...
McDowell, Jonathan, Jonathan's Space Home Page (launch records), Harvard University, 1997-present. Web Address when accessed: here.
JPL Mission and Spacecraft Library, Jet Propulsion Laboratory, 1997. Web Address when accessed: here.
McDowell, Jonathan, Launch Log, October 1998. Web Address when accessed: here.
Associated Launch Sites
Cape Canaveral America's largest launch center, used for all manned launches. Today only six of the 40 launch complexes built here remain in use. Located at or near Cape Canaveral are the Kennedy Space Center on Merritt Island, used by NASA for Saturn V and Space Shuttle launches; Patrick AFB on Cape Canaveral itself, operated the US Department of Defense and handling most other launches; the commercial Spaceport Florida; the air-launched launch vehicle and missile Drop Zone off Mayport, Florida, located at 29.00 N 79.00 W, and an offshore submarine-launched ballistic missile launch area. All of these take advantage of the extensive down-range tracking facilities that once extended from the Cape, through the Caribbean, South Atlantic, and to South Africa and the Indian Ocean. More...
Cape Canaveral LC17A Delta launch complex. Part of a dual launch pad complex built for the Thor ballistic missile program in 1956. Pad 17A supported Thor, Delta, and Delta II launches into the 21st Century. More...
1980 February 14 - 15:57 GMT
Launch Site: Cape Canaveral. Launch Complex: Cape Canaveral LC17A.
LV Family: Delta. Launch Vehicle: Delta 3910. LV Configuration: Delta 3910 635/D151.
SMM
Payload: Solar Maximum Mission. Mass: 2,315 kg (5,103 lb). Nation: USA. Agency: NASA Greenbelt. Class: Astronomy. Type: Solar astronomy satellite. Spacecraft: SMM. Decay Date: 1989-12-02. USAF Sat Cat: 11703. COSPAR: 1980-014A. Apogee: 408 km (253 mi). Perigee: 405 km (251 mi). Inclination: 28.5000 deg. Period: 92.70 min. Summary: Solar Maximum Mission; solar observatory; repaired in orbit by STS-41C in April 1984. Spacecraft engaged in practical applications and uses of space technology such as weather or communication (US Cat C).
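As a rough cross-check of the catalogue entry, the listed period follows from the apogee and perigee via Kepler's third law. This sketch assumes standard values for Earth's mean radius and gravitational parameter (not figures from this catalogue):

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2, Earth's standard gravitational parameter
R_EARTH_KM = 6371.0        # km, Earth's mean radius

def orbital_period_min(apogee_km: float, perigee_km: float) -> float:
    """Period of an elliptical Earth orbit, from apogee/perigee altitudes."""
    # Semi-major axis: average of the two geocentric distances, in metres.
    a_m = (2 * R_EARTH_KM + apogee_km + perigee_km) / 2 * 1000.0
    return 2 * math.pi * math.sqrt(a_m**3 / MU_EARTH) / 60.0

# 408 km x 405 km comes out near the 92.70 min listed for SMM.
print(round(orbital_period_min(408, 405), 1))
```

The residual of a fraction of a minute against the listed 92.70 min is within the slack introduced by rounding the altitudes and by the choice of mean Earth radius.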
1989 November 23 - Solar Maximum ends operating life.
Nation: USA. Spacecraft: SMM. Summary: SMM finished collecting data. It re-entered on December 2, 1989.
Microsoft® Visual Basic® Scripting Edition
Language Reference
The Is operator is also a comparison operator, but it is used exclusively for determining whether one object reference is the same as another.
Const A = "MyString"
Byte: 0 to 255.
Boolean: True or False.
Integer: -32,768 to 32,767.
Long: -2,147,483,648 to 2,147,483,647.
Single: -3.402823E38 to -1.401298E-45 for negative values; 1.401298E-45 to 3.402823E38 for positive values.
Double: -1.79769313486232E308 to -4.94065645841247E-324 for negative values; 4.94065645841247E-324 to 1.79769313486232E308 for positive values.
Currency: -922,337,203,685,477.5808 to 922,337,203,685,477.5807.
Date: January 1, 100 to December 31, 9999, inclusive.
Object: Any Object reference.
String: Variable-length strings may range in length from 0 to approximately 2 billion characters.
Dates are stored as part of a real number. Values to the left of the decimal represent the date; values to the right of the decimal represent the time. Negative numbers represent dates prior to December 30, 1899.
In VBScript, the only recognized format is US-ENGLISH, regardless of the actual locale of the user. That is, the interpreted format is mm/dd/yyyy.
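The serial format described above (whole days since December 30, 1899 to the left of the decimal, fraction of a day to the right) can be illustrated outside VBScript as well. A small Python sketch of the convention (positive serials only; VBScript treats the fractional part of a negative serial as a time of day, which this sketch does not reproduce):

```python
from datetime import datetime, timedelta

# Day 0 of the VBScript date serial is December 30, 1899.
EPOCH = datetime(1899, 12, 30)

def serial_to_datetime(serial: float) -> datetime:
    """Convert a non-negative VBScript-style date serial to a datetime.

    The integer part counts days from the epoch; the fractional
    part is the time of day (0.5 is noon)."""
    return EPOCH + timedelta(days=serial)

print(serial_to_datetime(2.5))  # 1900-01-01 12:00:00
```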
Note that script-level code resides outside any procedure blocks.

Sub MySub()           ' This statement declares a sub procedure block.
    Dim A             ' This statement starts the procedure block.
    A = "My variable" ' Procedure-level code.
    Debug.Print A     ' Procedure-level code.
End Sub               ' This statement ends a sub procedure block.
Welcome back for another episode in the pattern series! This will also be the last article about Design Patterns, since I've finished reading the Head First Design Patterns book :)
It's been a very interesting journey, lots of new patterns learned, lots of knowledge gained, and now it's time to apply them in real projects.
As a summary, the overview of all articles about patterns, including the one we're going to see today:
Let's get started! Make sure you're seated comfortably, it's going to be a long one today!
The definition, as usual: "Provide a surrogate or placeholder for another object to control access to it."
A new request popped up: we need to add a multiplayer option to our game, featuring a Lobby where users can get in touch with each other.
This lobby is going to be running on another machine, or in our case, just another console application to illustrate it.
First of all, we're going to start by adding an interface ILobby to define our lobby.
We'll place this interface in a separate library, to better illustrate the Proxy Pattern later on. This way you can clearly see on which machine a specific piece of code is located.
Time to create our actual Lobby implementation!
As mentioned before, we will create this in a separate project to clearly show that the code for the Lobby is not located on the same machine as our main client.
A very simple Lobby implementation, containing nothing more than a List<string> of Users.
At this stage, we can have our Lobby on one machine, but how do we add users to it from another machine?
This is where the Proxy Pattern comes in!
Just to make one thing clear, the Proxy pattern comes in many different shapes, we're using it to give a client access to a remote object, by means of a placeholder, reminds you of the definition, doesn't it?
In our case, it's called a 'remote proxy'; there is also a 'virtual proxy', a 'protection proxy' and more.
A virtual proxy can for example serve as a placeholder for an object which is expensive to create, if you want to retrieve an image over the internet, you could display an icon via a proxy while the real image is loading in the background.
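That lazy-loading idea is easy to sketch. The following is an illustrative, language-agnostic example in Python (the article's code is C#; all names here are hypothetical): the proxy defers creating the expensive object until it is actually needed.

```python
class RealImage:
    """The expensive object: pretend construction performs a slow download."""
    def __init__(self, url):
        self.url = url
        self.pixels = f"<pixels of {url}>"  # stands in for the real download

    def display(self):
        return self.pixels

class ImageProxy:
    """Virtual proxy: a cheap placeholder that only builds the
    RealImage on the first call to display()."""
    def __init__(self, url):
        self.url = url
        self._real = None

    def display(self):
        if self._real is None:
            self._real = RealImage(self.url)  # lazy creation happens here
        return self._real.display()

proxy = ImageProxy("http://example.com/cat.png")
# Nothing expensive has happened yet; the download is deferred until:
print(proxy.display())
```

The client works with the proxy through the same interface it would use on the real object, which is the common thread across all proxy variants.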
So, let's create this placeholder on the client side.
By placing a reference to our previously created ILobby, this proxy allows our client to work with, what it believes to be, a real lobby object.
In reality, however, there is no Lobby implementation in our client code at all; it is merely a placeholder which implements the correct interface.
You might have noticed the notion of a Socket already :)
Our proxy object might implement the correct ILobby interface, but if we want it working, we will eventually need to call our real Lobby object.
In a first step this is done using sockets to connect to the server and communicate with the real object.
Before you go screaming .NET Remoting, hold your horses! This is meant to illustrate what is going on behind the scenes with a remote proxy.
When you're coming from the Java world, you might have heard people mentioning a Skeleton.
This is not something creepy, but simply a class on the server side which intercepts calls from the Proxy, talks to the real object, and sends the results back.
Here's a small part of our Skeleton code:
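The original C# snippet is not reproduced in this copy of the article, so as a stand-in, here is a minimal sketch in Python (all names hypothetical) of what a skeleton does: decode an incoming request, invoke the same method on the real object, and encode the reply.

```python
import json

class Lobby:
    """Stand-in for the real remote object living on the server."""
    def __init__(self):
        self.users = []

    def register_user(self, name):
        self.users.append(name)
        return len(self.users)  # number of users now in the lobby

class Skeleton:
    """Server-side counterpart of the client's proxy: it intercepts
    serialized calls, talks to the real object, and returns the result."""
    def __init__(self, target):
        self.target = target

    def handle(self, raw_request):
        request = json.loads(raw_request)         # would arrive over a socket
        method = getattr(self.target, request["method"])
        result = method(*request["args"])         # call the real object
        return json.dumps({"result": result}).encode()

skeleton = Skeleton(Lobby())
reply = skeleton.handle(b'{"method": "register_user", "args": ["Alice"]}')
print(reply)  # b'{"result": 1}'
```

This is exactly the plumbing that .NET Remoting hides behind the scenes, as the article shows further down.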
As you can see, the Proxy talks to the Skeleton, which talks to the Real Object, after which it sends the response back over the wire.
When we put all of this in action, we see the following happening:
Our main client talked to a remote Lobby and registered some users, great!
Now that we've seen how a proxy serves as a placeholder, it's time to clean this code and get rid of all the socket stuff.
Just a note, all the socket stuff in the project is highly unstable and not meant for production! Don't use it!! It's only meant for demo purposes :)
Let's work on the server side first by removing the Skeleton code, referencing System.Runtime.Remoting, and letting our Lobby class inherit MarshalByRefObject.
Last step needed for our server is to expose the Lobby object through Remoting, which is nothing more than having something like our previous Skeleton code hidden behind the scenes.
The only thing left to do now is to remove the Proxy class from our main project, and use Remoting to get an instance of ILobby, which acts as a proxy behind the scenes.
Resulting output when we run this version? Exactly the same! But a lot less work to implement :)
And that's it, another pattern in our heads!
I've uploaded the solution again to have a look at. When you run it, make sure your run the GameServer first, unblock it on your Windows Firewall, and then run the Proxy project.
Well, that's it, the last part of my series. I hope you liked them and learned a lot from it. Be sure to keep on visiting for some other tech subjects coming up soon.
You can always subscribe to the RSS feed to stay informed.
Thanks to the people who were generous enough to donate a little bit after reading some of these articles! (If you'd like to donate, simply use the PayPal button on the left :))
See you soon!
First, there are quite a few preprocessor directives out there (14 in total, I believe), with #include winning the popularity contest. Others that are used often are #pragma and #ifndef and all of its family. Let's take a look at pragma:
Each implementation of C and C++ supports features unique to its host machine or operating system. Various programs/applications must exert quite a bit of control over their memory allocation and function parameters. The #pragma directive is a way to offer each compiler machine- and/or operating-system-specific features while (attempting) to maintain overall compatibility with the C and C++ languages. OK! Enough definition, let's see some examples!
My favorite use of pragma is to avoid annoying linker problems with the following code segment:
#pragma comment (lib, "winmm.lib") //example
This has the compiler directly insert that library into the code in case (for whatever reason) you were having problems linking it. It places a library-search record in the object file. This comment type must be accompanied by a commentstring parameter containing the name (and possibly the path) of the library that you want the linker to search.
Overall, pragma has some versatility, as it can be used in conditional statements such as #ifndef and all of its family. For C++ users, the pragma comment has ~32 commands (C has 28 or so) that can be used.
Ever want a faster build time? Good news is that pragma has a directive just for you:
// some header.h file
#pragma once
This directive specifies that the file will be included (opened) only once by the compiler in a build. This can reduce build times as the compiler will not open and read the file after the first #include of the module. However, it is worth noting that #pragma directives are compiler specific and will vary depending on what you are using (I’m currently using MSVS 2005). If the compiler does not support a specific argument for #pragma, it is ignored - no error is generated.
Let’s discuss the range of #if… statements:
#define #ifdef #ifndef #endif #undef #elif #else #if !defined …etc…
These directives allow us to include or discard parts of the code of a program if a certain condition is met.
#ifdef allows a section of a program to be compiled only if the macro that is specified as the parameter has been defined, no matter what its value is. For example:
#ifdef ARRAY_SIZE
int array[ARRAY_SIZE];
#endif
In this case, the line of code int array[ARRAY_SIZE]; is only compiled if ARRAY_SIZE was previously defined with #define, independently of its value. If it was not defined, that line will not be included in the program compilation.
#ifndef serves for the exact opposite: the code between #ifndef and #endif directives is only compiled if the specified identifier has not been previously defined. For example:
#ifndef ARRAY_SIZE
#define ARRAY_SIZE 100
#endif
int array[ARRAY_SIZE];
In this case, if when arriving at this piece of code, the ARRAY_SIZE macro has not been defined yet, it would be defined to a value of 100. If it already existed it would keep its previous value since the #define directive would not be executed.
The #if, #else and #elif (i.e., "else if") directives serve to specify some condition to be met in order for the portion of code they surround to be compiled. The condition that follows #if or #elif can only evaluate constant expressions, including macro expressions. For example:
#if ARRAY_SIZE > 200
#undef ARRAY_SIZE
#define ARRAY_SIZE 200
#elif ARRAY_SIZE < 50
#undef ARRAY_SIZE
#define ARRAY_SIZE 50
#else
#undef ARRAY_SIZE
#define ARRAY_SIZE 100
#endif
int array[ARRAY_SIZE];
Notice how the whole structure of #if, #elif and #else chained directives ends with #endif.
The behavior of #ifdef and #ifndef can also be achieved by using the special operators defined and !defined respectively in any #if or #elif directive:
#if !defined TABLE_SIZE
#define TABLE_SIZE 100
#elif defined ARRAY_SIZE
#define TABLE_SIZE ARRAY_SIZE
#endif
int table[TABLE_SIZE];
Line control (#line)
When we compile a program and some error(s) happen during the compiling process, the compiler shows an error message with references to the name of the file where the error happened and a line number, so it is easier to find the code generating the error.
The #line directive allows us to control both things: the line numbers within the code files, as well as the file name that we want to appear when an error takes place. Its format is:
#line number "filename"
Number is the new line number that will be assigned to the next code line. The line numbers of successive lines will be increased one by one from this point on.
"filename" is an optional parameter that allows to redefine the file name that will be shown. For example:
#line 20 "assigning variable" int a?;
This code will generate an error that will be shown as error in file "assigning variable", line 20.
Error directive (#error)
This directive aborts the compilation process when it is found, generating the compilation error that is specified as its parameter:
#ifndef __dreamincode
#error A C++ compiler is required!
#endif
This example aborts the compilation process if the macro name __dreamincode is not defined.
With all that knowledge under our proverbial belt, let us move on to a powerful (perhaps underused?) feature of preprocessor directives -> macros. Yes, yes, we all know what macros are, but let's see the possible usefulness (AND DANGER) of one defined in a preprocessor directive:
// Macro to get a random integer with a specified range
// (example taken from MSDN)
#define getrandom(min, max) \
    ((rand() % (int)(((max) + 1) - (min))) + (min))
Here it is almost like a function call, but all included in the preprocessor directive. Wherever getrandom() is called, this code is substituted in. Here is another example of defining your own macro:
#define MAX(a, b) (((a) > (b)) ? (a) : (b))
A classic function-like macro that returns the greater of two values. Again, it may not be in your best interest to declare and use a directive such as that, but it is good to know the capability exists.
Using macros in this sort of fashion is not always recommended! The results can be unpredictable and crashing is likely (if not implemented properly). In fact, if there is more than one programmer using/modifying the code (i.e. anything that is not one of your personal projects), it would be better to use functions instead of a macro in the above case. These problems have led to several naming conventions among programmers, most notably ALL CAPS with underscores to denote periods or spaces in the name. This is used when including .h files (#define __HELPER_H) and when defining macros (#define MAX ...). If multiple people go about doing the same task in a variety of ways, there will be problems. Unfortunately, it will be the kind that is small and annoying to catch (dangling pointers anyone?).
This use of #define is BAD PRACTICE!!!:
#define getmax(a, b) (((a) > (b)) ? (a) : (b))
Unless explicitly identified, no one would guess that that is a macro… Here is the entire warning in a nutshell:
Because preprocessor replacements happen before any C++ syntax check, macro definitions can be a tricky feature; be careful: code that relies heavily on complicated macros may appear obscure to other programmers, since the syntax macros use is on many occasions different from the regular expressions programmers expect in C++.
Here's a list outlining the predefined 'constant' macros. (Notice the underscores and all-caps convention, except for the last one.)
The following macro names are defined at any time:
__LINE__ - Integer value representing the current line in the source code file being compiled.
__FILE__ - A string literal containing the presumed name of the source file being compiled.
__DATE__ - A string literal in the form "Mmm dd yyyy" containing the date on which the compilation process began.
__TIME__ - A string literal in the form "hh:mm:ss" containing the time at which the compilation process began.
__cplusplus - An integer value. All C++ compilers have this constant defined to some value. If the compiler is fully compliant with the C++ standard, its value is equal to or greater than 199711L, depending on the version of the standard it complies with.
Why is this important to C and C++ programmers? Well, it may or may not be, for a variety of reasons. If your project is massive, using directives such as #ifndef, #pragma once, etc. can save you hours of debugging time from twice-defined identifiers and bad function calls. Consider it a tool, much like declaring a variable const: if the desired output is wrong, then the function manipulating it must be in error, rather than a changed variable that went uncaught. Hopefully this will aid you in a better understanding of how the preprocessor works. There is so much material; it can ebb and flow to your specific needs. Happy coding!
Spring 2010 | Astronomy 110 | MWF 9:30 - 10:20
The Milky Way Galaxy is a vast pinwheel of stars and gas turning within an enormous cloud of invisible matter. Many generations of stars have formed and died within its disk, enriching our galaxy's stock of heavy elements. Before the disk formed, the future Milky Way probably existed as several distinct galaxies which fell together and merged.
Please read all of Chapter 14, along with the specific subsections of other chapters listed below.
• How is our solar system moving in the Milky Way Galaxy?
2.1 Patterns in the Night Sky (The Milky Way)
14.1 The Milky Way Revealed
14.3 The History of the Milky Way
14.4 The Mysterious Galactic Center
16.2 Evidence for Dark Matter (Distribution of Mass in the Milky Way)
Joshua E. Barnes
(barnes at ifa.hawaii.edu)
17 April 2010
In a study published online Feb. 20 in PLOS One, Cornell biomedical engineers and Weill Cornell Medical College physicians described how 3-D printing and injectable gels made of living cells can fashion ears that are practically identical to a human ear. Over a three-month period, these flexible ears grew cartilage to replace the collagen that was used to mold them.
“This is such a win-win for both medicine and basic science, demonstrating what we can achieve when we work together,” said co-lead author Lawrence Bonassar, associate professor of biomedical engineering.
The novel ear may be the solution reconstructive surgeons have long wished for to help children born with ear deformity, said co-lead author Dr. Jason Spector, director of the Laboratory for Bioregenerative Medicine and Surgery and associate professor of plastic surgery at Weill Cornell in New York City.
“A bioengineered ear replacement like this would also help individuals who have lost part or all of their external ear in an accident or from cancer,” Spector said.
Replacement ears are usually constructed with materials that have a Styrofoam-like consistency, or sometimes, surgeons build ears from a patient’s harvested rib. This option is challenging and painful for children, and the ears rarely look completely natural or perform well, Spector said.
To make the ears, Bonassar and colleagues started with a digitized 3-D image of a human subject’s ear, and converted the image into a digitized “solid” ear using a 3-D printer to assemble a mold.
When the mold is removed, this Cornell-developed, high-density gel is similar in consistency to Jell-O. The collagen serves as a scaffold upon which cartilage can grow.
The process is also fast, Bonassar added: “It takes half a day to design the mold, a day or so to print it, 30 minutes to inject the gel, and we can remove the ear 15 minutes later. We trim the ear and then let it culture for several days in nourishing cell culture media before it is implanted.”
The incidence of microtia, which is when the external ear is not fully developed, varies from almost 1 to more than 4 per 10,000 births each year. Many children born with microtia have an intact inner ear, but experience hearing loss due to the missing external structure.
Spector and Bonassar have been collaborating on bioengineered human replacement parts since 2007. The researchers specifically work on replacement human structures that are primarily made of cartilage – joints, trachea, spine, nose – because cartilage does not need to be vascularized with a blood supply in order to survive.
“Using human cells, specifically those from the same patient, would reduce any possibility of rejection,” Spector said.
He added that the best time to implant a bioengineered ear on a child would be when they are about 5 or 6 years old. At that age, ears are 80 percent of their adult size.
If all future safety and efficacy tests work out, it might be possible to try the first human implant of a Cornell bioengineered ear in as little as three years, Spector said.
Blaine Friedlander | Source: Newswise
Further information: www.cornell.edu
New indicator molecules visualise the activation of auto-aggressive T cells in the body as never before
Biological processes are generally based on events at the molecular and cellular level. To understand what happens in the course of infections, diseases or normal bodily functions, scientists would need to examine individual cells and their activity directly in the tissue.
The development of new microscopes and fluorescent dyes in ...
A fried breakfast food popular in Spain provided the inspiration for the development of doughnut-shaped droplets that may provide scientists with a new approach for studying fundamental issues in physics, mathematics and materials.
The doughnut-shaped droplets, a shape known as toroidal, are formed from two dissimilar liquids using a simple rotating stage and an injection needle. About a millimeter in overall size, the droplets are produced individually, their shapes maintained by a surrounding springy material made of polymers.
Droplets in this toroidal shape made ...
Frauhofer FEP will present a novel roll-to-roll manufacturing process for high-barriers and functional films for flexible displays at the SID DisplayWeek 2013 in Vancouver – the International showcase for the Display Industry.
Displays that are flexible and paper thin at the same time?! What might still seem like science fiction will be a major topic at the SID Display Week 2013 that currently takes place in Vancouver in Canada.
High manufacturing cost and a short lifetime are still a major obstacle on ...
University of Würzburg physicists have succeeded in creating a new type of laser.
Its operation principle is completely different from conventional devices, which opens up the possibility of a significantly reduced energy input requirement. The researchers report their work in the current issue of Nature.
It also emits light the waves of which are in phase with one another: the polariton laser, developed ...
Innsbruck physicists led by Rainer Blatt and Peter Zoller experimentally gained a deep insight into the nature of quantum mechanical phase transitions.
They are the first scientists that simulated the competition between two rival dynamical processes at a novel type of transition between two quantum mechanical orders. They have published the results of their work in the journal Nature Physics.
“When water boils, its molecules are released as vapor. We call this ...
23.05.2013 | Physics and Astronomy
23.05.2013 | Health and Medicine
23.05.2013 | Ecology, The Environment and Conservation
17.05.2013 | Event News
15.05.2013 | Event News
08.05.2013 | Event News | <urn:uuid:e9a1b34c-e62a-4a53-a9bd-41eba0d84ac0> | 3.828125 | 1,276 | Content Listing | Science & Tech. | 44.241462 |
Case Studies in Earth & Environmental Science Journalism
Guest Scientist: Drew Shindell, NASA Goddard Institute for Space Sciences
Before Molina and Rowland’s 1974 paper, a large portion of the world attributed ozone decrease to airplanes and jets. Why do you think there wasn’t a lot of press coverage about their chlorofluorocarbon discovery? Did any article cover their study particularly well (or not well)?
The EPA ordered a phase-out of the non-essential uses of chlorofluorocarbons in 1978. However, it wasn’t until the 1985 Farman et al. paper that people started to really pay attention, especially within the international political community. Why do you think it took over a decade for widespread response? What was different about the Farman et al. paper?
Many of the articles about the 1974 Molina and Rowland paper were quick blurbs, not often directly mentioning their study. In the 1980s and beyond, articles tended to report more directly on the scientific studies themselves. Do you think that readership interests have helped this kind of environmental reporting become a more prevalent part of the media, or do you think the media has affected the readership’s interest?
What trends do you notice in the popular articles (ozone science) section?
Do the early articles do a good job explaining the science behind chlorofluorocarbon’s role in ozone depletion? Any particularly bad ones?
“The Ozone Layer; Cold Comfort” was the first article (that we could find) that dealt with the 1985 Farman et al paper. How well does it deal with the issue and science behind it?
Do you get a sense of balanced reporting in these articles?
The Montreal Protocol is considered to be the first and one of the best global environmental treaties. What aspects of the Montreal Protocol made it such a successful international treaty?
The Montreal Protocol addresses nine classes of ozone-depleting chemicals and how each will be managed in the future. However, the popular press typically focused on just one (CFCs) and occasionally two (CFCs and Halons) of these in their discussion. Why might that be?
Ozone depletion is a global issue and many nations came together to create the terms and conditions shown in the Montreal Protocol. How did coverage of the protocol differ between nations (United States, Australia, Canada)? Were different values and culture expressed in the different nations?
Who was affected more by the Montreal Protocol, the industries or the individual consumer? Is this reflected in the popular articles? How?
There were still vast amounts of uncertainty in the science of ozone depletion before and after the Montreal Protocol. Which of the popular articles reflect this uncertainty best? Worst?
Several of the articles address a link between ozone depletion and global warming. Do they address the scientific uncertainty in this link? What evidence would we still need to strengthen this link if there is one?
This September marked the 20-year anniversary of the signing of the Montreal Protocol. Looking at the 2007 articles, has the view on the Montreal Protocol changed? Is there just as much controversy now as ever or has it subsided? Has the protocol been successful?
The Montreal Protocol has been called a great predecessor to the Kyoto Protocol on climate change. Given what you have heard in the news about the Kyoto Protocol and read in this case study about the Montreal Protocol, what are the major similarities and differences between them? What was the role of the US in each? How much scientific uncertainty surrounds each? Does that affect the acceptance of the Protocol?
- “Ozone Depletion” – Wikipedia
- “Ozone Science” – US EPA website; answers questions about ozone-depleting substances, the current state of the ozone layer, and the health and environmental effects of ozone layer depletion, and dispels myths
- NASA Ozone Hole Watch – daily updates and maps
Crutzen, Paul. 1970. “Influence of Nitrogen Oxides on Atmospheric Ozone Content.” Quarterly Journal of the Royal Meteorological Society. 96(408):320.
Molina, M.J. & F.S. Rowland. 1974. “Stratospheric sink for chlorofluoromethanes: chlorine atom-catalysed destruction of ozone.” Nature 249: 810-812.
Farman, J.C., Gardiner, B.G., and J.D. Shanklin. 1985. “Large losses of total ozone in Antarctica reveal seasonal ClOx/NOx interaction.” Nature 315: 207-210.
Clyne, M.A.A. 1974. “Destruction of atmospheric ozone?” Nature 249: 796-797.
“Death to Ozone.” Time Magazine. October 7, 1974.
“Why aerosols are under attack.” Business Week. February 17, 1975
“Aerosol makers brace for ozone ordeal.” Chemical Week. June 11, 1975.
“Fftt Comes Back.” Time Magazine. September 22, 1975.
“Saving the Ozone.” Newsweek. September 27, 1976.
“Government ban on fluorocarbon gases in aerosol products begins October 15 (1978)” EPA Press Release. October 15, 1978.
“Bad data make the EPA duck.” Business Week. March 13, 1978.
Sullivan, Walter. “Low Ozone Level Found Above Antarctica.” The New York Times. November 7, 1985.
“The ozone layer; Cold comfort.” The Economist. July 13, 1985.
Tucker, Anthony. “Futures: The hole in the heavens/Thickness of the ozone layer.” The Guardian (London). July 25, 1986.
Peterson, Cass. “US Tax on Chlorofluorocarbons Sought to Help Save Ozone Layer; Institute Recommends Global Controls, Reduction of Emissions.” The Washington Post. November 30, 1986.
Cowen, Robert. “Geneva meeting on ozone focuses efforts to set world standards.” Christian Science Monitor. December 5, 1986.
Peterson, Cass. “Administration ozone policy may favor sunglasses, hats; Support for chemical cutbacks reconsidered.” The Washington Post. May 29, 1987.
Ardill, John. “Aerosol carbon blamed for hole in ozone layer.” The Guardian (London). August 7, 1987.
Peterson, Cass. “McDonald’s Repackaging Sandwiches to Guard Ozone; Chain Announces Environmental Gesture.” The Washington Post. August 6, 1987.
Peak, S. “Packaging not a health risk, says burger boss.” Herald. August 14, 1987.
“Montreal Protocol on Substances that Deplete the Ozone Layer” 1987. as adjusted and/or amended in London 1990, Copenhagen 1992, Vienna 1995, Montreal 1997, Beijing 1999. (SUMMARY)
Maddox, John. 1987. “The great ozone controversy.” Nature 329: 101.
Keating, Michael. “Ratification of ozone pact thought likely to take a year.” The Globe and Mail (Canada). September 17, 1987.
Weisskopf, Michael. “45 Nations Near Treaty on Ozone; Chemical Production Would be Curbed to Protect Atmosphere.” The Washington Post. September 16, 1987.
Shabecoff, Philip. “Dozens of Nations Reach Agreement To Protect Ozone.” The New York Times. September 17, 1987.
Shabecoff, Philip. “Washington Talk: State Department; The Environment as a Diplomatic Issue.” The New York Times. December 25, 1987.
Gleick, James. “Sharp Ozone Drop Found Worldwide in 8-Year Period.” The New York Times. January 1, 1988.
Peak, S. “New Warning on Our Thinning Ozone.” Herald (Australia). January 1, 1988.
Nelson-Horchler, Joani. “Ozone’s price tag: CFC substitutes will be expensive.” Industry Week. February 15, 1988.
Gleick, James. “Treaty Powerless to Stem a Growing Loss of Ozone.” The New York Times. March 20, 1988.
Erlichman, James. “Du Pont opens way to death of ozone eater.” The Guardian (London). March 26, 1988.
“Chlorofluorocarbons; On the way out, with luck.” The Economist. May 21, 1988.
Cowen, Robert. “Turning the Heat Off.” Christian Science Monitor. June 30, 1988.
Greaves, William. “The man who saw the hole; Joe Farman; Ozone layer; Spectrum.” The Times (London). August 1, 1988.
Landrey, Wilburg G. “The changing climate is creating the world crisis.” St. Petersburg Times (Florida). August 14, 1988.
Garelik, Glen. “Environment A Breath of Fresh Air.” Time Magazine. September 28, 1987
Melkert, Ad. “Twenty year later, the Montreal Protocol is still making its mark; The pact was a ground-breaking deal that set the stage for Kyoto.” The Gazette (Montreal). September 16, 2007.
Mulroney, Brian. “Twenty Years Later, Learning from Success.” National Post (f/k/a The Financial Post) (Canada). September 17, 2007.
Revkin, Andrew C. “From Ozone Success, a Potential Climate Model.” The New York Times. September 18, 2007.
Lieberman, Ben. “Ozone: the hole truth.” The Washington Times. September 19, 2007.
When the main function of your program is invoked, it already has
three predefined streams open and available for use. These represent
the “standard” input and output channels that have been established
for the process.
These streams are declared in the header file stdio.h.
— Variable: FILE * stdin
The standard input stream, which is the normal source of input for the program.
— Variable: FILE * stdout
The standard output stream, which is used for normal output from the program.
— Variable: FILE * stderr
The standard error stream, which is used for error messages and
diagnostics issued by the program.
In the GNU system, you can specify what files or processes correspond to
these streams using the pipe and redirection facilities provided by the
shell. (The primitives shells use to implement these facilities are
described in File System Interface.) Most other operating systems
provide similar mechanisms, but the details of how to use them can vary.
In the GNU C library, stdin, stdout, and stderr are
normal variables which you can set just like any others. For example,
to redirect the standard output to a file, you could do:
Note however, that in other systems stdin, stdout, and
stderr are macros that you cannot assign to in the normal way.
But you can use freopen to get the effect of closing one and
reopening it. See Opening Streams.
The three streams stdin, stdout, and stderr are not
unoriented at program start (see Streams and I18N).
Published under the terms of the GNU General Public License
Entomological Survey of Rio Bravo Conservation and Management Area, Belize
Peter Kovarik, John Shuey, and Chris Carlton
During the mid-late 1990's, lepidopterist John Shuey became interested in testing the widely touted concept that insect communities are useful in evaluating impacts to ecological integrity in tropical forest communities. At that time the literature had been advocating insects, especially butterflies, as appropriate indicators for assessing the impacts of specific management activities on tropical forests. People were already using insects as indicators, despite the fact that this simple premise had yet to be tested. John enlisted coleopterist Peter Kovarik as a partner in this study. The two decided that in addition to butterflies, scarabaeine scarabs and hister beetles would become part of the study. The taxa that were selected were chosen in part because of their susceptibility to bait and/or passive trapping techniques. In fact the beauty of this study was that we envisioned relatively little active collecting. This way we could quasi enjoy ourselves while our traps were filling with insects!
The site chosen for our study was Rio Bravo Conservation Area located in Orange Walk Province, Belize. Rio Bravo is a 230,000 acre nature preserve in the northwestern corner of the country near the corner where Belize, Guatemala, and Mexico meet. Rio Bravo is a beautiful mosaic of semitropical moist forest, savanna, and wetland habitats with over 230 species of trees, 70 species of mammals, and approximately 400 species of birds. Among large animals, the area has healthy populations of jaguar, puma, Baird's tapir, and two species of monkeys. There are also significant Mayan archeological sites, and the area has a colorful recent history of mahogany logging, chicle extraction, and marijuana farming. This preserve is administered by Programme For Belize (PfB), a private conservation organization that holds Rio Bravo in trust for the people of Belize. PfB integrates elements of sustainable forestry and natural product harvesting, ecotourism, and education into a single comprehensive long-term management plan. For a location map of Rio Bravo click on the PfB logo above. For additional information about The Nature Conservancy's role, click on that logo.
Rio Bravo turned out to be an ideal setting for our study. In addition to extensive areas of oak-pine savanna, there were huge expanses of contiguous limestone rainforest without excessive topographic variability. Within this expanse of fairly homogenous habitat, there were areas along trails or roads through the forest that had been fairly recently logged. We sought to embed our sample sites within areas that were both recently disturbed by man and others which had been relatively untouched for quite some time. Because Rio Bravo is a preserve and PfB encourages research activities, we felt confident that human impact on both the ecosystem and our sampling devices would be minimal. Thus, it seemed probable that any potential differences we observed in insect communities between our sample sites would be primarily due to forest integrity rather than extraneous factors.
Our core studies were conducted once during the dry season and twice during the rainy season. They were completed in 1996. Since neither John nor Pete knew much about scarabaeine scarabs, expert Bill Warner was coaxed into participating by promising him many scarabs with no strings attached. In fact Bill eventually joined us in Rio Bravo in July 1996. We are still awaiting some of our quantitative insect data, but we have botanical assessments of each forest tract, and data from the sample events for butterflies and hister beetles in hand. As soon as the last of our insect data is cleaned up, we will finish off the ecological end of this work.
Like many studies, this one began small and focused and eventually expanded into a full blown entomological survey. Coleopterist Chris Carlton became the third major player in our survey. Chris has made several trips to Rio Bravo and done some intensive leaf litter work and set up many a flight intercept trap in search of tiny pselaphid beetles. We collected massive amounts of insects and have spent the last several years processing our samples and farming specimens out to various specialists who have generously provided us with identifications. We have learned, with little surprise that many of the insects that inhabit the forests and savannas of Rio Bravo are new to science. Many others are new country records. This is not surprising either since the entomofauna of Belize is poorly known. What is perhaps a bit of a surprise are the many unusual range extensions for some of the described insect species. This indicates that knowing the insect fauna of Belize will improve our understanding of the zoogeography of Mexico and Central America.
The following checklists are now available, and more will be added as they mature. Available images may be accessed through links within the checklists.
- Anthribidae, by Charles O'Brien
- Cerambycidae, by Robert Turnbow
- Chrysomelidae, by Shawn Clark
- Curculionidae, by Charles O'Brien
- Elateroidea, by Paul Johnson
- Erotylidae, by Paul Skelley
- Mordellidae, by John Jackman
- Staphylinidae: subfamily Pselaphinae, by Chris Carlton
- Tenebrionidae, by Charles Triplehorn and Otto Merkl
For questions and comments contact Peter Kovarik.
IS IT LAKE CHAMPLAIN'S MONSTER?
By JOHN NOBLE WILFORD
Published: June 30, 1981
ON July 5, 1977, Sandra Mansi was showing her husband-to-be the countryside around Lake Champlain where she had grown up. They talked and laughed about the legends of a large monster inhabiting the lake. While sitting on the shore, she recalled, they saw something move far out in the water. She thought it was a school of large fish, she said, until a head and then a long serpentine neck emerged from the water, growing bigger and bigger.
''I was scared to death,'' Mrs. Mansi said in an interview the other day. ''I had a feeling I shouldn't be there.'' Mrs. Mansi said she collected herself enough to snap a single color picture with a Kodak Instamatic, but decided to say nothing about it to anyone.
But as more and more people began reporting sightings of the so-called Lake Champlain monster, also known as Champ, Mrs. Mansi let it be known that she had what might be the only photograph of the creature.
For the scientists who would examine the picture in the ensuing months it would be one more of those tantalizing bits of evidence regarding the still tightly held secrets of nature. The evidence must be treated with the greatest skepticism, but it can't be rejected out of hand.
Dr. Roy P. Mackal, a University of Chicago biochemist who is an expert on the Loch Ness monster and other legendary animals, persuaded Mrs. Mansi to submit the photograph for analysis by scientists at the University of Arizona Optical Sciences Center.
The analysis has now been completed, and the Arizona optical scientists confirm that the picture has not been tampered with, though it was beyond their scope to determine whether the object in the picture was animate or inanimate, something ordinary or extraordinary. The believers can still believe that Lake Champlain, along with Loch Ness and other northern bodies of water, could harbor strange creatures from out of the past. The skeptics can still be skeptical, pointing out that observers can be deceived by atmospheric conditions or the giant sturgeon that sometimes break the surface of Champlain.
According to Dr. B. Roy Frieden, professor of optical sciences at Arizona, Mrs. Mansi's photograph was a high-quality print that ''does not appear to be a montage or a superposition of any kind'' and that ''the object appears to belong in the picture.''
''We don't see any evidence of tampering with the photo,'' Dr. Frieden concluded. To see if the object might have been superimposed on the picture, Dr. Frieden examined the wave patterns through a microscope. There were no sharp lines of divergence in the waves.
The photograph was next examined through techniques developed for processing and enhancing images in military surveillance and planetary exploration by spacecraft. Called the Interactive Picture Processing System, the technique involves scanning the photograph by a densitometer, an array of light detectors that converts every point in the photograph into digital form and then records the digital codes on magnetic tape. More than one million numbers from the Champlain picture were stored on magnetic tape.
The tape was then played back and the picture displayed on a screen. By twisting dials, Dr. Frieden could change the light contrast and highlight certain features. He did this to determine if any objects - pulleys or ropes or anything suggesting a hoax - not readily visible in the photograph might be revealed. No such objects emerged.
J. Richard Greenwell, another member of the University of Arizona staff, who worked with Dr. Frieden during the analysis, said that all of the data from the photograph were stored on computer punch cards and will be used in further studies. Marine biologists who do research for the United States Navy are expected to conduct more detailed analyses of the object and the surrounding wave patterns.
Dr. Frieden observed one detail in the picture, a horizontal dark streak going from left to right, that ''merits looking into.'' This, he suggested, could be a sandbar, which would make it easier for someone to have reached the site to perpetrate a hoax. Dr. Frieden emphasized, however, that he had no reason to believe that the object was indeed a hoax.
Mrs. Mansi said that she was sure it was no hoax. ''I saw it move. I saw it at different angles. You know if something is living or not,'' she said.
Dr. Philip Reines, a professor of communications at the State University College at Plattsburgh, N.Y., who is considered an expert on nautical phenomena, said that two aspects of the Mansi photograph bothered him. One is that Mrs. Mansi says she cannot recall exactly where she took the picture. The other is that the negative of the picture is missing.
Dr. Reines said that having the negative would enable investigators to blow up and otherwise manipulate the picture with less distortion. Knowing the cove where the picture was taken, he added, would permit a more detailed examination of the water and the depths and thus determine the scale of the picture.
Mrs. Mansi, who now lives in Winchester, N.H., said that all she could remember is that the picture was taken from a secluded location north of St. Albans, Vt. She said that she was standing on a bank about six feet above the water line and about 100 to 150 feet from the object.
Dr. Reines said that the photograph was ''very exciting,'' but he added, ''I'm skeptical - the picture can very easily be misunderstood. People around here are very cautious in these matters. They want to make sure this isn't made into a farce and a circus.''
In an article in the journal Science in 1979, Dr.W. H. Lehn of the University of Manitoba raised the possibility that some of the reported sightings of lake monsters could be attributed to atmospheric distortions. Light refraction, caused by a temperature inversion that happens when cold lake water chills the lower layers of the air, could distort a stick or some other ordinary object so that it would take on a monstrous size or form.
Dr. Mackal of the University of Chicago, on the other hand, said that the picture tends to support his ''working hypothesis'' that the so-called monsters of Loch Ness, Lake Champlain and other lakes are ''some kind of rare, elusive mammal, probably related to the zeuglodon, which was one episode in the evolution of the whale.'' The zeuglodon is thought to have been extinct for 20 million years.
''I've looked at the evidence and I'm convinced that the animals are there,'' Dr. Mackal said. ''They are seagoing, but occasionally come into fresh water following fish, most often salmon. This picture is genuine in all respects and depicts one of these animals.''
For the past seven years Joe Zarzynski, a schoolteacher from Saratoga Springs, N.Y., has been cataloging reports of those who claim to have seen the monster of Lake Champlain with a view to convincing New York and Vermont authorities that the animals do exist and should be protected. He has a list of 132 sightings, some dating back to the 19th century, when P.T. Barnum advertised a $50,000 reward for a carcass, but most of them reported in the last few years.
Mr. Zarzynski told of eight sightings since April. Two dark humps were seen in the water near Fort Ticonderoga. Something dark and 25 to 30 feet long was seen near Port Henry, N.Y. On June 10 Marty Santos of Grand Isle, Vt., reported that while fishing he saw the monster and it seemed to be herding perch.
''I've never seen it myself,'' Mr. Zarzynski said. ''I know there are theories that explain it away, the most common being that it is a large sturgeon. But some of these sightings are tough to shoot holes in. The Mansi picture is the first clear-cut photograph of Champ that I'm aware of. It really puts the cap on things.''
Illustrations: photo of a monster (Page C2)
Apr24-12, 05:34 PM (#1)
Gamma Ray Bursts
I've been reading about gamma ray bursts (GRBs) lately and have found them to be pretty interesting. As far as I have read, it appears that we still don't know much about what actually causes them, or rather, how the "internal engine" works.
The most popular idea for longer lasting GRBs is a very massive star, 30 solar masses I think, going supernova, right?
Is there a lot of interest in GRBs today? Is it a good field of study to look into?
Apr24-12, 06:16 PM (#2)
GRBs are a very active field in astrophysics today. There appear to be two types - long GRBs (which last longer than ~ 1 second) and short GRBs (typically shorter than 1 second). The long GRBs are pretty firmly established to be caused by supernova collapse of massive stars, as you said. In a few cases an optical supernova (of Type Ic if you're familiar with the nomenclature) has been seen that corresponds to the long GRB. The best model for the short GRBs is that they represent the merger of two neutron stars, but this is much less firmly established.
Apr24-12, 07:14 PM (#3)
Perhaps the title is a bit overstated, but this is the latest news on short GRBs:
Apr26-12, 12:15 PM (#4)
..and here's some up to date research, video included, from NASA: http://www.space.com/15119-mysteriou...e-objects.html
The Fermi space telescope has spotted nearly 500 powerful gamma-ray sources in deep space over the last three years. Before its launch in 2008, scientists only knew of four such objects.
"We're not looking for the ordinary things," Thompson said. "We're looking for the extraordinary; powerful things that might produce gamma rays."
Of the newly discovered bodies, more than half are active galaxies. Pulsars and supernova remnants each make up about 5 percent of the sources, with high-mass binary stars and other galaxies contributing just a smidge more, the researchers said.
Yet a large collection of objects remains unidentified, they added.
Edit: I found this interesting story,
Apr26-12, 05:22 PM (#5)
however, it is still a very interesting article. Pretty nifty that they've found so many things in the sky that are complete mysteries.
Apr28-12, 10:39 PM (#6)
SHISHKABOB, these may help in your quest:
"New results out of Antarctica support the idea that the most energetic of the superspeedy space particles raining down on Earth are not from gamma-ray bursts. The conclusion, reported in the April 19 Nature, has upped the ante on a long-standing mystery in astrophysics."
Nature | Letter
“An absence of neutrinos associated with cosmic-ray acceleration in γ-ray bursts
“Very energetic astrophysical events are required to accelerate cosmic rays to above 10^18 electronvolts (exa-electronvolts). GRBs (γ-ray bursts) have been proposed as possible candidate sources. In the GRB ‘fireball’ model, cosmic-ray acceleration should be accompanied by neutrinos produced in the decay of charged pions created in interactions between the high-energy cosmic-ray protons and γ-rays. Previous searches for such neutrinos found none, but the constraints were weak because the sensitivity was at best approximately equal to the predicted flux. Here we report an upper limit on the flux of energetic neutrinos associated with GRBs that is at least a factor of 3.7 below the predictions. This implies either that GRBs are not the only sources of cosmic rays with energies exceeding 10^18 electronvolts or that the efficiency of neutrino production is much lower than has been predicted.”
santaclaus wrote:(again, don't know notation for square root)
Please review the article on formatting math as text, or else follow the instructions in the forum posting on LaTeX formatting.
santaclaus wrote:How do I solve an equation like the following?
I would first do 1.05 log x = log 9
I will guess that you are using the common (base-ten) log, but I'm afraid I don't follow your steps...?
To learn the properties of logs, try here. But I think your only error is that you changed the "9" to a "1.09", or vice versa.
Then the next step in solving the log equation is to isolate the x-term, and then convert back to exponentials:

. . . . .log(x) = log(9)/1.05

. . . . .10^[log(x)] = 10^[log(9)/1.05]
Simplify the left-hand side to just "x", and use your calculator if you need a decimal approximation for the right-hand side.
santaclaus wrote:ln sq.rt x + sq.rt.x = 3; here there are 2 square roots in the problem.
There may be two square roots, but unfortunately you haven't used grouping symbols to say where they go. Do you mean either of the following?

. . . . .ln(sqrt(x)) + sqrt(x) = 3

. . . . .ln(sqrt(x) + sqrt(x)) = 3
When you reply, please provide the instructions for this equation, along with a clear listing of everything you have tried so far. Thank you!
Web edition: April 12, 2007
"Gentlemen, that is surely true, it is absolutely paradoxical; we cannot understand it, and we don't know what it means. But we have proved it, and therefore we know it must be the truth." Benjamin Peirce, a Harvard mathematician, after proving Euler's equation, e^(iπ) = -1, in a 19th-century lecture.
Sunday, April 15, is the 300th birthday of Leonhard Euler (pronounced "oiler"), one of the most important mathematicians ever to have lived. His works help form the foundation of nearly all areas of mathematics, including calculus, number theory, geometry, and applied math.
One of the many discoveries for which he is famous is the equation e^(iπ) = −1. In a 1988 poll, readers of the journal Mathematical Intelligencer chose this equation as the single most beautiful equation in all of mathematics. The equation weaves together four seemingly unrelated mathematical numbers, e, π, i, and −1, in an astonishingly simple way.
But what does e^(iπ) = −1 really mean?
First, let's examine what the letters mean. The symbol e stands for a particular irrational number. Since it is irrational, its value can't be given precisely in decimal notation, but it is approximately equal to 2.7183. Euler introduced this constant to the world of mathematics. He probably named it after the word "exponential," because e is the base of the natural logarithms. Initially, he recognized the importance of e because of its remarkable properties in calculus. But e pops up over and over again in surprising places throughout mathematics. Somehow, this nearly magical number seems to tie the world together.
Pi, or π, is another irrational number. It rounds off to 3.14159, and it is defined as the ratio of the circumference of a circle to its diameter.
The "imaginary" number i is defined as the square root of 1. Imaginary numbers are unlike any number we encounter in ordinary experience. If i were an ordinary positive number, then multiplying it by itself would give a positive number, not 1. And if i were an ordinary negative number, then multiplying it by itself would also give a positive number, because multiplying a negative number by another negative number produces a positive number. Mathematicians therefore invented imaginary numbers, and they gave the name i to the square root of 1.
Cooking up new numbers might seem like a questionable proposition. But when a strange creation like i turns out to have remarkable and surprising properties, such as linking e to π in a simple equation, mathematicians are inclined to put aside any lingering qualms in favor of investigating what other secrets i might be hiding.
Imagining the imaginary
On inspection, e^(iπ) is a peculiar concept. If the existence of an imaginary number seems strange, raising a real number to the power of an imaginary number is even odder. What could it possibly mean?
Euler found an answer to that question almost accidentally. He set out to understand functions, which are essentially mathematical machines that start with one number and produce another. Mathematical functions include x^2 + 3x + 1, and sin x, and e^x. Each of those formulas provides a method for starting with one number to produce another number.
Euler built on a remarkable discovery that mathematician Brook Taylor made in 1712. Taylor discovered that all types of mathematical functions, no matter how diverse they may seem, can be expressed in the same form, as a sum of powers of x.
When we apply Taylor's method to the function e^x, we get the following equation: e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ...
The right-hand side of the equation looks much more complicated than the left, but in fact it is much easier to calculate. The sums of powers of x, which we call polynomials, are the simplest kind of function in mathematics. In this case, the equation is a bit more complex than an ordinary polynomial because the sum continues infinitely. Nevertheless, expressing a function in this form allows mathematicians to understand it more clearly. Most importantly, the equation's right-hand side doesn't have any number raised to the power of x.
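The Taylor expansion described above can be checked numerically. The following Python sketch (an illustration, not from the article) sums the first terms of the series for e^x and compares the result with the built-in exponential:

```python
import math

def exp_taylor(x, n_terms=20):
    """Partial sum of the Taylor series e^x = 1 + x + x^2/2! + x^3/3! + ..."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

# The polynomial side converges quickly to the true value of e^x.
print(exp_taylor(1.0))   # very close to math.exp(1.0) ≈ 2.71828
```

Even though the true expansion continues infinitely, a modest number of terms already agrees with e^x to machine precision, which is why the polynomial form is so useful in practice.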
Taylor's expansion offers a way to understand the complex and peculiar expression e^(iπ) from Euler's equation. That expression raises e to the power of iπ, which is a perplexing concept. But using the expanded form, the right side of Taylor's equation, helps clarify matters.
Euler also faced the unfortunate fact that the Taylor expansion of e^(iπ) continues infinitely. He solved that problem by comparing the Taylor expansion of e^(ix) with those of cos x and sin x. Those functions seem to be completely unrelated to e^x, since they describe the relationships between the lengths of the sides of triangles. But Euler found the following remarkable identity:
e^(ix) = cos x + i sin x
When Euler set x equal to π, this equation made it easy to calculate the value of e^(iπ). The cosine of π equals −1, and the sine of π equals 0. So e^(iπ) = (−1) + i·(0). Therefore, e^(iπ) = −1 + 0 = −1.
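The identity can also be checked numerically; this short Python sketch (not part of the original article) evaluates e^(iπ) with the standard cmath module:

```python
import cmath

# e^(iπ): the imaginary part is a tiny floating-point residue, not an error.
z = cmath.exp(1j * cmath.pi)
print(z)   # approximately -1, e.g. (-1+1.2246467991473532e-16j)
```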
In honor of Euler's birthday, we can all enjoy a birthday gift from him: Euler's beautiful theorem.
| <urn:uuid:18a51e07-5ad3-4ed9-ace6-5557173435fb> | 3.5 | 1,132 | Knowledge Article | Science & Tech. | 46.699464 |
How was the mass and radius of the Earth calculated?
The mass may be determined using Newton's law of gravitation. The force (F) equals the gravitational constant (G) multiplied by the mass of the planet (M) and the mass of an object (m), divided by the square of the planet's radius (r): F = GMm/r^2. Setting this equal to the fundamental equation F = ma, with the acceleration a taken as the measured surface gravity g, gives M = gr^2/G. Using this equation, the mass is calculated to be 5.96 × 10^24 kg. The radius was first calculated by the Greek Eratosthenes thousands of years ago. He compared the midsummer's noon shadow in deep wells in Syene (now Aswan on the Nile in Egypt) and Alexandria. He properly assumed that the Sun's rays are virtually parallel. Knowing the distance between the two locations, he calculated the circumference of the Earth to be 250,000 stadia. As the exact length of a stadion is unknown, his accuracy is uncertain. | <urn:uuid:2cde2762-861d-49c5-8dc4-49f5bfec71b9> | 3.8125 | 202 | Knowledge Article | Science & Tech. | 60.540619 |
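As a sketch of the mass calculation (the values of G, g, and r are standard reference figures, not taken from the text above), equating GMm/r^2 with mg and cancelling the object's mass m gives M = gr^2/G:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
g = 9.81        # surface gravitational acceleration, m s^-2
r = 6.371e6     # mean radius of the Earth, m

# GMm/r^2 = mg  =>  M = g r^2 / G
M = g * r**2 / G
print(f"{M:.2e} kg")   # roughly 5.97e+24 kg
```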
Guess what just came about...
They say, "Common sense tell us there is nothing." but I say the opposite. Common sense would say there is water. http://news.yahoo.com/s/nm/20080709/sc_nm/moon_water_dc
New scans show evidence of water on the moon
By Maggie Fox, Health and Science Editor
Wed Jul 9, 2:02 PM ET
Tiny green and orange glass balls brought back from the moon nearly 40 years ago by astronauts show evidence that water existed there from the very beginning, scientists reported on Wednesday.
They used a new method of analyzing elements in the lunar sand samples to show strong evidence of water, dating back 3 billion years.
Their study, published in the journal Nature, could support evidence that water persists in shadowed craters on the moon's surface -- and that the water could be native to the moon and not carried there by comets.
Most scientists believe the moon was formed when a Mars-size body collided with Earth 4.5 billion years ago.
The giant impact would have melted both proto-planets and sent molten debris into orbit around the Earth.
Some of this would have eventually coalesced into the moon, but the heat of the impact would have vaporized light elements such as the hydrogen and oxygen needed to make water -- theoretically, anyway.
Erik Hauri of the Carnegie Institution for Science in Washington had developed a technique called secondary ion mass spectrometry or SIMS, which could detect minute amounts of elements in samples. His team was using it to find evidence of water in the Earth's molten mantle.
"Then one day I said, 'Look, why don't we go and try it on the moon glass?"' Alberto Saal of Brown University, who helped lead the study, said in a telephone interview.
"It took us three years to convince NASA to fund us."
The space agency was also loath to part with any of the precious samples brought back by astronauts during the Apollo missions in the 1970s.
Saal, Hauri and colleagues were able to get about 40 of the little glass beads and break them apart for analysis.
What they found overturned the conventional wisdom that the moon is dry.
"For 40 years people have tried (to find evidence of water) and were not successful," Saal said.
"Common sense tell us there is nothing."
Saal's team did not find water directly, but they did measure hydrogen, and the measurements resembled those they had made to detect hydrogen, and eventually water, in samples from Earth's mantle.
The evidence shows that the hydrogen in the sample vaporized during volcanic activity that would be similar to lava spurts seen on Earth today.
"We looked at many factors over a wide range of cooling rates that would affect all the volatiles simultaneously and came up with the right mix," said James Van Orman, a former Carnegie researcher now at Case Western Reserve University.
"It suggests the intriguing possibility that the moon's interior might have had as much water as the Earth's upper mantle," Hauri said in a statement.
"But even more intriguing -- if the moon's volcanoes released 95 percent of their water, where did all that water go?"
Some might still remain at the poles, frozen in the shadows of craters, he speculated. Several lunar missions have found just such evidence.
"If parts of the lunar mantle contain as much water as Earth's, does this imply that the water has a common origin?" Marc Chaussidon of the Centre de Recherches Petrographiques et Geochimiques in Vandoeuvre-les-Nancy, France, asked in a commentary in Nature.
More analysis might answer that question.
"We will pressure NASA for more samples," Saal said.
(Editing by Jackie Frank) | <urn:uuid:2201516c-659c-4c49-afa3-5efed1c4d373> | 3.4375 | 797 | Comment Section | Science & Tech. | 54.884936 |
A constant function is a linear function for which the range does not change no matter which member of the domain is used: f(x1) = f(x2) for any x1 and x2 in the domain.
With a constant function, for any two points in the interval, a change in x results in a zero change in f(x).
Graph the function f(x) = 3.
The graph of f(x) = 3 is the horizontal line y = 3; the graph of a constant function is always a horizontal line. | <urn:uuid:b0d5d056-fa49-4a95-9ea5-9efe4a3ad989> | 3.203125 | 88 | Tutorial | Science & Tech. | 82.799465 |
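A minimal Python sketch (illustrative, not part of the lesson) makes the "zero change in f(x)" property concrete:

```python
def f(x):
    """A constant function: every input maps to 3."""
    return 3

# The output never changes, so the change in f(x) is zero for any change in x.
xs = [-10, 0, 2.5, 100]
print([f(x) for x in xs])   # [3, 3, 3, 3]
print(f(7) - f(-7))         # 0
```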