What Charles Adams and colleagues at Durham University have now done is come up with a way of storing individual optical photons in highly excited states of an atomic gas. Once stored, the photons can be made to interact strongly, before being released again. An important feature of the technique is that it uses microwaves, which are also used to control some types of stationary qubit.
Apparently, this is to be published in PRL.
Definitely a commendable accomplishment in the evolution of our ability to use photons for quantum communications.
Driving home the other night, I was teasing my 4-yr old daughter (as I often do). We were talking about what sorts of games to play when we got home and I suggested that we could spend our time poking Lucy’s baby brother Daniel in the eye. I also pointed out that we’d have to take turns and asked Lucy which of us should go first.
Lucy protested. “Daddy”, she said excitedly, “we don’t have to take turns–Daniel has two eyeballs”!
After I stopped laughing at this, I realized that Lucy had seen something very important, which hadn’t even occurred to me:
Poking a baby in the eye is something that can be done in parallel!
Yes, yes, of course–you should never poke a baby in the eye. And if you have a pre-schooler, you should never suggest eye-poking as a legitimate game to play, even jokingly. Of course I did explain to Lucy that we really can’t poke Daniel in the eye(s).
But Lucy’s observation pointed out to me how limited I’d been in my thinking. I’d just assumed that poking Daniel in the eye was something that we’d do serially. First I would poke Dan in the eye. And then, once finished, Lucy would step up and take her turn poking Dan in the eye. Lucy hadn’t made this assumption. She immediately realized that we could make use of both eyeballs at the same time.
There’s an obvious parallel here to how we think about writing software. We’ve been programming on single-CPU systems for so long, that we automatically think of an algorithm as a series of steps to be performed one at a time. We all remember the first programs that we wrote and how we learned about how computers work. The computer takes our list of instructions and patiently executes one at a time, from top to bottom.
But, of course, we no longer program in a single-CPU environment. Most of today’s desktop (and even laptop) machines come with dual-core CPUs. We’re also seeing more and more quad-core machines appear on desktops, even in the office. We’ll likely see an 8-core system from Intel in early 2010 and even 16-core machines before too long.
So what does this mean to the average software developer?
We need to stop thinking serially when writing new software. It just doesn’t cut it anymore to write an application that does all of its work in a single thread, from start to finish. Customers are going to be buying “faster” machines which are not technically faster, but have more processors. And they’ll want to know why your software doesn’t run any faster.
As developers, we need to start thinking in parallel. We have to learn how to decompose our algorithms into groups of related tasks, many of which can be done in parallel.
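To make that concrete, here is a tiny sketch in Python (my own illustration; the poke function is a stand-in for any expensive, independent unit of work). The same computation is expressed once as a serial loop and once as a pool of parallel tasks:

from multiprocessing import Pool

def poke(item):
    # A stand-in for any expensive, independent unit of work.
    return item * item

if __name__ == "__main__":
    data = list(range(16))

    # Serial thinking: one task at a time, top to bottom.
    serial = [poke(x) for x in data]

    # Parallel thinking: the same tasks, decomposed and spread
    # across every available core.
    with Pool() as pool:
        parallel = pool.map(poke, data)

    assert serial == parallel

The decomposition is the hard part; once the work is expressed as independent tasks, going from two cores to sixteen requires no further changes.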
This is a paradigm shift in how we design software. In the same way that we’ve been going through a multiprocessing hardware revolution, we need to embark on a similar revolution in the world of software design.
There are plenty of resources out there to help us dive into the world of parallel computing. A quick search for books reveals:
Introduction to Parallel Computing – Grama, Karypis, Kumar & Gupta, 2003.
The Art of Multiprocessor Programming – Herlihy & Shavit, 2008.
Principles of Parallel Programming – Lin & Snyder, 2008.
Patterns for Parallel Programming – Mattson, Sanders & Massengill, 2004.
The following paper is also a good overview of hardware and software issues:
The Landscape of Parallel Computing Research: A View from Berkeley – Asanovic et al, 18 Dec 2006.
My point here is this: As a software developer, it’s critical that you start thinking about parallel computing–not just as some specialized set of techniques that you might use someday, but as one of the primary tools in your toolbox.
After all, today’s two-eyed baby will be sporting 16 eyes before we know it. Are you ready to do some serious eye-poking?
The prediction is that sea level will rise about 3 feet.
"CapNemo" has been going to global warming questions and copying a statement pooh-poohing the threat. His statement is misleading and incorrect.
He says it’s only increased by 1 degree (F) in 125 years. Actually, it’s increased by twice that. If you look at the link below, the area in red is 1.1 degree (C) which converts to 2 degrees (F). He says, “The average temperature in Antarctica is 109 degrees below zero.” If you go to his source, that’s the number for the MINIMUM temperature, not the average. Then he says, “Back in the '70s all the hype was about global COOLING”. I was around then. I don’t remember any hype. And if you go to his source, it says, “The theory never had strong scientific support”.
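As a quick check on that conversion: temperature differences scale as

$$\Delta T_{\mathrm{F}} = \tfrac{9}{5}\,\Delta T_{\mathrm{C}},$$

so a rise of 1.1 degrees C is a rise of $1.1 \times 9/5 \approx 2$ degrees F, as claimed.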
The truth is that those 2 degrees are HUGE in the scale of average weather change. But the real problem is the speed of change and that it's accelerating.
Scientists are predicting a 4 to 8 degree (F) temperature increase over the next 75 years. “This may not sound like a great deal, but just a fraction of a degree can have huge implications on the climate, with very noticeable consequences." (http://www.channel4.com/science/microsi ... cting.html)
The link between CO2 and global warming is undisputed at this time. The amount of CO2 in the atmosphere has increased by more than 50% over the last 115 years (250 to 381 ppm, http://awesomenature.tribe.net/thread/f ... 6c08031c65
). In the last 30 years, it increased at a rate 30 times faster than at any period during the last 800,000 years. In other words, this change is totally unprecedented. (http://awesomenature.tribe.net/thread/f ... 6c08031c65
). What else is totally unprecedented about the last 115 years? Industrialization and the population explosion. Duh. This is not rocket science; it is simple arithmetic!
"... the good news is that, within the foreseeable future, Maine residents will be able to stop banking their foundations and to store their down parkas and snow blowers in the barn permanently. The bad news is that a lot of those barns will be underwater" (http://awesomenature.tribe.net/thread/f ... 6c08031c65)
If global warming wasn't a real threat, why have 178 nations ratified the Kyoto Protocol to limit CO2 emissions? Why are the US and Australia the only two holdouts among the industrialized nations? (http://environment.about.com/od/kyotopr ... ocol_2.htm)
CapNemo’s statement reminds me about the frog in the pot on the stove that doesn’t move as the water gradually gets hotter and hotter. From this seemingly insignificant 2 degree change, we’ve already seen enormous consequences. (http://www.davidsuzuki.org/Climate_Change/Impacts/
) How much hotter does it have to get for some people to wake up and face the music? And in the meantime, while you’re pondering all of this, be sure to check the dates on people’s references. Things are changing so rapidly that older information is no longer useful.
Average Northern Hemisphere Temperatures for last 1000 years:
http://www.co2science.org/scripts/Templ ... nh1000.jpg
The Web Services Description Language (WSDL) (see Resources) specification provides a simple XML-based vocabulary for describing XML-based Web Services that are available over the network. The services themselves communicate using the Simple Object Access Protocol (SOAP), HTTP, SMTP, or by some other means; WSDL, however, gives the user the meta-data required to set up the communications. WSDL itself says nothing about how to publish or publicize such service descriptions, leaving this to other specifications. Universal Description, Discovery and Integration (UDDI), an initiative for creating directories of Web Services, defines one framework for cataloging and dispatching WSDL descriptions, but it is just emerging and is quite complex.
UDDI does some heavy lifting for online contracts, and should certainly find its place soon in the distributed services arena. However, since the first likely WSDL deployments will be in tightly closed systems rather than over the open Web, there is probably a better alternative. There is a more natural early entry for WSDL cataloging and discovery in the form of the Resource Description Framework (RDF). RDF is the mechanism developed by the World Wide Web Consortium (W3C) to encode and manage Web metadata (see Resources). It provides simple methods to integrate large quantities of metadata in multiple domains.
To help provide a framework for this article, you may want to read my previous article, "Using WSDL with SOAP Applications" (see Resources), where I explore the function of a WSDL specification through a specific example that describes a service for snow boarding experts to offer endorsements to vendors in their industry. This article discusses ways of harnessing RDF's simplicity and power to augment WSDL's descriptive ability. To gain familiarity with RDF, RDF Schemas, and the basic XML representations of RDF, which will be important to understand the rest of this work, please review the RDF information links in the Resources section.
All that WSDL provides could really have been written in the RDF Serialized format. It is an odd choice by IBM, Microsoft, and Ariba not to consider this. Other somewhat similar standards, such as RDF Site Summary (RSS), show how XML resource description formats can be expressed as an XML serialization of RDF. This doesn't make sense for all standards -- one would hardly have expected the Scalable Vector Graphics (SVG) working group, for instance, to design SVG around RDF. However, when most of the information represents brief labels and relationships between resources, RDF makes a lot of sense due to its growing array of users and tools.
For example, I have modified the snowboarding example's WSDL specification discussed above to use an RDF serialized format, as seen in Listing 1.
Listing 1: What the WSDL description of snow boarding endorsement search would have looked like in valid RDF syntax.
<?xml version="1.0"?>
<definitions name="EndorsementSearch"
    targetNamespace="http://namespaces.snowboard-info.com"
    xmlns:es="http://www.snowboard-info.com/EndorsementSearch.wsdl"
    xmlns:esxsd="http://schemas.snowboard-info.com/EndorsementSearch.xsd"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:w="http://schemas.xmlsoap.org/wsdl/"
    xmlns="http://schemas.xmlsoap.org/wsdl/">
  <types>
    <schema targetNamespace="http://namespaces.snowboard-info.com"
        xmlns="http://www.w3.org/1999/XMLSchema">
      <element name="GetEndorsingBoarder">
        <complexType>
          <sequence>
            <element name="manufacturer" type="string"/>
            <element name="model" type="string"/>
          </sequence>
        </complexType>
      </element>
      <element name="GetEndorsingBoarderResponse">
        <complexType>
          <all>
            <element name="endorsingBoarder" type="string"/>
          </all>
        </complexType>
      </element>
      <element name="GetEndorsingBoarderFault">
        <complexType>
          <all>
            <element name="errorMessage" type="string"/>
          </all>
        </complexType>
      </element>
    </schema>
  </types>
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <message w:name="GetEndorsingBoarderRequest" rdf:ID="GetEndorsingBoarderRequest">
      <part w:name="body" w:element="esxsd:GetEndorsingBoarder"/>
    </message>
    <message w:name="GetEndorsingBoarderResponse" rdf:ID="GetEndorsingBoarderResponse">
      <part w:name="body" w:element="esxsd:GetEndorsingBoarderResponse"/>
    </message>
    <portType w:name="GetEndorsingBoarderPortType" rdf:ID="GetEndorsingBoarderPortType">
      <operation w:name="GetEndorsingBoarder">
        <input rdf:resource="GetEndorsingBoarderRequest"/>
        <output rdf:resource="GetEndorsingBoarderResponse"/>
        <fault rdf:resource="GetEndorsingBoarderFault"/>
      </operation>
    </portType>
    <binding w:name="EndorsementSearchSoapBinding"
        w:type="es:GetEndorsingBoarderPortType"
        rdf:ID="EndorsementSearchSoapBinding">
      <soap:binding soap:style="document"
          soap:transport="http://schemas.xmlsoap.org/soap/http"/>
      <operation rdf:about="GetEndorsingBoarder">
        <soap:operation soap:soapAction="http://www.snowboard-info.com/EndorsementSearch"/>
        <input>
          <soap:body soap:use="literal"
              soap:namespace="http://schemas.snowboard-info.com/EndorsementSearch.xsd"/>
        </input>
        <output>
          <soap:body soap:use="literal"
              soap:namespace="http://schemas.snowboard-info.com/EndorsementSearch.xsd"/>
        </output>
        <fault>
          <soap:body soap:use="literal"
              soap:namespace="http://schemas.snowboard-info.com/EndorsementSearch.xsd"/>
        </fault>
      </operation>
    </binding>
    <service w:name="EndorsementSearchService" rdf:ID="EndorsementSearchService">
      <documentation>snowboarding-info.com Endorsement Service</documentation>
      <port rdf:about="GetEndorsingBoarderPort">
        <binding rdf:resource="EndorsementSearchSoapBinding"/>
        <soap:address soap:location="http://www.snowboard-info.com/EndorsementSearch"/>
      </port>
    </service>
  </rdf:RDF>
</definitions>
These changes basically take a section of the description and encode it into RDF serialization. As required by the RDF Model and Syntax 1.0 Recommendation (RDF M&S) (see Resources), this section is enclosed in an rdf:RDF element. If you were to feed this into an RDF processor, you would get a wealth of information that neatly maps out the qualifications and relationships that make up the WSDL description. I used every RDF serialization trick available to minimize the changes from the original XML structure. It's a neat illustration of how hard the RDF working group must have worked on RDF M&S to allow existing XML formats to be shoe-horned into valid RDF form. Note that this is also a source of great controversy, as the many tricks available lead to a certain brittleness in the translation between the various syntaxes and the resulting RDF abstract model.
I left the types element out of the RDF section. The main problem is that the contents of this section are really completely outside the domain of WSDL. The W3C or some other Web standards body might come up with a standard mapping from XML schemas to RDF, and other data-typing methodologies might do the same, but this is really an activity of a completely different scope. The types specification could still have been inserted wholly into the RDF model by specifying rdf:parseType="Literal", but then we would have had all the problems that come with XML literals -- notably, that RDF M&S explicitly disavows definition of equivalence between literals containing markup (it could have said, for example, that two literals with markup are equivalent if their reduction to Canonical XML is identical). This would mean that we could never reliably compare what we stored in the RDF model with any other data. In practice, this makes parseType="Literal" quite useless, and so I didn't bother.
In the section that was converted to RDF, the most noticeable change is the addition of rdf:ID attributes to the core WSDL elements. This is how RDF handles what the WSDL authors tried to do with qualified names in attributes: to establish relationships between abstract entities. You can see where the references to these identified resources are made by using the rdf:resource attribute.
Another change you'll notice is that all the previously un-prefixed attributes now have had prefixes added. There is an RDF abbreviation that allows namespace-resolved attribute names to be treated as property names. The attribute value then forms the literal value of a statement. I used this abbreviation heavily in Listing 1. Namespaces on property names are not strictly required for RDF, but are highly recommended as a way to disambiguate such names. If you were to put this WSDL meta-data in a model containing other information, there is a good chance that a commonly-used label such as "name" would clash with another property representing, say, the name of a person or an organization. Since XML Namespaces 1.0 does not allow application of the default namespace to attributes, we must explicitly specify the prefix to disambiguate.
In some cases, such as message parts, I used anonymous resources. Note that each message/part element has property attributes but no rdf:ID or rdf:about attribute. I chose to do so because it seemed to me that one would rarely need to refer to a message part outside the context of its message, and that such a reference is the only reason not to make it anonymous. Otherwise I would have had to make up a unique ID (not as natural as for the top-level resources), used XPointer in an rdf:about attribute, or resorted to some other gimmick. I went with what felt most natural.
A few final notes. You can see that I used rdf:about in the port element to add a statement about the port I had previously described as its subject. This is another example of taking advantage of the flexibility of RDF syntax to minimize mutation from the original WSDL. Further, you can see how useful RDF's arbitrary nesting of descriptions is, allowing our version of the binding element to retain its original structure.
In the end, my above example is not valid WSDL according to the WSDL 1.0 specification that we have been examining. Luckily, it is simple enough to generate the RDF from the official syntax, either through automated processing from the same source that provides the data in the WSDL document, or after the fact using XSLT (an example of how to do this will be presented in my next article on WSDL). The RDF representation I give in this article is still useful because it provides one concrete RDF model of the meta-data encoded in WSDL. I believe that this comes quite close to capturing all the relevant and practical statements. Thus, it is a simple matter to use any particular RDF syntax extracted from the WSDL to construct the same model.
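To make the idea concrete, here is my own simplified XSLT 1.0 sketch of such a conversion (not the worked example promised for the next article). It wraps everything except the types section in an rdf:RDF element, mints an rdf:ID from each name attribute, and re-emits un-prefixed attributes with the w: prefix; handling of the soap:* extension elements and of inter-element references is deliberately omitted.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:w="http://schemas.xmlsoap.org/wsdl/"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">

  <!-- Reproduce definitions, but gather everything except types
       into an rdf:RDF wrapper -->
  <xsl:template match="wsdl:definitions">
    <xsl:copy>
      <xsl:copy-of select="@*"/>
      <xsl:copy-of select="wsdl:types"/>
      <rdf:RDF>
        <xsl:apply-templates select="*[not(self::wsdl:types)]"/>
      </rdf:RDF>
    </xsl:copy>
  </xsl:template>

  <!-- For every other WSDL element: mint an rdf:ID from the name
       attribute and re-emit all attributes with the w: prefix.
       (soap:* extension elements would need their own templates.) -->
  <xsl:template match="wsdl:*">
    <xsl:copy>
      <xsl:if test="@name">
        <xsl:attribute name="rdf:ID"><xsl:value-of select="@name"/></xsl:attribute>
      </xsl:if>
      <xsl:for-each select="@*">
        <xsl:attribute name="w:{local-name()}"><xsl:value-of select="."/></xsl:attribute>
      </xsl:for-each>
      <xsl:apply-templates/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>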
One nice benefit of the RDF model of WSDL is that it leads directly to a handy visualization of what a WSDL model specifies. By mapping the meta-data to RDF, I have conformed it to a formalization for which RDF M&S suggests a useful representation as a directed graph. To see this in action, go to Dan Brickley's brilliant RDF visualization tool (see Resources) and enter the following URL into the URL text box: http://www-4.ibm.com/software/developer/library/ws-rdf/endorse.rdf. I have placed a copy of the document for review in Listing 1.
Have a quick look at the resulting graph. If you are using the GIF output from the visualization tool, it does require a bit of squinting, so I recommend the SVG output if you have a viewer such as the Adobe SVG plug-in (see Resources). Additionally, you can also generate 2-D Virtual Reality Modeling Language (VRML) renderings from the visualizer site. Notice that the anonymous resources have been assigned a generated Uniform Resource Identifier (URI), which is usual for RDF tools, although it differs from the sample graphs given in RDF M&S, where anonymous resources are empty ovals. For instance, the anonymous part resource enclosed in the first message element is given a generated URI. Also note the explicit drawing of rdf:type properties, which in Listing 1 were implicit in the use of typed nodes rather than generic rdf:Description elements.
The type properties exposed in the diagram suggest an RDF Schema for WSDL that could be used as a formal framework for translation into an RDF model. Again, the WSDL specification provides no RDF Schema, but in Listing 2 I provide my modest proposal to the cause. It is not a complete RDF Schema for WSDL; it only covers a subset of my example in Listing 1 that, in turn, uses only a subset of WSDL, but hopefully this can be the basis for further work.
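Here is such a sketch; the exact class and property set is my own rendering, extrapolated from Listing 1 and the constraint shown in Listing 3, so treat the details as provisional:

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">

  <!-- Classes for the core WSDL constructs that appear in Listing 1 -->
  <rdfs:Class ID="message"/>
  <rdfs:Class ID="portType"/>
  <rdfs:Class ID="operation"/>
  <rdfs:Class ID="binding"/>
  <rdfs:Class ID="service"/>

  <!-- Simple string-valued properties -->
  <rdf:Property ID="name">
    <rdfs:range rdf:resource="http://www.w3.org/2000/01/rdf-schema#Literal"/>
  </rdf:Property>

  <!-- An operation's input, output, and fault must each refer to a message -->
  <rdf:Property ID="input">
    <rdfs:range rdf:resource="message"/>
    <rdfs:domain rdf:resource="operation"/>
  </rdf:Property>
  <rdf:Property ID="output">
    <rdfs:range rdf:resource="message"/>
    <rdfs:domain rdf:resource="operation"/>
  </rdf:Property>
  <rdf:Property ID="fault">
    <rdfs:range rdf:resource="message"/>
    <rdfs:domain rdf:resource="operation"/>
  </rdf:Property>
</rdf:RDF>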
Notice first that the capitalization does not match the convention used in the RDF Schemas Candidate Recommendation (RDF Schemas). This is to avoid mutating the element-type names used by WSDL. Other than that, it is a run-of-the-mill schema. I used range constraints wherever WSDL has corresponding restrictions on internal relationships between description elements. So, for instance, since the value of the message attribute in a WSDL input element must be the name of a message, the RDF Schema uses the equivalent range constraint in the snippet in Listing 3.
Listing 3. RDF Schema range constraint
<rdf:Property ID="input">
  <rdfs:range rdf:resource="message"/>
  <rdfs:domain rdf:resource="operation"/>
</rdf:Property>
The domain constraint in the above makes sure that the input property can only be used from an operation element, a stipulation that also comes from WSDL.
Note, however, that many of these constraints are not as strong as they could be. For instance, the element property indicates the particular element in the types section that is used by a message part. The value of this property should really be a valid qname. We should not, for instance, be able to set a string that is not even a valid XML element type name as the value of this property; but since we use rdfs:Literal as the range of the property, an RDF processor would allow such an error. Unfortunately, RDF Schemas provides notoriously little support for data-typing, leaving this to a later version that can take advantage of the work of the XML Schemas group.
I strategically chose my subset to exclude the SOAP operations in order to keep things simple. In order to keep the SOAP-specific statements in their own namespace, there would most likely have to be another RDF Schema file for those classes and properties, and some help from the WSDL authors in publishing the schemas. But, then again, any use of RDF Schemas within the framework I have set up in this article would require some help from the WSDL authors, who own the base "http://schemas.xmlsoap.org/wsdl" URL (see Resources). Most RDF processors, including 4RDF, which I used for this article, can help dodge this problem by allowing developers to map and override base URIs.
By mapping WSDL to RDF, as this article discusses, service descriptions could be automatically incorporated into RDF-aware search engines and classification systems. The W3C already uses its considerable clout to encourage vendors and webmasters to use RDF to categorize their content, which should boost the value of embedded service descriptions. If so, the white and yellow pages of the "services Web" (to steal a term from XML commentator Len Bullard) become as simple as an RDF search constrained to the WSDL schema. Much simpler than the titanic framework being assembled by the UDDI folks, but that's another story altogether.
Hopefully, this article has given you some insight into how you can experiment with WSDL using the existing tools available for RDF. We looked at the internal structure of metadata relationships encoded in WSDL, brought clearly to light by the RDF conversion. We have also looked at the general process of deriving RDF Schemas and instances from non-RDF XML vocabularies. In the next article, we shall see what another core W3C technology (that is, XSLT) can do for WSDL developers and users.
- My previous article, Using WSDL with SOAP Applications, explains how WSDL works and how it applies to SOAP-based application programming.
- Review the Web Services Description Language (WSDL) specification.
- The W3C maintains an RDF information page that you can review for further information.
- To further your RDF education, try this introduction to RDF.
- I used Dan Brickley's amazing RDF visualizer to generate an image of the WSDL description we have been discussing, available in GIF (141 KB) or SVG (27 KB) format. (Download the Adobe SVG plug-in to view the SVG file; after you install the plug-in, open any RDF file, in the plug-in or in your Web browser, to view it graphically.) The RDF source is Listing 1 from this article, of which I have also made a copy available.
- You might also be interested in using the RDF schema visualization tool against Listing 2.
- There are many SVG resources at the W3C's SVG page.
- Except for the diagram generation, I used 4RDF for working with and testing the RDF files and schemas for this article.
Uche Ogbuji is a consultant and co-founder of Fourthought Inc., a consulting firm specializing in XML solutions for enterprise knowledge management applications. Fourthought develops 4Suite, the open source platform for XML middleware. Mr. Ogbuji is a Computer Engineer and writer born in Nigeria, living and working in Boulder, Colorado, USA. You can reach him at firstname.lastname@example.org.
If 15 grams of table salt is used how many moles of sodium sulfate are produced
can you help with this one Find the area of the regular polygon. Round your answer to the nearest tenth. the individual sides are 10 the radius is 13.07
The area of a parallelogram is 420 centimeters squared and the height is 35 cm. Find the corresponding base. Be sure to show all work and label your answer
What wavelength of sound would a bat need to detect a 1cm diameter mosquito? Also, if it took 0.3 seconds for sound to return to a moth how far away would the moth be?
Put the following equations in standard form. State the center and the radius. x^2-5x+y^2+4y=-3 I get how to end up with (-2.5,2), what is the radius?
Radical equation, with a graphing calculator, to the nearest thousandth: 1/(πx - 3) + √2/(2.7x) = 1.8?
I came here earlier with this question: "A force of 20 N acts toward the west. Another force acts at the same point so that the resultant of the two forces is zero. What is the magnitude and direction of the second force?" I was wondering if someone could explain to ...
A force of 20 N acts toward the west. Another force acts at the same point so that the resultant of the two forces is zero. What is the magnitude and direction of the second force?
This question is about a lab called Heats of Ionic Reaction in my Physical Chemistry class. I need to know why there should be a large percentage error in deltaH4
For Further Reading
Saguaro National Park AQRVs
Most surface waters in Saguaro NP are likely to be well-buffered and, as a result, insensitive to acidic atmospheric deposition because of an abundance of base cations in underlying park soils and rocks. However, studies currently underway have identified certain soils in the park that appear to be very sensitive to acidification; small potholes or other waterbodies on these soils may also be vulnerable to acidification. Small potholes may also be sensitive to nutrient enrichment from nitrogen deposition. Nitrogen enrichment may result in algae blooms and oxygen depletion, but no studies have been done to study these potential effects in the park.
While there have been no systematic studies, there is currently no information indicating that wildlife in Saguaro NP are being affected by air pollutants.
Dark night skies are considered an important air quality related value at Saguaro NP, possessing value as a cultural, scenic, natural, and scientific resource. Air pollution and poor quality outdoor lighting degrade night skies, lessening a viewer's ability to see stars and other astronomical objects, and altering the nocturnal scene. Use of high quality lighting that produces very little scattered light can greatly improve the night sky. Reduction of haze from air pollution can also improve the night sky.
Soils in Saguaro NP may be sensitive to atmospheric deposition of nitrogen compounds. In some areas of the country, elevated nitrogen deposition has been shown to alter soil nutrient cycling.
Several species of vegetation in Saguaro NP are known to be sensitive to ozone, including Pinus ponderosa (ponderosa pine), Populus tremuloides (quaking aspen), and Rhus trilobata (skunkbush). Ozone concentrations and cumulative ozone doses are high enough to induce foliar injury to sensitive vegetation under certain conditions. Surveys done in the late 1980s found symptoms of ozone injury on ponderosa pine.
Vegetation in Saguaro NP may also be sensitive to nitrogen deposition. In some parts of the country, excess nitrogen deposition has resulted in changes in species composition and abundance; native plants adapted to nitrogen-poor conditions have been replaced by invasive and exotic species that are able to take advantage of increased nitrogen levels.
- Ozone Sensitive Plant Species Listed by Park
- Ozone Sensitive Plant Species on NPS and U.S. FWS Lands
- Ozone Bioindicators on NPS and U.S. FWS Lands
Visibility is a sensitive AQRV at Saguaro NP. Visibility monitoring in the park has documented frequent visibility impairment (haze) due to fine particle pollution in the area.
Shuttle's rate of descent
When the shuttle burns out of orbit, What is its rate of descent?
During daylight hours, at what altitude does it become a naked eye object?
I have watched a shuttle land. I saw the next shuttle that went up
after the "Challenger" disaster land at Edwards Air Force Base in California.
You *hear* the shuttle before you see it: it produces a characteristic
double sonic boom as it crosses the California coast. Then everybody looks
for it --- in those days typically thousands of people turned out for each
landing --- and it takes about 3 or 4 minutes from when it is spotted until
it lands, which is much quicker than if you were watching airplanes at the
airport, because the shuttle lands faster than a jet and glides downward at
a much steeper angle. The outside limit on seeing far-away objects is the
resolution of your eye --- how small a thing can you make out? As a rough
estimate, you can see something L feet long at a distance of D = 1720 * L/x
feet away, if you can resolve something x minutes of arc across. To
estimate how big x is for you, look at craters on the moon or a bird flying
across it: the moon is 30 minutes of arc across; what fraction of the moon
would an object have to be for you to recognize it? If you can make out
things 1/5 the size of the moon, x = 6 and you can see 100 foot objects
(the size of the shuttle) 30,000 feet away. But the problem with the
shuttle is not the outside limit, but *finding* the thing in the sky. If
you have ever watched a child's balloon rise up in a clear sky, taken your
eyes off it for a moment and then tried to find it again you know what I
mean. It helps that the shuttle is black (from below) but normally it
is not spotted until it is well within the outside limit.
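For the curious, the quoted factor of 1720 follows from the small-angle approximation if we take x to be in minutes of arc and assume that "making out" an object requires it to span about two resolution elements (both readings are mine, but they are consistent with the numbers above):

$$D \approx \frac{L}{2\,x_{\mathrm{rad}}} = \frac{L}{2\,x\left(\frac{\pi}{10800}\right)} \approx 1719\,\frac{L}{x}.$$

For L = 100 feet and x = 6 arcminutes this gives D of roughly 29,000 feet, matching the 30,000-foot figure.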
Click here to return to the Physics Archives
Update: June 2012
Name: j middle school
Date: Around 1993
How do space ships get off the ground?
Gravity pulls on any object near the earth, including you, and a spaceship.
To get off the ground, you have to push yourself up harder than the earth is
pulling you down. When you jump in the air, you are pushing against the
ground. When a bird flies, it is pushing against the air with its wings. A
spaceship (or rocket) pushes against its exhaust - the stuff that comes out
the back. Just as if you were sitting on a swing and threw a baseball
forward, you would move backwards. The harder you throw the baseball, the
more you will move. In the same way a rocket throws hot gasses out the back
and pushes itself forward. If it pushes hard enough, it "blasts off". If
it keeps pushing hard enough, it keeps going higher.
daniel n koury jr
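In slightly more formal terms, the baseball analogy is conservation of momentum: whatever momentum the exhaust carries backward, the rocket must gain forward. Writing M for the rocket's mass, v_e for the exhaust speed, and Δm for the bit of mass thrown out (symbols chosen here just for illustration),

$$M\,\Delta v = v_e\,\Delta m \qquad\Longrightarrow\qquad \Delta v = \frac{v_e\,\Delta m}{M},$$

so throwing the gas out faster, or throwing out more of it, gives the rocket a bigger push.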
Click here to return to the Engineering Archives
Update: June 2012
Web edition: May 21, 2010
Print edition: June 5, 2010; Vol.177 #12 (p. 4)
SOLVING OF SUN’S RIDDLES — Future space probes may skim as “close” as two million miles from the sun’s visible surface, a report to the National Academy of Sciences suggests. Before this can be done, however, greatly improved materials must be developed since temperatures at that distance would be about 5,000 degrees Fahrenheit, roughly the melting point of the toughest materials now known. A near-sun space probe is one of the several kinds of solar studies from high-flying balloons, satellites and probes recommended by the Academy’s Space Science Board. The suggested experiment could yield answers to most of the still unsolved problems of the sun and its mighty outpouring of radiation.
February 15, 2013
Julie Larsen Maher ©WCS
The beaver is one of nature’s most skillful architects, but it doesn’t just create lodges for its own toothy kin. The dams this engineering rodent builds can create water storage ponds that provide habitat for entire communities of wildlife, and ensure streams flow even when there is little rain and snowfall. As climate change warms up the earth and dries out valleys across the West, beavers have become an increasingly important ally in helping natural communities adapt. The Grand Canyon Trust is a 2011 recipient of a WCS Climate Adaptation Fund grant, provided by the Doris Duke Charitable Foundation. The group is working to reintroduce beavers in dozens of stream segments in Southern Utah, and tracking the benefits they provide to local ecosystems.
This cartoon shows some of the gases in the lowest layer of Earth's atmosphere. Most of the gas is nitrogen (N2). There is also a lot of oxygen (O2). The cartoon also shows carbon dioxide (CO2), water vapor (H2O), methane (CH4), sulfur dioxide (SO2), and carbon monoxide (CO).
Image courtesy UCAR, modified by Windows to the Universe staff (Randy Russell).
The original plan was for three N1 boosters to assemble a 200 tonne payload in low earth orbit. This would launch the original L3 to a direct landing on the moon and return. By comparison, Chelomei's UR-500 for the manned lunar flyby mission put only 20 tonnes in low orbit. However the leadership was only willing to fund N1 production at the rate of four per year, and Korolev concluded the only moon mission he could propose at such a rate was the single-shot lunar orbit rendezvous scheme selected by the Americans.
It was commonly believed that the N1 was inadequate for the one-shot moon mission proposed by Korolev, and there was no time to develop enhancements to it to make it suitable for such a mission. Kalmykov (Minister of the Radio-Technical Industry) sent a letter to Military-Industrial Commission Chairman Smirnov pointing this out. Feoktistov and the other spacecraft designers knew the mass of the payload was absolutely critical, with no margin for growth. But Mishin wanted to go ahead with development of the rocket anyway. Bushuyev said they needed 100 tonnes payload in LEO to accomplish the one-shot moon mission, not 75 tonnes, and that the only way to get this was to develop Lox/LH2 second and third stages for the N1 (the growth version outlined in the draft project). But there was no authority from the government to pursue this development. Korolev and later Mishin wouldn't admit they had miscalculated the minimum payload mass needed, and couldn't admit that Soviet engines were not as good as the Americans -- that would result in the whole project being killed. They wanted to see the N1 built at any cost, even that of failure. Chertok observes that after all, chief designers are only people too.
This was the difference between 'upstairs' and 'downstairs' at a design bureau. At the stroke of a pen, Korolev increased the N1 payload from 75 to 93 tonnes, upgraded the gross lift-off weight to 2750 tonnes, and moved the LK production schedule up to 1965. This led to the absurd project schedule in the 19 July 1964 decree, which imagined first flight test of the as-yet-undesigned small LK in 1966. All involved downstairs knew that to achieve a 93 tonne payload for the booster, and a one-shot moon landing payload no larger than 93 tonnes, would require enormous effort.
The radio telescope can see much that an optical telescope cannot. Many things that have been discovered in the universe would not have been discovered if it were not for the radio telescope.
The inventor of the radio telescope is Grote Reber. He was born in Wheaton, Illinois, in 1911. As an adult he was an American engineer and amateur astronomer. In 1937, in his backyard, he built the world's first radio telescope. The dish was a mere 31 feet across. The telescope could move both the dish and the mount; this is called fully steerable. After three years of working with his radio telescope, Grote Reber wrote a report concluding that the Milky Way actually gives off radio waves. He went on to lead experimental microwave research in Washington, D.C.
Although Grote Reber alone built the world's first radio telescope, he could not have done it without the findings of a man named Karl Jansky. Karl Jansky was born in 1905 and worked at a telephone company, where he had to build a radio antenna to find out why there was too much static on long distance telephone calls. It turned out to be radio waves coming from space. As a result of this, Jansky wrote a report on the radio waves he discovered. Grote Reber, being highly interested in astronomy, found Jansky's work and decided to continue it, thus building the radio telescope.
Jansky and Reber each built a radio antenna, or radio telescope, but being different people they built their instruments out of different things. Jansky built his radio antenna mostly out of wood; it had 21-foot metal poles and reflectors and was mounted on a track with tires. Grote Reber, on the other hand, built his radio telescope more like people do today: it had a dish made of wire screen. More commonly used today is glass or metal.
There are many different kinds of light. There's visible, ultraviolet, infrared, X-rays [the shortest] and radio [the longest]. A wavelength is measured from one crest to another. (The original essay showed two small drawings here: one of a short wavelength and one of a long wavelength.)
Frequency measures the number of waves that pass each second. The higher the frequency, the shorter the wavelength; the lower the frequency, the longer the wavelength.
The amplitude of a wave is measured by how high or low the wave goes from the middle. The higher the amplitude, the more energy the wave carries.
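Frequency and wavelength are tied together by the speed of light; the essay doesn't state the relation, but it is

$$c = \lambda f, \qquad c \approx 3 \times 10^{8}\ \mathrm{m/s}.$$

For example, the famous 21 cm radio waves given off by hydrogen gas have frequency $f = c/\lambda \approx 1.4$ GHz.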
The radio telescope is used to detect most of the things that an optical telescope cannot see. The components of a radio telescope are the same as in a car radio: they both have an antenna, a receiver, a detector, and a computer or speaker.
The radio telescope's dish has to be very large. The reason for this is that the radio telescope detects very weak signals, so it has to be big to do that. The dish of a radio telescope collects all the waves at the wavelength that the receiver is tuned to. After the dish gets the waves, they bounce to the antenna, or feed if you want to be technical about it. During that action the waves are concentrated so they can be turned into electrical signals. Then the receiver comes into play. It is tuned to one wavelength by an astronomer; the radio telescope can only detect the wavelength that the receiver is tuned to. The receiver also amplifies the waves and changes them into electrical signals. [One more thing] The receiver must be kept very cold. If it is not, there will be a lot of unnecessary static and you cannot see what you want to see as well. Next comes the detector. The detector measures the amount of energy in the wave so the computer knows how to make the picture. After going through the detector, the signal moves on to the computer. The computer records the signal and stores it. The computer can make a picture by having the radio telescope sweep back and forth across an object. By doing that, the radio telescope can get good details [that it wouldn't be able to see otherwise] so the computer can make a picture.
When people decide to build a radio telescope, they can't just build it any old place; they have to build it away from things that give off radio waves [radio and T.V. stations are not good]. The reason for this is that the radio and T.V. stations' waves might mix in with the incoming waves.
One way of getting very fine details is to use an array. An array is a lot of radio telescopes that are connected to one another. When you do this you get a very powerful radio telescope. Arrays can also join up to each other all over the world and get even finer detail. The only down side of an array is it is very hard to see very faint objects.
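One rule of thumb lies behind both the need for huge dishes and the power of arrays, though the essay doesn't give it: the finest angle a telescope can resolve is roughly

$$\theta \approx \frac{\lambda}{D},$$

where $\lambda$ is the observing wavelength and $D$ is the dish diameter or, for an array, the greatest separation between its dishes. Because radio wavelengths are many thousands of times longer than those of visible light, $D$ must be enormous to see fine detail, and linking telescopes across the world makes $D$ effectively thousands of kilometers.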
Since the time of Grote Reber the radio telescope has gotten much bigger. Now a days a big radio telescope would be about ninety-two meters. Compare that to Grote Reber's measly ten meter radio telescope. Here is a chart of some other famous radio telescopes:
|Size|Location|Year|Notes|
|---|---|---|---|
|250 feet (76 m)|Jodrell Bank, England|1957| |
|1,000 feet (305 m)|Arecibo, Puerto Rico|1960's|World's biggest (covers a valley)|
|78,000 square feet of surface area|National Radio Astronomy Observatory, Green Bank, West Virginia| |U.S. biggest (fell down)|
|328 feet (100 m)|West Germany|1971|Biggest fully steerable|
|82 feet (each)|New Mexico|1970's|An array (each weighs 200 tons)|
|3 miles long|Mullard Radio Astronomy Observatory, Cambridge, England| |An array (8 radio telescopes)|
|210 feet (64 m)|Parkes Observatory, New South Wales| | |
|45 m|Pune, India| |An array (under construction; estimated cost $15,000,000)|
The radio telescope has already helped us to find many fascinating things. If we chose to continue the works of Karl Jansky and Grote Reber, we will be able to find things for years to come.
Macaulay, David, The Way Things Work, Dorling Kindersley limited, Great Britain, 1988
Schloerb, Peter, interview in January and February 1996
Compton's Interactive Encyclopedia, Simon and Schuster, 1995
Branley, Franklyn, The Electromagnetic Spectrum, Fitzhenry and Whiteside limited, Toronto, 1979
Kerrod, Robin, The Universe, Great Britain, Sampson Low, 1975
Science, 22 April, 1994
Heiserman, Dave, Radio Astronomy for the Amateur, Tab Books, 1975
See also the Dr. Math FAQ: 3D and higher
Browse Elementary History/Biography
Stars indicate particularly interesting answers or
good places to begin browsing.
Selected answers to common questions:
About Roman numerals.
- Adding and Subtracting Roman Numerals [10/07/1997]
Do you have any suggestions for how to teach adding and subtracting of Roman numerals?
- African-American Mathematicians [11/07/1996]
I am trying to compile a list of African American mathematicians.
- Egyptian Method of Multiplication [6/26/1996]
Have you ever heard of an Egyptian Method of Multiplication?
- Eratosthenes and the Circumference of the Earth [10/7/1995]
How did Eratosthenes measure the circumference of the earth?
- First Math Teacher, Pythagorean Theorem [1/6/1995]
Who was the first math teacher? How did Pythagoras come up with a^2 +
b^2 = c^2?
- Geometry History [8/8/1996]
I'm looking information about the history of Geometry.
- History of Numbers [09/04/1997]
My Algebra 2 teacher asked us to do a report on the history of numbers.
- Lattice Multiplication [8/30/1996]
Can you please explain the lattice method of multiplication?
- Roman Numerals [01/13/1997]
What does MCMLXXXVI mean?
- Roman Numerals [06/05/1997]
Is there a general reference that deals with writing numbers in Roman numerals?
- Russian Peasant Method of Multiplication [10/07/1998]
I understand the 'Russian peasant' method of multiplication, but not why it works.
- What is the Definition of Zero? Who Invented the Symbol? [1/9/1995]
A student from Monta Vista wants to know the origins of zero.
- Why is a Circle 360 Degrees? [1/2/1995]
What is the origin/basis of the degree measurement? Why is a circle
divided into 360 degrees rather than some other number?
- Year 0 [10/19/1998]
How many years are there between 10 B.C. and 10 A.D.?
- 360 Degrees in a Circle [06/09/1998]
Why is a complete rotation around a circle equivalent to 360 degrees?
- Abraham Lincoln and the Rule of Three [04/13/2003]
In the biography of Abraham Lincoln he states that he learned to
'read, write, and cipher to the rule of 3.' Can you please explain
the phrase 'cipher to the rule of 3'?
- Alabama legislature and pi [04/15/1998]
Did Alabama really vote that pi should have its biblical value of 3?
- American Mathematicians [4/16/1996]
I am doing a project on a famous American mathematician, but I can't
think of any!
- American vs. European Billion [07/30/2000]
Why are there differences between the American and the European systems
for naming large numbers?
- Ancient Math Symbols [09/07/1997]
I need to know the numerals for 1, 10, 100, 1000 in Arabic, Sumerian, Greek, Roman, and Hindu systems.
- Bachet's Theorem [10/08/1998]
My maths teacher said that a mathematician discovered that you can make
any number by adding together a combination of no more than 4 square numbers.
- Bar over a Whole Number? [06/05/2001]
What does a bar over a whole number indicate?
- The Base of Roman Numerals [02/19/2004]
What base does the Roman Numeral system use?
- Chinese Abacus [4/8/1996]
Where can I find information on using a Chinese abacus?
- Commas and Decimal Points in Currency Notations [11/08/2007]
Why do Europeans use the comma in currency amounts instead of the
decimal point as in the United States?
- Contribution of Romans in Math History [12/18/2006]
How did the Romans contribute to math? What did they add to
mathematics besides Roman numerals?
- Counting in the Teens [09/10/1998]
Why do we count ten, eleven, twelve, thirteen, instead of ten, eleventeen, twelveteen?
- Counting Sheep the Traditional Way [10/13/2003]
Do you know anything interesting about mathematics in England, say
from 400 to 1200 AD?
- Decimals and Roman Numerals [10/22/1998]
Is there any notation for fractions in Roman numerals?
- Decimal System [08/12/2003]
Why the decimal system? Why not 12 as the base number?
- Division with Roman Numerals [03/02/2004]
I'm having real problems trying to divide using a non-place-value system, like Roman numerals. The concept of using letters rather than numbers is confusing me. Can you please give me an example of a division worked out this way?
- Egyptian Division [06/23/1998]
How did the ancient Egyptians do division?
- Egyptian Numerals [10/14/1998]
Can you give me some information or references on Egyptian numerals?
- Etruscan Origins of Roman Numerals [12/07/2003]
In Roman numerals, what do the letters stand for? Are there words that
they come from?
- Feet per Mile [12/08/1998]
What is the history behind the fact that there are 5,280 feet in a mile?
- Fraction Riddle [09/05/2003]
How can 1/2 of 9 be 4?
- Galley or Scratch Method of Division [12/08/2002]
In the 15th century, what was the method for performing division?
- Googol, Kasner, and Milton Sirotta [07/14/1999]
Who coined the term "googolplex," and when?
- Greatest Mathematician [8/9/1996]
Who was the greatest mathematician ever?
- History and Origination of the Metric System [08/13/2005]
Who invented the metric system?
Find the equations of the normal lines to the curve that are parallel to the given line.
Please answer in steps.
First you need to find the derivative of the curve.
Then, from the equation of the line you are given, you see that the slope of the normal lines must be -2; and since the tangent is perpendicular to the normal, the slope of the tangents is the negative reciprocal of -2, which is m = 1/2.
Now that you have the target slope, set the derivative y' equal to 1/2 and solve for x.
Then with that, go back to the original equation and find y.
And you have all the ingredients necessary to write the equation of each normal line.
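To see the recipe in action on a concrete, made-up example (the curve below is my own choice for illustration, not the one from the original problem): take the curve $y = x^2$ and ask for the normal line parallel to a line of slope $-2$. The tangent slope must be $1/2$, so

$$y' = 2x = \tfrac{1}{2} \;\Rightarrow\; x = \tfrac{1}{4},\quad y = \tfrac{1}{16},$$

and the normal line is $y - \tfrac{1}{16} = -2\left(x - \tfrac{1}{4}\right)$. The identical steps apply to whatever curve your problem actually gives.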
Research Article at the Digital Library for Physics and Astronomy. Their website is hosted by the High Energy Astrophysics Division at the Harvard-Smithsonian Center for Astrophysics. The SAO/NASA Astrophysics Data System (ADS) is a Digital Library portal for researchers in Astronomy and Physics, operated by the Smithsonian Astrophysical Observatory (SAO) under a NASA grant.
|Title:||An experimental study for scale prevention in boiler by use of ultrasonic waves|
|Authors:||Heo, Pil Woo; Lee, Yang Lae; Lim, Eui Su; Koh, Kwang Sik|
|Affiliation:||AA(Korea Inst. of Machinery and Mater., Taejon), AB(Korea Inst. of Machinery and Mater., Taejon), AC(Korea Inst. of Machinery and Mater., Taejon), AD(Kyungpook Natl. Univ.)|
|Publication:||The Journal of the Acoustical Society of America, vol. 112, iss. no. 5, p. 2441-2441 (ASAJ Homepage)|
|NASA/STI Keywords:||CHEMICAL REACTIONS, HEAT TRANSFER, METAL IONS, SURFACE REACTIONS, THERMAL RESISTANCE, ULTRASONIC RADIATION|
|Comment:||NASA/STI Accession number: 20020082568|
In the case of a boiler, scale forms on the surface of the tube through chemical reactions of the Ca and Mg ions contained in the water, and the heat transfer rate is reduced by the added thermal resistance. This lowers energy transfer efficiency and also causes environmental pollution due to the use of chemicals for scale removal. In this paper, we have first investigated the effects of irradiated ultrasonic waves on water with impurities in a beaker. The experiments show that exposed water is less transparent and has finer particles as compared to unexposed water. This means that the ultrasound agitates the water in the beaker and breaks up particles, so the broken fine particles remain suspended and precipitate less. Second, a laboratory experiment with an exposed sample under conditions similar to a boiler shows better scale prevention effects as compared to an unexposed one, and 20 kHz ultrasound gives about 3 times better scale prevention than 40 kHz. Finally, scale prevention experiments in a real small boiler tell us that the exposed sample results in 36 to 59% less scale formation than an unexposed one.
The ADS maintains three bibliographic databases containing more than 7.8 million records: Astronomy and Astrophysics, Physics, and arXiv e-prints. The main body of data in the ADS consists of bibliographic records, which are searchable through highly customizable query forms, and full-text scans of much of the astronomical literature which can be browsed or searched via our full-text search interface. Integrated in its databases, the ADS provides access and pointers to a wealth of external resources, including electronic articles, data catalogs and archives.
This laser is a type of ultraviolet laser often used in UV photolithography and in eye surgery.
The term excimer is a contraction of the English phrase "excited dimer."
It uses a combination of an inert gas, such as argon, krypton, or xenon, with a reactive gas. Under appropriate conditions of electrical stimulation, a pseudo-molecule is formed which exists only in an excited state and can emit laser light in the ultraviolet range.
The excimer laser removes tissue with an accuracy of 0.25 microns. Currently in the second decade of use, the technologically sophisticated Excimer Laser has added a tremendous range of precision, control and security for the correction of vision errors. Using this remarkable technology, the cornea is reshaped to suit the requirements of your glasses or lenses, while reducing or eliminating dependence on corrective lenses for life.
Ultraviolet light from the excimer laser is absorbed well by tissues and organic compounds. Instead of cutting or burning, the excimer laser delivers enough energy to break the bonds between molecules of the tissue. The excimer laser has the ability to lift or remove small, thin layers of cells without damaging the surrounding tissue. A true dimer (two identical atoms) is actually rare here, because most excimer lasers (like this one) use noble gas halides.
The action of the excimer laser is also complicated by the fact that the excimer molecule has a bound excited state but an unbound (repulsive) ground state.
This is because the noble gases like xenon and krypton are inert and do not tend to form chemical compounds. However, in an excited state their atoms can temporarily bind with each other (forming dimers) or with halogen atoms such as fluorine and chlorine (forming excited complexes). I hope this serves as an answer.
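A rough number shows why ultraviolet photons can break bonds directly rather than just heating the tissue. Taking the common ArF excimer wavelength of 193 nm as an example (the answer above does not name a specific wavelength), each photon carries

$$E = \frac{hc}{\lambda} = \frac{(6.63\times10^{-34}\ \mathrm{J\,s})(3.0\times10^{8}\ \mathrm{m/s})}{193\times10^{-9}\ \mathrm{m}} \approx 1.0\times10^{-18}\ \mathrm{J} \approx 6.4\ \mathrm{eV},$$

which exceeds the roughly 3.6 eV of a typical carbon-carbon bond.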
The gender issue
Would you like to become a mathematician, or rather stick to being a woman? There are of course many people who are both, and very successful ones at that, but the fact remains that women are still under-represented in maths-based careers and degree courses. They are also twice as likely to drop out of maths-based careers as men. Now a new study suggests that this may be due to the strain a maths environment puts on women's self-perception; a strain that works through unconscious gender stereotypes.
In an article published in the January issue of Psychological Science, psychologists Amy Kiefer of the University of California, San Francisco, and Denise Sekaquaptewa of the University of Michigan report on a study they carried out on undergraduates that were enrolled in an introductory calculus class. They rated women's implicit gender stereotypes, for example by checking if they automatically associated "male" with maths ability, and their self-perception, for example by asking if they identified themselves as feminine. They then followed their performance independently of the maths ability they had displayed previously.
The researchers found that the worst performers were those that had strong implicit gender stereotypes and were likely to identify themselves as feminine. This may seem unsurprising, but the important point is that the women's stereotypes were unconscious: the majority of women taking part in the study had explicitly stated that they do not believe that men are better at maths than women.
Another interesting point is the extent to which under-performance seems to be linked to gender identification. The authors suggest that this may give some insight into the high drop-out rate of women in maths-based careers. Women may feel that to be in tune with their work environment, they need to distance themselves from feminine characteristics. And the more they value these characteristics, the bigger the sacrifice that this involves, so that even women who are very good at what they do may come to leave their field.
It's sad to see how deep-seated women's stereotypes about their own abilities are, but there is hope. In recent decades women's participation in maths and science has increased drastically. There are many highly successful women mathematicians that can serve as role models. And once a critical mass has been reached, even the most ingrained stereotype can be overturned by experience.
posted by Plus @ 3:08 PM
- At 1:37 PM, said...
Hi!, I am Sarah-Jane and I am an Intersex-46xx Woman. I have a phd in nuclear physics and have never had a problem with the three 'R's. I believe it is because we are indeed steretyped by a stereotypical society and because of this we are continually compartmentalised into believing what we can and can not do. As a feminist of the seventies I fought my family and socity to do what I wanted to do and dd not take s*** from anyone. We all have to be who we know ourselves to be and then we can be honest with ourselves to give us the strength to achieve what we ourselves are inside of us. I only broke free when I realised the only one holding me back was me. This pi**** me off and gave me the strength form within to succeed. We are only held back by are inability to realise our own ambitions.... | <urn:uuid:e301aab9-c870-4fd4-91b4-37a44aea8966> | 2.734375 | 670 | Comment Section | Science & Tech. | 45.8025 |
The more you understand, the less you have to memorize.
A good example is trigonometric identities, of which there are quite a number. Should a student memorize trigonometric identities? Well, at first, it is probably wise to memorize a few of them. Part of a teacher’s job is to help students identify what is essential to memorize, and what is more peripheral. In the case of trig identities, the most important ones are

(1) $\sin^2 \theta + \cos^2 \theta = 1$

(2) $\sin(A \pm B) = \sin A \cos B \pm \cos A \sin B$

(3) $\cos(A \pm B) = \cos A \cos B \mp \sin A \sin B$
Even the signs are superfluous in equations (2) and (3); one can remember the top signs only, and make use of the symmetry properties of the sine and cosine functions (i.e., sine is odd and cosine is even). That is, to derive the bottom signs from the top signs, just replace $B$ by $-B$, and use the facts that $\sin(-B) = -\sin B$ and $\cos(-B) = \cos B$ on the right sides of equations (2) and (3).
Now divide each term of equation (1) by $\cos^2 \theta$ and you will get another standard identity, $\tan^2 \theta + 1 = \sec^2 \theta$; similarly, if you divide each term of equation (1) by $\sin^2 \theta$ you obtain $1 + \cot^2 \theta = \csc^2 \theta$.
Let $B = A$ in equation (2) (with the plus signs), and you will instantly derive another important identity, $\sin 2A = 2 \sin A \cos A$. Similarly for equation (3), which gives $\cos 2A = \cos^2 A - \sin^2 A$; then combine the result with equation (1) to derive the three versions:

$\cos 2A = \cos^2 A - \sin^2 A = 2\cos^2 A - 1 = 1 - 2\sin^2 A$
Rearranging the last two equations, one can derive the following identities (important for integrating even powers of sine and cosine):

$\sin^2 A = \frac{1 - \cos 2A}{2}, \qquad \cos^2 A = \frac{1 + \cos 2A}{2}$
And so on; but you get the idea: Knowing a few basic identities, you can rapidly derive many more. If you practice these kinds of calculations, you will get good at them, and will be able to effect them very quickly, even on exams.
But are even the three basic identities too much for you to memorize? Well, once you learn a little bit about complex numbers, including Euler’s formula

$e^{i\theta} = \cos \theta + i \sin \theta,$
then you don’t even have to remember them! (Of course, you do have to remember Euler’s formula, but there is a nice picture to help you!)
For example, consider

$e^{i(A+B)} = e^{iA} e^{iB},$

which follows from the properties of exponential functions. Apply Euler’s formula to each side of the previous equation to get

$\cos(A+B) + i\sin(A+B) = (\cos A + i \sin A)(\cos B + i \sin B).$

Expanding the right side, we get:

$\cos(A+B) + i\sin(A+B) = (\cos A \cos B - \sin A \sin B) + i(\sin A \cos B + \cos A \sin B)$
Equating real and imaginary parts on each side of the previous equation, we end up with the trigonometric identities in equations (2) and (3). Voila!
To derive equation (1) in a similar way, use

$e^{i\theta - i\theta} = e^0 = 1.$

Then write the left side of the previous equation as

$e^{i\theta} e^{-i\theta},$
and use Euler’s formula on each factor; within a few lines of algebra you will have it.
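If you ever want to double-check a derivation like this by machine, a few lines of Python will verify the identities symbolically. This snippet is my own illustration, assuming the sympy library is available; it is not part of the original derivation:

    # Symbolic sanity check of identities (1)-(3) and the half-angle forms.
    import sympy as sp

    A, B = sp.symbols('A B', real=True)

    # The exponential property behind the Euler derivation
    assert sp.simplify(sp.exp(sp.I*(A + B)) - sp.exp(sp.I*A)*sp.exp(sp.I*B)) == 0

    # Equations (2) and (3), with the plus signs
    assert sp.expand_trig(sp.sin(A + B)) == sp.sin(A)*sp.cos(B) + sp.cos(A)*sp.sin(B)
    assert sp.expand_trig(sp.cos(A + B)) == sp.cos(A)*sp.cos(B) - sp.sin(A)*sp.sin(B)

    # Equation (1) and the power-reduction identities
    assert sp.simplify(sp.sin(A)**2 + sp.cos(A)**2 - 1) == 0
    assert sp.simplify(sp.sin(A)**2 - (1 - sp.cos(2*A))/2) == 0
    assert sp.simplify(sp.cos(A)**2 - (1 + sp.cos(2*A))/2) == 0
    print("All identities verified.")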
For more along these lines, see this page at John Baez’s site.
Returning to a general discussion, I have always advised my students to memorize the absolute minimum necessary, but make sure to know that bit cold. And the best way to do this is to practice using what you wish to memorize in solving exercises and problems. In this way, you naturally memorize just by repetition of use. Just staring at a formula and trying to will it into memory never worked very well for me, and I don’t think it typically works well for others either. And it wastes precious time that would be better spent solving problems or reading around the subject so as to open the mind to new concepts.
Another excellent way to memorize things is to represent the thing to be memorized by a picture, which is much more memorable than a collection of words. A good example is the mean value theorem. As a student, I found it very easy to remember this theorem just by remembering the picture. One can remember such pictures for a lifetime, whereas theorems remembered merely as collections of words tend to fade a lot sooner.
The best way to remember something is to understand it thoroughly, from many different perspectives. This takes time and work, but ultimately it is the most satisfying. For example, one can memorize the multiplication table by rote (I did, but then I grew up at a time when this was extremely common). But once you’ve memorized it, that’s all you know. But if you examine the table carefully, noting the many patterns that are present in it, then you will know a lot more.
For instance, you will notice that every number in the multiplication table that is one step NE or SW from a number on a main diagonal is one less than the nearest number on the main diagonal. For example, $3 \times 5 = 15$, which is one less than $4 \times 4 = 16$, and $5 \times 7 = 35$, which is one less than $6 \times 6 = 36$, and so on. You might speculate that this is always true, no matter which whole numbers you use, as long as you follow the pattern. And you might convince yourself that this is true using an area model: Take a square “floor” that has $n$ tiles on each side. If you take a row of tiles off the North end of the floor and move the tiles so that they are all on the East side of the floor, you will end up with a rectangular floor of dimension $(n-1) \times (n+1)$, but with an extra tile sticking out at the East end of the North side of the floor. Algebraically, the fact that the number of tiles is the same in each figure can be represented by $n^2 = (n-1)(n+1) + 1$, which can be rearranged to get a common factoring pattern learned in high school: $n^2 - 1 = (n-1)(n+1)$.
The moral is that by understanding the multiplication table, not just memorizing it, your knowledge can extend far beyond the table (for instance, you can immediately determine that $19 \times 21 = 20^2 - 1 = 399$), but you also prepare your mind to understand more advanced material. In our example, noticing a simple pattern in the multiplication table prepares you to understand a factoring pattern when you tackle algebra in high school.
To continue the story about patterns in the multiplication table, you can go two steps away from the main diagonal; this corresponds to taking two rows of tiles from the North end of the floor and placing them on the East end of the floor (except now a $2 \times 2$ block of tiles will stick out). Then continue with three rows, four rows, and so on. This gives a concrete model for the factoring pattern $n^2 - k^2 = (n-k)(n+k)$.
But, of course, to each his (or her) own. So experiment, play with mathematics, and then do what works best for you. | <urn:uuid:2446cfd4-3abc-4a4e-bf31-7b1fc7dc4439> | 3.65625 | 1,289 | Personal Blog | Science & Tech. | 49.962752 |
Adam Eyre-Walker has published a review of adaptive evolution in a few well studied systems: Drosophila, humans, viruses, Arabidopsis, etc. These organisms have been the subject of many studies that used DNA polymorphism, DNA divergence, or a combination of the two to detect natural selection in both protein coding and non-coding regions of the genomes. Now that we have whole genome sequences for multiple closely related species from a few different taxa, many researchers are interested in determining the role of natural selection in the evolution of DNA sequences.
Eyre-Walker claims that the evidence for adaptive evolution is greater in Drosophila than in humans. But JP at GNXP thinks that Eyre-Walker doesn’t give the full story of adaptive evolution in the human genome, leaving out important examples. Eyre-Walker relates the difference in adaptive evolution between these two well studied species to differences in population size; humans have a smaller population size, and therefore fix fewer weakly advantageous mutations.
One way of measuring adaptive evolution is by comparing polymorphism and divergence at synonymous and non-synonymous sites (the McDonald-Kreitman test). Unlike some other tests (e.g., Tajima’s D), the MK test is fairly immune to historical changes in population size, but an ancestral increase in population size may lead to an overestimate of advantageous substitutions. Eyre-Walker claims that this is not a concern for studies of a pair of model Drosophila species for two reasons:
“First, if anything, D. melanogaster appears to have gone through a population size decrease. Second, estimates using polymorphism data from either D. simulans or D. melanogaster are very similar; it is difficult to see how the bias could be the same given that the two species have different Ne.” [References omitted.]
Eyre-Walker’s citation for the D. melanogaster ancestral population size is a study that looked at codon bias. The effect of Ne on codon bias will persist much longer than that on polymorphism. It’s more probable that D. melanogaster has been recovering from a small ancestral population size (one that left that signature in codon usage), and has in fact been increasing in population size. It seems to me that the estimate of adaptive evolution in Drosophila is a bit high because of the increased population size in both D. melanogaster and D. simulans.
As mentioned previously, MK tests are robust to many violations of the assumptions that underlie the tests. Eyre-Walker points out that slightly deleterious mutations may lead to biased estimates of advantageous substitutions:
“The exception is the segregation of slightly deleterious non-synonymous mutations, because these can bias the estimate of α [proportion of non-synonymous substitutions that have been fixed by adaptive evolution] either upwards or downwards depending on the demography of the population. If the population size has been relatively stable, the estimate of α is an underestimate, because slightly deleterious mutations tend to contribute relatively more to polymorphism than they do to divergence when compared with neutral mutations. These slightly deleterious mutations can be controlled for by removing low-frequency polymorphisms from the analysis, because such mutations tend to segregate at lower frequencies than do neutral mutations. However, slightly deleterious mutations can lead to an overestimate of α if population sizes have expanded, because mutations that might have been fixed in the past, when the population size was small, no longer segregate as polymorphisms. Even fairly modest increases in population size can create artifactual evidence of adaptive evolution.” [References omitted.]
Slightly deleterious mutations exaggerate the effect of changes in population size. I’m not pointing this out because of how it relates to adaptive evolution in Drosophila. Instead, I find the solution to this problem quite fascinating: remove low-frequency polymorphisms from the data set. This should remove most slightly deleterious polymorphisms from consideration (assuming constant population size). This immediately led me to think of a particular data set that has this quality built in: HapMap.
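To make the arithmetic concrete, here is a rough sketch in Python (my own illustration with invented counts, in the style of the Smith and Eyre-Walker estimator; these are not data from any study discussed here):

    # alpha = 1 - (Ds*Pn)/(Dn*Ps): the estimated proportion of
    # non-synonymous substitutions fixed by adaptive evolution.
    def mk_alpha(dn, ds, pn, ps):
        return 1.0 - (ds * pn) / (dn * ps)

    # Hypothetical divergence (D) and polymorphism (P) counts at
    # non-synonymous (n) and synonymous (s) sites.
    print(mk_alpha(dn=80, ds=100, pn=40, ps=100))  # 0.50 using all polymorphisms

    # Removing low-frequency variants tends to cut Pn (enriched for
    # slightly deleterious mutations) more than Ps, raising the estimate.
    print(mk_alpha(dn=80, ds=100, pn=25, ps=90))   # ~0.65 using common variants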
One major criticism of much of the SNP data in circulation is that it suffers from ascertainment bias (see here for example). Because SNPs are first identified in a small sample and assayed in a larger sample, many rare SNPs are missed. This poses a big problem for tests that depend on the site frequency spectrum of polymorphisms (eg, Tajima’s D), but could actually be useful if slightly deleterious mutations are segregating in the population. This assumes two things: the researcher is using an MK based test and the population size has been constant for many generations. We know that human populations have increased greatly over many generations, so we’re probably still overestimating adaptive evolution if we don’t take mildly deleterious mutations into account. | <urn:uuid:887c498c-f3fe-4e55-8300-1cdb016bde5b> | 2.953125 | 1,020 | Personal Blog | Science & Tech. | 22.560526 |
Saturn's 'day' shorter by five minutes
An analysis of Saturn's atmosphere has led to the planet's 'day' being redefined as somewhat shorter.
According to a study in the journal Nature, the time it takes the ringed behemoth to complete a spin on its axis is 10 hours, 34 minutes and 13 seconds, more than five minutes shorter than previous estimates.
Unlike a rocky planet, Saturn has no visual landmarks. Instead it is covered in clouds of gas driven by layers of jetstreams, making it hard to measure the planet's rotation.
As a result, astronomers have traditionally based their calculations on Saturn's magnetic field. But this signal can fluctuate and does not accurately measure how fast the planet's deep interior is rotating.
Dr Andrew Prentice of Monash University in Melbourne says the problem with using Saturn's magnetic field is that it changes over time.
"It does not give a proper measure of Saturn's internal rotation since the magnetic field is slipping relative to the planet," he says.
"As a result the period seems to have lengthened by seven to eight minutes since the time of the Voyager 1 and 2 missions in the mid-1980s."
Mapping the winds
An international team led by scientists from Oxford University and the University of Louisville, Kentucky, used a different technique based on infrared images taken by the US spacecraft Cassini orbiting Saturn.
"We realised that we could combine information on what was visible on the surface of Saturn with Cassini's infrared data about the planet's deep interior and build a three-dimensional map of Saturn's winds," says Oxford professor Dr Peter Read.
"With this map, we were able to track how large waves and eddies develop in the atmosphere and from this come up with a new estimate for the underlying rotation of the planet."
Read says the fact that a Saturn day has been shortened by five minutes is a bigger deal than one might think.
"It implies that some of our previous estimates of wind speeds may be out by more than 160 miles (250 kilometres) per hour," he says.
"It also means that the weather patterns on Saturn are much more like those we observe on Jupiter, suggesting that, despite their differences, these two giant planets have more in common than previously thought."
Prentice agrees, saying the result helps provide a consistent story across the solar system's gas giants.
"The bottom line is that Jupiter and Saturn are very similar in physical structure, origin and evolution." | <urn:uuid:4a3617fe-ae0f-49bf-8354-0e8cd3687561> | 3.515625 | 509 | Truncated | Science & Tech. | 40.939761 |
In light of the publishing of their new book, the authors of Climate Change -- Past, Present & Future: A Very Short Guide gave a lecture this past Saturday, April 25th, outlining the key points they hoped to make by publishing this work.
According to Dr. Warren Allmon, one of the three authors of this publication, the book was written for two reasons: First, so that the museum had a physical product to give to people interested in or curious about climate change, and secondly, because they believe that climate change “is the most important topic of the 21st century. It is something we really should be thinking about.”
Dr. Robert M. Ross, another of the authors of Climate Change -- Past, Present & Future: A Very Short Guide presented the team’s research on how to accurately analyze climate science by looking at the climate change of the past, as well as why it is important to study the phenomenon of global warming. “Global temperature is a very abstract concept to most people,” says Ross. It is difficult to grasp the concept of an average temperature over the entirety of the earth’s surface, he says. Scientists will oftentimes average the earth’s climate from multiple years and compare this data to that of years past. The purpose in doing this is to analyze trends in global climate in order to determine any abnormalities in the earth’s current conditions. It is important to know this information, says Ross, because climate change affects all life on earth.
A second key point of the publication, as presented by author Trisha A. Smrecak, is the effective teaching of climate change to youth and adults alike. According to Smrecak, there are a number of necessary steps that must be taken in classrooms in order to truly educate the American public about the importance of climate science.
First, it is important to engage students’ existing conceptions about climate and climate change. Everyone has preconceived notions, she says, that are based in science. It is important to address what people already know and move from there.
Secondly, it is important to teach science in science classrooms, not politics. “We can go so much further by understanding the science itself and letting people make their own inferences,” says Smrecak.
Also, it would be beneficial to teach climate change across the curriculum in schools to promote a higher understanding of the topic. Climate change can be incorporated into all areas of the science curriculum since it is steeped in scientific history and affects all facets of life on earth.
The book also suggests that being emotional or pessimistic about the topic of climate change works as a detriment to the cause of proper education on the topic. “People get sick and tired of hearing that everything is going to fall apart,” says Smrecak. “It is important to think about small, measurable goals that we can achieve.” It is more important when trying to motivate people to take action to focus on the small steps they can take than it is to focus on the big picture. People feel overwhelmed trying to fix everything at once.
Going off of this point, the book explains that it is important to highlight opportunities for youth to make a difference. Showing youth volunteer opportunities as well as future career opportunities empowers them to see that they can have an impact on the future of climate change.
Finally, says Smrecak, it is easier to teach students and community members about climate change if local, tangible examples are used. Since global climate change is such an abstract concept, it can be difficult for people to grasp. They may not always understand the severity of climate change if they have not directly experienced it. Instead, people respond better if they can see an example that affects their daily lives. Encouraging a community to take action is much easier if people can directly see the motivating factor behind their actions.
Allmon also addressed the audience on the issue of society not truly accepting the significance of climate change.
One key reason society in general tends to ignore the importance of climate change, says Allmon, is that we like to avoid bad news, and because of this we are often inclined to listen only to what we want to hear. “We actively turn off what we don’t want to think about,” says Allmon. “If we only listen to one point of view and that point of view happens to be uninformed, then it becomes a self-perpetuating cycle.”
Also an issue, says Allmon, is that as a society, we don’t like to accept science that is not backed by absolute data, not understanding that science is an ever-evolving process that rarely boasts a great deal of certainty. “One of the primary reasons for people not accepting that climate change is occurring,” says Allmon, “is that they don’t fundamentally understand how science works.” According to the book, people are reluctant to accept climate change because the science is not “certain.” Since the scientific data relating to climate change may be flawed to a certain extent, people are less willing to make dramatic changes to their everyday lives.
According to Allmon, however, “life is a cost-benefit analysis.” The costs of climate change research being wrong are gigantic, but the cost of going about business as usual and not taking any action if climate change research is correct, he asserts, could be catastrophic.
Despite the depressing nature of the topic, Allmon states that the possibility for change does indeed exist. “There is no possibility of stopping global warming,” he says, “but reversing it is a possibility eventually. The only way out of this is education, and right now education is pretty slow.”
Climate Change -- Past, Present & Future: A Very Short Guide can be purchased for $15 at the Museum of the Earth’s gift shop or online by clicking here: Climate Change.
--Megan Davis, Ithaca College | <urn:uuid:ce722ed8-a2f7-4cdd-801f-98c68e9752e5> | 3.375 | 1,244 | Personal Blog | Science & Tech. | 40.645448 |
Joined: 16 Mar 2004
Posted: Fri May 08, 2009 9:45 am Post subject: Enzyme Coat Kills Germs, Digests Stains
A way to attach a coating of 'live' enzymes onto plastic and other materials could lead to clothes that digest stains as soon as they occur, or kitchen surfaces able to kill bacteria.
US researchers have shown they can make plastic films containing active enzymes like those in biological clothes detergents. The process used is based on one typically used to produce thin, flat plastic products such as CDs, DVDs and flat-screen displays.
Known as "spin coating", it involves placing a large dollop of a liquid onto a flat surface which is then rotated at great speed. This generates powerful centrifugal forces that push the solution towards the surface edges and cause some liquid to evaporate, leaving behind a thin, solid film over the entire surface.
The thickness of the film depends on the properties of the original solution, such as its viscosity, and the spinning speed.
In a spin
Using 10-cm plastic discs as their flat surface, the team led by Ping Wang at the University of Minnesota, St. Paul, US, used spin coating to layer four films on top of each other.
First came a thin film of polystyrene modified to chemically bind to enzymes. Then Wang and his team covered this with a solution containing a protein-digesting enzyme known as subtilisin Carlsberg, commonly used in biological washing powders to remove stains.
The enzymes in the solution naturally bound to the chemical groups displayed on the polystyrene film.
Protein-digesting enzyme subtilisin Carlsberg forms layers in the coating
The team next added a layer of a chemical called glutaraldehyde that creates links between enzymes to ensure they are firmly attached to the plastic. Finally, another layer of subtilisin Carlsberg topped off the film.
Tests showed that nothing short of burning or harsh chemical treatments could now dislodge the enzymes, Wang says. "The bonding between the enzyme and the polymer coating is as strong as the chemical bonds that are responsible for the integrity of plastics."
Despite this strong bond, the attached enzymes still retained much of their activity, able to digest the protein albumin when it was sloshed on in solution, or deposited on the film using spin coating.
Building such a film into fabric could allow it to start digesting stains as soon as they occur. The method could also provide an alternative to using silver nanoparticles to make fabrics anti-bacterial; the nanoparticles were recently found to wash out easily, potentially causing environmental damage. The enzymes could take on bacteria by attacking proteins on the outside of the cells.
"Our preliminary results showed that enzymes can be spin-coated onto any pre-prepared plastic structures and, beyond that, probably inorganic structures such as metals and ceramics," Wang says.
The underlying chemistry is flexible enough to spin coat directly onto fabric, alternatively a plastic-enzyme film could be made first and later incorporated into material.
Wang says such enzyme-coated materials could have a wide range of uses, including self-healing materials or protective suits able to digest chemical or biological hazards. One surface could be given multiple functions by simply coating it with a variety of enzymes.
Suwan Jayasinghe, a biophysicist at University College London, UK, agrees that there are a wide range of potential uses for such enzyme-coated inorganic materials.
"Functional structures such as these are going mainstream, and are continuously elucidating their great promise to the biomedical sciences," he says. | <urn:uuid:1e254efc-ab4d-4c34-b3d7-b391b86a7301> | 2.984375 | 749 | Comment Section | Science & Tech. | 33.686625 |
The atmosphere that protects us from gamma rays prevents us from directly observing them from the ground.
A gamma ray is a photon more energetic than an X-ray (more than about 50 keV). Gamma rays are created from nuclear reactions or particle accelerations. Gamma rays are the most energetic photons of the electromagnetic spectrum.
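As a rough illustration (my own addition, not part of this glossary), the 50 keV threshold can be converted to a frequency and wavelength using E = hf and lambda = c/f:

    # Convert a 50 keV photon energy to frequency and wavelength.
    PLANCK_EV_S = 4.135667696e-15   # Planck constant in eV*s
    C_M_S = 2.99792458e8            # speed of light in m/s

    energy_ev = 50e3                     # 50 keV threshold
    freq_hz = energy_ev / PLANCK_EV_S    # about 1.2e19 Hz
    wavelength_m = C_M_S / freq_hz       # about 2.5e-11 m (0.025 nm)
    print(f"{freq_hz:.2e} Hz, {wavelength_m:.2e} m")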
The GLAST Burst Monitor (GBM) is the instrument on Fermi that is specifically designed to detect gamma-ray bursts.
Gravity is the attractive force of an object with mass on another object. The gravitational force between two objects depends on their masses and the distance between them. | <urn:uuid:22acc8c9-1b21-4e18-a593-a22cd5db46b2> | 3.984375 | 126 | Knowledge Article | Science & Tech. | 46.897941 |
Titin: derived from the Greek titan (a giant deity, anything of great size). This is a fitting name considering its size. Because it is the largest known protein it also has the longest IUPAC name, which starts with “methiony...” and ends with “...isoleucine” and contains about 1.5 thousand letters in between.
This makes it the longest word in not only the English language but also in any other language on Earth. Titin is very important in the contraction of muscle tissue.
Only a few lexicographers accept chemical names as words, even though such names can be written out in full. As a result you won’t find the word in most dictionaries. But hey, Google doesn’t seem to mind it, and so it’s still considered by many to be the longest word. | <urn:uuid:98683c9c-039d-4884-8446-0f2a6f12cb80> | 3.015625 | 186 | Knowledge Article | Science & Tech. | 67.268316 |
Carbon nanotubes are one of the leading-edge ingredients of the nanotech revolution. The tiny, cylindrical tubes made of a lattice of carbon atoms have potential as strengthening fillers in products such as tennis racket handles and brake pads. At a higher level, the tubes are being considered for use in electronic devices such as display screens or as ultra-small wiring in computer chips. Writing in the journal Nature Nanotechnology, however, a group of researchers waves a cautionary flag.
The team found that when injected into the body cavities of mice, long carbon nanotubes could behave in a way similar to the way asbestos fibers behave, forming lesions that, in the case of asbestos, lead to cancer. Shorter nanotubes did not seem to have the potentially dangerous effect, and it's not known whether the tubes can make their way to the mesothelium if inhaled. "This is of considerable importance, because research and business communities continue to invest heavily in carbon nanotubes for a wide range of products under the assumption that they are no more hazardous than graphite," wrote the scientists. In this segment, we'll talk about what is known about the potential health effects of these and other nanomaterials, and what steps should be taken now to prevent future asbestos-like environmental and health problems.
Produced by Charles Bergquist, Director and Contributing Producer | <urn:uuid:3ce47252-5b50-449f-9dd9-bcb0b244df48> | 3.84375 | 284 | Truncated | Science & Tech. | 23.741864 |
Excerpt from the book Darwin’s Dangerous Idea, by Daniel Dennett.
Danny Hillis, the creator of the Connection Machine, once told me a story about some computer scientists who designed an electronic component for a military application (I think it was part of a guidance system in airplanes). Their prototype had two circuit boards, and the top one kept sagging, so, casting about for a quick fix, they spotted a brass doorknob in the lab which had just the right thickness. They took it off its door and jammed it into place between the two circuit boards on the prototype. Sometime later, one of these engineers was called in to look at a problem the military was having with the actual manufactured systems, and found to his amazement that between the circuit boards in each unit was a very precisely milled brass duplicate of the original doorknob.
It’s impossible to keep sleeping!
Works with compressed air and a computer.
The Brown iGEM Team shows off the Nanodrop Spectrophotometer and compares it to regular spectrophotometers.
Can prevent Gray Goo
Never release nanobot assemblers without replication-limiting code. | <urn:uuid:90e7a2e7-b32c-4bcb-b5dc-2a07ca61f341> | 2.9375 | 236 | Content Listing | Science & Tech. | 39.413237 |
Scripts and Forms
WAI Definition (Checkpoints 6.4, 8.1, 9.2, 9.3)
Ensure that user interfaces are device-independent and use logical device-independent event handlers.
Ensure that scripts and applets are accessible.
Ensure that it is possible to logically interact with the site using keyboard and/or mouse input/output methods. Scripts and applets are types of programmatic objects that should either be inherently accessible or should include features that are usable with assistive technologies.
Interfaces which do not provide flexibility in the type of device which the user relies on to input information are inherently inaccessible. For example, a laptop user may choose to work without a mouse. If this were the case and interactive features on the site relied on "drag and drop" interactivity as the only means of interaction, the site would be unusable. Additionally, if the site were delivered through a kiosk or public access terminal with a touch screen interface, it would be unusable.
Screen-reader users, on the other hand, rely entirely on the keyboard for interacting with websites. Failing to support the keyboard as an input device will make the site unusable to them. Accessibility features which are provided in programming technologies and supported by assistive technologies should be included in programmatic elements.
A. If you must use device-dependent attributes, provide redundant input mechanisms
- "onfocus" (the event occurs when an object gets focus): this will highlight the form field when the event is tabbed into or clicked on.
- "onblur" (the opposite of "onfocus" where the event occurs when an object loses focus)
- "onselect" (the event occurs when text is selected in a “text” or “textarea” field)
Note that the above attributes are designed to be device-independent, but are implemented as keyboard-specific events in some browsers. Otherwise, if you must use device-dependent attributes, provide redundant input mechanisms (i.e., specify two handlers for the same element, as in the sketch after this list):
- "onmousedown" (the event occurs when a mouse button is clicked) along with "onkeydown" (the event occurs when a keyboard key is pressed): the event will be triggered when the user tabs or clicks on a specific field
- "onmouseup" ( the event occurs when a mouse button is released) and "onkeyup" (the event occurs when a keyboard key is released)
- "onclick" (the event occurs when an object gets clicked) along with "onkeypress" (the event occurs when a keyboard key is pressed or held down)
- On links and form controls only, the “onclick” event can be used without the redundant “onkeypress” event.
- There is no keyboard equivalent to double-clicking ("ondblclick").
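As a minimal sketch (this markup is my own illustration, not taken from the WAI checkpoints), a form field can combine the device-independent "onfocus"/"onblur" pair with redundant mouse handlers so keyboard-only and mouse-only users get the same feedback:

    <label for="email">Email address:</label>
    <input type="text" id="email" name="email"
           onfocus="this.style.backgroundColor='#ffffcc';"
           onblur="this.style.backgroundColor='';"
           onmouseover="this.style.backgroundColor='#ffffcc';"
           onmouseout="this.style.backgroundColor='';" />

A user who tabs into the field and a user who points at it with the mouse both see the highlight, so no single input device is required.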
B. Do not write event handlers that rely on mouse coordinates
This is bad practice because it relies on the use of a mouse and a highly visual style of interaction to invoke an event handler. Event handlers that rely on mouse coordinates don't support keyboard input.
C. If you use programmatic elements such as Java applets and Flash
Consult the accessibility guidelines from the application vendor.
<p><a href="index.html">Go back to previous page</a></p>
Test programmatic elements with keyboard-only and mouse-only navigation: try to operate each one using only the keyboard, and then using only the mouse.
All functionality should be reachable and properly controllable. | <urn:uuid:2035f965-ed4d-4653-8e5a-e8d3c164036c> | 3.34375 | 745 | Tutorial | Software Dev. | 35.272999 |
Mathematical model of the groundwater system in this area includes 13 types of data and spans multiple aquifers over more than a century. This enables us to assess the quantity of groundwater, where and how it is being used, and how pumping affects it.
Links to information on species of frogs, toads, and salamanders located in the southeastern United States and the U.S. Virgin Islands, with information on appearance, habitats, calls, and status, plus photos, glossary, and provisional data.
We estimated a mean undiscovered natural gas resource of 84,198 billion cubic feet and a mean undiscovered natural gas liquids resource of 3,379 million barrels using a geology-based assessment methodology.
Locations for nine species of large constrictors, from published sources, along with monthly precipitation and average monthly temperature for those locations. Shapefiles for each snake species studied.
Consistent, historic, and up-to-date ground-water data, such as water levels collected at wells and springs, are available from the USGS National Water Information System as graphs, tables, or files to download. | <urn:uuid:8ee37feb-f8e7-47f3-8e65-0e361aa3e93b> | 3.65625 | 229 | Knowledge Article | Science & Tech. | 29.680632 |
Image caption: Tropical Depression 12 became Hurricane Katrina in Aug. 2005
The Weather Channel's Hurricane Week continues with "Hurricane Force", an all-new documentary offering the definitive look at one of the most powerful forces of nature.
Cooks in the kitchen follow recipes with specific ingredients and steps to achieve culinary success. Leave out an important ingredient or miss a key step, and the meal won't be great.

The atmosphere has a lot of variety on its menu. Cooking up a hurricane is the result of following a particular recipe with the following ingredients:

- Warm sea-surface temperatures, usually greater than 80°F
- Weak vertical wind shear (small changes in wind speed & direction with height) in the troposphere
- A moist and unstable atmosphere
- Some distance away from the equator (so the earth's rotation can help add some spin to the atmosphere)
Does that sound about right? Did we forget anything?
Go back into the kitchen for a moment. If you have all the right ingredients, mix them all together in just the right amounts, and just let them sit there, the meal won't just cook itself. You have to ignite the burner to start the cooking.

What does this have to do with the atmosphere making hurricanes? Think of this another way.

If you want to leave town fast, what do you do? Choose an interstate with no traffic, make sure there's gas in the car, then drive, right? Wait. There's an important thing in there you do, almost without thinking about it. You have to start the car.

The atmosphere is the same way. Even with all the hurricane-making ingredients on the list above, there will be no hurricane if the burner isn't ignited - or if the car isn't started - to begin with.

Hurricane formation requires an initiating weather disturbance, something with a little spin to spark the beginnings of development. Hurricanes don't just magically appear over the fuel of warm waters where there's no impediment from strong vertical wind shear.

So let's take a look at some of the trouble-making weather disturbances that get the spin going, starting with the most prolific of all Atlantic hurricane-makers: African easterly waves, or just tropical waves for short...

NEXT > Instigator #1: African easterly waves
| <urn:uuid:a6156cf4-0eb1-47af-b82a-ef053bfd94c9> | 2.84375 | 498 | Personal Blog | Science & Tech. | 54.8825 |
Last time, I discussed how El Niño events make normally dry places in the tropics rainy and normally rainy places dry. This redistribution of rainfall generates waves in the atmosphere that disrupt normal weather patterns around the world. El Niño is not the only process that redistributes rain in the tropics. Waves in the tropical atmosphere propagate both eastward and westward, and portions of these waves enhance rainfall while other portions reduce it. These different waves going in different directions at the same time make the weather of the tropics extremely difficult to predict. One process known as the Madden-Julian Oscillation (MJO) influences rainfall in the tropics almost as much as El Niño. As with many other weather and climate processes, the MJO is named after the scientists who first discovered it.
The MJO organizes clusters of thunderstorms more than six thousand miles across over the tropical oceans. These clusters move eastward at around 11-15 miles per hour (roughly the same as a 4 minute mile, but that’s a slow crawl from a weather perspective). These clusters of thunderstorms alternate with regions of reduced rainfall of similar size. Given the size of the clusters and their speed, it often takes 40 to 50 days from the beginning of a rainy period for it to go through the full cycle from rainy to dry to rainy again. As the clusters of thunderstorms move, global wind patterns change.
Just like with El Niño, the MJO convection forces trains of waves to propagate across the global atmosphere where they influence the weather and increase or decrease the likelihood of extreme weather events such as cold air outbreaks, heat waves, and floods. Recent research suggests that when MJO convection is located over the Indian Ocean, the threat of hurricanes over the Atlantic Ocean increases during the late summer and fall, and violent tornado outbreaks across the eastern United States are more likely in the spring (in fact, twice as likely as when the MJO convection is located somewhere else). The MJO influences bursts of westerly wind near the equator over the western Pacific Ocean, so it probably also influences the development of El Niño.
MJO impacts on the global weather have led some investment firms to use MJO signals to predict the chances of cold or warm periods across the eastern United States and other parts of the world. They invest in natural gas or oil futures when they anticipate abnormally cold conditions during the winter or hot conditions during the summer. Agricultural markets also change with the MJO. The price of orange juice fluctuates with winter temperatures in Florida, which vary with the MJO. Although temperature forecasts based on signals of the MJO occasionally turn out to be wrong, over time they show skill and yield net income for investors.
Just like El Niño, the MJO has substantial influence on the US and global economies. This connection to economics increases interest in understanding how it works. However, just like El Niño events, no two MJO events are exactly alike, and the associated weather patterns do not always evolve in the way that history suggests is most likely. To further complicate matters, scientists do not yet agree about exactly what causes the MJO in the first place. Most of the computer models that forecasters use to predict the weather do not provide any useful skill in predicting the MJO beyond about 2 weeks. In spite of the things we do not know, forecasters and investors do in fact benefit from knowing the present state of the MJO and how history suggests that it is most likely to evolve over the following days and weeks.
Academic and economic interest has led the United States government to invest tens of millions of dollars in MJO research. Although this research has not yielded all the answers, utilization of results of this research by the private sector suggests that this money has been well spent. This research has allowed scientists to develop many different ways to track and predict the MJO. None of these approaches are perfect, but all can yield useful information.
The Internet has substantial information about the MJO and these tracking approaches. One of the more useful websites is http://www.cpc.ncep.noaa.gov/products/precip/CWlink/MJO/mjo.shtml, posted by the Climate Prediction Center (CPC) of the National Oceanic and Atmospheric Administration of the US Department of Commerce. The site includes a forecast for the next 2 weeks of MJO activity, provided each week by the CPC MJO working group.
One reason we understand the MJO so poorly is that we have little observational data to work with over the Indian Ocean where MJO convection most frequently amplifies. The benefits of increasing our observation of that region might be enormous. This fall and winter, various agencies of the United States and several other nations are participating in intensive observations and measurements in and over the Indian Ocean using ships, aircraft, radars, and weather balloons. The United States’ portion of the observation network is named DYNAMO, and you can read about its progress at
We have much to gain from such observational experiments, but we also need better long-term data collection from the Indian basin. Hopefully the improved observational work there will not end with DYNAMO. | <urn:uuid:ec9436a1-a691-470c-9ea6-47fc4399749c> | 3.953125 | 1,062 | Knowledge Article | Science & Tech. | 40.441241 |
The science world is abuzz with news of a strange new life form found in California’s Mono Lake: Researchers report that they’ve discovered a bacterium that can not only thrive in an arsenic-rich environment, it can actually use that arsenic to build its DNA. If the researchers, who published their findings in Science, are correct, then they’ve found a form of life unlike anything we’ve ever seen before.
As you might expect, DISCOVER’s blogs offered plenty of coverage of this exciting news.
At The Loom, Carl Zimmer writes: “Scientists have found a form of life that they claim bends the rules for life as we know it. But they didn’t need to go to another planet to find it. They just had to go to California.”
At Bad Astronomy, Phil Plait explains exactly how the bacteria can make use of arsenic to build their DNA. A few days ago, Phil also took NASA to task for its press release promising news of “an astrobiology finding that will impact the search for evidence of extraterrestrial life,” which fueled wild speculation on whether NASA had found little green men in the solar system.
At Not Exactly Rocket Science, Ed Yong debunks a few of the more breathless accounts. The bacteria do not “belong to a second branch of life on Earth…. They aren’t a parallel branch of life; they’re very much part of the same tree that the rest of us belong to. That doesn’t, however, make them any less extraordinary.”
80beats: Life Found in the Deepest, Unexplored Layer of the Earth’s Crust
80beats: Do Asphalt-Loving Microbes Point the Way to Life on Titan?
80beats: Arsenic-Eating Bacteria May Resemble Early Life on Primordial Earth
DISCOVER: Renewed Hope for Life on the Red Planet | <urn:uuid:7ac0b2b1-65f0-4cf3-9128-ee7175bc5705> | 3.21875 | 416 | Content Listing | Science & Tech. | 53.416721 |
As chance would have it, the night after writing this post about the equations shown in science fiction, an episode of Eureka aired in which Sheriff Carter was faced with the pictured board full of equations.
Carter, not the most technical of men, had to learn the equations in order to have chance at stopping a runaway time-loop. The equations looked familiar, so I checked in with Kevin Grazier, Eureka‘s science advisor, a JPL researcher, and a panelist on DISCOVER’s “Science Behind Science Fiction” Panel at this year’s Comic-Con. It turns out that Kevin actually wrote the equations, borrowed from a real class he gives that touches on the theories of special and general relativity. The equations refer to how time behaves in Einstein’s relativity theory, in particular, the phenomenon of time dilation. The neat part is that pretty much anybody who finished high school can master the math and science behind special relativity’s prediction of time dilation (as the title of this post says, if Carter can do it, so can you!).
Time dilation occurs noticeably when a object is moving close to the speed of light: imagine a spacecraft shooting by the Earth. From the point of view of someone standing on Earth, time dilation means that time is running slowly onboard the spacecraft. A second on the spaceship could be equal to an hour on Earth. (Time dilation has been experimentally verified using subatomic particles and particle accelerators, but the principle is the same.) The key is this one part of the board, which I’ve highlighted.
See that bit? The triangle is the Greek letter Delta, which is used to mean “change in” in a lot of scientific equations; t’ (pronounced t-prime) represents time on board the spaceship. Delta-t-prime, then, is the time measured to pass on board the spaceship. Delta-t (without the prime) is the time measured to pass on Earth.
The factor used to convert between Earth time and spaceship time is the complicated-looking fraction with the square root symbol underneath the delta-t-prime. This is the time-dilation factor, and it’s the core of special relativity. The only variable in this factor is v, the velocity of the spaceship. The other symbol, c, stands for the speed of light in a vacuum, which is a universal constant. Using this factor, you can work out for yourself just how fast a spacecraft has to be traveling so that one second of ship time equals one hour of Earth time (it works out to 99.999996 per cent of the speed of light).
Working out the time dilation factor from special relativity’s first principle, which was Einstein’s assumption that the laws of the universe do not change because you are moving relative to some object, requires only a little physics (if you understand that distance equals rate times time, you’re there), and some high school algebra. A little more work gets you to one of the biggest equations in science: E=mc2. There are tutorials which will step you through the process: I recommend this one, and this one. (General relativity, which deals with accelerating objects in addition to those moving with a constant velocity as in Special Relativity, is a whole other ball of wax, and requires some serious math, alas) I really recommend you try working through the time dilation derivation: at the end you’ll have grasped for yourself one of the most elegant and important elements of modern science and really understood it in the way that scientists do, instead of just through the kinds of wordy explanations that journalists like me fall back on when discussing relativity.
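If you would rather let a computer do the arithmetic, here is a small sketch (mine, not from the episode or from Grazier's class notes) of the time-dilation factor:

    # delta_t_ship = delta_t_earth * sqrt(1 - (v/c)^2)
    import math

    def dilation_factor(v_over_c):
        """Factor by which a moving clock runs slow."""
        return math.sqrt(1.0 - v_over_c**2)

    # Speed needed so one second aboard ship equals one hour on Earth:
    # we need sqrt(1 - (v/c)^2) = 1/3600, so v/c = sqrt(1 - 1/3600^2).
    v_over_c = math.sqrt(1.0 - (1.0 / 3600.0)**2)
    print(f"v = {100 * v_over_c:.6f}% of c")  # 99.999996% of c, as in the text
    print(f"ship clock during one Earth hour: {3600 * dilation_factor(v_over_c):.3f} s")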
Links to this Post
- Planet-x.com.au » Eureka and Special Relativity: If Carter Can Do It, So Can You! | August 26, 2008
- Greg Egan’s Incandescence: Upping the Relativistic Ante | Science Not Fiction | Discover Magazine | September 3, 2008
- Stargate Atlantis: Colonizing The Galaxy | Science Not Fiction | Discover Magazine | November 17, 2008 | <urn:uuid:3626ac04-979c-4dcd-8bee-27f483915413> | 3.6875 | 871 | Personal Blog | Science & Tech. | 47.424139 |
Posts Tagged: washboarding
Ever seen honey bees engaging in washboarding?
It's a behavior so named because they look as if they're scrubbing clothes on a washboard or scrubbing their home.
It occurs near the entrance of the hive and only with worker bees. They go back and forth, back and forth, a kind of rocking movement. No one knows why they do it. It's one of those unexplained behaviors they've probably been doing for millions of years.
Bee breeder-geneticist Susan Cobey of the University of California, Davis and Washington State University, has witnessed washboarding scores of times. Last week the unusual behavior occurred on two of her hives at the Harry H. Laidlaw Jr. Honey Bee Research Facility at UC Davis. She hypothesizes that these bees are in the "unemployment line." It's a time when foraging isn't so good, so these bees are "sweeping the porch" for something to do, she speculates.
Emeritus professor Norman Gary of UC Davis Department of Entomology writes about it in his chapter, Activities and Behavior of Honey Bees, in the Dadant publication The Hive and the Honey Bee.
"They stand on the second and third pairs of legs and face the entrance. Their heads are bent down and the front legs are also bent," wrote Gary, who has kept bees for more than six decades. "They make 'rocking' or 'washboard' movements, thrusting their bodies forward and backward. At the same time they scrape the surface of the hive with their mandibles with a rapid shearing movement, sliding over the surface as if cleaning it."
They pick up some material and then clean their mandibles.
Gary thinks that "these rocking movements probably serve as a cleaning process by which the bees scrape and polish the surface of the hive."
Like most people, professor/biologist/bee researcher James Nieh of UC San Diego has never seen this behavior. Nieh, who recently presented at seminar at UC Davis, later commented "It is an interesting behavior that would be particularly fascinating to observe in natural colonies in trees. It does seem to involve some cleaning behavior, although it is possible that bees are depositing some olfactory compound while they are rubbing the surface with their mandibles. We are currently conducting research in my lab on the effects of bee mandibular gland secretions on foraging orientation behavior. A new set of experiments will involve examining the effect of mandibular gland secretions on bee behaviors at the nest. I will definitely consider looking at how this potential pheromone affects washboarding."
We managed to capture the behavior with our iPhone and posted it on YouTube.
It's interesting that of the some 25 research hives at the Laidlaw facility, occupants of two of Cobey's hives exhibited washboarding last week.
So, what are washboarding bees doing? Cleaning their home where pathogenic organisms might congregate, per a theory by Katie Bohrer and Jeffrey Pettis of the USDA-ARS Bee Research Lab?
Or are they just creating "busy work"--"sweeping the porch" for something to do?
It would be interesting to find out!
Honey bees engaging in washboarding behavior with "rocking" or up-and-down movements. (Photo by Kathy Keatley Garvey)
Foragers flying back to the hive as their sisters engage in washboarding activity on the wall, or what Susan Cobey calls "sweeping the front porch." (Photo by Kathy Keatley Garvey) | <urn:uuid:a7d754c7-3950-4cc0-96ab-c1ac14f48421> | 2.75 | 741 | Personal Blog | Science & Tech. | 46.759979 |
1) The Inner Core - The innermost section of the Earth is called the inner core; it is basically the round center of our planet. This gigantic sphere is over 2700Km in diameter and is composed of iron and nickel. Despite having a temperature between 4000 and 6000 degrees Celsius, this giant mass remains solid, due mainly to the immense pressure bearing down on it at the planet's center.
2) The Outer Core - The outer core is the layer that immediately surrounds the inner core. This layer is also composed of iron and nickel but, unlike the inner core, it is in a molten state. These liquid walls are over 2300Km thick and are constantly rotating and flowing around the inner core. The temperature here is slightly lower than in the inner core, at about 4000 degrees Celsius.
3) The Mantle - The next layer outward is the mantle. This layer is the largest of all the Earth's structural zones, being around 2900Km thick, and makes up about 70% of the Earth's volume. It is mainly thought of as a hard shell of rock that encloses the Earth's cores and lies just beneath the Earth's crust. Despite being classified as one zone, the mantle contains two sub-zones of its own, which are significantly different from one another.
3A) The Asthenosphere - This is the uppermost section of the mantle. It covers the top 250Km of the mantle, which borders the crust. Temperatures here can be as low as 100 degrees Celsius, but they increase as you move further into the mantle towards the cores. What is really unique about this layer is its physical and structural properties: it is neither solid nor liquid, but behaves like a plastic.
3B) The Lower Region - Oddly enough, the lower region of the mantle doesn't have a name and is just referred to by its position. It consists of the other, lower 2650Km of the mantle, and its properties are basically the general mantle properties listed above. Temperatures here range from 4000 degrees Celsius near the cores to slightly above 100 degrees near the asthenosphere.
4) The Mohorovicic Discontinuity - This layer is often called the "Moho" and forms the boundary between the asthenosphere and the Earth's crust (lithosphere). It lies between about 5Km beneath ocean ridges and approximately 75Km beneath continental crust. It is classified mainly due to its unique set of properties: here scientists have measured rapid increases in the velocity of earthquake waves.
5) Lithosphere - The lithosphere is the outer shell of the Earth, also known as the 'crust'. It is solid, rigid, and composed mainly of rock and similar materials and minerals. The lithosphere includes the ground we walk on - the surface of the Earth - down about 150Km (continental) or 70Km (oceanic) to the Mohorovicic discontinuity. The lithosphere 'floats' on the asthenosphere and, due to its shell-like nature, is generally brittle and has broken into several large fragments. These fragments are also known as 'tectonic plates', which float and push slowly across the Earth.
This article was written by Ryan Woodford | <urn:uuid:01ae396d-dbed-4acc-8e95-4304d73f1869> | 4.25 | 663 | Knowledge Article | Science & Tech. | 51.621023 |
Monday, 22 December 2003, 8:41pm MST (2003-12-23 0341 UTC)
Small objectsIn the period from yesterday back to December 13th, when A/CC Major News last summarized observations of the smallest near-Earth asteroids [fn], ten of these objects were tracked, including four new discoveries. 2003 XZ12, YN1, and YR1 were found by LINEAR, and 2003 YW1 by Spacewatch with its 0.9m telescope. (2003 YR1 was announced at H=21.2, now H=22.3.) Confirmation was provided for one or more of these objects by Great Shefford, Ondrejov, Powell, Sabino Canyon, and Tenagra II observatories, the Observatorio Astronomico de Mallorca (OAM), KLENOT, and the Spacewatch 1.8m telescope. Further follow-up for 2003 XZ12 came from Great Shefford and Powell observatories and Klenot on the 17th and/or 18th. Powell and Tenagra II followed 2003 YR1 on the 19th and Great Shefford on the 20th. 2003 YN1 hasn't been reported since its December 19th discovery MPEC, but 2003 YW1 was caught by LINEAR this morning.
A population of small near-Earth asteroids (NEAs) passes by too dimly and too quickly to be easily discovered or followed long enough to calculate an orbit allowing good prediction of future returns. To come sufficiently close to be seen at all, such objects likely have an orbit that would be considered potentially hazardous, except they are too small for that categorization. The official hazard dividing line is absolute magnitude (brightness) H>22.0, which translates to less than a median width of about 135 meters/yards by standard conversion formula, but possibly as large as 240 meters. Higher H means smaller size, down to as small as H=30.1, perhaps three meters wide.
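The "standard conversion formula" alluded to above is presumably the usual relation between absolute magnitude, albedo, and diameter, D_km = (1329 / sqrt(albedo)) * 10^(-H/5); a short sketch (my own, with assumed albedo values) reproduces the quoted sizes:

    # Approximate asteroid diameter in meters from absolute magnitude H.
    import math

    def diameter_m(h, albedo):
        return 1000.0 * (1329.0 / math.sqrt(albedo)) * 10.0 ** (-h / 5.0)

    # Assumed albedos: 0.14 is a common middle value, 0.05 a dark surface.
    print(f"{diameter_m(22.0, 0.14):.0f} m")  # ~141 m, near the 135 m median
    print(f"{diameter_m(22.0, 0.05):.0f} m")  # ~237 m, near the 240 m upper size
    print(f"{diameter_m(30.1, 0.14):.1f} m")  # ~3.4 m, the 'three meters wide' case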
Animation from EasySky screen shots showing the known smallest (H>22.0) asteroids in Earth's neighborhood during 14-21 December. Those reported observed last week are circled and identified. The ecliptic grid is set at 15 lunar distances.
The Spacewatch 1.8m telescope provided follow-up for small objects 2003 SN215 and WP25 on December 18th, and on the 14th caught 2003 WH166, which was also reported by Desert Moon Observatory on the 13th and KLENOT on the 18th. Linz Observatory observed 2003 WJ98 on the 18th, while Jornada Observatory got 2003 WW26 on the 14th. And 2003 WY153 had its last impact solutions eliminated after observations from Great Shefford on the 16th and 17th and from Powell on the 17th.
There was also archive work reported with newly located or remeasured positions for small NEAs 2001 QF96, 2002 PX39, and 2003 UW5.
The Monday Daily Orbit Update MPEC (DOU) carries observations of 2003 YT1 from San Marcello Pistoiese Observatory Saturday night in Italy and Jornada Observatory early yesterday in New Mexico. Correction: It was originally reported here that NEODyS, which updated its risk assessment for YT1 yesterday ahead of the next DOU has done so again today, raising its overall risk rating and still having only one solution, but a solution that is less than eight years away. In fact, NEODyS was not working ahead of the DOUs in this case. It and JPL had both issued new YT1 risk assessments based on San Marcello Pistoiese's observations received before the Monday DOU. When that DOU appeared with those and Jornada's new observations, NEODyS updated, but the day passed without JPL issuing a new assessment incorporating the Jornada data.
The European Spaceguard Central Node today posted a 2003 YT1 observing campaign (dated December 20th):
[The] object will remain visible in the evening sky at small solar elongations, further observations are required as soon as possible, certainly before any conflict with moon light in a week from now. Since there are large margins for orbital improvement, all the collision solutions should be eliminated fairly soon; however, the relative large size of this object does impose a rapid reaction from observers.
No new observations of 2003 XM were reported in today's DOU.
Update at 0341 UTC: JPL has posted 2003 YD45.
Bookmarks: A/CC's Major News via frame or redirection –&– via alternate partial mirror site frame or redirection | <urn:uuid:70e637fc-0bae-4ef7-9afb-dad7102459f5> | 2.734375 | 1,118 | Content Listing | Science & Tech. | 53.591159 |
(Submitted September 30, 1997)
I am a 6th grader and I know a lot of the constellations by name and enjoy
looking at the stars
and sky at night. This past weekend while camping our Girl Scout Troop
went star gazing in an open field. It was really dark and really
I really like to look at stuff on the Internet about the Universe. I need
some help in understanding how I might find specific information on the
Sun, planets, or stars.
For example, I was wondering: "How hot is the sun?"; "What star might be
hotter than the sun?"; and "What star might be cooler than the sun?". Is
there someplace that I can find just facts such as temperature, size,
distance from earth of stars?
You are asking how to find answers to questions on astronomy, using the
Internet. I will tell you how I would find out these answers.
First of all, the Internet is not always the best place to learn things.
If you can go to a library and look through their books on the Sun and
stars, you will probably find a book which explains most of what you want
to know. If I wanted to answer the questions you asked, the easiest thing
for me to do would be to grab some books from my shelf and say:
"The temperature of the surface of the Sun is 5770 Kelvin which
is 5,500 Celsius (or 10,000 F). Stars which appear bluish are
hotter than the Sun, stars which appear reddish are cooler than
the Sun. For example: Rigel, Vega and Sirius are blue (hot),
Arcturus, Aldebaran, and Betelgeuse are red (cool)."
These numbers and examples come out of an astronomy textbook.
But sometimes you can't get to a good library, sometimes the Internet is
more practical or convenient. If I were looking for information on
starfish instead of stars, all the astronomy books in my office wouldn't be
much help. There is an encyclopedia in an office two floors down, and that
might be the fastest way to learn things. But if I wanted to know a
specific obscure question, such as 'are there were any poisonous
starfish?', the encyclopedia probably wouldn't help me. I'd either go to
the library, ask a biologist, or go to the net where, with enough digging,
I would find that the 'Crown of Thorns' is the only known venomous starfish.
It is easiest to find things with search engines if you know the exact
words or phrases commonly used. Searching for "list of stars" will be less
useful than searching for "star catalog", and searching for "star colors"
is less useful than "spectral classification". And you will not try to
find "Hertzsprung-Russell diagram" unless you already know what it is. If
you do not know the exact words that people use, a web directory (such as
Yahoo) will often make it easier to find things
than a search engine such as Alta-Vista.
There are many places to learn things on the Internet, if you can find
them. You found one: 'Imagine the
Universe'. We also have a site called
The Astrophysics Data System is a good place for astronomy:
It has star catalogs and other information. It also has a copy of the
'Handbook of Space Astronomy & Astrophysics' by Martin V. Zombeck
This is one of the (paper) books on my shelf that I often refer to. Most
of the pages in it are very hard for anybody but an astronomer to
understand. The book includes a list (a 'star catalog') of the brightest
stars in the sky at
and the following pages.
The catalog includes distances (labeled 'd' and given in units called
'parsecs', each of which is 3.26 light years) and a quantity called 'B-V',
which tells how blue a star is: numbers below 0.65 are bluer and hotter
than the Sun, numbers above 0.65 are redder and cooler.
If you look at a
Hertzsprung-Russell diagram you will often find them marked with both the spectral classification and
the temperature. If you look in the 'spec' column of the table, you will
see entries like 'A0 p'--the first letter means that the spectral class is
'A', and so the temperature is about 10,000 Kelvin.
As you can see, it is easiest to use the web to find out about astronomy if
you are already an astronomer. But I hope my reply has been helpful.
for Ask an Astrophysicist | <urn:uuid:d1705535-3300-4ed2-a656-58b7a5bc1056> | 3.453125 | 1,009 | Q&A Forum | Science & Tech. | 65.670239 |
SIGSUSPEND(2) - Linux Programmer's Manual

NAME
sigsuspend - wait for a signal

SYNOPSIS
#include <signal.h>

int sigsuspend(const sigset_t *mask);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

sigsuspend(): _POSIX_C_SOURCE >= 1 || _XOPEN_SOURCE || _POSIX_SOURCE
DESCRIPTION
sigsuspend() temporarily replaces the signal mask of the calling process with the mask given by mask and then suspends the process until delivery of a signal whose action is to invoke a signal handler or to terminate a process.
If the signal terminates the process, then sigsuspend() does not return. If the signal is caught, then sigsuspend() returns after the signal handler returns, and the signal mask is restored to the state before the call to sigsuspend().
RETURN VALUE
sigsuspend() always returns -1, normally with the error EINTR.

ERRORS
EFAULT - mask points to memory which is not a valid part of the process address space.
EINTR - The call was interrupted by a signal.
NOTES
Normally, sigsuspend() is used in conjunction with sigprocmask(2) in order to prevent delivery of a signal during the execution of a critical code section. The caller first blocks the signals with sigprocmask(2). When the critical code has completed, the caller then waits for the signals by calling sigsuspend() with the signal mask that was returned by sigprocmask(2) (in the oldset argument).
See sigsetops(3) for details on manipulating signal sets.
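EXAMPLE
The following minimal sketch (an illustration added here, not part of the original page; error checking is abbreviated) shows the sigprocmask(2)/sigsuspend() pattern described under NOTES, assuming SIGUSR1 is the signal of interest:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static volatile sig_atomic_t got_usr1 = 0;

static void
handler(int sig)
{
    (void) sig;                 /* unused */
    got_usr1 = 1;
}

int
main(void)
{
    sigset_t block, oldmask;
    struct sigaction sa;

    sa.sa_handler = handler;    /* install a handler for SIGUSR1 */
    sa.sa_flags = 0;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    /* Block SIGUSR1 before entering the critical section. */
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);
    if (sigprocmask(SIG_BLOCK, &block, &oldmask) == -1) {
        perror("sigprocmask");
        exit(EXIT_FAILURE);
    }

    /* ... critical section: SIGUSR1 delivery is deferred here ... */

    /* Atomically restore the old mask (unblocking SIGUSR1) and wait. */
    while (!got_usr1)
        sigsuspend(&oldmask);   /* always returns -1, with errno EINTR */

    /* Put the original signal mask back before continuing. */
    sigprocmask(SIG_SETMASK, &oldmask, NULL);
    return 0;
}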
SEE ALSO
kill(2), pause(2), sigaction(2), signal(2), sigprocmask(2), sigwaitinfo(2), sigsetops(3), sigwait(3), signal(7)
COLOPHON
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.
My colleague's daughter wanted to be a stop sign for Halloween, and he needed to cut out such a sign from a piece of cardboard that was 3 feet by 4 feet. What cuts did he make so that the stop sign (a regular octagon) was as big as possible?
Intuitively, we want to make the height as large as possible, leaving a little left-over material on the sides. (We could alternately think of it as making the sides as large as possible, leaving a little left over for the height; it doesn't matter.) Using this way of thinking, obviously, we're thinking of the cardboard as having the 3-foot side running vertically and the 4-foot side running horizontally.
When we see the octagon as triangles and rectangles, we know that the interior rectangle has the same height as the sides of the octagon, which we'll call s. Looking at the triangles, then, we want to find out the leg length of a right isosceles triangle with hypotenuse s. Whatever that leg length is, call it b; we want to double it, add it to s, and set that equal to 3. When we solve for b, we'll know the value of s.
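Working that out (a quick sketch of the algebra, with s and b as just defined): since the corner triangles are right isosceles triangles with hypotenuse s, each leg is $b = s/\sqrt{2}$, and therefore

$$2b + s = \sqrt{2}\,s + s = 3 \quad\Rightarrow\quad s = \frac{3}{1+\sqrt{2}} = 3\left(\sqrt{2}-1\right) \approx 1.24 \text{ feet}, \qquad b = \frac{s}{\sqrt{2}} \approx 0.88 \text{ feet}.$$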
Thus he started a cut on the left side (could be on the right; again, not really important how you view this), b units down from the top, at a 45-degree angle from the board (since the interior angle of an octagon is 135) until he cut that piece off. He did a symmetrical cut at the bottom. He also made a cut b units to the right of the upper-left corner (measured from the point before anything had been cut), straight down the cardboard. He then made cuts on the right that were symmetric to those on the left.
I'll see if I can make and post a .pdf showing the picture that I used to think about this. | <urn:uuid:018f65eb-cb59-4da9-b762-687e9e5ebd31> | 3.1875 | 390 | Q&A Forum | Science & Tech. | 69.320844 |
Drought is a normal, recurring feature of the climate in most parts of the world.
As a drought persists, the conditions surrounding it gradually worsen and its impact on the local population gradually increases. Droughts go through three stages before their ultimate cessation:
1. Meteorological drought is brought about when there is a prolonged period with less than average precipitation. Meteorological drought usually precedes the other kinds of drought.
2. Agricultural droughts are droughts that affect crop production or the ecology of the range. This condition can also arise independently from any change in precipitation levels when soil conditions and erosion triggered by poorly planned agricultural endeavors cause a shortfall in water available to the crops. However, in a traditional drought, it is caused by an extended period of below average precipitation.
3. Hydrological drought is brought about when the water reserves available in sources such as aquifers, lakes and reservoirs falls below the statistical average. Like an agricultural drought, this can be triggered by more than just a loss of rainfall. For instance, Kazakhstan was recently awarded a large amount of money by the World Bank to restore water that had been diverted to other nations from the Aral Sea under Soviet rule. Similar circumstances also place their largest lake, Balkhash, at risk of completely drying out.
Information That Caught our Interest
The state's water woes are going to get worse before they get better. That's the word from North Carolina Gov. Mike Easley Monday.
Easley addressed the North Carolina League of Municipalities Monday afternoon.
The governor says all outdoor watering should immediately cease.
"North Carolina is facing a potential emergency just as severe as any eye storm or hurricanes that we usually face," Easley said. "This is an emergency where we can lessen the impact of but only, if we take the action necessary today and work together - you and me and all of our citizens in the state."
The governor is not declaring a state of emergency yet - but he worries the drought may lead to a full-blown water crisis in 2008.
State under Burning Ban
The lack of rain has also triggered a new statewide burning ban.
The extreme drought has left brush tinder dry. One errant spark or smoldering ember could create disaster.
Brunswick County Fire Marshal Scott Garner says the drought makes fighting the fires trickier too.
"Anythime we get a fire we have to be careful about the communities aroind it, and knowing that some of the water supplies of some of the places that we would pull it from, ponds and lakes maybe down, that could be a significant source of hazard that we may not be able to put the fire out," Garner said.
Again, the new burn ban covers the entire state.
And so the ideas were formed | <urn:uuid:cda892e0-b644-483b-957f-dc106178546b> | 3.78125 | 573 | Knowledge Article | Science & Tech. | 42.600252 |
Shields up, Captain! There's an unidentified entity ahead! On second thoughts, no need to take evasive action -- this particular space blob is just the glowing remains of a sun-like star, 3,300 light-years away. ->
Alpha Centauri A has been seen to have a cool layer in its atmosphere, just above its surface -- the first time this has been observed in a star other than the sun. ->
A neutron star has been discovered orbiting its binary partner every 93 minutes -- a record.
A cool brown dwarf has been detected through the radio waves generated by its magnetic field -- could the method be used to detect exoplanets with magnetospheres?
If a large gas giant drifts too close to its star, it might have its thick atmosphere stripped, leaving its bare, rocky core.
Despite our best search strategies, are signals from E.T. manifested in anomalous flashes of radio energy from our galaxy that are missed, or dismissed as natural phenomena?
The Herschel Space Observatory has spotted a young star blasting a cavity out of a nebula. The resulting 'bubble' is sparking the birth of more stars, one of them with the potential to grow into a stellar 'Goliath'.
Stellar nurseries are chock-full of energetic phenomena, it's little wonder that we are always finding new and bizarre things in star-forming regions.
The neutron at low momentum is quantum mechanical during the scattering, and the 'bouncing off' proceeds by wave mechanics, not like a classical ball bouncing off a wall. The neutron gets reflected into all directions by the small region with a potential, and the amount of scattering is given by the Born approximation to lowest order in the momentum of the neutron:
$$ A(k-k') = -i\tilde V(k-k') $$
For scattering from incoming direction k to outgoing direction k'. You decorate this with phase-space factors to take into account the size of the neutron wave. If the neutron wave is large, nearly all the scattering is in the forward direction, and this is expressed by writing the S-matrix as;
$$ S = 1 + i A $$
Where the "1" gives a contribution to the scattering which is $\delta(k-k')$ for the case of single-particle scattering. If you smear this with the incoming wavepacket, you get the outgoing wavepacket, which is mostly the incoming wave, plus a spherical outgoing wave. The delta-function guarantees that as you approach a plane-wave limit, there will be no scattering, because the neutron wavefunction area will be much larger than the nuclear area.
This scattering, when the neutron wavefunction is larger wavelength than the nucleus (almost always in real life) leads to a spherical scattering which is roughly isotropic, which you add to the incoming wave to get the full outgoing wave. The imaginary part of the scattering in the forward direction only subtracts some weight from the wavefunction that keeps going forward, and unitarity guarantees that this imaginary part is equal to the scattering probability in all directions added together.
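Written as a formula (a sketch added here for clarity, in the standard normalization with scattering amplitude f and wavenumber k rather than the convention above), this statement of unitarity is the optical theorem:

$$ \sigma_{\mathrm{tot}} = \frac{4\pi}{k}\,\mathrm{Im}\,f(\theta = 0) $$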
This is covered in scattering theory in most quantum mechanics books, but the treatment in Gribov's Regge theory book "The Theory of Complex Angular Momentum" is most instructive to my mind. | <urn:uuid:65b5088b-0ded-47c5-b24f-f34ddf8a935f> | 2.71875 | 391 | Q&A Forum | Science & Tech. | 34.089706 |
The 1976 Viking 1 Lander seemed to find life on Mars. Scientists quickly dismissed the data, and seldom mentioned it in later years. But thirty-six years later an article appeared that claimed Viking was right—and it was roundly ignored... for about six weeks. Then the story first appeared in Discovery News April 12, and since then respected news outlets have carried it around the world. Small resurgences have happened before, but have barely broken the surface of 'real' news outlets. What made this one different?
View from Viking 1 on Mars - 1976
To begin with, the scientists who did this study weren't analyzing the data itself. The great knock originally had been that other experiments did not support the results of the radioactively labeled gases (Labeled Release) experiment, because they also showed perchlorates, for which no one could find a mechanism other than contamination. The scientists involved in this study did a multivariate analysis that focused on the number and complexities of reactions that occurred within the reaction chamber, as shown by the gases coming off.
The point of the exercise was that organic processes are far more ordered and complex than inorganic processes that produce the same compounds. The study focused on the rate of gas production, the number of gases and precursors throughout the time of their production, and the final proportions of the gases. The conclusion was that only organic processes could have produced the organic compounds in the ways and complexities found. Case closed. There's life on Mars... or not. Joseph Miller of USC's Keck School of Medicine said, 'On the basis of what we've done so far, I'd say I'm 99 percent sure there's life there.'
Well, that's pretty definitive. As Dr. Miller also said, "They should just send a microscope and watch the bacteria move around." Then again, others in the research group are less sanguine. For instance, Christopher McKay of NASA's Ames Research Centre said, in an interview with Discovery News, 'Finding organics is not evidence of life or evidence of past life. It's just evidence for organics.'
So what triggered all this in the first place? It seems that the original dismissal may have been a bit hasty. Perchlorates (salts comprising one chlorine atom, four oxygen atoms, and a third element) were found in the sample. Contamination from Earth was the judgement, meaning the entire experimental chamber was contaminated. Pff-f-f-ft... end of story, no Martian life... until 2008.
In 2008, the Phoenix Mars Lander found Perchlorates in Martian soil, and this time there was no question. That meant that the perchlorates found by Viking could have been Martian... as could the carbon-based organics have been. It was time for a re-look.
Not everyone accepts the results as meaning Mars has life, although the idea of organics seems OK with all of them. Critics insist that the method has not been proven to truly differentiate between biological and inorganic processes.
"Ideally to use a technique on data from Mars one would want to show that the technique has been well calibrated and well established on Earth. The need to do so is clear; on Mars we have no way to test the method, while on Earth we can," planetary scientist and astrobiologist Christopher McKay, with NASA's Ames Research Center in Moffett Field, Calif., told Discovery News.
The study report is found in the March issue of "International Journal of Aeronautical and Space Sciences," vol. 13, no. 1, pp.14-26, March, 2012. "The Korean Society for Aeronautical & Space Sciences" publishes the journal.
There are two problems with this investigation and published article. First, the journal itself is an issue—no, not that it's a Korean journal, but that it is not a mathematical/statistical journal. Would it not have been better to have the article reviewed for the mathematical/statistical value of its central investigative technique, against the results claimed? Second, this method almost appears to be a search for data supporting a hypothesis, exactly the opposite of the classic scientific method, in which the investigator attempts to falsify the hypothesis rather than confirm it.
Still, the study is incomplete. Part II is being undertaken as these words are typed. In this second phase, scientists will attempt to discover whether "...there are variations when sunlight was blocked by a weeks-long dust storm on Mars, with the idea being that biological systems would have acted differently to the environmental change than geologic ones. Results of the research are expected to be presented in August." And then the answer will be maybe there's Martian life after all... or not.
Photo Source: Wikipedia Commons | <urn:uuid:8cb4d467-0cd6-4281-9552-8defa7be258a> | 2.953125 | 963 | Knowledge Article | Science & Tech. | 54.800465 |
The Living Forests Model, developed with the International Institute for Applied Systems Analysis, allows us to explore the implications of various land-use scenarios. We used the model to look at the potential impact of the large increase in bioenergy required by ambitious targets to reduce greenhouse gas emissions.
What the model shows us
Deforestation: It should still be possible to achieve WWF’s goal of ZNDD by 2020 while increasing bioenergy production, assuming ambitious climate change mitigation goals are the driving force behind bioenergy expansion. If bioenergy producers need to avoid land-use changes that cause increased greenhouse gas emissions, bioenergy should not become a major cause of forest loss.
Forest management: To meet anticipated demand for wood, especially for bioenergy, the area of forest that is managed for timber production is projected to increase by over 300 million hectares between now and 2050. While this is preferable to deforestation, the impacts will largely depend on how closely the principles of sustainable forest management are followed.
Tree plantations: Fast-growing tree plantations will continue to increase, largely to meet the demand for bioenergy: around 250 million hectares of new tree plantations are likely to be added between now and 2050. By 2050, the projected expansion rate may be more than 10 million hectares per year.
Other natural ecosystems: As land competition becomes more acute, bioenergy will threaten other diverse natural ecosystems too, such as shrublands and grasslands. Growing demand for bioenergy could become the main driver behind their conversion.
Food consumption and security: Increased demand for bioenergy could drive up food prices and threaten food security. But it is possible to meet the world’s food, fibre and energy needs while protecting forests if we move toward a global diet in which people in richer countries reduce calories from animal protein while people in poor countries increase them, improve agricultural efficiency and reduce food waste. | <urn:uuid:37a54fe9-8635-417e-8fed-914747a20f8a> | 3.84375 | 377 | Knowledge Article | Science & Tech. | 21.883249 |
In geometry, an interior angle (or internal angle) is an angle formed by two sides of a polygon that share an endpoint. For a simple, convex or concave polygon, this angle will be an angle on the 'inner side' of the polygon. A polygon has exactly one internal angle per vertex.

If every internal angle of a simple, closed polygon is less than 180°, the polygon is called convex.

In contrast, an exterior angle (or external angle) is an angle formed by one side of a simple, closed polygon and a line extended from an adjacent side.
The sum of the internal angle and the external angle on the same vertex is 180°.
The sum of all the internal angles of a simple, closed polygon can be determined by 180(n − 2), where n is the number of sides. The formula can be proved using mathematical induction: start with a triangle, for which the angle sum is 180°, then add a vertex and two sides, and so on. A pentagon's internal angles add up to 540 degrees (shown below).
Knowing this, you can easily find the measure of each angle if the polygon is equiangular.

So, continuing from the above example with the pentagon: the exterior angle can be worked out as 360 divided by the number of sides in the equiangular polygon, and the interior angle can then be found by taking the value of the exterior angle away from 180. A pentagon has 5 sides, so the exterior angle is 360/5 = 72, and the interior angle is therefore 180 - 72 = 108 degrees.
The sum of the external angles of any simple closed (convex or concave) polygon is 360°.
The concept of 'interior angle' can be extended in a consistent way to crossed polygons such as star polygons by using the concept of 'directed angles'. In general, the interior angle sum in degrees of any closed polygon, including crossed (self-intersecting) ones, is then given by 180(n − 2k), where n is the number of vertices and k = 0, 1, 2, 3, ... represents the number of total revolutions of 360° one undergoes walking around the perimeter of the polygon, and turning at each vertex, until facing in the same direction one started off from. In other words, 360k represents the sum of all the exterior angles. For example, for ordinary convex and concave polygons k = 1, since the exterior angle sum is 360° and one undergoes only one full revolution walking around the perimeter.
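As a worked example of the extended formula (added here as an illustration): a pentagram is the star polygon with n = 5 vertices traced with k = 2 revolutions, so its interior angle sum is 180(5 − 2·2) = 180 degrees, which gives 180/5 = 36 degrees at each of the five points.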
Proton's Force Field - Fills the Atom, Attracting Electrons
In order for the atom1 to exist, the proton must capture and hold an electron in the atom's sphere, which it does. For lack of technology or language I shall refer to the proton's hold as a positively charged force field.
Our positively charged proton, a + monopole, located in the heart of an atom, attracts the electron - the negatively valued monopole; like the pull of opposite magnet poles.
Separating these giant [atom sized] force fields is the strong atomic force [SAF] acting as a chaperone denying their embrace. One could think of the SAF acting like a hollow golf ball surrounding the nucleus containing the positively charged proton force fields.
The inside of this hollow SAF golf ball is positively charged, confining the protons. And like the poles of a magnet the outside of the golf ball is negatively charged driving away the electrons in the area of the nucleus. Diagrams of this model can be found at: http://www.allnewuniverse.com/DemoOfDarkEnergy.pdf or http://www.allnewuniverse.com/BigBangFuel.ppt pdf site slide 36 or ppt site slide 16.
All this requires power [and our atom is similar to an electric motor using the magnetic north/south poles that drives an armature when supplied with an external source of power such as a household current known as a 110 line.] Our atom requires a power supply to maintain all the forces operating within. For a discussion of power requirements see the above pdf or ppt presentations or the text version at http://www.allnewuniverse.com/atoms-power-requirements.html.
This discussion notes that the approximate smallest radius of the proton's force fields must be in the area of the atom's radius; the electron's, like a magnet, is the mirror image. 1For a current picture of the atom that looks like a dark blue beach ball, created by the electron's orbitals, see: http://www.scientificamerican.com/article.cfm?id=the-shape-of-atoms or http://scitechtoday.blogspot.com/2009/12/new-microscope-reveals-shape-of-atoms.html Specifically, Igor Mikhailovskij and his collaborators at the Kharkov Institute of Physics and Technology in Ukraine have imaged the shapes of those orbitals in carbon atoms by improving an old imaging technique called field-emission microscopy. The results are in the American Physical Society's October Physical Review B. [Received 17 July 2009; published 7 October 2009] - http://prb.aps.org/abstract/PRB/v80/i16/e165404
Joined: Jan. 2006
Hogwash. A class of devices called the Field Programmable Gate Array (FPGA) has been around for almost 25 years. I remember when my Intel Field Service Engineer was hyping Xilinx FPGA’s to me in the mid-1980’s. I presumed Intel had a financial stake in Xilinx. They were way too slow for anything I was doing but they can be reconfigured on the fly, in-circuit, running hot.
That's interesting because five or ten years ago, New Scientist had a cover story on a laboratory evolution demonstration using FPGAs. They used one to evolve a pair of audio tone detectors, utterly unaided by human intelligence.
Some background: A Gate Array is a chip with a very large number of basic logic gates on it - AND gates, OR gates, flip-flops, etc. These gates are not interconnected as they come out of the factory. Instead, the outputs of all of the gates go to a series of switches and crossbars that allow the output of just about any gate to be connected to the input of just about any other gate. That means that you can "wire up" a custom circuit by setting the switches.
The first gate arrays were programmed at the factory. You worked out which switches needed to be set to which positions to produce the circuit you wanted, then you sent a list of those switch positions to the factory and the factory "flipped" the switches while it was manufacturing the chip. You got your chips a month or two later and the minimum order was usually a thousand chips.
In the next advance, Gate Array chips were produced whose switches could be Programmed in the Field by inputting a string of ones and zeros. Typically, a circuit would have a Field Programmable Gate Array chip plus a small memory chip. At startup, the memory chip would feed a string of ones and zeros into the FPGA which would flip all the switches necessary to produce the circuit you wanted and you were off. This was a lot faster and cheaper than sending a list into the factory and getting your programmed chips two months later, especially if you didn't need a thousand of them.
The New Scientist article told how some scientists decided to evolve a series of ones and zeros that would convert a FPGA into a circuit that took in an audio signal and made one output pin go high if a certain tone was detected and a second output pin go high if a second tone was detected.
To do this, they loaded a computer with a string of totally random ones and zeros. They programmed the computer to feed this string into a FPGA, input a series of audio tones and watch the output pins to see if either went high when the appropriate tone was detected. If either pin failed to give the appropriate output, the string of ones and zeros was mutated slightly and fed back into the FPGA chip and the test was repeated. At this point, the experimenters went out for a cup of coffee.
As I recall, one of the first steps towards evolving a tone detector circuit was when the outputs of two gates were connected together (normally a big no-no) and one was set to give a high output and the other a low. This effectively short circuited the power supply, screwed up all the voltage levels in the FPGA chip - and the screwed up voltage levels converted the chip from digital to linear!
Several thousand iterations later, the chip was reliably detecting the first tone and a few thousand iterations after that, it was detecting both tones properly, exactly as desired.
It's kind of scary to think that DaveTard could have performed a pioneering laboratory experiment demonstrating Darwinian evolution twenty five years ago and blew it. I'd hate to think of him on our side.
Twenty five years ago, the brightest supernova of modern times was sighted. Since then, astronomers have watched and waited for the expanding debris from this tremendous stellar explosion to crash into previously expelled material. A clear result of such a collision is demonstrated in the above time lapse video of images recorded by the Hubble Space Telescope between 1994 and 2009. The movie depicts the collision of an outward moving blast wave with the pre-existing, light-year wide ring. The collision occurred at speeds near 60 million kilometers per hour and heats the ring material, causing it to glow. Astronomers continue to study the collision as it illuminates the interesting past of SN 1987A, and provides clues to the origin of the mysterious rings.
Wisps like this are all that remain visible of a Milky Way star. About 9,000 years ago that star exploded in a supernova leaving the Veil Nebula, also known as the Cygnus Loop. At the time, the expanding cloud was likely as bright as a crescent Moon, remaining visible for weeks to people living at the dawn of recorded history. Today, the resulting supernova remnant has faded and is now visible only through a small telescope pointed toward the constellation of the Swan (Cygnus). The remaining Veil Nebula is physically huge, however, and even though it lies about 1,400 light-years distant, it covers over five times the size of the full Moon. In images like this of the complete Veil Nebula, careful viewers should be able to identify several of the individual filaments. A bright wisp at the right is known as the Witch's Broom Nebula.
Kinetic Energy - Energy that a body has as a result of its motion. Mathematically, it is defined as one-half the product of a body's mass and the square of its speed (KE = 1/2 * mass * velocity squared).
Source: NOAA National Weather Service
Related terms: Couplet, Laminar flow, Rankine Vortex, UVM, VAD, Vertical Velocity, VLCTY
Project Prairie Birds
A Citizen Science Project for Wintering Grassland Birds
Gulf Coast Bird Observatory
Texas Parks & Wildlife
TX Partners in Flight
Raven Environmental Services, Inc.
US Forest Service
US Fish & Wildlife Service, Region 2
Our southeastern grasslands are the primary destination for more than a dozen species of Nearctic migratory grassland birds. However, there are large gaps in information concerning winter distribution, habitat requirements and population changes in this group of nationally-recognized declining species. In fact, some of Partners in Flight's highest priority birds are grassland species such as Le Conte's, Grasshopper and Henslow's Sparrows. Because of their declining status, interest in this group has greatly increased, but even so, few studies address winter distribution or winter densities. Following up on the need for new information, the project partners listed above developed and initiated Project Prairie Birds in the winter of 1998. Designed as a 5-year, all-volunteer program, our first task was to engage and train a bevy of excellent birders to map the distribution and identify specific habitat requirements of all over-wintering avian grassland species along the upper Texas coast--our pilot area. This project is now complete. A published report of the results is available below.
The objectives of Project Prairie Birds are to: (1) determine the abundance of wintering grassland species, (2) identify their winter habitat preferences, and (3) utilize the data collected to develop land management guidelines and recommendations for these little-known species.
Click here for more information on Project Prairie Birds at the Texas Parks & Wildlife website including the data sheets and protocol booklet.
Click here for the Project Prairie Birds report published in the Bulletin of the Texas Ornithological Society.
Species detected on PPB transects include:
- Yellow-rumped Warbler (Myrtle)
- Palm Warbler (Yellow)
- Greater Prairie-Chicken (Attwater's)
- Le Conte's Sparrow
- Nelson's Sharp-tailed Sparrow
Physics is the study of matter and its motion based on fundamental concepts such as force, energy, mass, and charge. Some topics include acoustics, aerodynamics, chemical physics, fluid mechanics, geophysics, optics, particle physics, plasma, quantum mechanics, and solid state physics.
Advanced Technologies & Aerospace Collection indexes citations, abstracts, and full text of journal articles, books, conference papers, technical reports, and patents on all aspects of aeronautics, technology, computing, meteorology, and telecommunications. Full text journal articles and books provided as HTML and PDF. You can limit to peer-reviewed sources. 1962-present.
IEEE Xplore is a digital library providing full text access to the world’s highest quality technical literature in electrical engineering, computer science, electronics, and related disciplines. IEEE Xplore contains full text documents from IEEE journals, transactions, magazines, letters, conference proceedings, standards, and IET (Institution of Engineering and Technology) publications. Full text content is provided as PDFs. 1893-present, full text from 1988-present. | <urn:uuid:fca468b0-24e6-4eb1-ac97-0ef830cf33af> | 2.75 | 226 | Content Listing | Science & Tech. | 20.355948 |
A summary of the distribution of mussels in the St. Croix based on quantitative sampling completed by the Macalester team can be found here.
A map of the St. Croix and access to more of the data on mussels in the river can be found by clicking here.
There are 41 species of native mussels in the St. Croix River as well as two species of introduced bivalves (zebra mussels and Asian clams).
There are two federally endangered species of mussels - the Higginsi Pearly Mussel (Lampsilis higginsii) and the winged mapleleaf (Quadrula fragosa). In addition there are a number of species listed as endangered or threatened in the states of Minnesota and Wisconsin.
There have been a number of qualitative and quantitative studies of the mussel assemblages of the St. Croix River. Much of the information has been summarize in:
Hornbach, D.J. 2001. Macrohabitat factors influencing the distribution and abundance of naiads in the St. Croix River, MN and WI, USA, pp. 213-230. In G. Bauer and W. Wächtler [eds] Ecology and Evolutionary Biology of the Freshwater Mussels Unionoidea. Ecological Studies Vol. 145 Springer Verlag: Berlin.
In this paper Hornbach has examined the factors which may help to explain the distribution of mussel species in the river. Click here for more information.
In addition Hornbach and researchers at Macalester College have been conducting quantitative studies of populations at 9 sites on the St. Croix River. The methods used can be examined by clicking here. | <urn:uuid:7cb0c2ee-ff84-4961-9740-9f543e9c897c> | 3.5 | 346 | Knowledge Article | Science & Tech. | 51.443274 |
During experiments carried out in 1919 involving magnetism and acoustics, German scientist Heinrich Barkhausen provided convincing evidence that iron and other ferromagnetic materials are magnetized in small, distinct intervals rather than in a smooth, continuous manner, as had been theorized. Barkhausen did so by connecting a wire coil surrounding an iron core to an amplifier, then bringing a magnet close to the coil.
Any signal picked up by the amplifier was sent to a speaker, which enabled Barkhausen to hear a progression of clicking noises whenever he moved the magnet. The sound reflects the shifting of what are known as magnetic domains in the iron. Magnetic domains are microscopic areas in the iron in which the atoms – each a kind of tiny magnet with its own tiny magnetic field – are all aligned in the same direction. When the bar magnet is moved near the core, those domains within the iron gradually realign with the field of the magnet. Due to electromagnetic induction, the shifting of a domain creates a change in the magnetic field around the iron, and that changing magnetic field induces a current in the surrounding coil detectable by the amplifier.
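In equation form (a brief aside, not part of the original tutorial; here N is the number of turns in the coil and Φ_B the magnetic flux threading it), the induced electromotive force follows Faraday's law of induction:

$$ \mathcal{E} = -N\,\frac{d\Phi_B}{dt} $$

Each abrupt domain realignment produces a small, sudden change in flux, and therefore a brief voltage pulse: one audible 'click' through the speaker.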
A magnet, a long copper coil, an amplifier and a speaker are presented in this tutorial demonstrating the Barkhausen effect. The magnet can be moved along the outside of the coil by adjusting the Magnet Position slider. An iron core is located inside the copper coil when the tutorial initializes, but it can be removed by unchecking the Iron Core check box.
When the iron core is in position and the magnet is moved along the outside of the coil, discrete jumps in the output of the amplifier can be observed in the Amplifier Output graph. A crackling sound similar to what is actually produced as domains in the iron “snap” and “click” into place can also be heard through your computer speakers. If the iron core is removed from the coil, there is no reorientation of atoms and, therefore, no sound or amplifier output. | <urn:uuid:64b19d43-7285-4c86-bdab-bd7f57292e6e> | 4.1875 | 403 | Tutorial | Science & Tech. | 36.524882 |
Mar 10, 2013 — Not only is there no consensus yet on how life might have started on Earth, there is not even any agreement on where it started. But still, many think the mystery of life's origin can be solved. Commentator Wim Hordijk revels in the subject at a conference hosted by Princeton University.
Dec 17, 2012 — Secretions from a brown frog's skin contain chemicals that might be useful in fighting bacteria. Russian researchers are cataloging compounds in the slimy goo. Although the odds against them are long, the researchers hope their work will aid the search for new drugs. | <urn:uuid:2590a02e-ebac-44d1-aac8-fe7ef71bf491> | 2.75 | 124 | Content Listing | Science & Tech. | 58.532941 |
After decades of experimentation, scientists can finally grow diamonds that outshine even the rarest De Beers rocks. Launch the slideshow
By Elizabeth Svoboda
Posted 05.30.2006 at 2:00 am
What: Perfect single-crystal diamonds of more than two carats (the average engagement ring is less than a carat) churned out in a day. Scientists create the gemstones using a process called chemical vapor deposition (CVD), which grows diamond crystals one carbon atom at a time.
Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more. | <urn:uuid:c330e527-c45e-4ca9-9ac8-8f4a98bf9546> | 3.125 | 161 | Content Listing | Science & Tech. | 51.843361 |
In general, the term network can refer to any interconnected group or system.
Java Tutorial 3 – The For Statement and Operators
Java isn’t as redundant as perl, but there’s still almost always more than one way to write any given program. The following...
Java Tutorial 2 – Classes and Objects: A First Look
Classes are the single most important feature of Java. Everything in Java is either a class, a part of a class, or describes...
Java Tutorial 1 – Hello World: The Application
At least since the first edition of Kernighan and Ritchie’s The C Programming Language it’s been customary to begin programming...
C tutorial index
C Tutorial 1 – The basics of C
C Tutorial 2 – If statements
C Tutorial 3 – Loops
C Tutorial 4 – Functions
C tutorial 5 – Switch case
C Tutorial 6 – An...
C Tutorial 12 – Accepting command line arguments
C Tutorial 11 – Typecasting
Typecasting is a way to make a variable of one type, such as an int, act like another type, such as a char, for one single operation. To typecast...
C Tutorial 10 – C File I/O and Binary File I/O
When accessing files through C, the first necessity is to have a way to access the files. For C File I/O you need to use a FILE...
This lesson will discuss C-style strings, which you may have already seen in the array tutorial. In fact, C-style strings are really arrays of chars with a little bit of special...
C Tutorial 8 – Arrays
Arrays are useful critters that often show up when it would be convenient to have one name for a group of variables of the same type that can be accessed...
C tutorial 7 – Structures
When programming, it is often convenient to have a single name with which to refer to a group of a related values. Structures provide a way of storing...
Pointers are an extremely powerful programming tool. They can make some things much easier, help improve your program’s efficiency, and even allow you to handle unlimited...
C tutorial 5 – Switch case
Switch case statements are a substitute for long if statements that compare a variable to several “integral” values (“integral” values are simply values that can be expressed as...
C Tutorial 4 – Functions
Now that you should have learned about variables, loops, and conditional statements it is time to learn about functions. You should have an idea of their uses as we have already...
C Tutorial 3 – Loops
Loops are used to repeat a block of code. Being able to have your program repeatedly execute a block of code is one of the most basic but useful tasks in programming — many...
C Tutorial 2 – If statements
The ability to control the flow of your program, letting it make decisions on what code to execute, is valuable to the programmer. The if statement allows you to control if a...
Learning by examples
This tutorial is a port of the C++ tutorial but is designed to be a stand-alone introduction to C, even if you’ve never programmed before. Unless you have a particular reason...
Matlab Tutorial 1 – The basic features
Matlab Tutorial 2 – Vectors and matrices
Matlab Tutorial 3 – Built-in functions
Matlab Tutorial 4 – Plotting
Matlab Tutorial 5 – M-files:...
Why use a GUI in MATLAB? The main reason GUIs are used is because it makes things simple for the end-users of the program. If GUIs were not used, people would have to work from the command...
Matlab Tutorial 9 – Numerical Methods
In this tutorial we mention some useful commands that are used in approximating various quantities of interest.
We already saw that MATLAB can find the roots...
Matlab Tutorial 8 – Polynomials in MATLAB
Even though MATLAB is a numerical package, it has capabilities for handling polynomials. In MATLAB, a polynomial is represented by a vector containing its... | <urn:uuid:4037fea1-43a9-43c2-bdec-a501aeaefdea> | 3.578125 | 864 | Content Listing | Software Dev. | 61.372728 |
Last year at TED I gave an introduction to the LHC. And I promised to come back and give you an update on how that machine worked. So this is it. And for those of you that weren't there, the LHC is the largest scientific experiment ever attempted -- 27 kilometers in circumference. Its job is to recreate the conditions that were present less than a billionth of a second after the universe began, up to 600 million times a second. It's nothing if not ambitious.
This is the machine below Geneva. We take the pictures of those mini-Big Bangs inside detectors. This is the one I work on. It's called the ATLAS detector -- 44 meters wide, 22 meters in diameter. Spectacular picture here of ATLAS under construction so you can see the scale.
On the 10th of September last year we turned the machine on for the first time. And this picture was taken by ATLAS. It caused immense celebration in the control room. It's a picture of the first beam particle going all the way around the LHC, colliding with a piece of the LHC deliberately, and showering particles into the detector. In other words, when we saw that picture on September 10th we knew the machine worked, which is a great triumph. I don't know whether this got the biggest cheer, or this, when someone went onto Google and saw the front page was like that. It means we made cultural impact as well as scientific impact.
About a week later we had a problem with the machine, related actually to these bits of wire here -- these gold wires. Those wires carry 13 thousand amps when the machine is working in full power. Now the engineers amongst you will look at them and say, "No they don't. They're small wires." They can do that because when they are very cold they are what's called superconducting wire. So at minus 271 degrees, colder than the space between the stars, those wires can take that current.
In one of the joints between over 9,000 magnets in LHC, there was a manufacturing defect. So the wire heated up slightly, and its 13,000 amps suddenly encountered electrical resistance. This was the result. Now that's more impressive when you consider those magnets weigh over 20 tons, and they moved about a foot. So we damaged about 50 of the magnets. We had to take them out, which we did. We reconditioned them all, fixed them. They're all on their way back underground now. By the end of March the LHC will be intact again. We will switch it on, and we expect to take data in June or July, and continue with our quest to find out what the building blocks of the universe are.
Now of course, in a way those accidents reignite the debate about the value of science and engineering at the edge. It's easy to refute. I think that the fact that it's so difficult, the fact that we're overreaching, is the value of things like the LHC. I will leave the final word to an English scientist, Humphrey Davy, who, I suspect, when defending his protege's useless experiments -- his protege was Michael Faraday -- said this, "Nothing is so dangerous to the progress of the human mind than to assume that our views of science are ultimate, that there are no mysteries in nature, that our triumphs are complete, and that there are no new worlds to conquer." Thank you. (Applause)
In this short talk from TED U 2009, Brian Cox shares what's new with the CERN supercollider. He covers the repairs now underway and what the future holds for the largest science experiment ever attempted.
Physicist Brian Cox has two jobs: working with the Large Hadron Collider at CERN, and explaining big science to the general public. He's a professor at the University of Manchester.
Creating Accessible Data Tables
This article demonstrates how to code accessible data tables in (X)HTML, enabling visually impaired users who employ assistive technologies to interpret the table data. Two views of a tabular data table are presented and discussed.
- Source Markup - Vertical View: the table markup as written in a source code/text editor
- Source Markup - Linear View: the table markup as an assistive device will interpret it
Source Markup: Vertical View
This is how accessible data table markup appears when written in a text editor. Each element must be correctly opened, closed, and correctly nested.
<table summary="contains accessible tabular data">
  <caption>Accessible Data Table</caption>
  <thead>
    <tr>
      <th scope="col">Column 1</th>
      <th scope="col">Column 2</th>
      <th scope="col">Column 3</th>
    </tr>
  </thead>
  <tfoot>
    <tr><td colspan="3">End table</td></tr>
  </tfoot>
  <tbody>
    <tr>
      <th scope="row">Row A</th>
      <td>data</td>
      <td>data</td>
    </tr>
    <tr>
      <th scope="row">Row B</th>
      <td>data</td>
      <td>data</td>
    </tr>
    <tr>
      <th scope="row">Row C</th>
      <td>data</td>
      <td>data</td>
    </tr>
    <tr>
      <th scope="row">Row D</th>
      <td>data</td>
      <td>data</td>
    </tr>
  </tbody>
</table>
The table looks as follows when rendered:
|Column 1|Column 2|Column 3|
|Row A|data|data|
|Row B|data|data|
|Row C|data|data|
|Row D|data|data|
|End table|
Defining the markup
Let's break this down and look at what all the different parts of the table mean:
- table element -
- This element starts and ends a data table.
- summary attribute -
This attribute is added to the opening <table> tag.
- It can be acknowledged by a screenreader, but it will not be rendered in a graphical browser view of the data table.
- It is implemented to give a short idea of what is contained within the data table.
- Important: Following web standards and accessibility guidelines, always try to keep your attribute value descriptions as concise as possible.
- caption element -
According to web standards, the caption element should accompany all HTML data tables. Its opening tag comes directly after the opening <table> tag.
When added to your tabular data table markup, it:
- Gives a short descriptive title to the data table
- Is visible in browser view
- Is easily identified by assistive technologies
- Is discoverable by search engines
- table header element -
Not to be confused with the table heading element
<th></th>, this element defines the table header section of the data table.
Its opening tag is placed directly after the closing </caption> tag and directly before the first table row <tr>.
- table footer element -
- This element defines the footer section of the data table.
- It is an optional addition and can be omitted.
Note: If you use it, it must be placed directly before the table body opening tag <tbody>.
- colspan attribute -
This attribute is added to the opening table data element tag
<td colspan="">, which is part of the table footer element section.
- Enter the number of columns you want to span between the quotation marks.
- It enables the table footer to safely span all columns without a break, eliminating the vertical column lines.
- table body element -
- This element defines the body area of the table and surrounds its contents.
It comes directly after the closing table footer element
</tfoot>, and before the opening table row element <tr>.
- table row element -
- This element marks the start and end of a data table row.
- table heading element -
- This element identifies our data table rows and columns.
You can now apply the scope attribute in the opening table heading tag <th scope=""> to define our rows and columns.
- scope attribute -
You can use the scope attribute to declare a table heading element <th></th> as either a row or a column heading. It is inserted in the opening table heading element tag, for example <th scope="row"> or <th scope="col">.
- table data element -
- This element contains tabular data.
- It is located at the intersection of a column and row.
Source Markup: Linear View
Markup is authored in a vertical manner (as in the previous code block - Source Markup: Vertical View). The following tabular data table illustrates markup elements and attributes in linear order, as rendered by a user agent.
|<table summary="Contains accessible tabular data">|
|<caption>Accessible Tabular Data Table</caption>|
|<thead><tr><th scope="col">Column 1</th>||<th scope="col">Column 2</th>||<th scope="col">Column 3</th></tr></thead>|
|<tfoot><tr><td colspan="3"> End table</td></tr></tfoot>|
|<tbody><tr><th scope="row">Row A</th>||<td>data</td>||<td>data</td></tr>|
|<tr><th scope="row">Row B</th>||<td>data</td>||<td>data</td></tr>|
|<tr><th scope="row">Row C</th>||<td>data</td>||<td>data</td></tr>|
|<tr><th scope="row">Row D</th>||<td>data</td>||<td>data</td></tr></tbody>|
Assistive Technology linearization
Now that we've seen the source markup in linear view, let's look at how an Assistive Technology handles it. These devices read a table starting with the first cell in the first row (1, 1), and proceed horizontally to the end of that row (1, 3). It then moves to the first cell in the next row and proceeds to the end of that row, and so on until the end of the table... although in this case (2, 1) is defined as the footer, so it moves on to (3, 1) and reads (2, 1) last. Modern ATs will read all data contained within a cell. Older ATs used to read just the first line of a cell and then move to the next cell. This produces major confusion for a user if a cell contains data broken over more than one line. Assigning the coordinates discussed above, an assistive device gives cognitive meaning to the data in the cell.
|1, 1||1, 2||1, 3|
|3, 1||3, 2||3, 3|
|4, 1||4, 2||4, 3|
|5, 1||5, 2||5, 3|
|6, 1||6, 2||6, 3|
This is a short guide to making HTML data tables accessible. I believe that the most important point of this exercise is to enable tabular data to be understood by all users. If you're dealing with complex tabular data tables, see if there is a logical way to break them up into simpler units. Again, we're striving for a quick and easy understanding of the data. Please remember that the foundation of any accessible web page is code written to current web standards.
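If a complex table truly cannot be simplified, one widely used technique (sketched below as a minimal, hypothetical example; it is not covered in the article above) is to tie each data cell to its headers explicitly with the id and headers attributes, which assistive technologies can use to announce the correct headers for any cell:

<table summary="Sketch of explicit header associations">
  <tr>
    <th id="name">Name</th>
    <th id="q1">Q1</th>
    <th id="q2">Q2</th>
  </tr>
  <tr>
    <th id="rowa" scope="row">Row A</th>
    <td headers="rowa q1">data</td>
    <td headers="rowa q2">data</td>
  </tr>
</table>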
This article is licensed under a Creative Commons Attribution, Non Commercial - Share Alike 2.5 license. | <urn:uuid:22fa42cd-5c1d-44f0-b2a1-46f55f2f5fa7> | 3.734375 | 1,637 | Tutorial | Software Dev. | 52.956326 |
Re: Mesozoic snow - refined
Let's review what I already said...
GCMs, continental interiors, freezing temperatures in winter, large seasonal
variation in temperatures, climate not equitable, ice rafted deposits through
much of Phanerozoic, small ice sheets favorable at high latitudes...
Granted, you'd have to take a few steps outside of the box in order to apply
what I said to the coastal regions of the interior seaway during the early late
Maastrichtian. All things considered, seasonality within the vicinity of the
seaway would have been minimal. But, given the probable extent of the seaway at
that time, and the location of the Hell Creek formation, the occurrence of snow
was most definitely a viable possibility.
It's a meteorological truth: cold air always wins. During the winter, I should
like to think that up north would have radiated out enough to produce a dry
continental polar air mass that would have been cold enough to kick off
snowfall as it moved south and encountered that moist marine tropical air mass
from the south. The denser cold air stays at the surface and lifts the warm air
aloft. In other words, a cold front. The warm air adiabatically cools and the
moisture precips out directly into the cold air below. If the air is freezing,
you get snow. If the surface is freezing, you get accumulation. If it snows
long enough over that above freezing surface, evaporative cooling will
eventually drop the surface to freezing and accumulation will take place. Even
if you back off slightly when it comes to the dynamics by weakening atmospheric
temperature contrasts, you'd still get the type of scenario, though at a
reduced scale. With a continental polar air mass during the winter that was
less potent than today's (the most-likely case), whenever a surge of cold air
advection took place, fronts and their precip would push south, though
they wouldn't have been as vigorous.
In any case, given the situation, such events normally wouldn't have lasted for
very long, with only a slight accumulation before the pattern shifted enough to
allow warm marine air to once again advect north and into the region. | <urn:uuid:fdc26938-eaa1-4577-bfbf-a6a3756d9d53> | 3 | 510 | Comment Section | Science & Tech. | 48.086142 |
A Comparative Mind: Crows can ‘reason’ about causes, a recent study finds
In an experiment, researchers found that crows were more likely to forage when they could attribute changes in their environment to a human presence.
This behaviour may suggest “complex cognition”, according to a study published in the Proceedings of the National Academy of Sciences. Until now the ability to make inferences based on causes has been attributed to humans but not animals.
The study was a collaboration between researchers from the University of Auckland, New Zealand, the University of Cambridge, UK and the University of Vienna, Austria.
In their experiment eight wild crows used tools to remove food from a box. Inside the enclosure there was a stick and the crows were tested in two separate series of events that both involved the stick moving.
In one instance a human entered the hide and the stick moved. In the other, the stick still moved but no human entered. On the occasions when no human was observed entering the hide, the crows abandoned their efforts to probe for food using a tool more frequently than they did when a human had been observed.
According to the scientists, the study proved that crows attributed the stick’s movement to human presence. | <urn:uuid:68af57d3-c8cc-4945-b571-325fa4836fc1> | 3.71875 | 257 | Personal Blog | Science & Tech. | 37.468737 |
In abstract algebra, a division ring, also called a skew field, is a ring in which division is possible. Specifically, it is a non-trivial unital ring in which every non-zero element a has a multiplicative inverse, i.e., an element x with a·x = x·a = 1. Stated differently, a ring is a division ring if and only if the group of units is the set of all non-zero elements.
Division rings differ from fields only in that their multiplication is not required to be commutative. However, by Wedderburn's little theorem all finite division rings are commutative and therefore finite fields. Historically, division rings were sometimes referred to as fields, while fields were called “commutative fields”.
Relation to fields and linear algebra
All fields are division rings; more interesting examples are the non-commutative division rings. The best known example is the ring of quaternions H. If we allow only rational instead of real coefficients in the constructions of the quaternions, we obtain another division ring. In general, if R is a ring and S is a simple module over R, then, by Schur's lemma, the endomorphism ring of S is a division ring; every division ring arises in this fashion from some simple module.
Much of linear algebra may be formulated, and remains correct, for (left) modules over division rings instead of vector spaces over fields. Every module over a division ring has a basis; linear maps between finite-dimensional modules over a division ring can be described by matrices, and the Gaussian elimination algorithm remains applicable. Differences between linear algebra over fields and skew fields occur whenever the order of the factors in a product matters. For example, the proof that the column rank of a matrix over a field equals its row rank yields for matrices over division rings only that the left column rank equals its right row rank: it does not make sense to speak about the rank of a matrix over a division ring.
The center of a division ring is commutative and therefore a field. Every division ring is therefore a division algebra over its center. Division rings can be roughly classified according to whether or not they are finite-dimensional or infinite-dimensional over their centers. The former are called centrally finite and the latter centrally infinite. Every field is, of course, one-dimensional over its center. The ring of Hamiltonian quaternions forms a 4-dimensional algebra over its center, which is isomorphic to the real numbers.
- As noted above, all fields are division rings.
- The real and rational quaternions are strictly noncommutative division rings.
- Let σ be a nontrivial automorphism of the field C of complex numbers onto itself (e.g., complex conjugation). Let C((z, σ)) denote the ring of formal Laurent series with complex coefficients, wherein multiplication is defined as follows: instead of simply allowing coefficients to commute directly with the indeterminate z, define z^i a = σ^i(a) z^i for each a in C and each index i (the rule is written out in display form below). The resulting ring of Laurent series is a strictly noncommutative division ring known as a skew Laurent series ring. This concept can be generalized to the ring of Laurent series over any fixed field F, given a nontrivial automorphism σ of F.
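To make the twisted multiplication concrete, here it is in display form, a sketch using the notation above (sources differ on whether σ or its inverse appears in the convention):

```latex
% Coefficients commute past the indeterminate z only after being
% twisted by the automorphism \sigma:
\[
  z^{i} a \;=\; \sigma^{i}(a)\, z^{i}, \qquad a \in C,\; i \in \mathbb{Z},
\]
% so the product of two skew Laurent series is
\[
  \Bigl(\sum_{i} a_i z^{i}\Bigr)\Bigl(\sum_{j} b_j z^{j}\Bigr)
  \;=\; \sum_{k} \Bigl(\sum_{i+j=k} a_i\, \sigma^{i}(b_j)\Bigr) z^{k}.
\]
```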
Ring theorems
Related notions
Division rings used to be called "fields" in an older usage. In many languages, a word meaning "body" is used for division rings, in some languages designating either commutative or non-commutative division rings, while in others specifically designating commutative division rings (what we now call fields in English). A more complete comparison is found in the article Field (mathematics).
Skew fields have an interesting semantic feature: a modifier (here "skew") widens the scope of the base term (here "field"). Thus a field is a particular type of skew field, and not all skew fields are fields.
- Lam (2001).
- Simple commutative rings are fields. See Lam (2001).
- Lam (2001), p. 10.
See also
- Lam, Tsit-Yuen (2001). A first course in noncommutative rings. Graduate texts in mathematics 131 (2 ed.). Springer. ISBN 0-387-95183-0. | <urn:uuid:d290d9d2-b5e5-4d84-824c-97619c475b62> | 4.15625 | 899 | Knowledge Article | Science & Tech. | 43.298473 |
Solving the Carbon Conundrum
Progress Made on State-of-the-Art Carbon–Measuring Instrument
Scientists know the global levels of carbon dioxide in the atmosphere. They know how much is produced through natural processes and the burning of fossil fuels. But the numbers don’t jibe; the budget doesn’t balance. More carbon dioxide is produced than what shows up in the atmosphere. So where precisely does the carbon go and will the “sinks” that remove it from the environment continue doing so?
A state-of-the-art carbon-measuring instrument currently under development at Goddard promises to provide a more complete picture of the carbon issue and perhaps solve the conundrum, scientists believe.
A few of the CO2 Sounder Lidar team members are shown here inside a DC-8 during the 2010 field campaign demonstrating the instrument's capabilities. From top left (clockwise); Haris Riris, Jim Abshire, Bill Hasselbrack, and Mike Rodriguez.
Begun several years ago under Goddard research and development programs, the CO2 Sounder Lidar is a strong candidate for a next-generation carbon-monitoring mission, the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS). Led by Principal Investigator Jim Abshire, the new measurement technique is proving in aircraft demonstrations that it can meet the ASCENDS criteria of providing global around-the-clock measurements with unprecedented precision and resolution, regardless of the season.
For scientists, the data couldn’t come too soon.
“We know the background levels of carbon dioxide very well. We have a global average,” said Randy Kawa, a Goddard atmospheric scientist involved in the sounder’s development. Scientists also know Earth’s oceans and vegetation remove and store carbon. “However, half of what we’re emitting isn’t showing up in the atmosphere,” Kawa added. “That’s why we need details of the processes. We need to know where the sinks are and how well they will operate in the future.” This is especially important since the carbon dioxide produced by coal-burning plants and vehicle emissions increases each year, he said.
Currently, a Japanese-built satellite, IBUKI or GOSAT (Greenhouse Gases Observing Satellite), is making global measurements of carbon dioxide. NASA, too, plans to launch its first-ever mission dedicated to carbon measurements in 2013. This mission, the Orbiting Carbon Observatory-2 (OCO-2), will fulfill the objectives of its predecessor, which was destroyed during a launch failure in 2009. NASA is planning to launch OCO-3 on the International Space Station after 2015.
CO2 Sounder Provides 24-hour Coverage
Although IBUKI and OCO were developed to characterize and map the locations of carbon sources and sinks and monitor how they change over time, neither one can provide around-the-clock coverage. That’s because they are equipped with “passive” instruments that rely on reflected sunlight to gather carbon measurements. The instruments, therefore, do not work at night, including high latitudes during winter, or on partially cloudy days.
In sharp contrast, the CO2 Sounder Lidar carries its own light source — a pair of tunable laser transmitters — and a two-wavelength laser-absorption spectrometer that measures both carbon dioxide and oxygen. Although laser light cannot penetrate thick clouds, it can measure through thin clouds and particles, which prove troublesome for passive systems.
“What we do is bounce a laser beam off Earth’s surface,” explained Abshire. Like all atmospheric gases, carbon dioxide and oxygen absorb the light in narrow wavelength bands. By tuning its lasers across those absorption lines, the instrument can determine the levels of both gases in that vertical path. “The more carbon-dioxide molecules in the path, the deeper the absorption lines,” Abshire said. Measuring oxygen is important, he added, because it reveals atmospheric pressure and helps scientists discern the mixing ratios of both gases.
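The retrieval behind that description amounts to comparing the surface echo at a wavelength on the CO2 absorption line with one just off it. Below is a minimal Python sketch of that integrated-path differential-absorption arithmetic; every number in it (powers, cross section, air column) is an illustrative placeholder, not an instrument value.

```python
import math

# Illustrative integrated-path differential-absorption retrieval.
# All numbers are hypothetical placeholders, not CO2 Sounder values.
P_on  = 0.44   # normalized surface-echo power at the on-line wavelength
P_off = 1.00   # normalized surface-echo power at the off-line wavelength

# Beer-Lambert law: P_on/P_off = exp(-2 * tau), the 2 for the round trip.
tau_one_way = 0.5 * math.log(P_off / P_on)

sigma_diff = 5.0e-27              # assumed differential cross section, m^2/molecule
n_co2 = tau_one_way / sigma_diff  # CO2 column, molecules per m^2

n_air = 2.1e29                    # rough dry-air column, molecules per m^2;
                                  # in practice the O2 channel constrains this
print(f"one-way optical depth: {tau_one_way:.3f}")
print(f"column-averaged CO2:  {1e6 * n_co2 / n_air:.0f} ppm")
```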
Efforts also are afoot to develop a similar laser capability to measure methane, another greenhouse gas. Led by Goddard scientist Haris Riris, the addition would further enhance the sounder’s usefulness, said Bob Connerton, who oversees Earth science-related technology-development efforts at Goddard.
To demonstrate its viability as a space-based instrument, Abshire and his team began flying the sounder on high-altitude aircraft in 2008 — a field campaign supported by NASA’s Earth Science Instrument Incubator Program. These demonstration flights have occurred every summer since. In July, his team once again packaged the sounder inside a DC-8 and carried out seven separate flights collecting data over California, Nevada, British Columbia, Iowa, Minnesota, Wisconsin, and New Mexico.
“The sounder worked quite well,” Abshire said. “We recorded strong lidar signals and clear absorption lines for both carbon dioxide and oxygen at all altitudes and surface types. The instrument’s measurements clearly separated the laser signal scattered from the atmosphere and ground surface, as needed for a space mission.”
Though pleased with his instrument’s performance, he’s not declaring victory yet. NASA plans to compete the ASCENDS space instrument, and the Jet Propulsion Laboratory and the Langley Research Center are developing competing approaches. “Of course, I think our chances are very good because we have a really strong team. Our approach is also the most scalable to space and provides more information about carbon dioxide.”
Whether NASA will select his technique over the others remains to be seen. The Agency is expected to release an ASCENDS announcement of opportunity sometime in the next two years. “We’ll definitely be in competitive mode within a year,” but regardless of who ultimately wins, Abshire said, “we want this mission to occur.”
The Office of the Chief Technologist is involved in a variety of projects, missions, and technologies. | <urn:uuid:0b3ac681-6274-471d-990a-b1a5509f52a8> | 3.625 | 1,269 | Knowledge Article | Science & Tech. | 33.267641 |
Date: Nov 29, 2012 1:31 AM
Author: Peter Duveen
Subject: Re: Some important demonstrations on negative numbers
To demonstrate that 1/-a = -(1/a):
(1/-a) x -a = 1        definition of 1/-a        eq. 1
(1/a) x a = 1          definition of 1/a
Multiplying both sides by -1:
-1 x (1/a) x a = -1
(1/a) x -1 x a = -1
(1/a) x -a = -1
Multiplying both sides by -1 again:
-(1/a) x -a = 1        eq. 2
From eq. 1 and eq. 2, both (1/-a) and -(1/a) give 1 when multiplied by -a; since the multiplicative inverse of -a is unique, it follows that
-(1/a) = (1/-a).
Perhaps a bit clumsy, but once demonstrated in its generality, the student no longer needs to wonder about this relationship when he sees a negative number in the denominator. As for even clumsier analogies, it is fine to use them to interpret the result as long as the result has been demonstrated clearly. It seems those who took geometry should easily assimilate such a demonstration (proof). | <urn:uuid:00554b0f-8997-4f33-847c-c7330a2edc27> | 3.1875 | 238 | Comment Section | Science & Tech. | 80.427539 |
A water management project requires a "wet coil" (coil will be submerged in aqueous media) designed to generate a steady-state electromagnetic field of adjustable magnetic flux density at the center. The coil will be helical with a hollow core (wound on a nylon perforated cylinder used as former). The inner diameter of the coil needs to be between 6 and 12 centimeters, say 8 cm to put a number to it.
I would like to understand how to compute the number of turns needed, and the current that must be driven through the coil, to generate a specific magnetic flux density.
In this context the required range is from 0.1 Tesla to 1 Tesla, but I would rather understand the method than the result. Also, if there are any suggested commercial resources / products to look at, for both coils and drivers, pointers would be very helpful.
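As a first-order sketch of the method being asked about: treat the coil as a bare finite solenoid, for which the on-axis flux density at the center is B = μ0·N·I / sqrt(L² + 4R²), reducing to the textbook μ0·N·I/L for a long coil. The Python sketch below inverts that for the required ampere-turns; the winding length L is an assumed parameter, and the water, the former, wire gauge, and heating are all ignored.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A (water is nearly non-magnetic)

def center_field(turns: int, current: float, length: float, radius: float) -> float:
    """On-axis flux density (tesla) at the center of a finite solenoid."""
    return MU0 * turns * current / math.hypot(length, 2 * radius)

def ampere_turns(b_target: float, length: float, radius: float) -> float:
    """N*I product needed to reach b_target at the center."""
    return b_target * math.hypot(length, 2 * radius) / MU0

R, L = 0.04, 0.10      # 8 cm inner diameter; 10 cm winding length (assumed)
for b in (0.1, 1.0):
    ni = ampere_turns(b, L, R)
    # sanity check: plugging N*I back in reproduces the target field
    assert abs(center_field(1000, ni / 1000, L, R) - b) < 1e-9
    print(f"B = {b:.1f} T -> N*I ~ {ni:,.0f} ampere-turns "
          f"({ni/1000:,.0f} A through 1000 turns)")
```

Note what the numbers imply: even the 0.1 T end of the range needs on the order of 10,000 ampere-turns, and 1 T needs about 100,000, which is well beyond a simple air-core winding. Fields in that range normally call for water-cooled magnet wire, an iron core, or a superconducting coil.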
My background is oriented more towards banal analog and digital electronics and software, than electromagnetic phenomena or applied physics. Though I did learn basic electricity and magnetism 2 decades ago, that's all very rusty. Hence, I apologize if my question leaves too many gaps, please let me know and I will amend the question accordingly. | <urn:uuid:bc283406-6795-42b5-beb7-2c4dd10bc32d> | 2.71875 | 246 | Q&A Forum | Science & Tech. | 44.064573 |
Algae cause a stir in the local environment
Oct 18, 2010
The swishing actions of tiny swimming organisms play a key role in distributing heat and nutrients throughout the world's oceans and lakes, but these mixing effects are more complicated than we first thought. That is according to two separate research groups, based in the US and the UK, that have examined the fluid disturbances that occur in the immediate vicinity of swimming algae.
Many microorganisms have evolved to be able to move through liquids for various biological processes, including foraging for food and reproduction, and this motion acts to stir the fluids. While scientists have examined these processes at relatively large scales, there is still a lack of quantitative data on the fluid dynamics behind these processes at the microscale.
Now, a group of physicists based at the University of Cambridge, UK, led by Knut Drescher, has succeeded in taking a closer look through an experiment involving two types of common algae. The first organism, Chlamydomonas reinhardtii, is a small alga that swims by paddling a pair of whip-like flagella. The second was Volvox carteri, a larger, spherical type of algae that propels itself with thousands of flagella covering its surface.
By suspending fluorescent polystyrene microspheres in the water surrounding the algae, Drescher's team was able to trace the time-averaged water flows using a tracking microscope. The experiments revealed that Volvox carteri interact with water in a similar way to sedimenting particles acted on by gravity. "People assumed that the effects of gravity would be minimal," says Ray Goldstein, a member of the Cambridge team. "The flow field arising from the gravitational force on the organism falls off very slowly with distance, so the mutual interaction of such organisms is much stronger than without gravity," he explained.
The researchers found that Chlamydomonas reinhardtii, on the other hand, trigger more complicated flow fields in the vicinity, set up by the combined action of the cell body and its two flagella.
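The contrast between the two cases shows up in how fast the flow decays with distance. A swimmer with a net force on it (for example, one heavier than water) looks from afar like a point force, a Stokeslet, whose flow falls off as 1/r; a neutrally buoyant, force-free swimmer looks at leading order like a force dipole, falling off as 1/r². A toy comparison in Python, with arbitrary prefactors standing in for the physical strengths:

```python
# Far-field decay of two generic microswimmer flow signatures.
# Prefactors are arbitrary illustrative values, not measured ones.
STOKESLET = 1.0   # point-force strength (gravity-dominated swimmer)
DIPOLE    = 1.0   # force-dipole strength (force-free swimmer)

for r in (1, 2, 5, 10, 50):            # distance in body lengths
    u_force  = STOKESLET / r           # ~ 1/r decay
    u_dipole = DIPOLE / r ** 2         # ~ 1/r^2 decay
    print(f"r = {r:3d}: Stokeslet ~ {u_force:.4f}, dipole ~ {u_dipole:.4f}")
```

At 50 body lengths the point-force flow is still fifty times stronger than the dipole flow, which is the sense in which gravity makes the mutual interaction "much stronger".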
In a separate study, based at the Haverford College in the US, a group led by Jeffrey Guasto created a different vantage point by confining Chlamydomonas reinhardtii in a thin film of water. The researchers focused on the two-dimensional motion of a single stroke of the algae's flagella, using tracer particles and a high-speed camera, a set-up that enabled them to study the impact of individual flagella movements in more detail.
While its time-averaged results agree with the Cambridge team, Guasto's group discovered that flow fields vary significantly over the course of one complete "breaststroke". This finding suggests that the shape and scale of the fluid flows triggered by this alga may be yet more complicated. "Researchers in this field have been content with the widely accepted – but simplified – hydrodynamic models of single swimming cells that ignore near-field and time-dependent effects of swimming," says Guasto.
Impact of the individual
The next stage in this research is to explore the dynamics across a range of scales to build a more complete picture of how individual swimmers can influence the impact of large groups of swimming organisms. "The locomotion of swimming organisms contributes to the distribution of nutrients, pollutants and heat. Only recently have researchers devoted considerable attention to these effects at the scale of unicellular microorganisms," says Guasto.
One avenue the Cambridge team intends to pursue is to consider how these latest findings fit in with their previous research that explored the interactions between individual Volvox alga, as reported on physicsworld.com last year.
Howard Stone, a fluid mechanics researcher at Princeton University, is impressed by the work of both groups. "I am confident that [both sets of] measurements will become standard references in the field," he says.
Both groups have recently published their findings in Physical Review Letters.
About the author
James Dacey is a reporter for physicsworld.com | <urn:uuid:05e10e6f-093f-4ede-9529-aad1593eb5f2> | 3.578125 | 832 | Truncated | Science & Tech. | 29.244925 |
Science Fair Project Encyclopedia
The Flehmen response is a particular type of curling of the lips in ungulates, felids, and many other mammals, which facilitates the transfer of chemicals into the vomeronasal organ. This behavior allows animals to smell the urine of others in order to determine several things: the presence or absence of estrus, the physiological state of the animal, and how long ago the animal passed by. This particular response is most recognizable in stallions when smelling the urine of a mare in heat.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | <urn:uuid:6604aa29-8df7-4b95-9dce-20983f52612f> | 3.671875 | 144 | Knowledge Article | Science & Tech. | 31.516154 |
Trawls were rugged metal nets used at a variety of depths to collect larger animals. The Challenger crew found it easier to use trawls, rather than the heavier and more awkward dredges, when collecting life from the bottom.

Nets of different size and mesh were the primary tool aboard Challenger for collecting living samples from all depths of the ocean. A tow net was attached at various depths on the sounding line, and collected both floating and slow-swimming life forms. | <urn:uuid:ded62028-91a2-49b3-ae1c-a29be6853517> | 3.4375 | 105 | Knowledge Article | Science & Tech. | 52.592361 |
Chemical reactions (stoichiometry)
Stoichiometry Introduction to stoichiometry
⇐ Use this menu to view and help create subtitles for this video in many different languages. You'll probably want to hide YouTube's captions if using these subtitles.
- We know what a chemical equation is and we've learned
- how to balance it.
- Now, we're ready to learn about stoichiometry.
- And this is an ultra fancy word that often makes people
- think it's difficult.
- But it's really just the study or the calculation of the
- relationships between the different
- molecules in a reaction.
- This is the actual definition that Wikipedia gives,
- stoichiometry is the calculation of quantitative,
- or measurable, relationships of the
- reactants and the products.
- And you're going to see in chemistry, sometimes people
- use the word reagents.
- For most of our purposes you can use the word reagents and
- reactants interchangeably.
- They're both the reactants in a reaction.
- The reagents are sometimes for special types of reactions
- where you want to throw a reagent in and see if
- something happens.
- And see if your belief about that substance is true or
- things like that.
- But for our purposes a reagent and
- reactant is the same thing.
- So it's a relationship between the reactants and the products
- in a balanced chemical equation.
- So if we're given an unbalanced one, we know how to
- get to the balanced point.
- A balanced chemical equation.
- So let's do some stoichiometry.
- Just so we get practice balancing equations, I'm
- always going to start with unbalanced equations.
- Let's say we have iron three oxide.
- Two iron atoms with three oxygen atoms.
- Plus aluminum, Al.
- And it yields Al2 O3 plus iron.
- So remember when we're doing stoichiometry first of all, we
- want to deal with balanced equations.
- A lot of stoichiometry problems will give you a
- balanced equation.
- But I think it's good practice to actually balance the
- equations ourselves.
- So let's try to balance this one.
- We have two iron atoms here in this iron three oxide.
- How many iron atoms do we have on the right hand side?
- We only have one.
- So let's multiply this by 2 right here.
- All right, oxygen, we have three on this side.
- We have three oxygens on that side.
- That looks good.
- Aluminum, on the left hand side we only have
- one aluminum atom.
- On the right hand side we have two aluminum atoms. So we have
- to put a 2 here.
- And we have balanced this equation.
- So now we're ready to do some stoichiometry.
- There's not just one type of stoichiometry problem, but
- they're all along the lines of, if I give you x grams of
- this how many grams of aluminum do I need to make
- this reaction happen?
- Or if I give you y grams of this molecule and z grams of
- this molecule which one's going to run out first?
- That's all stoichiometry.
- And we'll actually do those exact two types of problems in
- this video.
- So let's say that we were given 85 grams of the iron
- three oxide.
- So my question to you is how many grams of
- aluminum do we need?
- Well you look at the equation, you immediately
- see the mole ratio.
- So for every mole of this, so for every one atom we use of
- iron three oxide we need two aluminums.
- So what we need to do is figure out how many moles of
- this molecule there are in 85 grams. And then we need to
- have twice as many moles of aluminum.
- Because for every mole of the iron three oxide, we have two
- moles of aluminum.
- And we're just looking at the coefficients, we're just
- looking at the numbers.
- One molecule of iron three oxide combines with two
- molecule of aluminum to make this reaction happen.
- So lets first figure out how many moles 85 grams are.
- So what's the atomic mass or the mass number
- of this entire molecule?
- Let me do it down here.
- So we have two irons and three oxygens.
- So let me go down and figure out the atomic masses of iron
- and oxygen.
- So iron is right here, 55.85.
- I think it's fair enough to round to 56.
- Let's say we're dealing with the version of iron, the
- isotope of iron, that has 30 neutrons.
- So it has an atomic mass number of 56.
- So iron has 56 atomic mass number.
- And then oxygen, we already know, is 16.
- Iron was 56.
- This mass is going to be 2 times 56 plus 3 times 16.
- We can do that in our heads.
- But this isn't a math video, so I'll get
- the calculator out.
- 2 times 56 plus 3 times 16 is equal to 160.
- Is that right?
- That's 48 plus 112, right, 160.
- So one molecule of iron three oxide is going to be 160
- atomic mass units.
- So one mole or 6.02 times 10 to the 23 molecules of iron
- oxide is going to have a mass of 160 grams.
- So in our reaction we said we're starting off with 85
- grams of iron oxide.
- How many moles is that?
- Well 85 grams of iron three oxide is equal
- to 85 over 160 moles.
- So that's equal to, 85 divided by 160 equals 0.53125.
- Equals 0.53 moles.
- So everything we've done so far in this green and light
- blue, we figured out how many moles 85 grams of
- iron three oxide is.
- And we figured out it's 0.53 moles.
- Because a full mole would have been 160 grams. But
- we only have 85.
- So it's point 0.53 moles.
- And we know from this balanced equation, that for every mole
- of iron three oxide we have, we need to have
- two moles of aluminum.
- So if we have 0.53 moles of the iron molecule, iron three
- oxide, then we're going to need twice as many aluminum.
- So we're going to need 1.06 moles of aluminum.
- I just took 0.53 times 2.
- Because the ratio is 1:2.
- For every molecule of this, we need two molecules of that.
- So for every mole of this, we need two moles of this.
- If we have 0.53 moles, you multiply that by 2, and you
- have 1.06 moles of aluminum.
- All right, so we just have to figure out how many grams is a
- mole of aluminum and then multiply that times 1.06 and
- we're done.
- So aluminum, or aluminium as some of our friends across the
- pond might say.
- Aluminium, actually I enjoy that more.
- Aluminium has the atomic weight or the
- weighted average is 26.98.
- But let's just say that the aluminium that we're dealing
- with has a mass of 27 atomic mass units.
- So one aluminum is 27 atomic mass units.
- So one mole of aluminium is going to be 27 grams. Or 6.02
- times 10 to 23 aluminium atoms is going to be 27 grams. So if
- we need 1.06 moles, how many is that going to be?
- So 1.06 moles of aluminium is equal to 1.06 times 27 grams.
- And what is that?
- 1.06 times 27.
- Equals 28.62.
- So we need 28.62 grams of aluminium, I won't write the
- whole thing there, in order to essentially use up our 85
- grams of the iron three oxide.
- And if we had more than 28.62 grams of aluminium, then
- they'll be left over after this reaction happens.
- Assuming we keep mixing it nicely and the whole reaction
- happens all the way.
- And we'll talk more about that in the future.
- And in that situation where we have more than 28.62 grams of
- aluminium, then this molecule will be the limiting reagent.
- Because we had more than enough of this, so this is
- what's going to limit the amount of this
- process from happening.
- If we have less than 28.62 grams of, I'll start saying
- aluminum, then the aluminum will be the limiting reagent,
- because then we wouldn't be able to use all the 85 grams
- of our iron molecule, or our iron three oxide molecule.
- Anyway, I don't want to confuse you in the end with
- that limiting reagents.
- In the next video, we'll do a whole problem devoted to
- limiting reagents.
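To recap the arithmetic from the transcript, here is the whole calculation in a few lines of Python, using the same rounded atomic masses the video uses (Fe = 56, O = 16, Al = 27):

```python
# Fe2O3 + 2 Al -> Al2O3 + 2 Fe   (the balanced equation from the video)
FE, O, AL = 56.0, 16.0, 27.0     # rounded atomic masses, g/mol

molar_mass_fe2o3 = 2 * FE + 3 * O        # 160 g/mol
moles_fe2o3 = 85.0 / molar_mass_fe2o3    # 0.53125 mol
moles_al = 2 * moles_fe2o3               # 1:2 mole ratio from the coefficients
grams_al = moles_al * AL

print(f"moles Fe2O3: {moles_fe2o3:.5f}")   # 0.53125
print(f"moles Al:    {moles_al:.5f}")      # 1.06250
print(f"grams Al:    {grams_al:.2f}")      # 28.69; the video's 28.62 comes
                                           # from rounding 0.53125 to 0.53 early
```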
| <urn:uuid:307dd8b3-1c10-45af-a193-2178ac70bd60> | 4 | 2,293 | Truncated | Science & Tech. | 73.243284 |
We have used biomass energy or bioenergy - the energy from organic matter - for thousands of years, ever since people started burning wood to cook food or to keep warm.
And today, wood is still our largest biomass energy resource. But many other sources of biomass can now be used, including plants, residues from agriculture or forestry, and the organic component of municipal and industrial wastes. Even the fumes from landfills can be used as a biomass energy source.
The use of biomass energy has the potential to greatly reduce our greenhouse gas emissions. Biomass generates about the same amount of carbon dioxide as fossil fuels, but every time a new plant grows, carbon dioxide is actually removed from the atmosphere. The net emission of carbon dioxide will be zero as long as plants continue to be replenished for biomass energy purposes. These energy crops, such as fast-growing trees and grasses, are called biomass feedstocks. The use of biomass feedstocks can also help increase profits for the agricultural industry.
Biomass can be put to work in three main ways:
- Converting biomass into liquid fuels for transportation.
- Burning biomass directly, or converting it into a gaseous fuel or oil, to generate electricity.
- Converting biomass into chemicals for making products that typically are made from petroleum. | <urn:uuid:5ea4ae69-730b-4c64-bc2a-5ea12489003c> | 3.75 | 249 | Knowledge Article | Science & Tech. | 34.065294 |
This is an exciting episode! It is one that takes a positive look at the possible future of energy creation. Our blogger was a lot of help on this one in explaining some of the interesting facts behind the science of this episodes topic. Hopefully the E-CAT will not be under the topic of unexplained science for long. Enjoy!
Hey everyone! Just wanted to let you all know that our anonymous scientist blogger is going to be a bit delayed in writing their next article. They will still be blogging for us, but they are currently busy traveling. You can expect to read some interesting articles in the not-too-distant future though. Also, feel free to post your comments on the currently posted articles. They'd love to see your response and thoughts. Thanks!
In the early 1900s, in a patent office in Switzerland, one of the clerks was thinking deeply about the speed of light. How could the speed of light be constant, he wondered, if you could run alongside a light beam (like being passed on the freeway by someone going slightly faster than you)? Wouldn’t the light be going slower from your perspective? Or what if you ran head-on into a light beam, in which case the light would appear to be going faster? Doesn’t the speed of light depend on how fast you are going, and in what direction? Or is it always the same no matter what…. and our concepts of distance and time are inaccurate? Though it may not seem related at first, these ponderings eventually led the humble clerk down a long, winding path to a revelation that completely changed the way scientists think about gravity.
That patent clerk was Albert Einstein – a name that has practically become synonymous with the word ‘genius’. But many don’t realize that in the beginning, physicists considered his theories outlandish…. an interesting curiosity perhaps, but nothing that was actually relevant to the real world. But it turns out Einstein was exactly right…. and by daring to question the prevailing worldview, he stumbled upon a more useful (and accurate) way of thinking about a force that scientists thought they already understood. His story serves as a reminder to remain open to new ideas, even in arenas we believe we’ve conquered.
A decade later in Florida, Latvian inventor Edward Leedskalnin was in the process of building his masterpiece – a garden of massive coral stones that he called “Rock Gate Park”. There’s no denying what he accomplished – to this day, anyone who wants to can visit the property and see his creation for themselves. The question, of course, is how did a 5 foot tall, 100 pound man carve, transport, and flawlessly position dozens of stones weighing many tons without the use of modern machinery?
Some propose that Leedskalnin was an unappreciated Einstein, in having discovered a way to somehow manipulate magnetism to overcome gravity and levitate the blocks into place. Using magnetism to levitate large objects is certainly no piece of science fiction…. this is exactly how magnetic trains work! The train literally levitates above the tracks due to magnetic force, enabling it to glide effortlessly at high speed towards its destination. But whatever innovation Leedskalnin took advantage of, he could not have used the earth’s magnetic field to move the coral stones into place for two reasons. First, the coral stones are not magnetic, and therefore wouldn’t have been susceptible to the influence of magnetic fields the way a magnet would. Second, the earth’s magnetic field is too weak. Using the equations of electromagnetism, we can calculate how much lifting power could be provided to a magnet near the earth’s surface. It turns out that the natural magnetism of our planet would be able to lift just 0.015 nanograms…. a magnet of only 1 micron in diameter (no larger than a particle of dust)! So even if Leedskalnin had found a way to endow the stones with magnetic properties, the earth’s magnetic field simply doesn’t provide enough lifting power for the job.
Another theory is that Leedskalnin simply used the known principles of leverage to his advantage, with techniques such as block and tackle. This is a technique that uses a system of multiple pulleys to reduce the force required to lift a heavy object. The pulleys provide some of the force for you, but in return, you must pull the rope a greater distance in order to lift the object. For example, a block and tackle with 2 windings on its pulley allows you to lift a 100 pound object with only 50 pounds of force – but you must also pull the rope twice as far as you otherwise would. So could a block and tackle with enough windings allow one to lift the stones of Coral Castle? It turns out that the more windings you have, the greater the friction you need to overcome, and for this reason pulleys with 4 windings are generally the most efficient used for commercial purposes. This would have enabled Leedskalnin to reduce the lifting force of his 27 ton stone down to around 7 tons (the arithmetic is sketched below)….. but this is still a rather daunting task for a 100 pound man!
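To spell out the lever math: an ideal block and tackle with n supporting rope segments divides the load by n at the cost of hauling n times as much rope (real friction eats into this, as noted above). A quick sketch, where the stone's weight is the only given and the lift height is an assumed figure:

```python
STONE_LBS = 27 * 2000      # the 27-ton stone, in pounds
LIFT_FT = 5                # assumed lift height, feet

for n in (2, 4, 8):        # number of supporting rope segments ("windings")
    pull = STONE_LBS / n   # ideal pulling force, ignoring friction
    rope = LIFT_FT * n     # rope that must be hauled for the lift
    print(f"{n} segments: pull {pull:,.0f} lb, haul {rope} ft of rope")
```

The 4-segment line works out to 13,500 pounds, matching the "around 7 tons" figure above.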
So where does the missing scientific principle lie that helped to build Coral Castle? Did Edward Leedskalnin make a discovery that could have revolutionized our understanding of electromagnetism? Would we be talking about Leedskalnin alongside Newton and Einstein had he not been so secretive about his methods? We may never know for certain. Or, did he just find a way to make better use of the principles of leverage and simple machines than anyone has yet imagined? Regardless of what theories you prefer to entertain, only one thing seems certain: Nearly a century ago, a reclusive Latvian immigrant understood something that the rest of us don’t.
Hey everyone! We are in the process of moving our recording studio so it may be another week before we can get another episode up. Sorry for the wait! Hopefully once things get set up, we’ll have plenty of great content rolling out for you! | <urn:uuid:8ad5ad85-ce88-4cf5-9f0a-c2bf5535446d> | 2.90625 | 1,251 | Personal Blog | Science & Tech. | 53.753349 |
CAMBRIDGE, England, Jan. 21 (UPI) -- DNA, normally structured as a double helix, has been seen in a "quadruple helix" form in human cells and could be related to cancer, British researchers say.
The double helix has been considered the normal form of DNA for 60 years, since researchers James Watson and Francis Crick described the way two long chemical chains wound up around each other to encode the information that cells need to build and maintain our bodies.
A four-stranded version scientists have been able to produce in test tubes for a number of years has now been found in human cells for the first time, the BBC reported Monday.
Researchers at Cambridge University found the four-stranded DNA arose most frequently during a phase when a cell copies its DNA just prior to dividing.
That could be significant in the study of cancers, they said, which are usually driven by genes that mutate to increase DNA copying.
"The existence of these structures may be loaded when the cell has a certain genotype or a certain dysfunctional state," Shankar Balasubramanian from Cambridge's department of chemistry said.
Control of the structures could provide novel ways to fight cancer, he said.
"We need to prove that; but if that is the case, targeting them with synthetic molecules could be an interesting way of selectively targeting those cells that have this dysfunction."
| <urn:uuid:881285f1-4775-4bda-b613-b6484194a1ab> | 3.15625 | 432 | Content Listing | Science & Tech. | 40.582343 |
Flags, counters, and literal value persistent variables can be used inside prelude expressions like any other variable. If the persistent variable does not exist before it is used, the default value is 0. For example, a declaration could add 3 to the value stored in the entity counter ent:fizz, storing the result back in ent:fizz.
KRL includes special built-in functions for accessing individual places in a trail:

- history: takes an expression and a persistent variable as arguments. If the expression evaluates to n, then the history function will return the nth place on the trail. The most recent place is at history location 0. If n is larger than the length of the trail, the history function returns the empty string.
- current: takes a persistent variable as its argument. The current function performs the same operation as the history function with a location of 0; that is, it returns the most recent place on the trail. | <urn:uuid:05c67ec4-41b6-41f1-a043-2e1602ed6da5> | 3.09375 | 192 | Documentation | Software Dev. | 50.898147 |
Epoch B1950.0 Equinox B1950.0
|Right ascension||19h 13m 12.4655s|
|Declination||16° 01′ 08.189″|
PSR B1913+16 (also known as PSR J1915+1606 and PSR 1913+16) is a pulsar (a radiating neutron star) which together with another neutron star is in orbit around a common center of mass, thus forming a binary star system. In 1974 it was discovered by Russell Alan Hulse and Joseph Hooton Taylor, Jr., of the University of Massachusetts Amherst. Their discovery of the system and analysis of it earned them the 1993 Nobel Prize in Physics "for the discovery of a new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation."
The system is also called the Hulse–Taylor binary pulsar after its discoverers.
Using the Arecibo 305m antenna, Hulse and Taylor detected pulsed radio emissions and thus identified the source as a pulsar, a rapidly rotating, highly magnetized neutron star. The neutron star rotates on its axis 17 times per second; thus the pulse period is 59 milliseconds.
After timing the radio pulses for some time, Hulse and Taylor noticed that there was a systematic variation in the arrival time of the pulses. Sometimes, the pulses were received a little sooner than expected; sometimes, later than expected. These variations changed in a smooth and repetitive manner, with a period of 7.75 hours. They realized that such behavior is predicted if the pulsar were in a binary orbit with another star.
Star system
The pulsar and its neutron star companion both follow elliptical orbits around their common center of mass. The period of the orbital motion is 7.75 hours, and the two neutron stars are believed to be nearly equal in mass, about 1.4 solar masses. Radio emissions have been detected from only one of the two neutron stars.
The minimum separation at periastron is about 1.1 solar radii; the maximum separation at apastron is 4.8 solar radii. In the case of PSR B1913+16, the orbit is inclined at about 45 degrees with respect to the plane of the sky. The orientation of periastron changes by about 4.2 degrees per year in direction of the orbital motion (relativistic precession of periastron). In January 1975, it was oriented so that periastron occurred perpendicular to the line of sight from Earth.
The orbit has decayed since the binary system was initially discovered, in precise agreement with the loss of energy due to gravitational waves predicted by Einstein's general theory of relativity. The ratio of the observed to the predicted rate of orbital decay is 0.997±0.002. The total power of the gravitational radiation (waves) emitted by this system presently is calculated to be 7.35 × 10²⁴ watts. For comparison, this is 1.9% of the power radiated in light by our own Sun. (Another comparison is that our own Solar System radiates only about 5000 watts in gravitational waves, due to the much larger distances and orbit times, particularly between the Sun and Jupiter).
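For reference, the quoted power follows from the standard quadrupole-radiation result for an eccentric binary (Peters 1964), with component masses m₁ and m₂, semimajor axis a, and eccentricity e:

```latex
\[
  P_{\mathrm{GW}} = \frac{32}{5}\frac{G^{4}}{c^{5}}
  \frac{m_1^{2} m_2^{2} (m_1+m_2)}{a^{5}}
  \left(1-e^{2}\right)^{-7/2}
  \left(1 + \frac{73}{24}e^{2} + \frac{37}{96}e^{4}\right)
\]
```

The large eccentricity of this system (e ≈ 0.617) boosts the emission by roughly an order of magnitude over a circular orbit of the same size.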
With this comparatively large energy loss due to gravitational radiation, the rate of decrease of orbital period is 76.5 microseconds per year, the rate of decrease of semimajor axis is 3.5 meters per year, and the calculated lifetime to final inspiral is 300,000,000 years.
- Mass of companion: 1.387 MSun
- Orbital period: 7.751939106 hr
- Eccentricity: 0.617131
- Semimajor axis: 1,950,100 km
- Periastron separation: 746,600 km
- Apastron separation: 3,153,600 km
- Orbital velocity of stars at periastron (relative to center of mass): 450 km/s
- Orbital velocity of stars at apastron (relative to center of mass): 110 km/s
In 2004, Taylor and Joel M. Weisberg published a new analysis of the experimental data to date, concluding that the 0.2% disparity between the data and the predicted results is due to poorly known galactic constants, and that tighter bounds will be difficult to attain with current knowledge of these figures. They also mapped the pulsar's two-dimensional beam structure using the fact that the system's precession leads to varying pulse shapes. They found that the beam shape is latitudinally elongated, and pinched longitudinally near the centre, leading to an overall figure-of-eight shape.
In popular culture
Science fiction writer Arthur C. Clarke offhandedly speculated, in his television series Mysterious World, that this pulsar was the Star of Bethlehem. He ended the 12th episode with the line, "How romantic if even now we can hear the dying voice of the star which heralded the Christian era."
See also
- wikisky.org SKY-MAP for 19:15:28 / +16:06:27 (J2000 position)
- Weisberg, J. M.; Taylor, J. H.; Fowler, L. A. (October, 1981). "Gravitational waves from an orbiting pulsar". Scientific American 245: 74–82. Bibcode:1981SciAm.245...74W. doi:10.1038/scientificamerican1081-74.
- Weisberg, J.M.; Taylor, J.H. (July 2005). "The Relativistic Binary Pulsar B1913+16: Thirty Years of Observations and Analysis". In F.A. Rasio and I.H. Stairs (eds.). ASP Conference Series 328. Aspen, Colorado, USA: Astronomical Society of the Pacific. p. 25. arXiv:astro-ph/0407149. Bibcode:2005ASPC..328...25W.
- "The Nobel Prize in Physics 1993". Nobel Foundation. Retrieved 2011-03-12. "for the discovery of a new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation"
- Taylor, J. H.; Weisberg, J. M. (1982). "A new test of general relativity - Gravitational radiation and the binary pulsar PSR 1913+16". Astrophysical Journal 253: 908–920. Bibcode:1982ApJ...253..908T. doi:10.1086/159690.
- Taylor, J. H.; Weisberg, J. M. (1989). "Further experimental tests of relativistic gravity using the binary pulsar PSR 1913 + 16". Astrophysical Journal 345: 434–450. Bibcode:1989ApJ...345..434T. doi:10.1086/167917.
- Weisberg, J. M.; Nice, D. J.; Taylor, J. H. (2010). "Timing Measurements of the Relativistic Binary Pulsar PSR B1913+16". Astrophysical Journal 722: 1030–1034. arXiv:1011.0718v1. Bibcode:2010ApJ...722.1030W. doi:10.1088/0004-637X/722/2/1030. | <urn:uuid:cf7362fc-ed82-4c1e-b2d3-03daca95e34a> | 2.90625 | 1,561 | Knowledge Article | Science & Tech. | 69.26527 |
Perhaps one of the hardest problems in knot theory is determining whether two knots are isotopic. One method of solving this problem is the polynomial invariant.
The ideal polynomial invariant would yield a unique polynomial for each isotopic class that is consistent through each projection of the class. Most of the polynomial invariants described on this site are based on skein relations. The Alexander Polynomial is the exception; it was originally designed to use matrices. However, another mathematician, John H. Conway, found a way to calculate the Alexander Polynomial through a skein relation.
The earliest polynomial invariant was the Alexander Polynomial, named after its creator, James Waddell Alexander II. The Alexander Polynomial was based on the concept that two knots can be distinguished as different through linear color tests. Unfortunately, it does not distinguish between a knot and its mirror image, and so it cannot detect chirality. The Alexander Polynomial, published in 1928, remained the only polynomial invariant for over five decades.
In 1984, Vaughan F. Jones created a new polynomial invariant, named the Jones Polynomial. The amazing part, however, was that he "discovered" the polynomial when he noticed that some equations in knot theory looked similar to those found in operator algebras, which are associated with quantum mechanics. The Jones Polynomial was also the first polynomial invariant to use skein relations.
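A skein relation ties together the polynomials of three links that are identical except at one crossing: a positive crossing L₊, a negative crossing L₋, and the crossing smoothed away, L₀. For the Jones Polynomial the relation is standardly written as:

```latex
\[
  t^{-1}\,V(L_{+}) - t\,V(L_{-})
  = \left(t^{1/2} - t^{-1/2}\right) V(L_{0})
\]
```

Together with the normalization V(unknot) = 1, this relation determines V for every knot and link.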
In 1987, L. H. Kauffman discovered his own polynomial derived from the Bracket Polynomial. The Bracket Polynomial recognizes the regularly isotopic projections of an isotopic class as the members of the same isotopic class, but the Bracket Polynomial is not an isotopy invariant because it is not invariant under the first Reidemeister Move. Kauffman, however, found an expression to multiply by the Bracket Polynomial to obtain an isotopically invariant polynomial, and another expression to multiply by the Kauffman Polynomial to obtain the Jones Polynomial.
The publication of the Jones Polynomial excited the mathematical community to the point that new polynomial invariants were being created at a stupendous rate. One of the objectives of the time was to find a polynomial that generalized the Alexander and Jones polynomial. The Oriented Polynomial, or the HOMFLY Polynomial was a successful solution, published by several groups of mathematicians simultaneously. The paper was published under the names of Hoste, Ocneanu, Millett, Freyd, Lickorish, and Yetter. The HOMFLY Polynomial uses a skein relation, like the Jones Polynomial, but the new polynomial uses two variables, unlike the Alexander and Jones Polynomials.
The polynomial invariant, probably the most important aspect of knot classification, still has many questions surrounding it. What kind of information is being expressed in the polynomials discovered so far? Is there a polynomial invariant simple enough to calculate, no matter how many crossings are involved? Is there a polynomial invariant that is a generalization of the HOMFLY Polynomial? Does an invariant that is ideal for both knots and links exist? Only the right application of the right knowledge will reveal the answers.
The currently published polynomial invariants include:
The Alexander Polynomial
The Bracket and Kauffman Polynomials
The Oriented/HOMFLY Polynomial
The Jones Polynomial | <urn:uuid:53a53bbf-4e0f-4515-a6b3-b1c76b0fd00b> | 3.25 | 765 | Knowledge Article | Science & Tech. | 20.608631 |
> An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant.57) If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.
So, my guess is that GCC folks simply put two and two together and got four. In other words, if tun was zero, it would not be pointing to any valid object. Therefore, the fact that the programmer is using it must mean that the programmer _knows_ that it will be pointing to a valid object (i.e. is non-zero). Therefore, the check for it being zero later on is superfluous and can be removed. Ergo, no bug in GCC. | <urn:uuid:bc4557d9-165c-42bc-a314-c7c2bdd84999> | 2.78125 | 166 | Comment Section | Software Dev. | 69.41 |
If u × v = 3i - 4j + 6k, and u · v = 4, find tanθ where θ is the angle between u and v. Give an exact answer.
not sure what to do here, i know that i can divide sin by cos to get tan but i get a vector... help please
Evaluating the dot and cross product is probably the best way to work out angles between vectors because you don't have to normalise anything. Since |u × v| = |u||v| sinθ and u · v = |u||v| cosθ, dividing the magnitudes gives tanθ = |u × v| / (u · v) = √(3² + 4² + 6²) / 4 = √61 / 4. But the tangent function has a period of pi radians, so when you divide to work out the tangent of the angle you are losing some information. Note that the signs of the dot and cross product together tell you which quadrant the angle comes from. So in fact you can work out the angle without any ambiguity at all. Math libraries for programming languages like C/C++ usually have an atan2() function which is just what you want to get the angle properly. | <urn:uuid:73382428-bcd1-4544-99ee-90db35a598ba> | 3.046875 | 187 | Q&A Forum | Science & Tech. | 71.870206 |
Monday, June 14, 2010
Air Pressure: Straw Fountain
Fill a cup part way with water.
Hold a straw vertically in the water – make sure it’s not touching the bottom of the cup. (You could probably tape the straw to the cup if you’re having a hard time holding on to all of the parts.)
Using another piece of straw, blow across the top of the straw that’s in the water.
Water shoots out of the vertical straw.
Normally there’s a whole lot of air molecules stacked up on top of the surface of the water. When you blow across the top of the straw, you’re pushing some of those air molecules out of the way. There are still air molecules pushing on the rest of the water, and with the pressure over the straw reduced, the water is pushed up the straw.
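If you want to put rough numbers on this, Bernoulli's principle gives an estimate: blowing at speed v lowers the pressure over the straw by about 0.5 × ρ_air × v², and the water rises until its weight balances that pressure difference. A quick sketch, where the blowing speed is a guess:

```python
RHO_AIR, RHO_WATER, G = 1.2, 1000.0, 9.8  # kg/m^3, kg/m^3, m/s^2

v = 10.0                        # assumed blowing speed, m/s
dp = 0.5 * RHO_AIR * v ** 2     # pressure drop over the straw, Pa
h = dp / (RHO_WATER * G)        # how far the water can rise, m

print(f"pressure drop ~ {dp:.0f} Pa, rise ~ {h * 100:.1f} cm")
```

A rise of only half a centimeter or so is why the tips below help: the water needs to start close to the top of the vertical straw before it can spray.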
If you're having a hard time getting the water to shoot out of the straw, try some of these tips - they helped me!
1 - Aim the straw you're blowing through slightly up (see picture above).
2 - Use shorter pieces of straw.
3 - Fill the cup with more water.
4 - Practice - it takes a few times to get the technique down. | <urn:uuid:019e8317-ee08-4fc6-bc39-9c955559d2cb> | 3.25 | 260 | Tutorial | Science & Tech. | 75.153287 |
Frost (also called Hoarfrost and White Frost), ice crystals produced by the freezing of water vapor on the ground, on objects near the ground, or on windows. Water vapor in the air condenses into a liquid or a solid at a temperature called the dew point. This temperature varies, depending on the amount of vapor in the air. When both the dew point and the temperature of the air are above 32° F (0° C), dew is formed; when both are below 32° F (0° C), frost occurs.
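The dew point can be estimated from air temperature and relative humidity. One common empirical fit is the Magnus formula, sketched below in Python; the constants are one published parameter set among several:

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) via the Magnus formula."""
    b, c = 17.62, 243.12  # empirical constants (one common fit)
    gamma = math.log(rel_humidity_pct / 100.0) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)

print(f"{dew_point_c(4.0, 85.0):.1f} C")   # ~1.7 C: above freezing, so dew
print(f"{dew_point_c(-2.0, 85.0):.1f} C")  # ~-4.2 C: below freezing, so frost
```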
Frost is most likely to occur on clear, windless nights when the layer of air next to the ground contains considerable moisture. A frost that forms at temperatures of 27° F (−3° C) or less is termed a killing frost because it kills plants by freezing their fluids. Black frost is a term for intense cold that blackens and kills vegetation; no actual frost is involved.
Frost is a hazard to agriculture, particularly in valleys, where cold air flowing down from the hills often causes heavy frost damage. Fruit growers protect their orchards by burning open fires, or by placing heaters or smudge pots at intervals to raise the temperature of the air near the trees. | <urn:uuid:ad2f0b62-6b12-4d4d-bbc3-f245cdf342b1> | 3.90625 | 249 | Knowledge Article | Science & Tech. | 65.708812 |
Places like Antarctica have lots of ice, but people can also find it in home appliances (machines in the home) like the refrigerator or freezer. If people put water in a freezer and leave it for a while, the water gets very cold and will freeze solid, creating ice. People can put water into a copper (or metal) container if they want ice to freeze faster. Copper is a very good conductor of heat--it can freeze water faster than a regular (plastic) ice tray would be able to. Surprisingly, an open tray of hot water can freeze faster than the same amount of cold water. This happens because enough of the hot water can evaporate before cooling, reducing the amount of water to be frozen.
Unlike other liquids, water expands as it freezes to become ice; so ice floats on water because ice has less density than water. This is very unusual - just about every other liquid gets more dense as it cools; water ice, however, is an important exception. Liquid water expands by about 9% as it becomes ice - it takes up more space. If water in pipes freezes it can burst the pipe. Water in glass bottles can explode in the freezer if people leave it there long enough to freeze. Water freezing in rock crevices can expand enough to split hard rocks apart; this is an important geological weathering process that can wear down mountains and make rock into soil.
Salt water needs a lower temperature to freeze than pure water. The resulting ice contains much less salt than the salt water it came from. This salty ice is not as strong as frozen pure water. Similarly spreading salt on ice melts it: the salt progressively eats into the ice, forming salty water which is not cold enough to be frozen at the same temperature.
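The size of the effect can be estimated with the freezing-point-depression relation ΔT = i·Kf·m, where Kf ≈ 1.86 °C·kg/mol for water, i ≈ 2 for fully dissociated table salt, and m is the molality of the solution. A rough sketch for seawater-strength salt water:

```python
KF_WATER = 1.86   # cryoscopic constant of water, C*kg/mol
I_NACL = 2        # van 't Hoff factor for NaCl (idealized, fully dissociated)

molality = 0.6    # ~35 g salt per kg of water, roughly seawater strength
dT = I_NACL * KF_WATER * molality
print(f"freezing point lowered by ~{dT:.1f} C, to about {-dT:.1f} C")
```

Real seawater freezes near −1.9 °C; the idealized estimate of about −2.2 °C overshoots slightly because the ions do not behave ideally at this concentration.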
Because ice floats, even large bodies of water that freeze, like some oceans, only form ice on the surface. Most lakes never freeze to the bottom. Even the coldest oceans, like the Arctic, only freeze on the top, leaving liquid ocean circulating below. Because of this the Earth's oceans are able to redistribute heat and the climate of the earth has less extremes of heat and cold than it would otherwise. If ice were to sink instead of float, the oceans would fill up with ice from the bottom, would remain solid and only some of the top would thaw. A solid ocean would not circulate heat. But because ice floats on the surface the water beneath can continue to circulate and the ice on the surface stays exposed and readily melts when the temperature rises.
The Earth's climate is always changing. A long period when it is very cold is called an ice age. The most recent ice age finished only about ten thousand years ago. During ice ages very large areas of the Earth are covered in ice, snow and glaciers. The causes of ice ages are complex and not fully understood. Global warming is currently affecting the Earth's ice, and its causes are also very complex.
When materials are cooled their molecules vibrate less and pack closer together. When most materials reach a temperature called the freezing point, the molecules form a crystalline solid - although some materials (like glass and tar) do not crystallise at all but form extremely stiff fluids, which seem to be solid. Only helium will not freeze at ordinary pressure; all other substances will freeze if cold enough, but fluids like cooking oil, anti-freeze, petrol (gasoline), nitrogen, etc. freeze at temperatures that most people will rarely, if ever, experience.
Dry ice
There is also 'dry ice': frozen carbon dioxide. Dry ice exposed to normal air sublimes, giving off carbon dioxide gas that is odorless and colorless. The gas is so cold that when it mixes with air it cools the water vapour in the air into fog, which looks like thick white smoke. It is often used in the theatre to create the appearance of fog or smoke.
Statistics and Probability Dictionary
Select a term from the dropdown text box. The online statistics glossary will display a definition, plus links to other related web pages.
The absolute value of a number is its distance from zero on the number line. For example, -7 is 7 units away from zero, so its absolute value is 7. And 7 is also 7 units away from zero, so its absolute value is also 7.
Thus, the absolute value of a number refers to the magnitude of the number, without regard to its sign. The absolute value of -1 and 1 is 1, the absolute value of -2 and 2 is 2, the absolute value of -3 and 3 is 3, and so on.
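In code, this is exactly what a built-in absolute-value function returns. A minimal illustration in Python (not part of the glossary entry):

```python
# abs() returns the distance of a number from zero, ignoring its sign.
for x in (-7, 7, -3, 3, 0):
    print(x, "->", abs(x))   # -7 -> 7, 7 -> 7, -3 -> 3, 3 -> 3, 0 -> 0
```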
Two Tucson solar scientists with the National Solar Observatory at Kitt Peak have been keeping track of sunspots. The steady decline in sunspots may foreshadow a period of strong global cooling. The last time sunspots behaved this way was from 1645 to 1715, a period known as the Maunder Minimum, which coincided with the "little ice age." Following is the press release from PhysOrg.com:
(PhysOrg.com) — Sunspot formation is triggered by a magnetic field, which scientists say is steadily declining. They predict that by 2016 there may be no remaining sunspots, and the sun may stay spotless for several decades. The last time the sunspots disappeared altogether was in the 17th and 18th century, and coincided with a lengthy cool period on the planet known as the Little Ice Age.
Sunspots are regions of electrically charged, superheated gas (plasma) on the surface of the sun, formed when upwellings of the magnetic field trap the ionized plasma. The magnetic field prevents the gas from releasing the heat and sinking back below the sun’s surface. These areas are somewhat cooler than the surrounding sun surface and so appear to us as dark spots.
Sunspots have been observed at least since the early 17th century, and they are known to follow an 11 year cycle from solar maximum to solar minimum. The solar minimum usually lasts around 16 months, but the current minimum has already lasted 26 months, which is the longest minimum in a hundred years.
Since 1990, Matthew Penn and William Livingston, solar astronomers with the National Solar Observatory (NSO) in Tucson, Arizona, have been using a measurement known as Zeeman splitting to study the magnetic strength of sunspots. The Zeeman splitting is the distance between a pair of infrared spectral lines in a spectrograph taken of the light emitted by iron atoms in the atmosphere of the sun. The wider the distance, the greater is the intensity of the magnetic field.
Penn and Livingston examined 1500 sunspots and found that the average strength of the magnetic field of the sunspots has dropped from around 2700 gauss to 2000 gauss. (In comparison, the Earth’s magnetic field is below one gauss.) The reasons for the decline are unknown, but Livingston said that if the strength continues to decrease at the same rate it will drop to 1500 gauss by 2016, and below this strength the formation of sunspots appears to be impossible.
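The extrapolation described here is a simple linear fit. The sketch below is illustrative only: the (year, gauss) pairs are made-up placeholders, not Penn and Livingston's measurements, so the printed year will not match their 2016 figure.

```python
# Least-squares line through hypothetical (year, gauss) measurements,
# then solve for the year the fitted field strength reaches 1500 gauss.
years = [1995.0, 2000.0, 2005.0, 2010.0]   # placeholder data, for illustration only
gauss = [2700.0, 2450.0, 2250.0, 2000.0]

n = len(years)
mx, my = sum(years) / n, sum(gauss) / n
num = sum((x - mx) * (y - my) for x, y in zip(years, gauss))
den = sum((x - mx) ** 2 for x in years)
slope = num / den
intercept = my - slope * mx

# Year at which the line crosses 1500 gauss, below which sunspot
# formation appears to be impossible.
print((1500 - intercept) / slope)   # approx. 2021 for these placeholder numbers
```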
During the period from 1645 to 1715, a time known as the Maunder Minimum, there were almost no sunspots. This period coincided with the Little Ice Age, which produced lower than average temperatures in Europe. Livingston said their results should be treated with caution as their techniques are relatively new and it is not yet known if the decline in magnetic field strength will continue, and that “only the passage of time will tell whether the solar cycle will pick up.”
David Hathaway, a solar physicist with the Marshall Space Flight Center in Huntsville, Alabama, also cautioned the calculations do not take into account that many small sunspots with relatively weak magnetic fields appeared during the last solar maximum, and if these are not included in the calculations the average magnetic field strength would seem higher than it actually was.
Penn and Livingston’s paper has been submitted to the online colloquium, International Astronomical Union Symposium No. 273.
Maybe, instead of trying to reduce carbon dioxide emissions, we should follow my modest proposal to triple our carbon footprints. | <urn:uuid:264d74fc-62c2-44c0-aac1-26ab9d8e726d> | 3.5625 | 730 | Personal Blog | Science & Tech. | 44.602692 |
Science Fair Project Encyclopedia
A Brownian tree, whose name is derived from Robert Brown via Brownian motion, is a form of computer art that was briefly popular in the 1990s, when home computers started to have sufficient power to simulate Brownian motion. Brownian trees are mathematical models of dendritic structures associated with the physical process known as diffusion-limited aggregation.
A Brownian tree is built with these steps: first, a "seed" is placed somewhere on the screen. Then, a particle is placed in a random position of the screen, and moved randomly until it bumps against the seed. The particle is left there, and another particle is placed in a random position and moved, and so on.
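The algorithm is easy to sketch in a few lines. Here is a minimal Python version; the grid size, particle count, and spawn rule are illustrative choices, not part of any canonical implementation:

```python
import random

N = 61                                    # the "screen" is an N x N grid
stuck = {(N // 2, N // 2)}                # the seed, placed at the centre

def touches_cluster(x, y):
    # True if any of the four neighbouring cells already belongs to the tree.
    return any(p in stuck for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))

for _ in range(200):                      # release 200 particles
    x, y = random.randint(0, N - 1), random.randint(0, N - 1)
    while not touches_cluster(x, y):
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x = min(max(x + dx, 0), N - 1)    # random walk, clipped to the screen
        y = min(max(y + dy, 0), N - 1)
    stuck.add((x, y))                     # particle bumped the cluster: leave it

print(len(stuck), "cells in the Brownian tree")
```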
The resulting tree can have many different shapes, depending principally on three factors:
- the seed position
- the initial particle position (anywhere on the screen, from a circle surrounding the seed, from the top of the screen, etc.)
- the moving algorithm (usually random, but for example a particle can be deleted if it goes too far from the seed, etc.)
Particle color can change between iterations, giving interesting effects.
At the time of their popularity (helped by a Scientific American article in the Amateur Scientist section), a common computer took hours, and even days, to generate a small tree. Today's (2003) computers can generate trees with tens of thousands of particles in a few minutes.
Matter (or mass) is the physical stuff of the universe, while energy is what causes matter to move and change. There are different forms of matter (for example, solid, liquid and gas) as well as different forms of energy (such as electrical, chemical and nuclear energy). For centuries, scientists thought that matter could not be created or destroyed--it could only change form. The same idea seemed to apply to energy.
Rethinking the Laws of Nature
Einstein's work on the Special Theory of Relativity prompted him to rethink the fundamental laws of physics. He realized that one of the long-held views of nature--that matter could not be created or destroyed--was wrong. Einstein showed instead that matter can be destroyed and converted to energy. Conversely, energy can be converted to mass.
Einstein's equation E = mc² demonstrates the unexpected finding that energy and mass are interrelated: mass is a form of energy, and energy is a form of mass. The equation also helps explain the energy source for a variety of physical phenomena, from stars to the atomic bomb--although initially Einstein did not anticipate any practical applications for his formula.
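A quick back-of-the-envelope calculation shows why the conversion factor matters. The numbers below are standard physical constants, not figures from this article:

```python
# E = m * c**2 for a single gram of matter.
c = 2.998e8        # speed of light, m/s
m = 0.001          # mass, kg (one gram)
E = m * c**2
print(E)           # about 9e13 joules, roughly 21 kilotons of TNT
```

The factor c squared is enormous, which is why tiny amounts of mass can release the huge energies seen in stars and atomic bombs.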
By Dr. Mikko Syrjäsuo, Alberta Ingenuity Fellow, University of Calgary
It all begins with our nearest star, the Sun, whose surface is so hot that it gives off a continuous stream of solar wind. The wind from the Sun carries particles with it to the rest of the solar system and beyond. This not-so-gentle breeze is continuous but varying: the average speed is around 400 km per second—by comparison a car's highway speed is about 30 metres per second. However, in gusts or solar storms the wind speed can reach 1000 km per second and it will carry more particles with it.
Although the solar wind howls around and past the Earth, it does not directly reach the surface because Earth's magnetic field acts as a shield. This invisible windshield is called the magnetosphere.
The flow of the solar wind around the Earth's magnetosphere is the ultimate power source for the aurora.
It is like any electrical generator, where a conductor moves in a magnetic field and the motion creates an electric current. The solar wind is ionized gas that can conduct electric currents, and it acts as a moving conductor in the Earth's magnetic field. So together they form a giant generator: this electric dynamo creates currents of millions of amperes in the near-Earth space and the upper atmosphere. Electric currents, or electrons in motion, eventually cause proton and other ion motion as well. A thousand billion watts of power can create quite a lot of action anywhere!
The formation of aurora borealis works a lot like a television. When you plug a TV into the power outlet, the back end of the cathode ray tube starts boiling off electrons. The electrons are accelerated and then guided by electric and magnetic fields so that the electrons hit appropriate locations on the TV screen. The screen is coated with phosphor which gives off light when hit by electrons, and the light forms a picture.
In the cosmic TV, the power is obtained from the solar wind and used to accelerate electrons and protons inside the magnetosphere. The particles are then guided by the Earth's magnetic field until they precipitate into the upper atmosphere and produce the northern lights. Auroral scientists frequently point out that the TV analogy goes even further: the acceleration voltage both in TVs and auroras is about 20,000 volts. After that, the scientists hesitate: the actual mechanism of creating this electric potential in the appropriate place is not really understood. The cosmic show itself is programmed by the variations in the solar wind, of course, but we still do not know enough to explain how the auroral shapes are formed. Nevertheless, we are sure that the show is about the environment of our home planet.
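A rough check of the quoted power (my own illustrative arithmetic, not from the article; the current value is an assumed figure in the "millions of amperes" range):

```python
# Electrical power P = V * I for the auroral "generator".
voltage = 20_000          # volts, the acceleration voltage quoted above
current = 50e6            # amperes, an assumed "millions of amperes" value
print(voltage * current)  # 1e12 watts: a thousand billion watts
```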
Because the solar wind flows continuously, there is always some aurora. We can even observe auroras during the day. The twist is that we need a dark day, such as in the Arctic during mid-winter when the Sun does not rise. The colours in the aurora are due to different collisions. For example, electrons crashing into atmospheric oxygen atoms at different speeds produce either yellowish-green or red. Collisions with nitrogen molecules, on the other hand, produce blue or violet. Brightness of the aurora simply depends on the number of collisions: the more precipitating particles, the brighter the display. All of these collisions occur at altitudes of about 90 kilometres or more.
There still are many unanswered questions that continue to puzzle space physicists. In Canada, we build and use auroral imagers, magnetometers and ionospheric radars, as well as satellite and rocket instruments for studying the aurora. We create mathematical models and simulations to verify whether we're right or wrong. Sometimes, though, it's more fun to put the science aside, lean back, and enjoy the show. | <urn:uuid:c01d9af0-6f54-40d2-a8a4-b297f9333782> | 4.03125 | 778 | Knowledge Article | Science & Tech. | 47.115586 |
In Your Write-up
1. A 20.0 mL sample of 0.050 M [Co(NH3)5(H2O)]3+ is titrated with 0.025 M NaOH. Calculate the concentrations of [Co(NH3)5(H2O)]3+, [Co(NH3)5(OH)]2+, H3O+, and OH- before titration begins. What volume of NaOH will be required to reach the endpoint?
2. In the titration of 40 mL of 0.025 M [Co(NH3)5(H2O)]3+ with 0.100 M NaOH, calculate the volume of NaOH required to reach the endpoint. Calculate the concentrations of [Co(NH3)5(H2O)]3+, [Co(NH3)5(OH)]2+ and OH- at the endpoint.
3. A 0.246 g sample of [Co(NH3)5(H2O)][NO3]3 is dissolved in 75 mL of water and titrated with 0.0962 M NaOH according to the procedure in this experiment. The end point comes at 7.21 mL. The pH when 3.60 mL base had been added was 6.20. Compare the observed end point position with that expected on the basis of the weight of complex used and its molecular formula. From this data, what is the experimental pKa of the cobalt complex?
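A sketch of the stoichiometry behind questions 1 and 3, useful for checking answers (my own working, not part of the manual; the molar mass is my estimate from the formula [Co(NH3)5(H2O)][NO3]3 and standard atomic masses):

```python
import math

# Question 1: endpoint volume for 20.0 mL of 0.050 M complex and 0.025 M NaOH.
moles_acid = 0.0200 * 0.050            # mol of [Co(NH3)5(H2O)]3+
print(moles_acid / 0.025 * 1000)       # 40.0 mL of NaOH at the endpoint

# Question 3: expected endpoint and experimental pKa from the 0.246 g sample.
M = 58.93 + 5 * 17.03 + 18.02 + 3 * 62.00   # g/mol, approx. 348
moles = 0.246 / M
print(moles / 0.0962 * 1000)           # expected endpoint approx. 7.3 mL (7.21 mL observed)

# Henderson-Hasselbalch at 3.60 mL of added base, where pH = 6.20:
base = 0.00360 * 0.0962                # mol of conjugate base formed
ratio = base / (moles - base)          # [A-] / [HA]
print(6.20 - math.log10(ratio))        # experimental pKa, approx. 6.2
```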
Trustees of Dartmouth College, Copyright 1997-1999
The Venus Transit 2004
... Educational Sheet 4
How to estimate the Earth-Sun Distance by means of trigonometry and a model of the Phases of Venus
Lars Petersen, Alan C. Pickwick, Rosa M. Ros, Mogens Winther (EAAE)
Level: Some mathematical ability is required.
- To use the transit of Venus to determine the distance to Venus and to the Sun.
Background
The students need to know:
- Mathematical content
- Definition of the sine function
- Sine Theorem
- Latitude and Longitude
Assumptions
We assume that:
- All measurements are performed close to local noon (the Sun being in the South)
- Both observers are on the same meridian
- The centres of the Sun, Venus and the Earth are coplanar
- The original lab exercise also assumes that the measurements of Venus' orbit are made with declination equal to zero
- These approximations are made because they are reasonable and we want to reach as large an audience as possible.
- A pocket calculator (the exercise may be performed without astronomical equipment).
- Table tennis balls.
- If you have an astronomical telescope, that will be fine - but neither a telescope nor other expensive equipment is needed.
- A lab experiment to understand and visualise the phases of Venus. The students will repeat Galileo's first proof of 1610 that the Sun, and not the Earth, is at the centre of our planetary system.
Introduction
We will use the transit of Venus observations to calculate the Earth - Sun distance. The phenomenon is similar to the transit of Mercury, which was visible from Europe in 2003.
Imagine two observers, one at position 2 (the subsolar point) and the other at position 1, further to the North.
In general, observers' positions on the globe are defined by their longitudes and latitudes. You may find your own latitude from any geographical atlas, or by means of this collection of WWW interactive maps.
In the following pages we consider the situation where both observers are situated at approximately the same longitude. In addition, we restrict our attention to the end of the transit, when the Sun is approximately due South, thus allowing a simple approximation.
The latitude of Observer 1 relative to the equator is marked in yellow. From the drawing it is obvious that Observer 1 sees Venus at a lower angle than does Observer 2. So as we move towards more northerly latitudes we see that the declination of Venus appears to decrease. You have probably been in a similar situation: as you climb a tree, you look down more and more on your neighbours. This is the parallax effect, mentioned previously.
The trigonometry of the transit
This drawing shows in more detail how much the declination of Venus decreases as we move from position 2 to position 1:
Note how the small angle ß (indicated in the figure) occurs in two places. The drawing below shows how we may find ß:
If we apply the Sine Theorem to the shaded triangle above, we find:
sin(ß_N) = (R / r_V) · sin(Ψ_N - D_V)
During the Venus Transit 2004 it would be nice to get data from both the northern and the southern hemisphere.
An observer at position 3 - say, South Africa - will get a parallax relative to position 2
Performing the same calculations as before, we get the following result for the southern red triangle:
sin(ß_S) = (R / r_V) · sin(D_V - Ψ_S)
Applying the approximation (valid if beta is small and measured in radians):
sin(ß) ≈ ß
We get this approximation:
ß_N + ß_S = (R / r_V) · (sin(Ψ_N - D_V) + sin(D_V - Ψ_S))
which gives ß_Venus = ß_N + ß_S as
ß_Venus = (R / r_V) · (sin(Ψ_N - D_V) + sin(D_V - Ψ_S))
or in simplified form:
ß_Venus = K_Venus / r_V, where K_Venus = R · (sin(Ψ_N - D_V) + sin(D_V - Ψ_S))
If we could measure the parallax of Venus ß V relative to the distant fixed stars, this formula could be used to find the distance of Venus. However, we are unable to see the stars during the daytime, so we have to measure the parallax relative to the Sun.
The solar distance is not infinite as the Sun has a parallax also.
This parallax ß_Sun is (same formula as before, with the solar declination D_S and the solar distance r_S):
ß_Sun = K_Sun / r_S, where K_Sun = R · (sin(Ψ_N - D_S) + sin(D_S - Ψ_S))
Observing the parallax of Venus relative to the Sun will only give us the difference Δß between these two parallaxes:
Δß = ß_Venus - ß_Sun = K_Venus / r_V - K_Sun / r_S
We know that the Earth-Venus distance is 0.3 times the Earth-Sun distance, because r_V / r_S = 0.3. This makes the observed parallax Δß equal to
Δß = (K_Venus - 0.3 · K_Sun) / r_V
that is to say, the distance to Venus r_V is
r_V = (K_Venus - 0.3 · K_Sun) / Δß
Test Calculations
In case of bad weather - and to prepare for what to expect - we now take a look at the real numbers involved.
We have chosen two random positions, one in Europe (position 1) at latitude Ψ_N = +55º north and one in South Africa (position 3) at latitude Ψ_S = -30º. Both are on the same meridian in order to simplify the figure and the trigonometrical content of the geometrical problem.
We previously calculated the solar declination, as seen from the centre of the Earth (or position 2), to be D_S = +22º.8897. In addition, we found the geocentric declination of Venus to be D_V = +22º.6788. The radius of the Earth is R = 6378 km.
Entering all the above values into the formulas, we are able to find both K_Venus and K_Sun. Please do this (remember, your pocket calculator should be set to "degrees") and you will get K_Venus = 8482.2 km and K_Sun = 8476.5 km.
Now, let us take a look at the parallax to be expected. Each position was entered into the Institut de Mecanique Celeste webpage.
For Venus we have three declinations: the geocentric D_V, the declination for position 1 in the northern hemisphere D_V|N, and the declination for position 3 in the southern hemisphere D_V|S, as shown in the figure below.
We can see directly from the figure that ß_N = D_V - D_V|N and ß_S = D_V|S - D_V, then
ß_Venus = ß_N + ß_S = D_V - D_V|N + D_V|S - D_V = D_V|S - D_V|N
and by analogy
ß_Sun = D_S|S - D_S|N
In position 1, at +55° latitude north, we get declination values corresponding to the end of the transit, nearly local noon (1100 UT). The solar declination D_S|N is +22º53'18".3999 and Venus' declination D_V|N is +22º40'27".5448.
In South Africa, at -30° latitude south, we get the following values, also at local noon (1100 UT). The solar declination D_S|S is +22º53'29".8560 and Venus' declination D_V|S is +22º41'07".8303.
Using 1'= 60", we see that the North South Venus Parallax ß Venus (difference in declination) becomes:
ß Venus = 22º41'07".8303 - 22º40'27".5448 = 40".2855
And using the solar values, we see the Solar Parallax ß Sun becomes
ß Sun = 22º53'29".8560 - 22º53'18".3999 = 11".456
We recall that the observed Parallax,
Δß = ß Venus - ß Sun = 40".2855 - 11".4561 = 28".8294
that is, approximately half a minute of arc. For comparison, the full angular size of the Sun is approximately 30 minutes.
Converting the parallax to degrees,
Δß = 28".8294 / 3600 = 0º.0080
And then to radians,
Δß = 0º.0080 · π / 180 = 1.39769⋅10^-4 radians
In order to calculate distances we enter all values into:
r_V = (K_Venus - 0.3 · K_Sun) / Δß
We now get:
r_V = (8482.2 - 0.3⋅8476.5) / (1.39769⋅10^-4) ≈ 42.5⋅10^6 km
The official value is, for comparison, 43.217⋅10^6 km.
Again, using the results from our simple previous lab exercise we already know that r_V / r_S = 0.3,
then r_S = 3.33 · r_V = 3.33⋅43⋅10^6 = 143⋅10^6 km
The official value is 149.6⋅10^6 km, so our simplified approach has a deviation of less than 5%.
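The whole numerical chain above can be verified with a short script. This is an illustrative sketch, not part of the original worksheet; all inputs are the values quoted in the text:

```python
import math

R = 6378.0                        # Earth radius, km
psi_N, psi_S = 55.0, -30.0        # observer latitudes, degrees
D_V, D_S = 22.6788, 22.8897       # geocentric declinations of Venus and the Sun

rad = math.radians

# Baselines K = R * (sin(psi_N - D) + sin(D - psi_S)) for Venus and the Sun.
K_V = R * (math.sin(rad(psi_N - D_V)) + math.sin(rad(D_V - psi_S)))
K_S = R * (math.sin(rad(psi_N - D_S)) + math.sin(rad(D_S - psi_S)))
print(K_V, K_S)                   # approx. 8482 km and 8477 km, as above

# Observed parallax difference, from the quoted declination measurements.
beta_venus = (22 + 41/60 + 7.8303/3600) - (22 + 40/60 + 27.5448/3600)   # degrees
beta_sun = (22 + 53/60 + 29.8560/3600) - (22 + 53/60 + 18.3999/3600)    # degrees
dbeta = rad(beta_venus - beta_sun)           # approx. 1.3977e-4 radians

# Distances, using r_V / r_S = 0.3 from the phases lab.
r_V = (K_V - 0.3 * K_S) / dbeta
print(r_V / 1e6, (r_V / 0.3) / 1e6)          # approx. 42.5 and 142 million km
```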
Any high school mathematics textbook tells us about the famous sine relation: a / sin(A) = b / sin(B) = c / sin(C), where A, B and C are the angles opposite the sides a, b and c.
In our astronomical triangle, the length of "a" will be approximately equal to the length of "c".
Seasonal Variations
You have (probably) all noticed that during summer time shadows are short (left) and during winter time shadows are long (right). So, the "height" of the Sun (above the horizon) varies from season to season. This height is referred to as the altitude by astronomers.
Astronomers usually describe this variation by considering the conditions at the Earth's Equator:
Note that the solar rays hit our planet at a certain angle with respect to the Equator. Astronomers call this angle the solar declination.
During the transit of Venus, the Sun will be positioned north of the equator, so we experience summer in the northern hemisphere. Mathematically speaking, at this time the Sun has a positive declination. The solar declination reaches a maximum of +23.44 degrees around 22 June.
During wintertime in the northern hemisphere, the Sun reaches its minimum declination, -23.44 degrees, around 22 December. Halfway between these two dates, that is, around 22 March and 22 September, the Sun's declination is nearly zero.
The exact declination values of the Sun and Venus have been measured by astronomers for centuries - knowing these values has been critical for navigating the oceans by means of sextants. Precise values may be found at this Institut de Mecanique Celeste webpage.
Applying the information from these webpages, we find that at the end of the Venus Transit, on 8 June at 1100 UT = 1300 Central European Summer Time, the Solar declination will be +22º 53' 22".9771 ("Geocentric" coordinates - as seen from the centre of the Earth).
Remember that 1 minute of arc is equal to 1/60 degree, and 1 second of arc is equal to 1/3600 degree. So, converting into ordinary degree units, this becomes
Solar declination = ( 22 + 53/60 + 22.9771/3600 ) degrees = 22º.8897
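The same conversion in a couple of lines (a small helper of my own, not part of the original sheet):

```python
def dms_to_degrees(d, m, s):
    # Degrees, arcminutes and arcseconds to decimal degrees.
    return d + m / 60 + s / 3600

print(dms_to_degrees(22, 53, 22.9771))   # 22.8897, the solar declination above
```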
Make a schematic drawing of the Earth, including the directions to the Sun and Venus. Will you observe the transit happening at the "upper" (northern) half, or the "lower" (southern) half of the Sun?
Venus will become a beautiful "evening star" during spring 2004. During autumn 2004 the planet will be visible in the morning sky as well. And - exactly on 8 June 2004 - Venus will perform a very rare transit across the solar disk.
This occasion will allow our students to estimate the distance to the Sun and to Venus.
However, in order to do so, we have first to investigate the orbit of our neighbouring planet. This chapter describes how this investigation can be done using simple maths and easy methods.
This idea originated with Galileo in 1610. The method shown below was the first scientific proof that the Sun, and not planet Earth, was at the centre of our solar system.
By observing the phases of Venus, Galileo proved the heliocentricity of the solar system.
If you want to repeat Galileo's observations and proof, here is how to perform the experiments.
To observe the phases of Venus with a telescope or a pair of binoculars
Try this yourself: Find a good telescope - an astronomical telescope with a magnification of 25-50 times is well suited for this task. When the Earth is very close to Venus, an ordinary pair of binoculars will even be sufficient. The picture below gives an indication of the phases you may observe.
Please notice the nearly "half-moon" phase at the end of March 2004.
- Ask your students to bring a piece of paper and a pencil. Before the observations start, they have to draw a good circle on their paper.
- Now, observe the phases of Venus. One student from each group has to make a detailed drawing. Tell them they have to watch the planet for at least half a minute in order to enjoy the short moments of calm, steady air.
- Then, let them draw the phases in detail. Let the different groups compare their results, in order to see who is most accurate.
How, by means of a lab activity, can these phases show that the solar system has the Sun at the centre?
In the following, we take an example based on a computer graphic for Mar 31 2004.
Now comes the trick. The Earth, the Sun and our sister planet Venus together form a gigantic triangle. We call the corresponding angles for the Earth, the Sun and Venus: "E", "S", and "V".
The angle E is very easy to estimate. The method below is not 100% accurate, but there is no need to introduce advanced maths for these simple measurements. On 31 Mar 2004, the Sun will be located due South at 12h 04m UT.
Venus however, will be due South at 14h 59m. The time difference:
E = 14h 59m - 12h 04m = 2h 55m = (2 + 55/60) h = 2.92 h ≈ 44º
As you know, the Earth rotates 360 degrees in 24 hours which corresponds to 15 degrees per hour. Therefore the time difference of 2.92 h corresponds to an angle of approximately 44 degrees. Now we are very close to the solution.
(If you have a measuring device like a sextant, the angle E may be measured directly in a more precise way, but do not forget a solar filter.)
Go into your laboratory, and place a torch light, slide projector, or similar powerful light source on the floor. We will now try to make a scaled-down version of our solar system:
- Take a piece of chalk, and draw a line, 1 metre long, going from the light source to where you stand. (Astronomers define this distance between the Sun and the Earth as "1 Astronomical Unit" = 1 AU. Modern measurements have shown that this distance is equal to 149.6 million km.)
- You now only have to draw the angle E, in our case above: 44 degrees.
- Extend the left leg of this 44 degree angle, so it runs out into the laboratory for, say, 2 metres.
- Find a circular object (Galileo took an apple, but an orange is even better). Place it along the left leg. Let the light from the light source fall on the orange. Now observe how the phases vary as the distance increases.
- Vary the distance, until the phases correspond to our 31March 2004 image. Now you have a correct model of the solar system. Your students may measure directly the Earth-Venus distance, and the Sun-Venus distance.
- On 31 March 2004 we have a 90º triangle as below:
Putting the base line equal to "1 astronomical unit" (the Earth-Sun distance), we may estimate the Venus-Sun distance by applying trigonometry (or by drawing):
X = 1 · sin 44º ≈ 0.7
During the 2004 transit, the situation will be as follows:
So the ratio of the distances of Venus and the Sun from the Earth becomes r_V / r_S = (1 - 0.7) / 1 = 0.3.
This result will be applied in our calculations.
- Calculate the angle E for the date of your observations, or for the drawing of 9 Feb 2004 or the drawing of 9 Apr 2004.
Detailed data may be found below - you should make a plot of these values against time in order to estimate the data needed on your observation day. (A short script after the table shows the calculation.)
Sun - Venus Data 2004

| Day/Month | Sun due South (UT) | Venus due South (UT) | Angular Diameter of Venus |
|---|---|---|---|
| Jan 10 | 12h 07m | 14h 34m | 13.3 arcseconds |
| Feb 09 | 12h 14m | 14h 49m | 15.8 arcseconds |
| Mar 10 | 12h 10m | 14h 55m | 19.8 arcseconds |
| Mar 31 | 12h 04m | 14h 59m | 24.1 arcseconds |
| Apr 09 | 12h 01m | 14h 59m | 27.0 arcseconds |
| May 09 | 11h 56m | 14h 26m | 41.8 arcseconds |
| Jun 08 | 11h 59m | 11h 58m | 57.6 arcseconds |
| Jul 08 | 12h 05m | 09h 33m | 40.7 arcseconds |
| Aug 07 | 12h 06m | 08h 55m | 26.3 arcseconds |
| Aug 17 | 12h 04m | 08h 54m | 23.4 arcseconds |
| Sep 06 | 11h 58m | 09h 02m | 19.4 arcseconds |
| Oct 06 | 11h 48m | 09h 19m | 15.5 arcseconds |
| Nov 05 | 11h 44m | 09h 36m | 13.0 arcseconds |
Concerning angular size: 1 arcsecond = 1/3600 degree
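As noted under the exercise above, the angle E can be computed directly from any row of the table. A small illustrative helper (the parsing of the "12h 04m" format is my own convention, not part of the original sheet):

```python
def angle_E(sun_due_south, venus_due_south):
    # The Earth rotates 15 degrees per hour, so a time difference between
    # meridian passages converts directly into an angle on the sky.
    def hours(t):                         # "14h 59m" -> 14.9833... hours
        h, m = t.rstrip("m").split("h")
        return int(h) + int(m) / 60
    return abs(hours(venus_due_south) - hours(sun_due_south)) * 15.0

print(angle_E("12h 04m", "14h 59m"))      # approx. 43.8, i.e. about 44 deg (Mar 31)
print(angle_E("12h 14m", "14h 49m"))      # approx. 38.8 deg (Feb 09)
```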
- In the laboratory: once again move the orange until its phase is comparable with the December drawing of Venus.
- Now measure the Sun-Venus distance, and the Earth-Venus distance.
- Please observe that the Earth-Venus distance has increased dramatically, but the Sun-Venus distance has remained constant.
As you can see, these observational arguments are simple, and they convinced Galileo that Copernicus was right.
Modeling Flood Control Waterways
McKee Applegate River Bridge in Oregon
Model area as meshed
FLOW-3D results, water is colored by velocity
The installation of a new bridge over the Applegate River in Oregon presented some very challenging modeling circumstances for ASCG Inc., a FLOW-3D customer in Colorado. With three drops and an irrigation diversion upstream of an existing bridge (see photo, upper right), the project involved some complicated flow patterns. To resolve the complex domain and minimize the number of computational cells, ASCG made extensive use of FLOW-3D's multi-block model (see figure at center right).
The three-step dam diverts water into an irrigation canal. At flood stage, some of that flow overtops the ditch bank and spills onto the far piers. The combination of the complex flow down the dam, water overtopping a large rock outcrop, and the spill from the irrigation canal creates a dangerous situation at flood stage.
Simulations with FLOW-3D yielded an average water surface profile in good agreement with the conventional 2D analytical tool used by hydraulics engineers, but the stepped diversion dam and 6 foot waves could not be handled by a 2D modeling tool. For example, the FLOW-3D simulation showed very unusual spiral flow patterns hitting the far piers.
Using their results from these simulations, ASCG engineers were well-armed to develop designs which stand a better chance of preventing life-threatening situations during flood events. | <urn:uuid:27ce594e-4460-49bd-8f39-94a5e2735c6c> | 2.953125 | 310 | Knowledge Article | Science & Tech. | 33.72 |
[Haskell-cafe] Practical introduction to monads
stefan at cs.uu.nl
Wed Aug 3 05:06:35 EDT 2005
> I have also one time read an example where you use monads while
> implementing the unification or type inference algorithm, perhaps in
> the original
> monad paper (the essence of functional programming).
I guess you are referring to Mark Jones' _Functional Programming with
Overloading and Higher-order Polymorphism_.
Mark P. Jones. Functional programming with overloading and
higher-order polymorphism. In Johan Jeuring and Erik Meijer, editors,
Advanced Functional Programming, First International Spring School on
Advanced Functional Programming Techniques, Bastad, Sweden, May 24–30,
1995, Tutorial Text, volume 925 of Lecture Notes in Computer Science,
pages 97–136. Springer-Verlag, 1995.
Try This at Home: Sight or Scent
The prefix "nano" means one billionth. At such a small scale, some tools are more reliable and useful than others.
What You Need
- 100 mL (~1/2 cup) water
- Graduated cylinder or measuring spoons
- 1 mL (1/4 teaspoon) colored mouthwash (or colored water with a drop of vanilla or perfume added)
- 10 identical small cups or an ice cube tray
What To Do
Make sure you have an adult with you to supervise this experiment.
Line up the ten cups in a row and label them 0 through 9.
Fill Cup 0 with 10 mL (2 teaspoons) water.
Fill Cup 1 with 9 mL (1 3/4 teaspoons) water and 1 mL (1/4 teaspoon) mouthwash. Stir.
Fill Cup 2 with 9 mL (1 3/4 teaspoons) water and 1 mL (1/4 teaspoon) solution from Cup 1. Stir.
Fill Cup 3 with 9 mL (1 3/4 teaspoons) water and 1 mL (1/4 teaspoon) solution from Cup 2. Stir.
Observe the color and scent of the mixtures in Cups 1, 2, and 3. Are they the same? Make a hypothesis! How do you think the mixture in Cup 9 will look and smell?
Fill the remaining cups in the same manner used to fill Cups 1, 2, and 3.
Compare the colors and scents of Cups 0, 1, and 9. Can you see which cups contain mouthwash? Can you tell the difference by smell?
By adding water to the mixture, you are diluting the mouthwash. Cup 0 contains 100% water for comparison. Cup 1 contains 10% mouthwash, Cup 2 contains 1% mouthwash, Cup 3 contains 0.1% mouthwash, and so on, with each cup ten times more dilute than the one before it. Cup 9 therefore contains mouthwash at one part per billion, a billionth of the original amount. We call one billionth of something "nano", so you can think of Cup 9 as containing a "nano" amount of mouthwash. Similarly, one nanometer is a billionth of a meter. Nanotechnology concerns science on this tiny scale.
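The dilution series is easy to tabulate. A small sketch mirroring the cups above:

```python
# Fraction of mouthwash remaining in each cup of the serial dilution.
for cup in range(1, 10):
    fraction = 10 ** -cup                 # cup 1 = 0.1, cup 2 = 0.01, ...
    print(f"Cup {cup}: {fraction:.0e} = {fraction * 100:g}% mouthwash")
# Cup 9 holds 1e-09 of the original: one part per billion, i.e. "nano".
```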
When working on such a small scale, scientists' hands and eyes are simply too big to handle these tiny particles. They need special tools, such as magnets, light beams, and electron microscopes, to be able to work with nanoparticles. Similarly, our eyes are unable to be sure of which cup contains the mouthwash when comparing Cups 0 and 9. However, our more sensitive noses can sniff out the scent of the mouthwash much more reliably.
In order to create objects on the nanoscale, scientists can use either a "bottom up" or "top down" approach. The "bottom up" approach means building nanostructures one atom at a time. The "top down" approach means cutting a large structure down until it is very small. Our experiment follows that "top down" approach because we took a significant amount of mouthwash and diluted it until it was on the nanoscale. Scientists are also researching methods of self-assembling nanostructures. Imagine shaking a box of Legos and having some of the Legos stick together in the process. In this analogy, each Lego represents an atom. By shaking the box the individual atoms clump together to form nanostructures without the intensive help of an outside resource.
Our hair and fingernails grow about a nanometer every second! | <urn:uuid:e1b5bc4d-4904-418b-80f4-ff15c358674c> | 3.28125 | 720 | Tutorial | Science & Tech. | 73.27347 |
Joined: 03 Oct 2005
|Posted: Tue Dec 20, 2005 10:18 am Post subject: Semiconductor Nanocrystals: Alternative to Toxic Compounds?
|Scientists at the University of Arkansas Develop Semiconductor Nanocrystals as an Alternative to Toxic Compounds
University of Arkansas researchers have used a technique known as "doping" to create semiconducting nanocrystals using zinc, opening up a new, non-toxic alternative to the current industry "workhorse," cadmium selenide. This technique creates a nanocrystal product that does not contain the carcinogenic cadmium element, and can be used for biomedical labeling, light emitting diodes, lasers and sensors.
Xiaogang Peng, Scharlau Professor of chemistry and biochemistry, Narayan Pradhan, postdoctoral associate, and graduate student Jason Thessing, all from the J. William Fulbright College of Arts and Sciences, and research scientist David Goorsey of NN-Labs, a start-up company at the university's technology incubator, report their findings in the current issue of the Journal of the American Chemical Society.
"Cadmium-based nanocrystals have a doubtful future because of their toxicity," Peng said. He and his group have been working on a way to replace the cadmium selenide with the non-toxic zinc selenide for several years. While zinc-based nanocrystals have none of the toxicity of cadmium-based ones, until now they proved less effective as a semiconducting nanocrystal emitters because the nanocrystals did not emit light through most of the visible spectrum.
Peng and his colleagues resolved this problem by "doping" the nanocrystals with copper and magnesium ions - adding a few to several tens of copper and magnesium ions to each nanocrystal. The ions become the emitters, creating "tunable lasers" much like those formed by the toxic cadmium nanocrystals.
"This is basically the starting point for a new class of materials," Peng said.
The researchers used two different strategies to achieve their goals. First, they began forming small seeds of particles with both the host ions and the dopant ions present and reacting together; then they stopped the reaction of the dopant ions and continued the formation of the crystals to the desired size. The other strategy involved growing the host ions to a certain size, quenching their growth, then adding the dopant. After the doping is finished, the crystals are coated with more host ions.
"It's basically a programmed process," Peng said. The emission wavelength, which is important for any kind of industrial applications of nanocrystals, is controlled by the chemical nature of the dopants and by the size of the nanocrystals.
When the researchers examined these doped zinc-based nanocrystals, they discovered some unexpected properties - they were extremely stable even at high temperatures, a trait that may prove important in laser and LED applications. They appear to be less sensitive to environmental changes. Also, nanocrystals created by doping may serve multiple functions because they may contain magnetic as well as semiconducting properties.
"Dopant materials can bring other functions into a nanocrystal," Peng said.
Source: University of Arkansas.
This story was posted on 16 December 2005. | <urn:uuid:dc282b5d-ce94-4ede-95d1-a04c3e0f6f69> | 2.828125 | 688 | Comment Section | Science & Tech. | 28.577276 |
Willy Wonka could have powered his Great Glass Elevator on hydrogen produced from his chocolate factory.
Microbiologist Lynne Mackaskie and her colleagues at the University of Birmingham in the UK have powered a fuel cell by feeding sugar-loving bacteria chocolate-factory waste. "We wanted to see if we tipped chocolate into one end, could we get electricity out at the other?" she says.
The team fed Escherichia coli bacteria diluted caramel and nougat waste. The bacteria consumed the sugar and produced hydrogen, which they make with the enzyme hydrogenase, and organic acids. The researchers then used this hydrogen to power a fuel cell, which generated enough electricity to drive a small fan (Biochemical Society Transactions, vol 33, p 76).
The process could provide a use for chocolate waste that would otherwise end up in a landfill. What's more, the bacteria's job doesn't have to end once they have finished chomping on ...
Jan16-12, 05:49 PM, #1
Galactic frame dragging
Do galaxies produce measurable/significant frame dragging effects by their rotation?
I would think frame dragging depends on the mass and speed of rotation... Are such effects only felt at the boundary between rotating gravitational fields and flatter space, i.e. the edge of the milky way? Or would frame dragging produce effects throughout the milky way, i.e. in the gaps between spiral arms as well as between stars.
Jan16-12, 08:31 PM, #2
These effects are completely negligible on the galactic scale. Frame dragging is only significant near the event horizon of a black hole. For comparison, if the entire mass of a galaxy were compressed to a point, its event horizon would be about a million times smaller than the radius of the galaxy, suggesting the effects will be completely negligible.
You're right that the degree of the effect depends only on the mass and speed of rotation (generally measured by the angular momentum). What (little) effects there are would be felt both at the edge of the disk, and between the spiral-arms.
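To make the comparison concrete, here is a rough back-of-the-envelope check (my own illustration, not part of the thread; the mass and radius are assumed values for a Milky-Way-like galaxy including dark matter):

```python
G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                 # speed of light, m/s
M = 1e12 * 1.989e30         # assumed galaxy mass: 1e12 solar masses, in kg
R_galaxy = 5e4 * 9.461e15   # assumed radius: 50,000 light years, in metres

r_s = 2 * G * M / c**2      # Schwarzschild radius of the equivalent point mass
print(R_galaxy / r_s)       # approx. 1.6e5: the horizon is vastly smaller
```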
Feb8-12, 02:37 PM, #3
Hi, one question.
Are there any calculations to figure out frame dragging, or is the strength of frame dragging purely "intuitively implied" (in case the math is prohibitively complex)?
Never mind, I've read up on this one.
The Washington Post picked up on the latest update to the 2005 temperature anomaly analysis from NASA GISS. The 2005 Jan-Sep land data (which is adjusted for urban biases) is higher than the previously warmest year (0.76°C compared to the 1998 anomaly of 0.75°C for the same months, and a 0.71°C anomaly for the whole year) , while the land-ocean temperature index (which includes sea surface temperature data) is trailing slightly behind (0.58°C compared to 0.60°C Jan-Sep, 0.56°C for the whole of 1998). The GISS team (of which I am not a part) had predicted that it was likely the 2005 would exceed the 1998 record (when there was a very large El Niño at the beginning of that year) based on the long term trends in surface temperature and the estimated continuing large imbalance in the Earth’s radiation budget.
In 1998 the last three months of the year were relatively cool as the El Niño pattern had faded. For the 2005 global land-ocean index to exceed the annual 1998 record, the mean anomaly needs to stay above 0.51°C for the next three months. Since there was no El Niño this year, and the mean so far is significantly above that, this seems likely.
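A quick check with the rounded anomalies quoted above (my own arithmetic, not from the GISS analysis):

```python
# Required Oct-Dec mean for the 2005 land-ocean index to beat the 1998 record.
record_1998_annual = 0.56    # deg C
jan_sep_2005 = 0.58          # deg C
needed = (record_1998_annual * 12 - jan_sep_2005 * 9) / 3
print(needed)                # approx. 0.50; the unrounded data give the quoted 0.51
```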
Will a new record by a few hundredths of a degree really mean much? The important climate trends aren’t based on individual years, but on the underlying trends which have been solidly positive for decades. We still don’t expect each year to be warmer than the last due to the intrinsic variability (‘weather’) in global mean temperature (around 0.1 to 0.2°C), but at the current rate of global warming (~0.17°C/decade), new records can be expected relatively frequently. Stay tuned for further stories on this…
Update: The CRU/Met Office numbers are slightly different from the GISS analysis, but one should be careful to compare like with like. The 2005 Jan-Aug land anomaly from CRU is 0.81°C compared to 0.84°C for the same period in 1998. Their Sep update is due on the 26th, and so comparisons should become easier then. | <urn:uuid:cc526e29-b61c-4526-9d1b-1f483a5299c5> | 2.6875 | 469 | Personal Blog | Science & Tech. | 70.505225 |
Our galaxy is still reverberating from a strike by a small galaxy or massive dark matter structure.
A large asteroid that initially had scientists worried has only a small chance of striking the Earth in 2040, says NASA.
Scientists say they've found more evidence of a major impact by a comet or asteroid, some 13,000 years ago.
Ever wondered exactly what would happen if a massive meteorite hit the Earth? Princeton University scientists can show you.
The Secure World Foundation (SWF) has warned the UN that it may be too disorganised to avoid disaster in the event of a major meteorite strike. | <urn:uuid:ce764f25-f953-4e06-b9bb-cf0223b43466> | 3.5 | 126 | Content Listing | Science & Tech. | 47.699359 |
This image shows the Sun's polarity measured by Ulysses
VHM/FGM Instrument Page
Ulysses has a unique orbit. By going past Jupiter, Ulysses was able to go into an inclined orbit that has never been achieved before (this is the slingshot approach to getting a spacecraft into orbit). From this new type of orbit, Ulysses can track parts of the Sun's magnetic field that have never been tracked.
The magnetometer was included on the Ulysses spacecraft to track the Sun's magnetic field. It uses two different sensors to tell how strong the magnetic field is at different times and at different places around the Sun. The two sensors are called the Vector Helium Magnetometer and the Fluxgate Magnetometer (try saying that 3 times in a row!).
Since being turned on in October 1990, the magnetometers have produced a steady stream of observations. Several disturbances in the magnetic field have been tracked. Scientists look forward to even more findings as Ulysses is in its second pass of the Sun. It is during this second pass that solar activity related to the magnetic field will be at its peak.
The Cosmic Microwave Background (CMB) is radiation we receive today from a time when the universe was about 300,000 years young. At that time, radiation decoupled from matter and since then, photons could travel almost undisturbed. The CMB shows the temperature, or the inverse wavelength, of the microwaves that we receive on Earth from these early times.
The mean temperature of the CMB is approximately 2.7 Kelvin, and is a blackbody spectrum to truly amazing accuracy. What we will be concerned with here however is not the mean temperature, but tiny fluctuations around this temperature. These carry a lot of information about the conditions in the early universe which can help us understand the origin of the structures that we see today, and the processes that were important in the early universe. These fluctuations are of the order micro-Kelvin, and have been measured by NASA's WMAP mission. You probably have all seen their skymap of the temperature fluctuations:
We have discussed the features and usefulness of the CMB temperature fluctuations a few times already, see e.g. my earlier posts The CMB Power Spectrum and Anomalous Alignments in the CMB.
One way to extract information from the data is to look at correlation functions. These come in integer orders like the two-point function, the three-point function, the four-point function etc. There also is a one-point function but ‒ assuming a homogeneous probability distribution ‒ you already know it: it's just the expectation value. In our case, it would be the mean temperature. The two-point function tells you something about the correlation length in the distribution.
The relevant quantity we are concerned with here is the three-point function. (Confusingly enough, the three-point function is also known as the bispectrum.) To compute it, you roughly take three different points of your distribution, multiply the values of the function at those points (here the temperature), and integrate over combinations of three points. Even from this rough description you can notice two things. First, it's several ugly integrals that are hard to compute, especially with loads of data. Second, multiplying small numbers makes even smaller numbers, thus the result is at risk of dropping below the uncertainties in your measurements. Therefore it's hard to come by this observable, yet it's what one wants to extract from the data because it contains information beyond the simplest (single-field, slow-roll) inflation scenario. This simplest scenario predicts the temperature fluctuations to be to very good precision a Gaussian distribution. If they were exactly Gaussian, the three-point function would vanish, and all higher-order correlations would follow from the two-point function. A non-vanishing three-point function would thus be, here it comes, an indication for the non-Gaussianity of the temperature fluctuations, and an indication for new physics.
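A toy numerical illustration of that last point (my own sketch, nothing to do with the actual WMAP analysis): for samples drawn from a Gaussian, the estimated third moment hovers around zero while the second moment carries the signal.

```python
import random

# Simulated Gaussian "temperature fluctuations".
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)
second = sum((x - mean) ** 2 for x in samples) / len(samples)
third = sum((x - mean) ** 3 for x in samples) / len(samples)
print(second)   # approx. 1: the two-point function (variance)
print(third)    # approx. 0: a vanishing three-point function
```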
There had indeed previously been rumors that non-Gaussianities had been found in an analysis of the CMB data, see e.g. this post on Non-Gaussian CMB over at Resonaances. At that time I heard like half a dozen talks on the topic, yet was reasonably sure the "signal" would vanish back into noise as indeed it did. To our present knowledge, the data with the uncertainty we have is still compatible with a Gaussian spectrum. (One has to be somewhat careful when one reads about these bounds since there are several different ones. That's because the full three-point function is pretty much impossible to calculate. What people have done instead is to take samples of specific triplets of points, e.g. those forming equilateral triangles, or obtuse-angled ones, thus there are different bounds depending on the triangles chosen.)
However, the important thing to note is that the uncertainty in these observations will go down in the near future: with the WMAP 8-year mission results one expects a 20% improvement on the bounds, while Planck can yield a factor of 4. Now if there was an indication for non-Gaussianity this would be very exciting. Then the question is of course, what is the physics behind that? What I guess is going to happen is that anybody with their model will predict non-Gaussianities. I wouldn't be surprised if it suddenly becomes a signature for cosmic strings, evidence for the multiverse, and also a prediction of Loop Quantum Cosmology. It will certainly take a while to sort out these things. In any case however, I am sure it is a topic you will hear more about in the coming years.
The Human Brain Project plans to build a high-fidelity virtual model of a working human brain, modeling aspects of the brain including neuron connections, blood flow, and even the behavior of individual neurons, down to the level of ion channels. The project will be led by Henry Markram, a scientist at the Swiss Federal Institute of Technology.
They already have a prototype in place. The project's goal is to include every aspect of brain biology, including thousands of discoveries made in the past decade. Of course, a real human brain has immense computing power, so the virtual model would in effect run in super slow motion. Each computational step in the computer model would be only a tiny slice of "real" virtual time.
The virtual model will be used for everything from modeling the behavior of new drugs to better understanding of strokes and other brain injuries.
Perhaps the most amazing part of this ambitious plan is to provide the virtual brain sensory input from a virtual world, one in which human scientists might interact with the virtual brain. The question is, will it work? After compiling a high-fidelity virtual model of a human brain, will it begin to think and act like a human mind? Are we on the eve of discovering the source of human consciousness? Or maybe there is some missing element, something not accounted for in our current understanding of brain biology?
Could this be the first step of the singularity?
RECORD: Swale, W. 1858. Hive-Bees in New Zealand [a letter to Darwin]. Gardeners' Chronicle and Agricultural Gazette (13 November): 829.
REVISION HISTORY: Scanned by John van Wyhe, transcribed (single key) by AEL Data 10.2008. RN1
Hive-Bees in New Zealand.—The hive bee was introduced into Wellington in 1842 and into Canterbury in 1852. In Christchurch, in the latter province, an old hive standing in a warm sheltered situation has this summer cast off six swarms during the short time of two months. English bee keepers would open their eyes with astonishment if they were out here to see the produce of a single hive. I have had the pleasure several times of partaking of the fruit of their industry, and most delicious it is. Bee keeping here is different to what it is in England. The perpetual succession of flowers, the fine warm summer, and mildness of the winter all tend to a great increase of the bees. Our management of them is very simple. We furnish them with small boxes 18 inches or 2 feet in length and a foot or 18 inches in depth, with a small aperture on the sunny side for ingress and egress. Inside the box we fix small rails across for them to commence building their combs. I have seen very severe conflicts between them and the native wasps. When a wasp approaches the hive the bees give no quarter. They soon slay their enemy and down with him. Extract of a letter to Mr. Darwin from Mr. Swale of Christchurch, New Zealand, dated July 13, 1858.
Return to homepage
Citation: John van Wyhe, editor. 2002-. The Complete Work of Charles Darwin Online. (http://darwin-online.org.uk/)
File last updated 2 July, 2012 | <urn:uuid:8fabfee2-187a-4d18-acc0-061dfef2b416> | 2.984375 | 388 | Truncated | Science & Tech. | 70.508425 |
In computing, Pic is a domain-specific programming language by Brian Kernighan for specifying diagrams in terms of objects such as boxes with arrows between them. The pic compiler translates this description into concrete drawing commands. Pic is a procedural programming language, with variable assignment, macros, conditionals, and looping. The language is an example of a little language originally intended for the comfort of non-programmers in the Unix environment (Bentley 1988).
Pic was first implemented, and is still most typically used, as a preprocessor in the troff document processing system. The pic preprocessor filters a troff document, replacing diagram descriptions by concrete drawing commands, and passing the rest of the document through without change.
A version of pic is included in groff, the GNU version of troff. GNU pic can also act as a preprocessor for TeX documents. Arbitrary diagram text can be included for formatting by the word processor to which the pic output is directed, and arbitrary post-processor commands can also be included. Dwight Aplevich's implementation, DPIC, can also generate PostScript or SVG images by itself, as well as act as a preprocessor. The three principal sources of pic processors are GNU pic, found on many Linux systems; dpic; and the original AT&T pic. The first two are free.
- Kernighan, Brian W. (1982). "PIC - A Language for Typesetting Graphics". Software: Practice and Experience 12: 1–20.
- J. Bentley. More Programming Pearls, Addison-Wesley (1988).
- Making Pictures With GNU PIC
- Troff resources (see the "pic" section)
- Janert, Philipp K. (June 21, 2007). "In Praise of Pic". ONLamp.com. O'Reilly Media. Retrieved 2011-09-06.
- DPIC, an implementation of the PIC language by Dwight Aplevich. This implementation has a few nice extensions and outputs many different image formats.
- figr, web based pic renderer.