Why Mars again? The big unknown remains. Scientists want to know if any form of life ever existed there, and that means microscopic organisms. Since the 1960s, spacecraft have zipped past, orbited or landed on Mars in this quest. Two small NASA rovers that arrived in 2004 explored different craters and one is still functioning today. Curiosity is the most ambitious effort ever, but it's not the be-all and end-all. During its two-year exploration, it will try to answer whether the giant crater where it lands had the right conditions to support microbes. But future missions would still be needed for more answers.

What will Curiosity do? Curiosity carries a toolbox of 10 instruments, including a rock-zapping laser and a mobile organic chemistry lab. It also has a long robotic arm that can jackhammer into rocks and soil. It will hunt for basic ingredients of life including carbon-based compounds, nitrogen, phosphorus, sulfur and oxygen, as well as minerals that might provide clues about possible energy sources.

How did Curiosity get its name? The spacecraft is formally called the Mars Science Laboratory. In 2008, NASA held a naming contest open to students.

What does this mission cost? $2.5 billion. That's $1 billion over its original budget. Curiosity was supposed to launch in 2009 and land in 2010, but development took longer than expected. The delay gave engineers more time to debug problems and test the spacecraft, but also put the project over budget.

When will we send astronauts to Mars? President Barack Obama has set a goal for astronauts to orbit Mars by the mid-2030s, followed by a landing. Before that can happen, the plan is to send astronauts to an asteroid first.

Follow Alicia Chang's Mars coverage at: http://www.twitter.com/SciWriAlicia
The Saturn family of rockets was the first line of dedicated space rockets of the United States. All previous rockets used were adapted from military designs. Rockets such as the Atlas and Titan were primarily designed as InterContinental Ballistic Missiles (ICBMs), with nuclear warheads as the payload. This generally worked well enough for low-earth orbit, but larger and heavier payloads needed a completely new design dedicated for space. The mighty Saturn V rockets were what took America to the Moon, and were seen as a symbol of the technological superiority of the United States. It is fitting then that the first test flight of the Ares I rocket was supposed to occur on the 48th anniversary of the first Saturn I launch. Unfortunately, high winds and poor weather conditions have led to a postponement. The mission is designated Ares I-X, and is only the first of several planned test flights that will demonstrate and test multiple key components of the Ares I system. NASA wants to follow the methodology of the Apollo program and use multiple tests to validate their designs. That way improvements can be made early on and integrated more quickly. There are two main stages to the Ares I rocket. The First Stage is a reusable solid fuel rocket derived from the Space Shuttle's Solid Rocket Boosters. It features a nozzle with thrust vectoring control. A fifth segment has been added in order to attain more thrust and a longer burn, but it will be inert for this test flight. It will be active during the second Ares I test flight in 2014, currently designated Ares I-Y. The Upper Stage will be propelled by a new engine derived from the Saturn program. The J-2X engine will be fueled by liquid hydrogen and liquid oxygen. It will be built by Rocketdyne, the prime contractor for the original J-2 engines used by Saturn rockets in the Apollo program.
The Upper Stage for Ares I-X will use simulators. The Orion crew capsule that will sit atop the Ares I is still being designed and will not be ready for spaceflight until 2012. Ares I-X uses a non-functional payload of the same size and shape, known as a boilerplate. The entire Upper Stage, including the boilerplate, will fall into the Atlantic Ocean if all goes as planned. The primary test objectives for Ares I-X will be to demonstrate flight control system performance during ascent and to test the Parachute Recovery System of the First Stage. The parachutes use Kevlar and are much stronger and lighter than the nylon versions currently used during Space Shuttle launches. Another major goal is to gather data on the Ares I's roll torque during the flight, which will reach a maximum height of 150,000 feet (45.72 km). Roll torque is a major issue caused by vehicle aerodynamics and the manner in which the propellant burns. Computer models have been used so far, but flight safety increases dramatically as more accurate and precise data become available. Engineers will bring to bear more than 700 sensors to collect data during the six-minute flight. A thorough analysis is not expected to finish until next year. There is a Critical Design Review currently scheduled for the Ares I in 2011, and the findings there will be based on lessons from tomorrow's launch. The Orion 1 test flight in 2014 will be the first time all of the components of Ares I will fly together. The first manned test of Orion is also targeted for 2014 with the Orion 2 mission. Orion 3 through Orion 9 will see the first visits to the International Space Station starting in 2015.
Since a file is a unit of storage and it stores information, it has a size: the amount of data it contains, measured in bytes. To manage it, a file also has a location, also called a path, that specifies where and how the file can be retrieved. For better management, a file also has attributes that indicate what can be done with the file, or that provide specific information the programmer or the operating system can use when dealing with the file. A directory and a file share some characteristics, but they also differ in important ways.

File processing consists of creating, storing, and/or retrieving the contents of a file from a recognizable medium. For example, it is used to save word-processed files to a hard drive, to store a presentation on a floppy disk, or to open a file from a CD-ROM. To perform file processing in your application, you have many choices: the C and/or C++ languages, the Win32 library, or the MFC. In order to manage files stored in a computer, each file must be able to provide basic pieces of information about itself. This basic information is specified when the file is created but can change during the lifetime of the file.

To create a file, a user must first decide where it will be located: this is a requirement. A file can be located on the root drive, or it can be positioned inside an existing folder. Based on security settings, a user may not be able to create a file just anywhere in the (file system of the) computer. Once the user has decided where the file will reside, there are various means of creating files that users are trained to use. When creating a file, the user must give it a name following the rules of the operating system combined with those of the file system. The most fundamental piece of information a file must have is a name. Once the user has created a file, whether the file is empty or not, the operating system assigns basic pieces of information to it.
Once a file is created, it can be opened, updated, modified, renamed, etc. Because files on a computer can be stored in various places, Microsoft Windows provides various means of creating, locating, and managing files through objects named Windows Common Dialog Boxes. Indeed, these dialog boxes are part of the operating system and are equipped with all the necessary operations pertinent to their functionality. The common dialog boxes available in the MFC share a parent, the CCommonDialog class, which is derived from CDialog. The only member of the CCommonDialog class is a constructor that takes a pointer to a CWnd as argument.

The C language provides very impressive support for file processing through a structure and many functions. The fundamental structure used to perform file processing in C is the FILE structure. In reality, to use this structure, you must call another function and use its return value. The primary operation of file processing consists of opening a file. In the strict sense, this means that you are initiating file processing. This can be done in C by calling one of the fopen-variant functions. Their syntaxes are:

FILE *fopen(const char *filename, const char *mode);
FILE *_wfopen(const wchar_t *filename, const wchar_t *mode);

To support Unicode, you should use the _wfopen() version. In both cases, the first argument specifies the name of the file you want to work on. If you pass just the name of the file, the compiler will consider the directory of the current project. Otherwise, you can provide the complete path of the file. The second argument specifies the type of operation you want to perform. It is passed as a string. When this function has been called, if it succeeds in carrying out its operation (we will see what types of operations are available), it returns a pointer to a FILE object. If the function fails, it returns NULL.
To create a file, you can pass the second argument of fopen with one of the values "w", "w+", "a", or "a+"; each of these creates the file if it does not already exist. The C language provides various functions you can use to write values to a file. Some functions deal with strings while others can handle any type of value. All of these functions take a FILE object as argument; this is the object that holds the file. To write a character to a FILE object, you can call one of the following functions:

int fputc(int c, FILE *stream);
wint_t fputwc(wchar_t c, FILE *stream);

The first version takes a character as argument; the second takes a wide character. The character is written to the file. To write a string to a file, you can call one of these functions:

int fputs(const char *str, FILE *stream);
int fputws(const wchar_t *str, FILE *stream);

To write other types of values to a file, you can call the following function:

size_t fwrite(const void *buffer, size_t size, size_t count, FILE *stream);

To format a value before writing it to the file, you can call one of these functions:

int fprintf(FILE *stream, const char *format [, argument ]...);
int _fprintf_l(FILE *stream, const char *format, locale_t locale [, argument ]...);
int fwprintf(FILE *stream, const wchar_t *format [, argument ]...);
int _fwprintf_l(FILE *stream, const wchar_t *format, locale_t locale [, argument ]...);

After using a FILE object, you should close it to free the resources it was using. This is done by calling the fclose() or _fcloseall() function. Their syntaxes are:

int fclose(FILE *stream);
int _fcloseall(void);

In the same way, C provides various functions to read values from a file. Like their writing counterparts, the reading functions take a FILE object as argument. To read a character from a FILE object, you can call one of the following functions:

int fgetc(FILE *stream);
wint_t fgetwc(FILE *stream);

The first version reads a regular character and the second reads a wide character.
To read a string from a file, you can call one of these functions:

char *fgets(char *str, int n, FILE *stream);
wchar_t *fgetws(wchar_t *str, int n, FILE *stream);

To read other types of values from a file, you can call the following function:

size_t fread(void *buffer, size_t size, size_t count, FILE *stream);

To read values that were formatted, you can call one of these functions:

int fscanf(FILE *stream, const char *format [, argument ]...);
int _fscanf_l(FILE *stream, const char *format, locale_t locale [, argument ]...);
int fwscanf(FILE *stream, const wchar_t *format [, argument ]...);
int _fwscanf_l(FILE *stream, const wchar_t *format, locale_t locale [, argument ]...);
int fscanf_s(FILE *stream, const char *format [, argument ]...);
int _fscanf_s_l(FILE *stream, const char *format, locale_t locale [, argument ]...);
int fwscanf_s(FILE *stream, const wchar_t *format [, argument ]...);
int _fwscanf_s_l(FILE *stream, const wchar_t *format, locale_t locale [, argument ]...);
A piece in Asimov’s Science Fiction this month argues just this point. Reflections: The Death of Gallium makes the case that we’re quickly running out of a few rare elements whose existence makes modern electronic innovations possible. The problem is one of basic chemistry. The elements are the basic building blocks of, well, everything. The periodic table lists the elements and does a pretty good job of organizing them according to their similarities. But here’s the thing with elements — you can’t produce them. Once we’ve used up all the copper available on Earth, that’s it. It’s impossible to manufacture more copper. (Excepting nuclear transmutation, but that can’t be done on a large scale, and usually the end result is radioactive anyway.) Other materials that we must recycle — plastics, glass, and paper — are made up of many elements. Plastics are primarily hydrocarbon chains created from petroleum products. Glass is mostly silicon and oxygen, both available in abundance on the earth. Paper is an organic material derived from wood pulp. All of these materials are fairly readily obtained, at least as of now. Generally, it’s relatively easy to put elements together to make compounds, or to pull compounds apart to get to their constituent elements. But if you don’t have the source elements to begin with, you’re out of luck. Recycle those electronics, kids.
What is character encoding, and why should I care?

First, why should I care? If you use anything other than the most basic letters and numbers of the English alphabet, people may not be able to read your text unless you say what character encoding you used: text intended to display one way may show up as garbage. Not only does inadequate encoding information spoil the readability of displayed text, but it may mean that your data cannot be found by a search, or reliably processed in a number of other ways.

What's a character encoding? Words and sentences in text are created from characters. Examples of characters include the Latin letter á or the Chinese ideograph 請 or the Devanagari character ह. Characters are grouped into a character set (also called a repertoire), in which each character is assigned a particular number, called a codepoint. These codepoints are then represented in the computer by one or more bytes. Basically, this means that all characters are stored in computers using a code, like the ciphers used in espionage. A character encoding is a key to unlock (i.e. crack) the code: a set of mappings between the bytes representing numbers in the computer and characters. Without the key, the data looks like garbage. Unfortunately, there are many different character sets and character encodings, i.e. many different ways of mapping between bytes, codepoints and characters. For example, in the character set called ISO 8859-1 (also known as Latin1) the codepoint value for the letter é is 233. In ISO 8859-5, the same codepoint represents the Cyrillic character щ. These character sets contain fewer than 256 characters and map codepoints to byte values directly, so a codepoint with the value 233 is represented by a single byte with a value of 233. Note, however, that that byte may represent either é or щ, depending on the context. Other character sets use a more complicated approach.
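The byte-value example above is easy to check for yourself; a short Python sketch decoding the same byte (value 233) under the two character sets:

```python
raw = bytes([233])               # a single byte with value 233

print(raw.decode("iso-8859-1"))  # Latin1 reads it as é
print(raw.decode("iso-8859-5"))  # the Cyrillic set reads the same byte as щ
```

Same byte, two different characters: which one you get depends entirely on the declared encoding.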
With the Unicode character set, which covers most characters you are likely to need to use in a single set, that same Cyrillic character щ has a codepoint value of 1097. This is too high a number to be represented by a single byte. Most Web pages use the UTF-8 encoding for Unicode text. In that encoding щ will be represented by two bytes, but the codepoint value is not simply derived from the value of the two bytes - some more complicated decoding is needed. Other Unicode characters map to one, three or four bytes in the UTF-8 encoding. But UTF-8 is only one of the possible ways of encoding Unicode characters. This means that a codepoint in the Unicode character set can actually be represented by different byte sequences, depending on which encoding was used. The Devanagari character क, with codepoint 2325, can be represented by two bytes (09 15), three bytes (E0 A4 95), or four bytes (00 00 09 15), depending on which encoding was used (here UTF-16, UTF-8, and UTF-32 respectively). Most of the time you will not need to understand a character encoding at this level of detail. You will just need to be sure that the application you are working with knows which character encoding is appropriate for the data you are working with, and can handle that encoding. How do fonts fit into this? A font is a collection of glyphs (shapes) used to display characters. Once your application has worked out what characters it is dealing with, it will then look in the font for glyphs in order to display or print those characters. (Of course, if the encoding information was wrong, it will be looking up glyphs for the wrong characters.) A given font will usually cover a single character set, or in the case of a large character set like Unicode, just a subset of all the characters in the set. 
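The codepoint and byte-sequence figures quoted above can be reproduced in a few lines of Python:

```python
shcha = "щ"                          # Cyrillic shcha
print(ord(shcha))                    # 1097 -- its Unicode codepoint
print(shcha.encode("utf-8").hex())   # d189 -- two bytes in UTF-8

ka = "\u0915"                        # Devanagari ka, codepoint 2325
print(ka.encode("utf-16-be").hex())  # 0915     -- two bytes
print(ka.encode("utf-8").hex())      # e0a495   -- three bytes
print(ka.encode("utf-32-be").hex())  # 00000915 -- four bytes
```

One codepoint, three different byte sequences: the encoding, not the character set, determines the bytes.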
When your font doesn't have a glyph for a character, some applications will look for the missing character in other fonts on your system (which will mean that the glyph will look different from the surrounding text, like a ransom note). Otherwise you will see a square box, a question mark or some other character instead.

How does this affect me? You need to choose the best encoding for your purposes. Unicode encodings are often a good choice here, since you can use a single encoding to handle pretty much any character you are likely to meet. This greatly simplifies things. Using Unicode throughout your system also removes the need to track and convert various character encodings. You need to check what encoding your editor or scripts are saving text in, and how to save text in the encoding of your choice. Note, however, that just declaring a different encoding won't change the bytes; you need to save the text in that encoding too. You need to find out how to declare the character encoding you used for the document format you are working with. You may also need to check that your server is serving documents with the right HTTP declarations. You need to ensure that the various parts of your system can communicate with each other, understand which character encodings are being used, and support all the necessary encodings and characters. The links in the next section provide some further reading on these topics.

- Introducing Character Sets and Encodings
- Choosing an encoding
- Character sets & encodings in XHTML, HTML and CSS
- The HTTP charset parameter
- Setting encoding in web authoring applications
- Topic index: Characters
- Technique index: XHTML and HTML authoring: Characters
- Technique index: Server setup: Characters
serialize(attr_name, class_name = Object) public

Specifies that the attribute by the name of attr_name should be serialized before saving to the database and unserialized after loading from the database. The serialization is done through YAML. If class_name is specified, the serialized object must be of that class on retrieval or SerializationTypeMismatch will be raised.

# File activerecord/lib/active_record/base.rb, line 536
def serialize(attr_name, class_name = Object)
  serialized_attributes[attr_name.to_s] = class_name
end

Note that serialize takes the attribute name as a plain argument, not as a key/value pair. That may seem obvious, but it is common to be in the habit of passing things as a key/value pair. Also, a serialized attribute will always be updated during save, even if it was not changed (a Rails 3 commit explains why: http://github.com/rails/rails/issues/8328#issuecomment-10756812). Guard save calls with a changed? check to prevent issues.

class Product < ActiveRecord::Base
  serialize :product_data
end
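Since the serialization above is done through YAML, the round trip can be sketched independently of Rails. The hash below is a hypothetical value for the serialized product_data attribute:

```ruby
require "yaml"

# A hypothetical value for the serialized product_data attribute.
data = { "color" => "red", "size" => "L" }

text = YAML.dump(data)      # the string written to the database column
restored = YAML.load(text)  # the object handed back after loading

puts text
puts restored == data       # the round trip preserves the hash
```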
SBDART: A Practical Tool for Plane-Parallel Radiative Transfer in the Earth's Atmosphere
Entry ID: USCB_SBDART
SBDART (Santa Barbara DISORT Atmospheric Radiative Transfer) is a FORTRAN computer code designed for the analysis of a wide variety of radiative transfer problems encountered in satellite remote sensing and atmospheric energy budget studies. The program is based on a collection of highly developed and reliable physical models which have been developed by the atmospheric science community over the past few decades. The following discussion is a brief introduction to the key components of the code and the models on which they are based.

Clouds are a major modulator of the earth's climate, both by reflecting visible radiation back out to space and by intercepting part of the infrared radiation emitted by the Earth and re-radiating it back to the surface. The computation of radiative transfer within a cloudy atmosphere requires knowledge of the scattering efficiency; the single scattering albedo, which is the probability that an extinction event scatters rather than absorbs a photon; and the asymmetry factor, which indicates the strength of forward scattering. SBDART contains an internal database of these parameters for clouds composed of spherical water or ice droplets. This internal database was computed with a Mie scattering code and covers particle size effective radii in the range 2 to 128 um. (The effective radius is the ratio of the third and second moments of the droplet radius distribution.) By default, the angular distribution of scattered photons is based on the simple Henyey-Greenstein parameterization, but more detailed scattering functions may be input as desired. (The Henyey-Greenstein approximation has been shown to provide good accuracy when applied to radiative flux calculations (van de Hulst, 1968; Hansen, 1969).)
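The effective radius mentioned above, the ratio of the third to the second moment of the droplet radius distribution n(r), can be written out explicitly (a standard definition, restated here for clarity):

```latex
r_{\mathrm{eff}} = \frac{\int_0^{\infty} r^{3}\, n(r)\, dr}{\int_0^{\infty} r^{2}\, n(r)\, dr}
```

This single length scale summarizes how a polydisperse droplet population interacts with radiation, which is why the internal Mie database is indexed by it.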
Gas Absorption Model
In its standard mode of operation SBDART relies on low resolution band models developed for the LOWTRAN 7 atmospheric transmission code (Pierluissi and Maragoudakis, 1986). These models provide the clear sky atmospheric transmission from 0 to 50000 cm-1 and include the effects of all radiatively active molecular species found in the earth's atmosphere. The models were derived from detailed line-by-line calculations which were degraded to 20 cm-1 resolution for use in LOWTRAN. This translates to a wavelength resolution of about 5 nm in the visible and about 200 nm in the thermal infrared. Because these band models represent rather large wavelength bins, the transmission functions do not necessarily follow Beer's law; i.e., the fractional transmission through a slab of material depends not only on the slab thickness but also on the amount of material penetrated before entering the slab. In order to allow these transmission functions to be used with DISORT (which assumes Beer's law behavior), the band models are approximated with a three-term exponential fit (Wiscombe and Evans, 1977). A capability to read high resolution k-distribution optical depths from a disk file was introduced in SBDART version 2.0. This mode of operation is less convenient to use than the standard approach, as it requires use of another program, which accesses a large spectral database to generate the high resolution files. However, it has the advantage of removing limitations in the ultimate spectral resolution available with SBDART. The ancillary programs, CKLW and CKSW, may be used to create these high-spectral-resolution optical depth files. These programs have not been exercised as thoroughly as SBDART itself. For now, they should be considered experimental.

Extraterrestrial Source Spectra
To facilitate comparison with other radiative transfer codes, SBDART may be run with any of three extraterrestrial solar spectrum models.
The default is to use the LOWTRAN-7 solar spectrum (Thekaekara, 1974). This model is based on measurements between 300 and 610 nm and uses a lambda^-4 power law for longer wavelengths. Optionally, SBDART may be run with the solar models used in 5s (Tanre et al. 1990) or MODTRAN-3. The MODTRAN-3 model is probably the most accurate. It is a composite of information gathered by several different spectral measurement campaigns. For wavelengths between 174 and 351 nm the spectral information is based on observations made with the Solar Ultraviolet Spectral Irradiance Monitor flown on Spacelab 2 (VanHoosier et al. 1988). Wavelengths between 351 and 868 nm are based on the results of Neckel and Labs (1984). The observations of Wehrli (1985) are used for wavelengths between 0.868 and 3.226 um. And finally, for wavelengths greater than 3.23 um, the longwave power law dependence of LOWTRAN-7 (Thekaekara, 1974) is used.

Standard Atmospheric Models
We have adopted six standard atmospheric profiles from the 5s atmospheric radiation code which are intended to model the following typical climatic conditions: tropical, midlatitude summer, midlatitude winter, subarctic summer, subarctic winter and US62. These model atmospheres (McClatchey et al, 1972) have been widely used in the atmospheric research community and provide standard vertical profiles of pressure, temperature, water vapor and ozone density. In addition, the user can specify their own model atmosphere based on, for example, a series of radiosonde profiles. The concentrations of trace gases such as CO2 or CH4 are assumed to make up a fixed fraction (which may be specified by the user) of the total particle density.

Standard Aerosol Models
SBDART can compute the radiative effects of several common boundary layer and upper atmosphere aerosol types. In the boundary layer, the user can select either rural, urban, or maritime aerosols.
These models differ from one another in the way their scattering efficiency, single scattering albedo and asymmetry factors vary with wavelength. The total vertical optical depth of boundary layer aerosols is derived from a user specified horizontal meteorological visibility at 0.55 um and an internal vertical distribution model. In the upper atmosphere up to 5 aerosol layers can be specified, with radiative characteristics that model fresh and aged volcanic, meteoric, and climatological tropospheric background aerosols. The aerosol models included in SBDART were derived from those provided in the 5s (Tanre, 1988) and LOWTRAN7 computer codes (Shettle and Fenn, 1975).

Radiative Transfer Equation Solver
The radiative transfer equation is numerically integrated with DISORT (DIScrete Ordinates Radiative Transfer, Stamnes et al, 1988). The discrete ordinate method provides a numerically stable algorithm to solve the equations of plane-parallel radiative transfer in a vertically inhomogeneous atmosphere. The intensity of both scattered and thermally emitted radiation can be computed at different heights and directions. SBDART is configured to allow up to 65 atmospheric layers and 40 radiation streams (40 zenith angles and 40 azimuthal modes).

The ground surface cover is an important determinant of the overall radiation environment. In SBDART six basic surface types -- ocean water (Viollier, 1980), lake water (Kondratyev, 1969), vegetation (Manual of Remote Sensing), snow (Wiscombe and Warren, 1980) and sand (Staetter and Schroeder, 1978) -- are used to parameterize the spectral reflectivity of the surface. The spectral reflectivity of a large variety of surface conditions is well approximated by combinations of these basic types. For example, the fractions of vegetation, water and sand can be adjusted to generate a new spectral reflectivity representing new/old growth, or deciduous vs evergreen forest.
Combining a small fraction of the spectral reflectivity of water with that of sand yields an overall spectral dependence close to wet soil. [Summary provided by Paul Ricchiazzi, Shiren Yang, and Catherine Gautier.]

References
Kneizys, F.X., E.P. Shettle, W.O. Gallery, J.H. Chetwynd, L.W. Abreu, J.E.A. Selby, S.A. Clough and R.W. Fenn, 1983: "Atmospheric transmittance/radiance: computer code LOWTRAN 6", Air Force Geophysics Laboratory, Report AFGL-TR-83-0187, Hanscom AFB, MA.
Kondratyev, K.Y., 1969: "Radiation in the atmosphere", Academic Press, N.Y., USA.
Manual of Remote Sensing, American Society of Photogrammetry, R.G. Reeves, A. Anson, D. Landen, eds., 1st ed., Falls Church, Va., 1975.
McClatchey, R.A., R.W. Fenn, J.E.A. Selby, F.E. Volz, J.S. Garing, 1972: "Optical properties of the atmosphere" (third edition), Air Force Cambridge Research Laboratories, Report AFCRL-72-0497.
Neckel, H. and D. Labs, 1984: "The solar radiation between 3300 and 12500 angstroms", Solar Physics, 90, 205-258.
Nakajima, T. and M.D. King, 1990: "Determination of the optical thickness and effective particle radius of clouds from reflected solar radiation measurements, part I: theory", Journal of the Atmospheric Sciences, 47, 1878-1893.
Pierluissi, J.H., and C.E. Maragoudakis, 1986: "Molecular Transmission Band Models for LOWTRAN", AFGL-TR-86-0272, AD A180655.
Shettle, E.P., and R.W. Fenn, 1975: "Models of the atmospheric aerosols and their optical properties", AGARD Conference Proceedings No. 183, Optical Propagation in the Atmosphere, presented at the Electromagnetic Wave Propagation Panel Symposium, Lyngby, Denmark, 27-31 October 1975, sponsored by the North Atlantic Treaty Organization, Advisory Group for Aerospace Research.
Stamnes, K., S. Tsay, W. Wiscombe and K. Jayaweera, 1988: "Numerically stable algorithm for discrete-ordinate-method radiative transfer in multiple scattering and emitting layered media", Appl. Opt., 27, 2502-2509.
Staetter, R., and M. Schroeder, 1978: "Spectral characteristics of natural surfaces", Proceedings of the Tenth Int. Conf. on Earth Obs. from Space, 6-11 March 1978 (ESA-SP 134).
Tanre, D., et al., 1988: "Simulation of the Satellite Signal in the Solar Spectrum (5s)", Laboratoire d'Optique Atmospherique, Universite des Sciences et Techniques de Lille, 59655 Villeneuve d'Ascq Cedex, France.
Thekaekara, M.P., 1974: "Extra-terrestrial solar spectrum, 3000-6100 A at 1 A intervals", Appl. Opt., 13, 518-522.
Viollier, M., 1980: "Teledetection des concentrations de seston et pigments chlorophylliens contenus dans l'Ocean", These de Doctorat d'Etat, no. 503.
VanHoosier, M.E., J.D. Bartoe, G.E. Brueckner, and D.K. Prinz, 1988: "Absolute solar spectral irradiance 120nm-400nm: Results from the Solar Ultraviolet Spectral Irradiance Monitor (SUSIM) experiment on board Spacelab 2", Astro. Lett. and Communications, 27, 163-168.
Wiscombe, W.J., and J.W. Evans, 1977: "Exponential-Sum Fitting of Radiative Transmission Functions", Journal of Computational Physics, 24, 416-444.
Wiscombe, W.J., and S.G. Warren, 1980: "A model for the spectral albedo of snow. I: Pure snow", J. Atmospheric Sciences, 37, 2712-2733.
Wehrli, Ch., 1985: "Extra-terrestrial solar spectrum", Publication No. 615, Physikalisch-Meteorologisches Observatorium and World Radiation Center, Davos-Dorf, Switzerland.
Vernal pools or "spring pools" are shallow depressions that usually contain water for only part of the year. In the Northeast, vernal pools may fill during the fall and winter as the water table rises. Rain and melting snow also contribute water during the spring. Vernal pools typically dry out by mid to late summer. Although vernal pools may only contain water for a relatively short period of time, they serve as essential breeding habitat for certain species of wildlife, including salamanders and frogs. Since vernal pools dry out on a regular basis, they cannot support permanent populations of fish. The absence of fish provides an important ecological advantage for species that have adapted to vernal pools, because their eggs and young are safe from predation. Species that must have access to vernal pools in order to survive and reproduce are known as "obligate" vernal pool species. In Maine, obligate vernal pool species include wood frogs, spotted and blue-spotted salamanders (two types of mole salamanders) and fairy shrimp. While wood frogs and mole salamanders live most of their lives in uplands, they must return to vernal pools to mate and lay their eggs. The eggs and young of these amphibians develop in the pools until they are mature enough to migrate to adjacent uplands. Fairy shrimp are small crustaceans which spend their entire life cycle in vernal pools, and have adapted to constantly changing environmental conditions. Fairy shrimp egg cases remain on the pool bottom even after all water has disappeared. The eggs can survive long periods of drying and freezing, but will hatch in late winter or early spring when water returns to the pool.
A team of physicists has curbed the hope that quantum physics might be squared with common sense, at least if we want to hang on to Einstein's highly respected theory of relativity. Their result concerns what Einstein called "spooky action at a distance" and it may soon be possible to test their prediction in the lab.

The 2012 Nobel Prize for Physics has been awarded to Serge Haroche and David J. Wineland for ground-breaking work in quantum optics. By probing the world at the smallest scales they've shed light on some of the biggest mysteries of physics and paved the way for quantum computers and super accurate clocks.

Researchers in Germany have created a rare example of a weird phenomenon predicted by quantum mechanics: quantum entanglement, or as Einstein called it, "spooky action at a distance". The idea, loosely speaking, is that particles which have once interacted physically remain linked to each other even when they're moved apart and seem to affect each other instantaneously.

Researchers from the University of Maryland have devised a new kind of random number generator that is cryptographically secure, inherently private and, most importantly, certified random by the laws of physics. Randomness is important, particularly in the age of the Internet, because it guarantees security. Valuable data and messages can be encrypted using long strings of random numbers to act as "keys", which encode and decode the information. Randomness implies unpredictability, so if the key is truly random, it's next to impossible for an outsider to guess it.

"God does not play dice," Albert Einstein once said. Since then the indisputable successes of the quantum theory have convinced all but a handful of contemporary physicists that God does indeed play dice. The question some are now asking is why does God play dice?
As an artist and a programmer I look to identify three principal aspects when I am observing recursion or considering the concept in use: a rule which has self-reference (in programming, an example is a named function containing a call to the same named function), producing a process of repetition with the requirement of a break case (to stop looping forever), resulting in observable self-similar patterns. The last aspect will not be an observable feature of the results of a program: you will not see the pattern, but you have the knowledge that the same rule was applied multiple times. The beauty of recursion in programming is "observed" in the economical code. The reverse is the case in biological growth and natural ecological patterns, where we observe the third element and infer the first (and maybe the second). Standing on the shoulders of giants, I hope I am correct in suggesting these three elements are, in these two combinations, required to distinguish recursion from iteration. If your project can teach students to make this distinction, they will have learned recursion. It is with this in mind that my suggestion for teaching recursion is to observe nature. A most excellent primer on observing recursion is the Nova program Hunting the Hidden Dimension (which for the time being can be viewed at Youtube). I wish to point out this passage from the video describing recursion in trees: 5:15 "One of the most familiar examples of self similarity is a tree. If we look at each of the nodes, the branching nodes of this tree, what you will actually see is that the pattern of branching is very similar throughout the tree." What is more amazing is that different trees have different patterns. The count of tree limbs from one compass point, twisting upward to the next tree limb at the same compass point, will be a specific number. This makes for an interesting nature walk with so many points of observation, from the trunk to the tips of the branches. The tree is the best example, but of course recursion doesn't stop there.
Going one step further, if you were teaching art, this knowledge is very useful in mapping what your mind says you're seeing to the true physical reality. This matters because of the optical illusion of foreshortening when drawing objects which are arranged pointing toward the artist. Knowledge of recursion helps the artist understand what they are seeing when a branch is seen from different angles. Again from the Nova episode, Mandelbrot said "think not of what you see, but what it took to produce what you see." Your natural drawings will be better because of it.
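To make the three elements concrete for students, here is a minimal sketch in Python (the function name and the text-tree representation are my own illustration, not taken from the Nova program): the rule refers to itself, depth reaching zero is the break case, and the result repeats the same branching pattern at every scale.

```python
def draw_tree(depth, leaf="*"):
    """A text 'tree' built by a self-referential rule."""
    if depth == 0:                      # break case: stop looping forever
        return leaf
    # self-reference: the same rule produces each smaller branch
    left = draw_tree(depth - 1, leaf)
    right = draw_tree(depth - 1, leaf)
    return "(" + left + " " + right + ")"

print(draw_tree(1))  # (* *)
print(draw_tree(2))  # ((* *) (* *)) -- the level-1 pattern appears twice
```

Students can raise the depth and watch the same pattern nest inside itself, which is exactly the self-similarity the video points out in real trees.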
Genus: Cells are equal in size (Illustrations of The Japanese Fresh-water Algae, 1977). Planktonic; colony within a gelatinous sheath; cell body spherical, elliptical or pear-shaped; 4 daughter cells attached to a filament dichotomously radiating from the center of the colony; a single chloroplast cup-shaped, with a pyrenoid. Images of collecting locality: Oisezuka Park, Isehara-machi, Kawagoe city, Saitama Pref., Japan, March 18, 2006 by Y. Tsukii
Hi, JNDI is an API which provides a connection object to the programmer. By using JNDI we can get a connection that the server manages: the server maintains a set of connections, which we call a connection-pooling mechanism. These pooled connections still use JDBC underneath, but we no longer need to write:

    Class.forName("oracle.jdbc.driver.OracleDriver");
    Connection con = DriverManager.getConnection(url, username, password);

Instead we just configure a data source and look it up (the JNDI name "jdbc/myDataSource" below is only an example; use whatever name you configured). On an Apache Tomcat server:

    Context context = new InitialContext();
    DataSource dataSource = (DataSource) context.lookup("java:comp/env/jdbc/myDataSource");
    Connection con = dataSource.getConnection();

On a WebLogic server the code is the same apart from the JNDI name the data source is bound to; the lookup will provide the Connection object.

To answer the other part of the question, about JDBC and Hibernate: Hibernate is an ORM tool. Hibernate internally uses the JDBC API, which generates the queries and prepares the connection objects, and JDBC in turn can obtain its connections through JNDI.

If you are facing any programming issue, such as compilation errors or not being able to find the code you are looking for, ask your questions and our development team will try to answer them.
The main thing that distinguishes spiders from the rest of the animal kingdom is their ability to spin silk, an extremely strong fiber. A few insects produce similar material (silkworms, for example), but nothing comes close to the spinning capabilities of spiders. Most species build their entire lives around this unique ability. Scientists don't know exactly how spiders form silk, but they do have a basic idea of the spinning process. Spiders have special glands that secrete silk proteins (made up of chains of amino acids), which are dissolved in a water-based solution. The spider pushes the liquid solution through long ducts, leading to microscopic spigots on the spider's spinnerets. Spiders typically have two or three spinneret pairs, located at the rear of the abdomen. Each spigot has a valve that controls the thickness and speed of the extruded material. As the spigots pull the fibroin protein molecules out of the ducts and extrude them into the air, the molecules are stretched out and linked together to form long strands. The spinnerets wind these strands together to form the sturdy silk fiber. Most spiders have multiple silk glands, which secrete different types of silk material optimized for different purposes. By winding different silk varieties together in varying proportions, spiders can form a wide range of fiber material. Spiders can also vary fiber consistency by adjusting the spigots to form smaller or larger strands. Some silk fibers have multiple layers -- for example, an inner core surrounded by an outer tube. Silk can also be coated with various substances suited for different purposes. Spiders might coat fiber in a sticky substance, for example, or a waterproof material. Spider silk is incredibly strong and flexible. Some varieties are five times as strong as an equal mass of steel and twice as strong as an equal mass of Kevlar.
This has attracted the attention of scientists in a number of fields, but up until recently, humans haven't been able to get much out of this natural resource. It's simply too hard to extract silk from spiders, and each spider has only a small amount of it. This may change in the near future. Researchers at a company called Nexia Biotechnologies have genetically modified goats using silk-producing genes from spiders. The hope is that a small number of goats will be able to produce a large amount of silk material in their milk. Engineers will be able to put this material to work in aircraft, bulletproof vests and artificial limbs, among other things.
Nov. 7, 2012 - Scientists and sky watchers are converging on the northeast coast of Australia, near the Great Barrier Reef, for a total eclipse of the sun on Nov. 13/14. For researchers, the brief minutes of totality open a window into some of the deepest mysteries of solar physics.
Oct. 31, 2012 - New results from NASA's Mars rover Curiosity show that the mineralogy of Martian soil is similar to weathered basaltic soils of volcanic origin in Hawaii.
Oct. 25, 2012 - Astronomers have caught a red giant star in the act of devouring one of its planets. It could be a preview of what will happen to Earth five billion years from now.
Oct. 12, 2012 - Earth is about to pass through a stream of debris from Halley's Comet, source of the annual Orionid meteor shower. Forecasters expect 25 meteors per hour when the shower peaks on Oct. 21st.
Sept. 28, 2012 - A NASA spacecraft has recorded eerie-sounding radio emissions coming from our own planet. These beautiful "songs of Earth" could, ironically, be responsible for the proliferation of deadly electrons in the Van Allen Belts.
Sept. 27, 2012 - Mars rover Curiosity has found evidence that a stream once ran vigorously across the area where the rover is now driving.
Sept. 21, 2012 - A former rock-n-roller turned NASA engineer explains why he thinks Curiosity--both the Mars rover and the human desire to learn new things--matters to ordinary people on Earth.
Sept. 14, 2012 - NASA's Mars rover Opportunity, still active after all these years, has just discovered a dense accumulation of puzzling little spheroids in a rock outcrop on the Red Planet.
Sept. 12, 2012 - Once, astronomers thought planets couldn't form around binary stars. Now Kepler has found a whole system of planets orbiting a double star. This finding shows that planetary systems are weirder and more abundant than previously thought.
Dwindling food supplies, caused by climate change, are threatening two species of penguin. Adélie and chinstrap penguins are both suffering thanks to falling availability of tiny shrimp-like crustaceans called krill that they eat. This is contrary to previous predictions, explained George Watters, director of the US National Oceanic and Atmospheric Administration (NOAA)'s Antarctic Ecosystem Research Division. That's because those predictions directly link the amount of sea ice cover to the number of penguins. "The prevailing 'sea-ice hypothesis' would say that chinstrap penguins might be expected to benefit from climate change because they are 'ice-avoiding' penguins," Watters told Simple Climate. By contrast, "ice-loving" Adélie penguin populations had been expected to fall as the planet warms and ice cover decreases. "But we're showing that in fact the populations of both species are declining," Watters said. "We think that the availability of krill is governing the decline of both animals." The sea-ice theory developed from some of the earliest penguin population studies in the 1970s and 1980s, showing that chinstrap penguin populations were increasing as Adélie penguins decreased. The team that did that work included Wayne Trivelpiece, lead author of the Proceedings of the National Academy of Sciences USA paper published on Monday that now shows both species are in decline. Both studies involved extensive fieldwork in the world's coldest continent. "Wayne has been going since the Antarctic was invented," Watters joked.
Husband and wife reunion
The NOAA team surveys penguin populations from field camps at Cape Shireff and Admiralty Bay in the South Shetland Islands region of Antarctica. "It's an ongoing long-term monitoring effort, a lot of really detailed work we do to study penguins that involves counting, weighing and tagging birds," Watters explained.
Also among the team doing this is Susan Trivelpiece, Wayne’s wife, who spends approximately half the annual October-March study period in Antarctica, rotating with her husband. “Usually Sue goes in first,” said Watters. “She comes out around Thanksgiving time and then Wayne goes in.” As I interviewed Watters the couple were departing for a long weekend reunion at the end of another year’s separation. As well as their own detailed records, the NOAA scientists also called on population studies made by other researchers who visited penguin habitats at a wider range of locations. “The other kind of study, the way a lot of those work is somebody can get to a colony once and count the birds there, then go back after some period of time and recount them” Watters said. With some colonies boasting populations in the millions, the counting process isn’t necessarily simple. “If it’s small enough, you break it up into grid lines visually and just count the birds,” Watters said. “Another way is with aerial photography.” A third option is to walk around the edge of the colony with a GPS receiver to determine its size, Watters added. Then a scientist can work out how many birds are typically in a given area, and then multiply that by the size of the whole colony. Reversal of fortune Watters, the Trivelpieces and their colleagues combined these studies with measures of krill abundance, air temperature changes and sea-ice levels in the region since the 1970s to explore the relationship between these factors and penguin abundance. These revealed that the availability of krill – the penguins’ main food source – could influence penguin populations more directly than access to sea-ice. That means that both penguin species would suffer as sea-ice disappears, because krill need it to reproduce successfully. Overall, the studies showed that rather than growing as had been predicted, populations of chinstrap penguins were less than half their 1977 levels. 
The situation is therefore “particularly critical” for them, the team say, as they haven’t been seen to adapt to other locations. Consequently the team recommend that the International Union for the Conservation of Nature consider putting chinstraps higher on their “at risk” Red List. While Adélie penguins had suffered a similar fate in the rapidly-warming West Antarctic Peninsula, climate change has helped them elsewhere. “Adélie populations in the Ross Sea area are actually growing right now,” Watters explained, although he warned that they may ultimately have problems if the ice is not where they need to feed in winter. “In Antarctica, climate change is supposed to play out so that you’ll have more ice in some areas and less in other areas. The Ross Sea is an area where you’re going to have more ice and possibly as a result of that better production of krill, so Adélie penguins are doing well there for now.”
palladium (Pd), chemical element, least dense and lowest-melting of the platinum metals of Groups 8–10 (VIIIb), Periods 5 and 6, of the periodic table, used especially as a catalyst (a substance that speeds up chemical reactions without changing their products) and in alloys. A precious, gray-white metal, palladium is extremely ductile and easily worked. Palladium is not tarnished by the atmosphere at ordinary temperatures. Thus, the metal and its alloys serve as substitutes for platinum in jewelry and in electrical contacts; the beaten leaf is used for decorative purposes. Relatively small amounts of palladium alloyed with gold yield the best white gold. Palladium is used also in dental alloys. The largest use of the pure metal is for electrical contacts in telephone equipment. Palladium coatings, electrodeposited or chemically plated, have been used in printed-circuit components. Native palladium, though rare, occurs alloyed with a little platinum and iridium in Colombia (department of Chocó), in Brazil (Itabira, Minas Gerais), in the Ural Mountains, and in South Africa (the Transvaal). Palladium is one of the most abundant platinum metals and occurs in the Earth's crust at an abundance of 0.015 parts per million. For the mineralogical properties of palladium, see native element (table). Palladium also occurs alloyed with native platinum. It was first isolated (1803) from crude platinum by the English chemist and physicist William Hyde Wollaston. He named the element in honour of the newly discovered asteroid Pallas. Palladium is also associated with a number of gold, silver, copper and nickel ores. It is generally produced commercially as a by-product in the refining of copper and nickel ores. Surfaces of palladium are excellent catalysts for chemical reactions involving hydrogen and oxygen, such as the hydrogenation of unsaturated organic compounds.
Under suitable conditions (80° C and 1 atmosphere), palladium absorbs more than 900 times its own volume of hydrogen; it expands and becomes harder, stronger, and less ductile in the process. The absorption also causes both the electrical conductivity and magnetic susceptibility to decrease. A metallic or alloylike hydride is formed from which the hydrogen can be removed by increased temperature and reduced pressure. Because hydrogen passes rapidly through the metal at high temperatures, heated palladium tubes impervious to other gases function as semipermeable membranes and are used to pass hydrogen in and out of closed gas systems or for hydrogen purification. Palladium is more reactive than the other platinum metals; for example, it is attacked more readily by acids than any of the other platinum metals. It dissolves slowly in nitric acid to give palladium nitrate, Pd(NO3)2, and with concentrated sulfuric acid it yields palladium sulfate, PdSO4∙2H2O. In its sponge form it will dissolve even in hydrochloric acid in the presence of chlorine or oxygen. It is rapidly attacked by fused alkali oxides and peroxides and also by fluorine and chlorine at about 500° C. Palladium also combines with a number of nonmetallic elements on heating, such as phosphorus, arsenic, antimony, silicon, sulfur, and selenium. A series of palladium compounds can be prepared with the +2 oxidation state; numerous compounds in the +4 state and a few in the 0 state are also known. Among the transition metals palladium has one of the strongest tendencies to form bonds with carbon. All palladium compounds are easily decomposed or reduced to the free metal. An aqueous solution of potassium tetrachloropalladate, K2PdCl4, serves as a sensitive detector for carbon monoxide or olefin gases because a black precipitate of the metal appears in the presence of exceedingly small amounts of those gases. 
Natural palladium consists of a mixture of six stable isotopes: palladium-102 (0.96 percent), palladium-104 (10.97 percent), palladium-105 (22.23 percent), palladium-106 (27.33 percent), palladium-108 (26.71 percent), and palladium-110 (11.81 percent).

melting point: 1,552° C (2,826° F)
boiling point: 2,927° C (5,301° F)
specific gravity: 11.97 (0° C)
oxidation states: +2, +4
The Free Fall model allows the user to examine the motion of an object in free fall. This is simply one-dimensional motion (vertical motion) under the influence of gravity. The Free Fall model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_bu_freefall.jar file will run the program if Java is installed. Please note that this resource requires at least version 1.5 of Java.

Free Fall Model Source Code
The source code zip archive contains an XML representation of the Free Fall model. Unzip this archive in your EJS workspace to compile and run this model. (download 4kb .zip)
Published: April 25, 2010

6-8: 4B/M3. Everything on or anywhere near the earth is pulled toward the earth's center by gravitational force.
4G. Forces of Nature
9-12: 4G/H1. Gravitational force is an attraction between masses. The strength of the force is proportional to the masses and weakens rapidly with increasing distance between them.
11. Common Themes
6-8: 11B/M1. Models are often used to think about processes that happen too slowly, too quickly, or on too small a scale to observe directly. They are also used for processes that are too vast, too complex, or too dangerous to study.
6-8: 11B/M2. Mathematical models can be displayed on a computer and then modified to see what happens.

Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.4 Model with mathematics.
High School — Algebra (9-12)
Creating Equations (9-12)
A-CED.1 Create equations and inequalities in one variable and use them to solve problems. Include equations arising from linear and quadratic functions, and simple rational and exponential functions.
Reasoning with Equations and Inequalities (9-12)
A-REI.3 Solve linear equations and inequalities in one variable, including equations with coefficients represented by letters.
High School — Functions (9-12)
Linear, Quadratic, and Exponential Models (9-12)
F-LE.1.b Recognize situations in which one quantity changes at a constant rate per unit interval relative to another.

Citation: Duffy, Andrew (April 16, 2010). Free Fall Model. http://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639
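The physics the model animates can be sketched in a few lines. This is a generic illustration of one-dimensional free fall under constant gravity with no air resistance, written in Python; it is not the EJS source code itself, and the function name is my own:

```python
G = 9.8  # m/s^2, magnitude of gravitational acceleration near Earth's surface

def free_fall(y0, t, g=G):
    """Height and velocity after t seconds for an object dropped from
    rest at height y0: y(t) = y0 - g*t^2/2, v(t) = -g*t."""
    y = y0 - 0.5 * g * t * t
    v = -g * t
    return y, v

y, v = free_fall(y0=20.0, t=1.0)
print(y, v)  # y is about 15.1 m, v is -9.8 m/s after one second
```

Plotting y against t for a range of times reproduces the parabolic fall the simulation displays.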
5.23.2 Object-Oriented Terminology
This section is mainly for reference, so you don't have to understand all of it right away. The terminology is mainly Smalltalk-inspired.

class - a data structure definition with some extras.
object - an instance of the data structure described by the class definition.
instance variables - fields of the data structure.
selector (or method selector) - a word (e.g., draw) that performs an operation on a variety of data structures (classes). A selector describes what operation to perform. In C++ terminology: a (pure) virtual function.
method - the concrete definition that performs the operation described by the selector for a specific class. A method specifies how the operation is performed for a specific class.
selector invocation - a call of a selector. One argument of the call (the TOS (top-of-stack)) is used for determining which method is used. In Smalltalk terminology: a message (consisting of the selector and the other arguments) is sent to the object.
receiving object - the object used for determining the method executed by a selector invocation. In the objects.fs model, it is the object that is on the TOS when the selector is invoked. (Receiving comes from the Smalltalk message terminology.)
child class - a class that has (inherits) all properties (instance variables, selectors, methods) from a parent class. In Smalltalk terminology: The subclass inherits from the superclass. In C++ terminology: The derived class inherits from the base class.
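The objects.fs model is Forth-based, but the same terminology can be illustrated with a loose Python analogy (my own illustration, not how objects.fs is implemented): the selector draw is resolved against the receiving object, so the child class's method runs.

```python
class Shape:                  # class: a data-structure definition with extras
    def __init__(self):
        self.x = 0            # instance variable: a field of the structure
    def draw(self):           # 'draw' names the selector; this is Shape's method
        return "shape at " + str(self.x)

class Circle(Shape):          # child class: inherits from the parent class
    def draw(self):           # Circle's own method for the same selector
        return "circle at " + str(self.x)

obj = Circle()                # object: an instance of the class
# selector invocation: the receiving object (obj) determines which
# method runs -- Circle's, not Shape's
print(obj.draw())  # circle at 0
```

In objects.fs the receiving object sits on the TOS rather than appearing before a dot, but the dispatch idea is the same.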
[Haskell-cafe] Why 'round' does not just round numbers ?
ok at cs.otago.ac.nz
Mon Oct 27 21:44:17 EDT 2008

On 27 Oct 2008, at 11:00 pm, Henning Thielemann wrote:
> On Mon, 27 Oct 2008, L.Guo wrote:
>> I think this is unresonable. then try it in GHC 6.8.3.
>> Prelude> round 3.5
>> Prelude> round 2.5
>> Is there any explanation about that ?
> It's the definition we learnt in school ...

... where you will find that the version of rounding "generally taught in elementary schools" is the one used in Pascal, namely

    round(X) = truncate(X + 0.5*sign(X))

(The Wikipedia entry says that Pascal uses a different algorithm, but ISO 7185 says "If x is positive or zero, round(x) shall be equivalent to trunc(x+0.5); otherwise, round(x) shall be equivalent to trunc(x-0.5).")

> I think one reason is that repeated rounding should not be worse
> than rounding in one go.

That would be nice, but while round-to-even _reduces_ the harm from double rounding, it does not eliminate it.
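Python 3's built-in round happens to use the same round-half-to-even ("banker's") rule as Haskell's round, so both behaviours in the thread can be reproduced side by side (pascal_round is my name for the schoolbook rule ISO 7185 prescribes):

```python
import math

def pascal_round(x):
    """The schoolbook rule used by Pascal: trunc(x + 0.5*sign(x))."""
    return math.trunc(x + math.copysign(0.5, x))

# round-half-to-even: exact halves go to the nearest even integer
print(round(3.5), round(2.5))                # 4 2

# schoolbook rounding: exact halves always go away from zero
print(pascal_round(3.5), pascal_round(2.5))  # 4 3
```

The surprising `round 2.5` result in GHC is therefore not a bug; it is the IEEE 754 default rounding rule, chosen so that halves do not drift upward on average.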
When you think of Sir Isaac Newton, you might imagine a man sitting under an apple tree. It is said that Newton discovered gravity when an apple fell on his head and he wondered what made the apple fall down instead of float away or go up. In 1687, Newton published a book with his explanation of how an object, or matter, moves. The book, entitled "Principia," is important to science because in it he explained three laws of motion and the Universal Law of Gravity. Newton's Laws of Motion describe the relationships between motion, matter and force.

First law of motion (also known as the law of inertia): An object that is not moving will not move until a force makes it move. An object that is moving will continue to move at a constant speed and direction until a force causes it to change.
Second law of motion: The force of an object equals its mass times its acceleration.
Third law of motion (also known as the law of action and reaction): For every action there is an equal and opposite reaction.

Newton's second law is a formula. If force equals mass times acceleration, then acceleration equals force divided by mass. When the formula is written this way, it explains that an object's speed, or velocity, will depend on its mass and the force that is applied to it. Newton's laws are important to NASA. They determine how NASA launches rockets, flies airplanes, and conducts tests and experiments. These laws also help in understanding other scientific principles, such as how planets orbit the sun. The laws of motion work in space and on Earth. Gravity is a strong force that we encounter on Earth. On the International Space Station, demonstrating these laws is easier because of microgravity.
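The rearranged second-law formula can be checked in a couple of lines of Python (the numbers are made-up examples, not from the article):

```python
def acceleration(force_n, mass_kg):
    """Newton's second law rearranged: a = F / m."""
    return force_n / mass_kg

# the same 10 N force produces less acceleration on a larger mass
print(acceleration(10.0, 2.0))  # 5.0 (m/s^2)
print(acceleration(10.0, 5.0))  # 2.0 (m/s^2)
```

Doubling the mass while keeping the force fixed halves the acceleration, which is exactly what the formula a = F / m says.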
THE first "doubly magic" nickel nuclei have popped into existence in France. The nuclei can be used to test competing theories of nuclear forces and even set the stage for detecting a new form of radioactivity. Light atoms tend to break apart when there is an imbalance in the numbers of protons and neutrons. But half a century ago, the German-American Nobel prizewinner Maria Goeppert-Mayer showed that their stability depends on exactly how many protons and neutrons they have. Quantum rules force protons and neutrons into nested shells inside the nucleus, and when a shell has been filled with its so-called "magic number" of protons or neutrons, it settles into a stable shape. Some theories predict that because it has 28 protons and 20 neutrons (both magic numbers), nickel-48 should have a just-detectable lease on life, rather than flying apart the instant it forms.
Name: Rithy Ngy
Before the discovery of radioactivity, how did scientists estimate the age of rock layers?

Thanks for your question. Before the discovery of radioactivity, scientists determined the relative age of fossils by looking at where they were found in the rock layers. The idea was that deeper rocks must be older than those found higher up, because the deeper rock must have been laid down before the higher rock could be laid on top of it. This was made easier in sedimentary rocks because these rocks tend to form in distinct layers, with all the rock from a particular layer being about the same age. Relative dating tells you nothing about the actual age of the rocks, and before the advent of radiometric dating people could only guess at the age of a rock. This made it possible for people to claim that the earth was only 12 000 years old, and most people believed this until someone pointed out that, given what we know about the rate at which rock and geological features are formed, the earth would have to be much, much older than this to have reached the state it is in today. This was an important observation and led to much speculation about how old the earth would have to be to reach its current state. However, it was not until the discovery of radiation and radiometric dating that the age of a rock could be predicted with any accuracy at all. I hope this is of some help to you.
Update: June 2012
A number written in the place-value notation with base 10. For example 5.321 The method of positional notation used in real number system. A real number can be either rational or irrational. A rational number can be written as a terminating decimal or repeating decimal number. For example, 3/4 = 0.75 is a terminating decimal number and is a repeating decimal number where 3 is repeating forever and never an irrational number which is neither terminating nor repeating. A number system using base ten. A degree is a measure of angle forming by two rays. A complete revolution is 360o. therefore the measure of a right angle is 900. Radian is a different measure of angle, in which is the measure of a complete revolution. allows us to convert from one measure to the other. The bottom number (or expression) in a fraction. For example, in the fraction, 3/5, 5 is the denominator. of a monomial The degree of a monomial in one variable is the value of the exponent of the variable. The degree of a monomial with more than one variable is the sum of the exponents of the variables in that monomial. For example in 4x2y3z , x has degree 2 , y degree 3 and z degree 1. The degree of the whole term is the sum of these exponents, in this case 2+3+1 of a polynomial The degree of a polynomial is the degree of the nonzero term with the highest degree. For instance, x+xy+2y+5 is a polynomial of second degree, since xy has degree 2. A line segment connecting any two vertices of a polygon that are not lying on the same side of the polygon. A chord passing through the center of a circle or sphere. Any of the ten Arabic numerals of the decimal system, e.g., 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 The number of coordinates required to locate a point, usually in a plane or a expression, , associated to the general quadratic equation . Solutions of a quadratic equation are distinct real numbers, equal real numbers or conjugate complex numbers when The length of shortest line segment joining the given points. 
Distributive property: An axiom of a particular formal system stating, for a given pair of operators, that one distributes over the other. In arithmetic (and algebra), multiplication is distributive over addition; symbolically, a(b + c) = ab + ac. This important and useful law allows us to replace an expression containing parentheses with an equivalent one without them, simplifying the expression.
Division: The inverse operation of multiplication. For instance, in a ÷ b, a is the dividend and b is the divisor.
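The discriminant entry above lends itself to a short worked example. A minimal sketch (the function name is ours, not the glossary's):

```python
# Classify the roots of ax^2 + bx + c = 0 using the discriminant b^2 - 4ac.
def root_kind(a, b, c):
    d = b * b - 4 * a * c
    if d > 0:
        return "two distinct real roots"
    if d == 0:
        return "one repeated real root"
    return "two conjugate complex roots"

# x^2 - 3x + 2 = (x - 1)(x - 2): discriminant 9 - 8 = 1 > 0.
kind = root_kind(1, -3, 2)   # "two distinct real roots"
```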
Search our database of handpicked sites: Looking for a great physics site? We've tracked down the very best and checked them for accuracy. We found 7 results on physics.org and 42 results in our database of sites (42 are websites, 0 are videos, and 0 are experiments).

Search results from our links database:
- All about the experiment which first suggested that neutrinos could travel faster than light, and the implications of this.
- The IceCube Observatory will be looking for neutrinos. Buried under over 1 km of ice, the detector will be looking for flashes of light from the decay of neutrinos.
- The Sudbury Neutrino Observatory (SNO) is taking data that has provided revolutionary insight into the properties of neutrinos and the core of the Sun.
- Detailed info on neutrinos, with links to more information.
- This site explains in a colourful way the discovery of neutrinos. They are one of the fundamental particles which make up the universe. They are also one of the least understood. Wolfgang Pauli first suggested, around 1930, that neutrinos might exist, because without them, momentum didn't seem to be conserved in nuclear reactions. Neutrinos were actually detected 10 years ...
- A long and in-depth article about neutrinos, how scientists detect them, and questions about neutrinos scientists are trying to solve.
- IceCube, a telescope under construction at the South Pole, will search for neutrinos from the most violent astrophysical sources: events like exploding stars, gamma ray bursts, and cataclysmic ...
- Physicist Brian Greene addresses the question of faster-than-light neutrinos at a Q&A session.
- The origins of cosmic rays have finally been found using the Fermi Gamma-ray Space Telescope.
Will we grow babies outside their mothers' bodies? By Gretchen Reynolds. Posted 08.01.2005 at 2:00 pm.
A fetus lives in a world of bubbles. In its earliest days, it's shaped like one. Later, it floats in one: the squishy, enveloping amniotic sac. And eventually, if all goes well, the fetus releases one bubble of fluid, then another and another, like smoke signals, as it puckers and swallows and floats in the womb. It was the bubbles that first convinced Hung-Ching Liu two years ago that a baby might actually be grown outside its mother's uterus.
TNRTB Archive - Retained for reference information
A team of European and American conservation biologists has developed the most powerful and comprehensive field evidence to date for a major tenet of RTB's speciation model. RTB's model posits that large-body-sized species manifest a high probability for rapid extinction and a very low probability for speciation and, thus, are not candidates for natural evolution. The biologists' study of 4,000 land mammal species spanning a body mass range from 2 grams to 4,000 kilograms showed that the slope of extinction risk against six established predictors of extinction becomes steeper with increasing body mass. In particular, a sharp increase in extinction risk occurs at a body mass of three kilograms. Above this body mass, "extinction risk begins to be compounded by the cumulative effects of multiple threatening factors," the authors note. The team's study establishes that land mammals with large body sizes possess extinction rates that are orders of magnitude larger than the most optimistic speciation rates. Consequently, mammals with large body sizes cannot be the product of natural process evolution.
Reference:
- Marcel Cardillo et al., "Multiple Causes of High Extinction Risk in Large Mammal Species," Science 309 (2005): 1239-41.
Related Resource:
- Hugh Ross, "The Faint Sun Paradox"
Posts Tagged 'Climate Change'

A Brief Description of Global Warming

Global warming is the increase of the average temperature of the Earth as a result of the greenhouse gas (GHG) effect.

How the greenhouse effect occurs: About 30 percent of the sunlight that beams toward Earth is deflected by the outer atmosphere and scattered back into space. When the rest, about 70 percent, reaches the Earth's surface, some is absorbed and warms the Earth, and some is radiated back to the atmosphere as slow-moving energy called infrared radiation. Some of this infrared radiation is absorbed by greenhouse gases and trapped in the atmosphere before it escapes to space. These greenhouse gases act like a mirror: some of the heat radiation which would otherwise exit to space is reflected back to the Earth. This reflecting back of heat radiation from the atmosphere is called the "greenhouse effect". The thicker the greenhouse gases, the more heat radiation is reflected back to the Earth.

How greenhouse gases occur: Greenhouse gases are gases that exist in the Earth's atmosphere. Some occur naturally, such as water vapor, carbon dioxide, methane, nitrous oxide, and ozone. Others, such as hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), sulfur hexafluoride (SF6), and CFCs, are generated solely by human industrial processes, which also destroy the ozone layer. Greenhouse gases can stay in the atmosphere for anywhere from decades to hundreds or thousands of years.

The role of greenhouse gases in global warming: Greenhouse gases that occur naturally are needed to create the greenhouse effect that keeps the Earth warm enough to support life, but human activities, especially in the last 50 years, have been releasing massive greenhouse gas emissions into the atmosphere, where they accumulate excessively. This causes extra heat that leads to global warming.

There are still debates among scientists on this global warming phenomenon, but the IPCC's scientists declare with near certainty (more than 90 percent confidence) that carbon dioxide and other greenhouse gases arising from human activities are the main cause of global warming since 1950.

Human activities that contribute huge emissions: Ever since the global warming phenomenon described above was raised, studies of emission generation have been widely established, and the human activities that generate the most emissions are verified to be:

Daily activities that cause fossil fuel combustion: According to the Intergovernmental Panel on Climate Change (IPCC), since the industrial revolution began in 1750, carbon dioxide levels have increased 35 percent and methane levels have increased 148 percent.

Deforestation: As trees absorb carbon dioxide and convert it to oxygen, they help lessen greenhouse gases. Deforestation is commonly done to harvest timber and clear land for agriculture.

Agriculture: Some farming practices and land-use changes increase the levels of methane and nitrous oxide. During the last 150 years, the level of atmospheric methane has risen by 151 percent, mostly from agricultural activities such as raising cattle and growing rice.

Population growth: Another factor that boosts greenhouse gases is population growth: as more people use fossil fuels, the level of greenhouse gases continues to rise, and the more farmland is made to feed millions of new people, the more deforestation is done, releasing more greenhouse gases into the atmosphere.

You might also be interested to learn how to fight global warming.
Through photosynthesis, plants use carbon dioxide to make oxygen and help regulate the amount of both gases in the atmosphere. Since plants grow faster and use more carbon dioxide when carbon dioxide levels are high, some people believe that plants can absorb much of the excess carbon dioxide produced by burning fossil fuels. Dr. Bert Drake, plant physiologist at the Smithsonian Environmental Research Center near Annapolis, Maryland, has studied plant responses to carbon dioxide under controlled conditions longer than anyone else. He's found that growing conditions such as the amount of rainfall can alter plants' responses to carbon dioxide. Dr. Drake warns that there are limits to plant growth and to plants' ability to remove carbon dioxide from the atmosphere.
After reporting a few months ago on the rise of carbon dioxide in the atmosphere, I got e-mails from readers asking where they could find more information about the basics of climate science. My interlocutors even included climate-change contrarians who seemed open to the possibility that they might be wrong. I found myself struggling with the question of where to send them. The Web is chockablock with blog posts and other material about climate change, of course, but picking your way through that to the actual science, or even to reliable write-ups on what the science means, is no easy task. Likewise, hundreds of books about climate change have been published, but not that many of them lay out the basics of the problem in a clear, understandable way. Still fewer provide any rich sense of the history of how the science came to exist in its present form. The Web does have some excellent resources, to be sure. I often send people to Climate Central, a fine site based in Princeton that works to translate climate science into understandable prose. For people starting from a contrarian bent, nothing beats Skeptical Science, a Web site that directly answers various skeptic talking points, with links to some of the original science. And Real Climate is a must-read, since it includes some of the world’s top climate scientists translating their research into layman’s language. Still, for wrapping our minds around a subject, many of us want to flee the Web and curl up with a good book. So I was enthused recently when “The Warming Papers” came to my attention. A hefty new volume published by Wiley-Blackwell and edited by the climate scientists David Archer and Raymond Pierrehumbert at the University of Chicago, it’s a rich feast for anyone who wants to trace the history of climate science from its earliest origins to the present. 
(Note that it’s a pricey book, north of $60 in paperback and closer to $150 in hardback — so perhaps it won’t be an impulse purchase for many people. But I suspect well-stocked libraries will have it, and even if yours doesn’t, you should be able to get a copy through interlibrary loan. And the book might work for college classes in climate science; by textbook standards, $60 is a steal.) The idea of the book is to present the touchstone scientific papers in the field, all of which have in some way stood the test of time, even if not in all their details. The book begins, for instance, by reprinting the 1827 paper “On the Temperatures of the Terrestrial Sphere and Interplanetary Space,” in which Joseph Fourier discovered the phenomenon we now call the greenhouse effect. It includes an 1861 paper in which John Tyndall measured, with considerable precision, the heat-trapping powers of water vapor, carbon dioxide and other trace gases in the atmosphere, and speculated that changing the concentration of some gases might alter the Earth’s climate. And most delightfully, the editors included the 1896 paper in which a Swedish scientist, Svante Arrhenius, spelled out the implications of an increase in atmospheric carbon dioxide. Arrhenius did the hard mathematics to predict what might happen to the temperature of the planet if the carbon dioxide level doubled. Although he made some errors, he came up with a number, 11 degrees Fahrenheit, that is in the same range as modern forecasts, albeit at the high end of most of them. The editors write, “In Arrhenius’ 1896 paper we witness the birth of modern climate science.” “The Warming Papers” goes on to reprint many of the seminal modern papers on climate change, including reports from Charles David Keeling about his pioneering measurements of carbon dioxide. 
It includes papers from the 1960s and 1970s in which Syukuro Manabe and his colleagues at the Geophysical Fluid Dynamics Laboratory in Princeton worked out the mathematics needed to build computerized models of the atmosphere. Those have become fundamental tools of the science. Dr. Pierrehumbert pointed out to me in an e-mail that reading the older papers should dispel for anyone the oft-heard claim “that climate science is ‘in its infancy’ ” — it has, in reality, been making predictions since the 19th century, and we are now living in an era when those predictions are coming true. “It’s exciting to read original scientific papers, to follow along as people struggle with figuring things out,” Dr. Archer told me in an e-mail. “For the climate-change question, there is another point to be made, about how deep the roots of the ideas go, how far back in time. The forecast for global warming predates the actual anomalous warming by many decades.”
To achieve interstellar travel, the Kline Directive instructs us to be bold, to explore what others have not, to seek what others will not, to change what others dare not. To extend the boundaries of our knowledge, to advocate new methods, techniques and research, to sponsor change not status quo, on 5 fronts: 1. Legal Standing. 2. Safety Awareness. 3. Economic Viability. 4. Theoretical-Empirical Relationship. 5. Technological Feasibility. In Part 1 of this post I will explore the Theoretical-Empirical Relationship. Not theoretical relationships, not empirical relationships, but theoretical-empirical relationships. To do this, let us remind ourselves what the late Prof. Morris Kline was getting at in his book Mathematics: The Loss of Certainty: that mathematics has become so sophisticated and so very successful that it can now be used to prove anything and everything, and therefore there is a loss of certainty that mathematics will provide reasonability in guidance and correctness in answers to our questions in the sciences. The history of science shows that all three giants of science of their times, Robert Boyle, Isaac Newton and Christiaan Huygens, believed that light traveled in an aether medium, but by the end of the 19th century there was enough experimental evidence to show aether could not be a valid concept. The primary experiment that changed our understanding of aether was the Michelson-Morley experiment of 1887, which once and for all proved that aether did not have the correct properties as the medium in which light travels. Only after these experimental results were published did a then-unknown Albert Einstein invent the Special Theory of Relativity (SRT) in 1905. The important fact to take note of here is that Einstein did not invent SRT out of thin air, as many non-scientists and scientists today believe.
He invented SRT by examining the experimental data and putting forward a hypothesis, described in mathematical form, of why the velocity of light is constant in every direction, independent of the direction of relative motion. But he also had clues from others, namely George Francis FitzGerald (1889) and Hendrik Antoon Lorentz (1892), who postulated length contraction to explain the negative outcome of the Michelson-Morley experiment and to rescue the 'stationary aether' hypothesis. Today their work is named the Lorentz-FitzGerald transformation. So Einstein did not invent the Special Theory of Relativity (SRT) out of thin air; there was a body of knowledge and hypotheses already in the literature. What Einstein did do was to pull all this together in a consistent and uniform manner that led to further correct predictions of how the physics of the Universe works. (Note: I know my history of science in certain fields of endeavor, and therefore use Wikipedia a lot, not as a primary reference, but as a starting point for the reader to take off for his/her own research.) Benjamin T Solomon is the author & principal investigator of the 12-year study into the theoretical & technological feasibility of gravitation modification, titled An Introduction to Gravity Modification, to achieve interstellar travel in our lifetimes. For more information visit iSETI LLC, Interstellar Space Exploration Technology Initiative. Solomon is inviting all serious participants to his LinkedIn Group Interstellar Travel & Gravity Modification.
Probably the most frequent means of drawing a right triangle is when the lengths of the sides of the right triangle have been calculated, and those dimensions are used to lay out the triangle. For instance, you know a 45° right triangle with a leg length of 4" is needed. You also know that the legs of a 45° right triangle are perpendicular and are equal in length. This knowledge is used in drawing that triangle. First: Use a framing square or a compass to draw perpendicular lines. Second: Set the dimension of your compass for 4" and mark both lines from the vertex of the 90° angle. Third: Connect the points and you will have drawn a 45° right triangle. For any right triangle, if you know the lengths of the legs, you can draw perpendicular lines, measure and mark the needed distance for each leg from the vertex of the angle, and connect the marks. Example: Draw a 26.5° right triangle with an opposite side length of 6". In order to draw this triangle, the length of the adjacent side needs to be known. First: Calculate the length of the adjacent side. A look at the NAK chart shows that if your knowns are an angle and the length of the opposite side, and your need is the length of the adjacent side, the cotangent function is used for your calculation. Cot 26.5° x the length of the opposite side = the length of the adjacent side. Cot 26.5° x 6" = the length of the adjacent side. 2.006 x 6" = the length of the adjacent side. 12.036" or 12 1/32" = the length of the adjacent side. Second: Draw perpendicular lines using a compass or a framing square. Third: Measure and mark 12 1/32" from the vertex of the angle on the horizontal line and 6" on the vertical line. Fourth: Connect the two marks. Protractors don't generally offer a high degree of accuracy. If accuracy in your work is critical, then the ability to calculate and mark lengths for the sides of right triangles is also critical. For instance, if the length of the adjacent side was 12" instead of 12 1/32", the angle would be 26.6°. For some of our work, a tenth of a degree makes little difference, but in other cases it does. Be aware and make calculations based on the accuracy needed for your work.
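The cotangent calculation above can be checked numerically, using the identity cot θ = 1/tan θ. A small sketch (the function name is ours):

```python
import math

# Adjacent side of a right triangle from one acute angle and the opposite side:
# adjacent = opposite * cot(angle) = opposite / tan(angle).
def adjacent_from_opposite(angle_deg, opposite):
    return opposite / math.tan(math.radians(angle_deg))

adj = adjacent_from_opposite(26.5, 6.0)        # ~12.034", i.e. about 12 1/32"
angle = math.degrees(math.atan(6.0 / 12.0))    # ~26.57 deg if the adjacent side were 12"
```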
Loss of genetic diversity and fitness in Common Toad (Bufo bufo) populations isolated by inimical habitat
Article first published online: 23 NOV 2002
Journal of Evolutionary Biology, Volume 11, Issue 3, pages 269-283, May 1998
How to Cite: Hitchings, S. P. and Beebee, T. J. C. (1998), Loss of genetic diversity and fitness in Common Toad (Bufo bufo) populations isolated by inimical habitat. Journal of Evolutionary Biology, 11: 269-283. doi: 10.1046/j.1420-9101.1998.11030269.x
Received 9 January 1997; revised 20 February 1997; accepted 26 March 1997.
Keywords: Population genetics
Abstract: Measures of genetic diversity (including heterozygosity), survival and developmental homeostasis were found to be significantly lower in small, urban populations of the Common Toad (Bufo bufo) than in larger, rural populations of the same region. The autecology and genetic analysis of this relatively sedentary species suggested that the causal mechanism was genetic drift, arising from barriers to migration created by urban development. The pre-metamorphic survival of larvae cultured in identical conditions increased positively with the mean number of alleles at a locus and the percentage of polymorphic loci. Observed heterozygosity in urban garden and rural populations was correlated inversely with the number of observed physical abnormalities (used as a measure of developmental homeostasis) in the developing tadpoles. Genetic distances between town sites of mean 2.2 km separation were significantly higher than those between rural sites of mean 37 km separation. Genetic data were based on allozyme analysis of 27 loci in 8 urban and 4 rural populations. A subset of these sites (3 urban, 2 rural) was also assessed at 3 minisatellite loci, and a positive correlation was found between the average number of alleles per locus detected by the two methods.
Estimates of Nei's 1972 genetic distance, derived separately from the DNA and protein data, were not, however, correlated. The reduction in genetic diversity and fitness observed in these urban toads provides an example of the effect on population persistence that longer term depletion in numbers and habitat fragmentation can have in the wider environment.
Cliff, on the other hand, remains on the Earth. When Biff returns, it turns out that Cliff is 95 years old.

What Happened? According to relativity, two frames of reference that move differently from each other experience time differently, a process known as time dilation. Because Biff was moving so rapidly, time was in effect moving slower for him. This can be calculated precisely using Lorentz transformations, which are a standard part of relativity.

Twin Paradox One: The first twin paradox isn't really a scientific paradox, but a logical one: How old is Biff? Biff has experienced 25 years of life, but he was also born the same moment as Cliff, which was 90 years ago. So is he 25 years old or 90 years old? In this case, the answer is "both" ... depending on which way you're measuring age. According to his driver's license, which measures Earth time (and is no doubt expired), he's 90. According to his body, he's 25. Neither age is "right" or "wrong," although the social security administration might take exception if he tries to claim benefits.

Twin Paradox Two: The second paradox is a bit more technical, and really comes to the heart of what physicists mean when they talk about relativity. The entire scenario is based on the idea that Biff was traveling very fast, so time slowed down for him. The problem is that in relativity, only the relative motion is involved. So what if you considered things from Biff's point of view? Then he stayed stationary the whole time, and it was Cliff who was moving away at rapid speeds. Shouldn't calculations performed in this way mean that Cliff is the one who ages more slowly? Doesn't relativity imply that these situations are symmetrical? Now, if Biff and Cliff were on spaceships traveling at constant speeds in opposite directions, this argument would be perfectly true. The rules of special relativity, which govern constant-speed (inertial) frames of reference, indicate that only the relative motion between the two is what matters.
In fact, if you're moving at a constant speed, there's not even an experiment that you can perform within your frame of reference which would distinguish you from being at rest. (Even if you looked outside the ship and compared yourself to some other constant frame of reference, you could only determine that one of you is moving, but not which one.) But there's one very important distinction here: Biff is accelerating during this process. Cliff is on the Earth, which for the purposes of this discussion is basically "at rest" (even though in reality the Earth moves, rotates, and accelerates in various ways). Biff is on a spaceship which undergoes intensive acceleration to reach near lightspeed. This means, according to general relativity, that there are actually physical experiments that could be performed by Biff which would reveal to him that he's accelerating ... and the same experiments would show Cliff that he's not accelerating (or at least accelerating much less than Biff is). The key feature is that while Cliff is in one frame of reference the entire time, Biff is actually in two frames of reference - the one where he's traveling away from the Earth and the one where he's coming back to the Earth. So Biff's situation and Cliff's situation are not actually symmetrical in our scenario. Biff is absolutely the one undergoing the more significant acceleration, and therefore he's the one who undergoes the least amount of time passage.
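The time dilation at the heart of the story can be sketched with the Lorentz factor. The cruising speed and durations below are illustrative, not figures from the article, and this inertial-frame formula ignores the acceleration phases discussed above:

```python
import math

def proper_time(earth_years, v_over_c):
    """Years elapsed for a traveler cruising at a constant speed v
    (given as a fraction of c) while `earth_years` pass on Earth.
    Special-relativistic time dilation: tau = t / gamma."""
    gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)
    return earth_years / gamma

tau = proper_time(10.0, 0.99)   # ~1.41 traveler-years per 10 Earth-years at 0.99c
```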
One can often hear that OOP naturally corresponds to the way people think about the world. But I would strongly disagree with this statement: We (or at least I) conceptualize the world in terms of relationships between things we encounter, but the focus of OOP is designing individual classes and their hierarchies. Note that, in everyday life, relationships and actions exist mostly between objects that would have been instances of unrelated classes in OOP. Examples of such relationships are: "my screen is on top of the table"; "I (a human being) am sitting on a chair"; "a car is on the road"; "I am typing on the keyboard"; "the coffee machine boils water", "the text is shown in the terminal window." We think in terms of bivalent (sometimes trivalent, as, for example in, "I gave you flowers") verbs where the verb is the action (relation) that operates on two objects to produce some result/action. The focus is on action, and the two (or three) [grammatical] objects have equal importance. Contrast that with OOP where you first have to find one object (noun) and tell it to perform some action on another object. The way of thinking is shifted from actions/verbs operating on nouns to nouns operating on nouns -- it is as if everything is being said in passive or reflexive voice, e.g., "the text is being shown by the terminal window". Or maybe "the text draws itself on the terminal window". Not only is the focus shifted to nouns, but one of the nouns (let's call it grammatical subject) is given higher "importance" than the other (grammatical object). Thus one must decide whether one will say terminalWindow.show(someText) or someText.show(terminalWindow). But why burden people with such trivial decisions with no operational consequences when one really means show(terminalWindow, someText)? 
[Consequences are operationally insignificant -- in both cases the text is shown on the terminal window -- but can be very serious in the design of class hierarchies and a "wrong" choice can lead to convoluted and hard to maintain code.] I would therefore argue that the mainstream way of doing OOP (class-based, single-dispatch) is hard because it IS UNNATURAL and does not correspond to how humans think about the world. Generic methods from CLOS are closer to my way of thinking, but, alas, this is not widespread approach. Given these problems, how/why did it happen that the currently mainstream way of doing OOP became so popular? And what, if anything, can be done to dethrone it?
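The CLOS-style alternative the post favors can be sketched in a few lines of Python. This is a toy multiple-dispatch registry (all names here are hypothetical illustrations; real libraries such as multipledispatch do this properly):

```python
# Verb-first: a generic function that dispatches on the types of ALL
# arguments, so neither noun is privileged as "the" receiver.
_registry = {}

def method(*types):
    """Register an implementation of `show` for these argument types."""
    def register(fn):
        _registry[types] = fn
        return fn
    return register

def show(*args):
    """Look up the implementation by the tuple of argument types."""
    impl = _registry[tuple(type(a) for a in args)]
    return impl(*args)

class TerminalWindow:
    def __init__(self):
        self.lines = []

class Text(str):
    pass

@method(TerminalWindow, Text)
def _show_text_in_window(window, text):
    window.lines.append(str(text))

w = TerminalWindow()
show(w, Text("hello"))   # show(terminalWindow, someText): no grammatical subject
```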
Rotifer, a microscopic animal. There are approximately 2,000 species. Most species live in freshwater; some live in the sea; and a few live as parasites in or on other organisms. A rotifer's body consists of a head, a long middle section that holds the internal organs, and a tailpiece bearing one to four pointed projections called toes. An adhesive substance, with which rotifers adhere to surfaces, is secreted by glands in the toes. The rotifer has a two-lobed "brain" (a small mass of nerve tissue), and most species have eye spots (cells that are sensitive to light). The mouth is surrounded by hairlike cilia that sweep in food, which typically consists of bacteria, protozoans, and other small organisms. The cilia are also used for swimming. The motions of the cilia, which suggest the turning of a wheel, are responsible for the name of the rotifer, which means wheel bearer. Rotifers can move by creeping over submerged objects or by swimming. Female rotifers produce both summer eggs and winter eggs. Summer eggs are thin-shelled and develop without fertilization. They are not all the same size; the larger ones produce females, the smaller ones males. Winter eggs are thick-shelled and must be fertilized. They produce only females. Winter eggs can survive internal water loss and freezing. Rotifers make up the phylum Rotifera.
Description: The Carpenter Frog is a mid-sized frog, ranging from 1⅝ to 2⅝ in (4.1-6.7 cm). It is generally brown or bronze in color, with 4 light stripes down its back and no dorsolateral ridges. Carpenter Frogs have short hind legs, making them somewhat toad-like in body shape, and a venter that is white with black mottling.
Range and Habitat: Carpenter Frogs are found along the eastern coast of the United States from New Jersey to Georgia. In our region, they are restricted to the Coastal Plain and are usually associated with acidic sphagnum bogs, blackwater swamps, and stands of grasslike vegetation.
Habits: Carpenter Frogs are highly aquatic and are usually found in, or close to, water. Males call from the edges of aquatic habitats in the late spring and summer. During courtship, female Carpenter Frogs respond to male mating calls with a chirping noise. This in turn elicits an aggressive male call similar to the territorial call used when in conflict with another male.
Call: The Carpenter Frog's call sounds like the hammering of a carpenter, giving the frog its common name.
Status: Listed in Georgia as rare or uncommon. The specific habitat requirements of this species make it susceptible to habitat loss and degradation.
Given, M.F. 1993. Male response to female vocalizations in the carpenter frog, Rana virgatipes. Animal Behaviour 46(6):1139-1149.
Account Author: Christina Baker, University of Georgia; revised by J.D. Willson
Today, let's do calculus. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity Math is an odd business. Anyone who's literate knows arithmetic. Anyone with a high school education has been exposed to some algebra. Almost all of you have done the arithmetic of algebra in which numbers are replaced with But far fewer people have taken the next step and studied calculus. So the word calculus sounds pretty arcane to most of us. In fact, it's not any harder than algebra, but it has quite a different look and feel. Algebra still looks a little like arithmetic, while the calculus goes off into a different place entirely. To see the difference, think about the kind of questions that each answers: Algebra gives us a foolproof way to answer questions like this one: "We have 240 dollars to give away. We want to give Mary twice as much as Bill and Jake three times as much. How much money do we give Bill?" Calculus, on the other hand, gives us a foolproof way to answer a question like this one: "I know how my drag racer's acceleration changes with its speed. If I go from a standing start to eighty miles an hour in ten seconds, how far have I Algebra is a language in which we solve problems by equating numerical elements in symbolic form. We assign a symbol to Bill's payment, equate all the payments to 240 dollars, and do the arithmetic. We get the number that'll make the symbol satisfy the equation, and we find that Bill gets forty dollars. Calculus, on the other hand, deals with instants in time, or points in space. It deals with sequences of fleeting moments or places. Go back to that dragster: At one instant it's going thirty miles an hour. But what does that mean? It doesn't spend an hour on a thirty-mile track. The time it spends at that speed is actually zero. It merely passes through thirty miles an hour. 
Instantaneous speed is a pure calculus idea, yet we all understand it. That's because the language of the calculus has percolated into everyday life. So, how far did the dragster go in its acceleration test? It traveled the sum of the distances that each of those instantaneous speeds took us during the ten seconds.

The calculus is a simple language that lets us talk about things that change from moment to moment or point to point. It serves us when we deal with smooth movements or smooth shapes. It is the language by which we can calculate the capacity of an oddly shaped gas tank, or the trajectory of a space probe aimed at Mars. It's also a language that codifies intuitive ideas that are deeply felt and understood. We all see our lives as fleeting moments and as the sum of fragments almost too small to notice. Indeed, Joan Didion captures the calculus perfectly in her dark and passionate novel, Run River. She asks,

Was there ever in anyone's life span a point free in time, devoid of memory, a night when choice was any more than the sum of all the choices gone before?

I'm John Lienhard, at the University of Houston, where we're interested in the way inventive minds work.

The number of calculus textbooks is vast. Simply go to your library and browse them. In the Library of Congress catalog system, they are to be found under the call number QA303. I am grateful to UH colleagues Lewis Wheeler and Ralph Metcalfe for their counsel on this episode.

If we let Bill's payment be x, then: x + 2x + 3x = $240, or 6x = $240. Thus x = $40. We cannot solve the drag racer problem until someone first tells us just how the acceleration varies with speed.

The calculus becomes downright dramatic in these graphs of elliptic integrals.

The Engines of Our Ingenuity is Copyright © 1988-2003 by John H. Lienhard.
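The dragster question can even be sketched numerically. The snippet below is a toy illustration, not from the episode: it assumes a constant acceleration that takes the car from a standing start to eighty miles an hour in ten seconds, and adds up the distance covered during each fleeting moment.

```python
# Toy model of the episode's dragster question. Assumption (mine, not
# the episode's): constant acceleration from 0 to 80 mph in 10 seconds.

TOP_SPEED_FPS = 80 * 5280 / 3600   # 80 mph expressed in feet per second
ACCEL = TOP_SPEED_FPS / 10.0       # constant acceleration, ft/s^2

dt = 0.001                         # one "fleeting moment", in seconds
t = 0.0
distance = 0.0
while t < 10.0:
    speed = ACCEL * t              # the instantaneous speed right now
    distance += speed * dt         # distance covered during this moment
    t += dt

exact = 0.5 * ACCEL * 10.0 ** 2    # the closed-form answer from calculus
```

Summing speed over ten thousand tiny slices of time lands within a fraction of a foot of the roughly 587 feet that the integral gives in one line; making dt smaller closes the gap further, which is exactly the limiting idea the calculus formalizes.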
Science Fair Project Encyclopedia

In physical geography, tundra is an area where tree growth is hindered by low temperatures and short growing seasons. The term "tundra" comes from the Sami language (through Russian), meaning treeless plain. Compare:
- Desert - tree growth hindered by low rainfall
- Heath, Pasture - tree growth hindered by human activity, not climate
- Alpine climate

Arctic tundra occurs in the far Northern Hemisphere, north of the taiga belt. The word "tundra" usually refers only to the areas where the subsoil is permafrost, which contains permanently frozen water. (It may also refer to the treeless plain in general, so that northern Lapland would be included.) Permafrost tundra includes vast areas of northern Russia and Canada. The arctic tundra is home to several peoples who are mostly nomadic reindeer herders, e.g. the Nganasan and Nenets in the permafrost area (and the Sami in Lapland).

The biodiversity of tundra is low: there are few species with large populations. Notable animals in the arctic tundra include:

Due to the harsh climate of the arctic tundra, regions of this kind have seen little exploitation even though they are sometimes rich in natural resources such as oil and uranium. In recent times this has begun to change, and in Alaska, Russia and some other parts of the world the tundra is being ever more subjected to human interference.

Global warming is a severe threat to the arctic tundra because of the permafrost. Essentially, permafrost is frozen bog. In the summer, only its surface layer melts. Should it melt completely, the entire ecosystem would be devastated. The arctic species could not adjust to such a rapid change. Another threat is that one third of the world's soil-bound carbon is in the taiga and tundra areas. When the permafrost melts, it releases more carbon than it can bind. The effect has been observed in Alaska: in the 1970s, the tundra was a carbon sink, but today it is a carbon source.
This aggravates the problem of global warming even further.

Antarctic tundra occurs on Antarctica and on several antarctic and subantarctic islands, including South Georgia and the South Sandwich Islands and the Kerguelen Islands. Antarctica is mostly too cold and dry to support vegetation, and most of the continent is covered by ice fields. However, some portions of the continent, particularly the Antarctic Peninsula, have areas of rocky soil that support tundra. Its flora presently consists of around 250 lichens, 100 mosses, 25-30 liverworts, and around 700 terrestrial and aquatic algal species, which live on the areas of exposed rock and soil around the shore of the continent. Antarctica's two flowering plant species, the Antarctic hair grass (Deschampsia antarctica) and Antarctic pearlwort (Colobanthus quitensis), are found on the northern and western parts of the Antarctic Peninsula.

In contrast with the arctic tundra, the antarctic tundra lacks a large mammal fauna, mostly due to its physical isolation from the other continents. Sea mammals and sea birds, including seals and penguins, inhabit areas near the shore, and some small mammals, like rabbits and cats, have been introduced by humans to some of the subantarctic islands. The flora and fauna of Antarctica and the Antarctic Islands (south of 60º south latitude) are protected by the Antarctic Treaty.

Alpine tundra occurs at sufficiently high altitude at any latitude on Earth. Alpine tundra also lacks trees, but does not usually have permafrost, and alpine soils are generally better drained than permafrost soils. Alpine tundra transitions to subalpine forests or montane grasslands and shrublands below the tree line; stunted forests occurring at the forest-tundra ecotone are known as Krummholz.
Notable animals in the alpine tundra include:

|Marielandia Antarctic tundra||Antarctic Peninsula|
|Maudlandia Antarctic desert||eastern Antarctica|
|Scotia Sea Islands tundra||South Shetland Islands, Bouvet Island|
|Southern Indian Ocean Islands tundra||Crozet Islands, Prince Edward and Marion Islands, Heard Island, Kerguelen Islands, McDonald Islands|
|Antipodes Subantarctic Islands tundra||Australia|
|Alaska-St. Elias Range tundra||Canada, United States|
|Aleutian Islands tundra||United States|
|Arctic coastal tundra||Canada, United States|
|Arctic foothills tundra||Canada, United States|
|Baffin coastal tundra||Canada|
|Beringia lowland tundra||United States|
|Beringia upland tundra||United States|
|Brooks-British Range tundra||Canada, United States|
|Davis Highlands tundra||Canada|
|High Arctic tundra||Canada|
|Interior Yukon-Alaska alpine tundra||Canada, United States|
|Kalaallit Nunaat high arctic tundra||Greenland|
|Kalaallit Nunaat low arctic tundra||Greenland|
|Low Arctic tundra||Canada|
|Middle Arctic tundra||Canada|
|Ogilvie-MacKenzie alpine tundra||Canada, United States|
|Pacific Coastal Mountain icefields and tundra||Canada|
|Torngat Mountain tundra||Canada|
|Cherskii-Kolyma mountain tundra||Russia|
|Chukchi Peninsula tundra||Russia|
|Kamchatka Mountain tundra and forest tundra||Russia|
|Kola Peninsula tundra||Norway|
|Northeast Siberian coastal tundra||Russia|
|Northwest Russian-Novaya Zemlya tundra||Russia|
|Novosibirsk Islands arctic desert||Russia|
|Scandinavian Montane Birch forest and grasslands||Finland, Norway, Sweden|
|Taimyr-Central Siberian tundra||Russia|
|Trans-Baikal Bald Mountain tundra||Russia|
|Wrangel Island arctic desert||Russia|

- Tundra biome information from the University of California
- Arctic tundra biome information from the WWF
- Alpine tundra information from the WWF
- The Arctic biome at Classroom of the Future

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
|Ediacara Biota: Ancestors of Modern Life or Evolutionary Dead End?|

Ediacara Biota are the first convincing fossils of Precambrian animals, found in the Ediacara Hills of Australia. The unusual fossils, originally interpreted as jellyfish, strange worms, and frond-like corals, gave scientists their first look at the animals that populated the Precambrian seas. In Canada, Ediacaran fossils are found in the Northwest Territories, Yukon, British Columbia, and Newfoundland. The Mackenzie Mountains, NWT, have the thickest continuous section of rock (2.5 kilometers) containing Ediacaran fossils in the world. This site provides information on the fossils and features a location map with active links and a link to information on Ediacaran fossils found in Namibia.

Intended for grade levels:
Type of resource:
No specific technical requirements, just a browser required
Cost / Copyright: This page, and all contents (except where noted), are Copyright (c) 2003 by Queen's University, Kingston, Ontario, Canada.
DLESE Catalog ID: DLESE-000-000-005-223
This resource is part of
Resource contact / Creator / Publisher:
When do Solar and Lunar eclipses occur? The German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt) is participating in the International Year of Astronomy 2009. DLR experts will be answering the 'Astronomy Question of the Week' once a week. Course of the solar eclipse zones of Saros family 136 The new century has just begun and already the longest solar eclipse that will occur in the 21st century is about to take place – on 22 July the Moon will completely eclipse the Sun for up to 6 minutes and 39 seconds. This event will, however, only be observable in parts of India, China and the Pacific region. The game of 'hide and seek' between the Sun and Moon also works the other way around: if, during its orbit, the Moon is located on the side of the Earth that is not facing the Sun and it moves into the shadow of the Earth, there is a lunar eclipse. In the past, eclipses were interpreted as a sign of fate. Even the ancient Babylonians knew the rules of celestial mechanics, which can be used to forecast solar and lunar eclipses. They found out that eclipses of similar types happen successively at an interval of 18 years, 10 days and 8 hours (when there are five leap years in the intervening period). This special sequence of eclipses is known as the 'Saros cycle' (there are other eclipse cycles as well.) After a 'Saros period' – that is, after 18 years, 10 days and 8 hours – the Moon, Earth and Sun again assume the same positions relative to each other and solar and lunar eclipses again take place under almost exactly the same conditions. A Saros cycle is one day longer (18 years, 11⅓ days) if it only contains four leap years and two days longer (18 years, 12⅓ days) if only three leap years occur. Patterns of solar and lunar eclipses During a solar eclipse the Moon moves between the Sun and the Earth and throws its shadow on parts of the Earth’s surface. 
Many people will remember the solar eclipse on 11 August 1999, which could be observed in Europe – it will have a 'daughter eclipse' on 21 August 2017, exactly 18 years, 10 days and 8 hours later. During those eight 'extra' hours, the Earth will have turned approximately 120 degrees longitudinally, from west to east – and for this reason this eclipse will be viewable over the North American continent. It will also have shifted a little southward. The next solar eclipse in this series will take place on 2 September 2035 in the Asia-Pacific region, in turn again having shifted around 120 degrees west and a little further south. This solar eclipse is thus the daughter eclipse of the daughter eclipse and this is why, in addition to Saros cycles, there are also 'Saros families'. For each Saros family, the 'matriarch' of the family appears near one of the Earth's poles as a partial solar eclipse. The eclipse zones of subsequent family members then move in a spiral pattern across Earth's surface at 18-year intervals until the very last descendant of a Saros family arrives at the Earth’s other pole around 1300 years later and that 'Saros dynasty' becomes extinct after approximately 70 eclipses.
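The daughter-eclipse arithmetic is easy to verify: with five intervening leap years, one Saros period is 6585 whole days plus about a third of a day. A small sketch (the dates come from the article; the code itself is only illustrative):

```python
from datetime import date, timedelta

# One Saros period with five intervening leap years:
# 18 years, 10 days and 8 hours = 6585 days plus a third of a day.
SAROS_WHOLE_DAYS = 6585

parent = date(1999, 8, 11)                             # total eclipse over Europe
daughter = parent + timedelta(days=SAROS_WHOLE_DAYS)   # 21 August 2017, North America

# The leftover 8 hours per period is what rotates each eclipse zone
# roughly 120 degrees of longitude westward. As those fractions of a
# day accumulate over several periods, later dates in the series can
# shift by an extra calendar day, which is why the next eclipse in
# this series falls on 2 September 2035 rather than 1 September.
```

Whole-day arithmetic reproduces the 1999-to-2017 step exactly; tracking the fractional eight hours as well is what a full eclipse canon does.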
Staged CI is a practice where a tight loop is used for the typical CI build and one or more additional loops are used to automate a more thorough determination of the code quality. The reason for the multiple loops is that you want to keep the CI loop tight to provide feedback to developers as quickly as possible. In the tight loop, you're willing to sacrifice accuracy of the quality determination for speed. But once you have the quick feedback, you can take a little more time to get more detailed feedback from the additional loops. The end result is that for each project, you have multiple build loops in your system, one loop for each stage. Each stage is progressively more thorough and thus includes longer sets of tests.

The nice thing about this approach is that it makes it easy to deal with limited hardware and test resources. Since each stage runs in a loop, there's never more than a single instance of a stage running at any one time. If that weren't the case and you could have multiple instances of a stage running at any one time, then you'd need to have a scheduler that has knowledge about available hardware resources and manages the allocation of hardware to stage instances. Furthermore, there would need to be a way to balance the pace of stage instance creation to the throughput of the available hardware. One common mechanism to do this is request coalescing. But the point is that by keeping each stage in a loop, you can avoid a lot of these complexities.

In a Staged CI type of setup, it is typical to have a tight CI loop that takes 15 minutes or less to run, followed by a longer loop that takes an hour or two to run, followed by a nightly build that can take five or more hours to run; see Figure 1. The tight CI loop runs quite often as indicated by the runs CI1, CI2...CI6. The second and fourth runs of the CI loop failed as indicated by the red color. The longer loop that includes long-running tests is depicted by runs L1 and L2.
Notice that this longer loop runs concurrently with the tight CI loop. The entire system being used for CI and the other loops may be made up of multiple machines that include a central server and many agent machines. All the heavy work is typically performed on the agent machines so that the system scales horizontally. The nightly build (N1) takes the most amount of time to run. Staged CI is not really any different from nightly builds, although this depends on the reason for the nightly build and on what happens during the nightly build. If the reason for the nightly build is simply that the team does not see any value from having a tighter feedback loop, then there are some differences. But if the reason for the nightly build is to let the build run over several hours doing extensive testing of the code base to arrive at a very detailed quality determination, then the nightly build becomes an example of Staged CI. Rather than running in a continuous loop, the nightly build runs once during a 24-hour period during the night. Typically, the decision to run at night is related to the usage of resources. Presumably, if the nightly build were to run during the day, it would require access to some resources that are unavailable or being used for other purposes during the day. Staged CI makes use of multiple build types. Let's take a look at what we mean by build types, then look at why Staged CI uses them. To understand build types, you need to understand that most of the time when we use the term "build" we're not exact in what we mean. Usually, when we talk about a "build" we are actually talking about something more than just a build. For example, when we talk about a "continuous integration build," we're talking about a process that extracts source code from the source-code manager (SCM), compiles it, packages it, and then runs some tests on the resulting artifacts. 
In contrast, a nightly build may extract source code from the source-code management system (SCM), compile it, package it, then deploy the artifacts to a QA server and run functional tests. The CI build and the nightly build are just two examples of build types. The defining feature of a build type is that it is a combination of multiple processes. Of those multiple processes, one is a build process and the remainder is made up of one or more secondary processes. So what do we mean by a build process? A build process takes source code, dependencies, environment settings, and configuration as input, and transforms them into the output. The typical output of the build process is made up of artifacts (typically a compiled binary), log files, and reports. The transformation of the input into the output typically involves compilation and packaging. However, this varies with the technology being used, as native languages include a linking step, whereas scripting languages don't have the compilation step. Let's take a look at a CI build type and a nightly build type in light of this definition of the build process. Recall that a CI build type extracts source code from the SCM, compiles it, packages it, and then runs some tests on the resulting artifacts. I can now restate that so that the CI build type extracts source code from the SCM, performs a build, then runs some tests on the resulting artifacts. And the nightly build type extracts source code from the SCM, performs a build, then deploys the artifacts to a QA server and runs functional tests on them (Figure 2). Each build type is a combination of build process along with one or more additional (secondary) processes. One of the defining properties of Staged CI is that each loop (or stage) is a different build type. This means that each stage builds the source code in addition to running one or more processes. This may seem like a natural thing and you may wonder why this is worth pointing out. 
The reason is that this is very different from the two approaches I address next.
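The stage-per-loop idea can be sketched in a few lines. The stage names and steps below are purely illustrative, not taken from any particular CI tool:

```python
# Hypothetical sketch of Staged CI. Each stage is a build type:
# a build process plus one or more secondary processes. In a real
# system each stage would run in its own continuous loop on agent
# machines, with at most one instance of a stage active at a time.

STAGES = [
    {"name": "ci", "steps": ["checkout", "build", "unit tests"]},
    {"name": "long", "steps": ["checkout", "build", "long-running tests"]},
    {"name": "nightly", "steps": ["checkout", "build", "deploy to QA", "functional tests"]},
]

def run_stage(stage):
    """One iteration of a stage loop. Every stage type repeats the
    build; stages differ only in their secondary processes."""
    return [f"{stage['name']}:{step}" for step in stage["steps"]]

def run_all_once():
    """Run a single iteration of each stage loop, serially, to show
    the structure (real stages run concurrently at different paces)."""
    log = []
    for stage in STAGES:
        log.extend(run_stage(stage))
    return log
```

Note how the defining property shows up directly in the data: each stage begins with its own checkout-and-build, then adds progressively heavier secondary processes.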
If n is an integer, the numbers coprime to n, taken modulo n, form a group with multiplication as operation; it is written as (Z/nZ)× or Zn*. This group is cyclic if and only if n is equal to 2 or 4 or p^k or 2p^k for an odd prime number p and k ≥ 1. A generator of this cyclic group — that is, a number g in Zn* whose powers run through all of Zn* — is called a primitive root modulo n, or a primitive element of Zn*.

Take for example n = 14. The elements of (Z/14Z)× are the congruence classes of 1, 3, 5, 9, 11 and 13. Then 3 is a primitive root modulo 14, as we have 3^2 = 9, 3^3 = 13, 3^4 = 11, 3^5 = 5 and 3^6 = 1 (modulo 14). The only other primitive root modulo 14 is 5. The smallest primitive roots for various values of n are listed in OEIS sequence A046145.

No simple general formula to compute primitive roots modulo n is known. There are, however, methods to locate a primitive root that are faster than simply trying out all candidates: first compute φ(n), then determine the distinct prime factors of φ(n), say p1, ..., pk. Now, for every element m of (Z/nZ)×, compute m^(φ(n)/p_i) mod n for each i = 1, ..., k; m is a primitive root if and only if none of these k results equals 1.

Equivalently, for a prime p: if the multiplicative order of a number k modulo p is p - 1, then it is a primitive root. We can use this to test for primitive roots.

The number of primitive roots modulo n, when there are any, is equal to φ(φ(n)), since, in general, a cyclic group with r elements has φ(r) generators.

There exist positive constants C, ε and p0 such that, for every prime p ≥ p0, there exists a primitive root modulo p that is less than C p^(1/4+ε). If the generalized Riemann hypothesis is true, then for every prime number p, there exists a primitive root modulo p that is less than 70 (ln p)^2.
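The search procedure described above translates directly into code. This sketch computes φ(n) by brute-force counting, which is fine for small n (a real implementation would factor n instead):

```python
from math import gcd

def phi(n):
    """Euler's totient, by brute-force counting -- fine for small n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def prime_factors(n):
    """The distinct prime factors of n."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

def primitive_roots(n):
    """All primitive roots modulo n: g qualifies iff gcd(g, n) = 1 and
    g^(phi(n)/p) != 1 (mod n) for every prime p dividing phi(n)."""
    ph = phi(n)
    ps = prime_factors(ph)
    return [g for g in range(1, n)
            if gcd(g, n) == 1
            and all(pow(g, ph // p, n) != 1 for p in ps)]
```

For n = 14 this returns exactly [3, 5], matching the worked example, and the count agrees with the φ(φ(n)) formula.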
The world’s oceans may be turning acidic faster today from human carbon emissions than they did during four major extinctions in the last 300 million years, when natural pulses of carbon sent global temperatures soaring, says a new study in Science. The study is the first of its kind to survey the geologic record for evidence of ocean acidification over this vast time period.
March 01, 2012

September 15, 2009
The world’s oceans are growing more acidic as carbon emissions from the modern world are absorbed by the sea. A new film, “A Sea Change,” explores what this changing chemistry means for fish and the one billion people who rely on them for food. This first-ever documentary about ocean acidification is told through the eyes of a retired history teacher who reads about the problem in a piece in The New Yorker and is inspired to find out more. His quest takes him to Alaska, California, Washington and Norway to talk with oceanographers, climatologists and others.
In the section Introduction to programming, we had defined an expression as "A mathematical entity that evaluates to a value". However, the term mathematical entity is somewhat vague. More precisely, an expression is a combination of literals, variables, operators, and functions that evaluates to a value.

A literal is simply a number, such as 5, or 3.14159. When we talk about the expression "3 + 4", both 3 and 4 are literals. Literals always evaluate to themselves. You have already seen variables and functions. Variables evaluate to the values they hold. Functions evaluate to produce a value of the function's return type. Because functions that return void do not have return values, they are usually not part of expressions.

Literals, variables, and functions are all known as operands. Operands are the objects of an expression that are acted upon. Operands supply the data that the expression works with.

The last piece of the expressions puzzle is operators. Operators tell how to combine the operands to produce a new result. For example, in the expression "3 + 4", the + is the plus operator. The + operator tells how to combine the operands 3 and 4 to produce a new value (7). You are likely already quite familiar with standard arithmetic operators, including addition (+), subtraction (-), multiplication (*), and division (/). Assignment (=) is an operator as well.

Operators come in two types: Unary operators act on one operand. An example of a unary operator is the - operator. In the expression -5, the - operator is only being applied to one operand (5) to produce a new value (-5). Binary operators act on two operands (known as left and right). An example of a binary operator is the + operator. In the expression 3 + 4, the + operator is working with a left operand (3) and a right operand (4) to produce a new value (7).

Note that some operators have more than one meaning. For example, the - operator has two contexts.
It can be used in unary form to invert a number's sign (e.g., -5), or it can be used in binary form to do arithmetic subtraction (e.g., 4 - 3). This is just the tip of the iceberg in terms of operators. We will take an in-depth look at operators in more detail in a future section.
The Earth's Elements; October 1994; Scientific American Magazine; by Kirshner; 8 Page(s) Matter in the universe was born in violence. Hydrogen and helium emerged from the intense heat of the big bang some 15 billion years ago. More elaborate atoms of carbon, oxygen, calcium and iron, out of which we are made, had their origins in the burning depths of stars. Heavy elements such as uranium were synthesized in the shock waves of supernova explosions. The nuclear processes that created these ingredients of life took place in the most inhospitable of environments. Once formed, violent explosions returned the elements to the space between the stars. There gravitation molded them into new stars and planets, and electromagnetism cast them into the chemicals of life. The ink on this page, the air you breathe while reading it--to say nothing of your bones and blood-- are all an inheritance from earlier generations of stars. Walking down the corridors of an observatory, you see collections of carbon atoms hunched over silicon boxes, controlling distant telescopes of iron and aluminum in an attempt to trace the origin of the very substances of which they are made.
Observations and results

How much water were you able to collect? You can try leaving it for longer and see how much more you can accumulate. Not everything can be separated out from water this way. Tiny particles, such as dust or chemical pollutants, can still find their way up into the sky. In fact, raindrops form around small pieces of dust in the clouds, and polluting "acid rain" can contain chemicals from burning fossil fuels. What are other ways you can think of to purify water?

Share your water cleaning observations and results! Leave a comment below or share your photos and feedback on Scientific American's Facebook page. Don't drink the water, but you can use it to water plants! You can use the moist dirt for houseplants, outdoor plants or to start a new plant from seed.

More to explore
"Sour Showers: Acid Rain Returns" from Scientific American
"Warmer Climate Produces Less Rain" from Scientific American
The Water Cycle game from the Environmental Protection Agency
"What Is Acid Rain?" overview from the Environmental Protection Agency
A Drop of Water by Gordon Morrison, ages 4–8
The Water Cycle: Evaporation, Condensation & Erosion by Rebecca Harman, ages 9–12

High Seas: What Happens When the Glaciers Melt?
What you'll need
• Small bowl
• Ice cubes
• Modeling clay
• Warm water
- Philipp Böing - United Kingdom UCL iGEM / SynBioSoc organizer, student (computer science), University College London Should Synthetic Organisms be released to clean plastic pollution from the ocean? The Problem: Microplastic pollution in the ocean. Proposed Solution: Engineered Organisms that can collect the pollution into recyclable pieces. (the science behind this: http://www.ucl.ac.uk/igem2012 ) A video explanation: http://youtu.be/rEDLg03teOk Should Synthetic Organisms be released to clean plastic pollution from the ocean? Is synthetic biology the only solution to plastic pollution? Can we anticipate and prevent any negative repercussions? Do the potential risks outweigh the benefits? Who could profit from this? Who calls the shots?
Gamma-Ray Astronomy in the Compton

The STS 37 space shuttle launch which carried the Compton satellite into orbit. Compton, at 17 tons, is the largest scientific payload carried by the shuttle.

A 14-year effort of scientific vision, careful instrument design, spacecraft engineering, and mission development culminated in the 1991 space shuttle launch of Compton. Since launch, the Compton project has exceeded expectations, providing high-quality science data to over 750 scientists from 23 countries. The international scope of the Compton mission is also revealed through the instrument builders - the United States, the Federal Republic of Germany, the Netherlands and the United Kingdom all contributed. The four onboard science instruments combine to provide complementary capabilities for mapping the gamma-ray sky, probing the energy distribution of individual sources, and monitoring the sky for time variable phenomena. These capabilities were intended to solve some of the outstanding questions earlier missions had posed - questions about the nature of gamma-ray bursts, about the behavior and number of gamma-ray emitting pulsars and active galaxies - and perhaps most importantly, to watch for the unexpected. The enhancements in sensitivity of the Compton instruments over previous experiments hold the key to this progress. To detect more gamma rays, one simply needs larger instruments. The size of the Compton instruments, the benefits of a long mission, and anticoincidence techniques which prevent cosmic rays from mimicking gamma-ray signals have allowed Compton's instruments to make significant contributions to high-energy astrophysics. While there is some overlap in capabilities, each of the four instruments has special design characteristics which allow them to perform unique and valuable science.

The Compton instruments cover more than six orders of magnitude in energy in a complementary manner.
Pub. date: 2008 | Online Pub. Date: April 25, 2008 | DOI: 10.4135/9781412963893 | Print ISBN: 9781412958783 | Online ISBN: 9781412963893 | Publisher: SAGE Publications, Inc. About this encyclopedia

Chamberlin, Thomas C. (1843–1928)

THOMAS C. CHAMBERLIN was an American glacial geologist and educator who, at the turn of the 20th century, challenged the generally accepted Laplacian theory that the Earth was formed by hot gases and was gradually becoming cooler. He suggested the planetesimal hypothesis, arguing that the planets were formed after a star passed near the Sun, pulling away material from both bodies that later condensed into the planets. Chamberlin was one of the first scientists to emphasize the role of carbon dioxide in regulating the Earth's temperature, thus anticipating the current debates on global warming. Chamberlin also founded the Journal of Geology, acted as its editor for many years, and was the first director of the U.S. Geological Survey's Pleistocene Division (1881–1904). Chamberlin was the first geologist to demonstrate that there had been multiple Pleistocene glaciations in North America. He offered early analyses of moraines, drumlins, eskers, and boulder trains. ...
Help is needed with the following geometry problem: Consider a triangle ABC. Insert a point D on the side AC and a point E on the side AB. Draw the line through D and E. The intersection between the line through D and E and the line through B and C is called F. Drawing an arbitrary triangle as mentioned above, we can compute the following: It is possible to draw the triangle in a program called GeoGebra. By doing this we can move the points D and E and observe the change. The question is the following: formulate an expression which apparently holds true for the above-mentioned triangle. Thanks a bunch.
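One expression that does hold for this configuration is Menelaus' theorem: for the transversal through D, E and F, the product (AE/EB)·(BF/FC)·(CD/DA) equals 1 in unsigned lengths (or -1 with signed ratios). A quick numerical check with arbitrary coordinates, in the spirit of dragging points in GeoGebra:

```python
from math import dist  # Euclidean distance, Python 3.8+

# Arbitrary (hypothetical) triangle and points on its sides.
A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
E = (1.0, 0.0)   # on side AB
D = (0.0, 1.0)   # on side AC

# Line DE: x + y = 1.  Line BC: 3x + 4y = 12.  Substituting
# y = 1 - x gives 3x + 4 - 4x = 12, so x = -8 and y = 9.
F = (-8.0, 9.0)  # intersection of line DE with line BC

menelaus = (dist(A, E) / dist(E, B)) \
         * (dist(B, F) / dist(F, C)) \
         * (dist(C, D) / dist(D, A))
```

Moving D and E changes every individual ratio, but the product stays at 1, which is exactly the kind of invariant the exercise asks for.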
If sin^2 of theta = 1/2 and pi < theta < 3pi/2 (quadrant III), then theta = ? I have no idea how to do these types of problems! Please help!

Originally Posted by askmemath
Sine of theta must be taken as the negative square root of one half here. Then the value of theta will be 5Pi/4.
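The answer checks out numerically (a quick sketch: sine is negative in quadrant III, so sin theta = -1/sqrt(2) and theta = 5*pi/4):

```python
import math

theta = 5 * math.pi / 4                          # the proposed answer

sin_sq = math.sin(theta) ** 2                    # should equal 1/2
in_quadrant_3 = math.pi < theta < 3 * math.pi / 2  # pi < theta < 3pi/2
```

The other angle with sin^2 = 1/2 and negative sine, 7*pi/4, is excluded because it lies in quadrant IV.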
Matter in Black Holes
Name: dutch minott
Date: 1993 - 1999

What happens to matter that is pulled into black holes?

It passes through very severe gravitational forces that can distort and pull it apart. Time seems to drag while it passes through the surface. Sometimes a little bit of x-rays or particles are emitted, but mostly on the way in, since little can escape the event horizon. Aside from being torn apart by the strong gravity, it would be uneventful.
Samuel P Bowen

Update: June 2012
Continuing the science blocks at East Bay Waldorf School, I am teaching physics for grade eight. Physics in the upper grades begins in grade six and continues as a sequence of advancing concepts through grade eight. In grade eight, the concepts covered are thermodynamics, refraction, hydraulics, and electromagnetism. In our first week of physics, we learned about heat energy. Heat energy travels via radiation, conduction, and convection, and I organized our week according to these three principles. The approach to physics in Waldorf is phenomena-based, meaning the instruction is designed to be experiential, not abstracted through lecture or texts. Through direct experience and observation of demonstrations, the children live in the phenomena of physics. The beauty of physics is that, unlike learning about Renaissance art or the American Revolution, physics is experienced in the everyday. The physics block can be regarded as a way of simply giving a vocabulary and elucidation of what the children already know and experience.

Day One: Heat Experienced in the Everyday, and Radiation

On our first day, I had the students write a paragraph on how heat and cold are experienced in their everyday lives. It was a way for them to give thought to their actions and their feelings as they wake in the morning and feel the coldness of a bare floor, or the warmth from a robe or a cup of hot chocolate. They become active in feeling warmth when standing close to their friends, and in the sun as it filters into the classroom space. In observing the effects of the sun on dark and light materials, and in experiencing the heat from a radiant heater, the students learned about RADIATION, the transfer of heat energy through space. The key observations were that dark objects tend to absorb heat, while lighter objects tend to reflect it.
Day Two: Conduction

The rhythm of the main lesson is such that the students are engaged in some movement exercises and mental activities to synchronize the class as a whole, followed by a review of the material introduced the day before, then the new material, and work in their main lesson books. On day two, I had the students form two lines and, joining hands, form a wave. This was good for their physical beings, and it also demonstrated for them how radiated heat travels (through waves). We then observed the phenomenon of CONDUCTION. I used a heating plate with a baking stone set on top, and placed various materials on top of that: a piece of marble, a metal cup, cork, a rubber stopper, a beaker of water, a shell, a wood block. The students touched the materials after the baking stone had heated, and again after the heat was turned off. Conduction is the transfer of heat energy between materials that are in direct contact with each other. We witnessed that different materials respond differently to applied heat: the metal cup seemed to heat up the fastest, while the marble heated to the highest temperature and also retained the heat the longest.

Day Three: Convection, Part I

Our movement exercise included two lines of students who demonstrated the wave; they also passed a ball from one person to the next, showing that "direct contact" is required for conduction. Each line of students raced the other - let's see who can conduct heat the fastest! The phenomenon demonstrated on day three was CONVECTION, the transfer of heat energy through air or liquid by a moving current. I demonstrated for them how heat from a candle produces a current of warmed air that rises. This rising heat can be used to do work - thermodynamics. I showed them how to make tiny pinwheels that balance on bent paper clips. They held their pinwheels over the candle, and the pinwheels turned!

Day Four: Convection, Part II

We continued the concept of convection.
This time, using a large glass container, I showed them how a red dye placed at the bottom of the water-filled jar would begin to move toward a corner that was being heated by a candle. It showed how heated liquid rises, and as the water cools near an ice cube placed at the opposite end, the cooled water sinks, creating a convection current. In their main lesson books (MLBs), the students would write a paragraph and draw a picture to help them synthesize their learning. Following a two-day rhythm, the material that I introduced one day would become their MLB work the following day.

Day Five: Designing an Eco-Friendly Home

The knowledge gleaned from the week's work is connected with practical application. In designing homes that are energy-efficient and eco-friendly, which is a growing industry, knowing how to regulate the temperature of a home efficiently and with resource conservation is very important to the comfort of its occupants. I put the students in small groups and gave them the following exercise: they are an architectural firm, and a client is asking them to design an energy-efficient home that uses heat energy wisely, so innovative use of materials and design elements is essential in winning the bid. The students took up the exercise wholeheartedly and came up with some creative solutions such as grass roofs, rain catchment systems, gray water, radiant-heated floors, solar energy, wind energy, and use of materials that have good thermal retention and are eco-friendly. To keep things fun, and to help them remember concepts and terms, I had them draw little cartoons associated with the concepts. For radiation, they drew a surfer enjoying heat from the sun, saying "Radical, dude!" For conduction, they drew a duck whose feet are in direct contact with an icy pond: "Quack, my feet are cold!" For convection, they drew a conveyor belt, where heat was actually being moved from one place to another: "More heat, coming right up!"
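The Day Two conduction demonstration can be sketched with a simple lumped-heating model: an object on the hot plate approaches the plate's temperature exponentially, faster for good conductors. This is an illustrative sketch only; the rate constants below are made-up values standing in for "metal" and "cork", not measurements from the class.

```python
import math

# Lumped-capacitance heating: an object at room temperature T0 resting on
# a plate at Tplate approaches the plate temperature exponentially.
# k is a made-up rate constant standing in for how well the material conducts.
def temperature(t, k, T0=20.0, Tplate=100.0):
    return Tplate + (T0 - Tplate) * math.exp(-k * t)

metal_k, cork_k = 0.05, 0.005  # per second; illustrative values only

# After one minute the "metal" is far closer to the plate temperature
# than the "cork" - matching what the students felt with their hands.
print(round(temperature(60, metal_k), 1))
print(round(temperature(60, cork_k), 1))
```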
What is all this fuss about the Higgs boson? The physics community is abuzz that a fundamental particle expected by the largely successful Standard Model of particle physics may soon be found by the huge Large Hadron Collider (LHC) at CERN in Europe. The term boson refers to a type of fundamental particle with similarities to the photon, while Higgs refers to Peter Higgs, a physicist who, among others, published research predicting the mechanism through which such a particle might act. The above animated cartoon explains in humorous but impressive detail why the Higgs boson is expected, and one method that the Large Hadron Collider is using to find it. Although rumors hint that preliminary traces of the Higgs boson are already being found, even not finding this unusual particle would open the door to a new fundamental understanding of how our universe works.
How do I convert a char[] password into a String new_password? Is it return (String) password, or new_password = (String) password?

Joined: Jul 10, 2001
Hello. This is taken directly from the String class:
String(char[] value) - Allocates a new String so that it represents the sequence of characters currently contained in the character array argument.
String(char[] value, int offset, int count) - Allocates a new String that contains characters from a subarray of the character array argument.
Hope it helps!
------------------
Sun Certified Programmer for Java 2 Platform

Joined: Nov 23, 2001
Hi, you can use the method toString(char[]);
------------------
Ahmer Arman

Joined: Aug 06, 2001
Download the Java API documentation - you'll find everything you need to know in there!
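A minimal sketch pulling the replies together (variable and class names here are illustrative, not from the thread). Note that a straight cast like (String) password will not compile, because char[] and String are unrelated types; you have to copy the characters into a new String:

```java
// Minimal sketch: converting a char[] to a String.
// Both String.valueOf(char[]) and the String(char[]) constructor copy the array.
public class CharToString {
    static String convert(char[] chars) {
        return String.valueOf(chars); // equivalent to: new String(chars)
    }

    public static void main(String[] args) {
        char[] password = {'s', 'e', 'c', 'r', 'e', 't'};
        System.out.println(convert(password));           // prints: secret
        System.out.println(new String(password, 0, 3));  // subarray form, prints: sec
    }
}
```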
Joined: 16 Mar 2004 | Posted: Fri Dec 19, 2008 10:01 am | Post subject: Magnets that could Burn Cancer Developed in Scots Laboratory

Tiny magnets made by bacteria could be used to kill tumours, say researchers. A team at the University of Edinburgh has developed a method of making the nanomagnets stronger, opening the way for their use in cancer treatment. The bacteria-produced magnets are better than man-made versions because of their uniform size and shape, the Nature Nanotechnology study reported. It is hoped that one day the magnets could be guided to tumour sites and then activated to destroy cancerous cells. The bacteria take up iron from their surroundings and turn it into a string of magnetic particles. They use the chains of particles like the needle of a compass to orient themselves and search for oxygen-rich environments. There has been a lot of interest in their potential application in medicine, but how useful they could be will depend on the strength of the magnets. Scientists at Edinburgh University grew the bacteria in a mixture that contained more cobalt than iron. The addition of cobalt made the nanomagnets 36-45% stronger, meaning they stayed magnetised longer when taken out of a magnetic field. The ability of the nanomagnets to remain magnetised opens the way for their use in killing tumour cells, the researchers said. They could be guided to the site of a tumour magnetically. Once there, applying an opposite magnetic field would cause the nanomagnets to heat up, destroying cells in the process. They could also potentially be used to carry drugs directly to the cancerous tissue.
Study leader, Dr Sarah Staniland, a research fellow at the University of Edinburgh, said: "For nanoparticles to be used in medicine you need them to be a very uniform size and shape and bacteria are very good for that. "This increases the scope for their use in cancer". "You would move them with a normal magnetic field and then heat them with the opposing field." Liz Baker, Cancer Research UK's science information officer, said: "Targeting treatments specifically to cancer cells is an exciting area of research, but in this case work is still at a very early stage". "It will be interesting to see if further research into nanomagnets will provide us with a new and effective anti-cancer therapy."
Surface Reflection and Polarization
Name: Tony K.
I was studying polarization of light and came across Brewster's angle. Even after much research on the Internet, I cannot find out WHY polarization occurs parallel to the surface at this angle. Why does it work? What is happening that my eyes cannot see?

Brewster's angle is the angle at which reflected light is perpendicular to refracted light (the light that continues on into the material). The effect of pure polarization at Brewster's angle was discovered by experiment: no theory predicted it. There are some theories regarding it, but nobody knows for sure why it happens. The most popular theory is as follows: the electric charges in the glass (or other transparent material) oscillate perpendicular to the direction of the light waves within the material. Thus, they oscillate perpendicular to the refracted light. Because light waves are "side-to-side" oscillations (transverse waves), there must be some side-to-side motion in the charges producing the waves. When the reflected light is perpendicular to the refracted light, there is no oscillation in the reflection plane, and waves oscillating in the plane of reflection cannot be emitted as reflected light.
Dr. Ken Mellendorf
Illinois Central College

The answer is simple in principle, but a bit more complicated in the details. The reflection and refraction of light by a surface is not simply the "bouncing off" of the incident beam from the surface. Rather, the oscillating electric vector of the incident beam causes the electrons in the reflecting/refracting medium to oscillate, and these oscillations in turn produce the reflected/refracted beams. In the "particle" description of light, replace the "oscillation of the electrons of the medium" with "elastic scattering of the photons by the electrons in the medium". The details are a bit more involved, but are explained elsewhere far more concisely and lucidly than I would even attempt.
See Richard Feynman's "Lectures on Physics", Vol. I, Chapter 33, Sections 33-4 through 33-7.
Update: June 2012
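As a numerical aside (an addition to the archive entry, using the standard formula tan θ_B = n2/n1, which follows from the "reflected perpendicular to refracted" condition plus Snell's law):

```python
import math

# Brewster's angle for light going from air (n1 = 1.0) into glass (n2 = 1.5):
#   tan(theta_B) = n2 / n1.
# At this incidence angle the reflected and refracted rays are perpendicular,
# i.e. theta_B + theta_t = 90 degrees, which we confirm via Snell's law.
n1, n2 = 1.0, 1.5
theta_b = math.atan(n2 / n1)                      # incidence angle (radians)
theta_t = math.asin(n1 * math.sin(theta_b) / n2)  # refraction angle via Snell

print(round(math.degrees(theta_b), 1))            # 56.3
print(round(math.degrees(theta_b + theta_t), 1))  # 90.0
```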
How Much is a Species Worth? That simple question is at the heart of a complicated debate over the value of saving endangered species and the controversial law that protects them Jerry Adler and Mary Hager It is one of the greatest laws in history--a compact with the Earth in which humanity for the first time renounces the right to decide for itself which species deserve to share space on the planet. Like the Civil Rights acts, it grew out of well-deserved guilt, for by 1973 when the Endangered Species Act was passed in its present form, hundreds of U.S. species had vanished forever. This "irreplaceable loss to aesthetics, science, ecology and the national heritage" must be halted, Congress declared. Diverse species "are potential resources. They are keys to puzzles which we cannot yet solve, and may provide answers to questions which we have not yet learned to ask." The law created a distinct class of wildlife, the members of the Endangered Species List. The singular fact about the Endangered Species Act was that an animal didn't have to earn a right to protection by benefiting the species that paid taxes and voted. It just had to be in danger of extinction. If Congress had merely codified or expanded existing protections for animals with economic or sentimental claims on human indulgence, the law would not have been such a milestone in conservation consciousness. But the beautiful, the useful and delicious were declared by Congress to be no more or less worthy of protection than the ugly, the unpalatable and possibly pestilent. There was in 1973, as there is now, a lot of high-minded concern about the possibility that the next species to go out of existence might harbor in its genes a cure for cancer. In fact, if all of the ancient forests of the Pacific Northwest were cut down to help keep the region's timber industry afloat, most of the country's 100-year-old Pacific yew trees would fall with them.
And thousands of Americans would lose the source of a yew-bark compound (taxol) that in the last two years has emerged as a promising treatment for ovarian and breast cancer. But as compelling as that fact is in the political arena, intellectually it represents a weak case for species protection, because it merely restates an economic argument for saving a species on more favorable terms. The more enlightened position, said Congress, is that a species is literally priceless. The Endangered Species Act embraced the broadest concept of human dependence on other species, based on what Harvard biologist E.O. Wilson calls a "psychological affinity" for the natural systems in which humanity evolved. The law merely postulated, without demanding proof, that endangered species were of "aesthetic, ecological, educational, historical, recreational and scientific value to the nation and its people." It would be a sorry species that couldn't justify itself under at least one of those headings. Of the 109 animal species on the list when the act was passed, perhaps only a dozen would have been identifiable to most laymen, and of those, several were newly promoted from the ranks of pests. A few have genuinely thrived under protection of the law, notably the American alligator, which subsequently was promoted from its classification as "endangered" to "threatened" in a portion of its range. Several, such as the peregrine falcon and brown pelican, have been pulled back from the brink of extinction, although just barely; some, like the black-footed ferret and the California condor, remain just one epidemic away from catastrophe. And a few others, such as the ivory-billed woodpecker and Santa Barbara song sparrow, were probably extinct, or nearly so, at the time they were listed. Almost certainly, many species would be gone by now without the act. But the need for protection is no less urgent as a result. 
And several of the threats looming as the law faces reauthorization this year, for the sixth time since 1973, are among the most insidious the animal world has faced since a man first realized he was looking at the largest piece of meat he would ever see. Presently, for only the second or third time in the law's history, protection of a listed species--several species, in fact--is colliding with powerful economic interests. "The rhetoric against the law has a much harsher tone than ever," says National Wildlife Federation President Jay D. Hair. "It's going to be a tough fight; it's one of the first things to get jettisoned when the economy is bad." Even without the rhetoric, conservationists have long been concerned about the unwillingness of recent administrations and Congress to fund the act adequately. The current Endangered Species List has grown to more than 600 U.S. species, of which a little more than half are plants; fish make up the next largest category, followed by birds, mammals, reptiles and amphibians, snails and clams, insects and arachnids, and crustaceans. But more than 3,000 species are candidates for listing; at least 600 require immediate action. The cost of researching a listing is pegged at $60,000, so just dealing with the 600 urgent cases would cost $36 million; the entire budget for protecting endangered species last year was only about $55 million, not counting the somewhat larger amount spent by the states. And nearly a fourth of the federal budget total was spent last year studying one creature, the northern spotted owl. An important though previously little-known component of the ancient forest ecosystem in the Pacific Northwest, the owl has become a focal point of the fundamental argument--short-term economic growth vs. long-term protection of an endangered ecosystem--that has thrust the Endangered Species Act into the midst of a political firestorm.
As everyone recognizes, the debate over protection of the spotted owl amounts to a debate over continued logging of old-growth public forests in the Northwest. At stake, say timber industry officials, are the livelihoods of thousands of loggers. While the Endangered Species Act requires the designation of critical habitat for listed species, it specifies that resulting economic and other impacts be considered before any land is designated. The law also authorizes the exclusion of land from critical habitat if it is determined that the costs of including a particular area outweigh the benefits to the species in question. That flexibility in the law allowed the U.S. Fish and Wildlife Service to drop some three million acres in 1991 from the proposed spotted owl critical habitat plan. In doing so, the agency was in effect allowing logging to continue in certain areas. But to keep the owl alive, some habitat must be protected even at the possible cost of jobs. The question is whether Americans are willing to pay that price. A decision by Interior Department Secretary Manuel Lujan to convene a special committee to consider overriding the law's protections in regard to the owl does not augur well for the answer. The committee is known as the "God Squad," because it takes upon itself that which no human should ever be called upon to decide: the life or death of an entire species. This has happened only twice before, most notably in 1978 when the nearly completed Tellico Dam in Tennessee was stopped because it would destroy critical habitat of a tiny endangered fish, the snail darter. In that case, ironically, the "God Squad," which was created at the instigation of Senate Minority Leader Howard Baker to find a way to enable Tellico to be built, voted against the project. The dam went ahead only after Congress itself voted specifically for its completion.
Timber industry officials point out, accurately if irrelevantly, that the spotted owl is in most of its range a fairly rare bird. Its disappearance, they say, would go unnoticed. That, clearly, is not the case with another group of animals that are equally threatened--and equally controversial: the Columbia River system salmon, which inhabit the same ecosystem as the owl and are animals of vast cultural, ecological and, coincidentally, economic significance. Commercial fishing for salmon that spawn in the Columbia and Snake rivers is a $60 million plus industry. Idaho Governor Cecil Andrus estimates that salmon fishermen would spend $125 in his state per pound of fish caught, except that there hasn't been a sport-fishing season in Idaho since 1977. Since the first major hydroelectric dam was completed at Bonneville more than 50 years ago, the annual run of salmon in the Columbia-Snake ecosystem has declined from more than 10 million fish to around 2.5 million. What is worse, only a small fraction of those represent original wild stock; the rest are hatchery fish, sometimes known (for their lack of fight and their general obtuseness) as "swimming hot dogs." The astonishing thing is that any wild fish make the journey at all. When the rivers ran free, helped by spring melts, salmon made a 900-mile trip from the Snake's headwaters to the mouth of the Columbia in about three weeks. Today that trip is interrupted by eight major dams, behind which water pounds up in lakes where parasites and predators lurk. As the fish pass through the dams themselves, the turbine blades take another heavy toll. Then, after three or four years of dodging fishermen in the Pacific, the salmon reverse the journey, and with luck make it up "fish ladders"--a series of stepped pools providing portage around the dams--to their spawning grounds. That, of course, is assuming that in the meantime the area hasn't been logged up to the banks and covered in silt. 
The Snake River sockeye has been battling these odds for decades, and last year exactly four of the fish made it all the way upriver. Only one of them was a female. If that isn't endangered, what is? The National Marine Fisheries Service apparently agrees. Last November, it designated the Snake River sockeye salmon "endangered." It also proposed listing two runs of Snake River chinook (fall and spring/ summer) as "threatened." This begs the question of what can be done to save the fish, short of blowing up dams. Over the years a variety of technical fixes have been proposed, and some actually tried, such as barging the fry downstream past the dams--a procedure with a mortality rate, says Andrus aide Scott Peyron, of between 90 and 99 percent. A variation is to capture fry in submerged nets and tow the nets downriver. Most environmentalists, though, think the plan with the best chance of success will be some variation of one proposed last year by Andrus. It calls for essentially opening up the gates on four Snake River dams just above the confluence with the Columbia for about two months each spring. The river would return for those weeks to approximately its natural contours and levels, affording unimpeded passage to the fry heading downriver. (The fish would still have to get past four dams on the lower Columbia, but Andrus contends that the upper four account for the majority of the mortality.) The implications of such a relatively simple adjustment in the water-driven economies of this part of the country are staggering. The power industry, which regards water above sea level as money in the bank, opposes turning off the turbines for two months; to give an idea of the money at stake, Andrus' office touts a study showing that the lost power sales would amount "only" to $100 million. The big grain farmers and shippers oppose the plan because it would shut barge traffic on the river for that period, forcing them to use railroads instead. 
Farmers that draw irrigation water from the reservoirs behind the dams would have to extend their intake pipes to reach the river, and even that would cost several million dollars. In the end, say researchers Jeffrey Hyman and Kris Wernstedt, who studied the issue for Resources for the Future, "Political forces will decide what to protect and how to protect it." The owl and the salmon controversies both reflect broader issues that will figure in the Congressional debate over the act this year. Opponents of the law, including such misleadingly named groups as the National Endangered Species Act Reform Coalition, are planning to turn the spotted owl into a symbol of bleeding-heart excesses. One "reform" they seek is to give economic considerations more weight in deciding if a species merits inclusion on the list. That approach was tried during the Reagan Administration, which led Congress in 1982 to reaffirm that the only considerations that matter in a species listing are scientific. Critics often lose sight of the fact that the law provides accommodations of economic and other concerns by allowing special regulations to be issued for particular species. Such was the case two years ago in the heated controversy over turtle excluder devices, which are required in shrimp fishing nets to prevent drowning of threatened and endangered sea turtles. The National Academy of Sciences had concluded that the best way to protect the turtles is to ban all shrimping in critical areas. "But that was neither a realistic nor desirable option," says National Wildlife Federation attorney Robert Irvin. "The regulations requiring the excluder devices offered a balanced solution to a complex situation." A more problematic area in regard to the law's reauthorization concerns protection for subspecies and geographic populations. The spotted owl of the Northwest is actually the northern spotted owl; there are at least two other races of spotted owls, the California and the Mexican. 
And the species as a whole is closely related to the barred owl, which inhabits the same ecological niche east of the Rockies. Lujan has suggested genetic testing as a way to settle whether the endangered owls are truly unique, but it is not clear he has an open mind on the subject. When the question arose of protecting a subspecies of red squirrel, which was threatened by plans for a new telescope on Arizona's Mount Graham, Lujan remarked that he couldn't tell the difference between a brown squirrel and a red one. The implication: that one kind of squirrel should be enough for everyone else as well. The key issue, however--the one that goes to the very heart of what the Endangered Species Act means--concerns habitat protection. Lujan contends that the law has been misapplied, that what was meant as a "shield" for endangered animals has become a "sword" to use against activities that environmentalists don't like for other reasons. This is a serious accusation, and it may in some cases be true. One possible response is that clear-cutting virgin forests is irresponsible public policy and ethically indefensible, and that people are entitled to use any means within the law to stop it. Another response is that Lujan simply hasn't paid attention to the language of the act, which makes it clear that Congress intended not just to protect species per se, but to conserve "the ecosystem upon which endangered and threatened species depend." The species-by-species listing process has tended to obscure that point. As the law comes up for reauthorization, some environmental groups would like to add provisions for a broader "ecosystem" listing, protecting whole habitats and all the species within them. But they are troubled by the fact that Lujan says he wants the same thing. Environmentalists fear an Administration version of "ecosystem protection" would involve setting aside small "core" habitats for protection and opening the land all around to intensive development. 
National parks do that already, and more parks would be nice, but not at the expense of destroying critical habitat in the West. The Endangered Species Act as currently written lets wildlife draw the line on protection, not a bureaucrat. And that's the reason for conflicts: Wildlife knows no boundaries of map and plot, does not distinguish between land that no one wants and prime second-home sites, and inhabits the same areas it has for millennia without regard for the needs of developers. That is what Congress recognized in creating the Endangered Species Act: That if we want wildlife, we have to take it on its own terms, give it land and water it needs and get out of its way. And that is why one has to believe in the end the act will survive all efforts to weaken or modify it. Because it is one of the greatest laws in history. New York writer Jerry Adler and Washington, D.C., correspondent Mary Hager are on the staff of Newsweek.
Mechanics: Vectors and Projectiles
Vectors and Projectiles: Audio Guided Solution
Ty Ridlegs boards a paddle boat and heads the boat westward, directly across a river. The river flows south at 48 cm/s. Ty paddles the boat with a speed of 98 cm/s.
a. Determine the resultant velocity of the boat - both magnitude and direction.
b. If the river is 22 m wide at this location, then how much time does it take Ty to cross the river? Assume that Ty keeps his paddle boat headed west.
c. How far downstream will Ty be when he reaches the other side of the river?
Answers: b. 22 s; c. 11 m (rounded from 10.8 m)
Habits of an Effective Problem Solver
- Read the problem carefully and develop a mental picture of the physical situation. If necessary, sketch a simple diagram of the physical situation to help you visualize it.
- Identify the known and unknown quantities in an organized manner. Equate given values to the symbols used to represent the corresponding quantity - e.g., v0x = 12.4 m/s, v0y = 0.0 m/s, dx = 32.7 m, dy = ???.
- Use physics formulas and conceptual reasoning to plot a strategy for solving for the unknown quantity. Identify the appropriate formula(s) to use.
- Perform substitutions and algebraic manipulations in order to solve for the unknown quantity.
Read About It! Get more information on the topic of Vectors and Projectiles at The Physics Classroom Tutorial.
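The listed answers can be checked directly. This sketch (an addition, not part of the original solution) converts the velocities to m/s and applies the usual independence of perpendicular velocity components:

```python
import math

# River-crossing check: boat heads west at 98 cm/s, river flows south at 48 cm/s.
v_boat, v_river = 0.98, 0.48  # m/s
width = 22.0                  # m, west-east width of the river

v_result = math.hypot(v_boat, v_river)             # (a) resultant speed
angle = math.degrees(math.atan(v_river / v_boat))  # (a) direction, south of west
t_cross = width / v_boat                           # (b) only the westward component crosses
drift = v_river * t_cross                          # (c) southward displacement

print(round(v_result, 3))  # ~1.091 m/s
print(round(angle, 1))     # ~26.1 degrees south of west
print(round(t_cross, 1))   # ~22.4 s, i.e. about 22 s
print(round(drift, 1))     # ~10.8 m, rounding to 11 m
```

The cross-river time depends only on the 98 cm/s paddling component, which is why the 48 cm/s current never enters part (b).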
Science can make blind mice see again and deaf mice hear — now scent-deprived mice can sniff their surroundings and smell for the first time, after a new gene therapy. It may be a while before this treatment percolates up to humans, but it's a sign that gene therapy could restore smell in this rare disorder. When Matthew Schiefer, a neural engineer at Case Western Reserve University in Cleveland, Ohio, first managed to stimulate the leg of an unconscious volunteer by wrapping an electrode around a nerve bundle, he knew he was on to something. Now, four years later, Schiefer has created a new kind of nerve-activating electrical interface that could allow people with paralyzed limbs to activate their legs with the push of a button. Until the naked mole rats yield their secrets, humanity will still have to worry about treating and controlling cancer. And to that end, one company may have figured out a novel way to prevent the spread of a highly dangerous form of brain cancer, through the use of pulsing electric fields. Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more.
<urn:uuid:5e6dbb61-91b9-49d0-a2b1-f235b4f1cf8e>
3.09375
267
Content Listing
Science & Tech.
40.450395
Physics
The word "Physics" is derived from the Greek words "physikos" (= natural) and "physis" (= nature).
Study of Physics
Physics is a science that deals with the natural world. It studies matter (the fundamental constituent of the universe) in its various forms, the forces bodies exert on one another, and the results produced thereon. Within this framework, physics encompasses essentially all of nature: the laws and properties of matter and the forces acting upon it, especially the causes (gravitation, heat, light, magnetism, electricity, etc.) that modify the general properties of bodies. The study of physics is divided into a number of subfields. Though the classification is very wide, we limit our classification to the topics that we deal with in our study of physics at this level. Mechanics, Dynamics, Heat, Light, Sound, Electricity, Magnetism, Atomic Physics, etc., are some of the topics that we study.
Physics » Study of Motion
In the study of matter, motion is one of the most important and fundamental aspects that one has to understand. Therefore one branch of physics involves the study of motion – from objects as small as neutrinos to ones as massive as galaxies or even the entire universe – and forces – the interactions between bodies. Besides this, many specialties exist, dealing with gases, liquids, and solids, and so on.
Mechanics
Mechanics is a branch of physics which deals with the study of the forces which act on a body and keep it in equilibrium (objects or bodies at rest) or in motion. It is one of the largest subjects in science and technology. The bodies whose motion we study in "Mechanics" are macroscopic bodies, i.e. bodies that we can easily see. [Though microscopic bodies like electrons etc. are also in motion, their study forms a separate field of study.] However, we may study motion with respect to solids, liquids and gases, where we may be dealing with the motion of large molecules.
Mechanics is classified into three divisions:
Statics
Statics is a branch of physics (mechanics) concerned with the equilibrium state of bodies under the action of forces. When a system of bodies is in static equilibrium, the system is either at rest, or moving at constant velocity through its center of mass. It can also be understood as the study of the forces affecting nonmoving objects.
Dynamics
Dynamics is the branch of physics (mechanics) which deals with the effect of forces on the motion of bodies. It can also be understood as the study of the forces affecting moving objects.
Kinematics
Kinematics is the branch of physics (mechanics) concerned with the motions of objects without being concerned with the forces that cause the motion.
Author Credit: The Edifier
<urn:uuid:ff13712d-fbb8-49d3-aca5-b735c5ed10e9>
3.203125
599
Knowledge Article
Science & Tech.
41.625701
NASA's Curiosity rover is really digging in at Rocknest, a patch of Martian sand the robot has been exploring for more than a week. The photo above, from one of Curiosity's navigation cameras, shows an area of Rocknest sand "with what looks like three bite marks," as project scientist John Grotzinger put it in an October 18 teleconference with reporters. Each mark is a trench left by the scoop on Curiosity's robotic arm, which collects samples for analysis with the rover's onboard instruments. But before Curiosity fired up its CheMin (Chemistry and Mineralogy) instrument to analyze the soil, it first had to purify its sample-collection instruments using Martian sand as a cleansing abrasive. And having already encountered man-made debris, which may have been strewn about during the rover's landing, mission scientists took care not to stick any artificial objects into CheMin. The rover's second scoop at Rocknest contained a bright object that became cause for concern. Most of the science team, Grotzinger said, ultimately concluded that the bright fleck was probably indigenous to Mars, but nonetheless that sample was dumped in favor of a third scoop of sand (center). Grotzinger said that the sample had been delivered to the onboard CheMin, which uses x-ray diffraction to assess mineralogical composition, and that data from the instrument would be transmitted to Earth soon.
<urn:uuid:972cc973-8331-4abd-9486-600cd241c9de>
3.046875
380
Truncated
Science & Tech.
31.976426
"What's Special About This Number" Facts by Gianni A. Sarcone
http://www.archimedes-lab.org/numbers/Num1_69.html
People have always been fascinated by NUMBERS... Numbers are actually basic elements of mathematics used for counting, measuring, ranking, comparing quantities, and solving equations. Numbers have unique properties: for some of us they are merely concise symbols manipulated according to arbitrary rules; for others, numbers carry occult powers and mystic virtues. Almost all numeration systems start as simple tally marks, using single strokes to represent each additional unit. The first known use of numbers dates back to around 30,000 BC, when tally marks were used by Stone Age people. To show that each number is unique and has its own beauty, we have collected for you a huge amount of facts pertaining to the magical world of numbers, covering a range of different topics including mathematics, history, philosophy, psychology, symbolism, etymology, language, and/or ethnology... The number facts in this 'Numberopedia' are available as features for print and electronic publishing. If you have a distinctive fact about any number listed here that you think the Archimedes' Lab community might enjoy, why not post it here? Do you know a number with original properties? Contact us!
NaN (Not a Number) is, in computing, a value (or symbol) that is usually produced as the result of an operation on invalid input operands, especially in floating-point calculations. NaNs are close to some undefined or indeterminate expressions in mathematics. In short, NaN is not really a number but a symbol that represents a numerical quantity whose magnitude cannot be determined by the operating system.
<urn:uuid:49dc6a98-8090-431c-9547-d624e6dfd547>
3.09375
456
Content Listing
Science & Tech.
24.268585
Nitrogen (N2) is the most abundant gas in Earth's atmosphere, including the troposphere. This cartoon also depicts several other tropospheric gases, including oxygen (O2), carbon dioxide (CO2), water vapor (H2O), methane (CH4), sulfur dioxide (SO2), and carbon monoxide (CO). Image courtesy UCAR, modified by Windows to the Universe staff (Randy Russell).
Chemical Composition of Earth's Atmosphere
Earth's atmosphere consists of about 78% nitrogen, 21% oxygen, and a mixture of small amounts of numerous other ingredients. Some of the minor constituents do, however, have big impacts. For example, greenhouse gases such as carbon dioxide and methane exert a large influence on the temperature of our planet. The chemical composition of air varies across the different layers of Earth's atmosphere. The oceans and the biosphere exchange vast quantities of gases with the atmosphere's lowest layer. The Carbon and Nitrogen Cycles play key roles in these processes. The activities of humans play an increasingly important role in atmospheric chemistry. Fossil fuel burning generates sulfur oxides, which create sulfuric acid - a component of acid rain. Exhaust gases from cars and trucks produce nitrogen oxides, which contribute to the formation of smog and of nitric acid - another component of acid rain.
<urn:uuid:bbedeadc-6127-460e-aab1-5fb7f5466c30>
3.75
656
Knowledge Article
Science & Tech.
41.898182
The pigbutt worm or flying buttocks (Chaetopterus pugaporcinus) is a newly discovered species of worm found by scientists at the Monterey Bay Aquarium Research Institute. The worm is round in shape, approximately the size of a hazelnut, and bears a strong resemblance to a disembodied pair of buttocks. Because of this, it was given a Latin species name that roughly translates to “resembling a pig’s rear.” The worm has recently been observed residing just below the oxygen minimum zone between 900 and 1,200 metres (3,000 to 4,000 feet) deep — even when the sea floor is significantly deeper. The worms have also been observed floating with their mouths surrounded by a cloud of mucus. Current theories suggest that they reside in this area of the ocean because of its cornucopia of detritus and marine snow, and that the worms use these mucus clouds to capture particles of food and “snow.” The worm has a segmented body, but the middle segments are highly inflated, giving the animal a round shape. These morphological characteristics are unique among chaetopterids. It is unknown whether the specimens found to date were adult or larval forms. Their unusual size (five to ten times larger than any known chaetopterid larvae) might indicate they were adults, but all known species of chaetopterid adults live in parchment-like tubes on the sea floor. Comparison to larval morphology has indicated that the specimens have a close relationship to either genus Chaetopterus or genus Mesochaetopterus, and a phylogenetic tree constructed from mitochondrial and ribosomal DNA sequences from twelve different Chaetopteridae worms found them to be most closely related to other worms of the Chaetopterus genus.
<urn:uuid:ca41c474-4edc-49b5-a079-8fdd7700d351>
3.859375
373
Knowledge Article
Science & Tech.
31.172075
The other day, an educated but non-professional person wrote something to me that said "All starfish eat mussels". This was followed by me doing this: Then...I regained my composure. Okay..so let me say this again. NOT ALL STARFISH EAT MOLLUSKS!! It seems that YEARS of biology and zoology textbooks using the common nearshore asteriid starfish Asterias and Pisaster and their kin as models of the "common" starfish have led to misinformation about their feeding mode as the basis for ALL starfish. STARFISH DO NOT FEED ALIKE. So, this week is ALL ABOUT Starfish Feeding !! FIRST-to break the preconceived notions? SOME Starfish DON'T use a stomach to feed! Paxillosidans: including "sand stars" such as Astropecten and/or Luidia will usually SWALLOW their prey whole. Sometimes they eat OTHER starfish! Here we have an Atlantic Luidia feeding on a much less fortunate Astropecten. Note that there is NO STOMACH extruded. The prey is simply being EATEN ALIVE and swallowed into the mouth! Sometimes they overdo it.... ...and others feed on mud and the foodish bits that are found in deep-sea mud! Many deep-sea (these live at 3000+m !!) porcellanasterid "mud stars" are collected with their guts FILLED with mud. To the point that the animals are literally BALLS of mud surrounded by starfish stuff!! The thickness you see in this deep-sea Styracaster sp?? ALL MUD!!! And then, we get to those starfish that DO feed using their extendable stomachs! Many, such as the tropical sea star Protoreaster, gain most of their nutrition from microalgae and don't really feed much on meaty bits (which is probably why they don't live long if you feed them clam meat), and species like this Patiria miniata extend their stomachs in order to feed on small microalgae or opportunistically ANY small food they can roll their stomach over and eat!! Usually starfish species that feed in this way can also feed on encrusting animals like sponges, bryozoans, etc... This feeding mode is actually pretty common.
If I could make ANY generalization about starfish feeding, it is that many species are OMNIVOROUS. That is, they can feed on just about anything slower than them..algae, encrusting invertebrates, etc. But really, they don't always feed on other moving animals ...and finally we get to the starfish that FEED on bivalve mollusks. Many of these are taxa that are restricted to the ASTERIIDAE. and EVEN these starfish don't always feed on bivalves...sometimes it's snails or what have you... To be honest, there's a fair number of asteriid species that feed this way..but - Its a specialized feeding mode (the humped posture over the shell) - The great majority of taxa aren't asteriid starfish and DON'T feed this way. NUDIBRANCHS! (Cadlina luteomarginata being fed upon by Crossaster papposus-contrary to what the original source says) (from the Sea Slug Forum) ...and of course...coral!! Both deep-sea (Hippasteria on ???) Later this week (Thursday): STARFISH THAT CAPTURE SWIMMING FOOD!!!
<urn:uuid:ce8034a2-11e1-4cfb-8764-7919af74907f>
2.71875
771
Personal Blog
Science & Tech.
64.375568
The purpose of the PARCS project is to place an advanced laser-cooled cesium atomic clock in orbit and utilize it to test a variety of predictions of the Theory of Relativity. One of these predictions, made by Albert Einstein in 1915, is that clocks tick slower in strong gravity than they do in weak gravity. An orbiting satellite might place PARCS at an altitude of 220 miles (360 kilometers), where gravity is slightly weaker than that found at the Earth's surface. Thus the PARCS clock aboard the satellite ticks faster than a clock on the surface of the Earth by about 1 second in every 10,000 years.
What We Hope to Find Out: These tiny shifts have already been observed in previous experiments - the aim of PARCS is to measure them about a hundred times more accurately than ever before. To observe such a small change in clock rate requires extremely accurate clocks, both in orbit and on the ground. PARCS will be the most accurate clock ever built, keeping time to within 1 second in 300 million years. It will be compared to the master clock of the United States, which is at the National Institute of Standards and Technology (NIST) in Boulder, Colorado. Principal Investigators for the PARCS project are from NIST and the University of Colorado.
The clock consists of two major components: an oscillator, which produces a stable frequency (in other words, something that produces a steady series of "ticks"), and a "frequency checker", which compares that frequency to the natural frequency of an atom. For PARCS, the oscillator will itself be a highly stable atomic clock, a hydrogen maser built by the Smithsonian Astrophysics Observatory. The frequency checker part of the apparatus consists of a beam of very cold cesium atoms, which pass through a pair of microwave cavities, which are used to very accurately measure the natural frequency difference between two internal energy levels of an atom. The hydrogen maser frequency is checked against this frequency and then compared to that of a clock on the ground. Because every cesium atom of the same isotope is identical, we can be sure that any differences in frequency that we see between a clock on the ground and one in orbit are due to relativistic effects.
For the very high accuracy required, PARCS will use atoms that have been cooled to a temperature of just 1 millionth of a degree above absolute zero. This is achieved using a technique called "laser cooling." Photons from several laser beams, each coming from a different direction, bounce off of the atoms, giving the atoms a small push with each bounce. These small pushes serve to slow down the atomic motion, resulting in dramatically cooler temperatures. These lower temperatures allow the natural frequency of the atom to be measured much more accurately. Further improvements can be made by performing the experiment in space. Because objects in orbit are all freely falling (this is what produces the phenomenon of weightlessness), the atoms can be observed for a longer time before they hit the walls of the container. The longer measurement times yield more precise clocks. As Werner Heisenberg showed in 1927, this longer observation time allows for a more precise measurement of an energy level (this is called "the uncertainty principle for time and energy").
How We'll Conduct Our Experiment: To compare the measurement of time by the PARCS experiment in orbit and an accurate clock on the ground, the Global Positioning System (GPS) is used. Each clock compares its frequency to that transmitted by the GPS satellites. By knowing each of these frequency differences, one can calculate the frequency difference between the ground clock and the space clock.
- See the RACE experiment information
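The size of the gravitational effect can be sketched with the weak-field formula Δf/f ≈ (GM/c²)(1/R − 1/(R+h)). This is my own illustrative calculation, not a figure from the mission: it includes only the gravitational term and deliberately ignores the velocity-related time dilation, which for a low orbit is larger and has the opposite sign, so the net rate a real satellite clock shows differs from this number.

```python
# Gravitational frequency shift between Earth's surface and a 360 km orbit.
# Illustrative sketch only: velocity time dilation is ignored here.
GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8     # speed of light, m/s
R = 6.371e6          # mean Earth radius, m
h = 3.6e5            # orbital altitude, m (about 220 miles)

# Weak-field approximation: delta f / f = (GM/c^2) * (1/R - 1/(R+h))
frac_shift = (GM / c**2) * (1.0 / R - 1.0 / (R + h))

seconds_per_year = frac_shift * 365.25 * 24 * 3600
print(f"fractional shift: {frac_shift:.2e}")                # ~3.7e-11
print(f"gained per year:  {seconds_per_year * 1e3:.2f} ms")  # ~1.2 ms
```

Even this tiny fraction is enormous compared with the stated PARCS stability of 1 second in 300 million years (about 1e-16), which is why the mission can measure such shifts so precisely.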
<urn:uuid:aac94cf6-49da-453d-91b8-5c317ec1ce40>
4.15625
811
Knowledge Article
Science & Tech.
34.244344
Linus Pauling: One day, when I was Eastman Professor at Oxford in the spring of 1948, I caught a cold. It was before the vitamin C days! I caught a cold and after a day or two in bed of reading science fiction and detective stories, I got tired of that, and thought, why don't I discover the alpha helix? Something like that - why don't I try to find how polypeptide chains are folded in a way compatible with all the knowledge we have of structural chemistry and such that they can form hydrogen bonds to hold the parts of the molecule together? I took a piece of paper, much like this piece, and drew on it a representation of an extended polypeptide chain, with the distances approximately right and the angles right. Except, one angle did not have the right value. I still have that original piece of paper, by the way. This is the bond angle of the alpha carbon that didn't have the right value. I folded the paper - actually, it took several trials - I folded it along several parallel lines through the successive alpha carbons. Finally, I found a way by folding the paper to make this bond have an angle of 110 degrees. I finally found a way of folding such that when I fit it together, there was an N-H-C-O bond formed by each N-H group, and each C=O group. The hydrogen bond held the structure together and had just the right dimensions. I found that this structure, which turned out to be the structure of hair and horn and fingernail, and also present in myoglobin and hemoglobin and other globular proteins, a structure called the alpha-helix, had 3.6 residues per turn of the helix. A helical structure where there are 3.6 residues per turn.
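The 3.6-residues-per-turn figure Pauling mentions fixes the helix's basic geometry. A quick check (the rise per residue of about 1.5 Å is a standard textbook value I'm supplying; it does not appear in this transcript):

```python
# Basic alpha-helix geometry from the residues-per-turn count.
residues_per_turn = 3.6  # from Pauling's model
rise_per_residue = 1.5   # angstroms; standard textbook value (assumed here)

# Pitch: vertical distance the backbone climbs in one full turn
pitch = residues_per_turn * rise_per_residue  # angstroms

# Rotation about the helix axis from one residue to the next
twist = 360.0 / residues_per_turn  # degrees

print(pitch, twist)  # 5.4 angstroms per turn, 100.0 degrees per residue
```

The non-integer 3.6 was the surprising part of the model: the helix repeats exactly only after 5 turns (18 residues), which is why it initially conflicted with X-ray repeat distances others expected.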
<urn:uuid:df64510b-6c6f-491a-8d21-014cd0d4c248>
2.859375
389
Audio Transcript
Science & Tech.
62.311731
Quadratic Equations and Parabolas -> SOLUTION: 3x^2+12x+8=0
To solve for x we can use the quadratic formula. A quadratic equation ax^2 + bx + c = 0 (in our case a = 3, b = 12, c = 8) has the following solutions:
x = (-b ± sqrt(b^2 - 4ac)) / (2a)
For these solutions to exist, the discriminant b^2 - 4ac should not be a negative number. First, we need to compute the discriminant: d = 12^2 - 4·3·8 = 144 - 96 = 48. The discriminant d = 48 is greater than zero. That means that there are two solutions:
x = (-12 ± sqrt(48)) / 6 = -2 ± 2·sqrt(3)/3
Again, the answer is: -0.845299461620749, -3.15470053837925.
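The two roots can be verified numerically (a short sketch; the variable names are mine):

```python
import math

# Coefficients of 3x^2 + 12x + 8 = 0
a, b, c = 3.0, 12.0, 8.0

# Discriminant: b^2 - 4ac = 144 - 96 = 48 > 0, so two distinct real roots
d = b**2 - 4*a*c

x1 = (-b + math.sqrt(d)) / (2*a)
x2 = (-b - math.sqrt(d)) / (2*a)
print(x1, x2)  # ≈ -0.8452994616207485, -3.1547005383792515
```

Substituting either root back into 3x² + 12x + 8 gives zero to within floating-point rounding, confirming the answer.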
<urn:uuid:51b4abe3-9f62-425e-bb8b-ccb17d98b3a6>
3.1875
228
Tutorial
Science & Tech.
52.156046
Science Fair Project Encyclopedia Eryops ("big eye") is a genus of extinct, semi-aquatic amphibian found primarily in the Permian-aged Admiral Formation of Archer County, Texas, but fossils are also found in New Mexico and parts of the eastern United States. Eryops averaged a little over 5 feet (1.5 m) long, making it one of the largest land animals of its time. Several complete skeletons of Eryops have been found in the Lower Permian, but skull plates and teeth are the most common fossils. Although it had no direct descendants, it is the best-known Permian amphibian and a remarkable example of natural engineering. Eryops is an example of an animal that made successful adaptations in the movement from a water environment to a terrestrial one. It retained, and refined, most of the traits found in its fish ancestors. Sturdy limbs supported and transported its body while out of water. A thicker, stronger backbone prevented its body from sagging under its own weight. Also, by utilizing vestigial fish jaw bones, a rudimentary ear was developed, allowing Eryops to hear airborne sound. The skull of Eryops is proportionately large, being broad and flat and reaching lengths of 2 feet. It had an enormous mouth with many sharp teeth in strong jaws. Its teeth had enamel with a folded pattern, hence its classification with the Labyrinthodonts ("maze toothed"). Within the wide, gaping jaw, the fang-like palatal teeth, when coupled with the gape, suggest an inertial feeding habit. This is when the amphibian would grasp its prey and, lacking any chewing mechanism, toss its head up and backwards, throwing the prey farther back into its mouth. Such feeding is seen today in the crocodile and alligator. It is thought that Eryops was not very active, thus a predatory lifestyle, while possible, was probably not the norm. It is more likely that it fed on fish either in the water or on those that became stranded at the margins of lakes and swamps.
A large supply of terrestrial invertebrates was also present at the time, and this may have provided a fairly adequate food supply in itself. Eryops’ eye sockets were large and directed upward. The body was low to the ground and supported by short, massive limbs. The tail was short, suggesting the animal was not a fast or powerful swimmer. The flat skull with the large eyes and nostrils placed on the top of the head are suggestive that Eryops used stealth for hunting, much like a modern crocodile, and sat quietly in the water waiting for prey with only its eyes and nostrils visible above the water. The pectoral girdle of Eryops was highly developed, with a larger size allowing increased muscle attachment to both it and the limbs. Most notably, the shoulder girdle was disconnected from the skull, resulting in improved terrestrial locomotion. The crossopterygian cleithrum was retained as the clavicle, and the interclavicle was well-developed, lying on the underside of the chest. In primitive forms, the two clavicles and the interclavicle could have grown ventrally in such a way as to form a broad chest plate, although such was not the case in Eryops. The upper portion of the girdle had a flat, scapular blade, with the glenoid cavity situated below performing as the articulation surface for the humerus, while ventrally there was a large, flat coracoid plate turning in toward the midline. The pelvic girdle also was much larger than the simple plate found in fishes, accommodating more muscles. It extended far dorsally and was joined to the backbone by one or more specialized sacral ribs. The hind legs were somewhat specialized in that they not only supported weight, but also provided propulsion. The dorsal extension of the pelvis was the ilium, while the broad ventral plate was composed of the pubis in front and the ischium behind.
The three bones met at a single point in the center of the pelvic triangle, called the acetabulum, providing a surface of articulation for the femur. The main strength of the ilio-sacral attachment of Eryops was by ligaments, a condition structurally, but not phylogenetically, intermediate between that of the most primitive embolomerous amphibians and early reptiles. The condition that is more usually found in later vertebrates is that cartilage and fusion of the sacral ribs to the blade of the ilium are utilized in addition to ligamentous attachments. Modern amphibians breathe by inhaling air into lungs, where oxygen is absorbed. They also breathe through the moist lining of the mouth and skin. So too did Eryops, but its ribs were too closely spaced to suggest that it simply expanded the rib cage. More likely, it depressed the hyoid apparatus to expand the oral cavity and elevated the floor of the mouth while the mouth and nostrils were closed. This forced air back into the lungs. Air could then be forced back out by contraction of the elastic tissue in the lung walls. Eryops had typical amphibian posture exhibited by the upper arm and upper leg extending nearly straight out from its body, while the forearm and the lower leg extended downward from the upper segment at a near right angle. The body weight was not centered over the limbs, but was rather transferred 90 degrees outward and down through the lower limbs, which contacted the ground. Most of the animal's strength was used to just elevate its body off the ground for walking, which was probably slow and difficult. With this sort of posture, only short, broad strides could be achieved. This has been confirmed by fossilized footprints found in Carboniferous rocks. Ligamentous attachments within the limbs were present in Eryops, being important because they were the precursor to bony and cartilaginous variations seen in modern terrestrial animals that use their limbs for locomotion.
The primary species of Eryops has been named Eryops megacephalus (“big head”). The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:f318908e-b16e-42ff-a5c9-95dec929d43a>
3.921875
1,307
Knowledge Article
Science & Tech.
43.667881
Elements, Atoms and the Periodic Table atom - An atom is the smallest unit of a substance that still has all the properties of that substance. In most cases, an atom consists of protons, neutrons, and electrons. The protons and neutrons are found in the center of the atom, called the atomic nucleus, and the electrons orbit or circle around the center of the nucleus in paths called orbitals. atomic number - The atomic number of an atom is equal to the number of protons that the atom contains. Atoms can have differing numbers of neutrons and electrons while still retaining the original characteristic properties of that atom. However, if an atom gains or loses a proton, in essence, it changes its atomic number and becomes an entirely new atom with new characteristics. atomic weight - The atomic weight of an atom is a measure of how much mass an atom has. The atomic weight is calculated by adding the number of protons and neutrons together. Atomic masses are not listed as whole numbers on the periodic table because atoms can come in forms with different numbers of neutrons. The atomic weight reported for any particular element is an average weight of all the known forms of that element. electron - An electron is a negatively charged particle found circling or orbiting an atomic nucleus. An electron, like a proton, is a charged particle, although opposite in sign, but unlike a proton, an electron has negligible atomic mass. Electrons contribute no atomic mass units to the total atomic weight of an atom. ion - An ion is an atom that has been charged. Neutral atoms have the same number of protons and electrons. Since the charges of these two particles are equal in magnitude and opposite in sign, the charge of a neutral atom is zero. When an atom gives up or takes on an electron, the positive and negative charges are no longer balanced. Extra electrons give an ion a negative charge. Fewer electrons give an ion a positive charge.
isotope - Isotopes of atoms are atoms with the same number of protons and electrons but different numbers of neutrons. Adding neutrons to atoms does not affect charge, but it will affect atomic mass. This is why atomic mass is not reported as a whole number on the periodic table. Elements exist as a group of different isotopes. neutron - A neutron is an uncharged particle found in the nucleus of an atom. A neutron, like a proton, contributes one atomic mass unit to the total atomic weight of an atom. periodic table - The periodic table is a chart of all the known elements in order of increasing atomic number. The table puts elements into groups with similar characteristics, allowing us to recognize trends over the whole array of elements. proton - A proton is a positively charged particle found in the nucleus of an atom. A proton contributes one atomic mass unit to the total atomic weight of an atom.
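The "average weight of all the known forms" idea behind atomic weight can be made concrete. A sketch using chlorine's two stable isotopes (the isotope masses and natural abundances are standard reference values I'm supplying; they are not part of this glossary):

```python
# Atomic weight as the abundance-weighted average of isotope masses.
# Chlorine has two stable isotopes; data are standard reference values.
isotopes = [
    (34.96885, 0.7577),  # Cl-35: mass in atomic mass units, natural abundance
    (36.96590, 0.2423),  # Cl-37
]

atomic_weight = sum(mass * abundance for mass, abundance in isotopes)
print(round(atomic_weight, 2))  # ≈ 35.45, matching the periodic-table value
```

This is why chlorine appears on the periodic table as roughly 35.45 rather than a whole number: no single chlorine atom has that mass, but a natural sample averages to it.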
<urn:uuid:85db2682-a5c2-4d4b-bd99-d1ca6c279b20>
4.40625
597
Structured Data
Science & Tech.
40.039332
If you love space, and you love getting behind-the-scenes access, then the AsteroidMappers project is perfect for you. NASA's Jet Propulsion Laboratory's spacecraft Dawn recently finished its orbit of the asteroid Vesta, and has delivered so much data that citizen scientists are being asked to help analyze what has been found. The AsteroidMappers project is similar to the MoonMappers project housed online at CosmoQuest — home to various citizen-science projects. Your mission will be to scour high resolution images of Vesta, identifying craters, boulders, rises and other landmarks to help create a map of the surface. A tutorial on how to begin your mission will walk you through how to contribute, and provide instructions along the way. Dawn scientists were amazed at the amount of data they got from the spacecraft's orbit of Vesta. Scientists have not yet released all the images to the public so by working on the project you can truly say you got an insider's look. Scientists are calling it a surprisingly fascinating asteroid — unlike anything they've seen before — with huge impact basins, steep cliffs and other unusual color variations. You are probably wondering what scientists hope to learn by mapping Vesta and its neighbor Ceres. They are hoping to characterize the conditions and activity in the solar system's earliest epoch as these "protoplanets" have remained intact since their formation. Both Ceres and Vesta reside in the asteroid belt of our inner solar system, but each one is believed to have followed very different evolutionary paths due to the diversity of conditions and processes during the time the solar system was formed. That's important to scientists because it will in turn help us understand how these forces may have helped form our own planet. Scientists already know that Vesta more closely resembles a small planet, or Earth's moon, than any other asteroid.
Even with only part of the data analyzed, scientists can conclusively link Vesta with meteorites that have fallen on Earth. As if helping to build a picture of the conditions in the earliest stages of our solar system weren't exciting enough, there is also a bit of space history being made. Thanks to its revolutionary ion engine, Dawn is now heading over to Ceres; this will make Dawn the first spacecraft to establish orbit around two distinct objects in space. Just think — the orbit of Ceres will likely also collect a vast amount of data, so you could very well be getting in on the ground floor of an ongoing mapping project that could potentially unlock some secrets of our solar system.
<urn:uuid:0da9f057-79f2-40cb-8226-48dc3b3cc44c>
3.21875
518
Personal Blog
Science & Tech.
37.229642
Problems like climate change, biodiversity loss and natural resource use have long-term implications which require long-term policy solutions. To make informed strategic decisions, we must try to anticipate what lies ahead and grasp ongoing, emerging and latent developments. If we want to seriously address Europe's sustainability, we have to look beyond two legislative cycles and more.
Key facts and messages:
- The global population will still be growing midway through the 21st century, but at a slower rate than in the past. People will live longer, be better educated and migrate more. Some populations will increase as others shrink. Migration is only one of the unpredictable prospects for Europe.
- The breakneck pace of technological change brings risks and opportunities, not least for developed regions like Europe. These include in particular the emerging cluster of nanotechnology, biotechnology, and information and communication technology. Innovations offer immense opportunities.
- The risk of exposure to new, emerging and re-emerging diseases, to accidents and new pandemics, grows with increasing mobility of people and goods, climate change and poverty. Vulnerable Europeans could be severely affected.
- By increasing tax on pollution and other environmentally damaging activities, governments can use the extra funds to provide incentives for innovation, such as developing renewable energy. For advanced economies like the EU, such schemes also create new technologies which can be exported globally. 
- An increasingly urban world will probably mean spiralling consumption and greater affluence for many. But it also means greater poverty for the urban underprivileged. Poor urban living conditions and associated environmental and health risks could impact all areas of the world, including Europe.
As Europe’s climate warms, wine producers in Europe may need to change the type of grapes they cultivate or the location of vineyards, even moving production to other areas in some cases. This is just one example of how Europe’s economy and society need to adapt to climate change, as examined in a new report from the European Environment Agency (EEA). Increased flooding is likely to be one of the most serious effects of climate change in Europe over coming decades. Some of the conditions which may contribute to urban flooding are highlighted in an Eye on Earth map from the European Environment Agency (EEA). Climate change is affecting all regions in Europe, causing a wide range of impacts on society and the environment. Further impacts are expected in the future, potentially causing high damage costs, according to the latest assessment published by the European Environment Agency today. The continuing loss of biodiversity – made up of genes, species and ecosystems – is a matter of growing concern in Europe. Yet measuring the extent of the loss and the threat it poses is a huge challenge. Climate change will affect Europe's cities in different ways. To give an overall impression of the challenge for European cities to adapt to climate change, the European Environment Agency (EEA) has published a series of detailed interactive maps, allowing users to explore data from more than 500 cities across Europe. 
The world continues to speed down an unsustainable path despite over 500 internationally agreed goals and objectives to support the sustainable management of the environment and improve human wellbeing, according to a new and wide-ranging assessment coordinated by the United Nations Environment Programme (UNEP). Despite progress in some areas, Europe must do more to create the 'green economy' needed for the continent to become sustainable, according to a new report from the European Environment Agency (EEA). Around three quarters of Europeans live in cities. Most of Europe's wealth is generated in cities, and urban areas are particularly at risk due to climate change. Europe should seize the opportunity of improving quality of life while adapting to climate change in cities, according to a report from the European Environment Agency (EEA). The report also warns that delaying adaptation will be much more costly in the long-term.
<urn:uuid:114a1a34-4fa6-47e8-8565-5e8a60233eea>
2.75
942
Content Listing
Science & Tech.
25.740804
Section 1: Introduction Any two objects, regardless of their composition, size, or distance apart, feel a force that attracts them toward one another. We know this force as gravity. The study of gravity has played a central role in the history of science from the 17th century, during which Galileo Galilei compared objects falling under the influence of gravity and Sir Isaac Newton proposed the law of universal gravitation, to the 20th century and Albert Einstein's theory of general relativity, to the present day, when intense research in gravitational physics focuses on such topics as black holes, gravitational waves, and the composition and evolution of the universe. Figure 1: Portraits of Sir Isaac Newton (left) and Albert Einstein (right). Source: © Image of Newton: Wikimedia Commons, Public Domain; Image of Einstein: Marcelo Gleiser. Any study of gravity must accommodate two antithetical facts. In many ways, gravity is the dominant force in the universe. Yet, of the four forces known in nature, gravity is by far the weakest. The reason for that weakness remains a major unanswered question in science. Gravity also forms the central focus of efforts to create a "theory of everything" by unifying all four forces of nature. Ironically, gravity was responsible for the first unification of forces, when Newton identified the force that caused an apple to fall to Earth to be the same as the force that held the Moon in orbit. Current research on gravity takes several forms. Experiments with ever-greater precision seek to test the foundations of gravitational theory such as the universality of free fall and the inverse square law. Other experimentalists are developing ways to detect the gravitational waves predicted by Einstein's general relativity theory and to understand the fundamental nature of gravity at the largest and smallest units of length. 
At the same time, theorists are exploring new approaches to gravity that extend Einstein's monumental work in the effort to reconcile quantum mechanics and general relativity.
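The inverse square law mentioned above is easy to make concrete with Newton's law of universal gravitation, F = G·m1·m2/r². A minimal sketch (the Earth and Moon values below are approximate reference numbers, not from the text):

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2
# Approximate values for the Earth-Moon system.
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
m_earth = 5.972e24   # mass of Earth, kg
m_moon = 7.342e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

def gravitational_force(m1, m2, r):
    """Attractive force between two point masses, in newtons."""
    return G * m1 * m2 / r**2

F = gravitational_force(m_earth, m_moon, r)
print(f"Earth-Moon gravitational force: {F:.3e} N")  # ~2e20 N
```

The same function applies to Newton's apple: it is the universality of the law, not its complexity, that made the unification remarkable.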
<urn:uuid:a5bbc16f-8701-4c2f-9f0b-3cc757a902a2>
3.96875
394
Knowledge Article
Science & Tech.
26.841364
All sea turtles that occur in US waters are listed as either threatened or endangered under the Endangered Species Act of 1973, as amended. Maintaining an active stranding network has been identified in each of the sea turtle recovery plans, developed jointly by the Fish and Wildlife Service and NOAA Fisheries Service, as a task necessary for the conservation and recovery of listed sea turtles. The Sea Turtle Stranding and Salvage Network (STSSN) was formally established by NOAA Fisheries Service in the southeastern U.S. and Gulf of Mexico in 1980. The STSSN has since spread to encompass the entire east and gulf coasts of the U.S., from Maine through Texas, as well as parts of the Caribbean. The STSSN was established in response to the need to better understand the threats sea turtles face in the marine environment, to provide aid to stranded sea turtles, and to salvage dead sea turtles that may be useful for scientific and educational purposes. Actions taken by stranding networks improve the survivability of sick, injured, and entangled turtles; while also helping scientists and managers to expand their knowledge about diseases and other threats that affect sea turtles in the marine environment and on land. In the Northeast Region there is an active network of organizations that participate in the STSSN. While NOAA Fisheries Service coordinates the network, it is participating organizations that respond to stranded turtles, rehabilitate sick and injured turtles, and help educate the public, for the overall goal of sea turtle conservation. All data collected by the Northeast Region STSSN is housed in a National STSSN database, maintained by the NOAA Fisheries Service, Southeast Fisheries Science Center (SEFSC). For more information on the National STSSN Program and Database, please visit the SEFSC and National STSSN Website. NOAA Fisheries Service, NER Stranding Hotline: 866-755-NOAA (6622)
<urn:uuid:e4a254b2-723a-4d7e-8b1e-2e9cdcb97614>
3.421875
386
Knowledge Article
Science & Tech.
36.185294
First of all, try to avoid the term "HHO"; it makes you look like a crackpot. Electrolysis of water produces a mixture of molecular hydrogen (H2) and molecular oxygen (O2) in a 2:1 ratio; there is no such thing as "HHO". The second thing to know is that no matter what you do, the energy into the system is more than the energy you get back out of the system. If you put in one joule of energy to break the water into H2 and O2, you will get back less than one joule when the H2 and O2 are recombined. Those are the laws of thermodynamics, and you can't break those laws. Now, what are you really trying to do? You mention a Tesla coil, but that is a high-voltage, low-current system. Electrolysis cells are intrinsically low-voltage, high-current devices. To electrolyze water you only need about 2 volts per cell. Using higher voltages doesn't accomplish anything useful; indeed, higher voltages just mean that you will get electrolysis reactions you don't want, as well as significant energy loss due to ohmic heating. So it would be really silly to take an energy source, use that to run a Tesla coil, and then use that output to run the electrolysis rig. If you are thinking of using the Tesla coil to pull power out of, say… atmospheric RF (basically using the coil as an RF antenna), then you have to go back to my earlier point about energy in > energy out. How much "free" energy can the coil actually capture from the atmosphere? RF waves are extremely low energy; even with a big antenna you are not going to be able to capture enough energy to light even a single LED. So, running an electrolysis cell might be possible, but it isn't going to generate enough fuel to make it worthwhile. One last thing: H2 + O2 is explosive. The Hindenburg didn't explode, it combusted. A mixture of H2 + O2 WILL explode, and that explosion will be much more energetic than what occurred with the Hindenburg. 
Add in the fact that any electrolysis cell produces gases that can be compressed, along with heat, and you have two pathways to an explosion: combustion and over-pressurization of the H2 + O2 storage tank.
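The energy-in versus energy-out point can be made quantitative with Faraday's law. A back-of-envelope sketch (the 2.0 V cell voltage follows the figure quoted above; the 286 kJ/mol higher heating value of hydrogen is a standard reference number, not from the original answer):

```python
# Back-of-envelope: energy to electrolyze water vs. energy recovered
# by recombining the hydrogen and oxygen. Illustrates energy in > energy out.
F_CONST = 96485.0          # Faraday constant, C per mol of electrons
ELECTRONS_PER_H2 = 2       # 2 H2O + 2 e- -> H2 + 2 OH- (2 electrons per H2)
V_CELL = 2.0               # practical cell voltage, V (figure from the text)
HHV_H2 = 286e3             # higher heating value of H2, J/mol (reference value)

charge_per_mol = ELECTRONS_PER_H2 * F_CONST     # coulombs per mol of H2
energy_in = charge_per_mol * V_CELL             # J spent per mol of H2
energy_out = HHV_H2                             # J recovered per mol of H2

print(f"Energy in : {energy_in / 1e3:.0f} kJ/mol H2")   # ~386 kJ/mol
print(f"Energy out: {energy_out / 1e3:.0f} kJ/mol H2")  # 286 kJ/mol
print(f"Round-trip efficiency: {energy_out / energy_in:.0%}")  # ~74%
```

Even before ohmic heating and other real-world losses, the round trip loses roughly a quarter of the input energy, which is the thermodynamic point the answer is making.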
<urn:uuid:ba29777d-0247-4ce3-be20-8cc6a95634dd>
2.90625
477
Comment Section
Science & Tech.
54.846807
Audio animations of pulsating stars
When a star pulsates, it does so in many modes simultaneously, each with a slightly different frequency. The periods of the pulsations are typically many minutes, but speeding them up by a factor of 300,000 in these computer simulations brings the pulsations into the audible range. Each recording lasts for 8 seconds, which corresponds to about 30 days of real time.
- The Sun is a fairly average star, about 4.5 billion years old. Pulsations in the Sun have been studied extensively over the past few decades, particularly by the SOHO satellite.
- alpha Centauri A is the brighter of the two pointers to the Southern Cross and the nearest star to our own solar system. This star is very similar to the Sun, but is a bit bigger and probably a bit older. The computer-generated recording shows oscillations from this star.
- beta Hydri is also a bright southern star, located quite close to the south celestial pole. This star gives us an idea of the future fate of our Sun, being about 7 billion years old.
These sound animations were made using the Vislab facility by Andrew Lyons, a PhD student working in music composition at the Sydney Conservatorium of Music.
Last updated 20-Dec-2005 by Tim Bedding
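The 300,000× speed-up can be checked with a couple of lines of arithmetic. A minimal sketch (the 5-minute period used as an example is a typical solar-like oscillation period, not stated in the text above):

```python
# Scaling stellar pulsation frequencies into the audible range.
# Solar-like oscillations have periods of a few minutes; speeding
# them up by 300,000x puts them near 1 kHz.
SPEEDUP = 300_000

def audible_frequency(period_seconds):
    """Audible frequency (Hz) for a pulsation of the given period."""
    real_freq = 1.0 / period_seconds     # Hz, typically a few mHz
    return real_freq * SPEEDUP

# Example: a 5-minute oscillation, typical of solar-like pulsations.
f = audible_frequency(5 * 60)
print(f"5-minute oscillation -> {f:.0f} Hz")  # 1000 Hz, a comfortable pitch

# An 8-second recording then spans 8 * 300,000 seconds of real time,
# consistent with the "about 30 days" quoted above.
days = 8 * SPEEDUP / 86400
print(f"An 8-second recording spans {days:.1f} days of real time")
```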
<urn:uuid:e0bf90dd-0622-41d6-966a-5c453e2a4924>
3.125
275
Knowledge Article
Science & Tech.
47.091486
The Case of the Alternating Ice Sheets There has been a wave of triumphal announcements by climate change proponents recently, almost giddy over the summer shrinkage of the Arctic ice sheet. “Lowest level ever!” they proclaim, though that is not quite true. Nonetheless, the Arctic pack ice has been receding over the last decade or so, but that is only natural. You see, there is a well-known, if poorly understood, linkage between the ice at the north pole and the ice in and around Antarctica—and the ice around Antarctica is doing quite well. Satellite radar altimetry measurements indicate that the East Antarctic ice sheet interior increased in mass by 45±7 billion metric tons per year from 1992 to 2003. This trend continues today, reinforcing recent scientific investigations into this millennial-scale oscillation between the poles. According to studies, this is how things have been for hundreds of thousands of years. By now everyone who pays attention to climate matters has heard the news: the National Snow and Ice Data Center (NSIDC) has proclaimed a new record low for the Arctic ice sheet. The dweebs over at RealClimate are beside themselves with joy, smugly celebrating the impending ecological doom of all mankind. “Take to the lifeboats, the seas are a risin'.” Ok, maybe they are not quite that ecstatic, but this “record” is being used as a see-I-told-you-so to prop up anthropogenic global warming. Here is what the NSIDC had to say in their press release: Arctic sea ice cover melted to its lowest extent in the satellite record yesterday, breaking the previous record low observed in 2007. Sea ice extent fell to 4.10 million square kilometers (1.58 million square miles) on August 26, 2012. This was 70,000 square kilometers (27,000 square miles) below the September 18, 2007 daily extent of 4.17 million square kilometers (1.61 million square miles). NSIDC scientist Walt Meier commented, “By itself it's just a number, and occasionally records are going to get set. 
But in the context of what's happened in the last several years and throughout the satellite record, it's an indication that the Arctic sea ice cover is fundamentally changing.” Problem is, the record is not all that meaningful. As I will explain, these folks are all missing the bigger picture. The first thing to note is that this “record” is only valid for the period that we have had satellite observations, roughly 33 years beginning in 1978. There is no data for direct comparison before that, so you cannot even say with certainty that this is the lowest ice extent this century. Indeed, as was reported in “Greenland's Oscillating Glaciers,” the glaciers of Greenland hit a low back in the early 1930s that rivals current reports of glacial melting, an indication the Arctic pack ice might have been rather sparse in the summers back then as well. Arctic sea ice over the past 33 years. Animation by D. Kelly O'Day. The warmist apologists will say that those past temperatures may have been warm, but things were different then; today we have warming all over. But that is not really true either, though there is not enough data to conclusively prove this argument one way or the other. Regardless of the ice coverage during the 1930s, if you go back farther to some of the historical climate optima it is hard to believe that the new “record” is, in fact, the most shrunken Arctic ice sheet ever (see “Driftwood On Ice”). During the Holocene Climate Optimum, around 6,000 years ago, temperatures in the Arctic were 4°C higher than today and the Arctic Ocean may have been totally ice free during the summer. That this happened before makes the melting of the Arctic sea ice not a particularly bothersome thing; even the “endangered” polar bears managed to live through this balmy period in the high Arctic. Even if we ignore the fact that there have been warmer periods in the Holocene climate record, there is a reason to not get upset by the apparent retreat of the Arctic ice sheet. 
That reason is explained in a paper by Stephen Barker and colleagues, entitled “800,000 Years of Abrupt Climate Variability,” that appeared in Science in 2011. Here is the abstract: We constructed an 800,000-year synthetic record of Greenland climate variability based on the thermal bipolar seesaw model. Our Greenland analog reproduces much of the variability seen in the Greenland ice cores over the past 100,000 years. The synthetic record shows strong similarity with the absolutely dated speleothem record from China, allowing us to place ice core records within an absolute timeframe for the past 400,000 years. Hence, it provides both a stratigraphic reference and a conceptual basis for assessing the long-term evolution of millennial-scale variability and its potential role in climate change at longer time scales. Indeed, we provide evidence for a ubiquitous association between bipolar seesaw oscillations and glacial terminations throughout the Middle to Late Pleistocene. According to the authors, ice core records from Greenland document the existence of repeated, large, abrupt shifts in Northern Hemisphere climate in the past. The last glacial cycle was characterized by rapid alternations between cold (stadial) and warmer (interstadial) conditions, cycles known as Dansgaard-Oeschger (D-O) oscillations. These oscillations led several scientists to propose a theory of inter-hemisphere climate linkage known as the seesaw model (see “Paleocean circulation during the Last Deglaciation: A bipolar seesaw?”). The thermal bipolar seesaw model, first proposed by Wally Broecker, attempts to explain the observed relationship between millennial-scale temperature variability observed in Greenland and Antarctica. The speculative mechanism responsible for these oscillations is variation in the strength of the Atlantic meridional overturning circulation (AMOC). In trying to document the model Barker et al. 
incorporate orbital cycles, insolation, a number of different proxies and several mathematical techniques, yielding results shown in the figure below. Ice core records from Greenland (GISP2) and Antarctica (EDC). According to the seesaw model, a transition from weak to strong AMOC would cause an abrupt warming across the North Atlantic region (a D-O warming event) while temperatures across Antarctica would (in general) shift from warming to cooling. In other words, the thermal bipolar seesaw model states that there is an inverse relationship between temperatures in Greenland and the rate of change of Antarctic temperature. “The northward heat transport associated with this circulation implies that changes in the strength of overturning should lead to opposing temperature responses in either hemisphere,” Barker et al. state. While this report is specifically about millennial cycles, there are others who have proposed shorter-term oscillations on the order of centuries or decades. Taking this relationship another logical step, shrinking ice in one hemisphere should imply growing ice mass in the other—the ice sheets alternate. Is the ice mass growing in Antarctica? According to a study done by NASA scientists back in 2005 and published in Science, that is exactly what is happening down at the bottom of the world. Accumulation of snow in the interior of the continent is resulting in growth in the Antarctic glacial ice. The plot below shows the change in elevation between 1992 and 2003. The important part of this plot is the long-term linear trend (black line), from which a steady increase in elevation since about 1995 is apparent (the red curve is an 11-year, least-squares polynomial fit of questionable usefulness). The average rate of change from 1995 to 2003 is 2.2 cm/year after adjustment for isostatic uplift. This growth is even more dramatic when viewed on the map below. 
Satellite radar altimetry measurements indicate that the East Antarctic ice-sheet interior north of 81.6°-S increased in mass by 45±7 billion metric tons per year from 1992 to 2003. Comparisons with meteorological model snowfall estimates suggest that the gain in mass is associated with increased precipitation. A gain of this magnitude is enough to slow sea-level rise by 0.12±0.02 millimeters per year. But it is not just the ice on the Antarctic continent that is showing signs of growth. According to NASA's Earth Observatory, total Antarctic sea ice has increased by about 1% per decade since the start of the satellite record. “Whether the small overall increase in sea ice extent is a sign of meaningful change in the Antarctic is uncertain because ice extents in the Southern Hemisphere vary considerably from year to year and from place to place around the continent,” they report. “Considered individually, only the Ross Sea sector had a significant positive trend, while sea ice extent has actually decreased in the Bellingshausen and Amundsen Seas. In short, Antarctic sea ice shows a small positive trend, but large scale variations make the trend very noisy.” Antarctic sea ice in winter and summer, 2011-2012. The Arctic is an ocean basin surrounded by land. The Antarctic, on the other hand, is a large continent surrounded by ocean. Because of this geography, sea ice has more room to expand in the winter. But the ice also extends to warmer latitudes, leading to more melting in summer. The Antarctic sea ice peaks in September and retreats to a minimum in February, as can be seen from the seasonal maps above. Earth's climate engine contains cycles within cycles, operating on timescales that often exceed a human lifetime. Really long cycles leave traces in sediment and glacial ice, short-term change can be witnessed first hand, but the intermediate cycles are difficult for even scientists to appreciate. 
Imagine if a year took a century to unfold and you were born in the dead of winter; the coming of summer would seem a frightening change, with temperatures rising dramatically and seemingly without limit. It is easy to understand people getting frantic over shrinking ice sheets and melting glaciers, but such events are all natural and operate on timescales we are ill-equipped to comprehend. Nature will keep its own counsel without regard for overly excitable climate scientists. The rest of us should simply chill out. Be safe, enjoy the interglacial and stay skeptical.
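As a closing aside, the Antarctic mass-gain figure quoted earlier (45±7 billion metric tons per year, said to slow sea-level rise by 0.12±0.02 mm/yr) can be sanity-checked with simple arithmetic. A sketch, assuming a standard reference value for the global ocean surface area (not given in the article):

```python
# Sanity check: does 45 Gt/yr of extra Antarctic ice correspond to
# ~0.12 mm/yr of sea-level equivalent, as quoted?
ICE_GAIN_KG = 45e9 * 1000       # 45 billion metric tons per year, in kg
WATER_DENSITY = 1000.0          # kg/m^3 (water equivalent of the ice mass)
OCEAN_AREA_M2 = 3.61e14         # global ocean surface area, m^2 (reference value)

volume_m3 = ICE_GAIN_KG / WATER_DENSITY            # m^3 of water withheld per year
sea_level_mm = volume_m3 / OCEAN_AREA_M2 * 1000    # spread over the oceans, in mm
print(f"Sea-level equivalent: {sea_level_mm:.2f} mm/yr")  # ~0.12 mm/yr
```

The arithmetic reproduces the quoted 0.12 mm/yr figure, so the numbers in the cited study are at least internally consistent.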
<urn:uuid:361f5a39-5a42-4e14-8b66-777f248575e7>
2.953125
2,146
Personal Blog
Science & Tech.
41.017961
BUTTERFLY EFFECT BY NADIM CHOWDHURY What is the Butterfly Effect? The "Butterfly Effect" is the propensity of a system to be sensitive to initial conditions. Such systems become unpredictable over time; this idea gave rise to the notion of a butterfly flapping its wings in one area of the world, causing a tornado or some such weather event to occur in another remote area of the world. It is a phrase that encapsulates the more technical notion of sensitive dependence on initial conditions in chaos theory. Small variations in the initial condition of a dynamical system may produce large variations in the long-term behavior of the system. This is sometimes presented as esoteric behavior, but it can be exhibited by very simple systems: for example, a ball placed at the crest of a hill might roll into any of several valleys depending on slight differences in initial position. The phrase refers to the idea that a butterfly's wings might create tiny changes in the atmosphere that may ultimately alter the path of a tornado, or delay, accelerate or even prevent the occurrence of a tornado in a certain location. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale alterations of events. Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different. Of course the butterfly cannot literally cause a tornado. The kinetic energy in a tornado is enormously larger than the energy in the turbulence of a butterfly. The kinetic energy of a tornado is ultimately provided by the sun, and the butterfly can only influence certain details of weather events in a chaotic manner. Who devised the idea? In 1960, Edward Lorenz was doing weather prediction research at MIT. He managed to get funding to acquire a Royal McBee LGP-30 computer with 16 KB of memory that could do 60 multiplications per second. 
Lorenz set the new computer to solve a system of 12 differential equations that model a miniature atmosphere. To speed up the output, Lorenz altered the program to print only three significant digits of the solution trajectories, although the calculations themselves were carried out with somewhat higher precision. After seeing a particularly interesting run, he decided to repeat the calculation. As a shortcut, for one number in the sequence he entered the decimal .506 instead of entering the full .506127 the computer would hold, and started the program. Lorenz went for a coffee break, and when he returned, he found that the results were completely different. At first he thought that some vacuum tubes in the computer were not working. Upon careful checking, he realised that the discrepancies between the original and re-started calculations occurred gradually: first in the least significant decimal place, then eventually in the next, and so on. From this event, what Lorenz discovered is that tiny differences in the starting conditions can have big effects later on. Why is it called the Butterfly Effect? • Lorenz coined the phrase "butterfly effect" to describe how a slight change in initial conditions can lead to drastically different outcomes: "The flap of a butterfly's wings in South America could be responsible for a tornado in Texas." • He chose a butterfly because a three-dimensional image of the trajectory mapped out by his equations looks like a butterfly. When the initial conditions change a bit, "does the flap of a butterfly's wings in Brazil set off a Tornado in Texas?" (Edward Lorenz, Dec 1972, talk given in Washington DC). There is a common misconception with regards to the words "set off" (or "cause" in other formulations of the same idea). You cannot call uncle Eddie in Brazil and ask him to let his pet butterflies flap their wings so that they cause a rain storm in Dhaka to soak your boy/girl-friend whom you are angry at. 
What it means is that you have to imagine two identical worlds. In one of the worlds you place a butterfly and let it flap its wings. In the other world you don't place the butterfly. Now you wait a while (a few months or more, perhaps) and you will see that the global weather patterns on your two worlds are completely different. The butterfly effect refers to the exponential growth of any small perturbation. However, this exponential growth continues only so long as the disturbance remains very small compared to the size of the attractor. It then folds back onto the attractor. Unfortunately, most people miss this latter part and think that the small perturbation continues to grow until it is huge and has some large effect. The point of the effect is that it prevents us from making very detailed predictions at very small scales, but it does not have a significant effect at larger scales.
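Lorenz's truncation experiment can be reproduced in miniature with his now-famous three-variable system. A minimal sketch (sigma=10, rho=28, beta=8/3 are Lorenz's classic parameters; the forward-Euler integrator, step size, and the size of the perturbation are illustrative choices, not from the original essay):

```python
# Sensitive dependence on initial conditions in the Lorenz system.
# Two trajectories starting a tiny distance apart diverge dramatically,
# then stay bounded by the size of the attractor.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)   # perturbed by one part in a hundred million

max_sep = 0.0
for _ in range(3000):        # ~30 time units of the model
    a = lorenz_step(a)
    b = lorenz_step(b)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

print(f"Largest separation reached: {max_sep:.2f}")
```

The separation grows from 10⁻⁸ to the order of the attractor's diameter, illustrating both halves of the point above: exponential growth of the perturbation, followed by folding back onto the attractor rather than unbounded growth.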
<urn:uuid:630ad580-74fc-4d76-b762-0fcec12607f4>
3.6875
953
Knowledge Article
Science & Tech.
42.441396
A NASA image that suggests the origin of Earth's atmosphere. Earth's atmosphere evolves continuously. Early in our planet's history, gases from volcanoes and material from comets contributed to the atmosphere. Once life began, plants and animals became another contributing factor to the evolution of the atmosphere. Courtesy of JPL/NASA.
<urn:uuid:f1e7f52d-4fb7-45e8-a7dc-3e8a8852bd8a>
3.421875
64
Knowledge Article
Science & Tech.
23.964353
The birth of a glacier
What makes a glacier?
A glacier carves a U-shaped valley: Illustrated explanations show how a glacier changes a meandering, V-shaped stream valley into a relatively straight, U-shaped valley in 4 steps.
Carving a cirque at the head of an alpine glacier: Illustrated explanations show the development of a cirque and associated glacial features.
Features of a valley glacier
A slice through a glacier: Illustration shows where snow accumulates, where it melts and more.
Image bank (zillions of megabytes!!!)
New glacier terms in glossary
<urn:uuid:1132ac99-5950-42bc-a629-221e01f307c2>
3.921875
140
Content Listing
Science & Tech.
43.70875
To read an interesting article which discusses why there seem to be so many predators in the Palmyra Atoll, please see the link below. I have done some research online to determine in my own mind why Palmyra Atoll has more predatory species existing there than in nearby areas. Here is what I have discovered: What is an atoll? An atoll is an island of coral that encircles a lagoon partially or completely. Charles Darwin classified "the atoll" as a unique type of island. Darwin understood the creation of an atoll as having its roots in volcanic activity. The basis for his theory came from personal observations that Darwin made during a six-year voyage around the southern section of the Pacific Ocean. According to Darwin, the atoll was the result of the gradual sinking of a volcanic island that has long since cooled, leaving behind an open crater in the middle section of the island. As the island begins to sink, the surrounding territory of the island falls beneath the surface of the water. At the same time, the coral reef that is found around the fringes of the island remains and gradually is built up through the natural accumulation of marine organisms that become part of the reef. This creates a barrier reef around the remaining section of the island, forming the perfect conditions for the development of a lagoon. "Once the lagoon is formed and is more or less encircled by the barrier reef, an atoll is the final product of this gradual metamorphosis." (source: http://www.wisegeek.com/what-is-an-atoll.htm ) Facts about the Palmyra Atoll: "About halfway between Hawai‘i and American Samoa lies Palmyra Atoll. Palmyra consists of a circular string of about 50 islets nestled among several lagoons and encircled by 15,000 acres of shallow turquoise reefs and deep blue submerged reefs. It is the northernmost atoll in the Line Islands in the equatorial Pacific." 
(source: http://www.fws.gov/palmyraatoll/ ) "Palmyra is an incorporated atoll administered by the United States federal government. The atoll is 4.6 sq mi (12 km2), and it is located in the Northern Pacific Ocean. Geographically, Palmyra is one of the Northern Line Islands (southeast of Kingman Reef and north of the Kiribati Line Islands), located almost due south of the Hawaiian Islands, roughly halfway between Hawaii and American Samoa. Its 9 mi (14 km) of coastline has one anchorage known as West Lagoon. It consists of an extensive reef, two shallow lagoons, and some 50 sand and reef-rock islets and bars covered with vegetation—mostly coconut trees, Scaevola, and tall Pisonia trees." "Palmyra was first sighted in 1798 by an American sea captain, Edmund Fanning of Stonington, Connecticut, while his ship the Betsy was in transit to Asia, but it was only later—on November 7, 1802—that the first Western people landed on the uninhabited atoll. On that date, Captain Sawle of the United States ship Palmyra was wrecked on the atoll. In December 2000, most of the atoll was purchased by The Nature Conservancy." "On January 18, 2001, the U.S. Secretary of the Interior signed an order designating Palmyra’s tidal lands, submerged lands, and surrounding waters out to 12 nautical miles from the water’s edge as a National Wildlife Refuge." According to the document https://darchive.mblwhoilibrary.org/bitstream/handle/1912/2051/ : "On coral reefs in Palmyra—a central Pacific atoll with limited fishing pressure—total fish biomass is 428 and 299% greater than on reefs in nearby Christmas and Fanning Islands. Large apex predators—groupers, sharks, snappers, and jacks larger than 50 cm in length—account for 56% of total fish biomass in Palmyra on average, but only 7% and 3% on Christmas and Fanning." 
"With minimal historical and current population (two resident refuge managers and up to ten visiting scientists or volunteers), Palmyra has never had the extensive local fisheries of the more populated Line Islands." "The shallow reefs at Palmyra Atoll (2-10 people, no fishing allowed on reefs because of marine protected area status) sustain 428% and 299% more fish biomass per 200 m2 than Christmas"

According to the document "Trip Report from December 2008 Pangaea Research":

"Our findings to date have helped to describe the dynamics of this unique ecosystem, which is relatively free of human influences." "We determined bonefish at Palmyra to have much higher natural mortality rates compared with other locations that have been investigated (Friedlander et al. 2004, 2007), most likely a consequence of the large number of apex predators at Palmyra (e.g. sharks and jacks). These predator-dominated ecosystems are rare owing to the extirpation of large apex predators from most reefs worldwide (Friedlander and DeMartini 2002, Sandin et al. 2008)."

"Understanding the role that sharks and other apex predators play is becoming even more important due to recent reports that predator populations are declining due to over-fishing, and it is unclear what effect this may have on prey populations and other dynamics of the marine communities (Baum et al., 2003)."

"In addition to bonefish, we have intensively studied movements and feeding habits of the most common predator in the lagoon ecosystem at Palmyra, the blacktip reef shark (Carcharhinus melanopterus). Our data indicate that blacktip reef sharks show site fidelity to certain lagoon areas, and that movements are focused along the edges of the sand flats (Papastamatiou et al. in press). Using fractal analysis, we surmised that blacktips patrol these edges to intercept prey species that may move to and from the lagoons and sand flats with the tides."
"The only other study of movement patterns of blacktip reef sharks, at Aldabra atoll, Indian Ocean, also suggested that blacktips showed site fidelity to core areas of a reef, and that movement patterns were influenced by tidal currents (Stevens 1984)."

So I come to the following conclusions, which seem to indicate why there are more predators in Palmyra Atoll than in surrounding atolls and areas:

The atoll has limited or no fishing pressure from humans, partly because no fisheries are present in this area and partly because no fishing is allowed on reefs that have been designated as having "marine protected area" status. The fact that the United States Government has also designated Palmyra’s tidal lands, submerged lands, and surrounding waters out to 12 nautical miles from the water’s edge as a National Wildlife Refuge is another reason why there are more predators found in this atoll. Because more fish exist, predators have a larger food supply in this area, which they have come to know, and therefore they will continue to reside in an area where their dietary needs can be fulfilled. And because the predator fish and animal species in this atoll have not been over-fished by humans, their populations have been allowed to increase in size.

In the following video, Senior Scientist and Cultural Advisor for the Nature Conservancy Hawai'i, Sam Ohu Gon, shares why Palmyra is so special and why Hōkūle'a was allowed to visit this protected atoll.

Here is a video which shows some pictures taken of Palmyra Atoll from Google Earth by YouTube member Tautvis17.

The Nature Conservancy created this next video of the waters off of Palmyra Atoll.

Lastly, here is a website which gives you more information about the Atoll:
I. WHAT'S HAPPENING TO GLOBAL CLIMATE?

[Figure: Warm current reduces Peruvian fish catch by 45 per cent]

criticise the GCMs of the early 1990s for their over-prediction of the average surface temperature rise. What Singer and others such as Frederick Seitz, past President of the US National Academy of Sciences, patently failed to mention, however, was the modellers' awareness that the 'extra' greenhouse gases were not the whole story and that their models, to be one step closer to reality, needed to take on board the effect of 'offending' atmospheric aerosols. By realising that the sulphur dioxide emitted with the burning of fossil fuels has had a cooling effect on the Earth's surface through reflecting incoming light back out to space, the modellers at the UK Met Office's Hadley Centre are now able to get good correlations with the records of past surface temperatures. Once again the discrepancy has been cleared up: the model shows a warming of little more than 0.5°C since 1860, just as has been found from measurements on the ground.[8] In fact, the upward trend in temperature over the past 130 years has been in fits and starts rather than being a steady increase. The reason for the jerkiness becomes clear once the industrially-generated sulphate aerosols are included, which, in sharp contrast to the greenhouse gases, with an atmospheric lifetime of roughly 50 to 200 years, have an atmospheric lifetime of two weeks at most, together with a distribution that is extremely patchy. When industrial activity is high, for instance during the two World Wars, the emissions of sulphur go up, and since their effect on the atmosphere is immediate but short-lived they tend to dominate in the short-term. When high industrial activity is followed by a slump, as in the Great Depression, the concentration of atmospheric sulphur rapidly falls and the impact of the greenhouse gases comes shining through.
We therefore have the paradoxical situation that cooler periods in the past resulted from greater industrial activity and warmer periods from economic and industrial recession. Clearly, as we institute sulphur-scrubbing to reduce sulphur emissions on an international basis, in accord with the Helsinki Protocol, the skies will become clearer and the full warming impact of the added greenhouse gases will be revealed. Even as the theories of the small band of climate change sceptics are being demolished, discrepancies or lack of correlation between carbon dioxide levels and climate over the past few hundred years are still being manipulated as evidence that our current greenhouse gas emissions cannot be correlated with global warming. One notable claim is that the Sun is largely responsible for such 'natural' fluctuations in climate through variations in sunspot activity. Thus, a shorter cycle of around nine years, compared with the average 11-year cycle, is generally associated with greater sunspot activity, and there is evidence that those periods coincide with warmer surface temperatures, such as in late Roman times and in the Middle Ages. By the same token, periods of cool surface temperatures, such as between AD 1400 and 1510, a period known as the Sporer Minimum, and the Maunder Minimum of the seventeenth century - when the Sun's brightness fell by at least 0.4 per cent - coincided with low sunspot activity.[9] As various scientists have pointed out, the sunspot cycle is now months shorter than it was one century ago, implying more solar activity and presumably a warming. But, far more important than the actual length of the solar cycle is the number of sunspots in evidence at any one time, and they have been declining since 1960 - an indication that the Earth should be getting cooler, at least on the surface. Hence, the only possible remaining reason for the warming is the rise in greenhouse gases, which are now swamping fluctuations in sunspot activity.
Still, climate change sceptics argue that climatic changes we may be witnessing today are a consequence of natural phenomena - such as El Nino. Whilst El Nino is normally a natural phenomenon, its recent extreme manifestation is highly likely to be the consequence of severe aggravation by human activities, including human-induced global warming and tropical forest destruction. In fact, according to some climatologists, if natural variability were the overriding factor, far from causing warming, it would currently be leading us into a period of cooling - a glacial. Writing 20 years ago, those climatologists were basing their argument on what was known of the Earth's orbiting around the Sun - known as the Milankovitch Wobble. The Earth's orbit shifts from being circular to elliptical over the course of 100,000 years. Its tilt varies too, from 21.8 to 24.4 degrees over a 40,000-year period, and is currently tilted at 23.44 degrees. The more tilted the Earth, the greater the impact of the seasons. Which hemisphere is closest to the Sun during its summer or indeed winter varies over a 25,000-year cycle. The northern hemisphere is now closest to the Sun during its winter and furthest away in the summer, which means that it receives approximately 5 per cent less summer sunshine than it received 12,000 years ago. The Earth's current trajectory is one which has more in common with a cooling period, and therefore we should be heading towards another ice-age. Recent history of the Earth suggests that ice-ages last 90,000 years with 10,000 years of interglacial.

WE'RE CHANGING OUR CLIMATE! WHO CAN DOUBT IT?

The Rise of Greenhouse Gas Concentrations

Atmospheric concentrations - the accumulation of emissions - of greenhouse gases have grown significantly since pre-industrial times as a result of human activities. Carbon dioxide concentrations - the most important greenhouse gas apart from water vapour - have increased more than 30 per cent, from 280 ppmv (parts per million by volume) in the pre-industrial era to 365 ppmv by the late 1990s. The current rate of increase is around 1.5 ppmv per year. Unfortunately, a large proportion of the carbon dioxide we put into the atmosphere remains there, warming the planet, for around 200 years.

Methane - on a weight-per-weight basis some 20 times more powerful as a greenhouse gas than carbon dioxide - has more than doubled its concentration, from 700 to 1,720 parts per billion by volume (ppbv), primarily because of deforestation and the growth in rice and cattle production. Natural gas leaks are another source. Methane's residence time in the atmosphere is relatively short: approximately 12 years.

Nitrous oxide, associated with modern agriculture and the heavy application of chemical fertilisers, has increased from pre-industrial levels of 275 ppbv to 310 ppbv, with a current annual growth rate of 0.25 per cent. On a weight-per-weight basis it is more than 200 times more powerful as a greenhouse gas compared with carbon dioxide. Its residence time in the atmosphere is around 120 years.

The chlorofluorocarbons, CFC11 and CFC12, both with growth rates of 4 per cent per year during the past decade, have now reached levels of 280 parts per trillion by volume (pptv) and 484 pptv respectively. They have a 'greenhouse gas potential' that is many thousands of times greater than carbon dioxide on a weight-per-weight basis, and they remain in the atmosphere for several thousand years.

When we take the residence time in the atmosphere of the different gases and their specific effectiveness as greenhouse gases into account, carbon dioxide's contribution is some 55 per cent of the whole, compared with 17 per cent for the two CFCs and 15 per cent for methane. Other CFCs and nitrous oxide account for 8 and 5 per cent respectively of the changes in radiative forcing.
On that basis the timing is right for the development of another ice-age. The current spate of warming is therefore indicative that new factors - human emissions of greenhouse gases and mass deforestation - have been introduced which are counteracting and even overwhelming the consequences of a natural process.[10]

Waiting for 'more certainty' cannot be an option

The handful of climate change sceptics enjoy repeating the mantra that too many uncertainties exist in the science of climate change and that these must be eliminated before we take economically 'costly' mitigating action. Such arguments are false and in leading to prevarication they are extremely dangerous: all the evidence of the IPCC has been properly peer-reviewed by the best climatologists in the world and it shows without doubt that global warming is a human-induced phenomenon that has a significant statistical base. The only elements of uncertainty concern the precise effects global warming will have on the rest of the Earth's climate-stabilising systems, and the speed with which changes will occur. But that must not be used as a reason for delaying action. Quite the opposite, for such uncertainty encompasses the possibility of highly disruptive, extremely long-lasting climatic change. The longer we delay reducing our greenhouse gas emissions, the more likely it is that the warming we have set in motion will increase to the extent that it causes new factors to come into play - such as the collapse of the planet's natural greenhouse-gas-absorbing sinks, which will in turn feed back on the warming process, causing climatic changes that are potentially catastrophic and effectively irreversible for centuries if not millennia to come (see 'How Climate Change Could Spiral Out of Control', p.68).
If such effects were unleashed, we would not be able to return rapidly to where we were by simply switching off the emission of greenhouse gases and deforestation that caused the impact in the first place. For, once carbon dioxide is in the atmosphere, between 40 and 60 per cent of it remains there for a historically long period - some 200 years when the carbon sinks are in healthy operation. Waiting for 'more certainty' or more damage to occur is an extremely dangerous and irresponsible position to take for another reason. The radiative thermodynamic physics of the greenhouse effect are such as to cause a long delay between the emission of carbon dioxide into the atmosphere and the time when the effects on the climate actually manifest themselves. Hence, the CO2 that we emit and accumulate in the atmosphere now will only act on the climate 50 to 80 years in the future. Conversely, climatic changes, such as temperature increase, extreme weather events and damage to crop yields that we are experiencing today, are occurring in response to the CO2 that we emitted half-a-century or more ago, when atmospheric concentrations were much lower than they currently are. It therefore follows that in 50 to 80 years from now, we will experience incomparably more damage than today. Our politicians should therefore understand that if they only take action proportionate to the damage they see now, they will dramatically and catastrophically underestimate the damage that will actually take place, and they will hence underestimate the degree of action that is needed to avert it. Measures to prevent such severe climatic disruption cannot therefore be taken soon enough. The reality of climate change and the need for preventive action is now inescapable - no one should doubt it.

Simon Retallack is guest editor of this special issue of The Ecologist. Peter Bunyard, Science Editor of this special issue, is the author of Gaia In Action: Science of the Living Earth.
His forthcoming book on climate change is called The Impact of Global Warming.

References:
1. IPCC's Second Assessment Report, Summary for Policymakers, Cambridge University Press, 1995.
2. Ross Gelbspan, The Heat is On, Addison Wesley, 1997.
3. Ross Gelbspan, Climate change: local and global, article, 1998.
4. Climate Change, The IPCC Scientific Assessment, Processes and Modelling, WMO/UNEP, 1990.
5. Stephen Hume, The Vancouver Sun, December 30, 1998.
6. Frank Wentz, Matthias Schabel, Nature, Vol. 394, p. 661, August 1998. Also see James Hansen et al., Science, Vol. 281, p. 930, and Jeff Hecht, New Scientist, August 15, 1998.
7. Martin Jarvis, British Antarctic Survey, Journal of Geophysical Research, Vol. 103, p. 20774.
8. UK Climate Impacts Programme: Technical Report No. 1, The Met Office, October 1998.
9. John Eddy, Solar History and Human Affairs, Human Ecology, Vol. 22, No. 1, 1994.
10. David Waugh, Geography: an Integrated Approach, Nelson, second edition, 1995.

The Ecologist, Vol. 29, No 2, March/April 1999
(Submitted September 10, 1999) I have studied with fascination the massive gamma ray burst event (GRB 990123) that has shocked astronomers with the unimaginable power of the burst that occurred on the 23rd of January this year. One press release claimed that if the same event happened a mere 2,000 light years away, it would appear twice as bright as the Sun for the brief time of the burst. My question relates to the impact of the gamma rays on Earth if such an event occurred within a relatively close proximity to Earth: would it compare to the electromagnetic pulse (EMP) effect of a high-altitude nuclear explosion on Earth's delicate electronic hardware? (How close would such a burst have to be to create such an effect?)

If a gamma-ray burst occurred near to us, it would be Bad. For a description of what a mere supernova could do, see The gamma-rays from 990123 had 1000 times the energy flux of the optical light, so at 2,000 light years the gammas would deposit 2,000 times as much energy as the Sun (in addition to twice as much visible light). Furthermore, this gamma-ray energy would interact in the upper atmosphere, producing nitrogen oxides that would rapidly catalyze the destruction of the ozone layer. And then, a few centuries later, it gets worse, if current models are correct. A storm of cosmic rays would pretty much wipe out everything that wasn't beneath a few hundred meters of rock. See Sky and Telescope for February 1998 (in most good libraries) for more. In answer to your EMP question, I believe that the typical scenarios involve a big bomb, call it 10 megatons, at a high altitude, call it 1000 km. 10 megatons is 4.2 x 10^23 ergs. 990123 produced about 4 x 10^54 ergs of gamma-rays, so it produced 1 x 10^31 times as much, and would be as vicious at 3 x 10^15 times the distance. 3 x 10^15 x 1000 km is about 300,000 light years.
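The distance estimate above is just the inverse-square law: matching the bomb's energy flux requires the distance to grow as the square root of the energy ratio. Here is a sketch of that back-of-the-envelope arithmetic (the figures are the ones quoted in the answer; the variable names are mine):

```python
import math

# Figures quoted above
bomb_energy_erg = 4.2e23     # 10-megaton bomb
bomb_distance_km = 1000.0    # assumed high-altitude detonation distance
grb_energy_erg = 4e54        # gamma-ray output of GRB 990123

# Inverse-square law: equal fluence when distance scales as sqrt(energy ratio)
energy_ratio = grb_energy_erg / bomb_energy_erg      # ~1e31
distance_ratio = math.sqrt(energy_ratio)             # ~3e15

km_per_light_year = 9.46e12
distance_ly = distance_ratio * bomb_distance_km / km_per_light_year
print(f"EMP-equivalent distance: {distance_ly:,.0f} light years")  # roughly 300,000
```

The result comfortably exceeds the diameter of our Galaxy, which is the point of the paragraph that follows.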
So by this analysis, from anywhere in our Galaxy, the gamma rays would cause a massive EMP event and smite all of the electronics on that side of the planet. (And maybe on the other side as well; I don't know how EMP propagates over the horizon.) However, there are probably mitigating effects. A bomb produces a very fast release of gamma-rays (microseconds to milliseconds), causing a fast rise in electric field; the same amount of energy over a shorter time means more power in the pulse. The slower GRB 990123 (lasting about a minute) would probably cause a corresponding decrease in the EMP. There are, however, GRBs with rise times of less than a millisecond.

David Palmer and Samar Safi-Harb for Ask an Astrophysicist
Hydroelectric systems make use of the energy in running water to create electricity. In coal and natural gas systems, a fossil fuel is burned to heat water. The steam pressure from the boiling water turns "propellers" called turbines. These turbines spin coils of wire between magnets to produce electricity. Hydropowered systems also make use of turbines to generate electrical power; however, they do so by using the energy in moving water to spin the turbines. Water has kinetic energy when it flows from higher elevations to lower elevations. The energy spins turbines like those pictured below:

[Image: A hydroelectric turbine]
[Image: An interior view of the turbines at the Nine Mile plant on the Spokane River]

In larger-scale hydroelectric plants, large volumes of water are contained by dams near the generator and turbines. The "forebay" is a storage area for water that must be deep enough that the penstock is completely submerged. The water is allowed to flow into the electricity-generating system through a passage called the "penstock". The controlled high-pressure water spins the turbines, allowing the generator to produce an electric current. The "powerhouse" contains and protects the equipment for generating electricity. The water exits the system through a "draft tube". The "fish ladder" (see "Problems") attempts to minimize the environmental impact of hydroelectric systems by providing a path for migrating fish to take.

Next Page: "Types of Hydroelectric Plants"
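The energy argument above is usually quantified with the standard hydropower relation P = η·ρ·g·Q·h, where Q is the volumetric flow through the penstock and h is the head (the elevation drop). This sketch uses made-up illustrative numbers, not data for the Nine Mile plant:

```python
def hydro_power_watts(flow_m3_per_s: float, head_m: float,
                      efficiency: float = 0.9) -> float:
    """Electrical power from falling water: P = eta * rho * g * Q * h."""
    rho = 1000.0  # density of water, kg/m^3
    g = 9.81      # gravitational acceleration, m/s^2
    return efficiency * rho * g * flow_m3_per_s * head_m

# Example: 50 m^3/s falling through a 20 m head at 90% overall efficiency
power = hydro_power_watts(50, 20)
print(f"{power / 1e6:.1f} MW")  # about 8.8 MW
```

Doubling either the head or the flow doubles the output, which is why plant designs trade dam height against reservoir volume.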
Illustration courtesy NASA

Published September 9, 2011

It's coming from outer space. Sometime in the next few weeks, pieces of a defunct NASA satellite will rain down on an unlucky patch of Earth. Precisely where and when the space debris will hit home are not yet known, though the U.S. government will have a better picture of the so-called "debris footprint"—expected to be roughly 500 miles (805 kilometers) long—as the satellite's date with destiny draws near. When the satellite was switched off in 2005, it became another piece of potentially hazardous space junk, so NASA nudged it toward Earth, aiming for a downward trajectory that would cause the craft to burn up in the atmosphere. Now the satellite itself will become a type of experiment: Can an uncontrolled 6.3-ton object plummet out of orbit without hitting anybody? At a press briefing Friday, NASA said there's generally little danger of death by space debris. Since the dawn of the Space Age some five decades ago, no human has been killed or even hurt by an artificial object falling from the heavens. Many space objects experience a carefully controlled demise. Russia's Mir space station, for example, was steered into a remote patch of ocean in 2001. (Related: "Space Station to Fall to Earth—Find Out How and Where.") But other pieces—old rocket segments jettisoned in orbit and abandoned spacecraft—fall toward Earth unguided. Last year one object a day, on average, made an unshepherded dive into the atmosphere, said NASA's Nick Johnson. To date nearly 6,000 tons of human-made material have survived the fiery journey through our atmosphere, according to the Aerospace Corporation, a space-research center. Here are some of the notable objects that have made surprise return trips to Earth:

Sphere of Influence

In March a hiker in northwestern Colorado spotted a spherical object, still warm to the touch, sitting in a crater.
The hiker called military aerospace officials but was told to instead call the county sheriff, according to an orbital-debris report released last week by the National Research Council. Eventually the hiker reached the NASA office that tracks space debris. The tank, from a Russian Zenit-3 rocket launched in January, is one of the few such space objects to be recovered in the United States.

Brush With Space Junk

A woman taking a late-night walk in Oklahoma in January 1997 saw a streak of light in the sky, then felt something brush her shoulder. It turned out to be part of a U.S. Delta II rocket launched in 1996—the only space debris known to have hit someone, according to the Aerospace Corporation. The woman was unhurt—and lucky. A 580-pound (260-kilogram) fuel tank from the same rocket slammed to the ground in Texas around the same time, narrowly missing an occupied farmhouse, NASA reports.

In January 1978 the Soviet surveillance satellite Kosmos 954 crashed in northern Canada, scattering radioactive material from the spacecraft's nuclear power generator over thousands of square miles, the Canadian government said. A frantic campaign dubbed Operation Morning Light was mounted to find the radioactive material, but only 0.1 percent of the dangerous debris was ever recovered. (Related: "Space Station Crew Not Stranded, Despite Russian Crash.")

Space Station Shower

When the Salyut-7 space station began trailing lower in its orbit, Soviet engineers tried to send it into a controlled tumble into the Atlantic Ocean. But their efforts failed, and the 88,000-pound (39,916-kilogram) station—one of the largest human-made objects to reenter the atmosphere—showered metal fragments on a city in Argentina, where residents observed glowing trails in the sky. No one was hurt, according to the Aerospace Corporation.

In 2000 beachcombers stumbled upon a mysterious object that had washed ashore near Corpus Christi, Texas.
The finder wanted to turn the object—the pointed nose of an Ariane 5 rocket that had just launched—into a hot tub. "We convinced him … that was not an option," NASA's Johnson said.
Use the diagram to investigate the classical Pythagorean means. Show that the arithmetic mean, geometric mean and harmonic mean of a and b can be the lengths of the sides of a right-angled triangle if and only if a = bx^3, where x is the Golden Ratio. What is the relationship between the arithmetic, geometric and harmonic means of two numbers, the sides of a right-angled triangle and the Golden Ratio?
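A quick numerical check of the claimed condition (a sketch, not the requested proof): since AM ≥ GM ≥ HM for positive a and b, the arithmetic mean must be the hypotenuse, so the three means form a right-angled triangle exactly when AM² = GM² + HM². Taking a = b·x³ with x the Golden Ratio:

```python
import math

phi = (1 + math.sqrt(5)) / 2          # the Golden Ratio x

def pythagorean_means(a, b):
    am = (a + b) / 2                  # arithmetic mean
    gm = math.sqrt(a * b)             # geometric mean
    hm = 2 * a * b / (a + b)          # harmonic mean
    return am, gm, hm

b = 1.0
a = b * phi**3                        # the claimed condition a = b * x^3
am, gm, hm = pythagorean_means(a, b)

# Pythagoras holds with AM as the hypotenuse (up to floating-point error)
print(abs(am**2 - (gm**2 + hm**2)))
```

Running this gives a residual on the order of machine precision, consistent with the "if" direction of the claim; the "only if" direction still needs algebra.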
- Find a quadratic formula which generalises Pick's Theorem.
- Manufacturers need to minimise the amount of material used to make their product. What is the best cross-section for a gutter?
- Can you find a general rule for finding the areas of equilateral triangles drawn on an isometric grid?
- Prove that the area of a quadrilateral is given by half the product of the lengths of the diagonals multiplied by the sine of the angle between the diagonals.
- Three rods of different lengths form three sides of an enclosure with right angles between them. What arrangement maximises the area?
- A trapezium is divided into four triangles by its diagonals. Suppose the two triangles containing the parallel sides have areas a and b; what is the area of the trapezium?
- A farmer has a field which is the shape of a trapezium as illustrated below. To increase his profits he wishes to grow two different crops. To do this he would like to divide the field into two. . . .
- In a right-angled tetrahedron prove that the sum of the squares of the areas of the 3 faces in mutually perpendicular planes equals the square of the area of the sloping face. A generalisation. . . .
- If the base of a rectangle is increased by 10% and the area is unchanged, by what percentage (exactly) is the width decreased?
- Four quadrants are drawn centred at the vertices of a square. Find the area of the central region bounded by the four arcs.
- In this problem we are faced with an apparently easy area problem, but it has gone horribly wrong! What happened?
- This article is about triangles in which the lengths of the sides and the radii of the inscribed circles are all whole numbers.
- Can you choose your units so that a cube has the same numerical value for its volume, surface area and total edge length?
- Investigate the properties of quadrilaterals which can be drawn with a circle just touching each side and another circle just touching each vertex.
- One side of a triangle is divided into segments of length a and b by the inscribed circle, with radius r. Prove that the area is:
- If I print this page which shape will require the more yellow ink?
- Three triangles ABC, CBD and ABD (where D is a point on AC) are all isosceles. Find all the angles. Prove that the ratio of AB to BC is equal to the golden ratio.
- Six circular discs are packed in different-shaped boxes so that the discs touch their neighbours and the sides of the box. Can you put the boxes in order according to the areas of their bases?
- A finite area inside an infinite skin! You can paint the interior of this fractal with a small tin of paint but you could never get enough paint to paint the edge.
- A square of area 40 square cms is inscribed in a semicircle. Find the area of the square that could be inscribed in a circle of the
- Draw two circles, each of radius 1 unit, so that each circle goes through the centre of the other one. What is the area of the
- Make a poster using equilateral triangles with sides 27, 9, 3 and 1 units assembled as stage 3 of the Von Koch fractal. Investigate areas & lengths when you repeat a process infinitely often.
- The square ABCD is split into three triangles by the lines BP and CP. Find the radii of the three inscribed circles to these triangles as P moves on AD.
- A circle is inscribed in a triangle which has side lengths of 8, 15 and 17 cm. What is the radius of the circle?
- Change the squares in this diagram and spot the property that stays the same for the triangles. Explain...
- This shape comprises four semi-circles. What is the relationship between the area of the shaded region and the area of the circle on AB as diameter?
- The diagonals of a trapezium divide it into four parts. Can you create a trapezium where three of those parts are equal in area?
- Take any rectangle ABCD such that AB > BC. The point P is on AB and Q is on CD. Show that there is exactly one position of P and Q such that APCQ is a rhombus.
- ABC and DEF are equilateral triangles of side 3 and 4 respectively. Construct an equilateral triangle whose area is the sum of the areas of ABC and DEF.
- Can you show that you can share a square pizza equally between two people by cutting it four times using vertical, horizontal and diagonal cuts through any point inside the square?
- What is the ratio of the area of a square inscribed in a semicircle to the area of the square inscribed in the entire circle?
- Straight lines are drawn from each corner of a square to the mid points of the opposite sides. Express the area of the octagon that is formed at the centre as a fraction of the area of the square.
- Follow the hints and prove Pick's Theorem.
- What is the area of the quadrilateral APOQ? Working on the building blocks will give you some insights that may help you to work it out.
- A and B are two points on a circle centre O. Tangents at A and B cut at C. CO cuts the circle at D. What is the relationship between the areas of ADBO, ABO and ACBO?
- What is the same and what is different about these circle questions? What connections can you make?
- Given a square ABCD of sides 10 cm, and using the corners as centres, construct four quadrants with radius 10 cm each inside the square. The four arcs intersect at P, Q, R and S. Find the. . . .
- Can you draw the height-time chart as this complicated vessel fills?
- Have a go at creating these images based on circles. What do you notice about the areas of the different sections?
- Imagine different shaped vessels being filled. Can you work out what the graphs of the water level should look like?
- How efficiently can you pack together disks? Join in this ongoing research.
- Build squares on the sides of a triangle, join the outer vertices forming hexagons, build further rings of squares and quadrilaterals, investigate.
- Cut off three right angled isosceles triangles to produce a pentagon. With two lines, cut the pentagon into three parts which can be rearranged into another square.
- Three squares are drawn on the sides of a triangle ABC. Their areas are respectively 18 000, 20 000 and 26 000 square centimetres. If the outer vertices of the squares are joined, three more. . . .
- Triangle ABC is right angled at A and semicircles are drawn on all three sides producing two 'crescents'. Show that the sum of the areas of the two crescents equals the area of triangle ABC.
- A point P is selected anywhere inside an equilateral triangle. What can you say about the sum of the perpendicular distances from P to the sides of the triangle? Can you prove your conjecture?
- Take a sheet of A4 paper and place it in landscape format. Fold up the bottom left corner to the top so the double thickness is a 45,45,90 triangle. Fold up the bottom right corner to meet the. . . .
- Which has the greatest area, a circle or a square inscribed in an isosceles, right-angled triangle?
- Draw three equal line segments in a unit circle to divide the circle into four parts of equal area.
- Analyse these beautiful biological images and attempt to rank them in size order.
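Several of the problems above turn on Pick's Theorem, which gives the area of a simple lattice polygon from its interior and boundary lattice points, A = i + b/2 − 1. A minimal sketch (the rectangle example is my own, not one of the listed problems):

```python
def pick_area(interior: int, boundary: int) -> float:
    """Pick's Theorem: area of a simple lattice polygon, A = i + b/2 - 1."""
    return interior + boundary / 2 - 1

# A 4x3 lattice rectangle has 3*2 = 6 interior points and 14 boundary points
print(pick_area(6, 14))  # 12.0, matching width * height = 4 * 3
```

Checking the formula against polygons of known area like this is a useful first step before attempting the quadratic generalisation asked for above.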
At this site we keep several lists of primes, most notably the list of the 5,000 largest known primes. Who found the most of these record primes? We keep separate counts for persons, projects and programs. To see these lists click on 'number' to the right.

Clearly one 100,000,000 digit prime is much harder to discover than quite a few 100,000 digit primes. Based on the usual estimates we score the top persons, provers and projects by adding (log n)^3 log log n for each of their primes n. Click on 'score' to see these lists. Finally, to make sense of the score values, we normalize them by dividing by the current score of the 5000th prime. See these by clicking on 'normalized score' in the table on the right.

- Score for Primes

To find the score for a person's, program's or project's primes, we give each prime n the score (log n)^3 log log n, and then find the sum of the scores of their primes. For persons (and for projects), if three go together to find the prime, each gets one-third of the score. Finally we take the log of the resulting sum to narrow the range of the resulting scores. (Throughout this page log is the natural logarithm.)

How did we settle on (log n)^3 log log n? For most of the primes on the list the primality-testing algorithms take roughly O(log n) steps, where each step takes a set number of multiplications. FFT multiplications take about O(log n . log log n . log log log n) operations. However, for practical purposes the O(log log log n) factor is a constant for numbers in this range (it is the precision of numbers used during the FFT; 64 bits suffices for numbers under about 2,000,000 digits). Next, by the prime number theorem, the number of integers we must test before finding a prime the size of n is O(log n) (only the constant is affected by prescreening using trial division). So to get a rough estimate of the amount of time to find a prime the size of n, we just multiply these together and get O((log n)^3 log log n).

Finally, for convenience when we add these scores, we take the log of the result. This is because log n is roughly 2.3 times the number of digits in the prime n, so (log n)^3 is quite large for many of the primes on the list. (The number of decimal digits in n is floor((log n)/(log 10)) + 1.)
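The scoring rule can be sketched in a few lines of Python (this is only an illustration of the formula as described above, not the site's actual implementation; the function names are my own). A prime is represented by its decimal digit count, using log n ≈ digits × log 10.

```python
import math

def prime_score(digits):
    """Per-prime score (log n)^3 * log(log n), for a prime n with the given digit count."""
    ln_n = digits * math.log(10)   # natural log of n, approximated from the digit count
    return ln_n ** 3 * math.log(ln_n)

def total_score(digit_counts):
    """Sum the per-prime scores, then take the log to narrow the range."""
    return math.log(sum(prime_score(d) for d in digit_counts))
```

This makes the motivation concrete: by this measure a single 1,000,000-digit prime scores more than a thousand 100,000-digit primes combined.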
DETERMINATION OF SODA ASH PURITY

Standardization of HCl

In the fume hood, prepare a solution of HCl that is about 0.1 M by dissolving 10 mL of conc. HCl in about 1 L of DI water. Dry about 1.5 g of primary-standard sodium carbonate for 1 h at 160 °C, cool it in your desiccator, and using an analytical balance weigh at least 5 samples into separate 250 mL conical flasks. Be sure to choose sample sizes that will require between 30 and 50 mL of titrant. Add 100-125 mL of DI water and 2 drops of modified methyl orange indicator to each flask. Titrate each sample slowly (0.5 mL/s) until the indicator changes color from green to gray (almost colorless). [NOTE: This is a relatively gradual change. If the solution appears purple you have titrated too far - discard that sample and obtain another.] Record the titrated volume in your lab notebook (don't forget to rinse the inside of your flask!). Calculate the molarity of the HCl solution using the volume of HCl, the mass and purity of the standard sodium carbonate, the molecular weight of sodium carbonate (105.99), and the balanced chemical equation for the reaction between sodium carbonate and HCl.

Determination of Soda Ash Purity

Dry your unknown for at least 1 h at 160 °C. After cooling your unknown in your desiccator, use an analytical balance to weigh 5 samples into separate 250 mL conical flasks. Again, be sure to choose sample sizes that will require between 30 and 50 mL of titrant. From this point repeat the procedure used to standardize your HCl. Calculate the percent purity of soda ash (sodium carbonate) present in your sample.
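The two calculations at the end of the procedure can be sketched as follows (the function names and sample values are hypothetical, chosen only for illustration). Both rest on the balanced equation Na2CO3 + 2 HCl -> 2 NaCl + H2O + CO2, which supplies the 2:1 mole ratio.

```python
MW_NA2CO3 = 105.99  # g/mol, as given in the procedure

def hcl_molarity(mass_na2co3_g, purity, titrant_volume_mL):
    """Standardization: molarity of HCl from a weighed Na2CO3 standard."""
    moles_na2co3 = mass_na2co3_g * purity / MW_NA2CO3
    moles_hcl = 2 * moles_na2co3                 # 2 mol HCl per mol Na2CO3
    return moles_hcl / (titrant_volume_mL / 1000.0)

def percent_purity(sample_mass_g, hcl_M, titrant_volume_mL):
    """Unknown: percent Na2CO3 in the soda ash sample."""
    moles_hcl = hcl_M * titrant_volume_mL / 1000.0
    mass_na2co3 = (moles_hcl / 2) * MW_NA2CO3    # back out the carbonate mass
    return 100.0 * mass_na2co3 / sample_mass_g
```

For instance, a 0.2 g sample of pure standard consuming 37.7 mL of titrant corresponds to roughly 0.100 M HCl, in line with the target concentration.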
Glucose and other simple carbohydrates can be converted into more complicated carbohydrates such as starch and cellulose. Carbon dioxide is the fourth most abundant gas in Earth's atmosphere, but on the planets Venus and Mars it is the most abundant. The gas traps infrared radiation emitted from the warm surface of the Earth: carbon dioxide is transparent to sunlight, so the light warms the surface, but as infrared radiation is emitted back, it is trapped by the molecules and the atmosphere is warmed in a process known as the greenhouse effect. In the laboratory, the test for carbon dioxide is to bubble it through dilute calcium hydroxide solution (limewater). This results in the formation of insoluble calcium carbonate, which makes the solution turn cloudy.
Actually, the reason that whales are so big is related to the medium they live in. Water provides buoyancy, so the gravitational pull on a large body isn't the same issue it would be on land. Also, life in water means that the animal must find a way to conserve body heat (heat is lost about 20 times faster in water than on land). As you increase body size, the surface-area-to-volume ratio decreases, thereby decreasing the amount of area across which heat can be lost. So a larger body means less heat loss. Whales also have a thick layer of blubber to help minimize heat loss, which also contributes to a larger body size. Most large whales migrate and therefore will encounter colder offshore waters at some point, so thermoregulation is very important.
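The surface-area-to-volume argument is easy to verify with a quick calculation. The sketch below models the body as a sphere, a crude stand-in chosen purely for illustration; the ratio simplifies to 3/r, so it falls as the animal gets larger.

```python
import math

def surface_to_volume(radius):
    """SA:V for a sphere of the given radius; algebraically equal to 3/radius."""
    surface = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return surface / volume
```

Doubling the radius halves the relative surface available for heat loss, which is exactly the scaling advantage described above.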
Evidence of String Theory: Gamma Ray Bursts

Among the various phenomena in the universe, two types produce large amounts of energy and may provide some insight into string theory: gamma ray bursts (GRBs) and cosmic rays. Exactly what causes a gamma ray burst is disputed, but it seems to happen when massive objects, such as a pair of neutron stars or a neutron star and a black hole (the most probable theories), collide with each other. These objects orbit around each other for billions of years, but finally collapse together, releasing energy in the most powerful events observed in the universe, depicted in this figure. The name gamma ray bursts clearly implies that most of this energy leaves the event in the form of gamma rays, but not all of it does. These objects release bursts of light across a range of different energies (or frequencies; energy and frequency of photons are related).

According to Einstein, all the photons from a single burst should arrive at the same time, because light (regardless of frequency or energy) travels at the same speed. By studying GRBs, it may be possible to tell if this is true. Calculations based on Amelino-Camelia's work have shown that photons of different energy that have traveled for billions of years could, due to (estimated and possibly over-optimistic) quantum gravity effects at the Planck scale, have differences of about one-thousandth of a second (0.001 s).

The Fermi Gamma-ray Space Telescope (formerly the Gamma-ray Large Area Space Telescope, or GLAST) was launched in June 2008 as a joint venture between NASA, the U.S. Department of Energy, and French, German, Italian, Japanese, and Swedish government agencies. Fermi is a low-Earth-orbit observatory with the precision required to detect differences this small. So far, there's no evidence that Fermi has identified Planck-scale breakdown of general relativity. To date it has identified a dozen gamma ray-only pulsars, a phenomenon that had never been observed before Fermi.
(Prior to Fermi, pulsars — spinning and highly magnetized neutron stars that emit energy pulses — were believed to emit their energy primarily through radio waves.) If Fermi (or some other means) does detect a Planck scale breakdown of relativity, then that will only increase the need for a successful theory of quantum gravity, because it will be the first experimental evidence that the theory does break down at these scales. String theorists would then be able to incorporate this knowledge into their theories and models, perhaps narrowing the string theory landscape to regions that are more feasible to work with.
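The kind of delay at stake can be sketched with the simplest possible model, in which the delay grows linearly with photon energy divided by the Planck energy. The real quantum-gravity calculations, including Amelino-Camelia's, are far more involved, so every number here is an order-of-magnitude illustration only:

```python
SECONDS_PER_YEAR = 3.156e7
E_PLANCK_GEV = 1.22e19   # Planck energy in GeV

def linear_planck_delay(photon_energy_gev, distance_light_years):
    """Arrival delay relative to a low-energy photon, in seconds,
    under a naive linear energy-suppression model."""
    travel_time_s = distance_light_years * SECONDS_PER_YEAR  # D light years = D years in transit
    return (photon_energy_gev / E_PLANCK_GEV) * travel_time_s
```

Even for photons that have traveled a billion light years, the predicted shifts are tiny fractions of a second, which is why an observatory with Fermi's timing precision is needed to look for them.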
The Molecular Formula of Compounds.

A compound was found to contain 48 g of carbon and 12 g of hydrogen. The relative molecular mass of the compound is 30. What is the molecular formula of the compound?

1) Find how many moles of carbon react with how many moles of hydrogen. RAM of C = 12, RAM of H = 1. moles = mass ÷ RAM. For carbon: moles = 48 ÷ 12 = 4·0 moles. For hydrogen: moles = 12 ÷ 1 = 12·0 moles.

2) The proportion of moles of carbon to moles of hydrogen is reduced to the lowest whole numbers: 4 moles of C to 12 moles of H. Divide both by 4: 1 mole of C to 3 moles of H. The empirical formula is CH3.

3) Divide the relative molecular mass of the compound by the relative molecular mass of the empirical formula. RMM of the compound = 30. RMM of the empirical formula = RMM of CH3 = 12 + (3 x 1) = 15. 30 ÷ 15 = 2, so there are 2 CH3 units in the compound.

The molecular formula of the compound is C2H6. The compound is ethane.
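The worked example follows a recipe that is easy to automate. The sketch below (function and variable names are my own, not from the original page) reproduces the ethane result step by step:

```python
from math import gcd

def empirical_ratio(moles_a, moles_b):
    """Reduce two mole quantities to the smallest whole-number ratio."""
    a, b = round(moles_a), round(moles_b)
    g = gcd(a, b)
    return a // g, b // g

# Step 1: moles of each element = mass / RAM
moles_c = 48 / 12   # carbon   -> 4.0 moles
moles_h = 12 / 1    # hydrogen -> 12.0 moles

# Step 2: simplest whole-number ratio -> empirical formula CH3
c, h = empirical_ratio(moles_c, moles_h)   # (1, 3)

# Step 3: compare RMMs to find how many empirical units fit in the compound
empirical_rmm = 12 * c + 1 * h             # 15
units = 30 // empirical_rmm                # 2, so the molecular formula is C2H6
```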
Be it an individual, a small business or a large corporation, everyone has become highly environment-conscious. Everybody is seeking to promote eco-friendly products and thereby do their bit in spreading environmental consciousness. A large part of going green is achieved through technology that greatly reduces our carbon footprint. Listed below are 3 ideas where technology can be put into daily use to do our bit towards an eco-friendly environment.

Use of the Internet

One of the best resources we have at our disposal is the World Wide Web. The wireless Internet has cut down our need for paper to a great extent. Most of us today read books online. Many even have devices such as the Amazon Kindle to store hundreds of books and files. Many corporate companies have educated their employees about the need to conserve paper and ink, as a lot of energy is used to manufacture them and the process leaves a large carbon footprint. Taking such steps has made professionals aware of the environmental facts, and people are now printing out papers and files only when necessary. Plus, many companies use recycled paper for printing. For a long time people did not use recycled paper for printing because it lacked the brightness and quality of new paper, but technological advances in recycling have made this possible.

Recycling started almost a revolution across the world. Over the years, most of us have come to recycle whatever possible. In fact, there are people thoughtful enough to take the time to separate their wet and dry waste to make sorting easier. While recycling does help and must be encouraged, especially in the younger generations that need to be aware of the effect of garbage and waste products on the environment, the next step for us is to embrace products that are biodegradable. A simple online search can introduce us to a whole world of biodegradable products available in the market today. For example, we can find bamboo ballpoint pens that are completely recycled, natural and biodegradable.

Eco-Friendly Electronic Appliances

Offices as well as homes need to be comfortable for those who spend time there. This translates into having air conditioners, refrigerators, ample lighting, coffee machines, etc. Today, all appliances come with a star rating: the higher the star rating, the more energy-efficient the product. Almost all air-conditioner manufacturers are now striving to produce 5-star-rated products to help consumers save money as well as help the environment. The same applies to other products like refrigerators, microwave ovens and coffee machines. Smart consumers prefer smart technology. Even when it comes to lighting, most people are now shifting to LED or CFL bulbs. Both these options are eco-friendly and use a fraction of the energy needed to light traditional incandescent bulbs.
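The "fraction of the energy" claim for LED bulbs is easy to quantify. The wattages below are typical illustrative figures, not taken from the article (roughly 60 W for an incandescent bulb versus 9 W for an LED of similar brightness):

```python
def annual_kwh(watts, hours_per_day=4, days=365):
    """Energy used per year, in kilowatt-hours."""
    return watts * hours_per_day * days / 1000.0

incandescent = annual_kwh(60)   # 87.6 kWh/year
led = annual_kwh(9)             # 13.14 kWh/year
savings = annual_kwh(60) - annual_kwh(9)   # roughly 74 kWh per bulb per year
```

At four hours of use a day, a single swapped bulb saves on the order of 74 kWh per year, which is why lighting is usually the first appliance category people upgrade.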
Formation Of Invisible Foci ( Originally Published 1905 ) This extraordinary deportment of the elementary gases naturally directed attention to elementary bodies in other states of aggregation. Some of Melloni's results now attained a new significance. This celebrated experimenter had found crystals of sulphur to be highly pervious to radiant heat; he had also proved that lamp-black, and black glass (which owes its blackness to the element carbon), were to a considerable extent transparent to calorific rays of low refrangibility. These facts, harmonizing so strikingly with the deportment of the simple gases, suggested further inquiry. Sulphur dissolved in bisulphide of carbon was found almost perfectly diathermic. The dense and deeply-colored element bromine was examined, and found competent to cut off the light of our most brilliant flames, while it transmitted the invisible calorific rays with extreme freedom. Iodine, the companion element of bromine, was next thought of, but it was found impracticable to examine the substance in its usual solid condition. It, however, dissolves freely in bisulphide of carbon. There is no chemical union between the liquid and the iodine; it is simply a case of solution, in which the uncombined atoms of the element can act upon the radiant heat. When permitted to do so, it was found that a layer of dissolved iodine, sufficiently opaque to cut off the light of the midday sun, was almost absolutely transparent to the invisible calorific rays.' By prismatic analysis Sir William Herschel separated the luminous from the non-luminous rays of the sun, and he also sought to render the obscure rays visible by concentration. Intercepting the luminous portion of his spectrum, he brought, by a converging lens, the ultra-red rays to a focus, but by this condensation he obtained no light. 
The solution of iodine offers a means of filtering the solar beam, or, failing it, the beam of the electric lamp, which renders attainable far more powerful foci of invisible rays than could possibly be obtained by the method of Sir William Herschel. For to form his spectrum he was obliged to operate upon solar light which had passed through a narrow slit or through a small aperture, the amount of the obscure heat being limited by this circumstance. But with our opaque solution we may employ the entire surface of the largest lens, and having thus converged the rays, luminous and non-luminous, we can intercept the former by the iodine, and do what we please with the latter. Experiments of this character, not only with the iodine solution, but also with black glass and layers of lamp-black, were publicly performed at the Royal Institution in the early part of 1862, and the effects at the foci of invisible rays, then obtained, were such as had never been witnessed previously. In the experiments here referred to, glass lenses were employed to concentrate the rays. But glass, though highly transparent to the luminous, is in a high degree opaque to the invisible, heat-rays of the electric lamp, and hence a large portion of those rays was intercepted by the glass. The obvious remedy here is to employ rock-salt lenses instead of glass ones, or to abandon the use of lenses wholly, and to concentrate the rays by a metallic mirror. Both of these improvements have been introduced, and, as anticipated, the invisible foci have been thereby rendered more intense. The mode of operating remains, however, the same, in principle, as that made known in 1862. It was then found that an instant's exposure of the face of the thermo-electric pile to the focus of invisible rays, dashed the needles of a coarse galvanometer violently aside. 
It is now found that on substituting for the face of the thermo-electric pile a combustible body, the invisible rays are competent to set that body on fire.
The jet stream is a narrow river of air high in the atmosphere. At times there can be several types of jet streams. The polar jet brings down cold air. The subtropical jet brings warmth and moisture from the Pacific Ocean. A low level jet can occur at night and help fuel thunderstorms. Upper level air patterns can help determine placement of hot and cold air masses and surface high and low pressure systems. Ed Buckner explains more about Jet Streams in this edition of Weather 101.
Querying and formatting

resource mysql_query ( string query [, resource link_identifier] )
int mysql_num_rows ( [resource result] )

The majority of your interaction with MySQL in PHP will be done using the mysql_query() function, which takes one parameter - the SQL query you want to perform. It will then perform that query and return a special resource known as a MySQL result index - this resource contains all the rows that matched your query. This result index resource is the return value of mysql_query(), and you should save it in a variable for later use - whenever you want to extract rows from the results, count the number of rows, etc., you need to use this value.

One other key function is mysql_num_rows(), which takes a MySQL result index as its parameter and returns the number of rows inside that result - this is the number of rows that matched the query you sent in mysql_query(). With the two together we can write our first database-enabled script:

    mysql_connect("localhost", "phpuser", "alm65z");
    $result = mysql_query("SELECT * FROM usertable");
    $numrows = mysql_num_rows($result);
    print "There are $numrows people in usertable\n";

As you can see, we capture the return value of mysql_query() inside $result, then use it on the very next line - this MySQL result index is used quite heavily, so it is important to keep track of it. The exception to this is when you are executing a write query in MySQL - you might not want to know the result.

One helpful feature of mysql_query() is that it will return false if the query is syntactically invalid - that is, if you have used a bad query. This means that very often it is helpful to check the return value even if you are writing data - if the data was not written successfully, mysql_query() will tell you so with the return value.
Right now I have a graduate student working on a project to understand the effects of stream restoration in altering patterns of groundwater-stream exchange. She’s working in four stream reaches with varying restoration patterns and watershed land uses. In one of her streams, there is a restoration structure she calls “the temple.” I’d walked the lower bit of stream and the upper bit of stream, but somehow I’d managed to miss this feature that is having remarkable effects on the transient storage dynamics of the stream. This week, I rectified my omission and visited the temple. What the photo above doesn’t fully capture is how large a volume of water is stored behind each of the rock step structures. This picture was taken as water was receding after a pretty high flow event (notice all the debris trapped at the top of one step), but at lower flows there is little or no water going over the top of the steps. Instead, all of the water goes under or around the rock structures and pools of water more than 1 m deep occur between each of the structures. Such big pools at such low flow volumes could have a dramatic effect on things like stream temperature and nutrient dynamics. This structure is all the more remarkable because it occurs at an interesting geomorphic transition in the stream. Upstream of the temple, the stream is restored using typical features like cross-vanes and riffle/pool features. It is low gradient and not confined in a valley. Immediately downstream of the temple, there is a big pool and then a long reach floored with bedrock (obviously not restored). Downstream of the bedrock reach, the stream crosses into the floodplain of a larger watershed and has lots of fine grained alluvium to contend with. Thus, it appears that “the temple” restoration feature is placed at an important geomorphic transition in this stream. It’s in the place where the stream briefly enters a more confined valley and it’s the steepest part of the stream. 
In other Piedmont streams, I’ve seen bedrock cascades in some places like this, but the stream restoration designers wouldn’t have covered over a feature like that. Instead, maybe there was a knickpoint retreating through soil, saprolite or colluvium that could cause a lot of potential instability in the reach above. The “temple” feature then would be a way of preventing further knickpoint retreat by creating a short high-gradient section. I really wish I’d seen this area before the stream was restored. Instead, I marvel at the highly engineered form of the stream as it passes through the temple, and look forward to seeing what my grad student finds out about the effects of this structure on the transient storage and water quality in the stream.
New Scientist: Scientists have determined that two high-energy neutrinos detected by the South Pole IceCube Neutrino Observatory originated in outer space. Since the discovery of “Bert and Ernie” last year, the IceCube collaboration has been reexamining the data gathered from May 2010 to May 2012. So far they have found 26 more neutrinos of about 50 TeV each. Because that’s twice the expected number of atmospheric neutrinos, which are produced by cosmic rays hitting Earth’s atmosphere, about half must be coming from outside the solar system, according to IceCube team member Thomas Gaisser of the University of Delaware in Newark. Another indication that the neutrinos traveled a great distance is their distribution: Neutrinos are created with a well-defined flavor—either electron, muon, or tau—but can oscillate among the three flavors as they travel through space. The fact that the three types were equally represented indicates that they came a long way. As neutrinos only weakly interact with other matter, they may be used to observe phenomena that optical telescopes cannot, such as the sources of cosmic rays, dark matter, black holes, and stellar explosions. New Scientist: Although quantum cryptography has been touted as a method of secure communication, it may be susceptible to eavesdropping, according to a paper published in Physical Review Letters. Quantum cryptographic techniques rely on a fundamental principle of quantum mechanics—namely, that the act of measuring quantum data disturbs the data. Therefore, any attempt by a hacker to intercept a message compromises the transmission. However, even the best systems will always have some margin of error. Now a quantum cloner has been developed that can create copies of a quantum-encrypted message’s photons that, although not perfect, are good enough to keep the transmission error rate relatively low. Only by closely monitoring the rate of error can the counterfeits be detected. 
BBC: Researchers at IBM invented the scanning tunneling microscope (STM), for which they won the Nobel Prize in Physics in 1986. Advances in the technology, which allows for the imaging and manipulation of individual atoms, are being applied to data storage. The microscope moves an electrically charged, very small-tipped needle over a surface and maps the location of atoms when the tip is close enough that the charge tunnels to the atom. By moving the tip even closer to the atom, the microscope can be used to push the atom to a new location. To demonstrate the capabilities that the technology has achieved, a team led by Andreas Heinrich of IBM Research in Almaden, California, worked 18-hour days for two weeks to create a short, stop-motion movie. Called A Boy and His Atom, the 242-frame, 90-second-long video shows a stick figure playing with a ball. The series of still images is created from dozens of carbon atoms arranged on a sheet of copper, to which the carbon atoms bond. Between frames, the researchers use an STM to carefully move the atoms around, which then rebond to the copper sheet and are imaged in their new positions. Heinrich admits that the movie isn’t about any particular breakthrough, but is more about getting people interested and excited about technology. Ars Technica: Two neutrino-induced events with energy greater than 1 PeV have been reported by researchers at the IceCube Neutrino Observatory located at the South Pole. The observation is significant because it is likely the neutrinos originated in some high-energy event distant from Earth. Trillions of neutrinos, which emanate from a number of sources such as nuclear reactions in the Sun, pass through Earth every second. But they are extremely difficult to detect because they almost never interact with normal matter. 
Embedded in Antarctic ice, IceCube’s strings of photodetectors watch for the telltale emission of Cherenkov radiation when a neutrino passing through happens to collide with an atom in the ice. Most of the high-energy neutrinos detected by IceCube have come from cosmic rays colliding with atoms in Earth’s atmosphere. Because of the extremely high energy of the newly detected neutrinos, however, researchers believe they may be the first indication of an astrophysical high-energy neutrino flux—an extremely energetic event that occurs far out in the universe. Longer sampling times and more data will be required to verify that finding. Nature: Entangling quantum bits (qubits) at a distance has been done before, but most such demonstrations have used materials or systems that are not easily scalable. Ronald Hanson of Delft University of Technology in the Netherlands and his colleagues have now demonstrated the ability to entangle qubits in diamond crystals 3 m apart. Qubits, which are the basis for quantum computing, allow more than just a single bit of data to be encoded at one time. Entangling qubits over a distance may allow for the development of quantum communication systems with extreme levels of encryption and significantly faster transmission of information. The system demonstrated by Hanson and his colleagues is not very efficient, achieving entanglement only one time in every 10 million attempts (or about once every 10 minutes), and requires extremely low temperatures. However, once entangled, the qubits can be stored in the diamonds at room temperatures. BBC: Speculation surrounding this year’s Nobel Prize in Physics has rekindled the debate concerning the naming of the Higgs boson. Because key contributions were made by at least six people—Robert Brout (who died in 2011), François Englert, Gerald Guralnik, Carl Hagen, Peter Higgs, and Tom Kibble—many in the community object to the particle’s being named for just one of them. 
Yet naming it after all six would be unwieldy. Because the Higgs theory may be the focus of this year’s Nobel, and a maximum of three individuals can share the prize, the controversy over what to call the new particle is heating up. MIT Technology Review: Electromagnetic waves were first trapped in the 1990s. The complex setups for doing so involve ultracold atomic gases, such as cesium and rubidium, and systems of lasers that take advantage of electromagnetically induced transparency. Now Toshihiro Nakanishi of Kyoto University in Japan and his colleagues have demonstrated a similar effect in a metamaterial made of repeating units of two variable capacitors. When both capacitors are set to the same frequency, incoming electromagnetic waves of that frequency are absorbed and trapped. Detuning the capacitors releases the waves and maintains the phase distribution of the absorbed waves. The team’s metamaterial is a three-layer deep proof-of-concept device that they successfully tested with microwaves. They believe that further work could produce a material that could trap optical frequencies or that could release waves of arbitrary shape and polarization. Those capabilities could be useful in information storage and quantum optics. BBC: A collaboration of researchers in the UK has been able to create a fully synthetic vaccine for foot-and-mouth disease, a serious and contagious condition that afflicts cloven-hoofed animals. The researchers used x rays generated by the Diamond Light Source synchrotron to obtain a highly detailed, atomic-level understanding of the protein shell of the foot-and-mouth virus, a member of the picornavirus family. With that knowledge, they were able to construct a synthetic version of the virus consisting of an empty shell and lacking any of the internal RNA that makes viruses dangerous. 
Because the resulting vaccine has no live virus, there is no risk of infection, and animals given the vaccine can be easily distinguished from those that are infected. The researchers also reinforced the synthetic virus’s shell, which makes the vaccine stable for several hours, even at high temperatures. The vaccine is therefore very useful in places like southeast Asia, where foot-and-mouth disease is endemic. Also in the picornavirus family is polio, which has not yet been completely eradicated. The current polio vaccine uses a live virus and so carries the risk of potentially reestablishing itself. If the technique used to create the foot-and-mouth vaccine can also be used for polio and other similar viruses, such risks can be mitigated. Discovery News: It has been 25 years since the Department of Energy closed the Savannah River Site, the last source of non-weapons-grade plutonium-238 in the US. Since then, the US has been using up its stockpiles and obtaining plutonium through trade with Russia. The deal with Russia ended in 2010, however. Beginning in the 1970s, NASA used plutonium-238 to supplement solar panels as a power source in spacecraft, including probes such as Voyager 1 and 2, Galileo, and Cassini, and the Martian landers Viking and Curiosity. After the end of the trade deal with Russia, DOE and NASA began working to redevelop a plutonium production system at Oak Ridge National Laboratory. The agencies believe that the process of irradiating neptunium in the reactors at the laboratory will produce roughly 1.5 kg of plutonium-238 every year. The newly produced plutonium will then be mixed with the remaining stockpile, which will restore the older plutonium to a usable property density. So for every 1 kg of new production, 2 kg of the stockpile will be revived. NASA already intends to use plutonium in its next Martian rover, planned for launch in 2020. Science News: The origin of neutron-rich heavy elements in the universe remains a mystery. 
One possible source could be neutron stars, whose high-pressure, high-gravity interiors could stabilize atoms that could not form otherwise. Collisions between neutron stars would then disperse the atoms into space. Because neutron stars are too far away to study, scientists are trying to determine their composition via computer simulations using the properties of exotic isotopes created in particle accelerators. One such isotope, thought to exist in the crust of neutron stars, is zinc-82. Using a facility at CERN, Robert Wolf of the University of Greifswald in Germany and colleagues were able to isolate a pure sample of zinc-82 and determine its mass. By comparing the mass with predictions from computer modeling, the researchers were able to rule out zinc-82 as a constituent of neutron stars. Despite the negative result, the technique shows potential for “pin[ning] down the characteristics of other exotic nuclei that may exist in neutron stars,” writes Andrew Grant for Science News.