Provided by Annie Reisewitz and Jessica Carilli, Scripps Institution of Oceanography, University of California, San Diego
Shared by Mexico, Belize, Guatemala, and Honduras, the Mesoamerican Reef spans over 1,000 km, making it the largest continuous reef in the Western Hemisphere. Portions of the Mesoamerican Reef are World Heritage sites, and more than two million people in four countries benefit from the ecosystem services the reef provides, which include productive fishing grounds and the attraction of millions of tourists.[1][2]
However, agricultural runoff from more than 300,000 hectares of cropland in the region is a prime threat to the reef's health.[3] Other local stresses to the reef include coastal development and overfishing. Two recent scientific studies have shown that these local stresses negatively impact the reef's ability to recover from climate-related threats, such as coral bleaching.
In 1998, a mass coral bleaching caused significant coral death on the Mesoamerican Reef. A study conducted in Belize and Honduras showed that in areas with clean waters and healthy reefs, mountainous star corals (Montastraea faveolata) were able to recover and grow normally within two to three years after the bleaching. In comparison, corals living with excessive human pressures, such as pollution, coastal development, and runoff, had not recovered even eight years after the event.[4]
The fast-recovering corals were located far offshore, at Turneffe Atoll and Cayos Cochinos. The corals that took longer to recover were located in areas with significant land-based runoff and heavily populated and developed coastlines – the Sapodilla Cayes in southern Belize and reefs near Utila Island in Honduras, respectively.
A related study compared a century-long record of thermal stress on the Mesoamerican Reef with a century-long record of bleaching events. Surprisingly, although temperatures in the region were much warmer in the 1950s than in recent years, no bleaching occurred during that decade—suggesting that recent mass bleaching events appear to stem from the coupling of mild warming and local stress.[5]
These findings suggest that by protecting reefs from local threats, bleached corals will be able to bounce back to normal growth rates more quickly after natural disturbances—and that protecting reefs from local threats will foster coral resilience in the face of rising sea temperatures.
ICRAN. ICRAN-MAR Project: Terminal Report. (Mesoamerican Reef Alliance, 2007). ↩
Burke, L. & Sugg, Z. Hydrologic Modeling of Watersheds Discharging Adjacent to the Mesoamerican Reef. (World Resources Institute, Washington, DC, 2006). ↩
Carilli, J., Norris, R., Black, B., Walsh, S. & McField, M. Local stressors reduce coral resilience to bleaching. PLoS One 4, e6324 (2009). ↩
Carilli, J., Norris, R. D., Black, B., Walsh, S. & McField, M. Century-scale records of coral growth rates indicate that local stressors reduce coral thermal tolerance threshold. Global Change Biology 10, 1365-2486 (2009). ↩
I’ve been reading about Sedna for the past 45 minutes. It’s a dwarf planet that we know exists in our solar system as a trans-Neptunian object, but it sort of defies our understanding of how solar systems develop and function.
In 2076 it will reach its closest point to the sun along its 11,400-year orbit: about 76 AU (1 AU is the average distance between the Earth and the Sun). At 76 AU, it's still about 2.5 times further away than Neptune's most distant point. At Sedna's furthest point, it will be about 937 AU away.
To put this in perspective, Voyager 1 is the farthest man-made object from Earth. It's reached the heliopause - the boundary at which the sun's solar wind is able to push back against the stellar winds of the surrounding stars. This means that for all intents and purposes, it's just about to pass into interstellar space. Right now, light itself takes about 16.5 hours to travel between Earth and Voyager 1. It's about 125 AU out. Sedna's orbit takes it seven and a half times further away from the sun.
Think about that: at its furthest distance out - well into interstellar space - the light from the star it orbits takes more than five full days to reach Sedna’s surface. Sedna actually only spends a relatively small amount of its orbit within what some scientists consider to be the de facto solar system, yet gravity still pulls it back around to make another orbit.
This is because the gravitational influence produced by the sun dominates the gravitational influence of surrounding stars to about 125,000 AU, or about two light-years out.
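The light-travel arithmetic above is easy to sanity-check. Here's a quick Python sketch, using the standard values for the astronomical unit and the speed of light (the function name is mine):

```python
# Rough light-travel-time arithmetic for the distances in this post.
AU_KM = 149_597_870.7        # 1 astronomical unit in km
C_KM_S = 299_792.458         # speed of light in km/s

def light_time_hours(au):
    """Hours for light to cross the given distance in AU."""
    return au * AU_KM / C_KM_S / 3600

# Voyager 1 at ~125 AU: a bit over 17 hours (the 16.5 h figure in the
# post corresponds to the probe's distance at the time of writing).
print(round(light_time_hours(125), 1))

# Sedna at its furthest, ~937 AU: more than five full days.
print(round(light_time_hours(937) / 24, 2))
```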
That’s just fascinating.
iOS applications use Cocoa classes, and these classes are written in the Objective-C programming language, so you must know Objective-C if you wish to develop iOS apps. If you are shifting from a looser, non-strict programming language, Objective-C's syntax might seem strange and difficult. Don't worry: this strangeness and difficulty will give way to an elegant programming experience, and I am sure everyone will come to appreciate this meticulous language on the way to becoming an MVC methodologist.
Class @interface and @implementation
iOS separates a class into an @interface and an @implementation. The interface declares instance variables, properties, and methods; it lives in a standard C header (.h) file and doesn't provide any method definitions. The implementation, however, contains the method definitions for the class, and this is the place where you synthesize the properties. It lives in a file with a .m extension.
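As a sketch of how this split looks on disk (the class name, instance variable, and method here are invented for illustration), a minimal class might be declared and defined like this:

```objc
// Person.h — the @interface: declarations only, no method bodies.
#import <Foundation/Foundation.h>

@interface Person : NSObject {
    NSString *name;   // instance variable
}
@property (nonatomic, copy) NSString *name;
- (void)greet;        // method declaration, defined in Person.m
@end

// Person.m — the @implementation: method definitions live here.
#import "Person.h"

@implementation Person
@synthesize name;     // synthesize the declared property

- (void)greet {
    NSLog(@"Hello, %@", name);
}
@end
```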
Creating a project with Objective-C Class file
We will create an example project using an Objective-C class and an IBAction. First, create a View-Based Application project and name it ClassAndIBAction. After creating the project, single-click the ClassAndIBAction folder and create a new Objective-C Class file. Go to File > New > New File, or create the file by pressing Command + N (⌘ + N) …
A fundamental problem studies the number of ways $n$ can be written as a sum of positive integers, that is, the number of solutions of

$$n = a_1 + a_2 + \cdots, \qquad a_1 \ge a_2 \ge \cdots \ge 1.$$
The number of summands is unrestricted, repetition is allowed, and the order of the summands is not taken into account. The corresponding unrestricted partition function is denoted by $p(n)$, and the summands are called parts; see §26.9(i). For example, $p(5) = 7$ because there are exactly seven partitions of 5: $5 = 4+1 = 3+2 = 3+1+1 = 2+2+1 = 2+1+1+1 = 1+1+1+1+1$.
The number of partitions of $n$ into at most $k$ parts is denoted by $p_k(n)$; again see §26.9(i).
Euler introduced the reciprocal of the infinite product

$$f(x) = \prod_{m=1}^{\infty} (1 - x^m), \qquad |x| < 1,$$

as a generating function for the function $p(n)$ defined in §27.14(i):

$$\frac{1}{f(x)} = \sum_{n=0}^{\infty} p(n)\, x^n,$$

with $p(0) = 1$. Euler's pentagonal number theorem states that

$$f(x) = 1 + \sum_{k=1}^{\infty} (-1)^{k} \left( x^{\omega(k)} + x^{\omega(-k)} \right),$$

where the exponents 1, 2, 5, 7, 12, 15, … are the pentagonal numbers, defined by

$$\omega(\pm k) = \tfrac{1}{2}\left(3k^{2} \mp k\right), \qquad k = 1, 2, 3, \dots.$$
Multiplying the power series for $f(x)$ with that for $1/f(x)$ and equating coefficients, we obtain the recursion formula

$$p(n) = \sum_{k=1}^{\infty} (-1)^{k+1} \left( p\!\left(n - \omega(k)\right) + p\!\left(n - \omega(-k)\right) \right),$$

where $p(n)$ is defined to be 0 if $n < 0$. Logarithmic differentiation of the generating function $1/f(x)$ leads to another recursion:

$$n\, p(n) = \sum_{k=1}^{n} \sigma_1(k)\, p(n-k),$$

where $\sigma_1(k)$ is defined by (27.2.10) with $f(n) = n$.
These recursions can be used to calculate $p(n)$, which grows very rapidly. For example, $p(100) = 190{,}569{,}292$ and $p(200) = 3{,}972{,}999{,}029{,}388$. For large $n$,

$$p(n) \sim \frac{e^{K\sqrt{n}}}{4n\sqrt{3}}, \qquad K = \pi\sqrt{\tfrac{2}{3}},$$

and Rademacher's convergent series for $p(n)$ involves the sums

$$A_k(n) = \sum_{\substack{h=1 \\ (h,k)=1}}^{k} \exp\!\left( \pi i\, s(h,k) - \frac{2\pi i n h}{k} \right),$$

where $s(h,k)$ is a Dedekind sum given by

$$s(h,k) = \sum_{r=1}^{k-1} \frac{r}{k} \left( \frac{hr}{k} - \left\lfloor \frac{hr}{k} \right\rfloor - \frac{1}{2} \right).$$
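The pentagonal-number recursion is straightforward to turn into code. The following Python sketch (function name mine) computes $p(n)$ exactly and reproduces the values quoted above:

```python
def partitions(n):
    """Compute p(0..n) via Euler's pentagonal-number recursion."""
    p = [0] * (n + 1)
    p[0] = 1
    for m in range(1, n + 1):
        total, k = 0, 1
        while True:
            # Generalized pentagonal numbers w(k) and w(-k).
            w1 = k * (3 * k - 1) // 2
            w2 = k * (3 * k + 1) // 2
            if w1 > m:
                break
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[m - w1]
            if w2 <= m:
                total += sign * p[m - w2]
            k += 1
        p[m] = total
    return p

p = partitions(200)
print(p[5])     # 7  (the seven partitions of 5)
print(p[100])   # 190569292
print(p[200])   # 3972999029388
```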
Dedekind sums occur in the transformation theory of the Dedekind modular function $\eta(\tau)$, defined by

$$\eta(\tau) = e^{\pi i \tau / 12} \prod_{n=1}^{\infty} \left( 1 - e^{2\pi i n \tau} \right), \qquad \Im\tau > 0.$$

This is related to the function $f(x)$ in (27.14.2) by

$$\eta(\tau) = e^{\pi i \tau / 12}\, f\!\left( e^{2\pi i \tau} \right).$$

$\eta(\tau)$ satisfies the following functional equation: if $a, b, c, d$ are integers with $ad - bc = 1$ and $c > 0$, then

$$\eta\!\left( \frac{a\tau + b}{c\tau + d} \right) = \varepsilon \left( -i(c\tau + d) \right)^{1/2} \eta(\tau),$$

where $\varepsilon = \exp\!\left( \pi i \left( \frac{a + d}{12c} - s(d, c) \right) \right)$ and $s(d, c)$ is given by (27.14.11).
Ramanujan (1921) gives identities that imply divisibility properties of the partition function. For example, the Ramanujan identity

$$5\, \frac{\left( f(x^{5}) \right)^{5}}{\left( f(x) \right)^{6}} = \sum_{n=0}^{\infty} p(5n+4)\, x^{n}$$

implies $p(5n+4) \equiv 0 \pmod{5}$. Ramanujan also found that $p(7n+5) \equiv 0 \pmod{7}$ and $p(11n+6) \equiv 0 \pmod{11}$ for all $n$. After decades of nearly fruitless searching for further congruences of this type, it was believed that no others existed, until it was shown in Ono (2000) that there are infinitely many. Ono proved that for every prime $q \ge 5$ there are integers $a$ and $b$ such that $p(an + b) \equiv 0 \pmod{q}$ for all $n$. For example, $p(157525693\, n + 111247) \equiv 0 \pmod{13}$.
The discriminant function $\Delta(\tau)$ is defined by

$$\Delta(\tau) = (2\pi)^{12} \left( \eta(\tau) \right)^{24}, \qquad \Im\tau > 0,$$

and satisfies the functional equation

$$\Delta\!\left( \frac{a\tau + b}{c\tau + d} \right) = (c\tau + d)^{12}\, \Delta(\tau)$$

if $a, b, c, d$ are integers with $ad - bc = 1$ and $c > 0$.
The 24th power of $\eta(\tau)$ in (27.14.12) with $e^{2\pi i \tau} = x$ is an infinite product that generates a power series in $x$ with integer coefficients called Ramanujan's tau function $\tau(n)$:

$$x \prod_{n=1}^{\infty} (1 - x^{n})^{24} = \sum_{n=1}^{\infty} \tau(n)\, x^{n}, \qquad |x| < 1.$$

The tau function is multiplicative and satisfies the more general relation:

$$\tau(m)\, \tau(n) = \sum_{d \mid (m,n)} d^{11}\, \tau\!\left( \frac{mn}{d^{2}} \right), \qquad m, n \ge 1.$$

Lehmer (1947) conjectures that $\tau(n)$ is never 0 and verifies this for all $n < 214{,}928{,}639{,}999$ by studying various congruences satisfied by $\tau(n)$, for example:

$$\tau(n) \equiv \sigma_{11}(n) \pmod{691}.$$
Source code: Lib/tokenize.py
The tokenize module provides a lexical scanner for Python source code, implemented in Python. The scanner in this module returns comments as tokens as well, making it useful for implementing “pretty-printers,” including colorizers for on-screen displays.
To simplify token stream handling, all Operators and Delimiters tokens are returned using the generic token.OP token type. The exact type can be determined by checking the exact_type property on the named tuple returned from tokenize.tokenize().
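For example (a minimal sketch), an operator token comes back with the generic OP type, while the specific kind is available via exact_type:

```python
import io
import token
import tokenize

source = b"a + b\n"
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))
plus = next(t for t in tokens if t.string == "+")

print(token.tok_name[plus.type])        # the generic type: OP
print(token.tok_name[plus.exact_type])  # the exact type: PLUS
```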
The primary entry point is a generator:

tokenize.tokenize(readline)
The tokenize() generator requires one argument, readline, which must be a callable object which provides the same interface as the io.IOBase.readline() method of file objects. Each call to the function should return one line of input as bytes.
The generator produces 5-tuples with these members: the token type; the token string; a 2-tuple (srow, scol) of ints specifying the row and column where the token begins in the source; a 2-tuple (erow, ecol) of ints specifying the row and column where the token ends in the source; and the line on which the token was found. The line passed (the last tuple item) is the logical line; continuation lines are included. The 5-tuple is returned as a named tuple with the field names type, string, start, end, and line.
Changed in version 3.1: Added support for named tuples.
Changed in version 3.3: Added support for exact_type.
tokenize.COMMENT
Token value used to indicate a comment.
tokenize.NL
Token value used to indicate a non-terminating newline. The NEWLINE token indicates the end of a logical line of Python code; NL tokens are generated when a logical line of code is continued over multiple physical lines.
tokenize.ENCODING
Token value that indicates the encoding used to decode the source bytes into text. The first token returned by tokenize() will always be an ENCODING token.
Another function is provided to reverse the tokenization process. This is useful for creating tools that tokenize a script, modify the token stream, and write back the modified script.
tokenize.untokenize(iterable)
Converts tokens back into Python source code. The iterable must return sequences with at least two elements, the token type and the token string. Any additional sequence elements are ignored.
The reconstructed script is returned as a single string. The result is guaranteed to tokenize back to match the input so that the conversion is lossless and round-trips are assured. The guarantee applies only to the token type and token string as the spacing between tokens (column positions) may change.
It returns bytes, encoded using the ENCODING token, which is the first token sequence output by tokenize().
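A minimal round trip looks like this: when the full 5-tuples from tokenize() are fed back in, the reconstructed bytes tokenize to the same token stream:

```python
import io
import tokenize

source = b"x = 1 + 2\n"
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

# untokenize() returns bytes, encoded per the leading ENCODING token.
rebuilt = tokenize.untokenize(tokens)

# Round trip: re-tokenizing yields the same (type, string) sequence.
again = list(tokenize.tokenize(io.BytesIO(rebuilt).readline))
print([(t.type, t.string) for t in tokens] ==
      [(t.type, t.string) for t in again])
```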
tokenize() needs to detect the encoding of source files it tokenizes. The function it uses to do this is available:

tokenize.detect_encoding(readline)
It will call readline a maximum of twice, and return the encoding used (as a string) and a list of any lines (not decoded from bytes) it has read in.
It detects the encoding from the presence of a UTF-8 BOM or an encoding cookie as specified in PEP 263. If both a BOM and a cookie are present, but disagree, a SyntaxError will be raised. Note that if the BOM is found, 'utf-8-sig' will be returned as an encoding.
If no encoding is specified, then the default of 'utf-8' will be returned.
New in version 3.3.
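For instance (a small sketch), a PEP 263 coding cookie is picked up from the first line, and the lines already consumed are returned undecoded:

```python
import io
import tokenize

# Source with an explicit encoding cookie on the first line.
cookie_src = b"# -*- coding: latin-1 -*-\nx = 1\n"
enc, lines = tokenize.detect_encoding(io.BytesIO(cookie_src).readline)
print(enc)    # the normalized name for latin-1: iso-8859-1
print(lines)  # only the cookie line was read, still as bytes

# Without a cookie or BOM, the default applies.
enc, lines = tokenize.detect_encoding(io.BytesIO(b"x = 1\n").readline)
print(enc)    # utf-8
```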
The tokenize module can be executed as a script from the command line. It is as simple as:
python -m tokenize [-e] [filename.py]
The following options are accepted:

-h, --help
show this help message and exit

-e, --exact
display token names using the exact type
If filename.py is specified its contents are tokenized to stdout. Otherwise, tokenization is performed on stdin.
Example of a script rewriter that transforms float literals into Decimal objects:
from tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP
from io import BytesIO

def decistmt(s):
    """Substitute Decimals for floats in a string of statements.

    >>> from decimal import Decimal
    >>> s = 'print(+21.3e-5*-.1234/81.7)'
    >>> decistmt(s)
    "print (+Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7'))"

    The format of the exponent is inherited from the platform C library.
    Known cases are "e-007" (Windows) and "e-07" (not Windows).  Since
    we're only showing 12 digits, and the 13th isn't close to 5, the
    rest of the output should be platform-independent.

    >>> exec(s)  #doctest: +ELLIPSIS
    -3.21716034272e-0...7

    Output from calculations with Decimal should be identical across all
    platforms.

    >>> exec(decistmt(s))
    -3.217160342717258261933904529E-7
    """
    result = []
    g = tokenize(BytesIO(s.encode('utf-8')).readline)  # tokenize the string
    for toknum, tokval, _, _, _ in g:
        if toknum == NUMBER and '.' in tokval:  # replace NUMBER tokens
            result.extend([
                (NAME, 'Decimal'),
                (OP, '('),
                (STRING, repr(tokval)),
                (OP, ')')
            ])
        else:
            result.append((toknum, tokval))
    return untokenize(result).decode('utf-8')
Example of tokenizing from the command line. The script:
def say_hello():
    print("Hello, World!")

say_hello()
will be tokenized to the following output where the first column is the range of the line/column coordinates where the token is found, the second column is the name of the token, and the final column is the value of the token (if any)
$ python -m tokenize hello.py
0,0-0,0:            ENCODING       'utf-8'
1,0-1,3:            NAME           'def'
1,4-1,13:           NAME           'say_hello'
1,13-1,14:          OP             '('
1,14-1,15:          OP             ')'
1,15-1,16:          OP             ':'
1,16-1,17:          NEWLINE        '\n'
2,0-2,4:            INDENT         '    '
2,4-2,9:            NAME           'print'
2,9-2,10:           OP             '('
2,10-2,25:          STRING         '"Hello, World!"'
2,25-2,26:          OP             ')'
2,26-2,27:          NEWLINE        '\n'
3,0-3,1:            NL             '\n'
4,0-4,0:            DEDENT         ''
4,0-4,9:            NAME           'say_hello'
4,9-4,10:           OP             '('
4,10-4,11:          OP             ')'
4,11-4,12:          NEWLINE        '\n'
5,0-5,0:            ENDMARKER      ''
The exact token type names can be displayed using the -e option:
$ python -m tokenize -e hello.py
0,0-0,0:            ENCODING       'utf-8'
1,0-1,3:            NAME           'def'
1,4-1,13:           NAME           'say_hello'
1,13-1,14:          LPAR           '('
1,14-1,15:          RPAR           ')'
1,15-1,16:          COLON          ':'
1,16-1,17:          NEWLINE        '\n'
2,0-2,4:            INDENT         '    '
2,4-2,9:            NAME           'print'
2,9-2,10:           LPAR           '('
2,10-2,25:          STRING         '"Hello, World!"'
2,25-2,26:          RPAR           ')'
2,26-2,27:          NEWLINE        '\n'
3,0-3,1:            NL             '\n'
4,0-4,0:            DEDENT         ''
4,0-4,9:            NAME           'say_hello'
4,9-4,10:           LPAR           '('
4,10-4,11:          RPAR           ')'
4,11-4,12:          NEWLINE        '\n'
5,0-5,0:            ENDMARKER      ''
International Code of Nomenclature of Bacteria
The International Code of Nomenclature of Bacteria (ICNB) or Bacteriological Code (BC) governs the scientific names for bacteria, including Archaea. It sets out the rules for naming taxa of bacteria according to their relative rank. As such it is one of the Nomenclature Codes of biology.
Originally the International Code of Botanical Nomenclature dealt with bacteria, and this kept references to bacteria until these were eliminated at the 1975 IBC. An early Code for the nomenclature of Bacteria was approved at the 4th International Congress for Microbiology in 1947, but was later discarded.
The latest version to be printed in book form is the 1990 Revision, but the book does not represent the current rules, as the Code has been amended since (these changes have been published in the International Journal of Systematic and Evolutionary Microbiology (IJSEM)). The rules are maintained by the International Committee on Systematics of Prokaryotes (ICSP; formerly the International Committee on Systematic Bacteriology, ICSB).
The base-line for bacterial names is the Approved Lists, with a starting point of 1980. New bacterial names are reviewed by the ICSP for conformity with the Rules of Nomenclature and published in the IJSEM.
Multifocal plane microscopy
Multifocal plane microscopy (MUM) or Multiplane microscopy or Biplane microscopy is a form of light microscopy that allows the tracking of the 3D dynamics in live cells at high temporal and spatial resolution by simultaneously imaging different focal planes within the specimen. In this methodology, the light collected from the sample by an infinity-corrected objective lens is split into two paths. In each path the split light is focused onto a detector which is placed at a specific calibrated distance from the tube lens. In this way, each detector images a distinct plane within the sample. The first developed MUM setup was capable of imaging two distinct planes within the sample. However, the setup can be modified to image more than two planes by further splitting the light in each light path and focusing it onto detectors placed at specific calibrated distances. Presently, MUM setups are implemented that can image up to four distinct planes.
Fluorescence microscopy of live cells represents a major tool in the study of trafficking events. The conventional microscope design is well adapted to image fast cellular dynamics in two dimensions, i.e., in the plane of focus. However, cells are three dimensional objects and intracellular trafficking pathways are typically not constrained to one focal plane. If the dynamics are not constrained to one focal plane, the conventional single plane microscopy technology is inadequate for detailed studies of fast intracellular dynamics in three dimensions. Classical approaches based on changing the focal plane are often not effective in such situations since the focusing devices are relatively slow in comparison to many of the intracellular dynamics. In addition, the focal plane may frequently be at the wrong place at the wrong time, thereby missing important aspects of the dynamic events.
MUM can be implemented in any standard light microscope. An example implementation in a Zeiss microscope is as follows. A Zeiss dual-video adaptor is first attached to the side port of a Zeiss Axiovert 200 microscope. Two Zeiss dual-video adaptors are then concatenated by attaching each of them to the output ports of the first Zeiss video adaptor. To each of the concatenated video adaptor, a high resolution CCD camera is attached by using C-mount/spacer rings and a custom-machined camera coupling adaptor. The spacing between the output port of the video adaptor and the camera is different for each camera, which results in the cameras imaging distinct focal planes.
It is worth mentioning that there are many ways to implement MUM. The mentioned implementation offers several advantages such as flexibility, ease of installation and maintenance, and adjustability for different configurations. Additionally, for a number of applications it is important to be able to acquire images in different colors at different exposure times. For example, to visualize exocytosis in TIRFM, very fast acquisition is necessary. However, to image a fluorescently labeled stationary organelle in the cell, low excitation is necessary to avoid photobleaching and as a result the acquisition has to be relatively slow. In this regard, the above implementation offers great flexibility, since different cameras can be used to acquire images in different channels.
3D super-resolution imaging and single molecule tracking using MUM
Modern microscopy techniques have generated significant interest in studying cellular processes at the single molecule level. Single molecule experiments overcome averaging effects and therefore provide information that is not accessible using conventional bulk studies. However, the 3D localization and tracking of single molecules poses several challenges. In addition to whether or not images of the single molecule can be captured while it undergoes potentially highly complex 3D dynamics, the question arises whether or not the 3D location of the single molecule can be determined and how accurately this can be done.
A major obstacle to high accuracy 3D location estimation is the poor depth discrimination of a standard microscope. Even with a high numerical aperture objective, the image of a point source in a conventional microscope does not change appreciably if the point source is moved several hundred nanometers from its focus position. This makes it extraordinarily difficult to determine the axial, i.e., z position, of the point source with a conventional microscope.
More generally, quantitative single molecule microscopy for 3D samples poses the identical problem whether the application is localization/tracking or super-resolution microscopy such as PALM, STORM, FPALM, dSTORM for 3D applications, i.e. the determination of the location of a single molecule in three dimensions. MUM offers several advantages. In MUM, images of the point source are simultaneously acquired at different focus levels. These images give additional information that can be used to constrain the z position of the point source. This constraining information largely overcomes the depth discrimination problem near the focus.
The 3D localization measure provides a quantitative measure of how accurately the location of the point source can be determined. A small numerical value of the 3D localization measure implies very high accuracy in determining the location, while a large numerical value of the 3D localization measure implies very poor accuracy in determining the location. For a conventional microscope when the point source is close to the plane of focus, e.g., z0 <= 250 nm, the 3D localization measure predicts very poor accuracy in estimating the z position. Thus, in a conventional microscope, it is problematic to carry out 3D tracking when the point source is close to the plane of focus.
On the other hand, for a two plane MUM setup the 3D localization measure predicts consistently better accuracy than a conventional microscope for a range of z-values, especially when the point source is close to the plane of focus. An immediate implication of this result is that the z-location of the point source can be determined with relatively the same level of accuracy for a range of z-values, which is favorable for 3D single particle tracking.
Dual objective multifocal plane microscopy (dMUM)
In single particle imaging applications, the number of photons detected from the fluorescent label plays a crucial role in the quantitative analysis of the acquired data. Currently, particle tracking experiments are typically carried out on either an inverted or an upright microscope, in which a single objective lens illuminates the sample and also collects the fluorescence signal from it. Note that although fluorescence emission from the sample occurs in all directions (i.e., above and below the sample), the use of a single objective lens in these microscope configurations results in collecting light from only one side of the sample. Even if a high numerical aperture objective lens is used, not all photons emitted at one side of the sample can be collected due to the finite collection angle of the objective lens. Thus even under the best imaging conditions conventional microscopes collect only a fraction of the photons emitted from the sample.
To address this problem, a microscope configuration can be used that uses two opposing objective lenses, where one of the objectives is in an inverted position and the other objective is in an upright position. This configuration is called dual objective multifocal plane microscopy (dMUM).
The structure of a micro-bat community in relation to gradients of environmental variation in a tropical urban area
Hourigan, C.L., Johnson, C., and Robson, S.K.A. (2006) The structure of a micro-bat community in relation to gradients of environmental variation in a tropical urban area. Urban Ecosystems, 9 (2). pp. 67-82.
View at Publisher Website: http://dx.doi.org/10.1007/s11252-006-790...
We investigated patterns of community structure (species composition, foraging activity, and nightly foraging patterns) of bats in relation to gradients of environmental variation in a tropical urban area. A total of 32 sites spread equally across eight habitat types were sampled in the city of Townsville, North Queensland, Australia. Each site was sampled on 3 non-consecutive occasions using automated AnaBat systems. Eleven species were confidently identified while a possible four more were identified only to the genus level. Ordination of environmental variables measured at these sites identified two distinct environmental gradients reflecting the degree of urbanisation and foliage density. With increasing urbanisation there was a decline in species richness and total foraging activity. We used regression trees to characterise foraging preferences of each species. This analysis suggested that only one species of Mormopterus was able to exploit the resources provided by urbanisation. This species foraged in areas with higher numbers of white streetlights. The remaining species of bats preferred to forage within close proximity to natural vegetation and with low numbers of streetlights. The density of vegetation in long-established suburbs did not substantially reverse the trend for urban areas to have fewer bat species than original habitats.
|Item Type:||Article (Refereed Research - C1)|
|Keywords:||micro-bat; tropical urban gradient; foraging patterns; species composition; community structure|
|FoR Codes:||06 BIOLOGICAL SCIENCES > 0602 Ecology > 060208 Terrestrial Ecology @ 100%|
|SEO Codes:||96 ENVIRONMENT > 9608 Flora, Fauna and Biodiversity > 960806 Forest and Woodlands Flora, Fauna and Biodiversity @ 100%|
|Deposited On:||12 Nov 2009 15:39|
|Last Modified:||13 Feb 2011 06:36|
Transactional memory on x86 will come in two flavors:
- Hardware Lock Elision (HLE), which consists of XACQUIRE and XRELEASE instruction prefixes. These can optimistically turn lock regions into transactions. What's the advantage? Well, transactions can execute concurrently, as long as they aren't writing to the same memory. Locks are serialized: only one thread can be in a lock region at a time. So this mechanism can (hypothetically) execute more code concurrently. Also - and perhaps more importantly - it should be much cheaper to acquire a lock using XACQUIRE than it is to acquire it without, since with XACQUIRE you don't have to do anything other than tell the system to start looking out for conflicts. Between the two advantages, HLE should be able to provide a nice performance boost. These are backwards compatible, so that software written to use them can be run on earlier hardware.
- Restricted Transactional Memory (RTM), which consists of XBEGIN, XEND and XABORT instructions. These have the traditional transactional memory semantics: you begin a transaction, and if you abort before you end it, your writes are undone. These are very flexible, but are not backwards compatible (i.e., you can't use them on older hardware).
I like the idea of hardware-level transactions / atomicity. Compilers can be written to take advantage of them, and they can be a win for people implementing synchronization primitives and doing very low-level multithreading. They have the potential of making synchronization very cheap when there isn't much contention. AFAICT, Azul has been claiming very large benefits from their HLE-style synchronization for years (edited: they claim <10%; see this presentation on hardware transactional memory from Cliff Click).
There is also likely to be a renaissance in lock-free data structures. Currently, lock-free data structures are pretty much all built on top of one non-blocking atomic instruction: the compare-and-swap (LOCK CMPXCHG on x86, or AtomicReference.compareAndSet() to Java programmers). Trying to shoehorn everything you might want to do atomically into a single instruction that operates on one word has led to some very interesting data structures over the last 20 years or so, but it is really the wrong way to solve the problem.
The flip side of this is that I don't think these will have much of an impact on most user-level synchronization. HLE is really just an optimization for current lock implementations. RTM aborts if you execute any of a large number of instructions (like PAUSE or CPUID, or many operations that affect control flow), or if there is a conflicting write to the cache line (which may not have any logical connection to the data the programmer cares about): the average programmer who deals with multithreaded code can't reasonably be expected to think about what is going on at the microarchitectural level.
Also, as with most transactional memory systems, RTM only deals with memory writes to a limited number of cache lines. If you have to deal with, for example, I/O, or if you touch too many cache lines, you have to write an abort handler that deals with rollback correctly. My feeling is that this will probably be far too difficult for most programmers.
Transactions' usefulness also depends enormously on the quality of their implementation. I'd love to see some hardware, but it isn't expected to be released until 2013.
Introduction to PHP
PHP (a recursive acronym that refers back to itself: it stands for PHP: Hypertext Preprocessor) is a popular server-side scripting language. "Server-side" means that special software on the web server processes each page containing PHP before it is sent out. This software responds to specific commands embedded within the webpage; generally, these commands tell it to generate new HTML code, which is sent back to the user's web browser. Thus, especially when it is connected to a database on a web server, PHP can be used to generate dynamic web content. There are other uses for PHP that do not involve the internet, but this use of PHP is the most common one today. (Coggeshall)
Examples: Like ASP.NET, PHP is broadly used, especially by small and moderately sized websites. Indeed, PHP originally stood for Personal Home Page tools, as it was built to maintain the personal website of its creator, Rasmus Lerdorf. PHP is also particularly prevalent in online web discussion boards. PHP pages end in .php rather than .html or .aspx.
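A minimal sketch of the embedding described above (a hypothetical page; everything outside the `<?php ... ?>` markers is ordinary HTML, and the one embedded command asks the server to insert the current date before the page is sent out):

```php
<!-- hello.php: ordinary HTML with one embedded PHP command -->
<html>
  <body>
    <p>This page was generated on <?php echo date("F j, Y"); ?>.</p>
  </body>
</html>
```

The browser never sees the PHP; it receives only the finished HTML, with the date already filled in.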
2.11. Electrostatic Energy
The effect of the electrostatic interaction on the binding energies of atomic nuclei relative to free protons and electrons is taken into account in category 6, and the electrostatic contribution to the binding energies of white dwarfs is (in principle) part of category 5. The molecular binding energy in objects ranging from dust to asteroids that are held together by the electrostatic interaction deserves separate mention.
The molecular binding energy relative to free atoms in condensed matter is roughly 1 eV per atom. The product of this mass fraction, ~ 10^-10, with entry 3.12 is
This is the binding energy of condensed matter physics outside strongly self-gravitating systems. We refrain from entering it in Table 1 because the estimate is so small and uncertain, but we offer for comparison the binding energy of the electrons in atoms.
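The ~ 10^-10 mass fraction quoted above follows from a rough order-of-magnitude estimate (a sketch; the 1 eV per atom is from the text, while a mean atomic weight A ~ 10 is a round-number assumption):

```latex
\frac{E_{\rm mol}}{A\, m_u c^2} \sim \frac{1\ \mathrm{eV}}{10 \times 931\ \mathrm{MeV}} \sim 10^{-10}
```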
The electrostatic binding energy of the electrons in an O VI atom is 1.6 keV, and the addition of the other five electrons in a neutral oxygen atom increases the binding energy by only 27 percent. That is, it is a reasonable approximation to ignore the states of ionization of the heavy elements and the electrostatic binding energy of neutral hydrogen and helium. The sum over the heavy element masses in entries 1, 2, 4, 5, and 6 in Table 3, weighted by the neutral atomic binding energies of 15 cosmically conspicuous elements, is
This is some six orders of magnitude larger than the molecular binding energy in equation (129), and comparable in value to the smallest entries in Table 1.
for National Geographic News
Warp drives may be the stuff of science fiction, but another Star Trek staple appears to be edging toward science fact.
The energy source that enables the starship Enterprise to boldly go where no one has gone before has, according to one controversial new claim, moved much closer to reality.
A New Mexico company has just completed its initial studies of an antimatter-powered rocket that it hopes will someday take astronauts to Mars in 90 days or less.
As with Trek when it first aired in the 1960s, many critics doubt the ambitious new program will ever get off the ground.
The existence of antimatter was first predicted in 1928. It's said to be a mirror image of matter.
If a particle makes contact with its antiparticle, the two substances annihilate: they both vanish in a flurry of high-energy radiation known as gamma rays.
The electron, carrier of electricity, has an antimatter twin called the positron, or antielectron, which was discovered in 1932.
Sci-fi authors and screenwriters have since cashed in on the reflective, perplexing, and overpowering possibilities of this mystery substance.
Examples include the "Anti-Matter Man" episode of the 1960s TV series Lost in Space; the "positronic brains" of the cyborgs in Isaac Asimov's book I, Robot; and R.L. Forward's novel Martian Rainbow, in which antimatter rockets boost both a Mars mission and world domination.
But Forward was not the only visionary who saw antimatter as the ultimate form of rocket propulsion.
In the 1950s Austrian engineer Eugen Sänger first suggested using positron-electron annihilations to power spacecraft. But one of the chief problems that dogged his efforts was storage.
Experts increasingly recognize that ice melting in Antarctica could push up sea levels dramatically higher in coming decades.
By John Roach, NBC News
Melting glaciers in Antarctica and Greenland may push up global sea levels more than 3 feet by the end of this century, according to a scientific poll of experts that brings a degree of clarity to a murky and controversial slice of climate science.
Such a rise in the seas would displace millions of people from low-lying countries such as Bangladesh, swamp atolls in the Pacific Ocean, cause dikes in Holland to fail, and cost coastal mega-cities from New York to Tokyo billions of dollars for construction of sea walls and other infrastructure to combat the tides.
Estimating how much sea levels will rise from ice sheet melting is one of the more challenging aspects of climate science. Some evidence suggests recent accelerated melting is related to changes in ocean and atmospheric temperature, though natural variability may play an important role.
In addition, glaciers respond to external forces such as warmer temperatures in different ways, even when they are located right next to each other. As a result, there is tremendous uncertainty in the scientific community over how the melting will affect sea levels over the next century.
Bamber and colleague Willy Aspinall attempted to find clarity in the chaos using a scientific polling technique common in fields such as predicting earthquakes and volcanic eruptions, but until now not applied to climate science.
The pair sent 26 of the world's leading glaciologists a series of questions about the behavior of the Antarctic and Greenland ice sheets. About half replied to the survey in 2010. The respondents were polled again in 2012 to assess the robustness of their answers.
Bamber said this type of approach is "a lot more than an opinion poll." The experts were handpicked to get a representative perspective of world leaders from the ice sheet modeling and observational fields. "We analyzed the results in a very systematic, rigorous, and statistically robust way," he added.
The median estimate from the experts is that the melting ice sheets will contribute 1 foot (29 centimeters) to sea level rise by the year 2100 with a 5 percent chance their contribution could exceed 2.8 feet (84 centimeters). When the effect of thermal expansion (water expands as it warms) is taken into account, the high-end estimate is more than 3 feet (1 meter).
The estimates are higher than the controversial figures in the 2007 report from the Intergovernmental Panel on Climate Change (IPCC) of up to 23 inches (59 centimeters) and higher than the unpublished estimates being prepared for the next IPCC report, said Bamber, who is a review editor for that document and has seen the estimates.
The discrepancy likely reflects added weight given to recent studies that indicate glacier melt has accelerated in recent years in Antarctica and Greenland, and that the West Antarctic ice sheet could partially collapse by the end of this century.
"The numbers we are getting out of our elicitation reflect the fact that the world leaders in this field are now cognizant of the fact that the ice sheets are quite responsive and, in particular, there is a potential for them to make a really quite dramatic contribution," Bamber said.
The greatest drama would be a more than 3-foot rise in sea levels from the combined effect of melting ice and thermal expansion, which the study indicates has a 1 in 20 chance of occurring.
How much of this drama can be attributed to human burning of fossil fuels, the study indicates, remains murky. “There is really no consensus amongst the experts we approached,” Bamber said. “That’s something that we in the scientific community need to address as a matter of urgency.”
John Roach is a contributing writer for NBC News Digital. To learn more about him, check out his website.
Physics First: Magnetism and Magnetic Force Units
Magnetic fields can be defined as the regions surrounding a magnet where a moving electric charge will feel a force of attraction or repulsion. Invisible magnetic field lines emerge from the North pole of a magnet and enter the South pole. Field lines can be visualized by sprinkling small iron filings over a magnet covered by a clear sheet of plastic. When a compass (or any freely floating bar magnet) points north, it is actually aligning its north pole to the Earth's magnetic south pole.
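The force in this definition is the magnetic part of the Lorentz force: for a charge q moving with velocity v in a magnetic field B,

```latex
\vec{F} = q\,\vec{v} \times \vec{B}
```

The cross product makes the force perpendicular to both the velocity and the field, and zero for a stationary charge, which is why the definition specifies a moving charge.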
References and Collections:
This collection of 41 interactive Java tutorials would be an excellent choice to connect physics to "real-world" applications. Designed by well-respected authors, the topics range from simulated magnetic fields and field lines to primers on capacitance, resistance, Ohm's Law, and electromagnetic induction. Included are simulations on how things work, such as vacuum tube diodes, cathode rays, capacitors, AC/DC generators, hard drives, pulsed magnets, and speakers.
This resource is a student tutorial on magnetism appropriate for middle school or 9th grade Physical Science. It is organized into sequenced headings that each contain interactive simulations and reflective questions. The first half of the tutorial gives students a conceptual framework to understand properties of magnets and magnetic behavior. The topics then broaden to include magnetic lines of force, magnetic field, electromagnets, electric motors, and galvanometers.
This lesson contains instructions for conducting an inquiry-based lab to investigate current-carrying coils in magnetic fields.
SCIENTISTS associated with the Intergovernmental Panel on Climate Change have predicted that by the year 2050, the sea level is likely to rise by 0.3-0.5 m. This is assuming that no effort is made to check greenhouse gas emissions, which heat up the atmosphere and cause the polar ice caps to melt. Using sophisticated computer modelling techniques, they calculated that by 2100, a 1-metre rise in sea level is likely.
But determining changes in the level of the sea is no easy task. For instance, scientists of the Ocean Engineering Centre at the Indian Institute of Technology in Madras calculated on the basis of historical data that the average sea level rise on the eastern Indian coast is 0.67 mm per year.
However, an analysis of similar data by another group of scientists reveals that the sea level at 5 stations spread along both the eastern and western Indian coasts has risen by an average of 2.8 mm per year. One of the scientists associated with the latter study cautions that "there is still a considerable uncertainty about the rise in sea level due to global warming in the Indian region".
Explains Virendra Asthana of Jawaharlal Nehru University's school of environmental sciences, "The analysis of tide gauge data is extremely difficult due to background noise caused by the natural rise or subsidence of land." This, he says, clouds data and makes it difficult to separate land movements from actual rises in the sea level.
Python for experienced programmers
When running Python scripts from the command line, it is sometimes useful to know where the currently running script is located on disk.
This is one of those obscure little tricks that is virtually impossible to figure out on your own, but simple to remember once you see it. The key to it is sys.argv[0]. As we saw in XML Processing, sys.argv is a list that holds the command-line arguments. However, its first element, sys.argv[0], also holds the name of the running script, exactly as it was called from the command line, and this is enough information to determine its location.
If you have not already done so, you can download this and other examples used in this book.
import sys, os

print 'sys.argv[0] =', sys.argv[0]
pathname = os.path.dirname(sys.argv[0])
print 'path =', pathname
print 'full path =', os.path.abspath(pathname)
Regardless of how you run a script, sys.argv[0] will always contain the name of the script, exactly as it appears on the command line. This may or may not include any path information, as we'll see shortly.

os.path.dirname takes a filename as a string and returns the directory path portion. If the given filename does not include any path information, os.path.dirname returns an empty string.

os.path.abspath is the key here. It takes a pathname, which can be partial or even blank, and returns a fully qualified pathname.
os.path.abspath deserves further explanation. It is very flexible; it can take any kind of pathname.
>>> import os
>>> os.getcwd()
'/home/f8dy'
>>> os.path.abspath('')
'/home/f8dy'
>>> os.path.abspath('.ssh')
'/home/f8dy/.ssh'
>>> os.path.abspath('/home/f8dy/.ssh')
'/home/f8dy/.ssh'
>>> os.path.abspath('.ssh/../foo/')
'/home/f8dy/foo'
os.getcwd() returns the current working directory.

Calling os.path.abspath with an empty string returns the current working directory, same as os.getcwd().

Calling os.path.abspath with a partial pathname constructs a fully qualified pathname out of it, based on the current working directory.

Calling os.path.abspath with a full pathname simply returns it.

os.path.abspath also normalizes the pathname it returns. Note that this example worked even though I don't actually have a 'foo' directory. os.path.abspath never checks your actual disk; this is all just string manipulation.

The pathnames and filenames you pass to os.path.abspath do not need to exist.

os.path.abspath not only constructs full path names, it also normalizes them. If you are in the /usr/ directory, os.path.abspath('bin/../local/bin') will return /usr/local/bin. If you just want to normalize a pathname without turning it into a full pathname, use os.path.normpath instead.
[f8dy@oliver py]$ python /home/f8dy/diveintopython/common/py/fullpath.py
sys.argv[0] = /home/f8dy/diveintopython/common/py/fullpath.py
path = /home/f8dy/diveintopython/common/py
full path = /home/f8dy/diveintopython/common/py
[f8dy@oliver diveintopython]$ python common/py/fullpath.py
sys.argv[0] = common/py/fullpath.py
path = common/py
full path = /home/f8dy/diveintopython/common/py
[f8dy@oliver diveintopython]$ cd common/py
[f8dy@oliver py]$ python fullpath.py
sys.argv[0] = fullpath.py
path =
full path = /home/f8dy/diveintopython/common/py
In the first case, sys.argv[0] includes the full path of the script. We can then use the os.path.dirname function to strip off the script name and return the full directory name, and os.path.abspath simply returns what we give it.

If the script is run by using a partial pathname, sys.argv[0] will still contain exactly what appears on the command line. os.path.dirname will then give us a partial pathname (relative to the current directory), and os.path.abspath will construct a full pathname from the partial pathname.

If the script is run from the current directory without giving any path, os.path.dirname will simply return an empty string. Given an empty string, os.path.abspath returns the current directory, which is what we want, since the script was run from the current directory.

Like the other functions in the os and os.path modules, os.path.abspath is cross-platform. Your results will look slightly different than my examples if you're running on Windows (which uses backslash as a path separator) or Mac OS (which uses colons), but they'll still work. That's the whole point of the os module.
Addendum. One reader was dissatisfied with this solution, and wanted to be able to run all the unit tests in the current directory, not the directory where regression.py is located. He suggests this approach instead:
import sys, os, re, unittest

def regressionTest():
    path = os.getcwd()
    sys.path.append(path)
    files = os.listdir(path)
Instead of setting path to the directory where the currently running script is located, we set it to the current working directory. This will be whatever directory you were in before you ran the script, which is not necessarily the same as the directory the script is in. (Read that sentence a few times until you get it.)

Append this directory to the Python library search path, so that when we dynamically import the unit test modules later, Python can find them. We didn't have to do this when path was the directory of the currently running script, because Python always looks in that directory.

The rest of the function is the same.
This technique will allow you to re-use this regression.py script on multiple projects. Just put the script in a common directory, then change to the project’s directory before running it. All of that project’s unit tests will be found and tested, instead of the unit tests in the common directory where regression.py is located.
Engineers at the University of Ulster are the first researchers to create diamond nanorods with a diameter as thin as 2.1 nm, which is not only smaller than all the currently reported diamond 1D nanostructures (4-300 nm) but also smaller than the theoretical calculated value (2.7-9 nm) for energetically stable diamond nanorods.
This is only twice as wide as the "rod logic" rods in Nanosystems. These are still bulk technology, though, and not atomically precise. Near-term uses seem to be as low-power cold cathodes and the like.
Marbled murrelets typically nest on large mossy beds in complex and biodiversity-rich forests older than 250 years — in other words, old growth. These forests provide a canopy structure for nesting and have to be within a 30 kilometre (km) range of the ocean. These mossy beds must be high up in the canopy and hidden in order to protect marbled murrelets from predators when they nest. These mossy beds must also be next to an opening in the forest canopy to allow for the special methods by which this bird approaches its nest and leaves it. The murrelet takes off by jumping out of the nest and then free-falls to earth before beginning to fly.
Marbled murrelets are very slow to reproduce. They breed only after two to three years of age. Each breeding female usually lays only one egg per year and that has less than a 50 per cent chance of surviving.
Its slow reproductive rate combined with the logging of nesting habitats in old growth forests throughout much of the bird’s range as well as threats at sea (such as oil spills and gill-netting), have resulted in a rapid population decline — approximately 70 per cent loss in the last 25 years alone. For these reasons, the marbled murrelet is now listed as threatened (red-listed) by the British Columbia (B.C.) Conservation Data Centre, and as threatened under the Canadian Species at Risk Act. It is also listed under the U.S. Endangered Species Act.
It is estimated that British Columbia is home to approximately 27 per cent of the global population. The Great Bear Rainforest is home to half of B.C.’s marbled murrelet population.
SOME INTERESTING FACTS
- Marbled murrelets require a healthy marine and terrestrial environment for survival. This means oil spills, overfishing, logging and mining activities all place this unique bird at risk.
- Although they spend most of their time in the ocean, marbled murrelets nest in inland areas in old-growth forests that provide canopy-structured trees needed for their nesting habits.
- Marbled murrelets do not build nests. They make a depression in existing moss in the tree canopy. This is in order to avoid leaving evidence of a nest which ‘edge of forest’ species like ravens, crows and jays look for when hunting.
- One egg is about a third the size of a marbled murrelet. The chick remains in the nest for about a month.
- The chicks of most species feed on regurgitated food from their parents – but not the marbled murrelet. The chick feeds on a whole fish, which its parent sometimes flies up to 70 km away to find.
- Marbled murrelets do not ‘glide’ like most birds. Instead they fly at one speed. When they stop, they just stop and when they take off from their nest they must drop down from the tree.
There is probably no day greeted with greater joy and anticipation than the first day of spring -- especially after a Minnesota winter! Sometimes, the only thing that gets us through February is knowing that better days are on the way. But when, exactly, does spring get here?
TV weathermen will tell you that spring starts on the vernal equinox -- the day when the number of hours of daylight is equal to the number of hours of night. (In 2005, this falls on March 20.) The problem is, the weathermen are wrong.
Seasons are not astronomical phenomena; they are climatological phenomena. Or, to put it in English: seasons aren't about the sun in the sky; they're about the weather on Earth. They are defined by temperature. Winter is the coldest part of the year, summer the warmest, and spring and fall the periods in-between. On average, the coldest part of the year runs from December 4 through March 4. Which makes Saturday, March 5, the first day of spring!
Now, the problem is, this date is just an average. In any given year, there will almost certainly be nicer days before it and lousier days after it. Which is probably what led the weathermen to go with the equinox. It's very predictable -- down to the minute -- and there's no arguing: the days before it are shorter, the days after it are longer. (It's also three weeks later, which means the chance of nasty weather after your "first day of spring" is diminished.)
But daylight isn't the issue. Temperature is what counts. Besides, if you make the equinox the first day of spring, then the first day of summer will be the solstice -- the longest day of the year, June 21. And, as all good Scandinavians know, the longest day of the year isn't the first summer's day; it's MID-summer's day!
The first day of each season, according to average temperature:
(To learn more about how the Sun and Earth create our seasons, visit the Weather section of the Experiment Gallery on level 3.)
Mar. 16, 2012 Early snowmelt caused by climate change in the Colorado Rocky Mountains snowballs into two chains of events: a decrease in the number of flowers, which, in turn, decreases available nectar. The result is decline in a population of the Mormon Fritillary butterfly, Speyeria mormonia.
Using long-term data on date of snowmelt, butterfly population sizes and flower numbers at the Rocky Mountain Biological Laboratory, Carol Boggs, a biologist at Stanford University, and colleagues uncovered multiple effects of early snowmelt on the growth rate of an insect population.
"Predicting effects of climate change on organisms' population sizes will be difficult in some cases due to lack of knowledge of the species' biology," said Boggs, lead author of a paper reporting the results online in this week's journal Ecology Letters.
Taking into account the butterfly's life cycle and the factors determining egg production was important to the research.
Butterflies lay eggs (then die) in their first summer; the caterpillars from those eggs over-winter without eating and develop into adults in the second summer.
In laboratory experiments, the amount of nectar a female butterfly ate determined the number of eggs she laid. This suggested that flower availability might be important to changes in population size.
Early snowmelt in the first year leads to lower availability of the butterfly's preferred flower species, a result of newly developing plants being exposed to early-season frosts that kill flower buds.
The ecologists showed that reduced flower--and therefore nectar--availability per butterfly adversely affected butterfly population growth rate.
Early snowmelt in the second year of the butterfly life cycle worsened the effect, probably through direct killing of caterpillars during early-season frosts.
The combined effects of snowmelt in the two consecutive years explained more than four-fifths of the variation in population growth rate.
"Because species in natural communities are interconnected, the effects of climate change on any single species can easily be underestimated," said Saran Twombly, program director in the National Science Foundation's Division of Environmental Biology, which funded the research.
"This study combines long-term, data models, and an understanding of species interactions to underscore the complex effects climate change has on natural populations."
"It's very unusual for research to uncover a simple mechanism that can explain almost all the variation in growth rate of an insect population," said David Inouye, a biologist at the University of Maryland and co-author of the paper.
Indeed, "one climate parameter can have multiple effects on an organism's population growth," Boggs said. "This was previously not recognized for species such as butterflies that live for only one year.
"We can already predict that this coming summer will be a difficult one for the butterflies," she said, "because the very low snowpack in the mountains this winter makes it likely that there will be significant frost damage."
"Long-term studies such as ours are important to understanding the 'ecology of place,' and the effects of weather and possible climate change on population numbers," said Inouye.
"This research is critical to assessing the broader effects of weather on an ever-changing Earth," he said. "By facilitating long-term studies, field stations such as the Rocky Mountain Biological Laboratory are an invaluable asset."
Stanford University's Vice Provost for Undergraduate Education also funded the work.
- Carol L. Boggs, David W. Inouye. A single climate driver has direct and indirect effects on insect population dynamics. Ecology Letters, 2012; DOI: 10.1111/j.1461-0248.2012.01766.x
The difference between sleeping and being awake seems simple enough. You know someone is sleeping because his eyes are closed, he’s lying down and inactive, he doesn’t answer your questions, and he might be snoring. People who are awake, on the other hand, have open eyes, can get things done, are responsive to questions and generally don’t snore. If you were to look at a group of animals, you could probably tell which were asleep and which were awake.
But a new study suggests sleeping may be more complicated and less obvious than that. When researchers kept animals up late, the critters seemed to be wide awake, even though tests showed parts of their brains were sleeping.
In the experiment, scientists studied brain activity in sleep-deprived rats. The animals were kept up when they normally would have been sleeping. During that time, the rats’ eyes stayed open. But their brains were not fully functioning: Some brain cells, called neurons, were working, while others dozed.
When an animal is awake, neurons send messages to each other in the form of tiny electrical pulses. While an animal sleeps, these pulses change: Neurons cycle on and off.
The scientists measured the electrical activity of rat neurons with an electroencephalogram, or EEG. Each rat in this experiment had two electrodes in its brain. Electrodes picked up and recorded electrical activity, which the EEG reported as patterns that researchers could read.
After the rats had been kept awake for hours, they played and did tasks — but the EEG showed that some of their neurons were catching zzzzs. In other words, some neurons in the awake rats’ brains shut off as though the animals were sleeping.
The tired rats also had trouble completing difficult tasks, such as reaching through a hole in a wall for a sugar pellet. (The task requires animals to reach and turn their paws, which is considered difficult for rats.) The researchers found a connection between a rat’s success and which brain cells fell asleep. If the sleeping neurons were in a part of the brain that the rat needed for getting the sugar, the animal had difficulty with this task.
Scientists study rats and other animals because their brains are similar to humans’. If the brains of tired people behave like the brains of tired rats, sleep-deprived people also may run into trouble — saying the wrong thing, making mistakes while driving or making bad decisions.
Even though a person might feel sleepy, “nobody would be able to tell there was anything wrong with you,” Giulio Tononi told Science News. Tononi, who led the recent study, is a neuroscientist at the University of Wisconsin–Madison. Neuroscientists study the structure and function of the brain and the nervous system. They often try to understand which parts of the brain correspond to behaviors or activities.
Scientists used to believe that one part of the brain was in charge of sleeping and being awake. But in the past 20 years, a number of studies — including this one — suggest that sleep may not be so simple. Many researchers now suspect that sleep starts in single cells and then spreads throughout the brain. Which means not all cells sleep at the same time, and recognizing the difference between being awake and being asleep can prove challenging.
Unless someone is snoring. Then it still seems fairly simple.
POWER WORDS (adapted from the New Oxford American Dictionary)
electroencephalogram (EEG) A test or record of brain activity. An EEG device reports the electrical activity in the brain as a series of waves that can be read and interpreted by scientists.
neuroscience The scientific study of the brain and nervous system, their structures and functions.
neuron A specialized cell that transmits electrical impulses to other cells.
electrode A conduit, or channel, that allows electricity to leave or enter an object.
conductor Something that can conduct electricity.
snore A snorting or grunting sound in a sleeping person’s breathing. | <urn:uuid:0ab5be42-47ce-46c7-846b-6c14b0be9904> | 3.875 | 833 | Knowledge Article | Science & Tech. | 51.770045 |
> The wchar_t is an opaque data type (some say semi-opaque) whose
> representation you shouldn't make assumptions about. You can find various
> UNIX spec and standards sources saying that it is an opaque data type,
> for instance XPG4/4.2/5, which subsumes POSIX.1, ANSI C/ISO C, SVID3 and
> the System V ABI. The only thing you can be sure of with the wchar_t type
> (if the OS you are dealing with is XPG4-compliant) is that 0 and the PCS
> characters (with the same values that you can assign) exist in wchar_t in
> terms of code values of the type.
To my knowledge, there isn't a single data type in C that is nailed
down to a specific number of bits. Everything can change, based
on the author of the compiler and capability of the target processor.
You're only guaranteed that long is at least as large as int, for
example. And, I remember my first compiler that 'supported' wchar_t
had sizeof (wchar_t) == 1!!! [MSVC 1.52??]
Hence, if you really want a fixed number of bits, you're best using
macros like UBYTE, UWORD, WORD, LWORD, etc, then having conditionals
to make sure your compiler assigns the correct data type to these.
I like the idea of a macro/typedef called 'unichar', or UNICHAR.
Then we can have UCS4CHAR too! :-)
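The platform dependence discussed in this thread is easy to observe today. As a sketch (using Python's ctypes purely for illustration, rather than a C program), wchar_t's size varies by platform while fixed-width types do not:

```python
import ctypes

# sizeof(wchar_t) is platform-dependent: typically 2 bytes on Windows
# (UTF-16 code units) and 4 bytes on Linux and macOS (UCS-4).
print(ctypes.sizeof(ctypes.c_wchar))

# Fixed-width types, the modern replacement for UBYTE/UWORD-style macros,
# really are nailed down to a specific number of bits:
assert ctypes.sizeof(ctypes.c_uint8) == 1
assert ctypes.sizeof(ctypes.c_uint16) == 2
assert ctypes.sizeof(ctypes.c_uint32) == 4
```

In C itself, the same guarantee comes from the <stdint.h> typedefs (uint8_t, uint16_t, uint32_t) rather than hand-rolled conditional macros.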
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:33 EDT | <urn:uuid:5850dd5c-bbd4-4632-916a-c1cfb7719a90> | 3.15625 | 381 | Comment Section | Software Dev. | 76.371481 |
From Kust Wiki
Version by Astamoulis on 2 March 2009 at 23:23
This is one of the fundamental definitions for comprehending evolution.
A population is a group of individuals of one species that live in the same geographical area at the same time.
I am supposed to find all the zeros of a polynomial function
If I synthetically divide a function by all the real zeros, and am left with 1, is that an x value of 1?
Nope! If I understand your question correctly, being "left with 1" means a factor of 1, not a remainder of 1 and not the expression "x−1". For example:
Maybe you needed to find all the zeros of x^4 − 15x^2 − 10x + 24. You determined that there was a real zero at x = 4, so you divided the original polynomial by x − 4 using synthetic division to obtain the cubic x^3 + 4x^2 + x − 6. Then you found another zero at x = −2, so you divided that cubic by x + 2 to obtain x^2 + 2x − 3. Next you divided this quadratic by x − 1 because that quadratic has a zero at x = 1. This division gave you the linear expression x + 3, which has a zero at x = −3, and the final synthetic division of x + 3 by x + 3 gives you 1.
It's true that (1)(x+3)(x−1)(x+2)(x−4) = x^4 − 15x^2 − 10x + 24.
But it's also true that (1)(1)(1)(1)(x+3)(x−1)(x+2)(x−4) = x^4 − 15x^2 − 10x + 24. And it's a bit silly.
That's because 1 is the multiplicative identity. When the only factor left is one, your fun is done...assuming that you think solving a quartic is fun...which would make you a very unusual person. I prefer to solve quartics using Wolfram Alpha. Try http://www.wolframalpha.com/input/?i=Solve+x^4%E2%88%9215x^2%E2%88%9210x%2B24%3D0 .
Anyway, if that's what happened then you're in for a treat because, coincidentally, the great mathematics educator Vi Hart came out with a video on this exact topic! See http://www.youtube.com/watch?v=GFLkou8NvJo
The Remainder Theorem provides the answer:
The Division Algorithm states:
f(x) = (x-c) * q(x) + r
q is the quotient of the synthetic division
r is the remainder of the synthetic division
Set x=c to get a value for r:
f(c) = (c-c)*q(c) + r
f(c) = ( 0 )*q(c) + r
f(c) = r
This tells you that the remainder, r, is the value of f(x) at x=c.
If r = 1, then c is not a root of the function f.
If r = 0, then c is a root of the function f, i.e. f(c)=0 | <urn:uuid:33921868-ab65-4b3a-bec6-b22aa14458d5> | 3.296875 | 656 | Q&A Forum | Science & Tech. | 100.785161 |
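The Remainder Theorem above is easy to check numerically. Horner's method performs exactly the same arithmetic as synthetic division, so its final value is the remainder r = f(c). A quick sketch using the quartic from the first answer:

```python
def poly_eval(coeffs, c):
    """Horner's method, equivalent to synthetic division by (x - c).

    coeffs are listed from the highest power down; the returned value
    is the remainder r of the Division Algorithm, i.e. f(c).
    """
    r = 0
    for a in coeffs:
        r = r * c + a
    return r

# f(x) = x^4 - 15x^2 - 10x + 24 from the example above
f = [1, 0, -15, -10, 24]
for c in (4, -2, 1, -3):
    assert poly_eval(f, c) == 0   # remainder 0: c is a root
assert poly_eval(f, 2) != 0       # nonzero remainder: 2 is not a root
```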
Apache Tomcat Development
Apache Tomcat 6.0
Class Loader HOW-TO
Like many server applications, Tomcat installs a variety of class loaders
(that is, classes that implement
java.lang.ClassLoader) to allow
different portions of the container, and the web applications running on the
container, to have access to different repositories of available classes and
resources. This mechanism is used to provide the functionality defined in the
Servlet Specification, version 2.4 — in particular, Sections 9.4
In a Java environment, class loaders are
arranged in a parent-child tree. Normally, when a class loader is asked to
load a particular class or resource, it delegates the request to a parent
class loader first, and then looks in its own repositories only if the parent
class loader(s) cannot find the requested class or resource. Note that the
model for web application class loaders differs slightly from this,
as discussed below, but the main principles are the same.
When Tomcat is started, it creates a set of class loaders that are
organized into the following parent-child relationships, where the parent
class loader is above the child class loader:
      Bootstrap
          |
       System
          |
       Common
       /     \
  Webapp1   Webapp2 ...
The characteristics of each of these class loaders, including the source
of classes and resources that they make visible, are discussed in detail in
the following section.
Class Loader Definitions
As indicated in the diagram above, Tomcat 6 creates the following class
loaders as it is initialized:
Bootstrap — This class loader contains the basic
runtime classes provided by the Java Virtual Machine, plus any classes from
JAR files present in the System Extensions directory
($JAVA_HOME/jre/lib/ext). Note: some JVMs may
implement this as more than one class loader, or it may not be visible
(as a class loader) at all.
System — This class loader is normally initialized
from the contents of the
CLASSPATH environment variable. All
such classes are visible to both Tomcat internal classes, and to web
applications. However, the standard Tomcat startup scripts
($CATALINA_HOME/bin/catalina.sh or %CATALINA_HOME%\bin\catalina.bat)
totally ignore the contents of the CLASSPATH environment variable itself, and instead
build the System class loader from the following repositories:
$CATALINA_HOME/bin/bootstrap.jar — Contains the
main() method that is used to initialize the Tomcat server, and the
class loader implementation classes it depends on.
$CATALINA_HOME/bin/tomcat-juli.jar — Logging
implementation classes. These include enhancement classes to
java.util.logging API, known as Tomcat JULI,
and a package-renamed copy of Apache Commons Logging library
used internally by Tomcat.
See the logging documentation for more information.
$CATALINA_HOME/bin/commons-daemon.jar — The classes
from Apache Commons Daemon.
The tomcat-juli.jar and commons-daemon.jar JARs in
$CATALINA_HOME/bin are not present in the CLASSPATH built by the
.bat and .sh startup scripts, but are
referenced from the manifest file of bootstrap.jar.
If $CATALINA_BASE and $CATALINA_HOME do differ and
$CATALINA_BASE/bin/tomcat-juli.jar does exist, the startup scripts
will add it to
CLASSPATH before bootstrap.jar, so
that Java will look into $CATALINA_BASE/bin/tomcat-juli.jar for
classes before it will look into $CATALINA_HOME/bin/tomcat-juli.jar
referenced by bootstrap.jar. It should work in most cases but,
if you are using such configuration, it might be recommended to remove
tomcat-juli.jar from $CATALINA_HOME/bin so that only
one copy of the file is present on the classpath. The next version of
Tomcat, Tomcat 7, takes different approach here.
Common — This class loader contains additional
classes that are made visible to both Tomcat internal classes and to all
web applications. Normally, application classes should NOT
be placed here. The locations searched by this class loader are defined by
common.loader property in
$CATALINA_BASE/conf/catalina.properties. The default setting will search the
following locations in the order they are listed:
- unpacked classes and resources in $CATALINA_BASE/lib
- JAR files in $CATALINA_BASE/lib
- unpacked classes and resources in $CATALINA_HOME/lib
- JAR files in $CATALINA_HOME/lib
By default, this includes the following:
- annotations-api.jar — JavaEE annotations classes.
- catalina.jar — Implementation of the Catalina servlet
container portion of Tomcat.
- catalina-ant.jar — Tomcat Catalina Ant tasks.
- catalina-ha.jar — High availability package.
- catalina-tribes.jar — Group communication package.
- ecj-*.jar — Eclipse JDT Java compiler.
- el-api.jar — EL 2.1 API.
- jasper.jar — Tomcat Jasper JSP Compiler and Runtime.
- jasper-el.jar — Tomcat Jasper EL implementation.
- jsp-api.jar — JSP 2.1 API.
- servlet-api.jar — Servlet 2.5 API.
- tomcat-coyote.jar — Tomcat connectors and utility classes.
- tomcat-dbcp.jar — Database connection pool
implementation based on package-renamed copy of Apache Commons Pool
and Apache Commons DBCP.
- tomcat-i18n-**.jar — Optional JARs containing resource bundles
for other languages. As default bundles are also included in each
individual JAR, they can be safely removed if no internationalization
of messages is needed.
WebappX — A class loader is created for each web
application that is deployed in a single Tomcat instance. All unpacked
classes and resources in the
/WEB-INF/classes directory of
your web application, plus classes and resources in JAR files under the
/WEB-INF/lib directory of your web application,
are made visible to this web application, but not to other ones.
As mentioned above, the web application class loader diverges from the
default Java 2 delegation model (in accordance with the recommendations in the
Servlet Specification, version 2.4, section 9.7.2 Web Application Classloader).
When a request to load a
class from the web application's WebappX class loader is processed,
this class loader will look in the local repositories first,
instead of delegating before looking. There are exceptions. Classes which are
part of the JRE base classes cannot be overridden. For some classes (such as
the XML parser components in J2SE 1.4+), the J2SE 1.4 endorsed feature can be used.
Last, any JAR file that contains Servlet API classes will be explicitly
ignored by the classloader. Do not include such JARs in your web application.
All other class loaders in Tomcat 6 follow the usual delegation pattern.
Therefore, from the perspective of a web application, class or resource
loading looks in the following repositories, in this order:
- Bootstrap classes of your JVM
- System class loader classes (described above)
- /WEB-INF/classes of your web application
- /WEB-INF/lib/*.jar of your web application
- Common class loader classes (described above)
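As a toy illustration (plain Python with made-up repository and class names, not actual Tomcat code), the lookup order above can be sketched as an ordered search over repositories, showing why a webapp-local class shadows a copy in the Common loader while JRE/system classes still win:

```python
# Toy model of the class lookup order a web application sees.
repo_contents = {
    "bootstrap":       {"java.lang.String"},
    "system":          {"org.apache.catalina.startup.Bootstrap"},
    "WEB-INF/classes": {"com.example.MyServlet", "com.example.util.Json"},
    "WEB-INF/lib":     set(),
    "common":          {"javax.servlet.Servlet", "com.example.util.Json"},
}

# Webapp-local repositories are searched before Common, but after the
# JVM bootstrap and system loaders.
SEARCH_ORDER = ["bootstrap", "system", "WEB-INF/classes", "WEB-INF/lib", "common"]

def find_class(name):
    """Return the first repository containing the class, mimicking the
    webapp loader's divergence from pure parent-first delegation."""
    for repo in SEARCH_ORDER:
        if name in repo_contents[repo]:
            return repo
    raise LookupError("ClassNotFoundException: " + name)

# A class present in both the webapp and Common resolves to the webapp copy:
assert find_class("com.example.util.Json") == "WEB-INF/classes"
# Servlet API classes are never taken from the webapp:
assert find_class("javax.servlet.Servlet") == "common"
```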
XML Parsers and Java
Starting with Java 1.4 a copy of JAXP APIs and an XML parser are packed
inside the JRE. This has impacts on applications that wish to use their own XML parser.
In old versions of Tomcat, you could simply replace the XML parser
in the Tomcat libraries directory to change the parser
used by all web applications. However, this technique will not be effective
when you are running modern versions of Java, because the usual class loader
delegation process will always choose the implementation inside the JDK in
preference to this one.
Java supports a mechanism called the "Endorsed Standards Override
Mechanism" to allow replacement of APIs created outside of the JCP
(i.e. DOM and SAX from W3C). It can also be used to update the XML parser
implementation. For more information, see:
Tomcat utilizes this mechanism by including the system property setting
-Djava.endorsed.dirs=$JAVA_ENDORSED_DIRS in the
command line that starts the container. The default value of this option is
$CATALINA_HOME/endorsed. This endorsed directory is not
created by default. | <urn:uuid:8cc7b665-a96a-44a7-9be6-4d8bf1b4a5f2> | 2.890625 | 1,932 | Documentation | Software Dev. | 42.823073 |
Haze is traditionally an atmospheric phenomenon where dust, smoke and other dry particles obscure the clarity of the sky. The World Meteorological Organization manual of codes includes a classification of horizontal obscuration into categories of fog, ice fog, steam fog, mist, haze, smoke, volcanic ash, dust, sand and snow. Sources for haze particles include farming (ploughing in dry weather), traffic, industry, and wildfires.
Seen from afar (e.g. approaching airplane) and depending upon the direction of view with respect to the sun, haze may appear brownish or bluish, while mist tends to be bluish-grey. Whereas haze often is thought of as a phenomenon of dry air, mist formation is a phenomenon of humid air. However, haze particles may act as condensation nuclei for the subsequent formation of mist droplets; such forms of haze are known as "wet haze."
In the United States and elsewhere, the term "haze" in meteorological literature generally is used to denote visibility-reducing aerosols of the wet type. Such aerosols commonly arise from complex chemical reactions that occur as sulfur dioxide gases emitted during combustion are converted into small droplets of sulfuric acid. The reactions are enhanced in the presence of sunlight, high relative humidity, and stagnant air flow. A small component of wet haze aerosols appear to be derived from compounds released by trees, such as terpenes. For all these reasons, wet haze tends to be primarily a warm-season phenomenon. Large areas of haze covering many thousands of kilometers may be produced under favorable conditions each summer.
Air pollution
Haze often occurs when dust and smoke particles accumulate in relatively dry air. When weather conditions block the dispersal of smoke and other pollutants they concentrate and form a usually low-hanging shroud that impairs visibility and may become a respiratory health threat. Industrial pollution can result in dense haze, which is known as smog.
Since 1991, haze has been a particularly acute problem in Southeast Asia, caused largely by Indonesian forest fires set to clear land. In response to the 1997 Southeast Asian haze, the ASEAN countries agreed on a Regional Haze Action Plan (1997) and later signed the Agreement on Transboundary Haze Pollution (2002); however, the pollution is still a problem today. Under the agreement, the ASEAN secretariat hosts a co-ordination and support unit.
In the United States, the Interagency Monitoring of Protected Visual Environments (IMPROVE) program was developed as a collaborative effort between the US EPA and the National Park Service in order to establish the chemical composition of haze in National Parks and establish air pollution control measures in order to restore the visibility to pre-industrial levels. Additionally, the Clean Air Act requires that any current visibility problems be remedied, and future visibility problems be prevented, in 156 Class I Federal areas located throughout the United States. A full list of these areas is available on EPA's website.
Haze causes issues in the area of terrestrial photography, where the penetration of large amounts of dense atmosphere may be necessary to image distant subjects. This results in the visual effect of a loss of contrast in the subject, due to light scattering through the haze particles. For these reasons, sunrise and sunset colors appear subdued on hazy days, and stars may be obscured at night. In some cases, attenuation by haze is so great that, toward sunset, the sun disappears altogether before reaching the horizon. Haze can be defined as an aerial form of the Tyndall effect; therefore, unlike other atmospheric effects such as cloud and fog, haze is spectrally selective: shorter (blue) wavelengths are scattered more, and longer (red/infrared) wavelengths are scattered less. For this reason, many super-telephoto lenses incorporate yellow filters or coatings to enhance image contrast.
Infrared (IR) imaging may also be used to penetrate haze over long distances, with a combination of IR-pass optical filters (such as the Wratten 89B) and IR-sensitive detector.
See also
- WMO Manual on Codes
- ASEAN action hazeonline
- IMPROVE Visibility Program
- Federal Class 1 Areas
- Figure 1. "The setting sun dimmed by dense haze over State College, Pennsylvania on 16 September 1992". "Haze over the Central and Eastern United States". The National Weather Digest, March 1996. Accessed April 26, 2011.
- National Pollutant Inventory - Particulate matter fact sheet
- Haze over the central and eastern United States
- Chemical Composition of Haze in US National Parks: Views Visibility Database
when do two names cease to refer to the same string object?
steve at REMOVETHIScyber.com.au
Fri Mar 3 12:19:09 CET 2006
On Thu, 02 Mar 2006 20:45:10 -0500, John Salerno wrote:
> To test this out a wrote a little script as an exercise:
> for num in range(1, 10):
> x = 'c' * num
> y = 'c' * num
> if x is y:
> print 'x and y are the same object with', num, 'characters'
> print 'x and y are not the same object at', num, 'characters'
> But a few questions arise:
> 1. As it is above, I get the 'not the same object' message at 2
> characters. But doesn't Python only create one instance of small strings
> and use them in multiple references? Why would a two character string
> not pass the if test?
>>> "aaaaaa" is "aaaaaa"
>>> "aaaaaa" is "aaa" + "aaa"
Does this give you a hint as to what is happening?
Some more evidence:
>>> "aaaaa"[0:1] is "aaaaa"[0:1]
>>> "aaaaa"[0:2] is "aaaaa"[0:2]
> 2. If I say x = y = 'c' * num instead of the above, the if test always
> returns true. Does that mean that when you do a compound assignment like
> that, it's not just setting each variable to the same value, but setting
> them to each other?
Yes. Both x and y will be bound to the same object, not just two objects
with the same value. This is not an optimization for strings, it is a
design decision for all objects:
>>> x = y =
> Finally, I'd like to see how others might write a script to do this
filename = "string_optimization_tester.py"
s = "if '%s' is not '%s':\n raise ValueError('stopped at n=%d')\n"
f = file(filename, "w")
for n in range(1000):
    f.write(s % ("c"*n, "c"*n, n))
f.write("""if 'ccc' is not 'c'*3:
    print 'Expected failure failed correctly'
else:
    print 'Expected failure did not happen'
""")
f.close()
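A quick way to see both behaviours in one place (a sketch in modern Python 3, where intern() moved from the builtins into the sys module):

```python
import sys

# Build two equal strings at run time; int() keeps the compiler from
# constant-folding them into one shared constant.
n = int("6")
x = "c" * n
y = "c" * n
assert x == y          # same value ...
# ... but 'x is y' is implementation-dependent for run-time-built strings.

# sys.intern guarantees exactly one shared object per distinct value:
assert sys.intern(x) is sys.intern(y)
```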
What is ECOLOGICAL BACKLASHES?
Definition of ecological backlash – Our online dictionary has ecological backlash information from A Dictionary of Ecology dictionary. Encyclopedia.com: English, psychology and medical dictionaries
Best Answer: From several sites: ecological backlash The unexpected and detrimental consequences of an environmental modification (e.g. dam construction) which may outweigh ...
What is ECOLOGICAL BACKLASHES? Mr What will tell you the definition or meaning of What is ECOLOGICAL BACKLASHES
Ecological backlash involves the counter-responses of pest populations or other biotic factors in the environment that diminish the effectiveness of pest management tactics ...
Best Answer: Media reports has really exposed the "ecological backlash" India is facing today. The directionless, ineffective and rudderless agricultural and environmental ...
Some examples of ecological backlash: Pest control resistance- bugs become immune to treatments and chemicals Effects of an oil spill on the enviroment- loss of fish, dirty ...
ECOLOGICAL BACKLASHES Ppt Presentation - A PowerPoint presentation ... What is Ecological Backlashes??: What is Ecological Backlashes?? The Human Activities that result in the destruction of the Balance of Nature Ecological backlash involves the counter-responses of pest populations or other ...
Backlashes is always a great word to know. So is zedonk. Does it mean: the offspring of a zebra and a donkey. a fool or ... Causes of ecological... Nearby Words. backing. backing away. backing dog. backing down. backing for. backing light. backing off. backing out. backing out of.
Ecological Backlash - download or read online. ... Neil Aldrin Valeroso. II GLA / SSP B Ecological backlash and its management
Ecological Backlash - download or read online. ... Ecological backlash involves the counter-responses of pest populations or other biotic
What is ecological backlash? Instantly find the definition, meaning and translation of ecological backlash at TermWiki.com
Lecture 29. Revenge of the Insects: Ecological Backlash. Introduction; Ecological Backlash; counter-responses of pest populations or other biotic factors in the environment that diminish the effectiveness of management tactics
noun succession ( def 6 ) . Dictionary.com Unabridged Based on the Random House Dictionary, © Random House, Inc. 2013. Cite This Source | Link To ecological succession
What are example of ecological backlashes? ChaCha Answer: Ecological backlash involves the counter-responses of pest populations or o...
Ecological backlashes download on GoBookee.com free books and manuals search - Quarter: 1A Time Frame: Topic: Balance of Nature Performance ...
Essays on Ecological Backlashes for students to reference for free. Use our essays to help you with your writing 1 - 60.
Ecology is the study of organisms and their environments. The types of ecology are population ecology and community ecology. There is also the study of behavioral
www.tutorvista.com Updated: 2013-05-07 Ecological Balance and Sustainable Development | Tutorvista.com. The environment in which the man and other organisms live is called the biosphere.
Available in the National Library of Australia collection. Author: Thomson, J. M. (James Miln), 1921-; Format: Book; 15 p. 22 cm.
The picture of ecological backlashes inside this slide design can be used for sustainability presentations in PowerPoint. PPT PPS Size: 184.8 KiB | Downloads: 22,587 Download 544_ecological_ppt.zip. Related PowerPoint PPT Templates. powerpoint terms:
Free Essays on Example Of Ecological Backlashes for students. Use our papers to help you with yours 1 - 30.
What Is Ecological Backlash. What Is An Ecological Footprint. Weblink Page Faq What Is The Individual Income Tax Return. Frequently Asked Questions What Is The Reliacard Visa Debit Card. What IS An. Ecological Footprint. Grade Level: Elementary and Middle School. Subject Correlation: Science.
Ecological Backlashes - Page 2 | Ecological Backlashes - Page 3 | Ecological Backlashes - Page 4 | Ecological Backlashes - Page 5 | Ecological Backlashes - Page 6 | Ecological Backlashes - Page 7. Magic Clickers & Photographers Rafi's Photography
Ecological backlash? ecological backlash The unexpected and detrimental consequences of an environmental modification (e.g. dam construction) which may outweigh the gains anticipated from the modification scheme.
Ecological-Backlash - where is the picture of ecological backlashes? : An ecological backlash is the unexpected and detrimental consequences o...
Ecology (from Greek: οἶκος, "house"; -λογία, "study of") is the scientific study of the relationships that living organisms have with each other and with their abiotic environment. Topics of interest to ecologists include the diversity, distribution, amount (biomass), number ...
Best Answer: There are many causes of ecological backlash and all of these are attributed to the "acts of man" like: 1. Dam construction which alter the natural ecosystem of ...
Ecological backlash meaning? ChaCha Answer: Ecological backlash involves the counter-responses of pest populations or other biotic fa...
Ecological Backlashes Guimaras Oil Spill It was said that the captain has no capacity to manage the ship for it was overloaded with oil tankers.
There are many causes of ecological backlash and all of these are attributed to the "acts of man" like: 1. Dam construction which alter the natural ...
By: Mikeala Calamba ... Sign in with your Google Account (YouTube, Google+, Gmail, Orkut, Picasa, or Chrome) to add Kokoy Calamba 's video to your playlist.
Get information, facts, and pictures about Backlash at Encyclopedia.com. Make research projects and school reports about Backlash easy with credible articles from our FREE, online encyclopedia and dictionary.
What is example of ecological backlash? What is the example of back lashes of an ecosystem image ... [ Where the Philippine archipelago found in ...
B. Ecological backlashes II. Philippine Biodiversity III. Biodiversity Conservation Initiatives IV. Philippine Biodiversity Laws Learners will be able to: • predict changes in a biotic community after a period of time. ...
Definition of Backlashes with photos and pictures, translations, sample usage, and additional links for more information.
what is ecological backlash. ecological importance of the black rhino. tundra ecological succession. worksheet ecological successions of northern canada
The diversity of species and genes in ecological communities affects the functioning of these communities. These ecological effects of biodiversity in turn affect both climate change through enhanced greenhouse gases, aerosols and loss of land cover, and biological diversity, causing a rapid ...
What is "ecological succession"? "Ecological succession" is the observed process of change in the species structure of an ecological community over time.
Essays on Example Of Ecological Backlashes for students to reference for free. Use our essays to help you with your writing 1 - 60.
The ecological backlash - nature versus man by James Miln Thomson; 1 edition; First published in 1970; Subjects: Ecology
Free Essays on 3 Different Example Of Ecological Backlashes for students. Use our papers to help you with yours 1 - 30.
Science & Environmental Health Network - Precautionary Principle ... Jan 16, 2003 . On-the-ground examples of ecological backlash from technologies we .
More than a year ago I predicted on this blog that there was going to be a public backlash against the political climate change narrative. The year 2009 confirmed this prediction with several surveys indicating serious decreases in public understanding and awareness of the urgency to ...
Why Is Ecological Backlash Happen. What Is Partner Relationship Management Prm, And Why Is The Roi. Why Is The Study Of Ethics Important. Why Is High Duedate Performance So Difcult To Achievean. What is Partner Relationship Management (PRM), and Why is the ROI so High? WHITE PAPER. Free Download ...
Download: Ecological Backlashes Photo found on Marks Web of Books and Manuals
What is ecological backlashes? Read answer... What are the different backlashes? Read answer... When is Backlash 2009? Read answer... Help us answer these What is ecologiacal backlash? What is ecology backlash? What was the Mississippi Backlash?
If you want to know what is ecological succession, then go no further. This article will tell you all about it in detail and will also talk about the various stages of ecological succession. Read on...
im not sure if this is correct or not. I think ecological backlashes have something to do with the Eco system affect us if its harmed. 10 months ago
A Laboratory Manual for X-Ray Powder Diffraction
Illite is essentially a group name for non-expanding, clay-sized, dioctahedral, micaceous minerals. It is structurally similar to muscovite in that its basic unit is a layer composed of two inward-pointing silica tetragonal sheets with a central octahedral sheet. However, illite has on average slightly more Si, Mg, Fe, and water and slightly less tetrahedral Al and interlayer K than muscovite (Bailey, 1980). The weaker interlayer forces caused by fewer interlayer cations in illite also allow for more variability in the manner of stacking (Grim, 1962). Glauconite is the green iron-rich member of this group.
Illites, which are the dominant clay minerals in argillaceous rocks, form by the weathering of silicates (primarily feldspar), through the alteration of other clay minerals, and during the degradation of muscovite (Deer and others, 1975). Formation of illite is generally favored by alkaline conditions and by high concentrations of Al and K. Glauconite forms authigenically in marine environments and occurs primarily in pelletal form.
Members of the illite group are characterized by intense 10-angstrom 001 and a 3.3-angstrom 003 peaks that remain unaltered by ethylene glycol or glycerol solvation, potassium saturation, and heating to 550 degrees C (Fanning and others, 1989). Glauconite can be differentiated from illite by a 1.5- to 1.52-angstrom 060 peak (illite's 060 peak occurs at 1.50 angstroms), and by the presence of only a very weak 5-angstrom 002 peak.
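The diagnostic d-spacings above translate into diffractometer angles through Bragg's law, nλ = 2d sin θ. As a sketch (assuming first-order reflections and a Cu Kα source, λ ≈ 1.5406 Å, a common but here assumed instrument setting):

```python
import math

CU_K_ALPHA = 1.5406  # X-ray wavelength in angstroms (assumed Cu Ka source)

def two_theta(d, wavelength=CU_K_ALPHA, n=1):
    """Bragg's law: n*lambda = 2*d*sin(theta); returns 2-theta in degrees."""
    return 2 * math.degrees(math.asin(n * wavelength / (2 * d)))

# Illite's 001 (10-angstrom) and 003 (3.3-angstrom) peaks from the text:
print(round(two_theta(10.0), 2))  # ~8.84 degrees 2-theta
print(round(two_theta(3.3), 2))   # ~27.0 degrees 2-theta
```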
[Figure: X-ray powder diffraction patterns of oriented-aggregate mounts showing the effects of standard treatments.]
Selected Bibliography for Illite-Group Minerals
U.S. Department of the Interior, U.S. Geological Survey
The effect of phosphite on the sexual reproduction of some annual species of the jarrah (Eucalyptus marginata) forest of southwest Western Australia
Fairbanks, M.M., Hardy, G.E.St.J. and McComb, J.A. (2001) The effect of phosphite on the sexual reproduction of some annual species of the jarrah (Eucalyptus marginata) forest of southwest Western Australia. Sexual Plant Reproduction, 13 (6). pp. 315-321.
Phosphite is a cost-effective fungicide used to control the pathogen Phytophthora cinnamomi, which is damaging the diverse flora of the southwest of Western Australia. Three annual species of the southwest jarrah (Eucalyptus marginata) forest of Western Australia (Pterocheata paniculata, Podotheca gnaphalioides and Hyalosperma cotula) were studied to determine the effect of the fungicide phosphite on the species' reproduction. Phosphite at concentrations of 2.5, 5 and 10 g L⁻¹ reduced pollen fertility of Pt. paniculata when plants were sprayed at the vegetative stage. Pollen fertility of all three species was reduced when plants were sprayed at anthesis with 10 g L⁻¹ phosphite. Seed germination was reduced by phosphite in Pt. paniculata and H. cotula when plants were sprayed in the vegetative stage. Phosphite sprayed at anthesis at a concentration of 5 g L⁻¹ reduced seed germination of H. cotula. Phosphite at concentrations of 5 and 10 g L⁻¹ killed a proportion of plants from all three species and up to 90% of Po. gnaphalioides plants. The frequent application of phosphite, therefore, may reduce the abundance of annual plants in this ecosystem.
Publication Type: Journal Article
Murdoch Affiliation: School of Biological Sciences and Biotechnology
Copyright: (c) Springer Verlag
As the space industry continues to cut costs by using lightweight materials and alternative types of energy, it is opening up the possibility that you and I may one day have the opportunity to live in space. The idea of a colony on the moon or Mars might be made possible with new spacecraft technologies being developed today. One of the remaining barriers to affordable space travel or even placing spacecraft in orbit is still the high price of launching these spacecraft. At today's prices, it would cost $12,500 just to launch an object as light as an inflated basketball (1.25 pounds) into space. The heavier the spacecraft, the more rocket fuel is needed to get the vehicle off the ground.
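The quoted figures imply a flat per-pound launch rate, which makes the scaling with payload mass explicit. A quick sketch (the flat-rate assumption is mine for illustration; real pricing varies by vehicle and orbit):

```python
# $12,500 for a 1.25-pound payload implies $10,000 per pound (article's figures)
COST_PER_POUND_USD = 12_500 / 1.25

def launch_cost_usd(payload_lb: float) -> float:
    """Estimated launch cost assuming a flat per-pound rate."""
    return payload_lb * COST_PER_POUND_USD
```

This is why shaving even a few pounds of structural mass off a spacecraft, as inflatable designs do, translates directly into large launch savings.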
NASA and other space agencies are working on constructing a new breed of inflatable spacecraft made of lightweight materials. The amazing thing about these inflatable spacecraft is that they can be squeezed into small canisters only a fraction of their full size and then inflated once they arrive in space using a sophisticated deployment system that releases an inert gas to push out the walls of the inflatable material.
Space inflatable technology has been around since the 1960s, but has played a minor role in space exploration to this point. With the ability to cut costs, space inflatables could once again be used to build 1,000-foot antennas, space habitats or solar sails, which wouldn't be practical with conventional spacecraft materials. In this edition of How Stuff Will Work, we will take a look at two kinds of space inflatables being developed and how they may pave the way for interstellar travel and Martian colonies.
Harbor Seal Decline
INTRO: Harbor seals were once a common sight along Alaska's rocky coast. But since the 1970s, they've all but disappeared in parts of the state. Researchers at the Alaska SeaLife Center in Seward have launched a new study aimed at helping seals recover.
STORY: Harbor seals Atuun, Qilak, Susitna, and Miki were barely two months old when they were brought to the Alaska SeaLife Center in Seward last summer. Now more than six months old, they still turn up their noses when offered a meal of squid. Lori Polasek is a marine mammal biologist at the center.
POLASEK: "Squid to these seals is considered a vegetable...In fact one of them has only just started eating squid."
These seals don't know it, but they may hold the key to helping their species recover.
Harbor seals were once as common as the rain that falls steadily much of the year along coastal Alaska. Anne Hoover-Miller is a marine mammal biologist with the Alaska SeaLife Center.
HOOVER-MILLER: "They were probably the most commonly seen marine mammal along the coast. They were hauled out along the shoreline everywhere. People pretty much assumed there were lots of seals and that there always would be lots of seals."
So numerous, that for a time in the 1950s a bounty was paid on seals to lessen their impact on commercial salmon stocks. In 1972 the federal Marine Mammal Protection Act prohibited all except Alaska Natives from harvesting seals. Theoretically at least, seal numbers should have begun to climb.
But instead, Hoover-Miller says seal numbers plummeted. The first sign of trouble was seen around Kodiak Island.
HOOVER-MILLER: "What actually happened was the seal populations in the Kodiak Island area began declining quite rapidly through the mid to late 1970s, and the number through 1994 declined about 90 percent."
From there, the decline quickly spread eastward to Cook Inlet and Prince William Sound, and west through the Aleutian Islands. Only seal populations in Southeast Alaska remained stable.
Scientists say the decline followed a dramatic shift in the kinds of fish available for seals to eat. Fat, energy-giving fish like herring, capelin, and sand lance, were replaced by low-fat, nutritionally questionable fish like pollock. If you're a marine mammal, high-fat prey is important to everything from warding off the bitter cold water to producing healthy milk for newborn pups. The shift in food didn't hurt just seals. Most scientists believe it triggered dramatic declines in Steller sea lions, fur seals, and seabirds.
Back at the SeaLife Center, researchers are trying to understand just how a diet of low-fat fish might affect harbor seals' ability to grow and reproduce. That's where Atuun, Qilak, Susitna, and Miki come in. Lori Polasek says Susitna and Miki are fed a steady diet of herring that are about 20 percent lower in fat than herring fed to Atuun and Qilak.
POLASEK: "This is the first time that the SeaLife Center has done a diet study in which everything remains the same in the diet except seals are getting a low fat herring or a high fat herring. There's still capelin, there's still pollock in the diet. There will eventually be squid in the diet. So it's still a mixed diet. The only factor that will change is the fat."
Over time, Polasek says a low-fat diet may affect harbor seals in subtle, but important ways. Researchers will conduct a host of tests aimed at gauging the health of the seals. They'll take regular blood samples to look for the presence of stress hormones. And they'll monitor the seal's growth to see if it's slower or stunted in the seals that don't get enough fat in their diet.
POLASEK: "And so there are several things we're looking for. Will there be changes in the effects of hormones over the season? Will there be delayed maturation in seals on the low-fat diet? Will they have a lower capability of laying on fat in the winter? And what effect will that have on different health parameters in the seals? Of course, we don't want to drive these low fat seals into something where they would become unhealthy. We are definitely monitoring them very closely to make sure they remain healthy. But at what point are these seals stressed under conditions of low fat?"
What Polasek sees with her captive seals will be compared to what Hoover-Miller sees in the wild.
HOOVER-MILLER: "We're doing several things, taking blood samples and such to look at the health and condition of the seals that are foraging in the wild. But also the Department of Fish and Game is implanting transmitters that last 3-5 years, so that we are able to follow single animals over a long period of time to see, especially for females, are they reproducing annually? Are they successfully rearing pups? Are they staying at the same haul-out or are they shifting around with the season or between years? So we are trying to get longer-term information about particular animals."
Hoover-Miller says that whatever triggered the decline of harbor seals in the 1970s probably isn't what's keeping seal numbers low today.
HOOVER-MILLER: "We may be looking at different impacts precipitated by the decline. Availability of food may have precipitated the decline. The animals may have stabilized at a low number where the food is not impacting them quite so hard. But then the predator effect may be interfering with their ability to recover as quickly. So there are a lot of components that we need to put in."
For the moment, researchers will focus on the effects of diet on the health of harbor seals. From the study, Polasek and Hoover-Miller hope to develop a profile of what constitutes a healthy and unhealthy harbor seal. Knowing more about what it means to be a healthy seal will help scientists monitor the recovery of harbor seals across the state.
Thanks to the following individuals for help preparing this script:
Anne Hoover-Miller, Harbor Seal Program Manager
Arctic Science Journeys is a radio service highlighting science, culture, and the environment of the circumpolar north. Produced by the Alaska Sea Grant College Program and the University of Alaska Fairbanks. The shortcut to our ASJ Radio home page is www.asjradio.org.
Editor's note: Since 1994, Alaska harbor seal populations in parts of the state have begun to rebound. For example, on Kodiak Island, seal populations have increased five percent each year over the last decade, according to Kate Wynne, Marine Mammal Specialist with Alaska Sea Grant.
Do other planets have weather? If so, what kind, what planets? (Natalie, Somewhere, USA)
A: Yes, other planets endure weather so harsh that Earth's worst seems balmy. All planets in our solar system, with the exception of Mercury, have an atmosphere and, therefore, weather.
"I'm not sure it's fair to describe Mercury as having 'weather'. With virtually no atmosphere, the planet's temperature change is driven entirely by the (extremely slow 176-Earth-days from one sunrise to the next) rotation of the planet beneath the near Sun," says astronomer Robert Massey of the Royal Observatory Greenwich in London.
For the rest, let me mention a few weather extremes. Pluto, for instance, turns into a planet frostball every so often. A red-tinted frost probably covers Pluto — a methane-nitrogen-carbon-monoxide frost. Pluto moves in a greatly elongated orbit about the Sun in 248 years. During the 20 years she is closest to the Sun, temperatures rise, and ice turns to gas. Moreover, when Pluto orbits away from the Sun, and the gases freeze, her atmosphere may collapse, and a planet-wide frost ensues: a frostball Pluto.
Venus, on the other hand, has a surface literally hot enough to melt lead: 860° F (460° C). The closer Sun shines more intensely on Venus than on Earth, but thick clouds high in the atmosphere reflect much of the light. The surface converts the sunlight filtering through the clouds into thermal energy, which heats the surface, which then emits infrared radiation. The atmosphere absorbs this infrared radiation, transforms it again, radiates mostly in the infrared and heats the surface below, even more. Furthermore, the enormous amount of carbon dioxide in Venus' extraordinarily thick atmosphere generates more heating through this atmospheric heating process than on any other planet in our solar system. That is why Venus is so hot, say physicist Craig Bohren, author of Clouds in a Glass of Beer, and atmospheric scientist Peter Pilewskie of the University of Colorado.
Her atmosphere is so thick it exerts a surface pressure about 90 times Earth's. She's covered with sulphuric-acid clouds with tops that race across her skies three times faster than Earth's hurricane speed, while surface zephyrs waft at only a couple of miles an hour.
We've been watching what may be the solar system's longest lasting storm — Jupiter's Great Red Spot — on and off for 340 years, since Cassini first discovered it in 1665, after Hans Lippershey invented the telescope in 1608. The high-pressure storm gyrates (in the opposite direction from low-pressure Earth hurricanes) due to Coriolis effects (just as on Earth) making a complete rotation every 6 days (2.5 times faster than storms rotate on Earth). Click for animation of the swirling storm, courtesy of Wikipedia and the American Museum of Natural History.
Jupiter's extremely rapid rotation rate (a 10-hour day) helps drive the large storms, says Massey. For example, consider a point on our equator — Quito, Ecuador. A corresponding point on Jupiter's equator whips around 27 times faster than Quito does.
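The "27 times faster" comparison follows directly from equatorial circumference divided by rotation period. A quick check (the radii and day lengths below are rounded reference values I have supplied; they are not figures from the article):

```python
import math

def equatorial_speed_kmh(radius_km: float, rotation_period_h: float) -> float:
    """Speed of a point on a planet's equator: circumference / day length."""
    return 2 * math.pi * radius_km / rotation_period_h

jupiter = equatorial_speed_kmh(71_492, 9.93)  # roughly 45,000 km/h
earth = equatorial_speed_kmh(6_378, 23.93)    # roughly 1,700 km/h at Quito
ratio = jupiter / earth                       # comes out near 27
```

Despite Jupiter's far larger circumference, its 10-hour day means the equator must move vastly faster, which is the rotational energy that helps drive its giant storms.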
The windiest spot under the Sun may be Neptune. We've clocked blasts over 1500 mph (2400 km/h).
Scientists have been making a "big thing" about the analogies of Titan's weather (Titan is Saturn's biggest moon) to Earth's, says James F. Kasting, Distinguished Professor of Geosciences at Pennsylvania State University. It rains methane on Titan instead of water, but Titan otherwise has Earthlike weather processes (smog, for example). The Sun's ultraviolet light breaks up methane in Titan's atmosphere, which produces the orange haze: a smog worse than Los Angeles on its worst day.
If you and I could land on Titan, we would descend through a colorful nitrogen atmosphere denser than Earth's: a violet outer layer, next, a thin blue layer, a yellow band, and finally deepening shades of orange until we settled on her cold (-290° F, -180° C) surface — perhaps a sticky, cold sand made from ice grains. Scattered clouds would float above in the orange hazy distance.
Martian dust devils can tower to five miles (8 km) above its terrain, dwarfing our half-mile high tornados. Global-wide dust storms can last for several months. The major factor driving dust storms, says Massey, is the small dust-particle size. Even Mars' thin atmosphere can lift these tiny motes.
Total darkness shrouds the Martian winter pole, creating such cold that up to 25% of Mars' atmosphere condenses into thick slabs of dry ice. When summer comes, the dry ice sublimates, and generates vast hurricanes.
Unlike Earth, Mars' changing distance from the Sun affects its seasons, especially in the southern hemisphere. Mars is closest to the Sun in the southern summer and farthest away in the southern winter. Consequently the south has more extreme seasons than the north.
"The very fact that we can see planetary weather is a testament to the technological advances of the last four centuries. We've moved from seeing planets as bright dots to being able to see storms brewing on Jupiter, find out which gases surround Pluto and watch the Martian ice caps sublime. So much of this can even be seen using telescopes on or near the Earth — although, too often, they just whet our appetite for further space missions," says Massey.
April Holladay, science journalist for USATODAY.com, lives in Albuquerque, New Mexico. A few years ago Holladay retired early from computer engineering to canoe the flood-swollen Mackenzie, Canada's largest river. Now she writes a column about nature and science, which appears Fridays at USATODAY.com. To read April's past WonderQuest columns, please check out her site.
Science Fair Project Encyclopedia
A graphite bomb (also known as the "Blackout Bomb") is a non-lethal weapon used to disable electrical power systems. Graphite bombs work by spreading a cloud of extremely fine carbon fibre wires over electrical components, causing a short-circuit and a disruption of the electrical supply. The graphite bomb was used against Iraq in the Gulf War (1990 - 1991), knocking out 85% of the electrical supply. Similarly, the "BLU-114/B" graphite bomb was used by NATO against Serbia in May of 1999, disabling 70% of that country's power grid.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Day 3: Blast matrices and genome
The blast matrix perl script performs a comparison of multiple organisms. For every organism, it calculates how many proteins are homologous to all organisms in the comparison. Including 4 organisms in a single comparison will leave 4 x 4 = 16 cells in the matrix.
The numbers that appear in each square are as follows: the top number is the percent of proteins in the total set of gene families that are identical. The first of the numbers below is the number of gene families that are identical in the two, while the second number is the total number of gene families. I.e., the first is the intersection of the two sets of gene families, whereas the second is the union of the two sets. The diagonal cells of the matrix represent a special case, since each is a genome compared to itself. Naturally, when aligning a given gene with itself, you have a perfect match alignment, and these self-hits are therefore excluded from the diagonal. This leaves the diagonal as a measure of the number of paralogs (homology within genomes), whereas all other cells represent homologs (homology between genomes).
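Treating each genome as a set of gene-family identifiers, the off-diagonal cells reduce to an intersection-over-union calculation. A hedged sketch of that set arithmetic (the course's real Perl script works from BLAST hits; this only reproduces the bookkeeping described above):

```python
def blast_matrix(families: dict[str, set[str]]) -> dict[tuple[str, str], tuple[int, int, float]]:
    """Off-diagonal matrix cells: shared gene families over their union.

    `families` maps organism name -> set of gene-family identifiers.
    Returns (intersection, union, percent) for each ordered pair.
    The diagonal (within-genome paralogy) needs the underlying BLAST
    hits with self-matches removed, so it is not computed here.
    """
    cells = {}
    for a, fam_a in families.items():
        for b, fam_b in families.items():
            if a == b:
                continue  # diagonal handled separately in the real script
            shared = len(fam_a & fam_b)
            total = len(fam_a | fam_b)
            pct = 100.0 * shared / total if total else 0.0
            cells[(a, b)] = (shared, total, pct)
    return cells
```

With 4 organisms this yields the 12 off-diagonal cells of the 4 x 4 matrix, each carrying the two counts and the percentage described above.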
The input for the script is an XML file. We have provided a template for this file, and shall add only a section defining which genomes to compare.
Log in to the CBS computers
Find a program which will let you log into the CBS computers:
Computer name: login.cbs.dtu.dk
User name: studXXX
You will get your password from the teachers.
After that you need to log into the computer
where we will do the exercises, which is named sbiology.
# log into CBS
ssh -Y ibiology
setenv MAKEFILES /home/people/pfh/bin/Makefile
- Look at the file and answer the questions below:
- From this plot, can you identify genomes which share homology?
- Can you find genomes which have a high degree of paralogs (homology within the genome)?
- Can you identify the least related proteomes?
Many of the properties that we have shown you today can be presented in
a genome atlas, a circular map of the genome. You have seen several of
them in the lectures up till now.
We have prepared atlases for many genomes already. Go to the webpages with the prepared atlases. Select ONE of your genomes to examine.
Things to look at:
- Is your organism AT or GC rich? Does that
correspond to what you found earlier?
- Where is the replication origin in your
organism? (HINT: look at the distribution of Gs and As).
- Are there any regions in your genome that are
more AT or GC rich than the rest of the genome?
- Can you identify the leading and the lagging
strand of your genome?
- Which strand are the genes on? Is there any
tendencies for the genes to be either on the leading or the lagging
strand, or are they randomly distributed?
- Can you find any regions that might be highly expressed (HINT: very flexible regions)? Can you tell why this region could be highly expressed? What happens if this region also easily melts - would this help or hinder expression?
- Can you find any regions that can mutate
(HINT: AT rich regions that can melt easily).
- Can you find any regions that might be
protected against mutation (HINT: rigid regions that won't melt easily).
- Can you find any globally repeated sequences,
either direct or inverted, in this genome? How are these repeats
located in relation to the genes in the genome?
- Are there more local than global repeats, in this genome?
- How many rRNA genes does it have? Where are
they located (close or far away from the origin of replication,
randomly distributed or something else?).
- Do any of the rRNAs have tRNAs in them?
- Do the rRNA genes have any special features in the structural parameters? Are there repeats in this region, is the DNA especially flexible, or something else?
In the mid-1800s astronomers discovered from hundreds of sunspot sightings that, when they tabulated and graphed them, their numbers increased and decreased over time in a repeatable cycle. These extremes represent the amplitude of the cycle. We now call this the solar activity cycle or the sunspot cycle. In 1843 Heinrich Schwabe published his measurements of the sunspot cycles between 1826 and 1843. He had counted these spots every possible clear day, and found two peaks in 1828 and 1837, with minima in 1833 and 1843. Subsequent historical studies by astronomers such as Rudolph Wolf uncovered recognizable cycles since 1700. A curious absence of cyclic behavior, extending back to the dawn of telescopic observations ca. 1610, was noted by Gustav Spörer. This period is known as the Maunder Minimum, which coincides with an unusually cool period in European history called the Little Ice Age. Further attempts at developing a historical record for sunspot cycles have yielded suggestions for cycles between 1600 and 1650, making the Maunder Minimum a verified 70-year absence in sunspot activity. Astronomers today count the sunspot cycles beginning with Cycle 1, whose maximum occurred in 1760, the first year following the Maunder Minimum when a sunspot cycle had a distinct beginning and ending. We have just completed Cycle 23 (1996-2007) and will soon start Cycle 24 (2008-2019).
During the last 200 years, the time between years of maximum activity, which is called the period of the cycle, has been about 11 years, but sunspot cycles can be as short as 9 or as long as 15 years. During sunspot minimum conditions, such as the year 1996, astronomers counted fewer than 5 sunspots on the surface of the Sun at any one time. During sunspot maximum conditions, as many as 250 could be seen. On September 20, 2000 one very large sunspot group could be seen with the naked eye with the proper safety precautions. (You should never look directly at the Sun without proper shielding to avoid eye damage!).
Ancient Chinese astronomers also kept track of naked-eye sunspots 4000 years ago, and that's how we know that sunspots have been a common feature of the Sun for millennia. In fact, solar activity is mirrored in the rise and fall of Carbon-14 isotopes found in biological matter. As solar activity increases, the solar magnetic field becomes more distended, reducing the inflow of cosmic rays into the atmosphere. This reduces the production of atmospheric carbon-14, which is ultimately ingested by biological systems. By looking at carbon-14 in tree rings, the rise and fall of sunspot activity is detectable in the carbon-14 record, allowing the recovery of the solar activity record thousands of years into the past.
Scientists don't fully understand the connection between the sunspot cycle and weather conditions here on Earth, but there does seem to be something going on between them. Could it be that the sunspots block out light from the Sun and make the Earth cooler as the mini-Ice Age example might suggest? Curiously, if you were to measure how bright the Sun is during sunspot maximum when it has the most spots, it is actually slightly brighter, not dimmer! This is because the magnetic fields in the sunspots are so stiff that they prevent the gas from convecting and transporting energy from the lower layers to the surface. Energy that would have flowed out of the dark spots is actually re-directed around them like a broken car blocking traffic on a busy highway.
Predicting Sunspot Cycles
Can the details of the sunspot cycle be predicted? It seems so. In March, 2006 solar physicist Mausumi Dikpati and her colleagues at the National Center for Atmospheric Research in Boulder Colorado announced the results from their 'Predictive Flux-transport Dynamo Model '. The newly developed model simulated the strength of the past eight solar cycles with more than 98% accuracy. The forecasts are generated, in part, by tracking the subsurface movements of the sunspot remnants of the previous two solar cycles. The actual and predicted cycles are shown in the figure below.
It appears that the next sunspot cycle, Cycle 24, will be stronger than Cycle-23 now ending. This implies that the 21,000 solar flares and 13,000 coronal mass ejections we have just experienced during Cycle-23 may be substantially exceeded during the 2007-2019 time period. This will have even more serious implications for satellite systems, astronauts working in space, the global electric power grid, and even passengers flying in jet aircraft. All of these systems have known vulnerabilities to solar storms and space weather.
During the annual Space Weather Workshop held in Boulder, CO in May, 2008, the Solar Cycle 24 Prediction Panel released (June 27, 2008) an update to the prediction for the next solar cycle. In short, the update is that the panel has not yet made any changes to the prediction issued in April, 2007. (See Consensus Statement below.)
The panel expects solar minimum to occur in March, 2008. The panel expects the solar cycle to reach a peak sunspot number of 140 in October, 2011 or a peak of 90 in August, 2012.
CONSENSUS STATEMENT OF THE SOLAR CYCLE 24 PREDICTION PANEL
March 20, 2007
The Solar Cycle 24 Prediction Panel anticipates the solar minimum marking the onset of Cycle 24 will occur in March, 2008 (±6 months). The panel reached this conclusion due to the absence of expected signatures of minimum-like conditions on the Sun at the time of the panel meeting in March, 2007: there have been no high-latitude sunspots observed with the expected Cycle 24 polarity; the configuration of the large scale white-light corona has not yet relaxed to a simple dipole; the heliospheric current sheet has not yet flattened; and activity measures, such as cosmic ray flux, radio flux, and sunspot number, have not yet reached typical solar minimum values.
In light of the expected long interval until the onset of Cycle 24, the Prediction Panel has been unable to resolve a sufficient number of questions to reach a single, consensus prediction for the amplitude of the cycle. The deliberations of the panel supported two possible peak amplitudes for the smoothed International Sunspot Number (Ri): Ri = 140 ±20 and Ri = 90 ±10. Important questions to be resolved in the year following solar minimum will lead to a consensus decision by the panel.
The panel agrees solar maximum will occur near October, 2011 for the large cycle (Ri=140) case and August, 2012 for the small cycle (Ri=90) prediction.
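The "smoothed International Sunspot Number (Ri)" used in the consensus statement conventionally refers to a 13-month running mean with half weight on the two end months. Assuming that classical definition (the statement itself does not spell it out), the smoothing can be computed as:

```python
def smoothed_ssn(monthly: list[float], i: int) -> float:
    """13-month smoothed sunspot number centered on month i.

    Assumes the classical smoothing: a 13-month window whose first and
    last months carry half weight, with the weighted sum divided by 12.
    """
    if i < 6 or i > len(monthly) - 7:
        raise ValueError("need at least 6 months of data on each side of i")
    w = monthly[i - 6 : i + 7]
    return (w[0] / 2 + sum(w[1:12]) + w[12] / 2) / 12
```

Because each smoothed value needs six months of data on either side, a cycle's official maximum and minimum dates can only be confirmed about half a year after the fact, which is one reason panel predictions are revised as new monthly counts arrive.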
- Solar Cycle 24 Prediction Issued April 2007, Presented by the NOAA Space Weather Prediction Center (SWPC), National Oceanic and Atmospheric Administration (NOAA).
- Heinrich Schwabe, 1843, Astronomische Nachrichten, vol. 20., no. 495.
Related EoC Articles
- Solar Cycle Prediction (Updated 2009/04/02) - Solar Physics, Marshall Space Flight Center, NASA.
- Solar Cycle Progression - NOAA/Space Weather Prediction Center.
- Space Weather Prediction Center, National Weather Service, National Oceanic and Atmospheric Administration (NOAA).
- The Sunspot Cycle, 1610-1976 - Solar Results From Skylab, NASA.
- The Sunspot Cycle (Updated 2009/04/02) - Solar Physics, Marshall Space Flight Center, NASA.
- This image shows the variation of sunspot number over time for solar cycle 11 (which began around 1867) and cycle 23, which began around 1996. The graphical representation of the sunspot cycles shows that the two cycles have similar shape and size and a long decrease to minimum. However, the numbers of sunspots in cycle 11 increased much more quickly than our new cycle, cycle 24. Sunspot cycle 23 peaked in 2001 and produced some of the largest flares on record (the "record" approximately equates to the space age, when we began observing X rays from the Sun). The background image is from Hinode's X-ray Telescope (XRT). (Source: NASA-HINODE - Current Update.)
Ice Sheets Apparently Can Grow Quickly in Cold Periods
How fast can glaciers and ice sheets expand and shrink in response to rapidly changing climatic conditions? It's a question that scientists have been pondering with particular interest of late, with Greenland's Peterman Glacier calving large amounts of ice two years in succession, and much of the island's surface ice melting earlier this summer.
Because abrupt climate changes have occurred, across various spatial and temporal scales, at several previous points in the planet's history, scientists can look for prehistorical clues, to see what happened then and thus infer what might happen in a warming 21st century. A team of geologists has done just that, although it has looked for evidence not during previous warm spells, but by looking at two major cooling events in Earth's past.
One, called the Younger Dryas period, began about 13,000 years ago and lasted more than a millennium. A second, less prosaically dubbed the 8.2 kiloyear event (because it occurred approximately 8,200 years ago), was less intense and was far shorter in duration - no more than 150 or so years. In this week's journal Science, Nicolas Young of Columbia University and colleagues write that, by dating moraines - piles of rocks and debris that glaciers deposit while expanding - on Canada's Baffin Island, they found that glaciers had been significantly more expansive during those cold periods.
No surprise there, of course. What was interesting and seemingly counter-intuitive, however, was that those glaciers appeared to cover a larger area during the more recent, shorter-lived, and less intense cold spell than during the Younger Dryas period.
Read more at Discovery News.
100 Years of Achievement - 100 Years of Heroes
On December 17, 1903, two brothers made history by becoming the first humans to fly in a powered aircraft. On July 20, 1969, Neil Armstrong made history by becoming the first human to step on the Moon. On June 21, 2004, civilian astronaut Mike Melvill made history by becoming the first private citizen to pilot a privately built spacecraft into space.
A new era, a new space age has begun. No longer is space travel restricted to powerful world governments. Because of the work and ingenuity of companies around the world, civilian space travel has become a reality, and will soon be available to everyone.
SpaceShipOne was the first, but there are many others that are being developed. We have highlighted a few of these for you. Select from the links on the left to learn more about civilian space programs.
WHAT WOULD YOU LIKE TO LEARN ABOUT?
Select from the menu on the left.
They are referred to as intraterrestrials, organisms that live inside the Earth. Most live beneath the bottom of the oceans. Some live in the tens of meters of mud just beneath the seafloors; others, following fractures in rock, live hundreds of meters down. By some estimates, as much as one-third of the planet's biomass (the weight of all its living organisms) is "buried" beneath the ocean floor.
By comparison to the nutrient-rich coastal zones, many of the communities on the ocean bottom live in nutrient-poor regions. In nutrient-rich areas, oxygen is frequently found only in the upper 1-2 cm of mud; any deeper and it is consumed. But in some areas where the waters are nutrient-poor, oxygen penetrates up to 80 meters into the sediment. One thesis is that the microbes use oxygen slowly, so more is available to penetrate. Another possibility is that the microbes have a separate, unusual source of oxygen: natural radioactivity. Radioactive decay creates particles that can split H2O into hydrogen and oxygen (the process is known as radiolysis). In this admittedly "exotic" thesis, microbes consume these elements.
It is also important to bear in mind that water does not merely sit on top of the seafloor, but in fact moves through the undersea arena, cycling the equivalent of the ocean's entire volume through the crust every half-million years or so. Recent research has identified one of these flows. At Juan de Fuca (see Juan de Fuca Plate) researchers have found two volcanoes, 50 kilometers apart. Cracks connect the two. Water goes in at one volcano and comes out at the other. Most of the fractures in the ocean crust here run north to south, making that the probable direction in which microbes also move. The cracks serve as a sort of microbial superhighway, allowing the microbes to flow along easily, carried by water. Thus, when thinking of subseafloor microbial populations, it is not correct to think of them as necessarily staying in one area. Because of differences in nutrient flows from the surface and about the seafloor, because of cracks, and because of water flows, among other traits, the seafloor has a variety of different environments.
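The cycling rate mentioned above implies a substantial flux. As a rough back-of-envelope check (assuming a total ocean volume of about 1.34 billion cubic kilometers, a standard textbook figure not given in the original):

```python
# Rough estimate of water flux through the ocean crust, based on the claim
# that the ocean's entire volume cycles through every ~500,000 years.
OCEAN_VOLUME_KM3 = 1.34e9   # assumed total ocean volume, km^3
CYCLE_YEARS = 500_000       # "every half-million years or so"

flux_km3_per_year = OCEAN_VOLUME_KM3 / CYCLE_YEARS
print(f"Implied flux: {flux_km3_per_year:,.0f} km^3 of seawater per year")
```

That works out to a few thousand cubic kilometers of seawater passing through the crust each year under these assumptions.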
As one would anticipate with the presence of various environments, the deep-sea microbes also express diversity. Researchers have found not only broad classes of bacteria, but also archaea, fungi, viruses, and other types of organisms.
Studies, reports, and websites on the existence of life below the seafloor can be found at: http://www.nature.com/ismej/journal/v5/n4/abs/ismej2010157a.html; http://publications.iodp.org/preliminary_report/329/index.html; http://www.nature.com/nrmicro/journal/v9/n10/abs/nrmicro2647.html; Center for Dark Energy Biosphere Investigations; http://www.sciencemag.org/content/320/5879/1046.abstract; http://geology.gsapubs.org/content/early/2011/02/03/G31598.1.abstract; & http://www.ncbi.nlm.nih.gov/pubmed/18509444.
Read off the coordinates of the point symbolized by the red circle and insert them into the respective text fields. All coordinate values are integers or half-integers.
Click the "Check" button to see whether your values were correct. Click the "Next" button to try again with another point. You may also correct a wrong input and then click "Check" again. Repeat the exercise several times, until more than 90 percent correct answers are displayed.
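The checking logic behind such an exercise might look like the following sketch (hypothetical - the applet's actual implementation is not shown in the original; since all values are integers or half-integers, snapping guesses to the 0.5 grid gives an exact comparison):

```python
def snap(v: float) -> float:
    """Round a value to the nearest half-integer."""
    return round(v * 2) / 2

def check_answer(guess, point):
    """Compare a guessed (x, y) pair against the true point."""
    gx, gy = guess
    return (snap(gx), snap(gy)) == point

# Running tally of the success rate the exercise displays:
answers = [check_answer((2.0, -1.5), (2.0, -1.5)),
           check_answer((3.0, 0.5), (3.0, 0.5)),
           check_answer((1.5, 1.0), (1.0, 1.0))]
percent_correct = 100 * sum(answers) / len(answers)
print(f"{percent_correct:.0f}% correct")
```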
South African Journal of Science
Print version ISSN 0038-2353
JANION, Charlene et al. Springtail diversity in South Africa. S. Afr. j. sci. [online]. 2011, vol.107, n.11-12, pp. 01-07. ISSN 0038-2353.
Despite their significance in soil ecosystems and their use for investigations of soil ecosystem functioning and in bioindication elsewhere, springtails (Collembola) have not been well investigated in South Africa. Early recognition of their role in soil systems and sporadic systematic work has essentially characterised knowledge of the southern African fauna for some time. The situation is now changing as a consequence of systematic and ecological work on springtails. To date this research has focused mostly on the Cape Floristic Region and has revealed a much more diverse springtail fauna than previously known (136 identifiable species and an estimated 300 species for the Cape Floristic Region in total), including radiations in genera such as the isotomid Cryptopygus. Quantitative ecological work has shown that alpha diversity can be estimated readily and that the group may be useful for demonstrating land use impacts on soil biodiversity. Moreover, this ecological work has revealed that some disturbed sites, such as those dominated by Galenia africana, may be dominated by invasive springtail species. Investigation of the soil fauna involved in decomposition in Renosterveld and Fynbos has also revealed that biological decomposition has likely been underestimated in these vegetation types, and that the role of fire as the presumed predominant source of nutrient return to the soil may have to be re-examined. Ongoing research on the springtails will provide the information necessary for understanding and conserving soils: one of southern Africa's major natural assets.
Conservation: What's it to you?
My favorite definition of conservation is “the careful utilization of a natural resource in order to prevent depletion” (www.dictionary.com). This definition is critical to our Contract Stewardship program, which offers professionally trained and experienced staff to assist landowners with a conservation ethic in improving the wildlife value and ecological health of their property. To ensure we take on projects that are in line with our mission, we have created a Conservation Matrix to determine a potential project’s relationship to the mission of Teton Science Schools. This not only makes sure we are in legal compliance but that we are completing projects that have a definable conservation value. Factors we consider include the educational value and audience, project timeframe, ecotype in which the work will be completed (i.e., riparian), land ownership (public or private), project value to wildlife/habitat, and the financial implications of the project. This tool is a new process that is still in development, but will help us to “score” projects by their conservation value. For more information, contact firstname.lastname@example.org.
Researchers have recently proven that a quality of the unseen quantum world called entanglement can also happen at the macroscopic level (i.e. the size of stuff in the regular world).
Entanglement is really strange. What happens is that two particles can get mated (how that happens is a bit of a mystery), and then communicate with one another not just faster than the speed of light, but instantaneously. This is a really, really big mystery, because it violates a few cardinal laws of physics: nothing can travel faster than the speed of light, since at that point an object's mass should become infinite. And since light itself has a speed, being instantaneous is something that just doesn't happen in Einstein's relative universe.
There's some explanation of why such strangeness is possible, as per the math. But there's absolutely no answer for how. None.
It helps that most activity at the quantum level is really weird. It turns out that everything there doesn't really exist at all, per se, but rather hovers in a state of indeterminacy, only collapsing into something observable when somebody observes it. Observation is a core quality of what reality is made of, according to quantum physics. Let's not even get started on the multiple dimensions...
Only now, scientists have reported in Nature that they've entangled pairs of beryllium and magnesium ions, which exhibited the same qualities of entanglement. This means that the real world is as strange as the quantum one.
I'd say we've seen it already in politics and business for a while already.
The Democrats and Republicans are entangled, aren't they? One of the qualities of entanglement is that objects are the opposite of one another (one particle spins left, so the other always spins right). Coke and Pepsi were entangled for many years, so just as one brand would announce some starlet spokesdrinker, the other had a competing celeb ready to announce, too. And, just for kicks, isn’t our experience of Internet Search the macro version of that observation thing I mentioned earlier? I think we invent reality from the cloud with our choice of search terms.
Maybe the bigger observation from this entanglement research is that there are really strange forces going on, and that seeing them -- and even describing them with some detail and reliability -- doesn't mean that we understand what's going on.
(HLL) A programming language which provides some level of abstraction above assembly language. These normally use statements consisting of English-like keywords such as "FOR", "PRINT" or "GOTO", where each statement corresponds to several machine language instructions. It is much easier to program in a high-level language than in assembly language though the efficiency of execution depends on how good the compiler or interpreter is at optimising the program.
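The "one statement, several machine instructions" point can be seen directly in a language like Python, whose compiler targets a bytecode virtual machine rather than hardware (an illustrative analogy on my part, not something claimed in the entry):

```python
import dis

def greet():
    # One high-level statement...
    print("HELLO")

# ...expands into several lower-level VM instructions.
instructions = list(dis.Bytecode(greet))
print(len(instructions), "instructions for one statement:")
for ins in instructions:
    print(" ", ins.opname)
```

The exact opcodes vary between Python versions, but a single `print` call always compiles to a handful of load, call, and return instructions.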
Rarely, the variants "VHLL" and "MLL" are found.
See also languages of choice, generation.
Data Encryption Standard: Part 1
August 31, 2010
The Data Encryption Standard has been one of the most successful ciphers in history, and is still in use today, especially in its Triple DES variant. The Data Encryption Standard is officially described by FIPS 46-3, though if you are not fond of reading algorithm descriptions written by government lawyers there are many other descriptions available on the internet.
DES is a block cipher, operating on 64 bits at a time. Here is an example:
PT P r o g P r a x
HEX 50 72 6F 67 50 72 61 78
KEY 01 23 45 67 89 AB CD EF
CT CC 99 EA 46 B1 6E 28 90
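The plaintext and key in the example above are each exactly 8 ASCII bytes, i.e. one 64-bit block. A quick check of the hex encoding (producing the ciphertext, of course, is the exercise itself and requires a full DES implementation):

```python
# Verify the hex encoding of the example's 64-bit plaintext block and key.
plaintext = "ProgPrax"
pt_bytes = plaintext.encode("ascii")
assert len(pt_bytes) * 8 == 64          # one 64-bit DES block

pt_hex = pt_bytes.hex(" ").upper()
print(pt_hex)                            # 50 72 6F 67 50 72 61 78

key = bytes.fromhex("0123456789ABCDEF")
assert len(key) == 8                     # 64-bit key (56 effective bits + parity)
```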
There is more than one way to encrypt a message longer than 64 bits; we will examine them in a later exercise.
Your task is to write the code to encipher and decipher a single 64-bit block using the Data Encryption Standard. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
This programming language may be used to instruct a computer to perform a task.
If you know BCPL, please write code for some of the tasks not implemented in BCPL.
BCPL is a typeless ancestor of C.
BCPL was first implemented at MIT by Martin Richards early in 1967. It was strongly influenced by CPL which was a general purpose language developed jointly at Cambridge and London Universities between 1962 and 1966.
BCPL is a very simple typeless language in which all values are of the same size, typically a 32-bit word. It has a compiler freely available via http://www.cl.cam.ac.uk/users/mr10. This web page also contains several links to other items, including Cintpos, an interpretive implementation of the Tripos Portable Operating System, and a manual covering both BCPL and Cintpos.
BCPL and Cintpos are still undergoing slow development.
Dig the well before you are thirsty.
Test-First Development Abstract
Test-first is a cornerstone of XP and should be a mainstay of all Agile methods, including Scrum and Kanban. Originally called test-first programming, the practice has grown: larger-scale development requires more than just the programming of tests first. The key to test-first development is understanding the intention of what you are writing before you write it. There are two main levels of test-first:
- Acceptance Test-Driven Development (ATDD)
- Test-Driven Development (TDD)
The method of doing these tests falls under the umbrella of what we call Sustainable Test-Driven Development. Test-first development is based on several principles. Two of the most important are, first, that testability (how easily code can be tested) is related to code quality; testability is one of the best software trim tabs to attend to. Second, test-first follows one of the mantras of good design mandated by design patterns: design to interfaces. This topic is also discussed in .
Acceptance Test-Driven Development (ATDD) is mostly about having the people playing the product owner (customer proxy), developer and tester roles discussing the requirements before writing code. The intention is to create a series of acceptance tests that, when passed successfully, will demonstrate the working of the system. ATDD is more about the clarity that occurs with this conversation. However, the resulting test specifications can provide the basis for automated regression testing – a very useful practice for quick deliveries. These tests can be in the form of tables or in the form of “in this situation, when this event occurs, this result occurs.” Be clear that the result of ATDD is defined and implemented acceptance tests for the system – not mere specifications that describe scenarios that need to be handled. See for more.
Test-Driven Development, on the other hand, is a developer-oriented activity designed to assist the developers in writing the code by strict analysis of the requirements and the establishment of functional boundaries, work-flows, significant values, and initial states. TDD tests are written in the developer’s language and are not designed to be read by the customers. These tests can use the public interfaces of the system, but are also used to test internal design elements. We often see the developers take the tests written through the ATDD process and implement them with a unit testing framework.
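A minimal red/green sketch of the TDD rhythm described above, using Python's unittest framework (illustrative only; the books cited below give full treatments):

```python
import unittest

# Step 1: write the tests first. They pin down a functional boundary
# (empty input) and a significant value before any production code exists.
class TestWordCount(unittest.TestCase):
    def test_empty_string_has_zero_words(self):
        self.assertEqual(word_count(""), 0)

    def test_counts_whitespace_separated_words(self):
        self.assertEqual(word_count("design to interfaces"), 3)

# Step 2: write just enough production code to make the tests pass.
def word_count(text: str) -> int:
    return len(text.split())

# Step 3: run the tests and confirm they go green.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWordCount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The next step in the cycle, not shown, would be to refactor with the tests as a safety net.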
Leffingwell, Dean. Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise. Addison-Wesley, 2011.
Shalloway, A., Bain, S., Pugh, K., & Kolsky, A. Essential Skills for the Agile Developer: A Guide to Better Programming and Design. Addison-Wesley, 2011.
Pugh, Ken. Lean-Agile Acceptance Test-Driven Development: Better Software Through Collaboration. Addison-Wesley, 2011.
Last updated 15 January 2013
© 2010-2013 Leffingwell, LLC.
NOAA Teacher at Sea
Aboard R/V Savannah
July 7 – July 18, 2012
Mission: SEFIS Reef Fish Survey
Geographical Location: Atlantic Ocean, off the coasts of Georgia and Florida
Date: July 9, 2012
Latitude: 30 ° 54.55’ N
Longitude: 80 ° 37.36’ W
Air Temperature: 28.5°C (approx. 84°F)
Wind Speed: 6 knots
Wind Direction: from SW
Surface Water Temperature: 28.16 °C (approx. 83°F)
Weather conditions: Sunny and fair
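The position in the log header above is given in degrees and decimal minutes; converting it to the decimal degrees used by most mapping software is a one-liner per coordinate (a small helper I am adding for illustration; the coordinates come from the log):

```python
def dm_to_decimal(degrees: int, minutes: float, hemisphere: str) -> float:
    """Convert degrees + decimal minutes to signed decimal degrees."""
    sign = -1 if hemisphere in ("S", "W") else 1
    return sign * (degrees + minutes / 60)

lat = dm_to_decimal(30, 54.55, "N")   # 30 deg 54.55' N
lon = dm_to_decimal(80, 37.36, "W")   # 80 deg 37.36' W
print(round(lat, 4), round(lon, 4))
```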
Science and Technology Log
Purpose of the research cruise and background information
The Research Vessel (R/V) Savannah is currently sampling several species of fish that live in bottom, or benthic, habitats off the coasts of Georgia and Florida.
These important reef habitats are a series of rocky areas that are referred to as hard bottom or “live” bottom areas by marine scientists. The reef area includes ledges or cliff-like formations that occur near the continental shelf of the southeast coast. They are called ‘reefs’ because of their topography – not because they are formed by large coral colonies, as in warmer waters. These zones can be envisioned as strings of rocky undersea islands that lie between softer areas of silt and sand. They are highly productive areas that are rich in marine organism diversity. Several species of snapper, grouper, sea bass, porgy, as well as moray eels, and other fish inhabit this hard benthic habitat.
It is also home to many invertebrate species of coral, bryozoans, echinoderms, arthropods and mollusks.
The rock material, or substrate of the sea bottom, is thought to be limestone — similar to that found in most of Florida. There are places where ancient rivers once flowed to a more distant ocean shoreline than now. Scientists think that these are remnants of old coastlines that are now submerged beneath the Atlantic Ocean. Researchers still have much to discover about this little known ocean region that lies so close to where so many people live and work.
The biological research of this voyage focuses primarily on two kinds of popular fish – snappers and groupers. These are generic terms for a number of species that are sought by commercial and sports fishing interests. The two varieties of fish are so popular with consumers who purchase them in supermarkets, fish markets and restaurants, that their populations may be in decline.
At this time, all red snapper fishing is banned in the southeast Atlantic fishery because the fish populations, also known as stocks, are so low.
How the fish are collected for study
The fish are caught in wire chevron traps. Six baited traps are dropped, one by one from the stern of the R/V Savannah. The traps are laid in water depths ranging from 40 to 250 feet in designated reef areas. Each trap is equipped with a high definition underwater video camera to monitor and record the comings and goings of fish around and within the traps, as well as a second camera that records the adjacent habitat.
I will provide the details of the fish trapping and data capture methods in a future blog.
Who is doing the research?
When not at sea, the R/V Savannah is docked at the Skidaway Institute of Oceanography (SKIO) on Skidaway Island, south of Savannah, Georgia. The institute is part of the University of Georgia. The SKIO complex is also the headquarters of the Gray’s Reef National Marine Sanctuary. The facility there has a small aquarium and the regional NOAA office.
The fisheries research being done on this cruise is a cooperative effort between federal and state agencies. The reef fish survey is one of several that are done annually as part of SEFIS, the Southeast Fisheries Independent Survey. The people who work to conduct this survey are located in Beaufort, North Carolina. SEFIS is part of NOAA.
The other members of the research team are from MARMAP, the Marine Resources Monitoring, Assessment and Prediction program, which is part of the South Carolina Department of Natural Resources. This team is from Charleston, South Carolina.
NOAA also allows “civilians” like me, one of the Teachers at Sea, as well as university undergraduate and graduate students, to actively participate in this research.
Water: Climate Change and Water
Water Impacts of Climate Change
Climate change has already altered, and will continue to alter, the water cycle. Impacts include warmer air and water, changes in the amounts and distribution of rainfall and snowfall, more intense rainfall and storms, and sea level rise. This page provides information about the water impacts of climate change.
- The Water Cycle
- Impacts on Water Quantity
- Impacts on Water Quality
- Impacts on Ocean Acidification
- Impacts of Changes in Water Resources on Other Sectors
- Learn More
The Water Cycle
The water cycle is a delicate balance among precipitation, evaporation, and all the steps in between. Warmer temperatures increase the rate of evaporation of water into the atmosphere, in effect increasing the atmosphere's capacity to "hold" water. Increased evaporation may dry out some areas, causing the moisture to fall as excess precipitation on other areas.
Changes in the amount of rain falling during storms provide evidence that the water cycle is already being affected. Over the past 50 years, the amount of rain falling during the most intense 1 percent of storms increased by almost 20 percent. Warmer winter temperatures cause more precipitation to fall as rain rather than snow. Furthermore, rising temperatures cause snow to begin melting earlier in the year. This alters the timing of streamflow in rivers whose sources are in mountainous areas.
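The claim that warmer air "holds" more water can be quantified with the saturation vapor pressure curve. A common approximation is the Tetens formula (an illustration under standard assumptions, not a calculation from the original page):

```python
import math

def saturation_vapor_pressure_kpa(temp_c: float) -> float:
    """Tetens approximation for saturation vapor pressure over water (kPa)."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

# Each 1 degree C of warming raises the atmosphere's water-holding capacity
# by roughly 6-7 percent near typical surface temperatures.
ratio = saturation_vapor_pressure_kpa(25.0) / saturation_vapor_pressure_kpa(24.0)
print(f"Capacity increase per degree C near 25 C: {100 * (ratio - 1):.1f}%")
```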
Impacts on Water Quantity
Water resources are important to both society and ecosystems. We depend on a reliable, clean supply of drinking water to sustain our health. We also need water for agriculture, energy production, navigation, recreation, and manufacturing.
How does climate change affect water resources?
"Water is essential to life and is central to society's welfare and to sustainable economic growth. Plants, animals, natural and managed ecosystems, and human settlements are sensitive to variations in the storage, fluxes, and quality of water at the land surface – notably storage in soil moisture and groundwater, snow, and surface water in lakes, wetlands, and reservoirs, and precipitation, runoff, and evaporative fluxes to and from the land surface, respectively. These, in turn, are sensitive to climate change."
Source: U.S. Climate Change Science Program, 2008
Many of these uses put pressure on water resources, stresses that are likely to be exacerbated by climate change. In many areas, climate change is likely to increase water demand while shrinking water supplies. This shifting balance will challenge water managers to simultaneously meet the needs of growing communities, sensitive ecosystems, farmers, ranchers, energy producers, and manufacturers.
Many areas of the United States, especially the West, currently face water supply issues. The amount of water available in these areas is already limited, and demand will continue to rise as population grows. The Southeast and West (especially the Southwest) have experienced less rain over the past 50 years, as well as increases in the severity and length of droughts.
In the western part of the United States, future projections for less total annual rainfall, less snowpack in the mountains, and earlier snowmelt mean that less water likely will be available during the summer months, when demand is highest. This will make it more difficult for water managers to satisfy water demands throughout the course of the year.
Freshwater resources along the coasts face risks from sea level rise. As the sea rises, saltwater moves into freshwater areas. This may cause public utilities to find potable water from other sources, including an increase in the need for desalination (or removal of salt from the water) for some coastal freshwater aquifers used as drinking water supplies.
Planners across many sectors will confront the challenge of a changing water supply. They likely will adopt a variety of adaptation practices designed to better conserve our water supplies and improve water recycling, and will develop alternative strategies for water management.
Impacts on Water Quality
In some areas, increases in runoff, flooding, or sea level rise are a concern. These effects can reduce the quality of water and can damage the infrastructure that we use to treat, transport and deliver water.
Warmer air temperatures can raise stream and lake temperatures, which can harm aquatic organisms that live in cold-water habitats, such as trout and salmon. Additionally, warmer water can increase the range of non-native fish species, permitting them to move into previously cold-water streams. The population of native fish species often decreases as non-native fish prey on and out-compete them for food.
Water quality could also suffer in areas experiencing increases in rainfall. For example, increases in heavy precipitation events could cause problems for the water infrastructure, as sewer systems and water treatment plants are overwhelmed by the increased volumes of water. Heavy downpours can increase the amount of runoff into rivers and lakes, washing sediment, nutrients, pollutants, trash, animal waste, and other materials into water supplies and making them unusable, unsafe, or in need of increased water treatment. In addition, as more freshwater is removed from rivers for human use in coastal areas, saltwater may move farther upstream. Drought can also cause coastal water resources to become more saline as freshwater supplies from rivers are reduced. Water infrastructure in coastal cities, including sewer systems and wastewater treatment facilities, faces risks from rising sea levels and the damaging impacts of storm surges.
Impacts on Ocean Acidification
Ocean acidification refers to the decrease in the pH of the oceans caused by the uptake of carbon dioxide (CO2) from the atmosphere. Oceans have been absorbing about one-third of the anthropogenic CO2 emitted into the atmosphere since pre-industrial times. As more CO2 dissolves in the ocean, it reduces ocean pH, which changes the chemistry of the water. These changes present potential risks across a broad spectrum of marine ecosystems.
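Because pH is logarithmic, even a small pH drop means a large change in hydrogen-ion concentration. For example, the widely cited decline from a pre-industrial surface-ocean pH of about 8.2 to roughly 8.1 today (figures I am assuming; they are not stated in the original) corresponds to:

```python
# pH = -log10([H+]), so the ratio of hydrogen-ion concentrations between
# two pH values is 10 ** (pH_before - pH_after).
ph_preindustrial = 8.2
ph_today = 8.1

ratio = 10 ** (ph_preindustrial - ph_today)
print(f"[H+] increase: {100 * (ratio - 1):.0f}%")   # about a 26% increase
```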
Impacts of Changes in Water Resources on Other Sectors
The impacts of climate change on water availability and water quality will affect many sectors, including energy production, infrastructure, human health, agriculture, and ecosystems.
Some regions of the United States, particularly the Northwest, use water to produce energy through hydropower. If climate change results in lower streamflows or changes in the timing of streamflows, it will reduce the amount of hydroelectricity that can be produced. Lower water flows would also reduce the amount of water available to cool fossil-fuel and nuclear power plants.
Tourism and recreation also will be affected by climate change impacts on water supply and quality. The quality of lakes, streams, coastal beaches, and other water bodies that are used for swimming, fishing, and other recreational activities can be affected by changes in precipitation, increases in temperature, and sea level rise. In addition, winter sport activities that depend on the production of snow and ice could be limited in the future as temperatures increase.
Agriculture and livestock depend on water. Heavy rainfall and flooding can damage crops, increase soil erosion, and delay planting. Additionally, areas that experience more frequent droughts will have less water available for crops and livestock.
To learn more about how climate change will affect water resources, visit:
1. U.S. Global Change Research Program, 2009. Global Climate Change Impacts in the United States (PDF, 12 pp, 1.45 MB)
2. U.S. Climate Change Science Program, 2008. The Effects of Climate Change on Agriculture, Land Resources, Water Resources, and Biodiversity in the United States (PDF, 30 pp, 2.8 MB)
This important, and fairly easy to perform, Star Count project is designed as part of an international scientific study to investigate the visual quality of the nighttime sky and to help assess the national and global extent of atmospheric light pollution. It will also help to evaluate the amount of energy wasted through poor or inappropriate lighting practices.
Teachers, students, youth organizations (e.g. Cubs, Scouts, Guides, etc.), amateur astronomers, science and environmental organizations, and dedicated interested individuals are all invited to participate.
I'll first explain what this is.
The common dialog control is a component that allows us to access inbuilt dialog libraries.
If you don't know what these are let me explain it to you with an example.
Open Notepad, go to File and click Save As... A window will pop up with a couple of components (command buttons, combo boxes, etc.). You will find other programs having the same kind of window with these components. If you have TextPad, an editor that comes in handy for programmers, you will see the same.
The reason to use common dialog controls is that you won't need to spend time making your own custom dialogs; you can just use the inbuilt ones. But it's up to you.
Let's get started!
We will now build a small application that changes the colour of a picture box.
Open a new project and select Standard EXE. Right-click on the toolbox, go to Components, then find and select Microsoft Common Dialog Control (6.0 in my case). Add a picture box, name it "picbox" and also change AutoRedraw to True. Then add a command button and name it "cmdsetcol". Finally, add a common dialog control and name it "cmdlg".
Now we'll go to the coding.
Use the following code for the command button:
cmdlg.ShowColor
picbox.BackColor = cmdlg.Color
Now, by pressing the button, an inbuilt colour-selection dialog will pop up. You might have come across it in several painting applications, and in other places where colour changing is involved. After selecting a colour and returning to our form, the picture box's colour will automatically change to the colour we selected, thanks to the second line of code.
It's very easy!
Let us add a text box and another two command buttons, and add the following code to them:
Private Sub Command1_Click()
    cmdlg.ShowColor
    Text1.ForeColor = cmdlg.Color
End Sub

Private Sub Command2_Click()
    cmdlg.ShowColor
    Text1.BackColor = cmdlg.Color
End Sub
Now we can change the background and character colour of the text box.
I hope you understood it all, and I wish you happy learning and coding!
What is this strange ring that has been developing on the Sun?

Sunspot 1112, located in the southeast quadrant, has been the source of a giant filament that is currently stretching 400,000 km across the surface of the Sun.

However, today there appears to be development of an enormous circular ring which looks to be linking with the huge magnetic filament of sunspot 1112. Most of today's various wavelength images of the Sun all show this feature over at the SDO (Solar Dynamics Observatory) - NASA website.

SpaceWeather.com today reports: "A vast filament of magnetism is cutting across the Sun's southern hemisphere today. A bright 'hot spot' just north of the filament's midpoint is UV radiation from sunspot 1112. The proximity is no coincidence; the filament appears to be rooted in the sunspot below. If the sunspot flares, it could cause the entire structure to erupt. This active region merits watching."

What concerns me is that if indeed this is a huge magnetic filament nearly encircling the entire Sun, it is now directly facing the Earth. If sunspot 1112 does erupt, could the entire filament explode into a massive CME?

This particular phenomenon will be all over in a few days as it rotates around the Sun, but it serves to remind us that there are more and more events happening on the Sun as we transit into the next solar cycle maximum (peaking ~2012 into 2013).

There have been two updates since this initial report.
system call operates in exactly the same way as
except for the differences described in this manual page.
If the pathname given in
is relative, then it is interpreted relative to the directory
referred to by the file descriptor
(rather than relative to the current working directory of
the calling process, as is done by
for a relative pathname).
is relative and
is the special value
is interpreted relative to the current working
directory of the calling process (like
is absolute, then
returns a new file descriptor.
On error, -1 is returned and
is set to indicate the error.
The same errors that occur for
can also occur for
The following additional errors can occur for
is not a valid file descriptor.
is relative and
is a file descriptor referring to a file other than a directory.
was added to Linux in kernel 2.6.16.
A similar system call exists on Solaris.
and other similar system calls suffixed "at" are supported
for two reasons.
allows an application to avoid race conditions that could
occur when using
to open files in directories other than the current working directory.
These race conditions result from the fact that some component
of the directory prefix given to
could be changed in parallel with the call to
Such races can be avoided by
opening a file descriptor for the target directory,
and then specifying that file descriptor as the
allows the implementation of a per-thread "current working
directory", via file descriptor(s) maintained by the application.
(This functionality can also be obtained by tricks based
on the use of
but less efficiently.) | <urn:uuid:175d0237-69a5-4d0a-a48f-474385fec56e> | 3.234375 | 345 | Documentation | Software Dev. | 36.058894 |
MacMahon's Coloured Cubes
What are Mac Mahon's Coloured Cubes?
MacMahon's cubes are the cubes which develop if you give the six sides
six different colours in all possible combinations.
P.A.MacMahon was an English mathematician and major. He lived from 1854
||The colours are not determined. You can choose any colour. I chose
red(1), light blue(2), dark blue(3), dark green(4), light green(5), and
A cube is drawn on the left as a net and in perspective with three
turned square sides.
There are 30 Mac Mahon
The following picture illustrates these facts. The numbers below the cubes
mean the number of the turns.
||If you give the squares of a cube the numbers 1, 2, 3, 4, 5, 6 and
form all permutations of the six numbers, you get 1*2*3*4*5*6=6!=720 cubes.
Many among the cubes are the same. They can be transferred by turns
around one of the 13 axles into one another. There are 24 turns and thus
only 720:24 = 30 different cubes.
Making of the Cubes top
You also get the 30 cubes by systematic colouring.
If you want to play with the coloured cubes, you
must build them yourselves.
> All cubes get the colour pink on the reverse side.
> The front sides get one of the six colours in each line.
> The sides underneath get the third suitable colour in each line .
> The remaining three sides get all permutations of the remaining three
colours in a line.
The names like Ba or Fa come from J.H.Conway (see below).
||You can write the numbers 1 to 30 (in place of the colours) on round
self-adhesive labels and stick them on the cubes. Each cube should get
a name like Ab in order to keep track of things.
It is more beautiful, of course, if you use larger cubes and give them
the six colours.
Figures with Cubes top
The cube on the left is extended twice by a 1x2x2-slice
to the left. 3x2x2 and 4x2x2-solids develop.
Playing with the cubes is looking for right rectangular
solids or big cubes with one-coloured sides.
The following four figures are relatively easy
The small cube on the left is used for forming
the corners of the large cube on the right. You can easily find the 19
||The L-shaped figure on the left is built by symmetrically
coloured cubes (see below).
The speciality is that inside the same colours
touch as additional condition. This is the so-called domino condition (Gardner).
Mac Mahon's Problem top
The main problem is Mac Mahon's problem.
||You select a cube from the 30 cubes, e.g. the
A description of the way to the solution for the cube Ab follows.
||29 cubes remain. You select eight among them
to build a 2x2x2 cube with the same colours as the small cube. The same
colours must touch inside, too, to make the puzzle more difficult.
You can't find the solution by accident. You must proceed systematically.
First you look for the four cubes in the lower layer. Lay all cubes
with dark blue underneath.
The cubes Bc, Ca, Df, Ed, and Fe are possible for the cube at the
bottom, on the left, in front. Take out Df, Ed, and Fe, because inside
and outside would be the same colour. Bc and Ca are left. (Lay Bc
aside. This would lead to a second solution.)
The cubes Bd, Cf, Da, Ec, and Fe are possible for the cube at the
bottom, on the right, in front. Bd is left.
The cubes Bd, Cf, Da, Ec, and Fe are possible for the cube at the
bottom, on the left, in the back. Bf is left.
The cube Ea is only possible for the cube at the bottom, on
the right, in the back.
Turn the remaining cubes so that dark green is at the top.
The cubes Be and Cd are possible for the cube at the top, on the
left, in the front. Be is left.
The cube Fa is only possible for the cube at the top, on the
right, in the front.
The cube Da is only possible for the cube at the top, on the
left, in the back.
The cube Bc is only possible for the cube at the top, on the
right, in the back.
Hence the solution is:
There is the second solution on the far right, which I mentioned while
looking for the solution. You use the same cubes. They lay symmetrically
to the centre and are turned.
J.H.Conway takes the credit for a complete
solution of the problem.
The speciality of the table is that it contains all
solutions of MacMahon's problem for all cubes.
||He arranged the 30 cubes in a 6x6-field, whereby
he kept the main diagonal free.
The columns are called a, b, c, d, e, f,
the lines A, B, C, D, E, F.
Thus each cube gets a pair of a large and a small
letter as a name depending on its position.
If you want to build i.e. the cube Ab as a 2x2x2-cube,
you can easily find the eight cubes:
You go from Ab to the mirror cube Ba and choose
the remaining eight cubes in the line and the column with cube Ba.
Still another feature:
||Cube and mirror cube have mirror images, for
instance Ab and Ba.
||If you choose the five cubes of a line or of
a column and build a 1x1x5-bar and make sure that any colour lies down,
the remaining five colours lie above.
The Mayblox Problem top
Eight cubes are given. Form a 2x2x2 cube with
six colours at every side and the same colours touching inside. You have
no small cube as a model. This is an additional difficulty.
Kowalewski's Problem top
The German mathematician G. Kowalewski varied
||It required the same colours on the right and
on the left and in front and on the reverse side. A third and a fourth
colour are at the top and underneath. Inside equal colours should meet.
This problem has only two solutions. One solution is shown. You need
eight more cubes for the second solution.
Instant Insanity top
"Parker Brothers" introduced the "color matching
box" with the name "Instant Insanity" in the year 1967. The puzzle was
called "Vier verrückt" or "Katzenjammer-Puzzle" in Germany.
Twelve millions were sold world wide (!?).
||This puzzle has four coloured cubes. In contrast
to Mac Mahon's cubes only four different colours are used.
The problem is to arrange the cubes to a 1x1x4-bar,
so that four different colours appear on all four sides.
Here is one solution.
||It is also possible to build a bar from Mac Mahon's
cubes, so that six different colours appear on all four sides. In addition
the same colours touch inside. Even the ends of the bars have the same
There is one solution on the left. The colours
||You can divide the 30 cubes into five groups with six cubes each, so
that you can form five bars just described [
Zoltan Perjés, book 6].
You can assemble them in the Conway scheme and name them with Roman
letters. The drawn bar has the number I.
The speciality is, that the cubes lay together and that you can partly
assemble them not only side by side, but also before one another or underneath
the other without changing their properties of the different coulors in
one row and the domino condition (noticed
by Torsten Sillke).
Coloured Cubes on the
VIA-Spiele Verlag Elfriede Pauli
Ivars Peterson's MathTrek
Jaap Scherpius (Jaap's Puzzle Page)
John J. O'Connor & Edmund F. Robertson, University
of St Andrews
Kadon Enterprises Inc.
Armbruster invented Instant Insanity
Major Percy Alexander
cubes (Multiple Cubes, Probabilities,
Charles-É. Jean (Dictionnaire
de mathématiques récréatives)
(1) Gerhard Kowalewski: Alte und neue mathematische Spiele, Leipzig
1930 (Reprint bei Teubner, Stuttgart 1978)
(2) Bruno Kerst: Mathematische Spiele, Berlin 1933 (Nachdruck: Martin
Sändig, Wiesbaden 1968)
(3) Gardner, Martin: Mathematische Knobeleien, Braunschweig 1980 (Vieweg)
(4) Rüdiger Thiele: Das große Spielvergnügen, Leipzig
(5) Gardner, Martin: Mathematische Hexereien, Berlin 1988 (Ullstein)
(6) Gardner, Martin: Fractal Music, Hypercards and More Math.
Recreations from SA Magazin, New York 1991 (Freeman)
(7) Rüdiger Thiele, Konrad Haase: Der verzauberte Raum, Leipzig/Jena/Berlin
(8) Rüdiger Thiele, Konrad Haase: Teufelsspiele, Leipzig/Jena/Berlin
Thanks to Sabine Sprankel for the idea of this topic and Torsten
Sillke for supporting me.
Thanks to Gail from Oregon for supporting me in my translation.
Feedback: Email address on my main page
page is also available in German.
2002 Jürgen Köller | <urn:uuid:c5079a02-91ea-4ba1-b2fe-84fd4387ad38> | 3.703125 | 2,200 | Personal Blog | Science & Tech. | 68.322906 |
24 Electrons on a Sphere
red = 0.7177998 (24)
green = 0.7660127 (24)
blue = 0.7768039 (12)
Notice the 6 square regions and 8 equilateral triangles on the
surface. Also, notice that each vertex is identical, and the
edges are blue, red, red, green, green proceeding clockwise
when viewed from outside the sphere. This is the "right-handed"
version of this configuration. There is another version with the
opposite handedness, i.e., the clockwise sequence of edges meeting
at each vertex is blue, green, green, red, red.
Here is another version of this applet, allowing you to manually
re-orient the polyhedron by dragging it with the mouse.
Return to MathPages Main Menu | <urn:uuid:1bb99656-b322-4ee8-95e7-d500b70beb5e> | 3.28125 | 179 | Tutorial | Science & Tech. | 65.845146 |
Microscopes are to microbiology what telescopes are to astronomy.
The earliest microscopes were simple instruments consisting of one or more crude glass lenses similar to those used to make early spectacles. The invention of the first true microscope is credited to the Jansen family of Middleburg, Holland, around 1595.
Later, in the 17th century, Dutch cloth merchant and amateur scientist Anton van Leeuwenhoek enlightened the world to what he dubbed “animacules” such as protozoa found in standing water. Using microscopes he made himself, Leeuwenhoek wrote up what he viewed in pond water, plant material, even gunk scraped off his teeth. He was the first to identify sperm and red blood cells.
There are two basic types of microscopes: light microscopes and electron microscopes. | <urn:uuid:91d505ca-f86b-4116-a9a4-94e217892782> | 3.453125 | 172 | Knowledge Article | Science & Tech. | 32.868293 |
Fraser Darling effect
The Fraser Darling effect, named after Sir Frank Fraser Darling, who proposed it in 1938, is the simultaneous and shortened breeding season that occurs in large colonies of birds.1 This synchronized and accelerated breeding leads to a greater chance of survival for each individual offspring.2 While studying herring gulls off the English coast, Fraser Darling noticed that individual gulls rarely raised their young past the fledgling stage. This led him to the conclusion that the birds received sexual stimulation not only from their mates but also from other birds of the same species.3
In 1956, a study conducted by Colson and White on the mating patterns of kittiwake showed that the effect only extended for two metres and that, for groups of birds who nested more sparsely, a longer breeding time was evident in the population as a whole. However, this particular species nested in areas that were hard for predators to reach and therefore relied less on this phenomenon than other species that were more vulnerable to predation.3 In 1968, while studying gulls, Horn found that "clumped nesting improves foraging efficiency and predation avoidance only when the colony is built in a large expanse of nesting habitat, surrounded by abundant, but patchily distributed food."2
Since Fraser Darling's initial observation, the phenomenon has also been observed in Brewer's Blackbirds, European Herring Gulls, Black-headed Gulls, and gannets; however, other studies conducted since have not been able to confirm it in other various species of gulls.2
- Michael Allaby (1999). "Fraser Darling effect". Dictionary of Zoology. Retrieved 2012-05-26.
- Yon-Tov, Yoram (October 1975). "Synchronization of Breeding and Intraspecific Interference in the Carrion Crow". The Auk (University of California Press): 778–785. Unknown parameter
- Wilson, Edward O. (2000). Sociobiology: The New Synthesis. Harvard University Press. p. 41. | <urn:uuid:2e822588-369c-4bc6-a0d4-6b988d4fb13b> | 4.0625 | 414 | Knowledge Article | Science & Tech. | 45.117839 |
Big ideas, with a little twist. Our scientific visionary gives some of Man’s great theories an unscientific going-over
Thermodynamics was discovered by James Clark Maxwell House while he was trying to invent freeze dried instant coffee. Dropping ice cubes into the brew always made it colder rather than producing hot coffee cubes (an idea later developed by Starbucks into the decaff skinny latte frappe). From this, Maxwell House deduced:
1) There was no such thing as a perpetual notion machine that would continually give out ideas as good as magnetic cows grazing on the power in electric fields.
2) The Universe would end when all cups of coffee were the same temperature.
The zeroth law
This should have | <urn:uuid:f396d98f-69e7-4782-ac50-e697277be4e9> | 2.78125 | 150 | Personal Blog | Science & Tech. | 40.065 |
Predators in the deep
SOME marine animals have warm muscles, they are usually kept warmer than the surrounding water. It’s a useful evolutionary trick which keeps them one step ahead of their prey.
One such fish is tuna, when travelling at speed, it pulls its fins into grooves along the body, giving it a smooth, hydrodynamic outline. Blue-fin tuna are the fastest among these fish. With these adaptations, the above species can reach the speed of up to 45 mph over short distances, mostly while trying to escape from one of their main enemies, the swordfish.
The powerful swimming muscles of the great white shark, and its relatives porbeagle and the mako, are kept 45-50 degrees warmer than the surrounding water. And for each 10-degree rise in temperature, these predators obtain a threefold increase in muscle power. These animals achieve this warmth with the help of a special blood-supply system to the swimming muscles, which looks rather like an old-fashioned, central-heating radiator, and acts like a heat exchanger. Basically, warm blood is prevented from being carried to certain parts, such as the gills, where heat would be lost to the surrounding seawater.
Besides this the fish’s
swimming efficiency is further enhanced by its torpedo-shaped body, and
also by the texture of its skin. The skin of sharks is not as smooth as
that of many other fishes, instead it has minute teeth-like structures
all over that are known as dermal denticles, which not only protect the
skin but also reduces the drag and lessens the fish’s resistance in
During day swordfish go down up to a depth of 2,000 ft and stay in semi-darkness, returning to the surface at night. It is one of the main reasons why not much is known about the biology of this animal. It does not swim continuously like many fish do in search of food, instead it acts like a cheetah and is known as a ‘stalker and sprinter’. If it is to remain alert and ready to respond to opportunities that might present themselves, it must have a sensory system that can respond immediately. And this is precisely that swordfish has. In order to spot and chase passing prey, particularly in cold depths, the fish warms up its eyes and brain, an ability it shares with the white marlin and the sailfish, the fastest fish in water.
Swordfish and its relatives swim so fast that
sometimes they cannot stop abruptly. Confronted by an unexpected obstacle, they
can be in trouble. Broken swords of swordfish have been found embedded in whales
and wooden ships. Predators in the open sea, though, can normally afford to
overshoot the mark, but those attacking prey close to the bottom are in danger
of crashing into the mud. The large-mouthed bass swims slowly towards its
target, using vision to guide itself. At the last moment, it darts at the prey,
oblivious to any change of course. If the prey moves, the bass misses, but it is
careful not to overshoot the original spot. It makes rapid braking movements
just before the actual point of prey capture so as not to crash. | <urn:uuid:38a9a3c7-30da-4dfe-bf00-0f837115c02e> | 3.5 | 678 | Nonfiction Writing | Science & Tech. | 52.689428 |
Aechmophorus occidentalis—Western Grebe // Podiceps/Podilymbus—Eared/Pied-billed Grebe // Podiceps nigricollis—Eared Grebe // Podilymbus podiceps—Pied-billed Grebe // Tachybaptus dominicus—Least Grebe
Grebes are highly aquatic birds seldom leaving the water except in migration. They are expert divers and resort to diving both as a defensive mechanism and for food procurement. Four species regularly occur in our region at present (Ligon 1961).
Fig. 1. Horned Grebe (Podiceps auritus). Photograph by Donna Dewhurst courtesy of the USFWS.
Literature. Ligon 1961.
Grebes are obligate aquatic birds whose presence in fossil deposits implies presence of moderate to large bodies of water within a reasonable distance. Presence at Burnet Cave indicates a probable source in the Pecos Valley to the east, since it is unlikely that suitable habitat was present closer to the cave. Presumably the Colorado River supplied suitable habitat in the Grand Canyon region.
Fig. 1. Western Grebe. National Park Service photograph by Will Elder.
Mid/Late Wisconsin: Sandblast Cave (Emslie 1988)
Late Wisconsin/Holocene: Burnet Cave (Schultz and Howard 1935); Stanton's Cave (Rea and Hargrave 1984).
Literature. Emslie 1988; Rea and Hargrave
1984; Schultz and
Grebes are aquatic birds that are excellent swimmers and divers, requiring at least moderate-sized bodies of water. The diet consists of a wide variety of aquatic animals. The White Lake section of Pleistocene Lake San Agustín sedimentary deposits is the source of the fossil material; the lake itself should have been suitable habitat.
With presently available comparative material, the specimen, a femur, is identified only as a small grebe (Fig. 1).
Fig. 1. Right femur of a small grebe, Pleistocene San Agustín. Scale in mm.
Wisconsin: White Lake (Harris 1993c).
Literature. Harris 1993c.
Synonyms: Podiceps caspicus.
Occurrence at Dark Canyon Cave is not surprising since it is only a short distance from the Pecos Valley flood plain. The Pecos River or ox-bows of the Pecos should have been prime habitat.
Two bones represent this taxon from Dark Canyon Cave; Howard (1971) noted that they are slightly smaller than those of modern examples from New Mexico. She also noted that half of the Eared Grebe bones from Fossil Lake, Oregon, also were small.
Mid/Late Wisconsin: Dark Canyon Cave (Howard 1971).
Late Wisconsin/Holocene: Skylight Cave (Emslie 1988: cf. gen. et sp.); Stanton's Cave (Rea and Hargrave 1984).
Literature. Howard 1971; Emslie 1988; Rea and Hargrave 1984.
Fig. 1. Pied-billed Grebe; US Fish & Wildlife Service photo.
In common with many water birds, this grebe occurs throughout the region.
Mid/Late Wisconsin: Sandblast Cave (Emslie 1988).
Late Wisconsin: Skylight Cave (Emslie 1988).
Literature. Emslie 1988.
This, the smallest of the grebes, is considerably out of place compared to its current distribution. Its nearest approach now is on the western and eastern coasts of Mexico into extreme southern Texas. However, there are occasional occurrences in southernmost California and Arizona.
Sites. Medial Irvingtonian: San Antonio Cave (Rogers et al. 2000: cf.).
et al. 2000.
Last Update: 18 Mar 2013 | <urn:uuid:2b55fb7a-28c2-4c21-8174-b1c689d924cd> | 3.421875 | 786 | Knowledge Article | Science & Tech. | 44.672049 |
Why are atoms "smashed" in an accelerator? What is learned by that
process? Why doesn't it set off a chain reaction like an atomic bomb?
What is the difference between "smashing" and "fusion"? Since they
dismantled the project here in Texas, I always wondered what was intended
to be learned by such a costly project.
Actually they don't "smash" atoms in accelerators anymore. What are "smashed" in accelerators now are parts of atoms. It's done by "crashing" these parts into each other and seeing what happens. The scientists are trying to figure out what the parts are made of. It's like taking your father's watch, smashing it with a hammer and watching the parts fly by. Then, from what you've seen, you try to figure out exactly how the watch was originally put together. (Don't try this at home!)
The reason a chain reaction doesn't occur is because the conditions aren't right in an accelerator for chain reactions to happen.
The difference between "smashing" and "fusion" is that one process
(smashing) knocks things apart while the other (fusion) puts atoms together.
Scientists weren't exactly sure what they would learn from
the SSC (Superconducting Super Collider) built in Texas. That is part of the reason they wanted it built. Every time something like it has been built in the past, exciting things were discovered.
Submitted by Mike (age 38, Texas, USA)
(October 22, 1997)
Shop Windows to the Universe Science Store!
Learn about Earth and space science, and have fun while doing it! The games
section of our online store
includes a climate change card game
and the Traveling Nitrogen game
You might also be interested in:
It depends on which type of motion you are asking about. If you take a birds-eye view from the top of the solar system all the planets orbit around the Sun in a counter-clockwise (or direct) direction....more
Have you ever wondered how astronauts live in space? Did you know they do a lot of the same things we do here on Earth? Astronauts eat, exercise and sleep just like we do. However, their food isn't always...more
There is a really neat internet program called Solar System Live that shows where all of the planets and the Sun are. If you go to that page, you'll see an image similar to the one on the left. Below the...more
The picture of the American Flag (the one put there by the Apollo astronauts) is waving (or straight out) in the wind. How could that be possible if there is no atmosphere on the Moon? Was it some sort...more
I was wondering if there is a new planet? Are there planets (a tenth planet?) after Pluto belonging to our solar system? What are the names of the new planets discovered in the solar system? Are there...more
When an object has a really high energy, it can form a black hole. This is called a primordial black hole. Primordial black holes were formed near the beginning of the universe. Primoridal black holes...more | <urn:uuid:65626209-f8d5-4116-b50e-7388e4d2c05e> | 3.609375 | 663 | Comment Section | Science & Tech. | 65.540745 |
Astrophysics class - Textbook I use : Quantitative Astronomy, topics in Astronomy by Thomas L. Swihart
The situation: I am revisiting an old take home test that my professor wants me to resubmit. I had left the test in a basement that has flooded this winter and unfortunately I am having a hard time filling in the damaged bits. Since he is returning from his sabbatical as I speak, I have a very short window to hopefully redo my test from last summer. I am allowed to use Internet resources, so I will be uploading the 10 questions or so I have to do in hopes that I can at least check to see if my attempts were close or out of practice.
The first one I am attempting:
1. Calculate the Energy Density in thermal energy and in photons at the center of the Sun.
In my notes I have written down the Stefan-Bolzmann law which essentially allows for me to calculate the energy that is radiated. Now the question states "center of the sun" so do I just look up a point of reference via google? Or is the energy consistent (or relatively so?).
To attempt to convert thermal energy to photons, I was stuck on the point of photon wavelength? Isn't gamma photons different from ultraviolet? I have more questions than direction..
I will be working on this problems all weekend starting now, so hopefully someone will work with me.
I do know this class does look for general answers as I have unfortunately over thought many of the questions. | <urn:uuid:e82b0568-b5e1-4fa0-ba85-b7d199868380> | 2.6875 | 318 | Comment Section | Science & Tech. | 55.708018 |
datalock() - lock process into memory after allocating data and stack
int datalock(size_t datsiz, size_t stsiz);
datalock() allocates at least datsiz bytes of data space and stsiz
bytes of stack space, then locks the program in memory. The data
space is allocated by malloc() (see malloc(3C)). After the program is
locked, this space is released by free() (see malloc(3C)), making it
available for use. This allows the calling program to use that much
space dynamically without receiving the SIGSEGV signal.
The effective user ID of the calling process must be super-user or be
a member of or have an effective group ID of a group having PRIV_MLOCK
access to use this call (see setprivgrp(2)).
The following call to datalock() allocates 4096 bytes of data space
and 2048 bytes of stack space, then locks the process in memory:
datalock (4096, 2048);
datalock() is thread-safe. It is not async-cancel-safe.
datalock() returns -1 if malloc() cannot allocate enough memory or if
plock() returned an error (see plock(2)).
Multiple datalocks cannot be the same as one big one.
Methods for calculating the required size are not yet well developed.
datalock() was developed by HP.
Hewlett-Packard Company - 1 - HP-UX Release 11i: November 2000 | <urn:uuid:266739f7-c5ae-4053-ba5c-cc3a899b220f> | 2.71875 | 333 | Documentation | Software Dev. | 47.320157 |
Aggregate functions perform a calculation on a set of values and return a single value. With the exception of COUNT, aggregate functions ignore null values. Aggregate functions are often used with the GROUP BY clause of the SELECT statement.
All aggregate functions are deterministic; they return the same value any time they are called with a given set of input values. For more information about function determinism, see Deterministic and Nondeterministic Functions.
Aggregate functions are allowed as expressions only in:
- The select list of a SELECT statement (either a subquery or an outer query).
- A COMPUTE or COMPUTE BY clause.
- A HAVING clause.
The Transact-SQL programming language provides these aggregate functions: | <urn:uuid:7157c1c6-1bf7-4498-8be2-02d7db78a8cc> | 3.265625 | 152 | Documentation | Software Dev. | 27.201786 |
The best-known example of a topological phase is a 2D electron gas at low temperatures and in a high magnetic field, which has a quantized Hall conductance. However, the conducting states on the surface of a 3D topological insulator, while bearing some similarities to those in the 2D case, are a new state of matter [11, 12, 13, 14, 15, 16, 17]. Theorists are therefore keen to describe the exotic physical properties of 3D topological insulators, which should exhibit new quantization rules, and to predict the ways in which they can be observed in experiments.
The distinction between topological insulators and conventional band insulators is evident in the space that contains their allowed wave functions, i.e., the Hilbert space. In some sense, topological insulators are defined by the fact that their Hilbert space topology cannot be easily perturbed to destroy the wave functions on the surface. Hence the surface electron modes are protected.
In the simplest description, the surface electron modes of a topological insulator are arranged in a single Dirac cone—the linear dispersion that describes massless particles—with a vortexlike spin arrangement (Fig. 1). The circulating structure of the spins contributes a Berry’s phase of $\pi$ to the electronic (or hole) wave function [3, 4, 5, 6] (recall that a spin-$1/2$ particle must undergo two complete rotations to acquire a phase of $2\pi$). The Berry’s phase protects these surface states against backscattering from disorder and impurities and dictates new topological quantization rules [15, 16].
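The single spin-helical cone described here is commonly captured by a minimal surface Hamiltonian. The following sketch is a standard form from the literature, not spelled out in the article ($v_F$ is the Fermi velocity and $\boldsymbol{\sigma}$ the vector of spin Pauli matrices); it makes the spin-momentum locking and the $\pi$ Berry’s phase explicit:

```latex
% Minimal effective Hamiltonian for a single helical surface Dirac cone
H_{\text{surf}} = \hbar v_F \,(\boldsymbol{\sigma} \times \mathbf{k})\cdot\hat{\mathbf{z}}
                = \hbar v_F \,(\sigma_x k_y - \sigma_y k_x).
% The spin is locked perpendicular to the momentum, so transporting a state
% once around the Fermi surface rotates the spinor by 2\pi, which for a
% spin-1/2 object yields a geometric (Berry's) phase of
\gamma = \pi .
```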
The spin vortexlike pattern on the surface of a topological insulator exists in the presence of time reversal symmetry, but when this symmetry is broken—say, due to the presence of a magnetic field—a gap will open in the Dirac spectrum that disrupts the spin texture. This can lead to unusual electromagnetic and magnetotransport effects in a topological insulator.
These effects can be quite spectacular when a magnetic field is applied perpendicular to the surface of a topological insulator. A magnetic field will induce Landau levels—the quantized states that give rise to the quantum Hall effect—to appear in the surface’s electronic spectrum. The Landau levels for Dirac electrons are special, however, because a Landau level is guaranteed to exist at exactly zero energy. Since the Hall conductivity increases by a conductivity quantum of $e^2/h$ when the Fermi energy crosses a Landau level, the presence of a Landau level at zero energy means the conductivity must be half-integer quantized: $\sigma_{xy} = (n + \tfrac{1}{2})\,e^2/h$.
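The zero-energy level and the resulting half-integer staircase follow from the textbook Landau-level spectrum of a 2D Dirac cone, sketched here for orientation (standard result, not derived in the article):

```latex
% Landau levels of a massless 2D Dirac cone in a perpendicular field B
E_n = \operatorname{sgn}(n)\, v_F \sqrt{2 e \hbar B\, |n|},
\qquad n = 0, \pm 1, \pm 2, \dots
% Unlike the conventional ladder E_n = \hbar\omega_c (n + 1/2), the n = 0
% level sits exactly at E = 0 and is shared by electrons and holes.
% Crossing it shifts the Hall conductivity of a single cone through
\sigma_{xy} = \left(n + \tfrac{1}{2}\right)\frac{e^2}{h}.
```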
This behavior has been famously demonstrated in experiments on graphene, except that the Dirac points in graphene have a fourfold spin and valley degeneracy, which means the observed Hall conductivity is still integer quantized. At the surface of a bulk topological insulator, however, there is only a single spin-polarized Dirac cone (i.e., one that circulates in a particular direction) that carries a Berry’s phase of $\pi$ (Fig. 1). This “fractional” quantum Hall state, with $\sigma_{xy} = (n + \tfrac{1}{2})\,e^2/h$ on the surface, should be a cause for concern because the integer quantized Hall effect is always associated with chiral edge states (as opposed to helical surface states) and can only be integer quantized. The resolution is the mathematical fact that a surface cannot have a boundary. If the topological insulator is shaped like a slab (Fig. 2), the top surface and bottom surface are necessarily connected to each other, and will always be measured in parallel in transport, doubling the measured $\sigma_{xy}$. The top and bottom can share a single chiral edge state, which carries the integer quantized Hall current.
A similar surface quantum Hall effect, known as the anomalous quantum Hall effect, can be induced by proximity to a magnetic insulator. A magnetic field—say, from a nearby thin magnetic film—on the surface of a topological insulator lifts the spin degeneracy at the surface Dirac point. If the Fermi energy is in this induced energy gap, this leads to a half-integer quantized Hall conductivity (Fig. 2) due to the Berry’s phase of $\pi$ on the topological surface.
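The half-quantized anomalous Hall response can be sketched with the standard mass-term argument (not spelled out in the article; $\Delta$ denotes the assumed exchange gap induced by the adjacent magnetic layer):

```latex
% Exchange coupling to the magnetic layer adds a mass term to the surface cone
H = \hbar v_F\,(\boldsymbol{\sigma}\times\mathbf{k})\cdot\hat{\mathbf{z}}
    + \Delta\,\sigma_z ,
\qquad
E_\pm(\mathbf{k}) = \pm\sqrt{(\hbar v_F k)^2 + \Delta^2}.
% With the Fermi level inside the gap 2|\Delta|, each gapped surface cone
% contributes a half-quantized anomalous Hall conductivity whose sign is
% set by the magnetization direction:
\sigma_{xy} = \operatorname{sgn}(\Delta)\,\frac{e^2}{2h}.
```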
To induce such a gap, Tse and MacDonald propose to place a thick film of a topological insulator between two ferromagnets. They then considered the electromagnetic response of a helical (spin-momentum locked) Dirac gas in a half-integer quantized Hall state and showed that under such conditions, the polarization of light transmitted through a 3D topological insulator would always be rotated by a fixed angle of $\tan^{-1}\alpha$, where $\alpha$ is the vacuum fine structure constant: rotation by any arbitrary angle is not possible. The flipside of this transmission or “Faraday effect” is a quantized “Kerr effect,” where light reflected from the surface has its polarization rotated by a fixed amount of $\pi/2$. The only requirement is that the light be at a frequency lower than the topological insulator band gap and the induced magnetic gap on the Dirac states.
This unusual optical response of 3D topological insulators comes from a combination of cavity confinement and the Hall conductivity of spin-helical Dirac modes on the surface: The incident light can only induce quantized currents with a certain spin helicity, which in turn affects the reflected and transmitted light. The fine structure constant dictates the coupling between the quantized currents and the electromagnetic wave. The Kerr rotation of π/2 suggests that the polarization of the reflected light should exhibit a striking full-quarter rotation relative to the incident polarization direction of the light beam (Fig. 2).
The effect Tse and MacDonald predict is effectively insensitive to the precise value of the gap as long as it is finite, since one can always choose a lower frequency of the incident light. Working with far infrared light, these conditions are adequately met in the topological insulator Bi2Se3 [4, 5]. This highly tunable material features almost ideal helical Dirac quasiparticle spin modes that are locked in by the Berry’s phase of π, as in Fig. 1. The surface modes are well protected within a large band gap (about 0.3 eV). To measure the magneto-optical effects directly, a film of Bi2Se3 would have to be thick enough that the surface electrons do not tunnel between the top and bottom surfaces, but films of this thickness could, in principle, be grown using molecular beam epitaxy.
Tse and MacDonald’s proposal for measuring topological quantization in units of the vacuum fine structure constant could lead to a new metrological standard for fundamental physical constants [13, 14, 15, 16, 17]. In addition, observing topological quantization in Bi2Se3 would be confirmation, independent of spin-resolved photoemission, that topological order can exist in “ordinary” bulk solids. In the long run, classifying bulk solids in terms of topological quantization may turn out to be a more powerful method of identifying phases of matter beyond the standard Landau paradigm, which is based on the idea of spontaneously broken symmetry. Perhaps the most significant “effect” is that which the discovery of topological insulator states has had on research itself: physicists can now study the interplay of “topological order” and “broken-symmetry order” in real experiments and with real materials that are, in principle, accessible to anyone.
- W-K. Tse and A. H. MacDonald, Phys. Rev. Lett. 105, 057401 (2010).
- D. Hsieh, Y. Xia, L. Wray, Y. Hor, D. Qian, R. J. Cava, and M. Z. Hasan, Nature 452, 970 (2008).
- For the Berry’s phase sensitive measurements, see D. Hsieh et al., Science 323, 919 (2009).
- Y. Xia et al., Nature Phys. 5, 398 (2009); arXiv:0812.2078v1 (2008).
- D. Hsieh et al., Nature 460, 1101 (2009).
- P. Roushan et al., Nature 460, 1106 (2009).
- A. Nishide et al., arXiv:0902.2251 (2009).
- D. Hsieh et al., Phys. Rev. Lett. 103, 146401 (2009).
- Y. L. Chen et al., Science 325, 178 (2009).
- H. Lin et al., Phys. Rev. Lett. 105, 036404 (2010).
- J. E. Moore, Nature 464, 194 (2010).
- M. Z. Hasan and C. L. Kane, arXiv:1002.3895 (2010); Rev. Mod. Phys. (to be published).
- X.-L. Qi and S.-C. Zhang, Phys. Today 63, No. 1, 33 (2010).
- L. Fu and C.L. Kane, Phys. Rev. B 76, 045302 (2007).
- X.-L. Qi, T. Hughes, and S.-C. Zhang, Phys. Rev. B 78, 195424 (2008).
- A. Essin, J. E. Moore, and D. Vanderbilt, Phys. Rev. Lett. 102, 146805 (2009).
- C. Day, Phys. Today 62, No. 4, 12 (2009). | <urn:uuid:8f44d2f3-a175-4178-8fd1-d11a1ddbef08> | 2.890625 | 1,977 | Academic Writing | Science & Tech. | 53.843212 |
Spelling errors can mean one of two things:
- The person who makes them is not proficient in English, and doesn't take the time to compensate by using appropriate tools (dictionaries, spell checkers, etc.)
- The person who makes them is proficient in English, but doesn't care about spelling at all.
Either is a fairly bad sign, because it means the person in question doesn't have readability, maintainability and elegance high on their priority list; if the cause is a lack of English language proficiency, it also means that the person lacks two essential skills - written English communication, and a general feeling for languages (if you can't express your thoughts clearly in English, chances are you can't express them well in a programming language either).
But why exactly are spelling errors bad, all else being equal? After all, the code works, and the compiler doesn't care at all how you name your identifiers, as long as they don't violate the syntax rules. The reason is that we write code not only for computers, but also and most of all, for humans. If that weren't the case, we'd still be using assembly. Source code is written once, but read hundreds of times during its lifecycle. Spelling errors make reading and understanding the source code harder - mild errors cause the reader to stumble for a fraction of a second, many of them can cause considerable delays; really bad errors can render source code completely unreadable. There is another issue, which is that most of the code you write will be referred to by other code, and that code more often than not is written by someone else. If you misspell your identifiers, someone else will have to remember (or look up) not only what the name is, but also how exactly it is misspelled. This takes time and breaks the programming flow; and since most code gets touched more than once in maintenance, each spelling error causes a whole lot of interruptions.
Considering that developer time equals salary equals expenses, it should be easy enough to make a case for this; after all, breaking the flow and getting back into it can take up to 15 minutes. In this way, a severe spelling error can easily cost a few hundred dollars in further development and maintenance (but these are indirect costs, not directly visible in estimates and evaluations, so they often get ignored by management).
The Oracle PL/SQL HAVING clause is used to filter or restrict the groups formed by the GROUP BY clause. It follows the GROUP BY clause in the SELECT statement. The HAVING clause can also precede the GROUP BY clause, but this isn't logical and is not recommended. All grouping is performed (and group functions executed) prior to evaluating the HAVING clause.
Difference between WHERE and HAVING
The WHERE clause will filter or limit rows as they are selected from the table, but before grouping is done. The HAVING clause will filter rows after the grouping.
SELECT <column list>, <group by function>
FROM <table name>
GROUP BY <column list>
HAVING <group by function condition>
SELECT JOB_ID, SUM(SALARY)
FROM EMPLOYEES
GROUP BY JOB_ID
HAVING SUM(SALARY) > 10000
Related Code Snippets:
- Having Clause - Some example of using the 'HAVING' clause with 'GROUP BY'. | <urn:uuid:44705bc1-5fc4-48b4-bc4a-9dfe111fdc1b> | 2.84375 | 214 | Documentation | Software Dev. | 46.615448 |
Earth Day 2030: "A new eye blinked open upon the world"
Copyright © 2005, by Alec Rawls
Vandenberg AFB—As people across the country watched the northern sky this afternoon, NASA officials gave the final go ahead and the gigantic Demi-Ra sun-reflecting satellite focused its millions of ten-meter by ten-meter reflecting panels on the Earth below. Even the most casual observer has become familiar with the sight of this immense construction project expanding in the night sky, but it seemed that no amount of familiarity could reduce the startling effect of its daylight debut.
“A new eye blinked open upon the world,” mused NASA engineer Katy Wong, watching from the observation deck at Vandenberg’s Demi-Ra Command Center. Her words unconsciously echoed the reaction of thousands across the country. Responding to what seems to have been an optical illusion created by the focusing sequence, people from every state described Demi-Ra as an “eye,” blinking two or three times before its light poured forth and observers had to look away from the brightness of this second sun in the sky.
Interviewed at his home in Virginia, Dr. Patrick Michaels, the driving force behind the Demi-Ra project, was enthusiastic, opining that “the timing of the project looks very good.” He noted that solar activity has dropped off dramatically in the last 20 years and that the Earth has cooled significantly as a result. “That puts us a little behind the curve,” he said, “but we built up enough greenhouse gases over the last century to slow the cooling down. With the climate modeling breakthroughs of the last ten years, we can be quite certain that Demi-Ra I, and the upcoming Demi-Ra II, will provide enough additional sunlight to keep another Little Ice Age from occurring.”
The most controversial aspect of the project is the variable focusing ability of the reflectors. A fixed reflector in sun-synchronous orbit would have been sufficient to achieve the project’s first requirement, which is to shine extra sunlight only onto the daylight side of the planet, leaving nocturnal creatures undisturbed. (A sun-synchronous orbit uses the asymmetry of the Earth’s mass to keep its orbital plane facing the sun as the Earth orbits the sun.) The problem with a fixed reflector is that it would distribute the extra sunlight equally to the tropics, the temperate zones and the polar regions as the reflector traveled north to south. In contrast, Demi-Ra’s variable focus can be used to keep the extra sunlight off of the tropics and off of the ice caps. Some are alarmed, however, at the military potential of this feature.
In theory, Demi-Ra’s one thousand square miles of reflective surface can be focused on an area as small as one square mile. Fearful of this destructive potential, thousands of peace activists planned a massive “die in” for San Francisco today. Dressed as burnt ants, the activists were planning to curl up in intersections across the city when Demi-Ra came on-line, but it didn’t quite work out that way. When Demi-Ra “blinked to life” in startlingly life-like fashion, the ant-suited protesters were more than a little unnerved. In what seemed to be a genuine panic, protestors at tens of locations across San Francisco started running for cover, many screaming hysterically.
“It looked right at me! It looked right at me!” one protester cried over and over as she huddled in the foyer of the Fairmont hotel. Police were nonplussed, but expressed relief that at least the “die in” was short lived. Similar panic attacks struck protestors in other North American and European cities. No other segment of society seems to have been affected. A spokesman for the Centers for Disease Control said that the “panic phenomenon” would be monitored. “On the plus side,” he joked, “I don’t think we’ll be seeing any more street-rallies from the Al Qaeda remnant.”
At the new United Nation complex in Harare, Zimbabwe, “contrarian” climatologist Stephen Schneider sounded a warning note. “As I have been saying since the 1970’s, the cause of global cooling is human economic activity. If we want to counteract global cooling, economic activity must be drastically curtailed. Demi-Ra just enables industrial society to proceed apace with its destructive impact.” A reporter reminded Dr. Schneider that from 1980 to 2010, when global temperatures were rising, Dr. Schneider, then at Stanford University, had claimed that human activity was causing global warming, and therefore needed to be curtailed. “As you can see,” Dr. Schneider answered, “I have been perfectly consistent.”
Also at the United Nation news conference was Dr. Paul Ehrlich, who declared that Demi-Ra would cause mass starvation by 2040. “It will lengthen the growing season in temperate regions,” Ehrlich predicted. “The resulting increase in food production will create population growth. Soon there won’t be enough food for the increased population and everyone will die.” A reporter reminded Dr. Ehrlich that he had predicted mass starvation by the mid 1970’s, the mid 80’s, the mid 90’s, by 2010, by 2020 and by 2030. “That is not consistent enough for you?” Dr. Ehrlich parried, receiving a sharp nod of approval from Dr. Schneider.
The one nation that actually is starving is Zimbabwe. Zimbabwe’s Dictator for Life, Kojo Annan, owns the entire country and insists that “I won’t allow people to steal from me by growing food on my property and eating it.” Zimbabwe’s population is down to 2 million, from 12 million in the year 2000. The country’s problems are compounded by the fact that Venezuela, when it pulled out of the United Nations last month (prompting the name change to United Nation), also ended its oil subsidies to Zimbabwe. Without Venezuelan oil, Zimbabwe’s downward population trend is expected to continue. “That is a good thing,” suggested Dr. Ehrlich, explaining that “the more people who starve today, the less competition there will be for food tomorrow.”
Russia, on the other hand, seems to be looking forward to being well fed—and warm—in the present. Each nation gets to decide where its allotment of extra sunlight will be directed. The Russian plan is to warm Russian cities during the long Russian winter, then in spring and fall, to create extended growing seasons in selected farming areas. Dry regions like Mongolia and the Western United States are planning to concentrate much of their Demi-Ra allotment onto fields of solar-electric generating panels. Countries can also trade their allotments on the open market.
The Demi-Ra company, a private corporation regulated by the United States government, will receive 10% of the market value of each country’s allotment in perpetuity. “If they don’t give us our cut, they don’t get the sunlight,” said Demi-Ra CEO Michael Petras, in attendance at the Vandenberg countdown. “Hey, we ought to be getting more,” he added unapologetically. “The government thought we needed to give up 90% to grease the international wheels. That’s a LOT!” Company President Aman Verjee, also in attendance agreed that: “The government drove a hard bargain, but it was still a no-brainer.” “Even at 10%, the margins are HUGE,” Petras roared, knocking Verjee backwards. “Once we looked at Dr. Michaels’ plans and started running the numbers, it was just a matter of getting congressional approval. Everyone with a dollar wanted in.”
That approval came with the changing of the guard in the climatology profession. When solar activity fell off after 2010, and global temperatures started falling with it, the old “global-warming consensus” was routed by solar-warming theory. Solar-warmists had until then been dismissed as a minor subset of a small cadre of egregiously wrong “contrarians.” By 2020, the global-warmists had become the new “contrarians” and Demi-Ra advocates like Dr. Michaels (an early “contrarian,” but not originally a solar-warmist) were able to get their enabling legislation.
“It is a great achievement,” said Michaels. “If Demi-Ra’s orbit holds, and its structure proves robust, we just might have bought ourselves a permanent inter-glacial. If the design turns out to be less robust than we hope, improved models are already on the drawing board. We’ll learn by doing for a couple of years, then decide how to proceed with Demi-Ra II.” Demi-Ra I has an expected service life of a century. As a major investor in the Demi-Ra project, the United States government expects to earn a substantial net return.
To learn more about solar-warming theory, see Alec’s twenty-five year old article “Global warming’s omitted variable,” available on line in archives of The Stanford Review, 2/15/2005.
This story was originally published in The Stanford Review, 4/22/2005. To comment, contact email@example.com, or visit the posting of this story on my Error Theory blog.
What do manatees do when the temperature drops? They head for the warm waters that are discharged from power stations, of course!
Florida manatees (Trichechus manatus latirostris) are adapted to water temperatures of 70 degrees F or higher (21 degrees C). Manatees must find warmer waters when the water temperature gets too low. If water temps drop too far below 68 degrees F (20 degrees C), these marine mammals could die due to the stress of cold weather.
Manatee visits to the warm waters near power plants are an excellent example of a learned behavior in organisms. They remember where to find water that is the right temperature for surviving the cold snaps that hit South Florida. They come back year after year, eventually bringing their offspring, and then the next generation of manatees learns where to go when the cold weather strikes in order to boost their chances of surviving the cold spell.
On exceptionally cold days, 500 or more manatees can stop by the Riviera Power Plant for a visit. Wildlife experts estimate that's five percent of the entire population of manatees on Earth. In human terms, it's like throwing a party and having 340 million people show up! Manatees are listed as an endangered species, so losing this percentage of the population due to the stress of cold weather would be tragic.
Luckily, Florida Power & Light officials are doing right by the manatees that have called the Riviera Beach waters their winter home. Engineers designed a water heating system that will continue to operate while the power plant itself is undergoing its renovation. Although it does use energy to operate and cost millions of dollars to install, it's an investment in wildlife conservation until the power plant is back in business in 2014. When the facility reopens, there are also plans to add an area dedicated to manatee viewing that will be open to the public.
In the meantime, you can watch the action on Florida Power & Light's live Manatee Cam - the colder the weather in South Florida the more manatees you'll be able to spot. After you check out the Manatee Cam post your comments below - what was the temperature and how many manatees did you count? | <urn:uuid:add0f680-ad57-4e3f-86f7-c36bfe412c56> | 3.5625 | 458 | Personal Blog | Science & Tech. | 48.157778 |
A thought experiment proposed by Erwin Schrödinger to demonstrate the seemingly absurd character of reality at the level of Quantum Mechanics. The thought experiment demonstrates quantum indeterminacy.
In Schrödinger's thought experiment, a cat is placed into a steel box with a small sample of radioactive material, a vial of cyanide, and a detector for detecting whether the radioactive sample has decayed and emitted a particle. If a single atom of the sample decays, the detector will detect it, and break the vial of cyanide. This will kill the cat.
Since an outside observer of the box has no way of knowing whether the radioactive material has decayed and released a particle, there is no way to know if the cat is dead or alive. Because there is no way to tell if the cat is dead or alive until it is observed, in essence, the cat is both dead and alive at the same time.
Schrödinger's cat was just an explanatory device to illustrate the concept of what really goes on at the quantum level. At the quantum level, where there is no way to tell the state of a particle, reality actually is in both states at the same time. The particle is not in one state or the other, but is in all possible states simultaneously. This is called superposition.
Also spelled: Schrodinger's Cat, or Schroedinger's Cat, when the alphabet does not fully support the native spelling.
WHAT IS WIND: Wind is a form of solar energy, caused by the uneven warming of the earth's surface. This is why air masses have different temperatures and pressures, and are constantly moving to find a balance. The higher the difference in pressure, the swifter the air moves and the stronger the wind. Mankind has used wind energy for thousands of years, using it to pump water, grind flour, press olives, and even to explore the world in wind-driven sailing ships.
RATING HURRICANES: Hurricanes are categorized according to the strength of their winds according to the Saffir-Simpson Hurricane scale. They are rated from lowest wind speeds (Category 1) to highest (Category 5). But even lower category storms can cause a great deal of damage, mostly from storm surges and the resulting flooding. The worst devastation from hurricane Katrina, for example, occurred when flooding caused the New Orleans levees to fail.
ABOUT TORNADOES: A tornado begins with a thunderstorm cloud, which can build up a lot of energy. If this energy creates a particularly strong updraft of air, it will form a vortex, much like how a whirlpool forms in a draining bathtub. The air is pulled toward the center in a spiral, forming a tornado under the thundercloud. Wind speeds can reach 200 to 300 MPH, and if the dangling vortex touches ground, the combination of the whirling wind's speed, the updraft, and pressure differences can cause severe damage. The path of a tornado is determined by the path of the parent thundercloud, but it will often appear to hop (called a "jumper"). This occurs when the vortex is disturbed, causing it to collapse momentarily and reform.
The Institute of Electrical and Electronics Engineers, Inc.-USA, and American Meteorological Society contributed to the information contained in the TV portion of this report. | <urn:uuid:692cb837-2db5-4944-9791-b5f262304697> | 3.59375 | 384 | Knowledge Article | Science & Tech. | 45.317243 |
Thermochemistry and Hess's Law
To measure the enthalpy change of two different reactions in the laboratory.
To use Hess's Law to estimate the enthalpy change for the reaction: 2 Mg (s) + O2 (g) → 2 MgO (s)
In this lab, you will carry out the following two reactions to determine the enthalpy change for each:
(a) Mg (s) + 2 HCl (aq) → MgCl2 (aq) + H2 (g); ΔH = measured in lab
(b) MgO (s) + 2 HCl (aq) → MgCl2 (aq) + H2O (l); ΔH = measured in lab
You will then use the two equations and their enthalpy changes, along with the thermochemical equation:
(c) 2 H2 (g) + O2 (g) → 2 H2O (l); ΔH = -571.6 kJ
and Hess's Law of Heat Summation in order to predict the enthalpy change of the reaction:
(d) 2 Mg (s) + O2 (g) → 2 MgO (s); ΔH = ?
In order to perform the calculations at the end of this lab, there are a few things you must know:
- The heat capacity (C) of the coffee cup calorimeter is given as 10 J/°C.
- The specific heat (s) of the solution will be estimated to be the same as that of water, or 4.18 J/(g·°C).
- The systems you are studying are reactions (a) and (b) above. The surroundings include the calorimeter and the solution.
- Use the combined mass of all reagents (solids and solutions) used in each trial for the mass of the solution.
- The change in temperature (Δt) is determined by tmax - tmin.
- q = C x Δt for the calorimeter, and q = s x m x Δt for the solution.
- Heat gained by the system is lost from the surroundings. Heat lost by the system is released to the surroundings. This is otherwise known as the Law of Conservation of Energy. (qsystem = -qsurroundings)
- The enthalpy change for each reaction is equal to the heat of reaction at a constant pressure. (ΔH = qp)
- The enthalpy change determined in the previous step must first be converted to kJ per mole (kJ/mol) of the limiting reactant, then converted to represent the number of moles of the limiting reactant in the balanced equation. Seek assistance from the instructor, if necessary.
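The bookkeeping described in the list above can be sketched as a short calculation. This is an illustration only: the masses, temperatures, and trial values below are invented sample numbers, not data from this lab.

```python
# Sketch of the coffee-cup calorimetry arithmetic described above.
# All numeric inputs in the example call are hypothetical.
C_CAL = 10.0    # J/degC, heat capacity of the calorimeter (given in the lab)
S_SOLN = 4.18   # J/(g*degC), specific heat of solution, assumed equal to water

def delta_h_per_mole(mass_solution_g, t_min, t_max, moles_limiting):
    """Enthalpy change of reaction in kJ per mole of limiting reactant."""
    delta_t = t_max - t_min
    # Heat absorbed by the surroundings (calorimeter + solution):
    q_surr = (C_CAL + S_SOLN * mass_solution_g) * delta_t      # J
    # What the surroundings gain, the system (the reaction) loses:
    q_rxn = -q_surr                                            # J
    return (q_rxn / 1000.0) / moles_limiting                   # kJ/mol

# Hypothetical trial: 0.15 g Mg in 25.0 g of HCl solution, 21.0 -> 40.0 degC
moles_mg = 0.15 / 24.31          # molar mass of Mg is about 24.31 g/mol
dh = delta_h_per_mole(25.0 + 0.15, 21.0, 40.0, moles_mg)
print(dh)   # a negative value (exothermic), on the order of -350 kJ/mol here
```

To get ΔH for a balanced equation, multiply this per-mole value by the coefficient of the limiting reactant in that equation, as the list above instructs.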
|calorimeter (two styrofoam cups, nested)||Logger Pro with LabPro® Interface|
|cardboard cover with a small notch cutout||temperature probe|
|3 strips of magnesium metal (~0.15 g each)||magnesium oxide (~0.75 g)|
|150 mL 1.0 M HCl|
You will work in the same groups (of 3-4 students) as were assigned in the magnesium/hydrochloric acid lab. Students who served as a manager or computer operator will work as lab techs for this activity. In the event you had only 3 students the first time, or if a group member has dropped since then, you may make your own assignments, but you should take on a different role than last time. However, each individual should have had a chance to do some of the dirty work, as well as some of the greater responsibility of getting the assignment submitted to the professor in a timely manner. To refresh your memory on the role played by each person, you may check out the group work page.
1. Plug a stainless steel temperature probe into the LabPro® interface and launch the Logger Pro application. Adjust the experiment length to 200 s and set the sampling rate to 5 seconds per sample. Adjust the number of decimal places for the temperature in your data table to the nearest ±0.1°C. Also, adjust the axes on your graph to accommodate a temperature range of 10-50°C.
2. Place your calorimeter onto an electronic balance (use a cheap, less precise balance for this measurement, as you do not want to risk spilling acid on the analytical balance) and tare the balance. Remove the calorimeter from the balance and carefully add 25.0 mL of 1.0 M HCl. Place the cup back onto the balance and record the mass of HCl added. Also, record the volume of HCl used.
3. Obtain and record the mass of a magnesium strip (~0.15 g), using an analytical balance.
4. Insert the temperature probe into the small notch cutout on the cardboard cover and place the probe into the HCl. Stir the HCl with the probe to maintain a uniform temperature throughout the solution. Wait until the temperature stabilizes.
5. Roll the magnesium ribbon into a loose ball. Click the green Collect button to begin data collection. After a couple of data points have been collected, slide the cover aside and drop the ball of magnesium into the calorimeter. Slide the cover back into place. Continue stirring until the data collection ends.
6. Select Store Latest Run from the Experiment menu and save your data to disk.
7. Repeat Steps 2-6 twice to obtain a total of three trials of the magnesium reaction with hydrochloric acid.
8. Repeat Steps 2-6 three times using ~0.25 g of magnesium oxide in place of the magnesium.
9. If your data looks good, copy and paste the data from each of your trials into an Excel spreadsheet. Use the appropriate function or formula to determine the minimum and maximum temperatures reached in each trial.
Assignment (use the answers to questions 1-3 in your Results and to questions 4-7 in your Discussion)
1. Which substance is the limiting reactant in the reaction of magnesium with hydrochloric acid, and in the reaction of magnesium oxide with hydrochloric acid? How many moles of the limiting reactant were used in each of the reactions? Provide at least one sample calculation for each of the two types of reactions.
2. Calculate the enthalpy change per mole of the limiting reactant, in kJ/mol, for each of the two reactions. Provide at least one sample calculation for each of the two types of reactions. (If you are submitting the report electronically, you may place formulas in spreadsheet cells instead.) See the introduction for helpful information and equations.
3. Determine ΔH, in kJ, for the balanced equations in (a) and (b). (Look at the coefficients of the limiting reactant in each of the equations. Don't make this conversion difficult.) Calculate the average ΔH value for each of the two reactions and submit your average ΔH values before leaving lab (or within 24 hours, with permission).
4. Use Hess's Law of Heat Summation, the class average enthalpy changes for reactions (a) and (b), and the enthalpy change for (c) given above to determine the enthalpy change for reaction (d). Show your work.
5. Calculate the percent error in the ΔH you calculated for reaction (d), assuming the reaction was carried out under standard thermodynamic conditions. (Use your calculated ΔH for reaction (d) and the ΔH° determined for the same reaction using the values found in Appendix IIB of the Tro textbook.)
6. What would happen to the value of ΔH you calculated if all of your temperature readings were too high by the same amount? Explain.
7. What were some possible sources of error in this experiment? Explain.
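The Hess's Law combination asked for above can be checked with simple arithmetic. In this sketch the values for reactions (a) and (b) are invented stand-ins for class averages; only ΔH for reaction (c) and the tabulated formation enthalpy of MgO are literature values.

```python
# Hess's law: combine (a), (b), and (c) to get (d) 2 Mg + O2 -> 2 MgO.
dH_a = -470.0   # kJ, Mg  + 2 HCl -> MgCl2 + H2   (hypothetical class average)
dH_b = -150.0   # kJ, MgO + 2 HCl -> MgCl2 + H2O  (hypothetical class average)
dH_c = -571.6   # kJ, 2 H2 + O2 -> 2 H2O          (given in the lab)

# Double (a), reverse and double (b), then add (c); the HCl, MgCl2,
# H2, and H2O terms all cancel, leaving equation (d):
dH_d = 2 * dH_a - 2 * dH_b + dH_c
print(dH_d)                       # about -1211.6 kJ for these inputs

# Percent error against 2 x dHf(MgO), with dHf(MgO) roughly -601.7 kJ/mol:
accepted = 2 * (-601.7)
percent_error = abs((dH_d - accepted) / accepted) * 100
print(round(percent_error, 2))    # under 1% for these invented inputs
```

Reversing a reaction flips the sign of its ΔH and doubling it doubles ΔH, which is all the algebra the combination requires.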
You will be turning in a group laboratory report. The report should include the title information, an introduction to thermochemistry in general (different from the introduction in this lab manual), experimental details (properly referenced if you choose that route), results, discussion, and references. You should use the above questions to guide your results and discussion, but there should be more to these sections than just answering the questions; they should flow logically as you discuss the lab and the results.
Follow your instructor's directions for submitting this lab report. If you are submitting electronically, please submit a single file that includes all of the required components. Also, use the following convention for naming your files: Lastname1 Lastname2 Etc Thermo and if emailing, use a subject line of Chem 1061: Thermochemistry Lab. As alluded to earlier, you do need to show calculations. If you email the report, the embedded data tables must contain all formulas in calculated cells. If you submit a paper version, you will need to show these calculations for one of the the trials. In both cases, any calculations not carried out in the tables must be shown. Laboratory report guidelines are found at http://webs.anokaramsey.edu/chemistry/Chem1061.
Written by Lance S. Lund 2000 (updated June 14, 2011). | <urn:uuid:2d68b23b-01c4-46a6-9208-6941c942728e> | 4.09375 | 1,870 | Tutorial | Science & Tech. | 58.378316 |
The Sun is so far away that it would take the Space Shuttle seven months to fly there. That's why the Sun, which is a hundred times the diameter of the Earth, looks so small!

Three hundred years ago, astronomer Edmund Halley found a way to measure the distance to the Sun and to the planet Venus. Knowing these distances helped find the true scale of the entire Solar System for the first time.

Halley knew that every 121 years the planet Venus passes in front of the Sun. Venus’ position, relative to the Sun behind it, appears very different when viewed from two different places on Earth. How different depends on how far away Venus and the Sun are from the Earth.

1761. Using observations of the "transit of Venus" made by astronomers around the world, the distance to the Sun is determined to be 93 million miles. This photograph is from the 1882 transit of Venus.

ABOVE: Our Sun is the nearest star. At 93 million miles, the Sun provides the warmth that has allowed life to evolve on Earth. Has life evolved elsewhere?
September 2, 2010: UN Intergovernmental Panel on Climate Change (IPCC) needs fundamental changes - see Report on InterAcademy Council IPCC Review Website: http://reviewipcc.interacademycouncil.net/. As reported in the press:
- "UN climate experts 'overstated dangers': Keep your noses out of politics, scientists told", by Fiona Macrae, The Mail On-line, London, United Kingdom. "UN climate change experts have been accused of making 'imprecise and vague' statements and over-egging the evidence. A scathing report into the Intergovernmental Panel on Climate Change called for it to avoid politics and stick instead to predictions based on solid science." Read whole piece.
- "U.N. climate body needs 'fundamental reform,' says report", by Thair Shaikh, CNN, U.S.A. "The United Nations' climate body needs to "fundamentally reform" if it is to prevent a repeat of the error that led to the publishing of a report warning that Himalayan glaciers could melt by 2035, an international committee reported Monday." Read whole piece.
- "IPCC told to stop lobbying and restrict role to explaining climate science", by Stephen Adams and Robert Winnett, The Daily Telegraph, London, United Kingdom. "Harold Shapiro, a Princeton University professor and chair of the committee that conducted the review, said that a report by an IPCC working group "contains many statements that were assigned high confidence but for which there is little evidence." Read whole piece.
- "UN slams IPCC on report", India Today. "The inter-governmental Panel on Climate Change (IPCC) has been told not to make policy suggestions based on weak scientific evidence and to clearly convey uncertainties connected with climate change while preparing its assessment reports." Read whole piece.
- "Matt Ridley: This Discredited IPCC Process Must Be Purged - We cannot make sane decisions on global warming if the ‘experts’ present us with evidence that is biased, The Times, London, United Kingdom. "Yesterday, after a four-month review, a committee of scientists concluded that the Nobel prize-winning IPCC has “assigned high confidence to statements for which there is very little evidence”, has failed to enforce its own guidelines, has been guilty of too little transparency, has ignored critical review comments and has had no policies on conflict of interest”. Read whole piece.
- "Climate change body told: Get facts right", by Stephen Foley, New Zealand Herald. "Authors reported high confidence in some statements for which there is little evidence. Furthermore, by making vague statements that were difficult to refute, authors were able to attach "high confidence" to the statements', it said. One summary for policy makers 'contains many such statements that are not supported sufficiently in the literature, not put into perspective, or not expressed clearly'." Read whole piece.
- "Big changes proposed to UN climate panel", The Associated Press, Canadian Broadcasting Corporation Web site. "Still, Shapiro said the way the report expressed confidence in scientific findings was incomplete and at times even misleading. In the panel's first report, which is about the physical causes of global warming, scientists may have underestimated how confident they were in their conclusions, Shapiro said. But the second report, about the effects on daily life, in at least one instance claimed high confidence when there was no backing for that, he said." Read whole piece. | <urn:uuid:39278d8a-414a-4972-9eb7-7a96d216dccc> | 2.734375 | 710 | Content Listing | Science & Tech. | 35.380277 |
I think these are two of the most sustainable and cleanest energy sources. I have fantasies of a future where these play a much more major role in how we power the world than they do currently. Surprisingly, a quick search of Cyburbia doesn't turn up much discussion of them. So let's start that here.
Solar comes in two forms: Solar power generation and "passive solar". I think passive solar is brilliant stuff and should interest planners because it is mostly about planning infrastructure to take advantage of seasonal variations. Most people would likely think of it in terms of "energy conservation" but I don't think that's completely accurate. In essence, it uses sunlight to help heat a home (or other building) in winter and shade to help keep it from getting heated up too much in summer. It tends to be very low tech. And my understanding is that it results in a building that is more fundamentally comfortable.
Wind power: Well, this is kind of a secondary source of solar power since the sun's energy is responsible for making wind to begin with. I thought about leaving it out of the title of the thread entirely because of that but I know most people don't think of wind power as a form of solar energy.
I also like the fact that solar and wind tend to complement each other: if it isn't a beautiful, sunny day, the odds are good that winds are higher than usual.
Here are a few (somewhat random) links:
Solar Energy: http://www.solarenergy.org/
Wind Power: http://www.awea.org/
Here is a page I really like, a map that shows state resources: http://www.eere.energy.gov/windandhy...activities.asp
Map that shows how many megawatts of wind energy are produced in each state: http://www.eere.energy.gov/windandhy...d_capacity.asp
Last, I will note that I also like solar and wind for the potential to support my other dream of having more distributed, small/local generation instead of relying almost exclusively on centralized power generation.
Thank you very much Ms. Sue!!
Can you help me with this problem please... 3y+5(y-3)+8
Thank you Ms. Sue!!!
Can you help me with this problem please... 8x-3(x-5)=30
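A quick Python check of the two algebra questions above (the worked steps shown are illustrative solutions, not from the original thread):

```python
# Problem 1: simplify 3y + 5(y - 3) + 8.
#   Distribute: 3y + 5y - 15 + 8, then combine like terms: 8y - 7.
simplify_check = all(3*y + 5*(y - 3) + 8 == 8*y - 7 for y in range(-10, 11))
print(simplify_check)            # True

# Problem 2: solve 8x - 3(x - 5) = 30.
#   Distribute: 8x - 3x + 15 = 30, so 5x = 15 and x = 3.
x = 3
print(8*x - 3*(x - 5) == 30)     # True
```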
The quantity of heat that changes the temperature of a mass m of a substance is given by Q = cmΔT, where c is the specific heat capacity of the substance. And for a change of phase, the quantity of heat that changes the phase of a mass m is Q = mL, where L is the heat of fusion...
Imagine a giant dry-cleaner's bag full of air at a temperature of -31 floating like a balloon with a string hanging from it 15 above the ground. Estimate what its temperature would be if you were able to yank it suddenly back to Earth's surface.
One-ninth of Polymer Plastics sales are made in New England. If New England sales amount to $600,000, what are the total sales of the company?
ronnie swam 2.5 laps on monday and swam twice as far on tuesday how far did he swim in all?
Solve equation by expressing each side as a power of the same base & then equating exponents. e^x+5=1/e^6
weber state university
CH2=CH-CH2CH3+H2O What is the product of the reaction above
For Further Reading
|Figure 1. A portion of Louis Agassiz's 1844 "genealogy of the class of fishes," reproduced from Patterson's 1977 paper on teleostean phylogeny. The figure shows the geological (historical) distribution of various types of fishes. "Agassiz's example shows clearly," Patterson writes (1977:580), "that belief in evolution is not necessary for the production of such diagrams: the information contained in these diagrams is therefore not necessarily concerned with evolution or phylogeny."|
Copyright © 1996 Access Research Network.
All rights reserved. International copyright secured.
File Date: 6.22.96
Debugging OWL Ontologies using Swoop
What is an Ontology?
Broadly speaking, an ontology is a set of axioms used
to formally define concepts and relations that exist in a
certain domain and assert information about individuals in that domain.
What is OWL? OWL is the de-facto standard for the creation and exchange of ontological models on the Web, and thus represents a basis of the Semantic Web. One of the main benefits of OWL is the support for formal reasoning, and to this effect, the sublanguage OWL-DL is its most relevant subset since its semantics are firmly rooted in Description Logic, a decidable fragment of First Order Logic.
Why is debugging OWL Ontologies hard? OWL-DL is based on an expressive description logic -- SHOIN(D), and a DL reasoner can be used to derive inferences from, or detect contradictions in an OWL Ontology. However, given the expressivity of the logic, newcomers to OWL have difficulty in understanding inferences and/or fixing errors in an OWL ontology, since most reasoners only report inferences (or errors) in the ontology without explaining how or why they are derived. Even DL experts find it hard to debug errors in large, complex ontologies. The two main types of semantic or logical errors found in an OWL ontology are:
- Unsatisfiable Concept - this occurs when a concept definition contains a contradiction which prevents the concept from having a model, i.e., the concept is forced to not have any individuals.
- Inconsistent Ontology - this occurs when the axioms in an ontology contain a contradiction which prevents the ontology from having a model, e.g., when the ontology asserts that an individual belongs to an unsatisfiable concept.
Our Goal? To devise non-standard DL services that are specifically catered towards the debugging and repair of logical inconsistencies in OWL ontologies. An additional motivation is to demonstrate that good debugging support will not only give users control over their modeling, but also encourage them to experiment more freely with expressions, and help them come to understand their ontologies through the debugging process.
This service is used to understand the output of the reasoner as it provides the justification premises (axioms) for any arbitrary entailment derived by the reasoner from an OWL-DL ontology -- from a debugging standpoint, this service is critical for identifying the precise set of axioms, along with the relevant sub-parts of each axiom, that is responsible for a particular error. For example, Figure 1 displays the use of the service to debug the unsatisfiable concept AI_Dept in the University OWL Ontology.
Figure 1: Axiom Pinpointing
As can be seen, in addition to identifying the minimal set of axioms in the ontology responsible for the error, the service also provides a natural language explanation for the cause of the root clash or contradiction. Also, notice how the axioms are ordered and indented in order to create a chain of inferences leading to the source of the error. In this case, an instance of AI_Dept is an instance of CS_Department (because of the subclass relation in 1), and, through a series of inferences (axioms 2-5) an instance of EE_Department, which is defined to be disjoint with CS_Department (axiom 6).
The Root Error Pinpointing service is useful when there are a large number of semantic errors in an ontology. It prunes the problem space by separating the root or critical errors in the ontology from the derived or dependent ones. Errors typically spread when a class is defined to be a subclass of another unsatisfiable class, or is related to another unsatisfiable class via an existential property restriction, in which case it becomes unsatisfiable as well.
Figure 2 displays the use of this service to debug the large number of unsatisfiable classes in the Tambis OWL Ontology - in this case only 3 out of 144 unsatisfiable classes are identified as roots.
Figure 2: Root Error Pinpointing
Using this service, we have an efficient iterative procedure to remove all the unsatisfiability bugs in the ontology: at each point, we focus solely on fixing all the root classes, which effectively fixes a large set of directly derived class bugs. However, doing so might reveal additional contradictions and a new set of unsatisfiable classes. We then use the service again to obtain a new set of roots and repeat the fixing process iteratively till no unsatisfiable classes are left in the ontology. In the Tambis case, we find that only 6 unsatisfiable classes (in two iterations) need to be fixed in order to resolve all the errors.
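The root/derived split can be sketched abstractly: treat each unsatisfiable class as a node that may depend on other unsatisfiable classes (via subclassing or an existential restriction), and pick out the roots. The data structures below are hypothetical; Swoop derives the actual dependencies with a DL reasoner.

```python
def find_roots(unsatisfiable, depends_on):
    """A class is a root error if its unsatisfiability does not depend on
    any other unsatisfiable class (e.g. via a subclass axiom or an
    existential restriction on an unsatisfiable filler)."""
    return {cls for cls in unsatisfiable
            if not any(dep in unsatisfiable
                       for dep in depends_on.get(cls, ()))}

# Toy example: B and C are unsatisfiable only because they reference A.
unsat = {"A", "B", "C"}
deps = {"B": {"A"}, "C": {"A"}}  # A contains its own contradiction

print(find_roots(unsat, deps))   # {'A'}: fix A first, then re-run the reasoner
```

Re-running `find_roots` after each repair mirrors the iterative procedure described above: fixing the roots resolves the derived errors, and any newly revealed unsatisfiable classes are handled in the next pass.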
This service acts as a guideline for ontology repair, by aiding the user in understanding and evaluating the various repair options available. The basic methodology is the following: it takes the critical erroneous axioms as inputs, considers various metrics to rank axioms (defining notions such as semantic and syntactic relevance), and automatically generates repair solutions to fix any or all of the errors in the ontology based on the axiom ranks.
Figure 3: Ontology Repair
Figure 3 demonstrates the use of this service for generating a repair plan to remove all root unsatisfiable concepts in the University OWL Ontology. Key features of this service include the option to preview entailments that are lost/added when axioms are removed/inserted (impact analysis), the ability to modify plans on the fly by forcibly keeping or removing certain axioms, and the auto-suggestion of axiom rewrites in special cases.
Other non-standard query features in Swoop that are useful for debugging are: displaying sub/super classes of arbitrary class expressions, and "Show References", which highlights the usage of an OWL entity (concept/property/individual) by listing all references of that entity in local or external ontological definitions.
Figure 4 displays the use of the latter query type to debug an unsatisfiable class in the Sweet-JPL ontology. In this case, the class OceanCrustLayer is unsatisfiable because it is a subclass of both GeometricObject_2D and GeometricObject_3D which enforce conflicting restrictions on the data-property hasDimension, and here, the use of Show References reveals the incorrect usage of the property hasDimension.
Figure 4: Using non-standard queries to debug logical inconsistencies
Part of good debugging support for OWL ontologies is making experimentation safe, easy, and effective. Swoop has an ontology versioning feature that supports ad hoc undo/redo of changes (with logging) coupled with the ability to checkpoint and archive different ontology versions. Such a feature can play a vital role in ontology debugging. Consider the scenario in which a user starts with an inconsistent ontology version, performs a set of changes in succession (undoing and redoing as necessary), in order to reach a final consistent version. Here the change logs give a direct pointer to the source of inconsistency. The checkpointing allows the user to switch between versions easily exploring different modeling alternatives.
Figure 5: Change Management in Swoop: change log with option for undo, checkpoints etc.
Alternately, if the user has two different ontology versions, one consistent and the other inconsistent, a diff between the versions can be performed using Swoop's Concise Format Renderer in order to determine possible change paths between the versions. By examining these change paths, and noting the common bug-producing changes, users can find and eliminate erroneous entity-definitions and axioms in the ontology.
Once a series of changes has proven effective in removing the defect and seems sensible, the modeler can use Swoop's integrated Annotea client to publish the set of changes plus a commentary (see Figure 6). Other subscribers to the Annotea store can see these changes and commentary in context they were made, apply the changes to see their effect, and publish responses. These exchanges persist, providing a repository of real cases for subsequent modelers to study.
Figure 6: Using the Annotea Client in Swoop to publish and share explanations on the Ontology. In this case, the annotation also contains a change set that when applied to the ontology will rectify the problem.
As noted earlier, the Axiom Pinpointing service can be used to identify the minimal set of axioms responsible for any arbitrary entailment of an OWL-DL ontology. We have thus extended our explanation support in Swoop to cover any inference derived by the reasoner, Pellet. Figure 7 shows an example used to explain a subsumption (subclass) relationship in the Koala ontology.
Figure 7: Explaining why MaleStudentWith3Daughters is a subclass of Parent in the Koala ontology
The debugging support has been integrated into Swoop since v2.3. Download it here.
All ontologies listed on this page and used as samples for debugging are listed here.
Aditya Kalyanpur. Debugging and Repair of OWL Ontologies. Ph.D. Dissertation,
Aditya Kalyanpur, Bijan Parsia, Evren Sirin, Bernardo Cuenca-Grau. Repairing Unsatisfiable Concepts in OWL Ontologies. In The European Semantic Web Conference ESWC 2006 (Best-Paper Award Winner)
Aditya Kalyanpur, Bijan Parsia, Evren Sirin, James Hendler. Debugging Unsatisfiable Classes in OWL Ontologies. Journal of Web Semantics, Volume 3, Issue 4, 2006
Aditya Kalyanpur, Bijan Parsia, Evren Sirin. Black Box Techniques for Debugging Unsatisfiable Concepts. In The 2005 International Workshop on Description Logics - DL2005,
Bijan Parsia, Evren Sirin, Aditya Kalyanpur. Debugging OWL Ontologies. In The 14th International World Wide Web Conference
Note that the work
described here represents ongoing research. We appreciate and welcome any
feedback on this topic - please send mail to the Swoop-Devel
list (especially if you have examples of buggy OWL ontologies!)
Last Updated: February 2006
National Geographic Atlas of the Ocean: The Deep Frontier by Sylvia A. Earle, National Geographic, £35, ISBN 0792264266
WHEN student Marie Tharp and oceanographer Bruce Heezen began drawing maps of the ocean floor from ships' echo-sounder traces in the 1950s, they created an inspiring image of the hidden face of our planet. Their maps continue to fascinate generations of would-be deep-sea explorers, inviting us to pore over every lonely seamount and ponder each mysterious trough.
Tharp is one of the contributors to the new atlas. It traces the history of exploration and sketches the hot topics of marine science, from deep-sea vents and El Niño to bioluminescence and polar ...
Related Information Links
The Arrhenius Equation
Background reading: Common sense and chemical intuition suggest that the higher the temperature, the faster a given chemical reaction will proceed. Quantitatively, this relationship between the rate of a reaction and its temperature is described by the Arrhenius equation. At higher temperatures, molecules move faster, so the probability that two molecules will collide is higher, and a greater fraction of those collisions carries enough kinetic energy to exceed the activation energy. The activation energy is the minimum amount of energy required for a reaction to occur.
This calculator calculates the effect of temperature on reaction rates using the Arrhenius equation.
R has the value of 8.314 × 10⁻³ kJ mol⁻¹ K⁻¹.
You should use this calculator to investigate the influence of temperature on the rate coefficient.
This calculator allows you to perform three different calculations:
Sample Problem: The reaction:
has a rate coefficient of 1.0 × 10⁻¹⁰ s⁻¹ at 300 K and an activation energy of 111 kJ mol⁻¹. What is the rate coefficient at 273 K?
(Solution: calculate the value of A for a temperature of 300 K, then use the calculated value of A to calculate k at a temperature of 273 K. We generally assume that A and the activation energy Ea do not vary with temperature).
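The two-step solution can be sketched in Python, assuming the standard Arrhenius form k = A·exp(−Ea/RT):

```python
import math

R = 8.314e-3               # kJ mol^-1 K^-1
Ea = 111.0                 # kJ mol^-1 (activation energy)
k1, T1 = 1.0e-10, 300.0    # known rate coefficient at 300 K

# Step 1: solve k = A * exp(-Ea / (R * T)) for the pre-exponential factor A.
A = k1 * math.exp(Ea / (R * T1))

# Step 2: reuse A (assumed temperature-independent) at 273 K.
k2 = A * math.exp(-Ea / (R * 273.0))
print(f"k(273 K) = {k2:.2e} s^-1")   # about 1.2e-12 s^-1
```

Note that A cancels if you divide the two rate equations, so the same answer follows from ln(k2/k1) = −(Ea/R)(1/T2 − 1/T1).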
An exponent defines the number of times a number is to be multiplied by itself. For example, in a^b, a is the base and b is the exponent: a is multiplied by itself b times. In a numerical example, 2^5 = 2 × 2 × 2 × 2 × 2 = 32.
An exponent can also be referred to as a power: a number with an
exponent of 2 is raised to the second power. The following are other terms
related to exponents with which you should be familiar:
- Base. The base refers to the 3 in 3^5. It is the number that is being multiplied by itself however many times specified by the exponent.
- Exponent. The exponent is the 5 in 3^5. It indicates the number of times the base is to be multiplied by itself.
- Square. Saying that a number is squared means that the number has been raised to the second power, i.e., that it has an exponent of 2. In the expression 6^2, 6 has been squared.
- Cube. Saying that a number is cubed means that it has been raised to the third power, i.e., that it has an exponent of 3. In the expression 4^3, 4 has been cubed.
It may be worthwhile to memorize a few common powers before taking the Math IIC, in order to save the time you’d take to calculate them during the test. Here is the list of squares from 1 through 10:

1, 4, 9, 16, 25, 36, 49, 64, 81, 100

Memorizing the first few cubes might also be helpful:

1, 8, 27, 64, 125

The first few powers of 2 are also useful to know for the test:

2, 4, 8, 16, 32, 64, 128, 256
In order to add or subtract numbers with exponents, you must first find the value of each power, then add the two numbers. For example, to add 3^3 + 4^2, you must expand the exponents to get (3 × 3 × 3) + (4 × 4), and then add: 27 + 16 = 43.
However, algebraic expressions that have the same bases and exponents, such as 3x^4 and 5x^4, can be added and subtracted. For example, 3x^4 + 5x^4 = 8x^4.
Multiplying and Dividing Numbers with Exponents
To multiply exponential numbers raised to the same exponent, raise their product to that exponent:

a^n × b^n = (a × b)^n

To divide exponential numbers raised to the same exponent, raise their quotient to that exponent:

a^n ÷ b^n = (a ÷ b)^n

To multiply exponential numbers or terms that have the same base, add the exponents together:

a^m × a^n = a^(m + n)

To divide two same-base exponential numbers or terms, just subtract the exponents:

a^m ÷ a^n = a^(m − n)
If you need to multiply or divide two exponential numbers
that don’t have the same base or exponent, you’ll just have to do
your work the old-fashioned way: multiply the exponential numbers
out, and multiply or divide them accordingly.
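These four rules are easy to sanity-check numerically; the values below are arbitrary, chosen so the divisions come out exact:

```python
a, b, n, m = 6, 3, 4, 2

assert (a**n) * (b**n) == (a * b)**n      # same exponent: multiply the bases
assert (a**n) / (b**n) == (a / b)**n      # same exponent: divide the bases
assert (a**n) * (a**m) == a**(n + m)      # same base: add the exponents
assert (a**n) / (a**m) == a**(n - m)      # same base: subtract the exponents
print("all four rules hold")
```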
Raising an Exponent to an Exponent
Occasionally you might encounter a power raised to another power, as in (3^2)^4. In such cases, multiply the exponents:

(a^m)^n = a^(m × n)
Exponents and Fractions
To raise a fraction to an exponent, raise both the numerator and the denominator to that exponent:

(a ⁄ b)^n = a^n ⁄ b^n
Exponents and Negative Numbers
As we said in the negative numbers section, when you multiply
a negative number by a negative number, you get a positive number,
and when you multiply a negative number by a positive number, you
get a negative number. These rules affect how negative numbers function
in reference to exponents.
- When you raise a negative number to an even exponent, you get a positive number. For example, (–2)^4 = 16. To see why this is so, let’s break down the example. (–2)^4 means –2 × –2 × –2 × –2. When you multiply the first two –2s together, you get 4 because you are multiplying two negative numbers. Then, when you multiply the 4 by the next –2, you get –8, since you are multiplying a positive number by a negative number. Finally, you multiply the –8 by the last –2 and get 16, since you’re once again multiplying two negative numbers.
- When you raise a negative number to an odd power, you get a negative number. To see why, refer to the example above and stop the process at –8, which equals (–2)^3.
These rules can help a great deal as you go about eliminating
answer choices and checking potential correct answers. For example,
if you have a negative number raised to an odd power and you get
a positive answer, you know your answer is wrong. Likewise, on that same
question, you could eliminate any answer choices that are positive.
There are a few special properties of certain exponents
that you also should know.
Any base raised to the power of zero is equal to 1. If you see any exponent of the form x^0, you should know that its value is 1. Note, however, that 0^0 is undefined.

Any base raised to the power of one is equal to itself. For example, 2^1 = 2, (–67)^1 = –67, and x^1 = x. This can be helpful when you’re attempting an operation on exponential terms with the same base. For example:

x^1 × x^3 = x^(1 + 3) = x^4
Exponents can be fractions, too. When a number or term is raised to a fractional power, it is called taking the root of that number or term. This expression can be converted into a more convenient form: x^(a ⁄ b) is the bth root of x raised to the ath power. For example, 2^(13 ⁄ 5) is equal to the fifth root of 2 to the thirteenth power:

2^(13 ⁄ 5) = (2^13)^(1 ⁄ 5)

The √ symbol is also known as the radical, and anything under the radical (in this case, 2^13) is called the radicand. For a more familiar example, look at 9^(1 ⁄ 2), which is the same as √9 = 3.
Seeing a negative number as a power may be a little strange the first time around. But the principle at work is simple. Any number or term raised to a negative power is equal to the reciprocal of that base raised to the opposite power. For example:

x^(–n) = 1 ⁄ x^n

Or, in a numerical example, 2^(–3) = 1 ⁄ 2^3 = 1 ⁄ 8.
You’ve got the four rules of special exponents. Here are some examples to firm up your knowledge:
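As a quick check of the four special-exponent rules, the following uses arbitrary bases:

```python
import math

assert 7**0 == 1                                  # zero power: always 1
assert (-67)**1 == -67                            # first power: the base itself
assert 9**0.5 == 3.0                              # fractional power = root
assert math.isclose(2**(13/5), (2**13)**(1/5))    # 2^(13/5) = fifth root of 2^13
assert 2**-3 == 1 / 2**3                          # negative power = reciprocal
print("special-exponent rules verified")
```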
Lightning occurs with ALL thunderstorms. An average 54 deaths and approximately 300 injuries occur around the U.S. annually. An estimated 100,000 thunderstorms occur nationwide each year. The southeast Texas area averages 50 to 60 days with thunderstorms per year.
Lightning results from the buildup and discharge of electrical energy between positive and negatively charged areas. Most lightning deaths or injuries occur when people are on a golf course, near water, or standing under trees for shelter. The late afternoon or early evening hours during the summer are the most common times for lightning casualties nationwide, but they can occur just about any time of year near the Gulf coast. The Gulf coast has the highest incidences of lightning strikes annually throughout the U.S.
Lightning can strike several miles away from a thunderstorm. If you hear thunder or know a thunderstorm is nearby or approaching, you should immediately take shelter in a building. Remember: when thunder roars, go indoors! Stay away from trees, power poles and antennae, and away from lakes, ponds and water. Stay away from metal objects such as fences, railroad tracks and metal bleachers. If a closed building is not available, a closed automobile is your next best option for a relatively safe place when lightning occurs. Avoid using telephones and electrical appliances during a thunderstorm.
If you are caught outside during a thunderstorm...
No place outside is safe when lightning is in the area, but if you are caught outside with no safe shelter anywhere nearby, the following may reduce your risk:
- Immediately get off elevated areas such as hills, mountain ridges or peaks
- Never lie flat on the ground
- Stay away from tall isolated trees, power poles and antennae.
- Immediately get out and away from ponds, lakes and other bodies of water
- Stay away from objects that conduct electricity (fences, power lines, railroad tracks, metal bleachers)
- Wait at least 30 minutes since the last thunder is heard to resume activities
If someone is struck...
- Victims do not carry an electric charge and may require immediate medical attention
- Monitor the victim and begin CPR or use an AED if necessary
- Call 911 for help
For additional lightning safety information and resources, visit the National Weather Service Lightning Safety Web Site.
What are some lightning events that have impacted Southeast Texas?
July 15, 2012: Two men died when they were struck by lightning while sheltering under a tree during a thunderstorm. The two men had been playing in a soccer match and sheltered under a tree on the perimeter of the soccer field when the storm began.
June 30, 2012: A utility worker was killed while repairing a power line in northeast Harris County. The man was initially revived at the scene but died the next day.
September 9, 2010: On an elementary school soccer field in Porter (Montgomery County), a 21 year old male and a 9 year old female were struck by an early evening lightning strike. The male's injuries proved fatal almost one month later. The female recovered from her injuries.
June 3, 2009: A male jogger was found dead on Crystal Beach (Bolivar Peninsula, Galveston County). The victim had burns on his torso from a mid morning lightning strike.
October 7, 2007: An afternoon lightning strike in Danburry (Brazoria County) killed a man who was standing underneath a tree.
June 5, 2007: An early morning lightning strike injured a man who was lying on the beach a few miles southwest of the mouth of the Colorado River (Matagorda County).
May 13, 2007: Four people were injured from a afternoon lightning strike at Bear Creek Park (Harris County).
September 14, 2004: One fatality and forty injuries resulted from an afternoon lightning strike during a high school football practice in Grapeland (Houston County).
August 2, 2000: Seventeen teenagers were injured when lightning struck a tree at Astroworld (Harris County) in the afternoon.
Lightning Statistics for Southeast Texas (1992-2012 graphs)
- Lightning Events (by month)
- Lightning Events (by time)
In 2010, the Manomet Center for Conservation Sciences released a study of the carbon cycle of biomass. The Center looked at biomass using whole forests, a technique that is rarely used in biomass. Waste wood is the much more common fuel and has a more efficient "carbon profile."
The study itself concluded that biomass using waste wood is beneficial to the environment. After many press articles misinterpreted the results of the study, Manomet issued a clarification, saying in part:
"One commonly used press headline has been 'wood worse than coal' for GHG emissions or for 'the environment.' This is an inaccurate interpretation of our findings, which paint a much more complex picture."… "when the wood used to fuel an energy facility is all, or nearly all, logging debris that would have decomposed in the forest anyway, the debt period can be relatively short…"
Unfortunately, Massachusetts' Department of Energy Resources (DOER) considered only the study's overall conclusion that biomass using whole forests is not carbon efficient. The agency moved forward with plans that effectively categorize biomass as a non-renewable energy source.
In April 2012, Massachusetts' Department of Energy Resources (DOER) finalized harsh new regulations on biomass. These rules will prevent biomass facilities from qualifying for renewable energy tax credits unless they can meet unrealistic efficiency standards. This will cause existing facilities to close or dramatically cut back energy production, and will all but prevent new facility development.
Massachusetts' new restrictive biomass regulations will hurt the industry as well as its goal of stimulating the use of renewable energy in the state.
Below is a collection of several studies by environmental experts that refute Manomet's findings:
How Manomet got it Backwards (Dr. William Strauss, President, FutureMetrics)
Carbon 101: Understanding the Carbon Cycle and the Forest Carbon Debate (Dr. Jim Bowyer, Professor Emeritus, University of Minnesota Department of Bioproducts and Biosystems Engineering, et al.)
How Carbon Neutral is forest bioenergy? (Roger Sedjo, Senior Fellow and Director, Resources for the Future)
Accounting for Greenhouse Gas Emissions from Wood Bioenergy (Jay O'Laughlin, Professor of Forestry and Policy Sciences, Director of the College of Natural Resources Policy Analysis Group, University of Idaho)
A Look at the Details of CO2 Emissions from burning Wood vs. Coal (Dr. William Strauss, President, FutureMetrics)
Rock and the Rock Cycle
Rocks are always on the move through the rock cycle!
Almost all of the rock that we have on Earth today is made of the same stuff as the rocks that dinosaurs and other ancient life forms walked, crawled or swam over. While the stuff that rocks are made from has stayed the same, the rocks themselves, have not. Over time rocks are recycled into other rocks. Moving tectonic plates are responsible for destroying and forming many different types of rocks.
In all my years of teaching and writing about web design, I don’t think I’ve ever seen a topic explode in terms of interest level and passion as quickly as I have with HTML5. Despite the huge amount of interest in the topic, there is still a great deal of confusion about what HTML5 is and how to go about learning it. In my opinion, one of the best ways to approach HTML5 is by first comparing it to HTML 4 and learning the differences. That way, it’s easier to understand exactly what is changing in regards to HTML and cut through some of the hype and clutter that is currently surrounding the topic.
Although HTML5 represents an ambitious step forward in the evolution of HTML, it is largely a revised version of HTML 4. If you are comfortable writing HTML4, you should find yourself quite comfortable with the majority of the HTML5 specification. With that in mind, let’s take a closer look at the differences between HTML5 and HTML 4.
First, it’s important to note that the HTML5 specification is designed not just to replace HTML 4, but also the XHTML 1.0 and DOM Level 2 specifications. That means the serialization of HTML to XML and the DOM specification are now contained inside the HTML5 specification, instead of belonging to separate specs. It also contains detailed parsing rules that are designed to improve the interoperability of systems that use HTML documents. As such, the HTML5 Specification is much larger than HTML 4 and covers a lot more ground.
One of the first places you’ll notice a difference in writing HTML5 documents is in the doctype and character encoding. In the past, authors had to use long, arcane doctypes, chosen according to the version of HTML they were using, to trigger standards mode in modern browsers. Now, rather than having to deal with multiple complex doctypes, you simply use a single doctype that declares the document as an HTML file. Since HTML is no longer SGML-based, no DTD is required. Character encoding is likewise simplified: all that is required now is a meta tag with a charset attribute.
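For illustration (the exact snippets aren’t reproduced in this text), the familiar HTML 4.01 Strict doctype and its HTML5 replacement look like this:

```html
<!-- HTML 4.01 Strict: version-specific doctype plus a meta tag for encoding -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">

<!-- HTML5: one short doctype, one short charset declaration -->
<!DOCTYPE html>
<meta charset="utf-8">
```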
There are, of course, new elements in HTML5 that are not part of HTML 4. These new elements assist with page structure and code semantics, allow embedded content, and include new phrasing tags that help add additional meaning to content within the page. By now, you’ve probably heard of new elements such as the section, article, and header tags that will make structuring pages more meaningful and elements like the video and audio tags that make it much easier to add multimedia to sites. In addition to new tags, several new attributes have been added to existing elements to extend their power and functionality as well.
Forms undergo a dramatic update in HTML5 as well. Much of the work done on the Web Forms 2.0 specification has been added to the HTML5 spec. The result of this is new form controls and input types that allow you to create more powerful forms and more compelling user experiences. New form elements include date pickers, color pickers, and numeric steppers. The input element is now much more versatile, with new input types such as url, email, and search that will make it easier to control the presentation and behavior of form content both on the web and within mobile applications. It’s worth noting that HTML5 also adds support for the PUT and DELETE form methods, making it easier to submit data to a wider array of applications.
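As a sketch of the new form controls mentioned above (the names and attribute values here are just illustrative examples, not from any particular app), a fragment using several of the new input types might look like:

```html
<form action="/search" method="get">
  <!-- New HTML5 input types; browsers that don't support one
       fall back to a plain text field. -->
  <input type="search" name="q" placeholder="Search...">
  <input type="email" name="contact">
  <input type="url" name="homepage">
  <input type="date" name="arrival">
  <input type="number" name="guests" min="1" max="10" step="1">
  <input type="color" name="theme">
  <input type="submit" value="Go">
</form>
```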
By far the addition to HTML5 that is getting the most attention is the introduction of several new APIs that are designed to make developing web applications easier across multiple devices and user agents. These APIs include the much talked about video and audio API, an API for building offline applications, an API for editing page content, one that controls drag-and-drop functionality, another that focuses on history, and one that controls application protocols and media types. Other APIs, like Geolocation, Web Sockets, and Web Messaging, are associated with HTML5 but are defined within their own specifications.
Those are a few of the highlights of the differences between HTML5 and HTML 4, and should give you a good idea of how HTML5 will change the way that web sites and web applications are authored. Sign up for the lynda.com Online Training Library® New Releases announcement so you’ll know when my HTML5 tutorials are available. | <urn:uuid:7b9d595d-0bba-489c-a998-792d5de6ccf4> | 3.046875 | 936 | Personal Blog | Software Dev. | 49.939615 |
From data in the links provided below, it appears that the current rate of heat gain of the world ocean would be enough to raise a volume of water more than 100 times greater than Sydney Harbour from freezing to boiling point every day.
World Ocean Heat Content
Graph Credit: Adapted from S. Levitus et al., Geophys. Res. Letts.; © AGU 2012
For those who imagine that the stability of global air temperature at record high levels over the last decade means we can keep cooking the planet, think again - the heat is going into the oceans.
Science Magazine states
"Global warming contrarians remind the public that the world has not warmed all that much, if at all, during the past decade or so. But that's the atmosphere. Oceanographers with their thermometers in Earth's biggest reservoir of heat—the world's ocean—report in a paper to be published in Geophysical Research Letters that greenhouse warming has in fact been proceeding apace the past decade, not to mention the past half century. Ninety-three percent of the heat trapped by increasing greenhouse gases goes into warming the ocean, not the atmosphere. So taking the ocean's temperature is the most comprehensive way to monitor global warming. A group of National Oceanic and Atmospheric Administration scientists has revised and updated their decade-old compilation of temperature measurements from the upper 2000 meters of the world's ocean. Its store of heat (red line with error bars in graph above) steadily increased over the past 20 years. And the upper ocean has warmed so much in the past 50 years that its added heat would be enough to warm the lower atmosphere by about 36°C (thankfully a physically impossible feat)."
This data is from an article since published in Geophysical Research Letters
GEOPHYSICAL RESEARCH LETTERS, VOL. 39, L10603, 5 PP., 2012
World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010 by S. Levitus National Oceanographic Data Center, NOAA, Silver Spring, Maryland, USA et al
Key Point: The rise in world ocean heat content since 1955 is due to the increase in atmospheric greenhouse gasses
- updated estimates of the change of ocean heat content and the thermosteric [heat induced] component of sea level change of the 0–700 and 0–2000 m layers of the World Ocean for 1955–2010.
- estimates are based on historical data not previously available, additional modern data, and bathythermograph data corrected for instrumental biases.
- heat content of the World Ocean for the 0–2000 m layer increased by ... 0.09°C.
- Ocean accounts for approximately 93% of the warming of the earth system since 1955.
A discussion of this material is at http://www.skepticalscience.com/Brea...ill_A_LOT.html
This post notes that the ocean is very big, and claims (with an apparent mathematical error) that the tiny temperature increase found by such studies has been enough to boil Sydney Harbour dry twice a day. Checking the numbers (see below) suggests the real magnitude is far worse.
How come? This massive heat increase is due to humans adding CO2 to the atmosphere, which lets light enter from the sun but does not let heat escape. CO2 is like a transparent blanket. "NOTHING ELSE FITS THE EVIDENCE... When the first analyses of Ocean Heat Content calculated from old temperature data from the oceans were first published in the early 2000's, they were described as the 'Smoking Gun'. Because they were. They are the primary observational evidence for Global Warming and the human nature of it."
The graph of World Ocean Heat Content above shows a very clear and obvious direction upwards. The average temperature of the ocean at any given time is hotter than at earlier times. The ocean is the main repository of heat on the Earth's surface, containing about 1.3 billion teralitres of water, and absorbs an estimated 93% of global warming. Unlike the atmosphere, which has been in a hot lull since the turn of the millennium, the ocean temperature has continued its remorseless increase. So showing that the "missing heat" from the last decade can in fact be seen in the main repository is a correct analysis of the whole system.
0.1 degrees is small. But the ocean is very big. All of that extra heat, with its continued upward trend, has to have come from somewhere. Energy cannot be created or destroyed. As the scientific paper and summaries explain, the only plausible explanation for this obvious heating trend is the extra blanket we have added to the air through CO2 emissions.
The Sydney Harbour example is just a vivid way of showing how big this heat increase really is. The Harbour seems big, but it is tiny compared to the whole ocean. If all the extra heat measured in the whole ocean had been concentrated in a body of water the size of Sydney Harbour (known as a Sydharb in Australian water circles), 0.1 degrees over the whole ocean would reportedly be enough for 100 degrees every twelve hours.
Let's look at the calculations:
0.1 Degrees for 1.3 billion teralitres over 50 years (world ocean)
= 100 Degrees for 1.3 million teralitres over 50 years
= 100 Degrees for 26,000 teralitres over 1 year
=~ 100 Degrees for 40 teralitres over 12 hours
In fact Sydney Harbour only contains 0.56 teralitres. So it seems there is a mathematical error in the Skeptical Science source I cited above, and the amount of measured ocean warming could bring to boiling point [edit to change - not boil dry] more than 100 Sydharbs every day, starting from freezing point [edit to change - not starting from ice]. While the GRL data only covers the top two km of the ocean, assuming no heat change for the deep ocean would only cut this estimate by half, to 50 Sydney Harbours per day.
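A quick back-of-the-envelope script, using only the approximate figures quoted above, reproduces the corrected estimate:

```python
# Re-check of the Sydney Harbour comparison using the figures quoted above.
OCEAN_VOLUME_TL = 1.3e9  # world ocean volume, teralitres
WARMING_C = 0.1          # measured 0-2000 m warming, 1955-2010, degrees C
BOIL_SPAN_C = 100.0      # freezing point to boiling point, degrees C
YEARS = 50
SYDHARB_TL = 0.56        # volume of Sydney Harbour, teralitres

# Volume of water the same heat could instead take from 0 C to 100 C:
equivalent_tl = OCEAN_VOLUME_TL * WARMING_C / BOIL_SPAN_C  # 1.3e6 TL

harbours_per_day = equivalent_tl / (YEARS * 365) / SYDHARB_TL
print(round(harbours_per_day))  # about 127 -> "more than 100" Sydharbs per day
```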
Readers are welcome to check my calculations here. Once we understand the orders of magnitude, the steady continued rise in ocean temperature translates to a massive system heat increase, that has to go somewhere. We are accelerating the addition of the CO2 blanket, and now there are feedback loops from Arctic methane, etc. This illustration suggests we will hit climate tipping points faster than we realize. | <urn:uuid:e07ba644-c7ad-4744-8157-06a943ff010f> | 3.46875 | 1,319 | Comment Section | Science & Tech. | 56.054815 |
The definition itself does not make things very clear, though. Let us try to understand a bit more. Suppose you have a web application that you want to deploy on multiple platforms, say Android, iPhone, etc. In this app, you want to use one or more of the platform's services to obtain some kind of data or carry out some kind of processing. Take, for example, an application that lets you invite friends to a movie as soon as you book a movie ticket. The app lets you select which friends to invite directly from your phone's contacts book. That means, if you're the developer of such an app, you'll most probably have to deal with consuming the platform's contacts book using the phone APIs, and each platform would obviously have its own different way of doing that. So there are 2 problems here:
1) How can your web application access a platform service like the phone's contacts book in the first place?
2) How can you do so in a platform-agnostic way, so that your web app works not just on Android but also on iPhone, notwithstanding the different ways of using the contacts book on both?
PhoneGap aspires to be the answer to both of the above questions, and also allows creation of hybrid applications, consisting of both native and web components. How? It provides APIs which abstract the platform's contact book for you, so that you only deal with the PhoneGap API and let PhoneGap do the rest of the magic for you.
Download and installation
1) OS: Windows x86 / Linux x86.
2) IBM Rational Team Concert: Download/buy from RTC downloads page. Note that if you're using an existing Eclipse installation to install RTC, make sure you have Eclipse 3.5 or lower (Eclipse 3.6 is not yet supported for Android development). RTC can also be installed directly on Eclipse 3.5.x (see http://jazz.net/library/techtip/384).
3) Android SDK: Download the SDK from Android website - http://developer.android.com/sdk/index.html.
4) Android Development Tools(ADT): This is an eclipse plugin- the equivalent of JDT(Java Development Tools)- for Android Development. ADT can be downloaded and installed using instructions given at http://developer.android.com/sdk/eclipse-adt.html. Please follow the detailed instructions on this page for installing the ADT plugin, and also for setting the location of Android SDK in Eclipse.
5) It is a good idea to read through the following step by step guide to set up RTC and prepare it for Android Development: http://jazzpractices.wordpress.com/2010/08/10/how-to-set-up-rtc-for-android-development/.
Refer to my earlier blogpost on developing Phonegap applications using Eclipse to read about how to install Phonegap on the RTC Eclipse client.
*Note: You can similarly develop Windows mobile applications using RTC's Visual Studio client.
Creating a PhoneGap Android project
- Create a new PhoneGap project by clicking on the PhoneGap command you see on the coolbar.
- If you don't have the source code for PhoneGap, check "Use Built-in PhoneGap" on the project wizard
- Click Next and create the new Android project in the Android project wizard.
- Launch the new application as an Android application and you will get an application as shown below.
The Demo
The demo showcases how a distributed team working on a cross-platform, hybrid mobile application can leverage PhoneGap APIs, and also the functionality provided by RTC through the complete lifecycle of the application, right from planning to implementation to design and testing, and even defect tracking.
In the demo, the application to be developed has a native activity to start with, which displays a list of all phone contacts. Clicking a particular contact displays the contact details in another native activity and provides a button to send an email to that contact. Clicking the button launches a web based email client (this is a PhoneGap activity), which has an option to cc more contacts from the phone's contact book. This makes use of the PhoneGap contacts API to fetch a list of the phone contacts from the web based activity.
Check out the video below:
With the recent release of Rational's Jazz-based solution for Application Lifecycle Management, also called Collaborative Lifecycle Management or CLM, it is possible to do much more by integrating multiple tools with RTC. To see Rational Team Concert and CLM live in action, do visit IBM Rational Innovate India 2011, Bangalore. You can catch me there at the RTC solutions center. Also, if you're interested in building JavaME applications, you can read my jazz.net article - Developing Java ME applications using Rational Team Concert in an agile way - to see how to do so using RTC.
For more information
- Android Developer website
- Getting started with Rational Team Concert
- Developing Android applications using Eclipse
- Using PhoneGap in Eclipse to develop Android applications
- PhoneGap Android Eclipse quick start
- Rational Solution for Android Mobile Application Development | <urn:uuid:d9e9bb7a-8252-4dad-af84-9477f06e8b96> | 3.171875 | 1,066 | Tutorial | Software Dev. | 51.334877 |
I need a way to make a program work on only the first computer it is run on, it doesn't need to be very secure but it needs to work. The idea that I came up with is to have the program save the MAC Address (of the first computer it is run on) inside of its own EXE. And then from then on check to see if the MAC Address corresponds to the computer its on..
So I have a couple questions
1. How can I have a program edit its own file during execution. Is this possible? Are there workaround ways to do it?
2. How can I have the compiler tell me where, in the file, a certain piece of data was saved. I'm using GCC, though I could use Visual C++.
Self-modifying executables are quite dangerous: most systems do not allow them. The main problem is that there are checksums embedded in the executable; if you don't patch them correctly, it stops working. Also, moving from one compiler to another or from one OS to another could cause the program to stop working.
How about alternative techniques like
1) writing it to the registry.
2) creating a file in allusers\documents
3) encrypted file containing the MAC
These are somewhere other than the directory from which the executable is run so, in theory, an unknowing pirate will only copy the exe and not the registry entry or the file in allusers\documents.
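A minimal sketch of option 3 (the file name and the hashing scheme are my own illustration, not a recommendation): record a fingerprint derived from the MAC address on the first run, then compare it on every later run.

```python
import hashlib
import uuid
from pathlib import Path

BIND_FILE = Path("license.bind")  # hypothetical location, outside the exe itself

def machine_fingerprint() -> str:
    # uuid.getnode() returns the MAC address as a 48-bit integer
    # (or a random stand-in when no MAC can be read).
    return hashlib.sha256(str(uuid.getnode()).encode()).hexdigest()

def check_binding() -> bool:
    fingerprint = machine_fingerprint()
    if not BIND_FILE.exists():
        BIND_FILE.write_text(fingerprint)  # first run: bind to this machine
        return True
    return BIND_FILE.read_text() == fingerprint  # later runs must match
```

As the thread notes, this only deters novice copying: the exe still runs unchanged, and only the binding file is machine-specific.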
MAC addresses aren't totally safe (neither is any technique against a really determined cracker). If you are running the MS loopback adapter, that is the first MAC address picked, so anyone could just install the MS loopback adapter and they've got the software. Also, network cards can change. Do you have an option to reset this and take a new address?
Thanks, but the problem with storing the MAC address in some hidden file is exactly that, the person would simply copy the EXE, which would be the first thing most users would try.. This protection is aimed against novice users, crackers will find a way around nearly any protection if they are given reason to try, so I'm not worried about making it extremely secure since even big companies like Microsoft and Apple can't seem to keep their programs extremely secure.
So if there's no way for an EXE to edit itself, I was thinking that I could store another EXE in a resource in the file: EXE #1 would export EXE #2, EXE #1 would close, and #2 would edit the now-available file. Then the only problem remaining is I wouldn't know where to edit the file. I'd need to know where the constant I set aside in the code was saved. I could do a search for 00-00-00... but that seems too unreliable to me.
Suggestions? Or a way to have the EXE edit itself if you know of one..
I need a way to make a program work on only the first computer it is run on
I don't understand the use case, I presume. Say Joe Schmoe downloads your app from somewhere, stores it on his flash drive, plugs the flash drive into PC A, copies the exe and runs it. Now, what would prevent Joe Schmoe from copying the exe from his flash drive onto PC B and running it like a virgin copy?
One of the most effective ways is to modify part of the download so that each one is unique. Then require registration via a webservice. Do not allow multiple registrations of the same "Serial Number"....
The KEY part is requiring an interaction with the server to achieve activation. Of course it is necessary to provide an alternate if you have people legitimately installing on machines without WebAccess, but there are service providers who specialize in that.....
The suggestion is quite simple: never let your exe run with no registration information present in the system. Consequence: using an application installer (that gathers registration information and puts it into the system but does nothing really valuable except installation) is a minimal requirement. The optional step is further activation.
And one more point to be clear: forget about self-modification. This trick is easily cracked by plain binary diffing. | <urn:uuid:9c296902-575d-4a49-a422-e6d5c68d4641> | 2.984375 | 982 | Comment Section | Software Dev. | 61.352724 |
The basis for this data set is the Normalized Difference Vegetation Index
(NDVI) data set calculated from AVHRR data by Los et al. (1994) following the
work of Tucker et al. (1986). These data were already in the form of a 1 deg.
x 1 deg. monthly composited NDVI data set; i.e., no further single channel
data or geometric information were available at the time. Some simple
procedures were used to fill in gaps in the data set, and crude corrections
were made to account for the effects of solar angle and persistent clouds to
make the temporally and spatially continuous FASIR-NDVI product, see Sellers
et al. (1994). The FASIR-NDVI data were used to create fields of FPAR, leaf
area index, and greenness, which in turn were used to calculate monthly snow-
free albedo and surface roughness fields, see Sellers et al. (1994). | <urn:uuid:8b3810bd-9eb4-493e-9a1e-75d41461dafe> | 2.890625 | 209 | Knowledge Article | Science & Tech. | 59.473505 |
Life habit: lichenized, lichenicolous, or non-lichenized Thallus: immersed or superficial, often rudimentary but occasionally well developed photobiont: if present, trentepohlioid or coccal green algae Ascomata: rounded, elongate or branched exciple: usually not distinct, often composed of peripheral hyphal structures made of thick-walled and/or pigmented hyphae, often resembling those in the epihymenium, or not well developed at all epihymenium: conspicuous and with distinct hyphae or indistinct hymenium: with densely to widely spaced asci, separated by paraphysoidal hamathecium of anastomosing and branched septate hyphae; subhymenium: indistinct to distinct in color and structure, usually composed of densely arranged, ±short-celled vegetative hyphae and variously shaped ascogenous hyphae asci: with two functional wall layers and an internal apical beak, semi-fissitunicate, ovate to subcylindrical, usually stalked, often with KI+ elongated blue ring, with usually 8, rarely fewer ascospores; asci opening by a slit-like rupture of the outer layer, and stretching endotunica layers, with the internal layers stretched further up ascospores: colorless to brown, transversely septate to muriform, oblong to ovoid-fusiform, with a smooth or verrucose epispore, or lacking an epispore Conidiomata: pycnidial; conidiogenous cells: ±cylindrical, phialidic conidia: bacilliform, filiform, or falcate Geography: cosmopolitan Substrate: on bark, leaves, rock, wood or lichens. Notes: Pending further molecular evidence, Arthonia should be treated as a heterogeneous assemblage of ±well delimited species groups. In contrast to traditional concepts, the characters of ascospore septation are weak indicators of relationships. Therefore several species with muriform ascospores previously assigned to Arthothelium are included here in Arthonia, if other characters, especially those of ascomatal construction, are shared. Arthonia in a narrow sense includes species close to A. 
radiata, which usually forms symbioses with trentepohlioid algae. While A. radiata seems to be absent from the Sonoran Desert, a number of species share Arthonia s. str.'s brownish to olivacous pigmentation of the epihymenial structures but are usually non-lichenized. These include A. glaucella, A. granosa, A. pinastri, A. pruinosella, A. rhoidis, A. sexlocularis, and A. tetramera (while A. punctiformis and A. galactites seem to be missing), as well as species with muriform spores, which were previously assigned to Arthothelium, e.g. A. albopulverea and A. beccariana. These species seem to be intractably variable and without observation of spore septations and spore sizes, the recognition of these species is practically impossible. Most lichenicolous species that occur on hosts with chlorococcal green algae are phenotypically related to this large complex as well. Other species groups appear to be more distinct. For example, the Arthonia cinnabarina group (incl. A. cinnabarina, A. redingeri, and A. speciosa), of which most representatives are characterized by crystallized reddish pigments (usually becoming violet and dissolving in K), differs in its ascomatal anatomy and ascospore septation characters. The A. pruinata group includes species with pale brown colored fruit bodies, which are often covered by a pruina of crystallized compounds produced by the mycobiont. Three saxicolous species (A. gerhardii, A. madreana, and A. verrucosa) seem to represent another distinct group. The relationships of other species with Arthonia are still speculative due to the unclear significance of a low number of characters: Arthonia lecanactidea or A. sanguinea might have separate, isolated positions in Arthonia. This is also true for Arthonia mirabilis with its unusual pigmentation. Egea and Torrente (1995) suggested exclusion of the distinct soil inhabitant A. glebosa from Arthonia, mainly because of its thick dark structures underneath the hymenium. 
The biological relationships of lichenicolous Arthonia species with their hosts are diverse, ranging from ±commensalic life forms to rather destructive behavior. Species in the Sonoran regions are regarded as moderately aggressive, especially those that infect the reproductive structures of their hosts (A. clemens, A. lecanorina, A. intexta, A. subfuscicola, and A. varians). These species are not exclusively parasites of their fungal hosts, as their hyphae can extend beyond the fungal structures into the algal layers of their hosts, where they may establish appressorial contacts with the photobionts. Many specimens from the Sonoran Desert were incorrectly identified in the past. Typical cases are indicated in the notes to the species. In the early Hasse literature (e.g. 1895 to 1906) the following arthonioid names appear, although they were not subsequently used in the flora (Hasse 1913): Arthonia astroidea Ach. (= A. radiata (Ach.) Ach.), A. astroidea var. swartziana Nyl., A. diffusa Nyl., A. dispersa var. cytisi A. Massal., A. galacitites var. depuncta Nyl. (= A. galactites (DC.) Dufour ), A. interveniens Nyl., A. ochrolutea Nyl., A. quintaria Nyl. and A. spadicea Leight. These names were apparently later not reconsidered by Hasse (1913) and probably should be rejected as occurring in southern California, and therefore they are not treated below. Further species occurrences are dubious and were also not included here, including A. adveniens Nyl., A. polygramma Nyl., A. polymorpha Ach., A. subdiffusa Willey, and A. taediosa Nyl. The latter name was often mistakenly used for A. beccariana, but it has significantly larger ascospores. Arthonia caesia (Flotow) Arnold, a conspicuous species with bluish-red pruina on the discs, and lichenized with coccal green algae, was illustrated for California in Brodo (2001), but I have not seen Sonoran material which fits to this species. 
It should be noted that some further undescribed taxa likely exist in the area, but these have not been considered, as their relationships with other species could not sufficiently be clarified and material was not available. Pycnidia: c. 0.1 mm; wall: olivaceous brown. 2-3 µm thick Chemical reactions: thallus hyphae I-, KI+ blue; ascomatal gels I+ blue becoming ±red, with KI+ blue and elongated ring-structure in tholus. Substrate and ecology: commonly found on dead twigs in Mediterranean regions, anombro-, photo-, thermophilic, pioneer species, not lichen-forming World distribution: Mediterranean Europe, California and Baja California Sonoran distribution: southern Californa (Channel Islands and mainland), Baja California, and Baja California Sur. | <urn:uuid:7dc568e8-8474-4e94-8697-6a7f3776cd7e> | 3 | 1,659 | Knowledge Article | Science & Tech. | 23.925375 |
Ask Dr. Math
High School Archive
See also the
Dr. Math FAQ:
Browse High School Euclidean/Plane Geometry
Stars indicate particularly interesting answers or
good places to begin browsing.
Selected answers to common questions:
Pythagorean theorem proofs.
- Why Proofs? Definitions and Axioms [09/16/2001]
Why are proofs important in the development of a mathematical system like
- Why Was Pi Invented? [04/10/2003]
What is pi good for?
- Wood Flooring [08/09/1997]
Estimating the number of linear feet in 1 sq. ft., making wood flooring 3.25 inches wide, producing 5,000 sq. ft. per day.
- World War II Window Blackout [10/21/2001]
Mr. Brown had a square window 120cm x 120cm, but the only material he could find was a sheet of plywood 160cm x 90cm; same area, different shape. He drew some lines and cut out just two congruent shapes, which he joined to make a square of the correct size. How did he do it?
- Writing a Proof [05/16/2001]
Is there a certain way I should go about writing a proof?
- Yes, a Graph Can Touch an Asymptote [06/08/2006]
Is it true that the graph of a function can never touch an asymptote?
- Zero Degree and 360 Degree Angles [10/22/2003]
I am curious if there is such a thing as a zero degree angle, and if
so, what does it look like? I am also wondering if a zero degree
angle equals a 360 degree angle? I understand that a 360 degree angle
is essentially a circle, or the amount of 'turn' that equals a circle.
I am pondering how you would draw a 360 degree angle?
- Zooming Rectangles [05/11/2005]
What is the math behind zooming rectangles?
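The World War II window blackout entry above hinges on the sheet and the window having equal areas; the commonly cited answer is a staircase cut, whose step sizes (stated here as an assumption, since the archive doesn't give the solution) check out arithmetically:

```python
# Blackout-window puzzle: 160 x 90 sheet -> 120 x 120 square via a staircase cut.
sheet_w, sheet_h = 160, 90   # plywood dimensions, cm
window = 120                 # square window side, cm

# Equal areas are a prerequisite for any two-piece dissection:
assert sheet_w * sheet_h == window * window  # 14400 cm^2 each

# Assumed classic staircase cut: steps 40 cm wide and 30 cm tall; sliding one
# piece up one step trades 40 cm of width for 30 cm of height.
step_w, step_h = 40, 30
print(sheet_w - step_w, sheet_h + step_h)  # 120 120
```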
© 1994-2013 Drexel University. All rights reserved.
The Math Forum is a research and educational enterprise of the Drexel University School of Education. | <urn:uuid:fd0f46a0-196e-4fee-8f4d-76714aa513b6> | 2.921875 | 513 | Q&A Forum | Science & Tech. | 80.457499 |
L4 and L5 are the most stable Lagrange points for a given orbiting celestial body. The only difference is that L4 lies ahead of the body in its orbit while L5 trails behind it, as @tpg notes. They are otherwise equivalent because both form an equilateral triangle with the two massive bodies: if you sketch either, you get a 60 degree angle. These Lagrange points are handled with care in celestial mechanics, and they were the surprising (completely math-based) special solution to the celebrated three-body problem. That's all...
Are they defined with respect to the rotation of the Earth around the Sun?
Their definition is tied to the orbital motion of a body. A body that orbits the Sun along with the Earth could be placed at L4 or L5. The mass of such a body can be comparatively large, which indicates the stability of these points. On the other hand, L1, L2 and L3 are unstable (only negligible masses can stay at them), and even a slight perturbation can cause a great deviation in the configuration of the orbiting body.
So the main conclusion of this equilateral triangle is gravitational equilibrium: the pulls of the two massive bodies, together with the centrifugal force in the rotating frame, balance out, thereby keeping the third (orbiting) body in place.
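A tiny numerical check of the equilateral-triangle geometry (positions and units here are illustrative: Sun at the origin, Earth 1 AU away):

```python
import math

# Sun at the origin, Earth 1 AU away on the x-axis (illustrative units).
sun = (0.0, 0.0)
earth = (1.0, 0.0)

def rotate(point, degrees):
    """Rotate a point about the origin (where the Sun sits)."""
    r = math.radians(degrees)
    x, y = point
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

l4 = rotate(earth, +60.0)  # 60 degrees ahead of Earth in its orbit
l5 = rotate(earth, -60.0)  # 60 degrees behind

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Both points are 1 AU from the Sun AND 1 AU from Earth: equilateral triangles.
for point in (l4, l5):
    print(round(dist(point, sun), 6), round(dist(point, earth), 6))  # 1.0 1.0
```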
You might be interested in this one already here - Why are L4 and L5 lagrangian points stable? and a good podcast about these points. Greedy for more math..? | <urn:uuid:cc2c7c8c-bd15-4487-bb16-702325e8d056> | 2.921875 | 296 | Q&A Forum | Science & Tech. | 61.597207 |
Standard economic theory predicts that exploitation alone is unlikely to result in species extinction because of the escalating costs of finding the last individuals of a declining species.
Researchers argued that the human predisposition to place exaggerated value on rarity fuels disproportionate exploitation of rare species, rendering them even rarer and thus more desirable, ultimately leading them into an extinction vortex.
They presented a simple mathematical model and various empirical examples to show how the value attributed to rarity in some human activities could precipitate the extinction of rare species—a concept that they call the anthropogenic Allee effect.
The alarming finding that human perception of rarity can precipitate species extinction has serious implications for the conservation of species that are rare or that may become so, be they charismatic and emblematic or simply likely to become fashionable for certain activities.
Courchamp F, Angulo E, Rivalan P, Hall RJ, Signoret L, et al. 2006 Rarity Value and Species Extinction: The Anthropogenic Allee Effect. PLoS Biol 4(12): e415. doi:10.1371/journal.pbio.0040415
You can download the entire report in PDF file format HERE. | <urn:uuid:df4204e2-db09-4c4d-8dd3-e9cad984f855> | 3.8125 | 236 | Truncated | Science & Tech. | 25.594816 |
What Does A Black Hole Sound Like?
Sept. 9, 2003: Astronomers using NASA’s Chandra X-ray Observatory have found, for the first time, sound waves from a supermassive black hole. The “note” is the deepest ever detected from any object in our Universe. The tremendous amounts of energy carried by these sound waves may solve a longstanding problem in astrophysics.
The black hole resides in the Perseus cluster of galaxies located 250 million light years from Earth. In 2002, astronomers obtained a deep Chandra observation that shows ripples in the gas filling the cluster. These ripples are evidence for sound waves that have traveled hundreds of thousands of light years away from the cluster’s central black hole.
“The Perseus sound waves are much more than just an interesting form of black hole acoustics,” says Steve Allen, of the Institute of Astronomy and a co-investigator in the research. “These sound waves may be the key in figuring out how galaxy clusters, the largest structures in the Universe, grow.”
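As a rough check on "the deepest note ever detected": assuming the roughly ten-million-year wave period quoted in press coverage of the result, a short script can convert that period into octaves below middle C. The helper below is illustrative; NASA described the note as a B-flat about 57 octaves below middle C.

```python
import math

MIDDLE_C_HZ = 261.63
SECONDS_PER_YEAR = 3.156e7

def octaves_below_middle_c(period_years):
    """Octaves separating a wave of the given period from middle C."""
    freq_hz = 1.0 / (period_years * SECONDS_PER_YEAR)
    return math.log2(MIDDLE_C_HZ / freq_hz)

# A period of roughly ten million years lands in the mid-to-high fifties of
# octaves, consistent with the reported "57 octaves below middle C".
print(round(octaves_below_middle_c(1.0e7), 1))
```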
Uh… Light can't escape a black hole, so how can sound?
You’re right. That’s what’s odd about this post
I think what this post was trying to get at was that they’re not actually referring to sound waves, but radio waves or other types of electromagnetic radiation which can be ‘translated’ - so to speak - into sound.
Thaaaaaaat’s not really what I was getting at
i’m just secretly active. it’s a thing. | <urn:uuid:8beb4123-3e6b-4bc2-b1a9-2c66c56aa568> | 3.53125 | 390 | Comment Section | Science & Tech. | 60.545625 |
Minnesota Drought Situation Report - February 21, 2007
Drought Monitor - February 13, 2007
The National Drought Mitigation Center Drought Monitor continues to place much of northern Minnesota in the Extreme Drought or Severe Drought categories. Portions of central Minnesota are depicted in the Moderate Drought classification. Most of the rest of Minnesota is depicted as being Abnormally Dry.
How the drought developed during summer 2006
Dryness was entrenched across much of northern and central Minnesota for the majority of the 2006 growing season. The timing of the dry weather was unfortunate: historically, the period from May through September is the wettest time of the year in Minnesota. In 2006, however, rainfall totals fell well short of long-term averages in many locales, especially in northern Minnesota (see: rainfall maps). The dry weather began in mid-May and continued through June. The dryness was then compounded by one of Minnesota's hottest Julys on record, leading to the rapid intensification of drought. By the end of July, much of northern and central Minnesota was categorized as experiencing an Extreme Drought. The summer rainfall deficits and hot weather caused deteriorating crop conditions, dwindling stream flows, and falling lake levels, and increased the danger of wildfire. Impacts of the drought included surface water appropriation permit suspensions, municipal watering restrictions and bans, an agricultural disaster declaration for 36 counties, the lowest lake levels in 30 years, and a number of major wildfires.
Substantial rains in the late summer and early autumn brought relief to some central Minnesota counties. However, many northern counties were missed by these rain events. Dry weather persisted through the autumn of 2006, notably worsening the situation in northeastern Minnesota and rekindling concerns about topsoil moisture in portions of southern Minnesota. Fall rains are very efficient at replenishing the soil moisture profile; the lack of widespread, ample autumn rains exacerbated and prolonged the drought situation in many areas of the state.
Potential drought issues facing northern Minnesota in 2007
Spring and summer precipitation totals will need to far exceed normal for surface water systems to recover quickly from the 2006 deficits. This is possible, but not climatologically likely. The drought was quick to develop, but its impacts will most likely be slow to repair. In keeping with the dry pattern established in 2006, snowfall totals during the winter of 2006-2007 have thus far fallen well short of average. Much of the northern two-thirds of Minnesota reports snow depths that rank historically below the 5th percentile.
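A percentile rank of the kind cited above is simply the share of the historical record falling at or below the current observation. A minimal sketch, using a made-up snow-depth record (the values are invented; only the method is the point):

```python
def percentile_rank(history, value):
    """Percent of historical observations less than or equal to value."""
    at_or_below = sum(1 for h in history if h <= value)
    return 100.0 * at_or_below / len(history)

# Hypothetical 20-year record of mid-February snow depths (inches):
record = [14, 9, 22, 17, 11, 25, 8, 19, 13, 30,
          16, 21, 7, 18, 24, 12, 27, 15, 10, 20]

# A 6-inch year would rank below every year on this record (0th percentile):
print(percentile_rank(record, 6))
```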
Some drought concerns for the 2007 growing season include:
antecedent dry conditions, along with a continuation of sparse winter snow cover, could lead to an explosive spring wildfire situation | <urn:uuid:b8ff200b-b567-458d-9e50-948eec67cff8> | 3.03125 | 568 | Knowledge Article | Science & Tech. | 34.699664 |
Invasion of the Lionfish
The Caribbean's coral reefs are struggling under intense pressure from overfishing, pollution, development, and climate change. Now these over-stressed reefs are facing an additional threat from an uninvited guest, which has the potential to tip them over the edge. Luckily, community efforts are helping to stem the problem.
Red Lionfish (Pterois volitans) in its native Fiji
Photo by Steve Turek
Lionfish are voracious predatory fish native to the Indo-Pacific. It is believed that they were introduced to the Caribbean in the early 1990s from local aquariums or fish hobbyists in Florida. Once loose in the marine environment and free from natural population controls, their numbers began exploding as they consumed vast quantities of reef fish.
Lionfish are known for eating anything they can fit in their mouths, and seem to eat nearly constantly. During a study of invasive lionfish in the Bahamas, one lionfish was observed to eat twenty small fish in the space of thirty minutes. The study also found that a single lionfish per reef could reduce the juvenile fish populations by almost 80% in just five weeks. Fast-growing and capable of reproducing year-round, lionfish are also out-competing native species and spreading more quickly than anyone predicted.
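The 80%-in-five-weeks figure implies a steep weekly toll. If the decline compounded at a constant weekly rate (an assumption made purely for illustration), the implied per-week loss works out as follows:

```python
remaining_after_5_weeks = 0.20   # the ~80% reduction reported in the Bahamas study
weeks = 5

# Assuming a constant weekly survival fraction s, s**weeks = 0.20,
# so the implied weekly loss is 1 - 0.20**(1/5).
weekly_survival = remaining_after_5_weeks ** (1 / weeks)
weekly_loss_pct = 100 * (1 - weekly_survival)
print(round(weekly_loss_pct, 1))  # about 27.5% of remaining juveniles lost each week
```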
With their indiscriminate eating habits, lionfish directly impact populations of numerous fish and crustacean species, including commercially valuable species like snappers, groupers, and lobsters, as well as ecologically key species like parrotfish, which prevent algal overgrowth on reefs. The Caribbean lionfish invasion could become the most disastrous marine invasion in history by drastically decreasing the abundance of coral reef fishes throughout the entire region.
The Mitigation Strategy
Once their venomous spines have been removed, lionfish are perfectly safe to eat.
Although lionfish have no natural predators in the Caribbean, it turns out that these fish make quite a tasty dish for humans. The best option for controlling their populations seems to be for us to become their main predators. CORAL's Belize Field Manager, Valentine Rosado (Val), is helping with the effort.
Lionfish reached Belize in late 2008, and since then the country has declared all-out war on the invaders. The quickly-formed Belize Lionfish Project is working to educate stakeholders, including fishermen and dive instructors, on how to get involved in lionfish abatement, and has also begun a series of fishing tournaments whose sole purpose is capturing lionfish. A number of Belizeans, including Val, traveled to the Bahamas (where lionfish have been a problem for far longer) for a workshop on lionfish handling and preparation.
Unfortunately, catching lionfish is no easy task: they are armed with many venomous spines that can cause extremely painful injuries. Spear guns are one option for capture, but they are ineffective against small lionfish and are also being discouraged because of their potential for encouraging poaching. Furthermore, according to Val:
"I've been diving with tour guides and with tourists and I can personally emphasize that nets and bags will not work. It is too much of a hassle and inconvenience, and requires a few minutes to effectively capture a lionfish. I can see tourists complaining that their guide was more absorbed in the capturing and killing of lionfish than supervising the dive and divers."
However, Val came back from the workshop in the Bahamas with some great ideas for making lionfish control more feasible for dive instructors. He reports that a three-pronged spear is used widely against lionfish in the Bahamas. Dive leaders in San Pedro have devised a simplified version of the spear and have had good success with it in trials at home. The spear is safe, effective, unobtrusive, and easy to construct, and can even make a good pointer. Val is hoping to produce many of these tools and distribute them to tour guides. Instruction on their use could even become part of CORAL's Sustainable Marine Recreation workshops.
Although complete eradication of lionfish from the Caribbean is probably an impossible goal, keeping their numbers low can give other fish populations a chance to persist. Creative solutions to the problem will be essential to prevent lionfish from devastating Caribbean coral reef ecosystems.
For more information:
ECOMAR's Belize Lionfish Project
REEF's Lionfish Research Program
For a good list of lionfish recipes, click here. | <urn:uuid:f4f3d76e-c65e-4d02-b017-cdf769fa6872> | 3.515625 | 909 | Knowledge Article | Science & Tech. | 31.735082 |
Why Aren’t Birds Bigger?
The biggest birds outsize most mammals, but the largest bird that ever lived weighed less than a cow. Why aren't birds bigger? One theory is that as avian eggs get bigger, their shells must get disproportionately thicker to support their weight. Eventually the shell would be too thick for a chick to break through, and that's your maximum egg size (and thus maximum bird size). I don't buy this.
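The theory is an allometric claim: shell thickness t scales with egg mass m as t ∝ m^b, and if the exponent b exceeds the isometric 1/3, shells thicken disproportionately as eggs grow. Here is a sketch of the kind of fit the status line alludes to, using hypothetical data points laid along a power law with b ≈ 0.45 (the data and constant are invented, though they echo chicken/ostrich/elephant-bird magnitudes; only the method is the point):

```python
import math

# Hypothetical (egg mass g, shell thickness mm) points on t = C * m**0.45.
data = [(5, 0.114), (60, 0.35), (1400, 1.44), (9000, 3.33)]

xs = [math.log(m) for m, _ in data]
ys = [math.log(t) for _, t in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
# Least-squares slope of log t against log m is the allometric exponent b.
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
print(round(b, 2))  # ~0.45, above the isometric 1/3: shells thicken disproportionately
```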
Status: Initial allometry of ratite shell thickness done. | <urn:uuid:a67c6f55-6427-4631-bb37-0215ac5379a8> | 3.09375 | 106 | Knowledge Article | Science & Tech. | 71.845556 |
File:Phanerozoic Climate Change Rev.png
From Global Warming Art
This figure shows the long-term evolution of oxygen isotope ratios during the Phanerozoic eon as measured in fossils, reported by Veizer et al. (1999) and updated online in 2004. Such ratios reflect both the local temperature at the site of deposition and global changes associated with the extent of continental glaciation. As such, relative changes in oxygen isotope ratios can be interpreted as rough changes in climate. Quantitative conversion between these data and direct temperature changes is a complicated process subject to many systematic uncertainties; however, it is estimated that each 1 part per thousand change in δ18O represents roughly a 1.5-2 °C change in tropical sea surface temperatures (Veizer et al. 2000).
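Since the quoted conversion is linear, a one-line helper makes it concrete. This is an illustrative sketch; the 1.5-2.0 °C per part-per-thousand range is the only input taken from the text.

```python
def delta_t_range(delta_o18_permil):
    """Rough tropical SST change (degC) implied by a delta-18O shift (per mil).

    Uses the 1.5-2.0 degC per per-mil rule of thumb attributed to
    Veizer et al. (2000) in the caption above.
    """
    low, high = 1.5, 2.0
    return (low * abs(delta_o18_permil), high * abs(delta_o18_permil))

# A 3 per-mil excursion suggests roughly a 4.5-6 degC change:
print(delta_t_range(3.0))
```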
Also shown on this figure are blue bars showing periods when geological criteria (Frakes et al. 1992) indicate cold temperatures and glaciation as reported by Veizer et al. (2000). The Jurassic-Cretaceous period, plotted as a lighter blue bar, was interpreted as a "cool" period on geological grounds, but the configuration of continents at that time appears to have prevented the formation of large scale ice sheets.
All data presented here have been adjusted to the 2004 ICS geologic timescale. The "short-term average" was constructed by applying a σ = 3 Myr Gaussian weighted moving average to the original 16,692 reported measurements. The gray bar is the associated 95% statistical uncertainty in the moving average. The "low frequency mode" was determined by applying a band-pass filter to the short-term averages in order to select fluctuations on timescales of 60 Myr or greater.
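The σ = 3 Myr Gaussian weighted moving average described above can be sketched for irregularly spaced samples (fossil measurement ages do not fall on a uniform grid). This is a generic reconstruction of the technique, not the authors' actual processing code:

```python
import numpy as np

def gaussian_moving_average(ages, values, eval_ages, sigma=3.0):
    """Gaussian-weighted mean of irregularly spaced samples.

    ages, values : sample ages (Myr) and measurements (e.g. delta-18O)
    eval_ages    : ages at which to evaluate the smoothed curve
    sigma        : Gaussian width in Myr (3 Myr in the caption above)
    """
    ages = np.asarray(ages, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.empty(len(eval_ages))
    for i, t in enumerate(eval_ages):
        w = np.exp(-0.5 * ((ages - t) / sigma) ** 2)
        out[i] = np.sum(w * values) / np.sum(w)
    return out

# Smoothing noisy synthetic data recovers the underlying linear trend:
rng = np.random.default_rng(0)
ages = np.sort(rng.uniform(0.0, 100.0, 500))
vals = 0.02 * ages + rng.normal(0.0, 0.5, 500)
print(gaussian_moving_average(ages, vals, [50.0]))  # close to the trend value 1.0
```

Normalizing by the summed weights at each evaluation age is what lets the same estimator handle unevenly sampled records without biasing toward densely sampled intervals.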
On geologic time scales, the largest shift in oxygen isotope ratios is due to the slow radiogenic evolution of the mantle. A variety of proposals exist for dealing with this, each subject to its own systematic biases, but the most common approach is simply to suppress long-term trends in the record. That approach was applied here by subtracting a quadratic polynomial fit to the short-term averages. As a result, it is not possible to draw any conclusion about very long-term (>200 Myr) changes in temperatures from this data alone. However, it is usually believed that temperatures during the present cold period and during the Cretaceous thermal maximum are not greatly different from cold and hot periods during most of the rest of the Phanerozoic. Some recent work has disputed this (Royer et al. 2004), suggesting instead that the highs and lows in the early part of the Phanerozoic were both significantly warmer than their recent counterparts.
Common symbols for geologic periods are plotted at the top and bottom of the figure for reference.
The long-term changes in isotope ratios have been interpreted as a ~140 Myr quasi-periodicity in global climate (Veizer et al. 2000), and some authors (Shaviv and Veizer 2003) have interpreted this periodicity as being driven by the solar system's motions about the galaxy. Encounters with galactic spiral arms can plausibly lead to a factor of 3 increase in cosmic ray flux. Since cosmic rays are the primary source of ionization in the troposphere, these events can plausibly impact global climate. A major limitation of this theory is that existing measurements can only poorly constrain the timing of encounters with the spiral arms.
The more traditional view is that long-term changes in global climate are controlled by geologic forces, and in particular, changes in the configuration of continents as a result of plate tectonics.
Temperature Record Series: this figure is part of a series of plots showing changes in Earth's temperature over time.
This figure was originally prepared by Robert A. Rohde from publicly available data.
- Frakes, L. A., J.E. Francis and J.I. Syktus (1992). Climate Modes of the Phanerozoic. Cambridge: Cambridge University Press. ISBN 0521366275
- Royer, Dana L., Robert A. Berner, Isabel P. Montañez, Neil J. Tabor, and David J. Beerling (2004). "CO2 as a primary driver of Phanerozoic climate". GSA Today 14 (3): 4-10.
- Shaviv, N. and Veizer, J. (2003). "Celestial driver of Phanerozoic climate?". GSA Today July 2003: 4-10.
- Veizer, J., Ala, D., Azmy, K., Bruckschen, P., Buhl, D., Bruhn, F., Carden, G.A.F., Diener, A., Ebneth, S., Godderis, Y., Jasper, T., Korte, C., Pawellek, F., Podlaha, O. and Strauss, H. (1999). "87Sr/86Sr, δ13C and δ18O evolution of Phanerozoic seawater". Chemical Geology 161: 59-88.
- Veizer, J., Y. Godderis, and L.M. Francois (2000). "Evidence for decoupling of atmospheric CO2 and global climate during the Phanerozoic eon". Nature 408: 698-701.
- The statistical errors, plotted in gray, may be significantly smaller than the systematic biases that can occur. Such systematic concerns include:
- Different fossil types, spanning different phyla, are measured in different parts of the record, and biological differences in how oxygen is incorporated into different fossils may introduce biases.
- Because oxygen isotopes reflect both local temperatures and global glaciation, it is necessary to sample from large spatial areas to provide adequate global coverage. Such extensive coverage may not be available in all time periods (particularly older periods).
- In constructing the "low frequency mode", the filter was applied to the short-term averages rather than to the data directly because applying such a filter to the data directly strongly biases the results towards the value at times that have been heavily sampled.
Current version: 14:25, 24 February 2006; 600×448 pixels (30 KB); uploaded by Robert A. Rohde. | <urn:uuid:93591c69-f3f1-4b77-b694-6e68242f541d> | 3.671875 | 1,408 | Knowledge Article | Science & Tech. | 50.920098 |
In this post I will talk about one of the new features in C# 4.0 – Named and Optional Parameters. Actually these are not a single feature but two different features; you get the most benefit when you use them together. So, what are they? A named parameter lets you supply an argument to a method using the name of the corresponding parameter instead of its position in the parameter list, whereas an optional parameter allows you to omit arguments from a member invocation.
Let us discuss them in depth with a simple example, starting with optional parameters. Suppose you are creating a calculator application where you have to perform an "Add" operation by passing two, three, or four parameters to the method. What would you do in C# 2.0 or 3.0? You would have no choice but to create that many methods, each taking a different number of parameters, which is known as method overloading.
public int Add(int a, int b);
public int Add(int a, int b, int c);
public int Add(int a, int b, int c, int d);
If you are using C# 4.0, you can forget about this kind of method overloading and use optional parameters instead. An optional parameter is given a default value in the method signature. Let us modify our previous example to use optional parameters.
public int Add(int a, int b, int c = 0, int d = 0);
Here we assign default values to the parameters "c" and "d". Hence you can call this method as per your requirement, like this:
Add(10, 20);         // 10 + 20 + 0 + 0
Add(10, 20, 30);     // 10 + 20 + 30 + 0
Add(10, 20, 30, 40); // 10 + 20 + 30 + 40
In the first case '0' (zero) is passed for both optional parameters c and d, in the second case only the fourth parameter defaults to '0' (zero), and in the third case all the parameters are supplied with explicit values.
Now, suppose we have a situation where we are creating a user account with a method named "CreateAccount", which has a non-optional parameter "name" and two optional parameters "address" and "age". In some scenarios you want to pass only the "name" and "age"; how can you do that? Just think on it.
public void CreateAccount(string name, string address = "unknown", int age = 0);
Are you thinking of calling the method with an empty string or null value for the "address" parameter? Oh, not really. That is where the other new C# 4.0 feature, the "named parameter", comes in. Using it you can call the method with proper name resolution, and you can even change the position of the parameters while calling the method. Have a look at the following code example:
CreateAccount("Kunal", age: 28);
CreateAccount(address: "India", name: "Kunal");
The first example shows calling the method without the middle "address" parameter, and the second demonstrates calling the same method with the parameters reordered by name. In this way you can create a single method with named and optional parameters instead of overloading it several times, and use it everywhere you need as your requirements dictate. This gives cleaner code with less effort. Enjoy working with this. Cheers. | <urn:uuid:d66dc3e3-9303-4b43-8672-34dafb7872e7> | 3.265625 | 756 | Personal Blog | Software Dev. | 60.194853 |
NASA announced Tuesday a Grand Challenge focused on finding all asteroid threats to human populations and knowing what to do about them.
The challenge is a large-scale effort that will use multi-disciplinary collaborations and a variety of partnerships with other government agencies, international partners, industry, academia, and citizen scientists.
› NASA Asteroid Initiative
After an extensive year-and-a-half search, NASA has a new group of potential astronauts who will help the agency push the boundaries of exploration and travel to new destinations in the solar system. Eight candidates have been selected to be NASA's newest astronaut trainees.
This week marks the anniversary of two significant events in the history of space exploration -- the flight of Valentina Tereshkova 50 years ago on June 16 and of Sally Ride 30 years ago on June 18.
As the first women from their respective countries to fly in space, Tereshkova and Ride helped to usher in an era of equality in human spaceflight. Since their historic flights, 55 women (and counting) have journeyed into space.
NASA's NuSTAR mission has been busy studying the most energetic phenomena in the universe. Recently, a few high-energy events have sprung up, akin to "things that go bump in the night." When one telescope catches a sudden outpouring of high-energy light in the sky, NuSTAR and a host of other telescopes stop what they were doing and take a better look.
When we get sick, our immune systems kick into gear to tell our bodies how to heal. Our T cells -- white blood cells that act like tiny generals -- order an army of immune cells to organize and attack the enemy. Microgravity studies aboard the International Space Station are helping researchers pinpoint what drives these responses, leading to future medical treatments on Earth. | <urn:uuid:4fda2255-cbcb-495f-a749-0d21fa8c32f2> | 2.921875 | 363 | Content Listing | Science & Tech. | 35.21358 |