| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
7,895,973 | https://en.wikipedia.org/wiki/ERAM | The En Route Automation Modernization (ERAM) system architecture replaces the En Route Host computer system and its backup. ERAM provides all of today's functionality and:
Adds new capabilities needed to support the evolution of US National Airspace System.
Improves information security and streamlines traffic flow at US international borders.
Processes flight radar data.
Provides communications support.
Generates display data to air traffic controllers.
The display system provides real-time electronic aeronautical information and efficient data management.
Provides a fully functional backup system, precluding the need to restrict operations in the event of a primary failure.
The backup system provides the National Transportation Safety Board-recommended safety alerts, altitude warnings and conflict alerts.
Improves surveillance by using a greater number and variety of surveillance sources.
Detects and alerts air traffic controllers when aircraft are flying too close together, supporting both immediate safety alerting and longer-term planning.
ERAM simultaneously supports many operating modes and complex airspace configurations, driven by thousands of users who want to use the airspace differently.
Allows more radars and flights than the old Host Computer System which ERAM replaces.
The open system architecture enables the use of future capabilities to efficiently handle traffic growth and ensures a more stable and supportable system.
Implementation
The FAA is deploying ERAM at 20 Air Route Traffic Control Centers (ARTCCs), the Williams J. Hughes Technical Center, and the FAA Academy.
Step 1, 2006 Replace the current En Route computer backup system with Enhanced Backup Surveillance.
Step 2, 2007 Provide controllers real-time electronic access to weather data, aeronautical data, air traffic control procedures documents, Notices to Airmen (NOTAMs), Pilot Reports (PIREPs) and other information with the En Route Information Display System (ERIDS).
Step 3, 2009 Replace the current En Route Host computer air traffic control with a fully redundant, state of the art system that enables new capabilities and requires no stand-alone backup system.
Nationwide adoption
By the end of September 2011, ERAM was in continuous use at two relatively low-traffic centers, the Salt Lake City (ZLC) and Seattle (ZSE) ARTCCs. The project was over budget and behind schedule, and the original deployment dates were pushed back several times. While the system was deemed suitable for operational use, many workarounds were in place while awaiting software updates. Testing and dry runs continued while software bugs and requirements changes were worked out.
As of March 2015, the Operational Readiness Decision (ORD) for ERAM has been declared at the Salt Lake City, Seattle, Denver (ZDV), Minneapolis (ZMP), Albuquerque (ZAB), Chicago (ZAU), Los Angeles (ZLA), Kansas City (ZKC), Houston (ZHU), Indianapolis (ZID), Oakland (ZOA), Boston (ZBW), Miami (ZMA), Cleveland (ZOB), Fort Worth (ZFW), Memphis (ZME), Atlanta (ZTL), Jacksonville (ZJX) and New York (ZNY) ARTCCs. ORD marks the point after which the legacy HOST Computer System can be decommissioned. In addition to the ORD sites, continuous operations have been declared at the Washington (ZDC) ARTCC, meaning all 20 ARTCCs in the CONUS are now using ERAM 24/7 to control en route air traffic over an area covering more than 3 million square miles.
In April 2014, the ERAM system at the Los Angeles ARTCC failed, causing a ground stop that propagated throughout the western United States and lasted as long as 2.5 hours.
All ARTCCs operational under ERAM are running with software that includes the NextGen capabilities of Automatic Dependent Surveillance-Broadcast (ADS-B) and System Wide Information Management (SWIM).
References
Air traffic control systems | ERAM | Technology,Engineering | 780 |
19,062,032 | https://en.wikipedia.org/wiki/Nuclear%20Receptor%20Signaling%20Atlas | The Nuclear Receptor Signaling Atlas (NURSA) was a United States National Institutes of Health-funded research consortium focused on nuclear receptors and nuclear receptor coregulators. Its co-principal investigators were Bert O'Malley and Neil McKenna of Baylor College of Medicine and Ron Evans of the Salk Institute. NURSA has now been retired and replaced by the Signaling Pathways Project (SPP).
References
External links
Biological databases | Nuclear Receptor Signaling Atlas | Chemistry,Biology | 83 |
8,751,011 | https://en.wikipedia.org/wiki/Cross-figure | A cross-figure (also variously called cross number puzzle or figure logic) is a puzzle similar to a crossword in structure, but with entries that consist of numbers rather than words, where individual digits are entered in the blank cells. Clues may be mathematical ("the seventh prime number"), use general knowledge ("date of the Battle of Hastings") or refer to other clues ("9 down minus 3 across").
Clues
The numbers can be clued in various ways:
The clue can make it possible to find the number required directly, by using general knowledge (e.g. "Date of the Battle of Hastings") or arithmetic (e.g. "27 times 79") or other mathematical facts (e.g. "Seventh prime number")
The clue may require arithmetic to be applied to another answer or answers (e.g. "25 across times 3" or "9 down minus 3 across")
The clue may indicate possible answers but make it impossible to give the correct one without using crosslights (e.g. "A prime number")
One answer may be related to another in a non-determinate way (e.g. "A multiple of 24 down" or "5 across with its digits rearranged")
Some entries may either not be clued at all, or refer to another clue (e.g. 7 down may be clued as "See 13 down" if 13 down reads "7 down plus 5")
Entries may be grouped together for clueing purposes, e.g. "1 across, 12 across, and 17 across together contain all the digits except 0"
Some cross-figures use an algebraic type of clue, with various letters taking unknown values (e.g. "A - 2B", where neither A nor B is known in advance)
Another special type of puzzle uses a real-world situation such as a family outing and bases most clues on this (e.g. "Time taken to travel from Ayville to Beetown")
Cross-figures that use mostly the first type of clue may be used for educational purposes, but most enthusiasts would agree that this clue type should be used rarely, if at all. Without this type a cross-figure may superficially seem to be impossible to solve, since no answer can apparently be filled in until another has first been found, which without the first type of clue appears impossible. However, if a different approach is adopted where, instead of trying to find complete answers (as would be done for a crossword) one gradually narrows down the possibilities for individual cells (or, in some cases, whole answers) then the problem becomes tractable. For example, if 12 across and 7 down both have three digits and the clue for 12 across is "7 down times 2", one can work out that (i) the last digit of 12 across must be even, (ii) the first digit of 7 down must be 1, 2, 3 or 4, and (iii) the first digit of 12 across must be between 2 and 9 inclusive. (It is an implicit rule of cross-figures that numbers cannot start with 0; however, some puzzles explicitly allow this) By continuing to apply this sort of argument, a solution can eventually be found. Another implicit rule of cross-figures is that no two answers should be the same (in cross-figures allowing numbers to start with 0, 0123 and 123 may be considered different.)
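This narrowing-down approach is easy to sketch computationally. The following Python fragment is a hypothetical illustration (not taken from any published solver) of the 12 across / 7 down example just discussed; the three numbered deductions fall out automatically from intersecting candidate sets:

```python
# Sketch of the narrowing-down approach for the example in the text:
# 12 across and 7 down are both three-digit numbers, and
# 12 across = 7 down * 2. Numbers may not start with 0.

# Start with every three-digit candidate for each entry.
candidates_7d = set(range(100, 1000))
candidates_12a = set(range(100, 1000))

# Apply the clue "12 across = 7 down times 2" by intersection.
candidates_7d = {n for n in candidates_7d if 2 * n in candidates_12a}
candidates_12a = {2 * n for n in candidates_7d}

# The deductions quoted in the text now fall out automatically:
assert all(n % 2 == 0 for n in candidates_12a)                 # (i) last digit of 12 across is even
assert {str(n)[0] for n in candidates_7d} == set("1234")       # (ii) 7 down starts with 1-4
assert {str(n)[0] for n in candidates_12a} <= set("23456789")  # (iii) 12 across starts with 2-9
print(len(candidates_7d), "possibilities remain for 7 down")
```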
Creation
A curious feature of cross-figures is that it makes perfect sense for the setter of a puzzle to try to solve it themself. Indeed, the setter should ideally do this (without direct reference to the answer) as it is essentially the only way to find out if the puzzle has a single unique solution. Alternatively, there are computer programs available that can be used for this purpose; however, they may not make it clear how difficult the puzzle is.
Popularity
Given that some basic mathematical knowledge is needed to solve cross-figures, they are much less popular than crosswords. As a result, very few books of them have ever been published. Dell Magazines publishes a magazine called Math & Logic Problems four times a year that includes these puzzles, which they name "Figure Logics"; the eighteen puzzles contained within each issue generally increase in difficulty, from easy to "challenger". A magazine called Figure it Out, which was dedicated to number puzzles, included some, but it was very short-lived. This limited popularity also explains why cross-figures have fewer established conventions than crosswords (especially cryptic crosswords). One exception is the use of the semicolon (;) to attach two strings of numbers together, for example 1234;5678 becomes 12345678. Some cross-figures voluntarily ignore this option and other "non-mathematical" approaches (e.g. palindromic numbers and repunits) where the same result can be achieved through algebraic means.
External links
"Cross-figure Puzzles by Yochanan Dvir"
"On Crossnumber Puzzles and The Lucas-Bonaccio Farm 1998"
The Little Pigley Farm crossnumber puzzle and its history by Joel Pomerantz
Cross-figure/Crossword Hybrids by Jordan Inman
Logic puzzles
Recreational mathematics
Crosswords | Cross-figure | Mathematics | 1,076 |
31,429 | https://en.wikipedia.org/wiki/Twin%20paradox | In physics, the twin paradox is a thought experiment in special relativity involving twins, one of whom takes a space voyage at relativistic speeds and returns home to find that the twin who remained on Earth has aged more. This result appears puzzling because each twin sees the other twin as moving, and so, as a consequence of an incorrect and naive application of time dilation and the principle of relativity, each should paradoxically find the other to have aged less. However, this scenario can be resolved within the standard framework of special relativity: the travelling twin's trajectory involves two different inertial frames, one for the outbound journey and one for the inbound journey. Another way to understand the paradox is to realize the travelling twin is undergoing acceleration, which makes them a non-inertial observer. In both views there is no symmetry between the spacetime paths of the twins. Therefore, the twin paradox is not actually a paradox in the sense of a logical contradiction.
Starting with Paul Langevin in 1911, there have been various explanations of this paradox. These explanations "can be grouped into those that focus on the effect of different standards of simultaneity in different frames, and those that designate the acceleration [experienced by the travelling twin] as the main reason". Max von Laue argued in 1913 that since the traveling twin must be in two separate inertial frames, one on the way out and another on the way back, this frame switch is the reason for the aging difference. Explanations put forth by Albert Einstein and Max Born invoked gravitational time dilation to explain the aging as a direct effect of acceleration. However, it has been proven that neither general relativity, nor even acceleration, are necessary to explain the effect, as the effect still applies if two astronauts pass each other at the turnaround point and synchronize their clocks at that point. The situation at the turnaround point can be thought of as where a pair of observers, one travelling away from the starting point and another travelling toward it, pass by each other, and where the clock reading of the first observer is transferred to that of the second one, both maintaining constant speed, with both trip times being added at the end of their journey.
History
In his famous paper on special relativity in 1905, Albert Einstein deduced that for two stationary and synchronous clocks that are placed at points A and B, if the clock at A is moved along the line AB and stops at B, the clock that moved from A would lag behind the clock at B. He stated that this result would also apply if the path from A to B was polygonal or circular. Einstein considered this to be a natural consequence of special relativity, not a paradox as some suggested, and in 1911, he restated and elaborated on this result.
In 1911, Paul Langevin gave a "striking example" by describing the story of a traveler making a trip at a Lorentz factor of 100 (corresponding to 99.995% of the speed of light). The traveler remains in a projectile for one year of his time, and then reverses direction. Upon return, the traveler will find that he has aged two years, while 200 years have passed on Earth. During the trip, both the traveler and Earth keep sending signals to each other at a constant rate, which places Langevin's story among the Doppler shift versions of the twin paradox. The relativistic effects upon the signal rates are used to account for the different aging rates. The asymmetry that occurred because only the traveler underwent acceleration is used to explain why there is any difference at all, because "any change of velocity, or any acceleration has an absolute meaning".
Max von Laue (1911, 1913) elaborated on Langevin's explanation. Using Hermann Minkowski's spacetime formalism, Laue went on to demonstrate that the world lines of the inertially moving bodies maximize the proper time elapsed between two events. He also wrote that the asymmetric aging is completely accounted for by the fact that the astronaut twin travels in two separate frames, while the Earth twin remains in one frame, and the time of acceleration can be made arbitrarily small compared with the time of inertial motion. Eventually, Lord Halsbury and others removed any acceleration by introducing the "three-brother" approach. The traveling twin transfers his clock reading to a third one, traveling in the opposite direction. Another way of avoiding acceleration effects is the use of the relativistic Doppler effect.
Neither Einstein nor Langevin considered such results to be problematic: Einstein only called it "peculiar" while Langevin presented it as a consequence of absolute acceleration. Both men argued that, from the time differential illustrated by the story of the twins, no self-contradiction could be constructed. In other words, neither Einstein nor Langevin saw the story of the twins as constituting a challenge to the self-consistency of relativistic physics.
Specific example
Consider a space ship traveling from Earth to the nearest star system: a distance d = 4 light-years away, at a speed v = 0.8c (i.e., 80% of the speed of light).
To make the numbers easy, the ship is assumed to attain full speed in a negligible time upon departure (even though it would actually take about 9 months accelerating at 1 g to get up to speed). Similarly, at the end of the outgoing trip, the change in direction needed to start the return trip is assumed to occur in a negligible time. This can also be modelled by assuming that the ship is already in motion at the beginning of the experiment and that the return event is modelled by a Dirac delta distribution acceleration.
The parties will observe the situation as follows:
Earth perspective
The Earth-based mission control reasons about the journey this way: the round trip will take Δt = 2d/v = 10 years in Earth time (i.e. everybody who stays on Earth will be 10 years older when the ship returns). The amount of time as measured on the ship's clocks and the aging of the travelers during their trip will be reduced by the factor ε = √(1 − v²/c²), the reciprocal of the Lorentz factor (time dilation). In this case ε = 0.6, and the travelers will have aged only 0.6 × 10 = 6 years when they return.
Travellers' perspective
The ship's crew members also calculate the particulars of their trip from their perspective. They know that the distant star system and the Earth are moving relative to the ship at speed v during the trip. In their rest frame the distance between the Earth and the star system is εd = 0.6 × 4 = 2.4 light-years (length contraction), for both the outward and return journeys. Each half of the journey takes εd/v = 2.4/0.8 = 3 years, and the round trip takes twice as long (6 years). Their calculations show that they will arrive home having aged 6 years. The travelers' final calculation about their aging is in complete agreement with the calculations of those on Earth, though they experience the trip quite differently from those who stay at home.
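In compact form, the two calculations above (using d = 4 light-years and v = 0.8c from the example) read:

```latex
\text{Earth frame:}\quad \Delta t = \frac{2d}{v} = \frac{2\times 4\,\mathrm{ly}}{0.8c} = 10\ \mathrm{years},
\qquad
\text{Ship frame:}\quad \Delta\tau = \frac{2\,\varepsilon d}{v} = \varepsilon\,\Delta t = 0.6\times 10 = 6\ \mathrm{years},
\quad \varepsilon=\sqrt{1-v^{2}/c^{2}}.
```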
Conclusion
No matter what method they use to predict the clock readings, everybody will agree about them. If twins are born on the day the ship leaves, and one goes on the journey while the other stays on Earth, they will meet again when the traveler is 6 years old and the stay-at-home twin is 10 years old.
Resolution of the paradox in special relativity
The paradoxical aspect of the twins' situation arises from the fact that at any given moment the travelling twin's clock is running slow in the earthbound twin's inertial frame, but based on the relativity principle one could equally argue that the earthbound twin's clock is running slow in the travelling twin's inertial frame. One proposed resolution is based on the fact that the earthbound twin is at rest in the same inertial frame throughout the journey, while the travelling twin is not: in the simplest version of the thought-experiment, the travelling twin switches at the midpoint of the trip from being at rest in an inertial frame which moves in one direction (away from the Earth) to being at rest in an inertial frame which moves in the opposite direction (towards the Earth). In this approach, determining which observer switches frames and which does not is crucial. Although both twins can legitimately claim that they are at rest in their own frame, only the traveling twin experiences acceleration when the spaceship engines are turned on. This acceleration, measurable with an accelerometer, makes his rest frame temporarily non-inertial. This reveals a crucial asymmetry between the twins' perspectives: although we can predict the aging difference from both perspectives, we need to use different methods to obtain correct results.
Role of acceleration
Although some solutions attribute a crucial role to the acceleration of the travelling twin at the time of the turnaround, others note that the effect also arises if one imagines two separate travellers, one outward-going and one inward-coming, who pass each other and synchronize their clocks at the point corresponding to "turnaround" of a single traveller. In this version, physical acceleration of the travelling clock plays no direct role; "the issue is how long the world-lines are, not how bent". The length referred to here is the Lorentz-invariant length or "proper time interval" of a trajectory which corresponds to the elapsed time measured by a clock following that trajectory (see Section Difference in elapsed time as a result of differences in twins' spacetime paths below). In Minkowski spacetime, the travelling twin must feel a different history of accelerations from the earthbound twin, even if this just means accelerations of the same size separated by different amounts of time, however "even this role for acceleration can be eliminated in formulations of the twin paradox in curved spacetime, where the twins can fall freely along space-time geodesics between meetings".
Relativity of simultaneity
For a moment-by-moment understanding of how the time difference between the twins unfolds, one must understand that in special relativity there is no concept of absolute present. For different inertial frames there are different sets of events that are simultaneous in that frame. This relativity of simultaneity means that switching from one inertial frame to another requires an adjustment in what slice through spacetime counts as the "present". In the spacetime diagram on the right, drawn for the reference frame of the Earth-based twin, that twin's world line coincides with the vertical axis (his position is constant in space, moving only in time). On the first leg of the trip, the second twin moves to the right (black sloped line); and on the second leg, back to the left. Blue lines show the planes of simultaneity for the traveling twin during the first leg of the journey; red lines, during the second leg. Just before turnaround, the traveling twin calculates the age of the Earth-based twin by measuring the interval along the vertical axis from the origin to the upper blue line. Just after turnaround, if he recalculates, he will measure the interval from the origin to the lower red line. In a sense, during the U-turn the plane of simultaneity jumps from blue to red and very quickly sweeps over a large segment of the world line of the Earth-based twin. When one transfers from the outgoing inertial frame to the incoming inertial frame there is a jump discontinuity in the age of the Earth-based twin (6.4 years in the example above).
A non-spacetime approach
As mentioned above, an "out and back" twin paradox adventure may incorporate the transfer of clock reading from an "outgoing" astronaut to an "incoming" astronaut, thus eliminating the effect of acceleration. Also, the physical acceleration of clocks does not contribute to the kinematical effects of special relativity. Rather, in special relativity, the time differential between two reunited clocks is produced purely by uniform inertial motion, as discussed in Einstein's original 1905 relativity paper, as well as in all subsequent kinematical derivations of the Lorentz transformations.
Because spacetime diagrams incorporate Einstein's clock synchronization (with its lattice of clocks methodology), there will be a requisite jump in the reading of the Earth clock time made by a "suddenly returning astronaut" who inherits a "new meaning of simultaneity" in keeping with a new clock synchronization dictated by the transfer to a different inertial frame, as explained in Spacetime Physics by John A. Wheeler.
If, instead of incorporating Einstein's clock synchronization (lattice of clocks), the astronaut (outgoing and incoming) and the Earth-based party regularly update each other on the status of their clocks by way of sending radio signals (which travel at light speed), then all parties will note an incremental buildup of asymmetry in time-keeping, beginning at the "turn around" point. Prior to the "turn around", each party regards the other party's clock to be recording time differently from his own, but the noted difference is symmetrical between the two parties. After the "turn around", the noted differences are not symmetrical, and the asymmetry grows incrementally until the two parties are reunited. Upon finally reuniting, this asymmetry can be seen in the actual difference showing on the two reunited clocks.
The equivalence of biological aging and clock time-keeping
All processes—chemical, biological, measuring apparatus functioning, human perception involving the eye and brain, the communication of force—are constrained by the speed of light. There is clock functioning at every level, dependent on light speed and the inherent delay at even the atomic level. Biological aging, therefore, is in no way different from clock time-keeping. This means that biological aging would be slowed in the same manner as a clock.
What it looks like: the relativistic Doppler shift
In view of the frame-dependence of simultaneity for events at different locations in space, some treatments prefer a more phenomenological approach, describing what the twins would observe if each sent out a series of regular radio pulses, equally spaced in time according to the emitter's clock. This is equivalent to asking, if each twin sent a video feed of themselves to each other, what do they see in their screens? Or, if each twin always carried a clock indicating his age, what time would each see in the image of their distant twin and his clock?
Shortly after departure, the traveling twin sees the stay-at-home twin with no time delay. At arrival, the image in the ship screen shows the staying twin as he was 1 year after launch, because radio emitted from Earth 1 year after launch gets to the other star 4 years afterwards and meets the ship there. During this leg of the trip, the traveling twin sees his own clock advance 3 years and the clock in the screen advance 1 year, so it seems to advance at ⅓ the normal rate, just 20 image seconds per ship minute. This combines the effects of time dilation due to motion (by factor ε, five years on Earth are 3 years on ship) and the effect of increasing light-time-delay (which grows from 0 to 4 years).
Of course, the observed frequency of the transmission is also ⅓ the frequency of the transmitter (a reduction in frequency; "red-shifted"). This is called the relativistic Doppler effect. The frequency of clock-ticks (or of wavefronts) which one sees from a source with rest frequency frest is

fobs = frest √((1 − v/c) / (1 + v/c))

when the source is moving directly away. This is fobs = ⅓ frest for v/c = 0.8.
As for the stay-at-home twin, he gets a slowed signal from the ship for 9 years, at ⅓ the transmitter frequency. During these 9 years, the clock of the traveling twin in the screen seems to advance 3 years, so both twins see the image of their sibling aging at only ⅓ their own rate. Expressed another way, they would both see the other's clock run at ⅓ their own clock speed. If they factor out of the calculation the fact that the light-time delay of the transmission is increasing at a rate of 0.8 seconds per second, both can work out that the other twin is aging slower, at a 60% rate.
Then the ship turns back toward home. The clock of the staying twin shows "1 year after launch" in the screen of the ship, and during the 3 years of the trip back it increases up to "10 years after launch", so the clock in the screen seems to be advancing 3 times faster than usual.
When the source is moving towards the observer, the observed frequency is higher ("blue-shifted") and given by

fobs = frest √((1 + v/c) / (1 − v/c))

This is fobs = 3 frest for v/c = 0.8.
As for the screen on Earth, it shows that trip back beginning 9 years after launch, and the traveling clock in the screen shows that 3 years have passed on the ship. One year later, the ship is back home and the clock shows 6 years. So, during the trip back, both twins see their sibling's clock going 3 times faster than their own. Factoring out the fact that the light-time-delay is decreasing by 0.8 seconds every second, each twin calculates that the other twin is aging at 60% his own aging speed.
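These bookkeeping arguments are easy to check numerically. The short Python sketch below is an illustration written around this article's numbers (d = 4 light-years, v = 0.8c); it reproduces the image-aging totals each twin sees in his screen:

```python
from math import sqrt

v = 0.8          # speed as a fraction of c
d = 4.0          # one-way distance in light-years
eps = sqrt(1 - v**2)            # time-dilation factor, 0.6

red = sqrt((1 - v) / (1 + v))   # receding Doppler factor, 1/3
blue = sqrt((1 + v) / (1 - v))  # approaching Doppler factor, 3

# Ship twin: sees red images for 3 years, then blue images for 3 years.
ship_leg = eps * d / v                        # 3 years per leg of ship time
earth_age_seen = ship_leg * red + ship_leg * blue
print(earth_age_seen)                         # ~10.0 years on the Earth clock

# Earth twin: sees red images for d/v + d = 9 years, blue for the last year.
t_red = d / v + d                             # signal from turnaround arrives at 9 y
t_blue = 2 * d / v - t_red                    # remaining 1 year
ship_age_seen = t_red * red + t_blue * blue
print(ship_age_seen)                          # ~6.0 years on the ship clock
```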
The x–t (space–time) diagrams at right show the paths of light signals traveling between Earth and ship (1st diagram) and between ship and Earth (2nd diagram). These signals carry the images of each twin and his age-clock to the other twin. The vertical black line is the Earth's path through spacetime and the other two sides of the triangle show the ship's path through spacetime (as in the Minkowski diagram above). As far as the sender is concerned, he transmits these at equal intervals (say, once an hour) according to his own clock; but according to the clock of the twin receiving these signals, they are not being received at equal intervals.
After the ship has reached its cruising speed of 0.8c, each twin would see 1 second pass in the received image of the other twin for every 3 seconds of his own time. That is, each would see the image of the other's clock going slow, not just slow by the factor 0.6, but even slower because light-time-delay is increasing 0.8 seconds per second. This is shown in the figures by red light paths. At some point, the images received by each twin change so that each would see 3 seconds pass in the image for every second of his own time. That is, the received signal has been increased in frequency by the Doppler shift. These high frequency images are shown in the figures by blue light paths.
The asymmetry in the Doppler shifted images
The asymmetry between the Earth and the space ship is manifested in this diagram by the fact that more blue-shifted (fast aging) images are received by the ship. Put another way, the space ship sees the image change from a red-shift (slower aging of the image) to a blue-shift (faster aging of the image) at the midpoint of its trip (at the turnaround, 3 years after departure); the Earth sees the image of the ship change from red-shift to blue shift after 9 years (almost at the end of the period that the ship is absent). In the next section, one will see another asymmetry in the images: the Earth twin sees the ship twin age by the same amount in the red and blue shifted images; the ship twin sees the Earth twin age by different amounts in the red and blue shifted images.
Calculation of elapsed time from the Doppler diagram
The twin on the ship sees low frequency (red) images for 3 years. During that time, he would see the Earth twin in the image grow older by 3 × ⅓ = 1 year. He then sees high frequency (blue) images during the back trip of 3 years. During that time, he would see the Earth twin in the image grow older by 3 × 3 = 9 years. When the journey is finished, the image of the Earth twin has aged by 1 + 9 = 10 years.
The Earth twin sees 9 years of slow (red) images of the ship twin, during which the ship twin ages (in the image) by 9 × ⅓ = 3 years. He then sees fast (blue) images for the remaining 1 year until the ship returns. In the fast images, the ship twin ages by 1 × 3 = 3 years. The total aging of the ship twin in the images received by Earth is 3 + 3 = 6 years, so the ship twin returns younger (6 years as opposed to 10 years on Earth).
The distinction between what they see and what they calculate
To avoid confusion, note the distinction between what each twin sees and what each would calculate. Each sees an image of his twin which he knows originated at a previous time and which he knows is Doppler shifted. He does not take the elapsed time in the image as the age of his twin now.
If he wants to calculate when his twin was the age shown in the image (i.e. how old he himself was then), he has to determine how far away his twin was when the signal was emitted—in other words, he has to consider simultaneity for a distant event.
If he wants to calculate how fast his twin was aging when the image was transmitted, he adjusts for the Doppler shift. For example, when he receives high frequency images (showing his twin aging rapidly) with frequency fobs = frest √((1 + v/c) / (1 − v/c)), he does not conclude that the twin was aging that rapidly when the image was generated, any more than he concludes that the siren of an ambulance is emitting the frequency he hears. He knows that the Doppler effect has increased the image frequency by the factor 1 / (1 − v/c). Therefore, he calculates that his twin was aging at the rate of

fobs (1 − v/c) = frest √(1 − v²/c²) = ε frest

when the image was emitted. A similar calculation reveals that his twin was aging at the same reduced rate of ε frest in all low frequency images.
Simultaneity in the Doppler shift calculation
It may be difficult to see where simultaneity came into the Doppler shift calculation, and indeed the calculation is often preferred because one does not have to worry about simultaneity. As seen above, the ship twin can convert his received Doppler-shifted rate to a slower rate of the distant clock for both red and blue images. If he ignores simultaneity, he might say his twin was aging at the reduced rate ε frest throughout the journey and therefore should be younger than he is. He is now back to square one, and has to take into account the change in his notion of simultaneity at the turnaround. The rate he can calculate for the image (corrected for Doppler effect) is the rate of the Earth twin's clock at the moment it was sent, not at the moment it was received. Since he receives an unequal number of red and blue shifted images, he should realize that the red and blue shifted emissions were not emitted over equal time periods for the Earth twin, and therefore he must account for simultaneity at a distance.
Viewpoint of the traveling twin
During the turnaround, the traveling twin is in an accelerated reference frame. According to the equivalence principle, the traveling twin may analyze the turnaround phase as if the stay-at-home twin were freely falling in a gravitational field and as if the traveling twin were stationary. A 1918 paper by Einstein presents a conceptual sketch of the idea. From the viewpoint of the traveler, a calculation for each separate leg, ignoring the turnaround, leads to a result in which the Earth clocks age less than the traveler. For example, if the Earth clocks age 1 day less on each leg, the amount that the Earth clocks will lag behind amounts to 2 days. The physical description of what happens at turnaround has to produce a contrary effect of double that amount: 4 days' advancing of the Earth clocks. Then the traveler's clock will end up with a net 2-day delay on the Earth clocks, in agreement with calculations done in the frame of the stay-at-home twin.
The mechanism for the advancing of the stay-at-home twin's clock is gravitational time dilation. When an observer finds that inertially moving objects are being accelerated with respect to themselves, those objects are in a gravitational field insofar as relativity is concerned. For the traveling twin at turnaround, this gravitational field fills the universe. In a weak field approximation, clocks tick at a rate of (1 + Φ/c²), where Φ is the difference in gravitational potential. In this case, Φ = gh, where g is the acceleration of the traveling observer during turnaround and h is the distance to the stay-at-home twin. The rocket is firing towards the stay-at-home twin, thereby placing that twin at a higher gravitational potential. Due to the large distance between the twins, the stay-at-home twin's clocks will appear to be sped up enough to account for the difference in proper times experienced by the twins. It is no accident that this speed-up is enough to account for the simultaneity shift described above. The general relativity solution for a static homogeneous gravitational field and the special relativity solution for finite acceleration produce identical results.
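As a rough consistency check against the specific example above (a sketch, not a rigorous derivation: it takes h ≈ d = 4 light-years in the Earth frame and approximates the accumulated change g·δτ during the brief turnaround by the velocity change 2v), the extra time gained by the stay-at-home clock during the turnaround is

```latex
\delta t_{\mathrm{gain}} \approx \frac{g h}{c^{2}}\,\delta\tau
\approx \frac{2 v h}{c^{2}}
= \frac{2 \times 0.8c \times 4\,\mathrm{ly}}{c^{2}}
= 6.4\ \mathrm{years},
```

which matches the 6.4-year jump of the plane of simultaneity quoted earlier.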
Other calculations have been done for the traveling twin (or for any observer who sometimes accelerates), which do not involve the equivalence principle, and which do not involve any gravitational fields. Such calculations are based only on the special theory, not the general theory, of relativity. One approach calculates surfaces of simultaneity by considering light pulses, in accordance with Hermann Bondi's idea of the k-calculus. A second approach calculates a straightforward but technically complicated integral to determine how the traveling twin measures the elapsed time on the stay-at-home clock. An outline of this second approach is given in a separate section below.
Difference in elapsed time as a result of differences in twins' spacetime paths
The following paragraph shows several things:
how to employ a precise mathematical approach in calculating the differences in the elapsed time
how to prove exactly the dependency of the elapsed time on the different paths taken through spacetime by the twins
how to quantify the differences in elapsed time
how to calculate proper time as a function (integral) of coordinate time
Let clock K be associated with the "stay at home twin".
Let clock K' be associated with the rocket that makes the trip.
At the departure event both clocks are set to 0.
Phase 1: Rocket (with clock K') embarks with constant proper acceleration a during a time Ta as measured by clock K until it reaches some velocity V.
Phase 2: Rocket keeps coasting at velocity V during some time Tc according to clock K.
Phase 3: Rocket fires its engines in the opposite direction of K during a time Ta according to clock K until it is at rest with respect to clock K. The constant proper acceleration has the value −a, in other words the rocket is decelerating.
Phase 4: Rocket keeps firing its engines in the opposite direction of K, during the same time Ta according to clock K, until K' regains the same speed V with respect to K, but now towards K (with velocity −V).
Phase 5: Rocket keeps coasting towards K at speed V during the same time Tc according to clock K.
Phase 6: Rocket again fires its engines in the direction of K, so it decelerates with a constant proper acceleration a during a time Ta, still according to clock K, until both clocks reunite.
Knowing that the clock K remains inertial (stationary), the total accumulated proper time Δτ of clock K' will be given by the integral function of coordinate time Δt

Δτ = ∫ √(1 − (v(t)/c)²) dt

where v(t) is the coordinate velocity of clock K' as a function of t according to clock K, and, e.g. during phase 1, given by

v(t) = at / √(1 + (at/c)²)
This integral can be calculated for the 6 phases:

Phase 1: (c/a) arsinh(aTa/c)
Phase 2: Tc √(1 − V²/c²)
Phase 3: (c/a) arsinh(aTa/c)
Phase 4: (c/a) arsinh(aTa/c)
Phase 5: Tc √(1 − V²/c²)
Phase 6: (c/a) arsinh(aTa/c)
where a is the proper acceleration, felt by clock K' during the acceleration phase(s), and where the following relations hold between V, a and Ta:

V = aTa / √(1 + (aTa/c)²), or equivalently aTa = V / √(1 − V²/c²)
So the traveling clock K' will show an elapsed time of

Δτ = 2Tc √(1 − V²/c²) + (4c/a) arsinh(aTa/c)

which can be expressed as

Δτ = 2Tc/γ + (4c/a) arsinh(γV/c), with γ = 1/√(1 − V²/c²),

whereas the stationary clock K shows an elapsed time of

Δt = 2Tc + 4Ta

which is, for every possible value of a, Ta, Tc and V, larger than the reading of clock K':

Δt > Δτ
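A brute-force numerical check of these expressions is straightforward. The sketch below is illustrative only (the values of a, Ta and Tc are arbitrary test inputs, with c = 1); it integrates √(1 − v(t)²/c²) over the acceleration phases and compares the result against the closed form:

```python
from math import sqrt, asinh

c = 1.0
a = 2.0    # proper acceleration (arbitrary test value)
Ta = 0.8   # coordinate duration of each acceleration phase
Tc = 1.5   # coordinate duration of each coasting phase

V = a * Ta / sqrt(1 + (a * Ta / c) ** 2)   # speed reached at the end of phase 1

def v(t):
    """Coordinate velocity during one acceleration phase starting from rest."""
    return a * t / sqrt(1 + (a * t / c) ** 2)

# Proper time of one acceleration phase by numerical integration.
N = 200_000
dt = Ta / N
tau_accel = sum(sqrt(1 - (v(i * dt) / c) ** 2) * dt for i in range(N))

# All four acceleration phases contribute equally, by symmetry.
tau_total = 4 * tau_accel + 2 * Tc * sqrt(1 - (V / c) ** 2)
tau_closed = 2 * Tc * sqrt(1 - (V / c) ** 2) + (4 * c / a) * asinh(a * Ta / c)
t_total = 2 * Tc + 4 * Ta

print(tau_total, tau_closed)   # agree to integration accuracy
print(t_total > tau_total)     # True: the stay-at-home clock shows more time
```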
Difference in elapsed times: how to calculate it from the ship
In the standard proper time formula

Δτ = ∫₀^Δt √(1 − (v(t)/c)²) dt

Δτ represents the time of the non-inertial (travelling) observer K' as a function of the elapsed time Δt of the inertial (stay-at-home) observer K for whom observer K' has velocity v(t) at time t.
To calculate the elapsed time Δt of the inertial observer K as a function of the elapsed time Δτ of the non-inertial observer K', where only quantities measured by K' are accessible, the following formula can be used:

Δt = ∫₀^Δτ cosh[η(0) + (1/c) ∫₀^τ a(τ′) dτ′] dτ

where η(0) = artanh(v(0)/c) is the initial rapidity and a(τ) is the proper acceleration of the non-inertial observer K' as measured by himself (for instance with an accelerometer) during the whole round-trip. Since the integrand cosh never drops below 1, the Cauchy–Schwarz inequality can be used to show that the inequality

Δt ≥ Δτ

follows from the previous expression, with equality only if the velocity always vanishes.
Using the Dirac delta function to model the infinite acceleration phase in the standard case of the traveller having constant speed v during the outbound and the inbound trip, the formula produces the known result:

Δt = Δτ / √(1 − v²/c²)
In the case where the accelerated observer K' departs from K with zero initial velocity, the general equation reduces to the simpler form:

Δt = ∫₀^Δτ cosh[(1/c) ∫₀^τ a(τ′) dτ′] dτ

which, in the smooth version of the twin paradox where the traveller has constant proper acceleration phases, successively given by a, −a, −a, a (each lasting a quarter of the total proper time Δτ), results in

Δt = (4c/a) sinh(aΔτ/(4c))

where the convention c = 1 is used, in accordance with the above expression with acceleration phases Ta = (c/a) sinh(aΔτ/(4c)) and inertial (coasting) phases Tc = 0.
A rotational version
Twins Bob and Alice inhabit a space station in circular orbit around a massive body in space. Bob suits up and exits the station. While Alice remains inside the station, continuing to orbit with it as before, Bob uses a rocket propulsion system to cease orbiting and hover where he was. When the station completes an orbit and returns to Bob, he rejoins Alice. Alice is now younger than Bob. In addition to rotational acceleration, Bob must decelerate to become stationary and then accelerate again to match the orbital speed of the space station.
No twin paradox in an absolute frame of reference
Einstein's conclusion of an actual difference in registered clock times (or aging) between reunited parties caused Paul Langevin to posit an actual, albeit experimentally indiscernible, absolute frame of reference:
In 1911, Langevin wrote: "A uniform translation in the aether has no experimental sense. But because of this it should not be concluded, as has sometimes happened prematurely, that the concept of aether must be abandoned, that the aether is non-existent and inaccessible to experiment. Only a uniform velocity relative to it cannot be detected, but any change of velocity ... has an absolute sense."
In 1913, Henri Poincaré's posthumous Last Essays were published and there he had restated his position: "Today some physicists want to adopt a new convention. It is not that they are constrained to do so; they consider this new convention more convenient; that is all. And those who are not of this opinion can legitimately retain the old one."
In the relativity of Poincaré and Hendrik Lorentz, which assumes an absolute (though experimentally indiscernible) frame of reference, no paradox arises due to the fact that clock slowing (along with length contraction and velocity) is regarded as an actuality, hence the actual time differential between the reunited clocks.
In that interpretation, a party at rest with the totality of the cosmos (at rest with the barycenter of the universe, or at rest with a possible ether) would have the maximum rate of time-keeping and have non-contracted length. All the effects of Einstein's special relativity (consistent light-speed measure, as well as symmetrically measured clock-slowing and length-contraction across inertial frames) fall into place.
That interpretation of relativity, which John A. Wheeler calls "ether theory B (length contraction plus time contraction)", did not gain as much traction as Einstein's, which simply disregarded any deeper reality behind the symmetrical measurements across inertial frames. There is no physical test which distinguishes one interpretation from the other.
In 2005, Robert B. Laughlin (Physics Nobel Laureate, Stanford University), wrote about the nature of space: "It is ironic that Einstein's most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed ... The word 'ether' has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum. ... Relativity actually says nothing about the existence or nonexistence of matter pervading the universe, only that any such matter must have relativistic symmetry (i.e., as measured)."
In Special Relativity (1968), A. P. French wrote: "Note, though, that we are appealing to the reality of A's acceleration, and to the observability of the inertial forces associated with it. Would such effects as the twin paradox (specifically -- the time keeping differential between reunited clocks) exist if the framework of fixed stars and distant galaxies were not there? Most physicists would say no. Our ultimate definition of an inertial frame may indeed be that it is a frame having zero acceleration with respect to the matter of the universe at large."
See also
Bell's spaceship paradox
Clock hypothesis
Ehrenfest paradox
Herbert Dingle
Ladder paradox
List of paradoxes
Supplee's paradox
Time dilation
Time for the Stars
Primary sources
Secondary sources
Further reading
The ideal clock
The ideal clock is a clock whose action depends only on its instantaneous velocity, and is independent of any acceleration of the clock.
Gravitational time dilation; time dilation in circular motion
External links
Twin Paradox overview in the Usenet Physics FAQ
The twin paradox: Is the symmetry of time dilation paradoxical? From Einsteinlight: Relativity in animations and film clips.
FLASH Animations: from John de Pillis. (Scene 1): "View" from the Earth twin's point of view. (Scene 2): "View" from the traveling twin's point of view.
Relativity Science Calculator - Twin Clock Paradox
Relativistic paradoxes
Special relativity
Albert Einstein
Time in physics
Thought experiments in physics
Interstellar travel | Twin paradox | Physics,Astronomy | 7,215 |
65,162,567 | https://en.wikipedia.org/wiki/Golden%20Gate%20of%20the%20Ecliptic | The Golden Gate of the Ecliptic is an asterism in the constellation Taurus that has been known for several thousand years. The asterism is formed of the two eye-catching open star clusters, the Pleiades and the Hyades that form the posts of a virtual gate on either side of the ecliptic line.
Since all planets as well as the Moon and the Sun always move very closely along the virtual circle of the ecliptic, all seven of these orbiting bodies regularly pass through the Golden Gate of the Ecliptic. Since the Moon is the closest of these heavenly bodies to the Earth, and its orbit is inclined at a high enough angle to the ecliptic, on some occasions it can cover the stars of the open star clusters or even pass outside the Gate.
History
From 4000 to 1500 BC the equinox lay within the constellation Taurus, and therefore great importance was attached to this constellation. The 4,500-year-old sky tablet of the neolithic Tal-Qadi Temple in Malta is thought to depict the Golden Gate of the Ecliptic.
Sources
Michael A. Rappenglück: Palaeolithic Timekeepers Looking at the Golden Gate of the Ecliptic; The Lunar Cycle and the Pleiades in the Cave of La-Tête-du-lion (Ardèche, France) — 21,000 BP, in: Barbieri C., Rampazzi F. (editors): Earth-Moon Relationships, Springer, Dordrecht
External links
The Golden Gate of the Ecliptic. In Wikibook: The Tal-Qadi Sky Tablet
References
Asterisms (astronomy)
Taurus (constellation)
Pleiades
Hyades (star cluster) | Golden Gate of the Ecliptic | Astronomy | 351 |
55,651,129 | https://en.wikipedia.org/wiki/History%20of%20computing%20in%20Romania | This article describes the history of computing in Romania.
HC family
The Romanian computers HC 85, HC 85+, HC 88, HC 90, HC 91 and HC 2000 were clones of the ZX Spectrum produced at ICE Felix from 1985 to 1994. The HC 85 was first designed at Institutul Politehnic București by Prof. Dr. Ing. Adrian Petrescu (as a laboratory prototype), then redesigned at ICE Felix so that it could be produced at industrial scale. Like the ZX Spectrum, their system software was a built-in BASIC interpreter.
aMIC
The aMIC was a Romanian microcomputer designed by Prof. Adrian Petrescu at Institutul Politehnic București in 1982, later produced at Fabrica de Memorii in Timișoara.
MARICA and DACICC
MARICA and the DACICC family (DACICC-1 and DACICC-200) were Romanian computers produced in 1959–1968 at T. Popoviciu Institute of Numerical Analysis, Cluj-Napoca.
Felix series
The Felix PC was a Romanian IBM PC compatible produced at ICE Felix in 1985–1990.
The Felix C series was a family of Romanian computers produced by ICE Felix from 1970 to 1978. They were similar to the IBM System/360; their operating system was SIRIS.
A further Felix line was a family of Romanian minicomputers and microcomputers produced in 1975–1984.
CoBra
The CoBra was a Romanian personal computer produced at I.T.C.I. Brașov in 1986.
Independent
The Independent was a series of Romanian minicomputers manufactured from 1983 to 1989. They were compatible with the DEC PDP-11/34 and ran the RSX-11M operating system. They were produced at ITC Timișoara, with memory chips also made in Timișoara.
See also
Electronics industry in the Socialist Republic of Romania
History of computer hardware in Yugoslavia
Computer systems in the Soviet Union
History of computing in Poland
History of computer hardware in Bulgaria
External links
Soviet Block computers with references to Romania
List of computer manufacturers in Romania
Science and technology in Romania
Romania | History of computing in Romania | Technology | 401 |
2,797,065 | https://en.wikipedia.org/wiki/Gamma%20Apodis | Gamma Apodis (γ Aps, γ Apodis) is the Bayer designation for a star in the southern circumpolar constellation of Apus. From parallax measurements, the distance to this star can be estimated as . It is visible to the naked eye with an apparent visual magnitude of 3.86. A stellar classification of G9 III identifies it as a giant star in the later stages of its evolution. It is an active X-ray source with a luminosity of , making it one of the 100 strongest stellar X-ray sources within 50 parsecs of the Sun.
Naming
In Chinese astronomy, which adapted the European southern-hemisphere constellations into the Chinese system, an asterism whose name means Exotic Bird consists of γ Apodis, ζ Apodis, ι Apodis, β Apodis, δ Octantis, δ1 Apodis, η Apodis, α Apodis and ε Apodis. Consequently, γ Apodis itself is known by a Chinese name identifying it as a member star of the Exotic Bird.
References
147675
Apodis, Gamma
Apus
G-type giants
Astronomical X-ray sources
081065
6102
Durchmusterung objects
Gliese and GJ objects | Gamma Apodis | Astronomy | 256 |
38,761,099 | https://en.wikipedia.org/wiki/Secondary%20%28chemistry%29 | Secondary is a term used in organic chemistry to classify various types of compounds (e. g. alcohols, alkyl halides, amines) or reactive intermediates (e. g. alkyl radicals, carbocations). An atom is considered secondary if it has two 'R' Groups attached to it. An 'R' group is a carbon containing group such as a methyl (CH3 ). A secondary compound is most often classified on an alpha carbon (middle carbon) or a nitrogen. The word secondary comes from the root word 'second' which means two.
This nomenclature can be used in many cases and can further be used to explain relative reactivity. The reactivity of molecules varies with respect to the attached atoms. Thus, primary, secondary, tertiary and quaternary molecules of the same functional group will have different reactivities.
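As a toy illustration of the counting rule (a hypothetical helper written for this article, not a standard cheminformatics routine), the classification follows directly from the number of carbon-containing 'R' groups attached to the atom of interest:

```python
def classify(num_r_groups: int) -> str:
    """Classify an atom by how many carbon-containing 'R' groups it bears."""
    names = {0: "unsubstituted", 1: "primary",
             2: "secondary", 3: "tertiary", 4: "quaternary"}
    return names.get(num_r_groups, "invalid")

# Isopropanol: the carbinol carbon bears two R groups -> a secondary alcohol.
print(classify(2))   # secondary
```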
Secondary alcohols
Secondary alcohols have the formula RCH(OH)R' where R and R' are organyl.
Secondary amines
A secondary amine has the formula RR'NH where R and R' are organyl.
Secondary amides
Secondary amides have the formula RC(O)NHR', where R can be H or organyl and R' is organyl; a secondary amide retains a single proton bonded to the nitrogen.
Secondary phosphines
Secondary phosphines have two 'R' groups attached to a phosphorus atom and again, a P-H bond.
Further uses
"Secondary" is a general term used in chemistry that can be applied to many molecules, even more than the ones listed here; the principles seen in these examples can be further applied to other functional group containing molecules. The ones shown above are common molecules seen in many organic reactions. By classifying a molecule as secondary it then be compared with a molecule of primary or tertiary nature to determine the relative reactivity.
See also
Primary (chemistry)
Tertiary (chemistry)
Quaternary (chemistry)
References
Chemical nomenclature | Secondary (chemistry) | Chemistry | 403 |
2,936,453 | https://en.wikipedia.org/wiki/Polyisocyanurate | Polyisocyanurate (), also referred to as PIR, polyol, or ISO, is a thermoset plastic typically produced as a foam and used as rigid thermal insulation. The starting materials are similar to those used in polyurethane (PUR) except that the proportion of methylene diphenyl diisocyanate (MDI) is higher and a polyester-derived polyol is used in the reaction instead of a polyether polyol. The resulting chemical structure is significantly different, with the isocyanate groups on the MDI trimerising to form isocyanurate groups which the polyols link together, giving a complex polymeric structure.
Manufacturing
The reaction of MDI and polyol takes place at higher temperatures compared with the reaction temperature for the manufacture of PUR. At these elevated temperatures and in the presence of specific catalysts, MDI will first react with itself, producing a stiff ring molecule, which is a reactive intermediate (a tri-isocyanate isocyanurate compound). Remaining MDI and the tri-isocyanate react with polyol to form a complex poly(urethane-isocyanurate) polymer (hence the use of the abbreviation PUI as an alternative to PIR), which is foamed in the presence of a suitable blowing agent. This isocyanurate polymer has a relatively strong molecular structure, because of the combination of strong chemical bonds, the ring structure of isocyanurate and high cross-link density, each contributing to greater stiffness than is found in comparable polyurethanes. The greater bond strength also means these bonds are more difficult to break, and as a result a PIR foam is chemically and thermally more stable: breakdown of isocyanurate bonds is reported to start above 200 °C, compared with urethane at 100 to 110 °C.
PIR typically has an MDI/polyol ratio, also called its index (based on the isocyanate/polyol stoichiometry required to produce urethane alone), higher than 180. By comparison, PUR indices are normally around 100. As the index increases, material stiffness and brittleness also increase, although the correlation is not linear. Depending on the product application, greater stiffness, chemical stability and/or thermal stability may be desirable. As such, PIR manufacturers can offer multiple products with identical densities but different indices in an attempt to achieve optimal end-use performance.
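As a simplified sketch of what the index measures (illustrative only; real formulations work from isocyanate and hydroxyl equivalent weights rather than bare equivalent counts):

```python
def isocyanate_index(nco_equivalents: float, oh_equivalents: float) -> float:
    """Index = 100 x actual NCO equivalents / NCO needed for urethane alone."""
    return 100.0 * nco_equivalents / oh_equivalents

print(isocyanate_index(1.0, 1.0))   # 100 -> typical polyurethane (PUR)
print(isocyanate_index(2.0, 1.0))   # 200 -> PIR range (index above 180)
```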
Uses
PIR is typically produced as a foam and used as rigid thermal insulation. Its thermal conductivity has a typical value of 0.023 W/(m·K) (0.16 BTU·in/(hr·ft2·°F)) depending on the perimeter:area ratio. PIR foam panels laminated with pure embossed aluminium foil are used for fabrication of pre-insulated duct that is used for heating, ventilation and air conditioning systems. Prefabricated PIR sandwich panels are manufactured with corrosion-protected, corrugated steel facings bonded to a core of PIR foam and used extensively as roofing insulation and vertical walls (e.g. for warehousing, factories, office buildings etc.). Other typical uses for PIR foams include industrial and commercial pipe insulation, and carving/machining media (competing with expanded polystyrene and rigid polyurethane foams).
Effectiveness of the insulation of a building envelope can be compromised by gaps resulting from shrinkage of individual panels. Manufacturing criteria require that shrinkage be limited to less than 1% (previously 2%). Even when shrinkage is limited to substantially less than this limit, the resulting gaps around the perimeter of each panel can reduce insulation effectiveness, especially if the panels are assumed to provide a vapor/infiltration barrier. Multiple layers with staggered joints, ship lapped or tongue & groove joints greatly reduce these problems.
Polyisocyanurates of isophorone diisocyanate are also used in the preparation of polyurethane coatings based on acrylic polyols and polyether polyols.
Health hazards
PIR insulation can be a mechanical irritant to skin, eyes, and upper respiratory system during fabrication (such as dust). No statistically significant increased risks of respiratory diseases have been found in studies.
Fire risk
PIR is at times stated to be fire retardant, or to contain fire retardants, but such statements describe the results of "small scale tests" and "do not reflect [all] hazards under real fire conditions"; the hazards from fire include not just resistance to ignition but also the scope for toxic byproducts under different fire scenarios.
A 2011 study of fire toxicity of insulating materials at the University of Central Lancashire's Centre for Fire and Hazard Science studied PIR and other commonly used materials under more realistic and wide-ranging conditions representative of a wider range of fire hazard, observing that most fire deaths resulted from toxic product inhalation. The study evaluated the degree to which toxic products were released, looking at toxicity, time-release profiles, and lethality of doses released, in a range of flaming, non-flaming, and poorly ventilated fires, and concluded that PIR generally released a considerably higher level of toxic products than the other insulating materials studied (PIR > PUR > EPS > PHF; glass and stone wools also studied). In particular, hydrogen cyanide is recognised as a significant contributor to the fire toxicity of PIR (and PUR) foams.
PIR insulation board (cited as the FR4000 and the FR5000 products of Celotex, a Saint-Gobain company) was proposed to be used externally in the refurbishment of Grenfell Tower, London, with vertical and horizontal runs of 100 mm and 150 mm thickness respectively; subsequently "Ipswich firm Celotex confirmed it provided insulation materials for the refurbishment." On 14 June 2017 the block of flats, within 15 minutes, was enveloped in flames from the fourth floor to the top 24th floor. The public inquiry into the fire determined that the Celotex cladding material was one of the primary causes of the rapid spread of the fire, as they were much more flammable than permitted by building regulations. Celotex deceived regulators about the fire performance of the cladding by secretly adding fire retardant materials to the cladding panels that were used during safety testing.
References
External links
Polyisocyanurate Insulation Manufacturers Association
Polyisocyanurate Insulation energy savings, by Center for the Polyurethanes Industry
Continuous Insulation Resources for several types of rigid foam continuous insulation
Plastics
Polyurethanes
Building insulation materials
Thermosetting plastics | Polyisocyanurate | Physics | 1,373 |
3,117,324 | https://en.wikipedia.org/wiki/Beta%20Gruis | Beta Gruis (β Gruis, abbreviated Beta Gru, β Gru), formally named Tiaki , is the second brightest star in the southern constellation of Grus. It was once considered the rear star in the tail of the constellation of the (Southern) Fish, Piscis Austrinus: it, with Alpha, Delta, Theta, Iota, and Lambda Gruis, belonged to Piscis Austrinus in medieval Arabic astronomy.
Nomenclature
β Gruis (Latinised to Beta Gruis) is the star's Bayer designation.
It bore the traditional Tuamotuan name of Tiaki. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Tiaki for this star on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, the name meaning Crane refers to an asterism consisting of Beta Gruis, Alpha Gruis, Epsilon Gruis, Eta Gruis, Delta Tucanae, Zeta Gruis, Iota Gruis, Theta Gruis, Delta² Gruis and Mu¹ Gruis; Beta Gruis itself is accordingly named for its position within the Crane. The Chinese name gave rise to another English name, Ke.
Properties
Beta Gruis is a red giant star on the asymptotic giant branch with an estimated mass of about 2.4 times that of the Sun and a surface temperature of approximately 3,500 K, just over half the surface temperature of the Sun. This low temperature accounts for the dull red color of an M-type star. The total luminosity is about 3,200 times that of the Sun, and it has 150 times the Sun's radius.
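As a consistency check (an illustrative calculation, not part of the article text; the Sun's effective temperature is taken to be about 5,770 K), the quoted radius and temperature reproduce the quoted luminosity through the Stefan–Boltzmann relation:

$$\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2} \left(\frac{T_{\mathrm{eff}}}{T_\odot}\right)^{4} \approx 150^{2} \times \left(\frac{3500}{5770}\right)^{4} \approx 3{,}000,$$

in reasonable agreement with the stated 3,200 solar luminosities.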
It is one of the brightest stars at infrared and near-infrared wavelengths. In the K band, it is the fifth-brightest star in the night sky.
In 1952, Alan William James Cousins announced that Beta Gruis is a variable star.
Beta Gruis is a semiregular variable (SRb) star that varies in magnitude by about 0.4. It varies between intervals when it displays regular changes with a 37-day periodicity and times when it undergoes slow irregular variability.
References
External links
MSN Encarta (Archived 2009-10-31)
M-type giants
Semiregular variable stars
Asymptotic-giant-branch stars
Grus (constellation)
Gruis, Beta
Durchmusterung objects
214952
112122
8636
Tiaki | Beta Gruis | Astronomy | 531 |
3,934,357 | https://en.wikipedia.org/wiki/Lip%C3%B3t%20Fej%C3%A9r | Lipót Fejér (or Leopold Fejér; 9 February 1880 – 15 October 1959) was a Hungarian mathematician of Jewish heritage. Fejér was born Leopold Weisz and changed to the Hungarian name Fejér around 1900.
Biography
He was born in Pécs, Austria-Hungary, into the Jewish family of Victoria Goldberger and Samuel Weiss. His maternal great-grandfather Samuel Nachod was a doctor, and his grandfather was a renowned scholar, the author of a Hebrew-Hungarian dictionary. Leopold's father, Samuel Weiss, was a shopkeeper in Pécs. Leopold did poorly in primary school, so for a while his father withdrew him and taught him at home. The future scientist developed his interest in mathematics in high school thanks to his teacher Sigismund Maksay.
Fejér studied mathematics and physics at the University of Budapest and at the University of Berlin, where he was taught by Hermann Schwarz. In 1902 he earned his doctorate from University of Budapest (today Eötvös Loránd University). From 1902 to 1905 Fejér taught there and from 1905 until 1911 he taught at Franz Joseph University in Kolozsvár in Austria-Hungary (now Cluj-Napoca in Romania). In 1911 Fejér was appointed to the chair of mathematics at the University of Budapest and he held that post until his death. He was elected corresponding member (1908), member (1930) of the Hungarian Academy of Sciences.
During his period in the chair at Budapest, Fejér led a highly successful Hungarian school of analysis. He was the thesis advisor of mathematicians such as John von Neumann, Paul Erdős, George Pólya and Pál Turán. Thanks to Fejér, Hungary developed a strong mathematical school: he educated a new generation of students who went on to become eminent scientists. As Pólya recalled, a large number of them became interested in mathematics thanks to Fejér, his fascinating personality and charisma. Fejér gave short (no more than an hour) but very entertaining lectures and often sat with students in cafés, discussing mathematical problems and telling stories about his life and his interactions with the world's leading mathematicians.
Fejér's research concentrated on harmonic analysis and, in particular, Fourier series.
Fejér collaborated to produce important papers, one with Carathéodory on entire functions in 1907 and another major work with Frigyes Riesz in 1922 on conformal mappings (specifically, a short proof of the Riemann mapping theorem).
In 1944, Fejér was forced to resign because of his Jewish background. One night at the end of December 1944, members of the Arrow Cross Party stormed into his house. Fejér and all the residents of his house were marched to the banks of the Danube and were about to be shot, but were saved at the last moment by a phone call "from a brave officer". Fejér was later found in a hospital in the city, where he had been admitted "under unexplained circumstances". This severe trauma left a permanent mark on his mental faculties, something he himself noticed, later often saying of himself "since I became an idiot". Still, according to his colleagues, he kept on an even keel until the mid-1950s, when he became senile.
Lipót Fejér died in Budapest on 15 October 1959. His grave is in the distinguished Kerepesi Cemetery.
Pólya on Fejér
Pólya writes the following about Fejér, telling us much about his personality:
He had artistic tastes. He deeply loved music and was a good pianist. He liked a well-turned phrase. 'As to earning a living', he said, 'a professor's salary is a necessary, but not sufficient, condition.' Once he was very angry with a colleague who happened to be a topologist, and explaining the case at length he wound up by declaring '... and what he is saying is a topological mapping of the truth'.
He had a quick eye for foibles and miseries; in seemingly dull situations he noticed points that were unexpectedly funny or unexpectedly pathetic. He carefully cultivated his talent of raconteur; when he told, with his characteristic gestures, of the little shortcomings of a certain great mathematician, he was irresistible. The hours spent in continental coffee houses with Fejér discussing mathematics and telling stories are a cherished recollection for many of us. Fejér presented his mathematical remarks with the same verve as his stories, and this may have helped him in winning the lasting interest of so many younger men in his problems.
In the same article Pólya writes about Fejér's style of mathematics:
Fejér talked about a paper he was about to write up. 'When I write a paper,' he said, 'I have to rederive for myself the rules of differentiation and sometimes even the commutative law of multiplication.' These words stuck in my memory and years later I came to think that they expressed an essential aspect of Fejér's mathematical talent; his love for the intuitively clear detail.
It was not given to him to solve very difficult problems or to build vast conceptual structures. Yet he could perceive the significance, the beauty, and the promise of a rather concrete not too large problem, foresee the possibility of a solution and work at it with intensity. And, when he had found the solution, he kept on working at it with loving care, till each detail became fully transparent.
It is due to such care spent on the elaboration of the solution that Fejér's papers are very clearly written, and easy to read and most of his proofs appear very clear and simple. Yet only the very naive may think that it is easy to write a paper that is easy to read, or that it is a simple thing to point out a significant problem that is capable of a simple solution.
Gallery
See also
Fejér window
Real algebraic geometry
References
Sources
External links
Birthplace of Lipót Fejér.
Further reading
1880 births
1959 deaths
People from Pécs
Hungarian Jews
Approximation theorists
20th-century Hungarian mathematicians
Members of the Hungarian Academy of Sciences
Mathematical analysts
Academic staff of Franz Joseph University
Burials at Kerepesi Cemetery
Mathematicians from Austria-Hungary | Lipót Fejér | Mathematics | 1,260 |
41,754,342 | https://en.wikipedia.org/wiki/Feng%20Zhang | Feng Zhang (born October 22, 1981) is a Chinese–American biochemist. Zhang currently holds the James and Patricia Poitras Professorship in Neuroscience at the McGovern Institute for Brain Research and in the departments of Brain and Cognitive Sciences and Biological Engineering at the Massachusetts Institute of Technology. He also has appointments with the Broad Institute of MIT and Harvard (where he is a core member). He is best known for his central role in the development of optogenetics and CRISPR technologies.
Early life and education
Zhang was born in China in 1981 and given the name 锋 (which means "point of a spear; edge of a tool; vanguard"). Both of his parents were computer programmers in China. At age 11, he moved to Iowa with his mother (his father was not able to join them for several years). He attended Theodore Roosevelt High School and Central Academy in Des Moines, graduating in 2000. In 1999, he attended the Research Science Institute at MIT, and in 2000 he won 3rd place in the Intel Science Talent Search. He earned his B.A. in chemistry and physics in 2004 from Harvard University, where he worked with Xiaowei Zhuang. He then received his PhD in chemical and biological engineering from Stanford University in 2009 under the guidance of Karl Deisseroth where he developed the technologies behind optogenetics with Edward Boyden. He served as an independent Junior Fellow in the Harvard Society of Fellows.
Research
Zhang's lab is focused on using synthetic biology to develop technologies for genome and epigenome engineering to study neurobiology. He is a leader in the field of optogenetics, which was named the 2010 "Method of the Year". As a postdoc, he began work on using TAL effectors to control gene transcription.
Based on previous work by the Sylvain Moineau lab, Zhang began work to harness and optimize the CRISPR system for use in human cells in early 2011. While Zhang's group was optimizing the Cas9 system in human cells, the collaborating groups of Emmanuelle Charpentier and Jennifer Doudna described a chimeric RNA design capable of facilitating cleavage of DNA using purified Cas9 protein and a synthetic guide. Zhang's group compared their RNA expression approach with a design based on the Doudna/Charpentier chimeric RNA for use in human cells, and established features of the guide that are necessary for Cas9 to function effectively in mammalian cells but dispensable in biochemical assays. Zhang, Doudna, and other colleagues from Harvard founded Editas Medicine in September 2013 to develop and commercialize CRISPR-based therapies.
Zhang discovered Cas13 together with Eugene Koonin, using computational biology methods. In 2016, Zhang cofounded Arbor Biotechnologies to develop Cas13 for therapeutic use.
His lab has also developed a sensitive CRISPR-based nucleic acid detection protocol termed SHERLOCK (Specific High sensitivity Enzymatic Reporter UnLOCKing), which is able to detect and distinguish strains of viruses and bacteria present at concentrations as low as attomolar (10⁻¹⁸ M). Zhang cofounded Sherlock Biosciences in 2018 to further develop this diagnostic technology.
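For a sense of scale (an illustrative back-of-the-envelope calculation, not from the article), an attomolar concentration corresponds to

$$10^{-18}\ \mathrm{mol\,L^{-1}} \times 6.022 \times 10^{23}\ \mathrm{mol^{-1}} \approx 6 \times 10^{5}\ \text{molecules per litre},$$

that is, on the order of a single target molecule per microlitre of sample.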
Also in 2018, Zhang cofounded Beam Therapeutics with Editas cofounder and Harvard colleague David R. Liu to further advance Liu's work on base editing and prime editing.
He has an h-index of 109 according to Google Scholar.
Honors
Zhang is a recipient of the NIH Director's Pioneer Award and was named a 2012 Searle Scholar. He was named one of MIT Technology Review's TR35 in 2013. His work on optogenetics and CRISPR has been recognized by a number of awards, including: the 2011 Perl-UNC Prize (shared with Boyden and Deisseroth); the 2014 Alan T. Waterman Award, the National Science Foundation's highest honor, which annually recognizes an outstanding researcher under the age of 35; the 2014 Gabbay Award (shared with Jennifer Doudna and Emmanuelle Charpentier); and the 2014 Young Investigator Award from the Society for Neuroscience (shared with Diana Bautista), as well as the ISTT Young Investigator Award from the International Society for Transgenic Technologies. Zhang also received a New York Stem Cell Foundation (NYSCF) – Robertson Stem Cell Investigator Award in 2014, and was named the 2016 NYSCF – Robertson Stem Cell Prize Recipient. In 2015, Zhang became the inaugural recipient of the Tsuneko & Reiji Okazaki Award (Nagoya University), and in 2016 he shared honors with Doudna and Charpentier for the second and third times on receiving the Gairdner Foundation International Award and the Tang Prize. In 2017 he received the Albany Medical Center Prize (jointly with Emmanuelle Charpentier, Jennifer Doudna, Luciano Marraffini, and Francisco Mojica) and the Lemelson-MIT Prize. In 2019 he received the Harvey Prize of the Technion/Israel for the year 2018 (jointly with Emmanuelle Charpentier and Jennifer Doudna). In 2019, Zhang received the Golden Plate Award of the American Academy of Achievement. In 2021 he received the Richard Lounsbery Award.
In 2018, Zhang was elected as a Fellow of the American Academy of Arts and Sciences, and a member of the National Academy of Sciences, National Academy of Medicine.
Zhang's research on CRISPR-Cas9 gene editing, while significant, was not recognized by the Nobel Prize committee in 2020, which instead awarded the prize to Emmanuelle Charpentier and Jennifer Doudna for their groundbreaking work on the subject. In 2025, Zhang and Doudna were both recipients of the National Medal of Technology and Innovation.
References
External links
1981 births
Living people
American neuroscientists
Harvard College alumni
Stanford University School of Medicine alumni
Massachusetts Institute of Technology School of Science faculty
Synthetic biologists
Chinese emigrants to the United States
Chinese neuroscientists
American biochemists
Chinese biochemists
People from Shijiazhuang
Chemists from Hebei
Members of the United States National Academy of Sciences
Fellows of the American Academy of Arts and Sciences
Educators from Hebei
Biologists from Hebei
Theodore Roosevelt High School (Iowa) alumni
Members of the National Academy of Medicine | Feng Zhang | Biology | 1,286 |
2,464,777 | https://en.wikipedia.org/wiki/Spodium | Spodium (Latin for ashes or soot) refers to burned bone (usually used for medical purposes), or the act of divination with ash.
Spodium may also refer to other types of ash, such as the scrapings from the inside of a furnace.
Spodium has a long history of medical usage: it is mentioned by Hippocrates and, for example, in the Medical Poem of Salerno, "...Who knows the cause why Spodium stancheth bleeding?..." (in this case spodium referring to oxen bone ashes).
Incineration
History of ancient medicine | Spodium | Chemistry,Engineering | 132 |
51,019,702 | https://en.wikipedia.org/wiki/Lutgarde%20Raskin | Lutgarde Raskin is a Belgian-American scientist and Professor of Environmental Engineering. She is best known for her studies of microbial ecology in engineered water systems for sewage treatment and drinking water production.
Raskin earned her B.S. and M.S. in engineering and her B.S. in economics from Katholieke Universiteit Leuven. In 1993, she completed her Ph.D. at the University of Illinois. From 1993 to 2005, she was a member of the faculty of the University of Illinois. Since 2005, she has been a member of the faculty of the University of Michigan, where she currently holds the Altarum Institute/ERIM Russell O’Neal Endowed Professorship.
Raskin is a Fellow of the American Academy of Microbiology (AAM) and the Water Environment Federation (WEF). In 2006, she won the Walter L. Huber Civil Engineering Research Prize from the American Society of Civil Engineers; in 2007, she won the Frontier Award in Research from the Association of Environmental Engineering and Science Professors; and in 2016, she won the ISME/IWA Biocluster Award. In 2021, Raskin was elected a member of the National Academy of Engineering for the "application of genetic tools to improve anaerobic biological water treatment."
Raskin has advocated for changes in the operation of drinking water treatment facilities to encourage the growth of potentially beneficial microorganisms. In the wake of the Flint water crisis, she responded to allegations by Marc Edwards that researchers at the University of Michigan declined to collaborate on joint research of the problem.
References
External links
Google Scholar
Environmental engineers
Living people
University of Michigan faculty
University of Illinois faculty
KU Leuven alumni
University of Illinois alumni
Year of birth missing (living people) | Lutgarde Raskin | Chemistry,Engineering | 355 |
10,354,629 | https://en.wikipedia.org/wiki/ProSavin | ProSavin is an experimental drug believed to be of use in the treatment of Parkinson's disease. It is administered to the striatum in the brain, inducing production of dopamine.
It is manufactured by Oxford BioMedica. Results from a Phase I/II clinical trial were published in the Lancet and showed safety, but little efficacy. ProSavin was superseded by AXO-Lenti-PD (OXB-102), an optimized version of the drug.
Mechanism of action
ProSavin uses Oxford BioMedica's Lentivector delivery system to transfer three genes (aromatic amino acid dopa decarboxylase, tyrosine hydroxylase and GTP-cyclohydrolase 1) to the striatum in the brain, reprogramming transduced cells to secrete dopamine.
See also
TroVax
References
Drugs acting on the nervous system
Virotherapy | ProSavin | Chemistry | 195 |
657,442 | https://en.wikipedia.org/wiki/Card%20manipulation | Card manipulation, commonly known as card magic, is the branch of magic that deals with creating effects using sleight of hand techniques involving playing cards. Card manipulation is often used in magical performances, especially in close-up, parlor, and street magic. Some of the most recognized names in this field include Dai Vernon, Tony Slydini, Ed Marlo, S.W. Erdnase, Richard Turner, John Scarne, Ricky Jay and René Lavand. Before becoming world-famous for his escapes, Houdini billed himself as "The King of Cards". Among the more well-known card tricks relying on card manipulation are Ambitious Card, and Three-card Monte, a common street hustle also known as Find the Lady.
History
Playing cards became popular with magicians in the 15th century as they were props which were inexpensive, versatile, and easily accessible. Card magic has bloomed into one of the most popular branches of magic, accumulating thousands of techniques and ideas. These range from complex mathematics like those used by Persi Diaconis, the use of psychological techniques like those taught by Banachek, to extremely difficult sleight of hand like that of Ed Marlo and Dai Vernon.
Card magic, in one form or another, likely dates from the time playing cards became commonly known, towards the second half of the fourteenth century, but its history in this period is largely undocumented. Compared to sleight of hand magic in general and to cups and balls, it is a new form of magic. However, due to its versatility as a prop it has become popular amongst modern magicians.
Martin Gardner called S.W. Erdnase's 1902 treatise on card manipulation Artifice, Ruse and Subterfuge at the Card Table: A Treatise on the Science and Art of Manipulating Cards "the most famous, the most carefully studied book ever published on the art of manipulating cards at gaming tables".
Technique
Illusions performed with playing cards are constructed using basic card manipulation techniques (or sleights). It is the intention of the performer that such sleights are performed in a manner which is undetectable to the audience—however, that result takes practice and a thorough understanding of method. Manipulation techniques include:
Lifts
Lifts are techniques which extract one or more cards from a deck. The produced card(s) are normally known to the audience, for example having previously been selected or identified as part of the illusion. In sleight of hand, a "double lift" can be made to extract two cards from the deck, but held together to appear as one card.
False deals
Dealing cards (for example at the start of a traditional card game) is considered a fair means of distributing cards. False deals are techniques which appear to deliver cards fairly, when actually the cards delivered are predetermined or known to the performer. False dealing techniques include: second dealing, bottom dealing, middle dealing, false counts (more or less cards are dealt than expected), and double dealing (the top and bottom cards of a small packet are dealt together).
Side steal
A technique invented by magician F. W. Conradi, most often used to control a predetermined card to the top of the deck.
Passes
The effect of the card pass is that an identified card is inserted somewhere into a deck. However, following rapid and concealed manipulation by the performer, it is secretly moved or displaced - usually to the top (or bottom) of the deck. A pass is achieved by swapping the portion of the deck from the identified card downwards, with the portion of the deck above the identified card (cutting the deck secretly to control a certain card). Pass techniques include: the classic pass, the invisible turn-over pass, the Zingone Perfect Table pass, the flesh grip pass, the jog pass, the Braue pass, the Charlier pass, the finger palm pass and the Hermann pass. Simply, a card pass is a secret cut of the deck (not to be confused with a coin pass which is a false transfer of a coin from one hand to the other).
Palming
Palming is a technique for holding or concealing one or more cards in the palm of the hand. Cards palmed from a deck are typically held in reserve (unseen by the audience) until production is required for the illusion being performed. Palming techniques include: the Braue diagonal tip-up, the swing, the thumb-count, face card palm, the crosswise, new vertical, the gamblers' squaring, the gamblers' flat, the Hugard top palm, the flip-over, the Hofzinser bottom, the Braue bottom, the Tenkai palm and the Zingone bottom.
False shuffles
Shuffling cards is considered a fair means to randomize the cards contained in a deck. False shuffles are techniques which appear to fairly shuffle a deck, when actually the cards in the deck are maintained in an order appropriate to the illusion being performed. False shuffles can be performed that permit one or more cards to be positioned in a deck, or even for the entire deck to remain in an unshuffled state (for example the state the deck was in before the shuffle). False shuffle techniques include: the perfect riffle, the strip-out, the Hindu shuffle, the gamblers', and various stock shuffling techniques (where the locations of one or more cards are controlled during the false shuffle).
False cuts
Cutting a deck of cards is a technique whereby the deck is split into two portions (the split point being randomly determined – often by a member of the audience), which are then swapped – the effect being to make sure that no one is sure of which card is on the top of the deck. False cuts are techniques whereby the performer appears to organise a fair cut, when actually a predetermined card (or cards) is organised to be located on the top of the deck. False cutting techniques include: the false running cut, and the gambler's false cut.
Color change
A color change is the effect of changing one card to another in front of the spectator's eyes. Usually the cards changed are of different colors, or a face card into a number card, in order to make the change more apparent. There are many different techniques to accomplish this effect, but among the most common are the classic color change and the snap change, as they are easier to master than others. Professional magicians usually perform other color changes such as the Cardini or Erdnase change.
Crimps
Crimps are techniques whereby part of a card is intentionally physically marked, creased, or bent to facilitate identification during an illusion. Crimp techniques include: the regular crimp, the gamblers' crimp, the breather crimp and the peek crimp.
Jogs
A jog is one or more cards which protrude slightly from somewhere within a deck or stack of cards. The protrusion, although not noticeable to the audience, permits the performer to retain knowledge of the card's location during other manipulations. While jogs are not always hidden from the audience, they most often are. Some varieties include "in jogs", "side jogs", and "out jogs".
Reverses
Card reverses are techniques whereby one or more cards in a deck are made to change their orientation, for example from face up to face down.
Forces
A card force is a sleight in which the performer compels a spectator to choose a card that has been predetermined, while maintaining the appearance of free choice. Forces include the classic force, the riffle force, and the slip force.
Misdirection
Misdirection, though not specific to card magic, is very prominent in most card performances. In many cases, the skill of a card illusionist is judged by how well they can shift the audience's attention from one part of the performance to the next, which becomes more difficult when dealing with hecklers. Magicians can use flourishes, verbal misdirection, and jokes to mislead the audience, making it easier to conceal important sleight of hand.
See also
List of card manipulation techniques
Card flourish
Card marking
Card sharp
Card throwing
Sleight of hand
Trick deck
References
Citations
Sources
External links
The Royal Road to Card Magic, 1999
Magic Tricks with Cards Photo Feature, Havana Times, June 22, 2010
Card tricks
Object manipulation
Physical activity and dexterity toys | Card manipulation | Biology | 1,754 |
5,318,198 | https://en.wikipedia.org/wiki/Transmission%20coefficient | The transmission coefficient is used in physics and electrical engineering when wave propagation in a medium containing discontinuities is considered. A transmission coefficient describes the amplitude, intensity, or total power of a transmitted wave relative to an incident wave.
Overview
Different fields of application have different definitions for the term. All the meanings are very similar in concept: in chemistry, the transmission coefficient refers to a chemical reaction overcoming a potential barrier; in optics and telecommunications it is the ratio of the amplitude of a wave transmitted through a medium or conductor to that of the incident wave; in quantum mechanics it is used to describe the behavior of waves incident on a barrier, in a way similar to optics and telecommunications.
Although conceptually the same, the details in each field differ, and in some cases the terms are not an exact analogy.
Chemistry
In chemistry, in particular in transition state theory, a "transmission coefficient" appears as a factor for overcoming a potential barrier. It is often taken to be unity for monomolecular reactions, and it appears in the Eyring equation.
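For reference (the formula below is the standard transition state theory expression, reconstructed here rather than quoted from this text), the Eyring equation with transmission coefficient $\kappa$ is

$$k = \kappa\,\frac{k_{\mathrm{B}}T}{h}\,e^{-\Delta G^{\ddagger}/RT},$$

where $k$ is the rate constant, $k_{\mathrm{B}}$ the Boltzmann constant, $h$ the Planck constant, $T$ the absolute temperature, and $\Delta G^{\ddagger}$ the Gibbs energy of activation; setting $\kappa = 1$ corresponds to the usual assumption for monomolecular reactions.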
Optics
In optics, transmission is the property of a substance to permit the passage of light, with some or none of the incident light being absorbed in the process. If some light is absorbed by the substance, the transmitted light will consist of only those wavelengths that were not absorbed. For example, a blue light filter appears blue because it absorbs red and green wavelengths. If white light is shone through the filter, the transmitted light also appears blue because of the absorption of the red and green wavelengths.
The transmission coefficient is a measure of how much of an electromagnetic wave (light) passes through a surface or an optical element. Transmission coefficients can be calculated for either the amplitude or the intensity of the wave. Either is calculated by taking the ratio of the value after the surface or element to the value before. The transmission coefficient for total power is generally the same as the coefficient for intensity.
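As a standard worked example (assuming normal incidence at a boundary between lossless media of refractive indices $n_1$ and $n_2$; this specific case is not spelled out in the text above), the Fresnel amplitude transmission coefficient and the corresponding intensity transmission are

$$t = \frac{2 n_1}{n_1 + n_2}, \qquad T = \frac{n_2}{n_1}\, t^{2}.$$

For light passing from air ($n_1 = 1$) into glass ($n_2 = 1.5$), $t = 0.8$ and $T = 0.96$, with the remaining 4% of the intensity reflected.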
Telecommunications
In telecommunication, the transmission coefficient is the ratio of the amplitude of the complex transmitted wave to that of the incident wave at a discontinuity in the transmission line.
Consider a wave travelling through a transmission line with a step in impedance from $Z_1$ to $Z_2$. When the wave transitions through the impedance step, a portion of the wave will be reflected back to the source. Because the voltage on a transmission line is always the sum of the forward and reflected waves at that point, if the incident wave amplitude is 1 and the reflected wave amplitude is $r$, then the amplitude of the forward wave must be the sum of the two waves, or $1 + r$.
The value for $r$ is uniquely determined from first principles by noting that the incident power on the discontinuity must equal the sum of the power in the reflected and transmitted waves:
$$\frac{1^2}{Z_1} = \frac{r^2}{Z_1} + \frac{(1+r)^2}{Z_2}.$$
Solving the quadratic for $r$ leads both to the reflection coefficient:
$$r = \frac{Z_2 - Z_1}{Z_2 + Z_1},$$
and to the transmission coefficient:
$$t = 1 + r = \frac{2\,Z_2}{Z_1 + Z_2}.$$
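As a quick numerical check (a minimal sketch, not from the article; the function name and the 50 Ω and 75 Ω values are illustrative choices), the formulas above can be verified in Python:

def step_coefficients(z1, z2):
    # Reflection and transmission coefficients at an impedance step z1 -> z2.
    r = (z2 - z1) / (z2 + z1)
    t = 1 + r  # equivalently 2 * z2 / (z1 + z2)
    return r, t

z1, z2 = 50.0, 75.0  # example: a 50-ohm line feeding a 75-ohm line
r, t = step_coefficients(z1, z2)
print(r, t)  # 0.2 1.2

# Power balance: incident power equals reflected plus transmitted power.
assert abs(1 / z1 - (r ** 2 / z1 + t ** 2 / z2)) < 1e-12

Note that the transmitted amplitude can exceed 1 (here $t = 1.2$) without violating energy conservation, because the transmitted wave sees a higher impedance.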
The probability that a portion of a communications system, such as a line, circuit, channel or trunk, will meet specified performance criteria is also sometimes called the "transmission coefficient" of that portion of the system. The value of the transmission coefficient is inversely related to the quality of the line, circuit, channel or trunk.
Quantum mechanics
In non-relativistic quantum mechanics, the transmission coefficient and related reflection coefficient are used to describe the behavior of waves incident on a barrier. The transmission coefficient represents the probability flux of the transmitted wave relative to that of the incident wave. This coefficient is often used to describe the probability of a particle tunneling through a barrier.
The transmission coefficient is defined in terms of the incident and transmitted probability current density $\vec{J}$ according to:
$$T = \frac{|\vec{J}_{\mathrm{trans}} \cdot \hat{n}|}{|\vec{J}_{\mathrm{inc}} \cdot \hat{n}|},$$
where $\vec{J}_{\mathrm{inc}}$ is the probability current in the wave incident upon the barrier with normal unit vector $\hat{n}$, and $\vec{J}_{\mathrm{trans}}$ is the probability current in the wave moving away from the barrier on the other side.
The reflection coefficient $R$ is defined analogously:
$$R = \frac{|\vec{J}_{\mathrm{refl}} \cdot \hat{n}|}{|\vec{J}_{\mathrm{inc}} \cdot \hat{n}|}.$$
The law of total probability requires that $T + R = 1$, which in one dimension reduces to the fact that the sum of the transmitted and reflected currents is equal in magnitude to the incident current.
For sample calculations, see rectangular potential barrier.
WKB approximation
Using the WKB approximation, one can obtain a tunnelling coefficient that looks like
$$T = \frac{\exp\!\left(-2\int_{x_1}^{x_2}\frac{\sqrt{2m\,(V(x)-E)}}{\hbar}\,dx\right)}{\left(1+\frac{1}{4}\exp\!\left(-2\int_{x_1}^{x_2}\frac{\sqrt{2m\,(V(x)-E)}}{\hbar}\,dx\right)\right)^{2}},$$
where $x_1$ and $x_2$ are the two classical turning points for the potential barrier. In the classical limit, where all other physical parameters are much larger than the reduced Planck constant, denoted $\hbar$, the transmission coefficient goes to zero. This classical limit would have failed in the situation of a square potential.
If the transmission coefficient is much less than 1, it can be approximated with the following formula:
$$T \approx 16\,\frac{E}{V_0}\left(1-\frac{E}{V_0}\right)\exp\!\left(-\frac{2L}{\hbar}\sqrt{2m\,(V_0-E)}\right),$$
where $L$ is the length of the barrier potential and $V_0$ its height.
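As a numerical illustration (a minimal sketch, not from the article; the function names and the 5 eV electron / 10 eV, 1 nm barrier are illustrative choices), the following Python compares the exact rectangular-barrier transmission coefficient with the opaque-barrier approximation above:

import math

HBAR = 1.054571817e-34   # reduced Planck constant (J*s)
M_E = 9.1093837015e-31   # electron mass (kg)
EV = 1.602176634e-19     # one electronvolt (J)

def barrier_T_exact(E, V0, L, m=M_E):
    # Exact transmission coefficient for a rectangular barrier, valid for E < V0.
    kappa = math.sqrt(2 * m * (V0 - E)) / HBAR  # decay constant inside the barrier
    return 1.0 / (1.0 + V0 ** 2 * math.sinh(kappa * L) ** 2 / (4 * E * (V0 - E)))

def barrier_T_opaque(E, V0, L, m=M_E):
    # The approximation quoted above, valid when T << 1.
    kappa = math.sqrt(2 * m * (V0 - E)) / HBAR
    return 16 * (E / V0) * (1 - E / V0) * math.exp(-2 * kappa * L)

# Illustrative example: a 5 eV electron hitting a 10 eV barrier, 1 nm wide.
E, V0, L = 5 * EV, 10 * EV, 1e-9
print(barrier_T_exact(E, V0, L))   # ~4.4e-10
print(barrier_T_opaque(E, V0, L))  # ~4.4e-10

For this strongly attenuating barrier the two expressions agree closely, as expected when $T \ll 1$.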
See also
Reflection coefficient
Reflections of signals on conducting lines
References
Quantum mechanics
Geometrical optics
Physical optics
Fiber-optic communications | Transmission coefficient | Physics | 937 |
30,839,620 | https://en.wikipedia.org/wiki/Manufacturers%20Aircraft%20Association | The Manufacturers Aircraft Association (MAA) was a trade association and patent pool of U.S. aircraft manufacturers formed in 1917.
The U.S. military and other elements of the U.S. federal government pressured the Wright Company, the Curtiss Aeroplane and Motor Company, and other manufacturers to form the association to break a patent logjam that was preventing U.S. manufacturers from making airplanes that the U.S. military could use in World War I. Legally, the MAA was a private corporation which had an agreement with the airplane manufacturers to cross-license their patents without substantial royalties.
The MAA was dissolved in 1977.
History and records
The U.S. entered World War I in 1917. The two major U.S. companies holding aviation patents, the Wright Company and the Curtiss Company, had effectively blocked the building of new airplanes, which were desired for the war effort. The U.S. government, acting on a recommendation from the National Advisory Committee for Aeronautics (formed under then Assistant Secretary of the Navy Franklin D. Roosevelt), pressured the industry to form a cross-licensing organization, the MAA, in 1917. The association was designed as a patent pool which drew up a cross-licensing agreement allowing manufacturers unrestrained use of airplane patents in order to produce airplanes for the government's war effort. Early members included aviation pioneers Orville Wright and Glenn Curtiss, as well as representatives of major aircraft manufacturing units in the United States.
Frank Henry Russell participated in the MAA's formation and was elected its president, a post he held until his death in 1947.
Records of the MAA are archived among the Transportation Collections of the University of Wyoming's American Heritage Center, in Laramie, Wyoming. Those records reportedly document the history of aircraft manufacturing and the relationships of the MAA with its various members, the military, and the U.S. Congress. They include records related to the aircraft manufacturing industry's principal trade associations: the Aeronautical Chamber of Commerce of America, Inc. (1920-1943), and the Aerospace Industries Association, Inc. (AIA) (1952-1975). The online inventory of those MAA records provides an introductory section with additional information on the history of the MAA.
See also
The Wright brothers patent war—which led to the creation of the MAA
Aerospace Industries Association—a parallel organization that took over some of the same roles after the MAA ended
References
External links
Manufacturers Aircraft Association records at the American Heritage Center
History in Flight at the American Heritage Center at AHC blogs
Aerospace
Trade associations based in the United States
Patent pools | Manufacturers Aircraft Association | Physics | 535 |
3,581,427 | https://en.wikipedia.org/wiki/Defense%20Data%20Network | The Defense Data Network (DDN) was a computer networking effort of the United States Department of Defense from 1983 through 1995. It was based on ARPANET technology.
History
As an experiment, from 1971 to 1977, the Worldwide Military Command and Control System (WWMCCS) purchased and operated an ARPANET-type system from BBN Technologies for the Prototype WWMCCS Intercomputer Network (PWIN). The experiments proved successful enough that it became the basis of the much larger WIN system. Six initial WIN sites in 1977 increased to 20 sites by 1981.
In 1975, the Defense Communications Agency (DCA) took over operation of the ARPANET as it became an operational tool in addition to an ongoing research project. At that time, the Automatic Digital Network (AUTODIN) carried most of the Defense Department's message traffic. Starting in 1972, attempts had been made to introduce some packet switching into its planned replacement, AUTODIN II. AUTODIN II development proved unsatisfactory, however, and in 1982 AUTODIN II was canceled, to be replaced by a combination of several packet-based networks that would connect military installations.
The DCA used "Defense Data Network" (DDN) as the program name for this new network. Under its initial architecture, as developed by the Institute for Defense Analysis, the DDN would consist of two separate instances: the unclassified MILNET, which would be split off the ARPANET; and a classified network, also based on ARPANET technology, which would provide services for WIN, DODIIS, and SACDIN. C/30 packet switches, developed by BBN Technologies as upgraded Interface Message Processors, would provide the network technology. End-to-end encryption would be provided by ARPANET encryption devices, namely the Internet Private Line Interface (IPLI) or Blacker.
After MILNET was split away, the ARPANET would continue to be used as an Internet backbone for researchers, but would be slowly phased out. Both networks carried unclassified information and were connected at a small number of points that would allow total separation in the event of an emergency.
As a large-scale, private internet, the DDN provided Internet Protocol connectivity across the United States and to US military bases abroad. The Defense Communications Engineering Center (DCEC), part of DCA, handled DDN network engineering and DDN network operations. The DCEC was located in Reston, Virginia from the mid-1980s until it was closed and merged with a DISA site in Bailey's Crossroads, Virginia in the early 2000s (long after DCA had been merged into the new Defense Information Systems Agency (DISA)).
Throughout the 1980s it expanded as a set of four parallel military networks, each at a different security level. The networks were:
Military Network (MILNET) for UNCLASSIFIED traffic
Defense Secure Network One (DSNET 1) for SECRET traffic
Defense Secure Network Two (DSNET 2) for TOP SECRET traffic
Defense Secure Network Three (DSNET 3) for TOP SECRET/Sensitive Compartmented Information (TS/SCI)
MILNET and DSNET 1 were common user networks, much like the public Internet, but DSNET 2 was dedicated to supporting the Worldwide Military Command and Control System (WWMCCS) and DSNET 3 was dedicated to supporting the DOD Intelligence Information System (DODIIS). These networks transitioned to become the NIPRNET, SIPRNET, and JWICS networks in the 1990s.
DDN-NIC
DDN-NIC or Network Information Center (NIC) was located at the DDN Installation and Integration Support (DIIS) program office in Chantilly, Virginia. It provided general reference services to DDN users via telephone, electronic mail, and U.S. mail. It was the first organization responsible for the assignment of TCP/IP addresses and Autonomous System numbers.
See also
Defense Information Systems Network (DISN)
References
External links
Cybertelecom :: Internet History 1983
Wide area networks
Internet access
Telecommunications equipment of the Cold War
United States Department of Defense information technology | Defense Data Network | Technology | 833 |
25,614 | https://en.wikipedia.org/wiki/Race%20%28human%20categorization%29 | Race is a categorization of humans based on shared physical or social qualities into groups generally viewed as distinct within a given society. The term came into common usage during the 16th century, when it was used to refer to groups of various kinds, including those characterized by close kinship relations. By the 17th century, the term began to refer to physical (phenotypical) traits, and then later to national affiliations. Modern science regards race as a social construct, an identity which is assigned based on rules made by society. While partly based on physical similarities within groups, race does not have an inherent physical or biological meaning. The concept of race is foundational to racism, the belief that humans can be divided based on the superiority of one race over another.
Social conceptions and groupings of races have varied over time, often involving folk taxonomies that define essential types of individuals based on perceived traits. Modern scientists consider such biological essentialism obsolete, and generally discourage racial explanations for collective differentiation in both physical and behavioral traits.
Even though there is a broad scientific agreement that essentialist and typological conceptions of race are untenable, scientists around the world continue to conceptualize race in widely differing ways. While some researchers continue to use the concept of race to make distinctions among fuzzy sets of traits or observable differences in behavior, others in the scientific community suggest that the idea of race is inherently naive or simplistic. Still others argue that, among humans, race has no taxonomic significance because all living humans belong to the same subspecies, Homo sapiens sapiens.
Since the second half of the 20th century, race has been associated with discredited theories of scientific racism, and has become increasingly seen as a largely pseudoscientific system of classification. Although still used in general contexts, race has often been replaced by less ambiguous and/or loaded terms: populations, people(s), ethnic groups, or communities, depending on context. Its use in genetics was formally renounced by the U.S. National Academies of Sciences, Engineering, and Medicine in 2023.
Defining race
Modern scholarship views racial categories as socially constructed, that is, race is not intrinsic to human beings but rather an identity created, often by socially dominant groups, to establish meaning in a social context. Different cultures define different racial groups, often focused on the largest groups of social relevance, and these definitions can change over time.
Historical race concepts have included a wide variety of schemes to divide local or worldwide populations into races and sub-races. Across the world, different organizations and societies choose to disambiguate race to different extents:
In South Africa, the Population Registration Act, 1950 recognized only White, Black, and Coloured, with Indians added later.
The government of Myanmar recognizes eight "major national ethnic races".
The Brazilian census classifies people into brancos (Whites), pardos (multiracial), pretos (Blacks), amarelos (Asians), and indigenous (see Race and ethnicity in Brazil), though many people use different terms to identify themselves.
Legal definitions of whiteness in the United States used before the civil rights movement were often challenged for specific groups.
Furthermore, the United States Census Bureau proposed but then withdrew plans to add a new category to classify Middle Eastern and North African peoples in the 2020 U.S. census, due to a dispute over whether this classification should be considered a white ethnicity or a separate race.
The establishment of racial boundaries often involves the subjugation of groups defined as racially inferior, as in the one-drop rule used in the 19th-century United States to exclude those with any amount of African ancestry from the dominant racial grouping, defined as "white". Such racial identities reflect the cultural attitudes of imperial powers dominant during the age of European colonial expansion. This view rejects the notion that race is biologically defined.
According to geneticist David Reich, "while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today's racial constructs are real". In response to Reich, a group of 67 scientists from a broad range of disciplines wrote that his concept of race was "flawed" as "the meaning and significance of the groups is produced through social interventions".
Although commonalities in physical traits such as facial features, skin color, and hair texture comprise part of the race concept, this linkage is a social distinction rather than an inherently biological one. Other dimensions of racial groupings include shared history, traditions, and language. For instance, African-American English is a language spoken by many African Americans, especially in areas of the United States where racial segregation exists. Furthermore, people often self-identify as members of a race for political reasons.
When people define and talk about a particular conception of race, they create a social reality through which social categorization is achieved. In this sense, races are said to be social constructs. These constructs develop within various legal, economic, and sociopolitical contexts, and may be the effect, rather than the cause, of major race-related issues. While race is understood to be a social construct by many, most scholars agree that race has real material effects in the lives of people through institutionalized practices of preference and discrimination.
Socioeconomic factors, in combination with early but enduring views of race, have led to considerable suffering within disadvantaged racial groups. Racial discrimination often coincides with racist mindsets, whereby the individuals and ideologies of one group come to perceive the members of an outgroup as both racially defined and morally inferior. As a result, racial groups possessing relatively little power often find themselves excluded or oppressed, while hegemonic individuals and institutions are charged with holding racist attitudes. Racism has led to many instances of tragedy, including slavery and genocide.
In some countries, law enforcement uses race to profile suspects. This use of racial categories is frequently criticized for perpetuating an outmoded understanding of human biological variation, and promoting stereotypes. Because in some societies racial groupings correspond closely with patterns of social stratification, for social scientists studying social inequality, race can be a significant variable. As sociological factors, racial categories may in part reflect subjective attributions, self-identities, and social institutions.
Scholars continue to debate the degrees to which racial categories are biologically warranted and socially constructed. For example, in 2008, John Hartigan Jr. argued for a view of race that focused primarily on culture, but which does not ignore the potential relevance of biology or genetics. Accordingly, the racial paradigms employed in different disciplines vary in their emphasis on biological reduction as contrasted with societal construction.
In the social sciences, theoretical frameworks such as racial formation theory and critical race theory investigate implications of race as social construction by exploring how the images, ideas and assumptions of race are expressed in everyday life. A large body of scholarship has traced the relationships between the historical, social production of race in legal and criminal language, and their effects on the policing and disproportionate incarceration of certain groups.
Historical origins of racial classification
Groups of humans have always identified themselves as distinct from neighboring groups, but such differences have not always been understood to be natural, immutable and global. Naturalness, immutability, and global scope are the distinguishing features of how the concept of race is used today. In this way the idea of race as we understand it today came about during the historical process of exploration and conquest, which brought Europeans into contact with groups from different continents, and of the ideology of classification and typology found in the natural sciences. The term race was often used in a general biological taxonomic sense, starting from the 19th century, to denote genetically differentiated human populations defined by phenotype.
The modern concept of race emerged as a product of the colonial enterprises of European powers from the 16th to 18th centuries which identified race in terms of skin color and physical differences. Author Rebecca F. Kennedy argues that the Greeks and Romans would have found such concepts confusing in relation to their own systems of classification. According to Bancel et al., the epistemological moment where the modern concept of race was invented and rationalized lies somewhere between 1730 and 1790.
Colonialism
According to Smedley and Marks the European concept of "race", along with many of the ideas now associated with the term, arose at the time of the scientific revolution, which introduced and privileged the study of natural kinds, and the age of European imperialism and colonization which established political relations between Europeans and peoples with distinct cultural and political traditions. As Europeans encountered people from different parts of the world, they speculated about the physical, social, and cultural differences among various human groups. The rise of the Atlantic slave trade, which gradually displaced an earlier trade in slaves from throughout the world, created a further incentive to categorize human groups in order to justify the subordination of African slaves.
Drawing on sources from classical antiquity and upon their own internal interactions – for example, the hostility between the English and Irish powerfully influenced early European thinking about the differences between people – Europeans began to sort themselves and others into groups based on physical appearance, and to attribute to individuals belonging to these groups behaviors and capacities which were claimed to be deeply ingrained. A set of folk beliefs took hold that linked inherited physical differences between groups to inherited intellectual, behavioral, and moral qualities. Similar ideas can be found in other cultures, for example in China, where a concept often translated as "race" was associated with supposed common descent from the Yellow Emperor, and used to stress the unity of ethnic groups in China. Brutal conflicts between ethnic groups have existed throughout history and across the world.
Early taxonomic models
The first post-Graeco-Roman published classification of humans into distinct races seems to be François Bernier's Nouvelle division de la terre par les différents espèces ou races qui l'habitent ("New division of Earth by the different species or races which inhabit it"), published in 1684. In the 18th century the differences among human groups became a focus of scientific investigation. But the scientific classification of phenotypic variation was frequently coupled with racist ideas about innate predispositions of different groups, always attributing the most desirable features to the White, European race and arranging the other races along a continuum of progressively undesirable attributes. The 1735 classification of Carl Linnaeus, inventor of zoological taxonomy, divided the human species Homo sapiens into continental varieties of europaeus, asiaticus, americanus, and afer, each associated with a different humour: sanguine, melancholic, choleric, and phlegmatic, respectively. Homo sapiens europaeus was described as active, acute, and adventurous, whereas Homo sapiens afer was said to be crafty, lazy, and careless.
The 1775 treatise "The Natural Varieties of Mankind", by Johann Friedrich Blumenbach proposed five major divisions: the Caucasoid race, the Mongoloid race, the Ethiopian race (later termed Negroid), the American Indian race, and the Malayan race, but he did not propose any hierarchy among the races. Blumenbach also noted the graded transition in appearances from one group to adjacent groups and suggested that "one variety of mankind does so sensibly pass into the other, that you cannot mark out the limits between them".
From the 17th through 19th centuries, the merging of folk beliefs about group differences with scientific explanations of those differences produced what Smedley has called an "ideology of race". According to this ideology, races are primordial, natural, enduring and distinct. It was further argued that some groups may be the result of mixture between formerly distinct populations, but that careful study could distinguish the ancestral races that had combined to produce admixed groups. Subsequent influential classifications by Georges Buffon, Petrus Camper and Christoph Meiners all classified "Negros" as inferior to Europeans. In the United States, the racial theories of Thomas Jefferson were influential. He saw Africans as inferior to Whites especially in regard to their intellect, and imbued with unnatural sexual appetites, but described Native Americans as equals to whites.
Polygenism vs monogenism
In the last two decades of the 18th century, the theory of polygenism, the belief that different races had evolved separately in each continent and shared no common ancestor, was advocated in England by historian Edward Long and anatomist Charles White, in Germany by ethnographers Christoph Meiners and Georg Forster, and in France by Julien-Joseph Virey. In the US, Samuel George Morton, Josiah Nott and Louis Agassiz promoted this theory in the mid-19th century. Polygenism was popular and most widespread in the 19th century, culminating in the founding of the Anthropological Society of London (1863), which, during the period of the American Civil War, broke away from the Ethnological Society of London and its monogenic stance, their underlined difference lying, relevantly, in the so-called "Negro question": a substantial racist view by the former, and a more liberal view on race by the latter.
Modern scholarship
Models of human evolution
Today, all humans are classified as belonging to the species Homo sapiens. However, this is not the first species of the subfamily Homininae: the first species of the genus Homo, Homo habilis, evolved in East Africa at least 2 million years ago, and members of this species populated different parts of Africa in a relatively short time. Homo erectus evolved more than 1.8 million years ago, and by 1.5 million years ago had spread throughout Europe and Asia. Virtually all physical anthropologists agree that archaic Homo sapiens (a group including the possible species H. heidelbergensis, H. rhodesiensis, and H. neanderthalensis) evolved out of African H. erectus or H. ergaster. Anthropologists support the idea that anatomically modern humans (Homo sapiens) evolved in North or East Africa from an archaic human species such as H. heidelbergensis and then migrated out of Africa, mixing with and replacing H. heidelbergensis and H. neanderthalensis populations throughout Europe and Asia, and H. rhodesiensis populations in Sub-Saharan Africa (a combination of the Out of Africa and Multiregional models).
Biological classification
In the early 20th century, many anthropologists taught that race was an entirely biological phenomenon and that this was core to a person's behavior and identity, a position commonly called racial essentialism. This, coupled with a belief that linguistic, cultural, and social groups fundamentally existed along racial lines, formed the basis of what is now called scientific racism. After the Nazi eugenics program, along with the rise of anti-colonial movements, racial essentialism lost widespread popularity. New studies of culture and the fledgling field of population genetics undermined the scientific standing of racial essentialism, leading race anthropologists to revise their conclusions about the sources of phenotypic variation. A significant number of modern anthropologists and biologists in the West came to view race as an invalid genetic or biological designation.
The first to challenge the concept of race on empirical grounds were the anthropologists Franz Boas, who provided evidence of phenotypic plasticity due to environmental factors, and Ashley Montagu, who relied on evidence from genetics. E. O. Wilson then challenged the concept from the perspective of general animal systematics, and further rejected the claim that "races" were equivalent to "subspecies".
Human genetic variation is predominantly within races, continuous, and complex in structure, which is inconsistent with the concept of genetic human races, a point that the biological anthropologist Jonathan Marks has emphasized.
Subspecies
The term race in biology is used with caution because it can be ambiguous. Generally, when it is used it is effectively a synonym of subspecies. (For animals, the only taxonomic unit below the species level is usually the subspecies; there are narrower infraspecific ranks in botany, and race does not correspond directly with any of them.) Traditionally, subspecies are seen as geographically isolated and genetically differentiated populations. Studies of human genetic variation show that human populations are not geographically isolated, and their genetic differences are far smaller than those among comparable subspecies.
In 1978, Sewall Wright suggested that human populations that have long inhabited separated parts of the world should, in general, be considered different subspecies by the criterion that most individuals of such populations can be allocated correctly by inspection. Wright argued: "It does not require a trained anthropologist to classify an array of Englishmen, West Africans, and Chinese with 100% accuracy by features, skin color, and type of hair despite so much variability within each of these groups that every individual can easily be distinguished from every other." While in practice subspecies are often defined by easily observable physical appearance, there is not necessarily any evolutionary significance to these observed differences, so this form of classification has become less acceptable to evolutionary biologists. Likewise this typological approach to race is generally regarded as discredited by biologists and anthropologists.
Ancestrally differentiated populations (clades)
In 2000, philosopher Robin Andreasen proposed that cladistics might be used to categorize human races biologically, and that races can be both biologically real and socially constructed. Andreasen cited tree diagrams of relative genetic distances among populations published by Luigi Cavalli-Sforza as the basis for a phylogenetic tree of human races (p. 661). Biological anthropologist Jonathan Marks (2008) responded by arguing that Andreasen had misinterpreted the genetic literature: "These trees are phenetic (based on similarity), rather than cladistic (based on monophyletic descent, that is from a series of unique ancestors)." Evolutionary biologist Alan Templeton (2013) argued that multiple lines of evidence falsify the idea of a phylogenetic tree structure to human genetic diversity, and confirm the presence of gene flow among populations. Marks, Templeton, and Cavalli-Sforza all conclude that genetics does not provide evidence of human races.
Previously, anthropologists Lieberman and Jackson (1995) had also critiqued the use of cladistics to support concepts of race. They argued that "the molecular and biochemical proponents of this model explicitly use racial categories in their initial grouping of samples". For example, the large and highly diverse macroethnic groups of East Indians, North Africans, and Europeans are presumptively grouped as Caucasians prior to the analysis of their DNA variation. They argued that this a priori grouping limits and skews interpretations, obscures other lineage relationships, deemphasizes the impact of more immediate clinal environmental factors on genomic diversity, and can cloud our understanding of the true patterns of affinity.
In 2015, Keith Hunley, Graciela Cabana, and Jeffrey Long analyzed the Human Genome Diversity Project sample of 1,037 individuals in 52 populations, finding that diversity among non-African populations is the result of a serial founder effect process, with non-African populations as a whole nested among African populations, that "some African populations are equally related to other African populations and to non-African populations", and that "outside of Africa, regional groupings of populations are nested inside one another, and many of them are not monophyletic". Earlier research had also suggested that there has always been considerable gene flow between human populations, meaning that human population groups are not monophyletic. Rachel Caspari has argued that, since no groups currently regarded as races are monophyletic, by definition none of these groups can be clades.
Clines
One crucial innovation in reconceptualizing genotypic and phenotypic variation was the anthropologist C. Loring Brace's observation that such variations, insofar as they are affected by natural selection, slow migration, or genetic drift, are distributed along geographic gradations or clines. Skin color in Europe and Africa, which Brace discusses, is a case in point: it grades continuously with geography rather than falling into discrete groups.
In part, this is due to isolation by distance. This point called attention to a problem common to phenotype-based descriptions of races (for example, those based on hair texture and skin color): they ignore a host of other similarities and differences (for example, blood type) that do not correlate highly with the markers for race. Thus, anthropologist Frank Livingstone's conclusion was that, since clines cross racial boundaries, "there are no races, only clines".
In a response to Livingstone, Theodosius Dobzhansky argued that when talking about race one must be attentive to how the term is being used: "I agree with Dr. Livingstone that if races have to be 'discrete units', then there are no races, and if 'race' is used as an 'explanation' of the human variability, rather than vice versa, then the explanation is invalid." He further argued that one could use the term race if one distinguished between "race differences" and "the race concept". The former refers to any distinction in gene frequencies between populations; the latter is "a matter of judgment". He further observed that even when there is clinal variation: "Race differences are objectively ascertainable biological phenomena ... but it does not follow that racially distinct populations must be given racial (or subspecific) labels." In short, Livingstone and Dobzhansky agree that there are genetic differences among human beings; they also agree that the use of the race concept to classify people, and how the race concept is used, is a matter of social convention. They differ on whether the race concept remains a meaningful and useful social convention.
In 1964, the biologists Paul Ehrlich and Holm pointed out cases where two or more clines are distributed discordantly – for example, melanin is distributed in a decreasing pattern from the equator north and south; frequencies for the haplotype for beta-S hemoglobin, on the other hand, radiate out of specific geographical points in Africa. As the anthropologists Leonard Lieberman and Fatimah Linda Jackson observed, "Discordant patterns of heterogeneity falsify any description of a population as if it were genotypically or even phenotypically homogeneous".
Patterns such as those seen in human physical and genetic variation as described above have led to the consequence that the number and geographic location of any described races is highly dependent on the importance attributed to, and the quantity of, the traits considered. A skin-lightening mutation, estimated to have occurred 20,000 to 50,000 years ago, partially accounts for the appearance of light skin in people who migrated out of Africa northward into what is now Europe. East Asians owe their relatively light skin to different mutations. On the other hand, the greater the number of traits (or alleles) considered, the more subdivisions of humanity are detected, since traits and gene frequencies do not always correspond to the same geographical location.
Genetically differentiated populations
Another way to look at differences between populations is to measure genetic differences rather than physical differences between groups. The mid-20th-century anthropologist William C. Boyd defined race as: "A population which differs significantly from other populations in regard to the frequency of one or more of the genes it possesses. It is an arbitrary matter which, and how many, gene loci we choose to consider as a significant 'constellation'". Leonard Lieberman and Rodney Kirk have pointed out that "the paramount weakness of this statement is that if one gene can distinguish races then the number of races is as numerous as the number of human couples reproducing". Moreover, the anthropologist Stephen Molnar has suggested that the discordance of clines inevitably results in a multiplication of races that renders the concept itself useless. The Human Genome Project states "People who have lived in the same geographic region for many generations may have some alleles in common, but no allele will be found in all members of one population and in no members of any other." Massimo Pigliucci and Jonathan Kaplan argue that human races do exist, and that they correspond to the genetic classification of ecotypes, but that real human races do not correspond very much, if at all, to folk racial categories. In contrast, Walsh & Yun reviewed the literature in 2011 and reported: "Genetic studies using very few chromosomal loci find that genetic polymorphisms divide human populations into clusters with almost 100 percent accuracy and that they correspond to the traditional anthropological categories."
Some biologists argue that racial categories correlate with biological traits (e.g. phenotype), and that certain genetic markers have varying frequencies among human populations, some of which correspond more or less to traditional racial groupings.
Distribution of genetic variation
The distribution of genetic variants within and among human populations is impossible to describe succinctly because of the difficulty of defining a population, the clinal nature of variation, and heterogeneity across the genome (Long and Kittles 2003). In general, however, an average of 85% of statistical genetic variation exists within local populations, ≈7% is between local populations within the same continent, and ≈8% of variation occurs between large groups living on different continents. The recent African origin theory for humans would predict that in Africa there exists a great deal more diversity than elsewhere and that diversity should decrease the further from Africa a population is sampled. Hence, the 85% average figure is misleading: Long and Kittles find that rather than 85% of human genetic diversity existing in all human populations, about 100% of human diversity exists in a single African population, whereas only about 60% of human genetic diversity exists in the least diverse population they analyzed (the Surui, an indigenous population of Brazil). Statistical analysis that takes this difference into account confirms previous findings that "Western-based racial classifications have no taxonomic significance".
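Apportionment figures like the 85%/7%/8% split above come from partitioning expected heterozygosity into within- and between-group components. As a rough illustration of the arithmetic only (a minimal sketch, not a reproduction of Long and Kittles's analysis), the following Python snippet partitions the diversity of a single hypothetical biallelic locus across five hypothetical populations; every frequency in it is made up.

```python
# Partition expected heterozygosity at one biallelic locus into within- and
# between-population components (all allele frequencies are hypothetical).

def partition_diversity(freqs):
    """freqs: the frequency of one allele of a biallelic locus, per population."""
    p_bar = sum(freqs) / len(freqs)                    # pooled allele frequency
    h_total = 2 * p_bar * (1 - p_bar)                  # total heterozygosity H_T
    h_within = sum(2 * p * (1 - p) for p in freqs) / len(freqs)  # mean within-pop H_S
    return h_within / h_total, (h_total - h_within) / h_total

within_share, between_share = partition_diversity([0.50, 0.55, 0.45, 0.60, 0.40])
print(f"within populations: {within_share:.1%}, between populations: {between_share:.1%}")
```

Because the hypothetical frequencies differ only modestly across populations, nearly all of the heterozygosity falls in the within-population component, which is the same qualitative pattern the human studies cited above report.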
Cluster analysis
A 2002 study of random biallelic genetic loci found little to no evidence that humans were divided into distinct biological groups.
In his 2003 paper, "Human Genetic Diversity: Lewontin's Fallacy", A. W. F. Edwards argued that rather than using a locus-by-locus analysis of variation to derive taxonomy, it is possible to construct a human classification system based on characteristic genetic patterns, or clusters inferred from multilocus genetic data. Subsequent geographically based human studies have shown that such genetic clusters can be derived from analyzing a large number of loci, which can assort sampled individuals into groups analogous to traditional continental racial groups. Joanna Mountain and Neil Risch cautioned that while genetic clusters may one day be shown to correspond to phenotypic variations between groups, such assumptions were premature as the relationship between genes and complex traits remains poorly understood. However, Risch denied that such limitations render the analysis useless: "Perhaps just using someone's actual birth year is not a very good way of measuring age. Does that mean we should throw it out? ... Any category you come up with is going to be imperfect, but that doesn't preclude you from using it or the fact that it has utility."
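To make the idea of clusters inferred from multilocus data concrete, the sketch below simulates genotypes for two populations whose allele frequencies differ only slightly at each of many loci and then clusters the individuals with k-means. This is a simplified stand-in for the model-based clustering used in the literature, not Edwards's own method; the populations, frequencies, and parameters are all synthetic.

```python
# Cluster individuals from multilocus genotypes (synthetic data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_loci, n_per_pop = 1000, 50

p_a = rng.uniform(0.2, 0.8, n_loci)                             # population A frequencies
p_b = np.clip(p_a + rng.normal(0.0, 0.05, n_loci), 0.01, 0.99)  # B: small per-locus shifts

# Genotypes are allele counts (0, 1, or 2) per individual per locus.
genotypes = np.vstack([
    rng.binomial(2, p_a, size=(n_per_pop, n_loci)),
    rng.binomial(2, p_b, size=(n_per_pop, n_loci)),
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(genotypes)

# With enough loci, the inferred clusters line up with the simulated
# populations even though no single locus separates them cleanly.
print(labels[:n_per_pop].mean(), labels[n_per_pop:].mean())  # ~0.0 and ~1.0, or flipped
```

Information that is invisible locus-by-locus accumulates across many loci, which is the crux of Edwards's argument; it is also why the clustering outcome depends on which populations are sampled, a point discussed further below.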
Early human genetic cluster analysis studies were conducted with samples taken from ancestral population groups living at extreme geographic distances from each other. It was thought that such large geographic distances would maximize the genetic variation between the groups sampled in the analysis, and thus maximize the probability of finding cluster patterns unique to each group. In light of the historically recent acceleration of human migration (and correspondingly, human gene flow) on a global scale, further studies were conducted to judge the degree to which genetic cluster analysis can pattern ancestrally identified groups as well as geographically separated groups. One such study looked at a large multiethnic population in the United States, and "detected only modest genetic differentiation between different current geographic locales within each race/ethnicity group. Thus, ancient geographic ancestry, which is highly correlated with self-identified race/ethnicity – as opposed to current residence – is the major determinant of genetic structure in the U.S. population."
Witherspoon et al. have argued that even when individuals can be reliably assigned to specific population groups, it may still be possible for two randomly chosen individuals from different populations/clusters to be more similar to each other than to a randomly chosen member of their own cluster. They found that many thousands of genetic markers had to be used in order for the answer to the question "How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?" to be "never". This assumed three population groups separated by large geographic ranges (European, African and East Asian). The entire world population is much more complex, and studying an increasing number of groups would require an increasing number of markers for the same answer. The authors conclude that "caution should be used when using geographic or genetic ancestry to make inferences about individual phenotypes". Witherspoon et al. concluded: "The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population."
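The dependence on the number of markers can be explored with a small simulation. The sketch below uses synthetic data with deliberately exaggerated population differentiation (it does not use the datasets analyzed by Witherspoon et al.) to estimate how often a within-population pair of individuals is more dissimilar than a between-population pair as the number of loci grows.

```python
# Estimate how often a within-population pair is more dissimilar than a
# between-population pair, as a function of the number of loci (synthetic data).
import numpy as np

def within_more_dissimilar(n_loci, n=30, shift_sd=0.1, seed=1):
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.2, 0.8, n_loci)                               # population 1 frequencies
    q = np.clip(p + rng.normal(0.0, shift_sd, n_loci), 0.01, 0.99)  # population 2
    a = rng.binomial(2, p, size=(n, n_loci)).astype(float)
    b = rng.binomial(2, q, size=(n, n_loci)).astype(float)

    def sq_dists(x, y):
        return ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)

    iu = np.triu_indices(n, k=1)
    within = np.concatenate([sq_dists(a, a)[iu], sq_dists(b, b)[iu]])
    between = sq_dists(a, b).ravel()
    # Fraction of (within-pair, between-pair) comparisons in which the
    # within-population pair is the more dissimilar one.
    return (within[:, None] > between[None, :]).mean()

for n_loci in (10, 100, 1000, 10000):
    print(n_loci, round(within_more_dissimilar(n_loci), 3))
```

With a handful of loci the fraction sits a little under one half, so population labels say little about pairwise similarity; only as markers accumulate does it fall toward zero, mirroring the finding quoted above.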
Anthropologists such as C. Loring Brace, the philosophers Jonathan Kaplan and Rasmus Winther, and the geneticist Joseph Graves, have argued that the cluster structure of genetic data is dependent on the initial hypotheses of the researcher and the influence of these hypotheses on the choice of populations to sample. When one samples continental groups, the clusters become continental, but if one had chosen other sampling patterns, the clustering would be different. Weiss and Fullerton have noted that if one sampled only Icelanders, Mayans and Maoris, three distinct clusters would form and all other populations could be described as being clinally composed of admixtures of Maori, Icelandic and Mayan genetic materials. Kaplan and Winther therefore argue that, seen in this way, both Lewontin and Edwards are right in their arguments. They conclude that while racial groups are characterized by different allele frequencies, this does not mean that racial classification is a natural taxonomy of the human species, because multiple other genetic patterns can be found in human populations that crosscut racial distinctions. Moreover, the genomic data underdetermines whether one wishes to see subdivisions (i.e., splitters) or a continuum (i.e., lumpers). Under Kaplan and Winther's view, racial groupings are objective social constructions (see Mills 1998) that have conventional biological reality only insofar as the categories are chosen and constructed for pragmatic scientific reasons. In earlier work, Winther had identified "diversity partitioning" and "clustering analysis" as two separate methodologies, with distinct questions, assumptions, and protocols. Each is also associated with opposing ontological consequences vis-a-vis the metaphysics of race. Philosopher Lisa Gannett has argued that biogeographical ancestry, a concept devised by Mark Shriver and Tony Frudakis, is not an objective measure of the biological aspects of race as Shriver and Frudakis claim it is. She argues that it is actually just a "local category shaped by the U.S. context of its production, especially the forensic aim of being able to predict the race or ethnicity of an unknown suspect based on DNA found at the crime scene".
Clines and clusters in genetic variation
Recent studies of human genetic clustering have included a debate over how genetic variation is organized, with clusters and clines as the main possible orderings. One analysis argued for smooth, clinal genetic variation in ancestral populations even in regions previously considered racially homogeneous, with the apparent gaps turning out to be artifacts of sampling techniques. A later analysis of the Human Genome Diversity Panel disputed this, showing that there were small discontinuities in the smooth genetic variation for ancestral populations at the location of geographic barriers such as the Sahara, the oceans, and the Himalayas. Nonetheless, its authors stated that their findings "should not be taken as evidence of our support of any particular concept of biological race ... Genetic differences among human populations derive mainly from gradations in allele frequencies rather than from distinctive 'diagnostic' genotypes." A subsequent study, using a sample of 40 populations distributed roughly evenly across the Earth's land surface, found that "genetic diversity is distributed in a more clinal pattern when more geographically intermediate populations are sampled".
Guido Barbujani has written that human genetic variation is generally distributed continuously in gradients across much of Earth, and that there is no evidence that genetic boundaries between human populations exist as would be necessary for human races to exist.
Over time, human genetic variation has formed a nested structure that is inconsistent with the concept of races that have evolved independently of one another.
Social constructions
As anthropologists and other evolutionary scientists have shifted away from the language of race to the term population to talk about genetic differences, historians, cultural anthropologists and other social scientists re-conceptualized the term "race" as a cultural category or identity, i.e., a way among many possible ways in which a society chooses to divide its members into categories.
Many social scientists have replaced the word race with the word "ethnicity" to refer to self-identifying groups based on beliefs concerning shared culture, ancestry and history. Alongside empirical and conceptual problems with "race", following the Second World War, evolutionary and social scientists were acutely aware of how beliefs about race had been used to justify discrimination, apartheid, slavery, and genocide. This questioning gained momentum in the 1960s during the civil rights movement in the United States and the emergence of numerous anti-colonial movements worldwide. They thus came to believe that race itself is a social construct: a concept once thought to correspond to an objective reality, but sustained by its social functions.
Craig Venter of Celera Genomics and Francis Collins of the National Institutes of Health jointly announced the mapping of the human genome in 2000. Upon examining the data from the genome mapping, Venter realized that although the genetic variation within the human species is on the order of 1–3% (instead of the previously assumed 1%), the types of variations do not support the notion of genetically defined races. Venter said, "Race is a social concept. It's not a scientific one. There are no bright lines (that would stand out), if we could compare all the sequenced genomes of everyone on the planet. ... When we try to apply science to try to sort out these social differences, it all falls apart."
Anthropologist Stephan Palmié has argued that race "is not a thing but a social relation"; or, in the words of Katya Gibel Mevorach, "a metonym", "a human invention whose criteria for differentiation are neither universal nor fixed but have always been used to manage difference". As such, the use of the term "race" itself must be analyzed. Moreover, they argue that biology will not explain why or how people use the idea of race; only history and social relationships will.
Imani Perry has argued that race "is produced by social arrangements and political decision making", and that "race is something that happens, rather than something that is. It is dynamic, but it holds no objective truth." Similarly, in Racial Culture: A Critique (2005), Richard T. Ford argued that while "there is no necessary correspondence between the ascribed identity of race and one's culture or personal sense of self" and "group difference is not intrinsic to members of social groups but rather contingent o[n] the social practices of group identification", the social practices of identity politics may coerce individuals into the "compulsory" enactment of "prewritten racial scripts".
Brazil
Compared to 19th-century United States, 20th-century Brazil was characterized by a perceived relative absence of sharply defined racial groups. According to anthropologist Marvin Harris, this pattern reflects a different history and different social relations.
Race in Brazil was "biologized", but in a way that recognized the difference between ancestry (which determines genotype) and phenotypic differences. There, racial identity was not governed by a rigid descent rule, such as the one-drop rule, as it was in the United States. A Brazilian child was never automatically identified with the racial type of one or both parents, nor were there only a very limited number of categories to choose from; indeed, full siblings can belong to different racial groups.
Over a dozen racial categories would be recognized in conformity with all the possible combinations of hair color, hair texture, eye color, and skin color. These types grade into each other like the colors of the spectrum, and no one category stands significantly isolated from the rest. That is, race referred preferentially to appearance, not heredity, and appearance is a poor indication of ancestry, because only a few genes are responsible for someone's skin color and traits: a person who is considered white may have more African ancestry than a person who is considered black, and the reverse can also be true about European ancestry. The complexity of racial classifications in Brazil reflects the extent of genetic mixing in Brazilian society, a society that remains highly, but not strictly, stratified along color lines. These socioeconomic factors are also significant to the limits of racial lines, because a minority of pardos, or brown people, are likely to begin declaring themselves white or black as they move upward socially, being seen as relatively "whiter" as their perceived social status increases (much as in other regions of Latin America).
Fluidity of racial categories aside, the "biologification" of race in Brazil referred to above would match contemporary concepts of race in the United States quite closely if Brazilians were required to choose their race as one of the three main IBGE census categories (white, pardo, and black; Asian and Indigenous apart). While assimilated Amerindians and people with very high proportions of Amerindian ancestry are usually grouped as caboclos, a subgroup of pardos which roughly translates as both mestizo and hillbilly, people with a smaller proportion of Amerindian descent are expected to show a higher European genetic contribution in order to be grouped as pardo. In several genetic tests, people with less than 60–65% European descent and 5–10% Amerindian descent usually cluster with Afro-Brazilians (as reported by the individuals), who constitute 6.9% of the population, and those with about 45% or more Sub-Saharan contribution usually do so as well (on average, Afro-Brazilian DNA was reported to be about 50% Sub-Saharan African, 37% European and 13% Amerindian).
If reporting were more consistent with the actual gradations of genetic mixing (e.g. not clustering people with a balanced degree of African and non-African ancestry in the black group rather than the multiracial one, unlike elsewhere in Latin America, where people with a high proportion of African descent tend to classify themselves as mixed), more people would report themselves as white and pardo in Brazil (47.7% and 42.4% of the population as of 2010, respectively), because research suggests its population has, on average, between 65% and 80% autosomal European ancestry (as well as >35% European mtDNA and >95% European Y-DNA).
From the last decades of the Empire until the 1950s, the proportion of the white population increased significantly while Brazil welcomed 5.5 million immigrants between 1821 and 1932, not far behind its neighbor Argentina with 6.4 million, and it received more European immigrants in its colonial history than the United States: between 1500 and 1760, 700,000 Europeans settled in Brazil, while 530,000 Europeans settled in the United States over the same period. Thus, the historical construction of race in Brazilian society has dealt primarily with gradations between persons of majority European ancestry and small minority groups with, in more recent times, a lower proportion of European ancestry.
European Union
According to the Council of the European Union:
The European Union uses the terms racial origin and ethnic origin synonymously in its documents and according to it "the use of the term 'racial origin' in this directive does not imply an acceptance of such [racial] theories". Haney López warns that using "race" as a category within the law tends to legitimize its existence in the popular imagination. In the diverse geographic context of Europe, ethnicity and ethnic origin are arguably more resonant and are less encumbered by the ideological baggage associated with "race". In the European context, the historical resonance of "race" underscores its problematic nature. In some states, it is strongly associated with laws promulgated by the Nazi and Fascist governments in Europe during the 1930s and 1940s. Indeed, in 1996, the European Parliament adopted a resolution stating that "the term should therefore be avoided in all official texts".
The concept of racial origin relies on the notion that human beings can be separated into biologically distinct "races", an idea generally rejected by the scientific community. Since all human beings belong to the same species, the ECRI (European Commission against Racism and Intolerance) rejects theories based on the existence of different "races". However, in its Recommendation ECRI uses this term in order to ensure that those persons who are generally and erroneously perceived as belonging to "another race" are not excluded from the protection provided for by the legislation. The law claims to reject the existence of "race", yet penalizes situations where someone is treated less favourably on this ground.
United States
The immigrants to the United States came from every region of Europe, Africa, and Asia. They mixed among themselves and with the indigenous inhabitants of the continent. In the United States most people who self-identify as African American have some European ancestors, while many people who identify as European American have some African or Amerindian ancestors.
Since the early history of the United States, Amerindians, African Americans, and European Americans have been classified as belonging to different races. Efforts to track mixing between groups led to a proliferation of categories, such as mulatto and octoroon. The criteria for membership in these races diverged in the late 19th century. During the Reconstruction era, increasing numbers of Americans began to consider anyone with "one drop" of known "Black blood" to be Black, regardless of appearance. By the early 20th century, this notion was made statutory in many states. Amerindians continue to be defined by a certain percentage of "Indian blood" (called blood quantum). To be White one had to have perceived "pure" White ancestry. The one-drop rule or hypodescent rule refers to the convention of defining a person as racially black if he or she has any known African ancestry. This rule meant that those who were mixed race but with some discernible African ancestry were defined as black. The one-drop rule is specific not only to those with African ancestry but also to the United States, making it a particularly African-American experience.
The decennial censuses conducted since 1790 in the United States created an incentive to establish racial categories and fit people into these categories.
The term "Hispanic" as an ethnonym emerged in the 20th century with the rise of migration of laborers from the Spanish-speaking countries of Latin America to the United States. Today, the word "Latino" is often used as a synonym for "Hispanic". The definitions of both terms are non-race specific, and include people who consider themselves to be of distinct races (Black, White, Amerindian, Asian, and mixed groups). However, there is a common misconception in the US that Hispanic/Latino is a race or sometimes even that national origins such as Mexican, Cuban, Colombian, Salvadoran, etc. are races. In contrast to "Latino" or "Hispanic", "Anglo" refers to non-Hispanic White Americans or non-Hispanic European Americans, most of whom speak the English language but are not necessarily of English descent.
Views across disciplines over time
Anthropology
The concept of race classification in physical anthropology lost credibility around the 1960s and is now considered untenable. A 2019 statement by the American Association of Physical Anthropologists declares: "Race does not provide an accurate representation of human biological variation. It was never accurate in the past, and it remains inaccurate when referencing contemporary human populations. Humans are not divided biologically into distinct continental types or racial genetic clusters. Instead, the Western concept of race must be understood as a classification system that emerged from, and in support of, European colonialism, oppression, and discrimination."

Wagner et al. (2017) surveyed 3,286 American anthropologists' views on race and genetics, including both cultural and biological anthropologists. They found a consensus among them that biological races do not exist in humans, but that race does exist insofar as the social experiences of members of different races can have significant effects on health.
Wang, Štrkalj et al. (2003) examined the use of race as a biological concept in research papers published in China's only biological anthropology journal, Acta Anthropologica Sinica. The study showed that the race concept was widely used among Chinese anthropologists. In a 2007 review paper, Štrkalj suggested that the stark contrast of the racial approach between the United States and China was due to the fact that race is a factor for social cohesion among the ethnically diverse people of China, whereas "race" is a very sensitive issue in America and the racial approach is considered to undermine social cohesion – with the result that in the socio-political context of US academics scientists are encouraged not to use racial categories, whereas in China they are encouraged to use them.
Lieberman et al. in a 2004 study researched the acceptance of race as a concept among anthropologists in the United States, Canada, Spanish-speaking areas, Europe, Russia and China. Rejection of race ranged from high to low, with the highest rejection rate in the United States and Canada, a moderate rejection rate in Europe, and the lowest rejection rate in Russia and China. Methods used in the studies reported included questionnaires and content analysis.
Kaszycka et al. (2009) in 2002–2003 surveyed European anthropologists' opinions toward the biological race concept. Three factors – country of academic education, discipline, and age – were found to be significant in differentiating the replies. Those educated in Western Europe, physical anthropologists, and middle-aged persons rejected race more frequently than those educated in Eastern Europe, people in other branches of science, and those from both younger and older generations. "The survey shows that the views on race are sociopolitically (ideologically) influenced and highly dependent on education."
United States
Since the second half of the 20th century, physical anthropology in the United States has moved away from a typological understanding of human biological diversity towards a genomic and population-based perspective. Anthropologists have tended to understand race as a social classification of humans based on phenotype and ancestry as well as cultural factors, as the concept is understood in the social sciences. Since 1932, an increasing number of college textbooks introducing physical anthropology have rejected race as a valid concept: from 1932 to 1976, only seven out of thirty-two rejected race; from 1975 to 1984, thirteen out of thirty-three rejected race; from 1985 to 1993, thirteen out of nineteen rejected race. According to one academic journal entry, where 78 percent of the articles in the 1931 Journal of Physical Anthropology employed the term race or nearly synonymous terms reflecting a bio-race paradigm, only 36 percent did so in 1965, and just 28 percent did in 1996.
A 1998 "Statement on 'Race'" composed by a select committee of anthropologists and issued by the executive board of the American Anthropological Association, which they argue "represents generally the contemporary thinking and scholarly positions of a majority of anthropologists", declares:
An earlier survey, conducted in 1985, asked 1,200 American scientists whether they disagreed with the following proposition: "There are biological races in the species Homo sapiens." Among anthropologists, the proportions disagreeing were:
physical anthropologists: 41%
cultural anthropologists: 53%
Lieberman's study also showed that more women reject the concept of race than men.
The same survey, conducted again in 1999, showed that the number of anthropologists disagreeing with the idea of biological race had risen substantially. The results were as follows:
physical anthropologists: 69%
cultural anthropologists: 80%
A line of research conducted by Cartmill (1998), however, seemed to limit the scope of Lieberman's finding that there was "a significant degree of change in the status of the race concept". Goran Štrkalj has argued that this may be because Lieberman and collaborators had looked at all the members of the American Anthropological Association irrespective of their field of research interest, while Cartmill had looked specifically at biological anthropologists interested in human variation.
In 2007, Ann Morning interviewed over 40 American biologists and anthropologists and found significant disagreements over the nature of race, with no one viewpoint holding a majority among either group. Morning also argues that a third position, "antiessentialism", which holds that race is not a useful concept for biologists, should be introduced into this debate in addition to "constructionism" and "essentialism".
According to the 2000 edition of a popular physical anthropology textbook, forensic anthropologists are overwhelmingly in support of the idea of the basic biological reality of human races. Forensic physical anthropologist George W. Gill of the University of Wyoming has said that the idea that race is only skin deep "is simply not true, as any experienced forensic anthropologist will affirm" and "Many morphological features tend to follow geographic boundaries coinciding often with climatic zones. This is not surprising since the selective forces of climate are probably the primary forces of nature that have shaped human races with regard not only to skin color and hair form but also the underlying bony structures of the nose, cheekbones, etc. (For example, more prominent noses humidify air better.)" While he can see good arguments for both sides, the complete denial of the opposing evidence "seems to stem largely from socio-political motivation and not science at all". He also states that many biological anthropologists see races as real yet "not one introductory textbook of physical anthropology even presents that perspective as a possibility. In a case as flagrant as this, we are not dealing with science but rather with blatant, politically motivated censorship".
In partial response to Gill's statement, Professor of Biological Anthropology C. Loring Brace argues that the reason laymen and biological anthropologists can determine the geographic ancestry of an individual can be explained by the fact that biological characteristics are clinally distributed across the planet, and that does not translate into the concept of race. The concept of "race" is still sometimes used within forensic anthropology (when analyzing skeletal remains), biomedical research, and race-based medicine. Brace has criticized forensic anthropologists for this, arguing that they in fact should be talking about regional ancestry. He argues that while forensic anthropologists can determine that a skeletal remain comes from a person with ancestors in a specific region of Africa, categorizing that skeleton as being "black" is a socially constructed category that is only meaningful in the particular social context of the United States, and which is not itself scientifically valid.
Biology, anatomy, and medicine
In the same 1985 survey, 16% of the surveyed biologists and 36% of the surveyed developmental psychologists disagreed with the proposition: "There are biological races in the species Homo sapiens."
The authors of the study also examined 77 college textbooks in biology and 69 in physical anthropology published between 1932 and 1989. Physical anthropology texts argued that biological races exist until the 1970s, when they began to argue that races do not exist. In contrast, biology textbooks did not undergo such a reversal; many instead dropped their discussion of race altogether. The authors attributed this to biologists trying to avoid discussing the political implications of racial classifications, and to the ongoing discussions in biology about the validity of the idea of "subspecies". The authors concluded, "The concept of race, masking the overwhelming genetic similarity of all peoples and the mosaic patterns of variation that do not correspond to racial divisions, is not only socially dysfunctional but is biologically indefensible as well (pp. 518–519)."
A 1994 examination of 32 English sport/exercise science textbooks found that 7 (21.9%) claimed that there are biophysical differences due to race that might explain differences in sports performance, 24 (75%) neither mentioned nor refuted the concept, and 1 (3.1%) expressed caution with the idea.
In February 2001, the editors of Archives of Pediatrics and Adolescent Medicine asked "authors to not use race and ethnicity when there is no biological, scientific, or sociological reason for doing so". The editors also stated that "analysis by race and ethnicity has become an analytical knee-jerk reflex". Nature Genetics now asks authors to "explain why they make use of particular ethnic groups or populations, and how classification was achieved".
Morning (2008) looked at high school biology textbooks during the 1952–2002 period and initially found a similar pattern: direct discussion of race fell from 92% of textbooks to only 35% in the 1983–92 period, though this subsequently rose somewhat to 43%. More indirect and brief discussions of race in the context of medical disorders have increased from none to 93% of textbooks. In general, the material on race has moved from surface traits to genetics and evolutionary history. The study argues that the textbooks' fundamental message about the existence of races has changed little.
Surveying views on race in the scientific community in 2008, Morning concluded that biologists had failed to come to a clear consensus, and they often split along cultural and demographic lines. She notes: "At best, one can conclude that biologists and anthropologists now appear equally divided in their beliefs about the nature of race."
Gissis (2008) examined several important American and British journals in genetics, epidemiology and medicine for their content during the 1946–2003 period, writing: "Based upon my findings I argue that the category of race only seemingly disappeared from scientific discourse after World War II and has had a fluctuating yet continuous use during the time span from 1946 to 2003, and has even become more pronounced from the early 1970s on".
A 2008 study interviewed 33 health services researchers from differing geographic regions. The researchers recognized the problems with racial and ethnic variables but the majority still believed these variables were necessary and useful.
A 2010 examination of 18 widely used English anatomy textbooks found that they all represented human biological variation in superficial and outdated ways, many of them making use of the race concept in ways that were current in 1950s anthropology. The authors recommended that anatomical education should describe human anatomical variation in more detail and rely on newer research that demonstrates the inadequacies of simple racial typologies.
A 2021 study that examined over 11,000 papers from 1949 to 2018 in the American Journal of Human Genetics, found that "race" was used in only 5% of papers published in the last decade, down from 22% in the first. Together with an increase in use of the terms "ethnicity", "ancestry", and location-based terms, it suggests that human geneticists have mostly abandoned the term "race".
The National Academies of Sciences, Engineering, and Medicine (NASEM), supported by the US National Institutes of Health, formally declared that "researchers should not use race as a proxy for describing human genetic variation". The report of its Committee on the Use of Race, Ethnicity, and Ancestry as Population Descriptors in Genomics Research, titled Using Population Descriptors in Genetics and Genomics Research, was released on 14 March 2023. The report stated: "In humans, race is a socially constructed designation, a misleading and harmful surrogate for population genetic differences, and has a long history of being incorrectly identified as the major genetic reason for phenotypic differences between groups." Committee co-chair Charmaine D. Royal and Robert O. Keohane of Duke University agreed in the meeting: "Classifying people by race is a practice entangled with and rooted in racism."
Sociology
Lester Frank Ward (1841–1913), considered to be one of the founders of American sociology, rejected notions that there were fundamental differences that distinguished one race from another, although he acknowledged that social conditions differed dramatically by race. At the turn of the 20th century, sociologists viewed the concept of race in ways that were shaped by the scientific racism of the 19th and early 20th centuries. Many sociologists focused on African Americans, called Negroes at that time, and claimed that they were inferior to whites. White sociologist Charlotte Perkins Gilman (1860–1935), for example, used biological arguments to claim the inferiority of African Americans. American sociologist Charles H. Cooley (1864–1929) theorized that differences among races were "natural", and that biological differences result in differences in intellectual abilities. Edward Alsworth Ross (1866–1951), also an important figure in the founding of American sociology, and a eugenicist, believed that whites were the superior race, and that there were essential differences in "temperament" among races. In 1910, the American Journal of Sociology published an article by Ulysses G. Weatherly (1865–1940) that called for white supremacy and segregation of the races to protect racial purity.
W. E. B. Du Bois (1868–1963), one of the first African-American sociologists, was the first sociologist to use sociological concepts and empirical research methods to analyze race as a social construct instead of a biological reality. Beginning in 1899 with his book The Philadelphia Negro, Du Bois studied and wrote about race and racism throughout his career. In his work, he contended that social class, colonialism, and capitalism shaped ideas about race and racial categories. Social scientists largely abandoned scientific racism and biological reasons for racial categorization schemes by the 1930s. Other early sociologists, especially those associated with the Chicago School, joined Du Bois in theorizing race as a socially constructed fact. In 1978, William Julius Wilson argued that race and racial classification systems were declining in significance, and that instead, social class more accurately described what sociologists had earlier understood as race. In 1986, sociologists Michael Omi and Howard Winant successfully introduced the concept of racial formation to describe the process by which racial categories are created. Omi and Winant assert that "there is no biological basis for distinguishing among human groups along the lines of race".
Eduardo Bonilla-Silva, Sociology professor at Duke University, remarks: "I contend that racism is, more than anything else, a matter of group power; it is about a dominant racial group (whites) striving to maintain its systemic advantages and minorities fighting to subvert the racial status quo." The types of practices that take place under this new color-blind racism are subtle, institutionalized, and supposedly not racial. Color-blind racism thrives on the idea that race is no longer an issue in the United States. There are contradictions between the alleged color-blindness of most whites and the persistence of a color-coded system of inequality.
Today, sociologists generally understand race and racial categories as socially constructed, and reject racial categorization schemes that depend on biological differences.
Political and practical uses
Biomedicine
In the United States, federal government policy promotes the use of racially categorized data to identify and address health disparities between racial or ethnic groups. In clinical settings, race has sometimes been considered in the diagnosis and treatment of medical conditions. Doctors have noted that some medical conditions are more prevalent in certain racial or ethnic groups than in others, without being sure of the cause of those differences. Recent interest in race-based medicine, or race-targeted pharmacogenomics, has been fueled by the proliferation of human genetic data which followed the decoding of the human genome in the first decade of the twenty-first century. There is an active debate among biomedical researchers about the meaning and importance of race in their research. Proponents of the use of racial categories in biomedicine argue that continued use of racial categorizations in biomedical research and clinical practice makes possible the application of new genetic findings, and provides a clue to diagnosis. Biomedical researchers' positions on race fall into two main camps: those who consider the concept of race to have no biological basis and those who consider it to have the potential to be biologically meaningful. Members of the latter camp often base their arguments around the potential to create genome-based personalized medicine.
Other researchers point out that finding a difference in disease prevalence between two socially defined groups does not necessarily imply genetic causation of the difference. They suggest that medical practices should maintain their focus on the individual rather than an individual's membership to any group. They argue that overemphasizing genetic contributions to health disparities carries various risks such as reinforcing stereotypes, promoting racism or ignoring the contribution of non-genetic factors to health disparities. International epidemiological data show that living conditions rather than race make the biggest difference in health outcomes even for diseases that have "race-specific" treatments. Some studies have found that patients are reluctant to accept racial categorization in medical practice.
Law enforcement
In an attempt to provide general descriptions that may facilitate the job of law enforcement officers seeking to apprehend suspects, the United States FBI employs the term "race" to summarize the general appearance (skin color, hair texture, eye shape, and other such easily noticed characteristics) of individuals whom they are attempting to apprehend. From the perspective of law enforcement officers, it is generally more important to arrive at a description that will readily suggest the general appearance of an individual than to make a scientifically valid categorization by DNA or other such means. Thus, in addition to assigning a wanted individual to a racial category, such a description will include: height, weight, eye color, scars and other distinguishing characteristics.
Criminal justice agencies in England and Wales use at least two separate racial/ethnic classification systems when reporting crime, as of 2010. One is the system used in the 2001 Census when individuals identify themselves as belonging to a particular ethnic group: W1 (White-British), W2 (White-Irish), W9 (Any other white background); M1 (White and black Caribbean), M2 (White and black African), M3 (White and Asian), M9 (Any other mixed background); A1 (Asian-Indian), A2 (Asian-Pakistani), A3 (Asian-Bangladeshi), A9 (Any other Asian background); B1 (Black Caribbean), B2 (Black African), B3 (Any other black background); O1 (Chinese), O9 (Any other). The other is categories used by the police when they visually identify someone as belonging to an ethnic group, e.g. at the time of a stop and search or an arrest: White – North European (IC1), White – South European (IC2), Black (IC3), Asian (IC4), Chinese, Japanese, or South East Asian (IC5), Middle Eastern (IC6), and Unknown (IC0). "IC" stands for "Identification Code;" these items are also referred to as Phoenix classifications. Officers are instructed to "record the response that has been given" even if the person gives an answer which may be incorrect; their own perception of the person's ethnic background is recorded separately. Comparability of the information being recorded by officers was brought into question by the Office for National Statistics (ONS) in September 2007, as part of its Equality Data Review; one problem cited was the number of reports that contained an ethnicity of "Not Stated".
In many countries, such as France, the state is legally banned from maintaining data based on race.
In the United States, the practice of racial profiling has been ruled to be both unconstitutional and a violation of civil rights. There is active debate regarding the cause of a marked correlation between the recorded crimes, punishments meted out, and the country's populations. Many consider de facto racial profiling an example of institutional racism in law enforcement.
Mass incarceration in the United States disproportionately impacts African American and Latino communities. Michelle Alexander, author of The New Jim Crow: Mass Incarceration in the Age of Colorblindness (2010), argues that mass incarceration is best understood as not only a system of overcrowded prisons. Mass incarceration is also, "the larger web of laws, rules, policies, and customs that control those labeled criminals both in and out of prison". She defines it further as "a system that locks people not only behind actual bars in actual prisons, but also behind virtual bars and virtual walls", illustrating the second-class citizenship that is imposed on a disproportionate number of people of color, specifically African-Americans. She compares mass incarceration to Jim Crow laws, stating that both work as racial caste systems.
Many research findings appear to agree that the impact of victim race in the intimate partner violence (IPV) arrest decision might include a racial bias in favor of white victims. A 2011 study in a national sample of IPV arrests found that female arrest was more likely if the male victim was white and the female offender was black, while male arrest was more likely if the female victim was white. For both female and male arrest in IPV cases, situations involving married couples were more likely to lead to arrest compared to dating or divorced couples. More research is needed to understand agency and community factors that influence police behavior and how discrepancies in IPV interventions/tools of justice can be addressed.
DNA cluster analysis to determine racial background has recently been used by some criminal investigators to narrow their search for the identity of both suspects and victims. Proponents of DNA profiling in criminal investigations cite cases where leads based on DNA analysis proved useful, but the practice remains controversial among medical ethicists, defense lawyers and some in law enforcement.
The Constitution of Australia contains a line about 'people of any race for whom it is deemed necessary to make special laws', despite there being no agreed definition of race described in the document.
Forensic anthropology
Similarly, forensic anthropologists draw on highly heritable morphological features of human remains (e.g. cranial measurements) to aid in the identification of the body, including in terms of race. In a 1992 article, anthropologist Norman Sauer noted that anthropologists had generally abandoned the concept of race as a valid representation of human biological diversity, except for forensic anthropologists. He asked, "If races don't exist, why are forensic anthropologists so good at identifying them?" He concluded:
Identification of the ancestry of an individual is dependent upon knowledge of the frequency and distribution of phenotypic traits in a population. This does not necessitate the use of a racial classification scheme based on unrelated traits, although the race concept is widely used in medical and legal contexts in the United States. Some studies have reported that races can be identified with a high degree of accuracy using certain methods, such as that developed by Giles and Elliot. However, this method sometimes fails to be replicated in other times and places; for instance, when the method was re-tested to identify Native Americans, the average rate of accuracy dropped from 85% to 33%. Prior information about the individual (e.g. Census data) is also important in allowing the accurate identification of the individual's "race".
In a different approach, anthropologist C. Loring Brace said:
In association with a NOVA program in 2000 about race, he wrote an essay opposing use of the term.
A 2002 study found that about 13% of human craniometric variation existed between regions, while 6% existed between local populations within regions and 81% within local populations. In contrast, the opposite pattern of genetic variation was observed for skin color (which is often used to define race), with 88% of variation between regions. The study concluded: "The apportionment of genetic diversity in skin color is atypical, and cannot be used for purposes of classification."
Similarly, a 2009 study found that craniometrics could be used accurately to determine what part of the world someone was from based on their cranium; however, this study also found that there were no abrupt boundaries that separated craniometric variation into distinct racial groups. Another 2009 study showed that American blacks and whites had different skeletal morphologies, and that significant patterning in variation in these traits exists within continents. This suggests that classifying humans into races based on skeletal characteristics would necessitate many different "races" being defined.
In 2010, philosopher Neven Sesardić argued that when several traits are analyzed at the same time, forensic anthropologists can classify a person's race with an accuracy of close to 100% based on only skeletal remains. Sesardić's claim has been disputed by philosopher Massimo Pigliucci, who accused Sesardić of "cherry pick[ing] the scientific evidence and reach[ing] conclusions that are contradicted by it". Specifically, Pigliucci argued that Sesardić misrepresented a paper by Ousley et al. (2009), and neglected to mention that they identified differentiation not just between individuals from different races, but also between individuals from different tribes, local environments, and time periods.
See also
Casta
Clan
Cultural identity
Ethnic nationalism
Ethnic stereotype
Genetic distance
Human skin color
Interracial marriage
List of contemporary ethnic groups
Melanism
Minzu (anthropology)
Multiracial
Race and ethnicity in censuses (US)
Race and ethnicity in Latin America
Racialization
Raciolinguistics
UNESCO statements on race
Race (French Constitution)
Anthropology
Kinship and descent
Social inequality | Race (human categorization) | Biology | 14,545 |
1,623,208 | https://en.wikipedia.org/wiki/Growth%20medium | A growth medium or culture medium is a solid, liquid, or semi-solid designed to support the growth of a population of microorganisms or cells via the process of cell proliferation, or of small plants like the moss Physcomitrella patens. Different types of media are used for growing different types of cells.
The two major types of growth media are those used for cell culture, which use specific cell types derived from plants or animals, and those used for microbiological culture, which are used for growing microorganisms such as bacteria or fungi. The most common growth media for microorganisms are nutrient broths and agar plates; specialized media are sometimes required for microorganism and cell culture growth. Some organisms, termed fastidious organisms, require specialized environments due to complex nutritional requirements. Viruses, for example, are obligate intracellular parasites and require a growth medium containing living cells.
Types
The most common growth media for microorganisms are nutrient broths (liquid nutrient medium) or lysogeny broth medium. Liquid media are often mixed with agar and poured via a sterile media dispenser into Petri dishes to solidify. These agar plates provide a solid medium on which microbes may be cultured. They remain solid, as very few bacteria are able to decompose agar (the exception being some species in the genera Cytophaga, Flavobacterium, Bacillus, Pseudomonas, and Alcaligenes). Bacteria grown in liquid cultures often form colloidal suspensions.
The difference between growth media used for cell culture and those used for microbiological culture is that cells derived from whole organisms and grown in culture often cannot grow without the addition of, for instance, hormones or growth factors which usually occur in vivo. In the case of animal cells, this difficulty is often addressed by the addition of blood serum or a synthetic serum replacement to the medium. In the case of microorganisms, no such limitations exist, as they are often unicellular organisms. One other major difference is that animal cells in culture are often grown on a flat surface to which they attach, and the medium is provided in a liquid form, which covers the cells. In contrast, bacteria such as Escherichia coli may be grown on solid or in liquid media.
An important distinction between growth media types is that of chemically defined versus undefined media. A defined medium will have known quantities of all ingredients. For microorganisms, they consist of providing trace elements and vitamins required by the microbe and especially defined carbon and nitrogen sources. Glucose or glycerol are often used as carbon sources, and ammonium salts or nitrates as inorganic nitrogen sources. An undefined medium has some complex ingredients, such as yeast extract or casein hydrolysate, which consist of a mixture of many chemical species in unknown proportions. Undefined media are sometimes chosen based on price and sometimes by necessity – some microorganisms have never been cultured on defined media.
A good example of a growth medium is the wort used to make beer. The wort contains all the nutrients required for yeast growth, and under anaerobic conditions, alcohol is produced. When the fermentation process is complete, the combination of medium and dormant microbes, now beer, is ready for consumption. The main types are
culture media
minimal media
selective media
differential media
transport media
indicator media
Culture media
Culture media contain all the elements that most bacteria need for growth and are not selective, so they are used for the general cultivation and maintenance of bacteria kept in laboratory culture collections.
An undefined medium (also known as a basal or complex medium) contains:
a carbon source such as glucose
water
various salts
a source of amino acids and nitrogen (e.g. beef, yeast extract)
This is an undefined medium because the amino-acid source contains a variety of compounds; the exact composition is unknown.
A defined medium (also known as chemically defined medium or synthetic medium) is a medium in which
all the chemicals used are known
no yeast, animal, or plant tissue is present
Examples of nutrient media:
nutrient agar
plate count agar
trypticase soy agar
Minimal media
A defined medium that has just enough ingredients to support growth is called a "minimal medium". The number of ingredients that must be added to a minimal medium varies enormously depending on which microorganism is being grown. Minimal media are those that contain the minimum nutrients possible for colony growth, generally without the presence of amino acids, and are often used by microbiologists and geneticists to grow "wild-type" microorganisms. Minimal media can also be used to select for or against recombinants or exconjugants.
Minimal medium typically contains:
a carbon source, which may be a sugar such as glucose, or a less energy-rich source such as succinate
various salts, which may vary among bacteria species and growing conditions; these generally provide essential elements such as magnesium, nitrogen, phosphorus, and sulfur to allow the bacteria to synthesize protein and nucleic acids
water
Supplementary minimal media are minimal media that also contain a single selected agent, usually an amino acid or a sugar. This supplementation allows for the culturing of specific lines of auxotrophic recombinants.
Selective media
Selective media are used for the growth of only selected microorganisms. For example, if a microorganism is resistant to a certain antibiotic, such as ampicillin or tetracycline, then that antibiotic can be added to the medium to prevent other cells, which do not possess the resistance, from growing. Media lacking an amino acid such as proline in conjunction with E. coli unable to synthesize it were commonly used by geneticists before the emergence of genomics to map bacterial chromosomes.
Selective growth media are also used in cell culture to ensure the survival or proliferation of cells with certain properties, such as antibiotic resistance or the ability to synthesize a certain metabolite. Normally, the presence of a specific gene or an allele of a gene confers upon the cell the ability to grow in the selective medium. In such cases, the gene is termed a marker.
Selective growth media for eukaryotic cells commonly contain neomycin to select cells that have been successfully transfected with a plasmid carrying the neomycin resistance gene as a marker. Ganciclovir is an exception to the rule, as it is used to specifically kill cells that carry its respective marker, the Herpes simplex virus thymidine kinase.
Examples of selective media:
Eosin methylene blue contains dyes that are toxic for Gram-positive bacteria. It is the selective and differential medium for coliforms.
YM (yeast extract agar) has a low pH, deterring bacterial growth.
MEA (malt extract agar) has a low pH, deterring bacterial growth.
MacConkey agar is for Gram-negative bacteria.
Hektoen enteric agar is selective for Gram-negative bacteria.
HIS-selective medium is a type of cell culture medium that lacks the amino acid histidine.
Mannitol salt agar is selective for gram-positive bacteria and differential for mannitol.
Xylose lysine deoxycholate is selective for Gram-negative bacteria.
Buffered charcoal yeast extract agar is selective for certain gram-negative bacteria, especially Legionella pneumophila.
Baird-Parker agar is for gram-positive staphylococci.
Sabouraud agar is selective to certain fungi due to its low pH (5.6) and high glucose concentration (3–4%).
DRBC (dichloran rose bengal chloramphenicol agar) is a selective medium for the enumeration of moulds and yeasts in foods. Dichloran and rose bengal restrict the growth of mould colonies, preventing overgrowth of luxuriant species and assisting accurate counting of colonies.
MMN (Modified Melin-Norkrans) medium and BAF medium are used for ectomycorrhizal fungi.
Columbia Nalidixic Acid (CNA) agar contains antibiotics (nalidixic acid and colistin) that inhibit Gram-negative organisms, aiding in the selective isolation of Gram-positive bacteria.
Differential media
Differential or indicator media distinguish one microorganism type from another growing on the same medium. This type of media uses the biochemical characteristics of a microorganism growing in the presence of specific nutrients or indicators (such as neutral red, phenol red, eosin y, or methylene blue) added to the medium to visibly indicate the defining characteristics of a microorganism. These media are used for the detection of microorganisms and by molecular biologists to detect recombinant strains of bacteria.
Examples of differential media:
Blood agar (used in strep tests) contains bovine heart blood that becomes transparent in the presence of β-hemolytic organisms such as Streptococcus pyogenes and Staphylococcus aureus.
Eosin methylene blue is differential for lactose fermentation.
Granada medium is selective and differential for Streptococcus agalactiae (group B streptococcus) which grows as distinctive red colonies in this medium.
MacConkey agar is differential for lactose fermentation.
Mannitol salt agar is differential for mannitol fermentation.
X-gal plates are differential for lac operon mutants.
Transport media
Transport media should fulfill these criteria:
Provide temporary storage of specimens being transported to the laboratory for cultivation
Maintain the viability of all organisms in the specimen without altering their concentration
Contain only buffers and salt
Lack carbon, nitrogen, and organic growth factors, so as to prevent microbial multiplication
Transport media used in the isolation of anaerobes must be free of molecular oxygen.
Examples of transport media:
Thioglycolate broth is for strict anaerobes.
Stuart transport medium is a non-nutrient soft agar gel containing a reducing agent to prevent oxidation, and charcoal to neutralize certain bacterial inhibitors; it is used for gonococci.
Buffered glycerol saline is used for enteric bacilli.
Venkataraman Ramakrishna (VR) medium is used for V. cholerae.
Enriched media
Enriched media contain the nutrients required to support the growth of a wide variety of organisms, including some of the more fastidious ones. They are commonly used to harvest as many different types of microbes as are present in the specimen. Blood agar is an enriched medium in which nutritionally rich whole blood supplements the basic nutrients. Chocolate agar is enriched with heat-treated blood, which turns brown and gives the medium the color for which it is named.
Physiological relevance
The choice of culture medium might affect the physiological relevance of findings from tissue culture experiments, especially for metabolic studies. In addition, the dependence of a cell line on a metabolic gene was shown to be affected by the media type. When performing a study involving several cell lines, using a uniform culture medium for all cell lines might reduce bias in the generated datasets. Using a growth medium that better represents the physiological levels of nutrients can improve the physiological relevance of in vitro studies; media types such as Plasmax and human plasma-like medium (HPLM) were recently developed for this purpose.
Culture medium for mammalian cells
The selection of cell culture medium is crucial for efficient mammalian cell culture, significantly affecting cell growth, productivity, and consistency across batches. In protein expression, the choice of media can also influence the therapeutic characteristics of produced proteins through processes like glycosylation. Different types of media, such as serum-containing, serum-free, protein-free, and chemically defined media, have distinct benefits and drawbacks.
Serum-containing media are rich in growth factors but can lead to variability and contamination issues. Fetal bovine serum (FBS) is commonly used due to its high capacity to support cell growth, although it poses biosafety concerns due to its inconsistent composition. In contrast, serum-free media (SFM) offer standardized formulations that enhance reliability and reduce contamination risks. They are designed to include essential nutrients like amino acids, vitamins, and glucose, but can sometimes provide weaker growth performance compared to serum-containing alternatives. The development of protein-free and chemically defined media is aimed at achieving greater consistency and control in cell culture processes.
Ultimately, the composition of the culture medium directly impacts cell viability and productivity, making the careful selection and design of culture media essential for successful mammalian cell culture.
See also
Cell culture
Impedance microbiology
Modified Chee's medium
References
External links
"The Nutrient Requirements of Cells"
Growth media
Microbiological media | Growth medium | Biology | 2,645 |
19,989,669 | https://en.wikipedia.org/wiki/Cypress%20forest | A Cypress forest is a western United States plant association typically dominated by one or more cypress species. Example species comprising the canopy include Cupressus macrocarpa. In some cases these forests have been severely damaged by goats, cattle and other grazing animals. While cypress species are clearly dominant within a Cypress forest, other trees such as California Buckeye, Aesculus californica, are found in some Cypress forests.
Examples
The Guadalupe Island Cypress Forest is situated on Guadalupe Island, offshore from Baja California. This forest of Hesperocyparis guadalupensis trees was devastated by introduced goats, but conservation biology efforts have been conducted to assist in restoring the forest.
Another example, on the Pacific Coast mainland of Northern California, is the Sargent's cypress forest located in coastal Marin County, California.
See also
Pygmy forest
Line notes
References
Marlene A. Rodriguez-Malagon, Alejandro Hinojosa Corona, Alfonso Aguirre-Munoz, and Cesar Garcia Gutierrez, The Guadalupe Island Forest Recovery Track
C. Michael Hogan (2008) Aesculus californica, Globaltwitcher.com, ed. N. Strömberg
Sargent Forest of Marin County, California
USGS Bolinas Quadrangle Map, U.S. Government Printing Office, Washington, DC
Forests | Cypress forest | Biology | 260 |
39,402,658 | https://en.wikipedia.org/wiki/Rio%20Blanco%20%28Colorado%29 | Rio Blanco is a stream that is a tributary of the San Juan River in southern Colorado, United States. The stream originates in the San Juan Mountains and flows through the San Juan National Forest and private lands to its confluence with the San Juan River in Archuleta County, Colorado. Colorado classifies the Rio Blanco as an Aquatic Life Coldwater Class 1/Recreation Class 1 waterway supporting water supply and agricultural uses. The river also offers fishing for native cutthroat trout and introduced rainbow trout.
Restoration project
The San Juan–Chama Diversion Project, begun in 1971, diverted water from the river at the Blanco Diversion Dam, eventually leading to reduced water flow and a loss of aquatic habitat in the Rio Blanco. In 1998, Colorado included a 12-mile reach of the lower Rio Blanco on its list of impaired waters for failure to support its aquatic life due to sediment. A grass-roots project, organized by residents and aided by numerous local, statewide, and federal agencies, restored the stream channel to match the altered flow regime, improving the river's physical and biological function and water quality. In 2008, Colorado removed the Rio Blanco, including the 12-mile lower reach, from the state's list of impaired waters.
Fishing
Parts of the river vary in width, and fishing is permitted along the side creeks, including Rito Blanco. Most of the trout fall within a similar size range.
A three-mile stretch of the river was transformed by hydrologist Dave Rosgen from being wide and shallow to a deep flow that now supports a variety of trout. Rosgen's plan included placing boulders and old trees on the river banks to direct the waters toward a more defined channel. He also constructed a tube to divert the gravel and sand: water flows through, but the sediment is routed to a holding area that is regularly emptied and used elsewhere. The cost of the renovation was about $1 million. The three-mile stretch may be fished by guests of El Rancho Pinoso, a privately owned ranch near the river.
See also
List of rivers of Colorado
Stream restoration
Restoration ecology
References
Blanco
Blanco
Blanco
Fish of North America
Colorado River Storage Project
Aquatic ecology | Rio Blanco (Colorado) | Engineering,Biology | 440 |
14,597,968 | https://en.wikipedia.org/wiki/Sound%20on%20tape | SOT is an acronym for the phrase sound on tape. It refers to any audio recorded on analog or digital video formats. It is used in scriptwriting for television productions and filmmaking to indicate the portions of the production that will use room tone or other audio from the time of recording, as opposed to audio recorded later (studio voice-over, Foley, etc.).
In broadcast journalism, SOT is generally considered to be audio captured from an individual who is on camera, such as an interviewee; it may also be referred to as a soundbite.
See also
Filmmaking
MOS (filmmaking)
Sound-on-film
External links
United States Department of State
http://www.nvm.org.au/General%20Articles.htm#WHAT_IS_NAT_SOT
References
Audio engineering
Television terminology | Sound on tape | Engineering | 170 |
36,677,584 | https://en.wikipedia.org/wiki/Effects%20of%20sleep%20deprivation%20in%20space | Studies, which include laboratory investigations (Category I) and field evaluations (Category II and Category III) of population groups that are analogous to astronauts (e.g., medical and aviation personnel), provide compelling evidence that working long shifts for extended periods of time contributes to sleep deprivation and can cause performance decrements, health problems, and other detrimental consequences, including accidents, that can affect both the worker and others.
Performance errors relative to sleep loss and extended wakefulness
A meta-analysis (Category I) that was conducted by Pilcher and Huffcutt examined data that were drawn from 19 research studies to characterize the effects of sleep deprivation on specific types of human performance. Motor skills, cognitive skills, and mood were assessed in terms of: partial sleep deprivation (also known as sleep restriction), which is defined as fewer than 5 hours of sleep in a 24-hour period for 1 or more days; short-term total sleep deprivation (no sleep attained for fewer than 45 hours); and long-term sleep deprivation (no sleep attained for a period in excess of 45 hours). These researchers found that sleep-deprived subjects performed considerably worse on motor tasks, cognitive tasks, and measures of mood than did non-sleep-deprived subjects. The greatest impact on cognitive performance was seen from multiple days of partial sleep deprivation, although short- and long-term sleep deprivation also showed an effect. Meta-analyses of sleep deprivation effects in medical residents found deficits in both laboratory tasks and clinical tasks.
The magnitude of chronic partial sleep loss experienced by astronauts in flight has been reported to negatively impact cognitive performance in multiple Category I, Category II, and Category III laboratory and field studies. Performance can be affected whether sleep loss takes the form of a night of substantially reduced sleep, a night of total sleep deprivation, or a series of less drastic but more chronic restricted sleep hours. A 1997 study by Dinges et al. revealed that when sleep is restricted to the level that is commonly experienced by astronauts, a "sleep debt" accrues and, in less than 1 week, performance deficits during waking hours reach levels of serious impairment.
Chronic reduction of sleep can impact performance in a manner that is similar to that of total sleep deprivation. A study by Van Dongen et al., which used 48 subjects, evaluated the specific performance effects of chronic sleep restriction in comparison to the effects of 3 nights of total sleep deprivation. Sleep restriction conditions included 14 consecutive nights of 8, 6, or 4 hours of sleep opportunity, with actual sleep quantity validated by polysomnography recordings. Subjects who were subjected to sleep restriction conditions underwent neurobehavioral assessments every 2 hours during their scheduled wakefulness, while subjects who were subjected to the sleep deprivation condition were tested every 2 hours throughout their total 88 hours of sleep deprivation.
The neurobehavioral assessment battery that was used in the Van Dongen et al. study included the psychomotor vigilance task (PVT). The PVT - which determines alertness and the effects of fatigue on cognitive performance (as determined by lapses in response time and accuracy of responses) by measuring the speed with which subjects respond to a visual or auditory stimulus (by pressing a response button) - has become a standard laboratory tool for the assessment of sustained performance in a variety of experimental conditions. The PVT detects changes in basic neurobehavioral performance that involve vigilant attention, response speed, and impulsivity; it has been extensively validated in ground-based laboratory studies to detect cognitive deficits that are caused by a variety of factors (e.g., restricted sleep, sleep/wake shifts, motion sickness, residual sedation from sleep medications). The PVT is an optimal tool for repeated use, in contrast to some other cognitive measures, as studies have shown minimal learning effects and aptitude differences when using it.
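Scoring a PVT session is simple, which is part of why the task suits repeated field use. The sketch below is a generic illustration of the usual scoring, not NASA's implementation; the 500 ms lapse cutoff is a convention from the sleep literature, assumed here rather than taken from this text.

```python
def pvt_summary(reaction_times_ms, lapse_threshold_ms=500):
    """Summarize one PVT session: mean reaction time and lapse count.

    A "lapse" is conventionally a response slower than 500 ms;
    that threshold is an assumption, not specified in this text.
    """
    lapses = sum(1 for rt in reaction_times_ms if rt >= lapse_threshold_ms)
    mean_rt = sum(reaction_times_ms) / len(reaction_times_ms)
    return {"trials": len(reaction_times_ms),
            "mean_rt_ms": round(mean_rt, 1),
            "lapses": lapses}

# Example: a rested subject versus a sleep-restricted one.
print(pvt_summary([250, 310, 280, 265, 240]))   # few slow responses
print(pvt_summary([320, 540, 610, 450, 900]))   # several lapses
```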
Results from these laboratory studies indicate that multiple consecutive sleep episodes of 4 or 6 hours significantly erode performance on the PVT and on measures of working memory, and that performance under these two conditions (i.e., 4 or 6 hours) was comparable to the performance that is found under conditions of 1 to 2 days of total sleep deprivation. Surprisingly, by the end of the 14 days of sleep restriction, subjects in the 4- and 6-hour sleep period conditions reported feeling only slightly sleepy. As these reports were taken when performance was at its lowest level, this indicates that the subjects may no longer have been aware of their performance deficits because of inadequate recovery sleep (figure 3–2).
Subjects who spent 4 hours in bed reached levels of impairment at 6 days and of severe impairment at 11 days. Subjects who spent 6 hours in bed reached levels of impairment at 7 days. It appears that subjects who spent 8 hours in bed approached levels of impairment. Figure 3-3, which is from Belenky et al., however, demonstrates that subjects who spent 9 hours in bed did not approach these levels of impairment, indicating that 9 hours in bed may be needed to alleviate the risk of performance errors.
Similar performance effects resulting from chronically restricted sleep can also be seen in the Category I study by Belenky et al. and in figure 3-3. This study involved 66 subjects who were observed in four conditions (i.e., 3, 5, 7, and 9 hours in bed) for 7 days. PVT testing showed severe impairments in reaction time under the 3-hour condition, with lapses in responses increasing steadily across the 7 days of sleep restriction. Subjects who spent 3 hours in bed reached levels of severe impairment at 5 days, while subjects who spent 5 hours in bed reached levels of impairment at 4 days.
These Category I laboratory studies by Van Dongen et al. and Belenky et al. clearly show that subjects suffered performance impairments resulting from total sleep deprivation and/or chronic sleep restriction.
Cognitive impairments are present even after an individual has been awake for approximately 17 hours; in fact, recent studies have shown that these decrements are similar to those that result from an elevated blood alcohol level. A compelling Category I laboratory study by Williamson and Feyer used a cross-over randomized control design to compare cognitive and motor performance after minor sleep deprivation with performance after alcohol consumption. All subjects participated in both alcohol consumption and sleep deprivation, and the order of testing was counterbalanced so that half of the subjects participated in the alcohol consumption part first while the other half participated in the sleep deprivation part first. To avoid carry-over effects from one condition to the next, subjects were provided with a night of rest in a motel between each condition.
Results indicate that, on average, performance with a blood alcohol level of 0.05% remained equivalent to performance after being awake for 16.9 to 18.6 hours. Performance with a blood alcohol level of 0.1% was equivalent to performance after being awake for 17.7 to 19.7 hours, or to restricted sleep of 4 to 5 hours per night for 1 week. Similar studies that compare performance after a time of sleep deprivation to performance with elevated blood alcohol level have confirmed these results. These findings are compelling as the duration of wakefulness (17 hours), which results in decrements that are similar to those that are induced by a 0.05% blood alcohol level, is considered by many to be within the range of a "normal" waking "day"; many individuals can recall an incident in which they had to waken early in the morning and work all day into the night. Astronauts, who sleep on average of 6 hours per night, may be performing critical tasks 17 hours or more after wakening.
Performance errors relative to sleep desynchronization and work overload
Research suggests that circadian desynchronization and work overload may also impair performance. Specifically, a controlled laboratory study by Wright et al. evaluated the relationship between circadian rhythms and performance by assessing body temperature, which is regulated by the circadian mechanisms of the body. Body temperature is at its highest near the circadian peak and lowest near the circadian minimum (this is when the body is driven to sleep). It has long been recognized that a positive relationship exists between daily rhythms of the body temperature and neurobehavioral performance and alertness in humans.
The study protocol forced circadian desynchronization for 12 consecutive 28-hour days; participants were allowed 9.3 hours of scheduled time in bed and 18.7 hours of scheduled wakefulness. Performance on validated measures was evaluated every 2 hours, beginning 2 hours after the scheduled wake time. The protocol, therefore, assessed performance when the body is normally driven to sleep (which is related to the point at which body temperature is at its lowest) relative to performance during normal waking hours, and allowed for assessment of the effects of body temperature independent of (and associated with) sleep hours and time of day. During the circadian peak (when body temperature is high), performance and alertness are high; conversely, near the circadian phase of low body temperature, performance and alertness are low. These results have been replicated in other forced desynchrony and extended wakefulness laboratory protocols.
Results from these laboratory protocols can be extrapolated to field conditions. Studies in the medical industry, where highly educated and trained individuals (e.g., physicians) are subject to circadian shifting and extended work shifts in addition to sleep loss, further demonstrate serious performance errors in populations that are analogous to astronauts. In a two-session, within-subject, Category II experiment that was conducted by Arnedt et al., the performance of 34 medical interns was observed under four conditions:
after 4 weeks of a light rotation (averaging 44 hours of rotations/week)
after 4 weeks of a heavy rotation (averaging 80 hours of rotations/week)
after 4 weeks of a heavy rotation with a 0.05% blood alcohol level
after 4 weeks of a light rotation with a 0.05% blood alcohol level
Performance measures included the PVT and a simulated driving task. Findings of the Arnedt et al. experiment indicate that performance impairment after a heavy-call rotation is comparable to the impairment that is associated with a combined 0.04% to 0.05% blood alcohol level and a light-call rotation. Results of this experiment demonstrate that decrements that are created by extended work shifts are similar to the decrements that are created by elevated blood alcohol levels.
Work hours and sleep loss were shown to impact performance in a Category III evaluation by Rogers et al. A total of 393 registered nurses logged scheduled hours worked, actual hours worked, time of day worked, overtime, days off, and sleep/wake patterns. Questions concerning errors and near-errors were also included. Analysis showed that work duration, overtime, and number of hours worked per week significantly affected the number of errors. The likelihood of making an error increased with longer work hours, and was three times higher when the nurses worked shifts lasting 12.5 hours or more. Working overtime increased the odds of making at least one error, regardless of the originally scheduled length of the shift. Working more than 40 and more than 50 hours per week significantly increased the risk of making an error.
Similar findings were attained in a subsequent Category III evaluation of 2,737 medical interns. A Web-based survey was conducted across the U.S. in which interns completed 17,003 confidential monthly reports. These 60-item reports contained information concerning work hours, sleep, and activities during the month, number of days off, and the number of extended-duration work shifts (defined as at least 24 hours of continuous work). These interns were also asked to report whether they had made significant fatigue-related or non-fatigue-related medical errors. Other questions assessed how often they had nodded off or fallen asleep during patient care or educational activities.
Analysis revealed a significant relationship between the number of extended-duration work shifts and the reported rates of fatigue-related noteworthy medical errors. Specifically, the number of reported fatigue-related medical errors increased as the number of extended-duration shifts per month increased. At least one fatigue-related significant medical error was reported in 3.8% of months with no extended-duration work shifts; and at least one fatigue-related significant medical error was reported in 9.8% of months that had between one and four extended-duration work shifts and in 16% of months that had five or more extended-duration work shifts. Furthermore, the frequency of attentional failures was strongly associated with the frequency of extended-duration work shifts. Evidence from this study further corroborates the negative impact that extended-duration work shifts may have on performance, as well as increased accidents and injuries.
Working extended hours or overnight shifts also poses the added difficulty of requiring performance from an individual at a time when the body is driven to sleep by the circadian system. Sleep, alertness, and cognitive functioning are determined by the interaction of two processes: the endogenous circadian pacemaker and the homeostatic drive for sleep. The endogenous circadian pacemaker generates the 24-hour circadian rhythm that regulates subjective alertness and sleep propensity as well as core body temperature, cognitive functions, and melatonin secretion, as described above. It is also highly sensitive to light, which is its primary synchronizer. Misalignment of the circadian rhythm results in disturbed sleep, impaired performance and alertness, waking-hour melatonin secretion, and reduced levels of nocturnal secretion of growth hormone. The outcome, therefore, can range from performance error to long-term health decrements.
Individuals who work at night and attempt to sleep during the day suffer because the timing of their sleep/wake schedule remains out of phase with the timing of the environmental light. Night workers are particularly prone to vehicle accidents, and their decreased alertness, performance, and vigilance are likely to blame for a higher rate of industrial accidents, quality control errors on the job, and injuries, as well as a general decline in work productivity. Recent information also suggests that, as the body normally releases melatonin when it is dark, working under artificial light at night suppresses the release of melatonin, which may increase the risk of developing cancer.
In summation, ground-based evidence demonstrates that sleep loss, circadian desynchronization, and extended work shifts lead to increased performance errors and accidents. The extent to which these risk factors are also present in the space flight environment is therefore an important consideration.
See also
Effects of sleep deprivation on cognitive performance
Shift work sleep disorder
Sleep deprivation
Skylab 4
References
Space medicine
Sleep medicine
Spaceflight health effects | Effects of sleep deprivation in space | Biology | 3,006 |
25,815,583 | https://en.wikipedia.org/wiki/Worley%20noise | Worley noise, also called Voronoi noise and cellular noise, is a noise function introduced by Steven Worley in 1996. Worley noise is an extension of the Voronoi diagram that outputs, at a given coordinate, a real value corresponding to the distance to the nth-nearest seed (usually n = 1), where the seeds are distributed evenly through the region. Worley noise is used to create procedural textures.
Worley noise of Euclidean distance is differentiable and continuous everywhere except on the edges of the Voronoi diagram of the set of seeds and on the location of the seeds.
Basic algorithm
The algorithm chooses random points in space (2- or 3-dimensional) and then for every location in space takes the distances Fn to the nth-closest point (e.g. the second-closest point) and uses combinations of those to control color information (note that Fn+1 > Fn). More precisely:
Randomly distribute feature points in space, organized as grid cells. In practice this is done on the fly, without storage (as a procedural noise). The original method considered a variable number of seed points per cell so as to mimic a Poisson disc distribution, but many implementations just put one; this is an optimization that limits the number of terms that must be compared.
At run time, extract the distances Fn from the given location to the nearby seed points. For F1, it is only necessary to examine the seed locations in the grid cell being sampled and in the grid cells adjacent to it.
Noise W(x) is formally the vector of distances, plus possibly the corresponding seed ids, user-combined so as to produce a color.
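The F1 case can be sketched in a few lines. The following Python sketch is illustrative rather than Worley's original implementation: the function name and the hash-based per-cell seeding are assumptions, and it places a fixed number of feature points per cell rather than mimicking a Poisson disc.

```python
import math
import random

def worley_f1(x, y, seed=0, points_per_cell=1):
    """F1 (closest-feature-point) Worley noise in 2D -- a minimal sketch.

    Feature points are generated on the fly per grid cell from a
    deterministic hash, so nothing is stored; only the sampled cell
    and its 8 neighbours are searched, as described above.
    """
    cx, cy = math.floor(x), math.floor(y)
    best = float("inf")
    for i in range(cx - 1, cx + 2):
        for j in range(cy - 1, cy + 2):
            # Reproducible per-cell RNG (hashing a tuple of ints is deterministic).
            rng = random.Random(hash((i, j, seed)))
            for _ in range(points_per_cell):
                px, py = i + rng.random(), j + rng.random()
                best = min(best, math.hypot(x - px, y - py))
    return best

# Example: sample a coarse 4x4 patch of the noise field.
print([[round(worley_f1(x * 0.5, y * 0.5), 2) for x in range(4)] for y in range(4)])
```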
See also
Fractal
Voronoi diagram
Perlin noise
Simplex noise
References
Further reading
External links
Detailed description on how to implement cell noise
A version with the color plates appended at the end
Noise (graphics)
Special effects
Fractals
Computer graphic techniques | Worley noise | Mathematics,Technology | 392 |
33,334,030 | https://en.wikipedia.org/wiki/Duocarmycin | The duocarmycins are members of a series of related natural products first isolated from Streptomyces bacteria in 1978. They are notable for their extreme cytotoxicity and thus represent a class of exceptionally potent antitumour antibiotics.
Biological activity
As small-molecule, synthetic, DNA minor-groove-binding alkylating agents, duocarmycins are suitable for targeting solid tumors. They bind to the minor groove of DNA and alkylate the nucleobase adenine at the N3 position. The irreversible alkylation of DNA disrupts the nucleic acid architecture, which eventually leads to tumor cell death. Analogues of naturally occurring antitumour agents, such as duocarmycins, represent a new class of highly potent antineoplastic compounds.
The work of Dale L. Boger and others created a better understanding of the pharmacophore and mechanism of action of the duocarmycins. This research has led to synthetic analogs, including adozelesin, bizelesin, and carzelesin, which progressed into clinical trials for the treatment of cancer. Related research, which Boger used for comparison with his results on the elimination of cancerous tumors and antigens, centered on similar immunoconjugates introduced into cancerous colon cells; these studies bore on the antigen specificity that is necessary for the success of the duocarmycins as antitumor treatments.
Duocarmycin analogues vs tubulin binders
The duocarmycins have shown activity in a variety of multi-drug resistant (MDR) models. Agents in this class have potency in the low picomolar range, which makes them suitable for maximizing the cell-killing potency of the antibody-drug conjugates to which they are attached.
Duocarmycins
Antibody-drug conjugates
DNA-modifying agents such as duocarmycin are being used in the development of antibody-drug conjugates (ADCs). Scientists at the Netherlands-based Byondis (formerly Synthon) have combined unique linkers with duocarmycin derivatives that have a hydroxyl group crucial for biological activity. Using this technology, scientists aim to create ADCs with an optimal therapeutic window, balancing the effect of potent cell-killing agents on tumor cells versus healthy cells.
Synthetic analogs
The synthetic analogs of duocarmycins include adozelesin, bizelesin, and carzelesin. As members of the cyclopropylpyrroloindole family, these investigational drugs have progressed into clinical trials for the treatment of cancer.
Bizelesin
Bizelesin is an antineoplastic antibiotic that binds to the minor groove of DNA and induces interstrand cross-linking of DNA, thereby inhibiting DNA replication and RNA synthesis. Bizelesin also enhances p53 and p21 induction and triggers G2/M cell-cycle arrest, resulting in cell senescence without apoptosis.
References
Experimental cancer drugs
Alkylating antineoplastic agents
Alkaloids
Antineoplastic drugs
Biotechnology
Chemotherapeutic adjuvants
Antibiotics | Duocarmycin | Chemistry,Biology | 675 |
3,708,953 | https://en.wikipedia.org/wiki/OMI%20cryptograph | The OMI cryptograph was a rotor cipher machine produced and sold by Italian firm Ottico Meccanica Italiana (OMI) in Rome.
The machine had seven rotors, including a reflecting rotor. The rotors stepped regularly. Each rotor could be assembled from two sections with different wiring: one section consisted of a "frame" containing ratchet notches, as well as some wiring, while the other section consisted of a "slug" with a separate wiring. The slug section fitted into the frame section, and different slugs and frames could be interchanged with each other. As a consequence, there were many permutations for the rotor selection.
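The practical consequence of interchangeable frames and slugs is a combinatorial explosion in the number of possible rotor configurations. The back-of-the-envelope sketch below illustrates the growth; the inventory sizes and the choice model are pure assumptions, since the sources do not state how many frames or slugs were supplied with a machine.

```python
from math import perm

# Assumed inventory (illustrative only; not documented in the sources).
frames, slugs, positions = 10, 10, 7

# Model: each of the seven rotor positions takes one frame and one slug,
# chosen without repetition and in a significant order.
configurations = perm(frames, positions) * perm(slugs, positions)
print(f"about {configurations:.2e} rotor configurations")  # ~3.66e+11 under these assumptions
```

This count ignores further degrees of freedom such as initial rotor positions, so the true keyspace would be larger still.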
The machine was offered for sale during the 1960s.
References
Cipher A. Deavours and Louis Kruh, "Machine Cryptography and Modern Cryptanalysis", Artech House, 1985, pp. 146–147
F. L. Bauer, Decrypted Secrets, 2nd edition, Springer-Verlag, 2000, , pp. 112,136.
Cryptographic hardware
Rotor machines | OMI cryptograph | Physics,Technology | 212 |
2,045,565 | https://en.wikipedia.org/wiki/Cataclysmic%20pole%20shift%20hypothesis | The cataclysmic pole shift hypothesis is a pseudo-scientific claim that there have been recent, geologically rapid shifts in the axis of rotation of Earth, causing calamities such as floods and tectonic events or relatively rapid climate changes.
There is evidence of precession and changes in axial tilt, but this change is on much longer time-scales and does not involve relative motion of the spin axis with respect to the planet. However, in what is known as true polar wander, the Earth rotates with respect to a fixed spin axis. Research shows that during the last 200 million years a total true polar wander of some 30° has occurred, but that no rapid shifts in Earth's geographic axial pole were found during this period. A characteristic rate of true polar wander is 1° or less per million years. Between approximately 790 and 810 million years ago, when the supercontinent Rodinia existed, two geologically rapid phases of true polar wander may have occurred. In each of these, the magnetic poles of Earth shifted by approximately 55° due to a large shift in the crust.
Definition and clarification
The geographic poles are defined by the points on the surface of Earth that are intersected by the axis of rotation. The pole shift hypothesis describes a change in location of these poles with respect to the underlying surface – a phenomenon distinct from the changes in axial orientation with respect to the plane of the ecliptic that are caused by precession and nutation – and amounts to an amplified version of true polar wander: geologically, a shift of the surface separate from a shift of the planet as a whole, enabled by Earth's molten core.
Pole shift hypotheses are not connected with plate tectonics, the well-accepted geological theory that Earth's surface consists of solid plates which shift over a viscous, or semifluid asthenosphere; nor with continental drift, the corollary to plate tectonics which maintains that locations of the continents have moved slowly over the surface of Earth, resulting in the gradual emerging and breakup of continents and oceans over hundreds of millions of years.
Pole shift hypotheses are not the same as geomagnetic reversal, the occasional reversal of Earth's magnetic field (effectively switching the north and south magnetic poles).
Speculative history
In popular literature, many conjectures have been suggested involving very rapid polar shift. A slow shift in the poles would produce only minor alterations and no destruction. A more dramatic view assumes more rapid changes, with dramatic alterations of geography and localized areas of destruction due to earthquakes and tsunamis.
Early proponents
An early mention of a shifting of Earth's axis can be found in an 1872 article entitled "Chronologie historique des Mexicains" by Charles Étienne Brasseur de Bourbourg, a specialist in Mesoamerican codices who interpreted ancient Mexican myths as evidence for four periods of global cataclysms that had begun around 10,500 BCE.
In the 1930s and 40s, American Edgar Cayce's prophecies predicted cataclysms he called "Earth changes"; Ruth Montgomery would later cite Cayce's prophecies to support her polar shift theories.
In 1948, Hugh Auchincloss Brown, an electrical engineer, advanced a hypothesis of catastrophic pole shift. Brown also argued that accumulation of ice at the poles caused recurring tipping of the axis, identifying cycles of approximately seven millennia.
In his pseudo-scientific 1950 work Worlds in Collision, Immanuel Velikovsky postulated that the planet Venus emerged from Jupiter as a comet. During two proposed near-approaches in about 1450 BCE, he suggested that the direction of Earth's rotation was changed radically, then reverted to its original direction on the next pass. This disruption supposedly caused earthquakes, tsunamis, and the parting of the Red Sea. Further, he said near misses by Mars between 776 and 687 BCE also caused Earth's axis to change back and forth by ten degrees. Velikovsky cited historical records in support of his work, although his studies were generally ridiculed by the scientific community.
Recent conjectures
Several authors have offered pseudoscientific arguments for the hypothesis, including journalist and New Age enthusiast Ruth Shick Montgomery. Skeptics counter that these works combine speculation, the work of psychics, and modern folklore, while largely avoiding any effort at basic science by trying to disprove their own hypothesis.
Earth crustal displacement hypothesis
Charles Hapgood is now perhaps the best remembered early proponent of the hypothesis that some climate changes and ice ages could be explained by large sudden shifts of the geographic poles. In his books The Earth's Shifting Crust (1958) (which includes a foreword by Albert Einstein) and Path of the Pole (1970), Hapgood speculated that accumulated polar ice mass destabilizes Earth's rotation, causing crustal displacement but not disturbing Earth's axial orientation. Hapgood argued that shifts (of no more than 40 degrees) occurred about every 5,000 years, interrupting 20,000- to 30,000-year periods of polar stability. He cited recent North Pole locations in Hudson Bay (60°N, 73°W), the Atlantic Ocean between Iceland and Norway (72°N, 10°E) and the Yukon (63°N, 135°W). However, in his subsequent work The Path of the Pole, Hapgood conceded Einstein's point that the weight of the polar ice is insufficient to cause polar shift. Instead, Hapgood argued that causative forces must be located below the surface. Hapgood encouraged Canadian librarian Rand Flem-Ath to pursue scientific evidence backing Hapgood's claims. Flem-Ath published the results of this work in 1995 in When the Sky Fell co-written with his wife Rose.
In popular culture
The idea of earth crust displacement is featured in 2012, a 2009 film based on the 2012 phenomenon.
Scientific research
While there are reputable studies showing that true polar wander has occurred at various times in the past, the rates are much smaller (1° per million years or slower) than predicted by the pole shift hypothesis (up to 1° per thousand years). Analysis of the evidence does not lend credence to Hapgood's hypothesized rapid displacement of layers of Earth.
Data indicates that the geographical poles have not deviated by more than about 5° over the last 130 million years, contradicting the hypothesis of a cataclysmic polar wander event.
More rapid past possible occurrences of true polar wander have been measured: from 790 to 810 million years ago, true polar wander of approximately 55° may have occurred twice.
See also
Dzhanibekov effect
Large low-shear-velocity provinces
Low-velocity zone
Ultra low velocity zone
Inner core super-rotation
Intermediate axis theorem
Global catastrophic risk
Earth Changes
North Magnetic Pole
South Magnetic Pole
Tollmann's bolide hypothesis
The Nibiru cataclysm, another fringe hypothesis that has often been suggested as a cause for cataclysmic pole shifts
References
External links
Alleged "Evidence" of Earth Crustal Displacement (Pole Shift)Analysis of specific evidence used to argue for geologically recent Pole Shift
Fingerprints of the Gods (1995) by Graham Hancock, an analysis of arguments made for a Late Pleistocene Pole Shift, based on the ideas of Rand Flem-Ath
"The Day the Earth Fell Over" at LiveScience
Charting Imaginary Worlds: Pole Shifts, Ice Sheets, and Ancient Sea Kings
Minds in Ablation Part Five Addendum: Living in Imaginary Worlds More about interpreting ancient maps and ideas of Charles Hapgood.
The Kerplop! Theory: Acme Instant Ice-Sheet Kit (Some Assembly Required)
"How to Escape Nibiru", podcast by Brian Dunning
Geodesy
Pole shift theory and theorists
Pseudoscience
Doomsday scenarios | Cataclysmic pole shift hypothesis | Mathematics | 1,614 |
203,082 | https://en.wikipedia.org/wiki/Temperate%20coniferous%20forest | Temperate coniferous forest is a terrestrial biome defined by the World Wide Fund for Nature. Temperate coniferous forests are found predominantly in areas with warm summers and cool winters, and vary in their kinds of plant life. In some, needleleaf trees dominate, while others are home primarily to broadleaf evergreen trees or a mix of both tree types. A separate habitat type, the tropical coniferous forests, occurs in more tropical climates.
Temperate coniferous forests are common in the coastal areas of regions that have mild winters and heavy rainfall, or inland in drier climates or montane areas. Many species of trees inhabit these forests including pine, cedar, fir, and redwood. The understory also contains a wide variety of herbaceous and shrub species. Temperate coniferous forests sustain the highest levels of biomass in any terrestrial ecosystem and are notable for trees of massive proportions in temperate rainforest regions.
Structurally, these forests are rather simple, generally consisting of two layers: an overstory and an understory. However, some forests may support a layer of shrubs. Pine forests support an herbaceous ground layer that may be dominated by grasses and forbs that lend themselves to ecologically important wildfires. In contrast, the moist conditions found in temperate rain forests favor dominance by ferns and some forbs.
Forest communities dominated by huge trees (e.g., giant sequoia, Sequoiadendron giganteum; redwood, Sequoia sempervirens) are unusual ecological phenomena. They occur in western North America and southwestern South America, as well as in the Australasian region, in such areas as southeastern Australia and northern New Zealand.
The Klamath-Siskiyou ecoregion of western North America harbors diverse and unusual assemblages and displays notable endemism for a number of plant and animal taxa.
Ecoregions
Eurasia
North America
See also
Cedar hemlock douglas-fir forest
Temperate deciduous forest
References
External links
Temperate forest
Terrestrial biomes
Conifers
Forests
Biomes
Montane forests | Temperate coniferous forest | Biology | 404 |
12,387,824 | https://en.wikipedia.org/wiki/C6H8O | The molecular formula C6H8O (molar mass: 96.13 g/mol, exact mass: 96.05751 u) may refer to:
Cyclohexenone
2,5-Dimethylfuran
2,3-Dimethylfuran
2,4-Dimethylfuran
3,4-Dimethylfuran | C6H8O | Chemistry | 92 |
2,070,411 | https://en.wikipedia.org/wiki/Rabinovich%E2%80%93Fabrikant%20equations | The Rabinovich–Fabrikant equations are a set of three coupled ordinary differential equations exhibiting chaotic behaviour for certain values of the parameters. They are named after Mikhail Rabinovich and Anatoly Fabrikant, who described them in 1979.
System description
The equations are:

$$\dot{x} = y\,(z - 1 + x^{2}) + \gamma x$$
$$\dot{y} = x\,(3z + 1 - x^{2}) + \gamma y$$
$$\dot{z} = -2z\,(\alpha + xy)$$

where α, γ are constants that control the evolution of the system. For some values of α and γ, the system is chaotic, but for others it tends to a stable periodic orbit.
Danca and Chen note that the Rabinovich–Fabrikant system is difficult to analyse (due to the presence of quadratic and cubic terms) and that different attractors can be obtained for the same parameters by using different step sizes in the integration, so two solvers given the same parameter values and initial conditions can produce different solutions. Also, a hidden attractor was recently discovered in the Rabinovich–Fabrikant system.
Equilibrium points
The Rabinovich–Fabrikant system has five hyperbolic equilibrium points: one at the origin, and four whose coordinates depend on the system parameters α and γ. These four non-trivial equilibrium points exist only for certain values of α and γ > 0.
γ = 0.87, α = 1.1
An example of chaotic behaviour is obtained for γ = 0.87 and α = 1.1 with initial conditions of (−1, 0, 0.5). The correlation dimension of this attractor was found to be 2.19 ± 0.01, the Lyapunov exponents λ are approximately 0.1981, 0, and −0.6581, and the Kaplan–Yorke dimension is DKY ≈ 2.3010.
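Because the attractor obtained can depend on the integrator and its step size, reproducing such results calls for tight tolerances. The sketch below integrates the system with SciPy for the chaotic parameters above; the time span and tolerance values are illustrative choices, not taken from the cited studies.

```python
from scipy.integrate import solve_ivp

def rabinovich_fabrikant(t, v, alpha, gamma):
    """Right-hand side of the Rabinovich-Fabrikant system."""
    x, y, z = v
    return [y * (z - 1 + x**2) + gamma * x,
            x * (3 * z + 1 - x**2) + gamma * y,
            -2 * z * (alpha + x * y)]

# Chaotic regime from this section: gamma = 0.87, alpha = 1.1, v0 = (-1, 0, 0.5).
sol = solve_ivp(rabinovich_fabrikant, (0.0, 100.0), [-1.0, 0.0, 0.5],
                args=(1.1, 0.87), rtol=1e-10, atol=1e-12, dense_output=True)
x, y, z = sol.y  # looser tolerances may land the trajectory on a different attractor
```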
γ = 0.1
Danca and Romera showed that for γ = 0.1, the system is chaotic for α = 0.98, but converges to a stable limit cycle for α = 0.14.
See also
List of chaotic maps
References
External links
Weisstein, Eric W. "Rabinovich–Fabrikant Equation." From MathWorld—A Wolfram Web Resource.
Chaotics Models a more appropriate approach to the chaotic graph of the system "Rabinovich–Fabrikant Equation"
Chaotic maps
Equations | Rabinovich–Fabrikant equations | Mathematics | 463 |
2,595,930 | https://en.wikipedia.org/wiki/Persistent%20programming%20language | Programming languages that natively and seamlessly allow objects to continue existing after the program has been closed down are called persistent programming languages. JADE is one such language.
A persistent programming language is a programming language extended with constructs to handle persistent data. It is distinguished from embedded SQL in at least two ways:
In a persistent programming language:
The query language is fully integrated with the host language and both share the same type system.
Any format changes required between the host language and the database are carried out transparently.
In Embedded SQL:
Where the host language and data manipulation language have different type systems, code conversion operates outside of the OO type system, and hence has a higher chance of having undetected errors.
Format conversion must be handled explicitly and takes a substantial amount of code.
Using Embedded SQL, a programmer is responsible for writing explicit code to fetch data into memory or store data back to the database.
In a persistent programming language, a programmer can manipulate persistent data without having to write such code explicitly.
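As a loose illustration of this difference, the sketch below uses Python's shelve module as a stand-in for language-level persistence: objects assigned to the shelf outlive the program, and no explicit fetch, store, or format-conversion code appears. This is only an approximation of the idea (languages such as JADE integrate persistence far more deeply into the type system), and the filename and keys are invented for the example.

```python
import shelve

# First run: an object assigned to the shelf survives program shutdown.
with shelve.open("inventory") as db:
    db["widget"] = {"price": 9.99, "stock": 42}

# A later, separate run: the object is simply still there,
# with no explicit database fetch code written by the programmer.
with shelve.open("inventory") as db:
    print(db["widget"]["stock"])  # -> 42
```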
The drawbacks of persistent programming languages include:
While they are powerful, it is easy to make programming errors that damage the database.
It is harder to do automatic high-level optimization.
They do not support declarative querying well.
Examples
MUMPS
JADE
Caché ObjectScript
See also
Object-relational mapping
Object-oriented database management systems
Object prevalence
Phantom OS - persistent OS project
References
Programming language classification
Persistent programming languages | Persistent programming language | Technology | 286 |
31,998,179 | https://en.wikipedia.org/wiki/The%20Verge | The Verge is an American technology news website headquartered in Lower Manhattan, New York City and operated by Vox Media. The website publishes news, feature stories, guidebooks, product reviews, consumer electronics news, and podcasts.
The website was launched on November 1, 2011, and uses Vox Media's proprietary multimedia publishing platform Chorus. In 2014, Nilay Patel was named editor-in-chief and Dieter Bohn executive editor; Helen Havlak was named editorial director in 2017. The Verge won five Webby Awards for the year 2012 including awards for Best Writing (Editorial), Best Podcast for The Vergecast, Best Visual Design, Best Consumer Electronics Site, and Best Mobile News App.
History
Origins
Between March and April 2011, up to nine of Engadget's writers, editors, and product developers, including editor-in-chief Joshua Topolsky, left AOL, the company behind that website, to start a new gadget site. The other departing editors included managing editor Nilay Patel and staffers Paul Miller, Ross Miller, Joanna Stern, and Chris Ziegler, as well as product developers Justin Glow and Dan Chilton. In early April 2011, Topolsky announced that their unnamed new site would be produced in partnership with sports news website SB Nation, debuting some time in the fall. Topolsky lauded SB Nation's similar interest in the future of publishing, including what he described as their beliefs in independent journalism and in-house development of their own content delivery tools. SB Nation's Jim Bankoff saw an overlap in the demographics of the two sites and an opportunity to expand SB Nation's model. Bankoff had previously worked at AOL in 2005, where he led their Engadget acquisition. Other news outlets viewed the partnership as positive for both SB Nation and Topolsky's staff, and negative for AOL's outlook.
Bankoff, chairman and CEO of Vox Media (owner of SB Nation), said in a 2011 interview that though the company had started out with a focus on sports, other categories including consumer technology had growth potential for the company. Development of Vox Media's content management system (CMS), Chorus, was led by Trei Brundrett, who later became the chief operating officer for the company.
This Is My Next
Following news of his untitled partnership with SB Nation in April 2011, Topolsky announced that the Engadget podcast hosted by Patel, Paul Miller, and himself would continue at an interim site called This Is My Next. By August 2011, the site had reached 1 million unique visitors and 3.4 million page views. By October 2011, the site had 3 million unique visitors per month and 10 million total page views. Time listed the site in its Best Blogs of 2011, calling the prototype site "exemplary". The site closed upon The Verge's launch on November 1, 2011.
On June 11, 2014, The Verge launched a new section called "This Is My Next", edited by former editor David Pierce, as a buyer's guide for consumer electronics. By 2022, this section had been retitled simply "Buying Guide".
Launch
The Verge launched November 1, 2011, along with an announcement of a new parent company: Vox Media. According to the company, the site launched with 4 million unique visitors and 20 million pageviews. At the time of Topolsky's departure, Engadget had 14 million unique visitors. Vox Media overall doubled its unique visitors to about 15 million during the last half of 2012. The Verge had 12 former Engadget staffers working with Topolsky at the time of launch. It hired Tom Warren, former Neowin editor-in-chief and WinRumors blogger, as its new United Kingdom-based senior editor. In 2013, The Verge launched a new science section, Verge Science, with former Wired editor Katie Drummond leading the effort. Patel replaced Topolsky as editor-in-chief in mid-2014. Journalist Walt Mossberg joined The Verge's editing team after Vox Media acquired Recode in 2015. By 2016, the website's advertising had shifted from display advertisements, matched with articles' contents, to partnerships and advertisements adjusted to the user.
2016–present
Vox Media revamped The Verge's visual design for its fifth anniversary in November 2016. Its logo featured a modified Penrose triangle, an impossible object. On November 1, The Verge launched version 3.0 of its news platform, offering a redesigned website along with the new logo.
In September 2016, The Verge fired deputy editor Chris Ziegler after it learned that he had been working for Apple since July. Helen Havlak was promoted to editorial director in mid-2017. In 2017, The Verge launched "Guidebook" to host technology product reviews. In May 2018, Verge Science launched a YouTube channel, which had more than 638,000 subscribers and 30 million views by January 2019. The channel received more than 5.3 million views in November 2018 alone. As of August 2023, the channel has over 100 million views and 1.15 million subscribers.
In March 2022, Dieter Bohn announced his resignation from The Verge in his position of Executive Editor, and that he would be moving to a new position at Google.
The Verge rebranded and redesigned its website in September 2022 with a sharper, simpler logo, more colorful visual design, and new typefaces. Its new home page format resembled a news feed, incorporating external conversations from social media and reporting from other publications. The new format was intended, in part, to reduce aggregation reporting.
In December 2024, The Verge began to paywall some content behind a subscription service; this offering covers "premium" reports, newsletters, and reviews, as well as fewer advertisements and other features.
Content
Podcasts
The Verge broadcasts a live weekly podcast, The Vergecast. The inaugural episode was November 4, 2011. It included a video stream of the hosts. A second weekly podcast was introduced on November 8, 2011. Unlike The Vergecast, The Verge Mobile Show was primarily focused on mobile phones. The Verge also launched the weekly podcast Ctrl-Walt-Delete, hosted by Walt Mossberg, in September 2015. The Verge's What's Tech podcast was named among iTunes's best of 2015. The podcast Why'd You Push That Button?, launched in 2017 and co-hosted by Ashley Carman and Kaitlyn Tiffany, received a Podcast Award in the "This Week in Tech Technology Category" in 2018.
Editor-in-chief Nilay Patel hosts a weekly interview podcast called Decoder. On February 8, 2024, Patel announced that Decoder would expand to two episodes per week.
Video content
On The Verge
On August 6, 2011, in an interview with the firm Edelman, The Verge co-founder Marty Moe announced it was launching The Verge Show, a web television series. After its launch, the show was named On The Verge. The first episode was recorded on Monday, November 14, 2011, with guest Matias Duarte. The show mixed technology news and entertainment in a format similar to that of a late-night talk show, but was broadcast over the Internet rather than on television. The show's first episode was released on November 15, 2011.
Ten episodes of On The Verge were broadcast, with the most recent episode going out on November 10, 2012. On May 24, 2013, it was announced that the show would return under a new weekly format, alongside a new logo and theme tune.
Other video content
On May 8, 2013, editor-in-chief Topolsky announced Verge Video, a website that contains the video backlog from The Verge.
Circuit Breaker, a gadget blog, launched in 2016, has amassed nearly one million Facebook followers and debuted a live show on Twitter in October 2017. The blog's videos average more than 465,000 views, and Jake Kastrenakes serves as editor-in-chief, as of 2017. Also in 2016, USA Network and The Verge partnered on Mr. Robot Digital After Show, a digital aftershow for the television series Mr. Robot. In December, Twitter and Vox Media announced a live streaming partnership for The Verge programs covering the Consumer Electronics Show.
The series Next Level, hosted and produced by Lauren Goode, debuted in 2017 and was recognized in the "Technology" category at the 47th annual San Francisco / Northern California Emmy Awards (2018). In August 2017, The Verge launched the web series Space Craft, hosted by science reporter Loren Grush.
In 2022, The Verge produced the show The Future Of for Netflix.
PC Build guide controversy
In September 2018, The Verge published the article "How to Build a Custom PC for Editing, Gaming or Coding" with a companion YouTube video entitled "How we Built a $2000 Custom Gaming PC". The video was criticized for containing errors in almost every step presented by its host, Stefan Etienne, such as applying an excessive amount of thermal paste to the processor instead of the recommended small amount. An online harassment campaign against Etienne ensued.
In February 2019, lawyers from The Verge's parent company Vox Media filed a DMCA takedown notice, requesting that YouTube remove videos critical of The Verge's video, alleging copyright infringement. YouTube took down two of the videos, uploaded by YouTube channels BitWit and ReviewTechUSA, while applying a copyright "strike" to these two channels. YouTube later reinstated the two videos and retracted the copyright "strikes" after a request from Verge editor Nilay Patel, although Patel acknowledged that he agreed with the legal argument that led to their removal. Timothy B. Lee of Ars Technica described this controversy as an example of the Streisand effect, saying that while law regarding fair use is unclear regarding this type of situation, "the one legal precedent ... suggests ... that this kind of video is solidly within the bounds of copyright's fair use doctrine."
Nearly three years after the erroneous build, PC builder and YouTuber Linus Sebastian collaborated with Etienne in a video entitled "Fixing the Verge PC build" to rectify the original video's mistakes. In it, Etienne admitted he was not an experienced builder at the time (having built only four computers at that point, with The Verge build being his first on camera), and revealed that before the video went live, The Verge was unwilling to hear from him about what he saw as editing issues, insisting that the video be uploaded regardless.
See also
References
External links
American technology news websites
Technology blogs
Computing websites
Internet broadcasting
Internet properties established in 2011
Streaming television
Mass media in New York City
Podcasting companies
Video blogs
Vox Media | The Verge | Technology | 2,160 |
4,616,703 | https://en.wikipedia.org/wiki/Halosere | A halosere is an ecological succession in saline water environments. An example of a halosere is a salt marsh.
In a river estuary, large amounts of silt are deposited by the ebbing tides, as well as inflowing rivers.
Plants in halosere
The earliest plant colonizers are algae and zostera, which can tolerate submergence by the tide for most of the 12-hour cycle and which trap mud, causing it to accumulate.
Two other colonizer plants are Salicornia, and Spartina, which are both halophytes. Halophytes are plants that can tolerate saline conditions and they grow on the intertidal mudflats with a maximum of four hours' exposure to air every 12 hours. On a large scale halophytes have colonized the halosere on the banks of the Great Salt Lake in Utah. Halosere vegetation can also be found in the salt marshes of the Wadden Sea islands and the zone towards the dunes.
River estuaries
In a river estuary, large amounts of silt are deposited by the ebbing tides and inflowing rivers. Haloseres in river estuaries consist of mudflats and the so-called sward zone. Halosere sward zones can be found in the Llanrhidian marsh on the Gower Peninsula.
See also
Seral community
References
Ecological succession
Wetlands
Estuaries | Halosere | Environmental_science | 276 |
49,109,176 | https://en.wikipedia.org/wiki/Technofeminism | Technofeminism is a theoretical and practical framework that explores the intersections between technology, gender, and power. Rooted in feminist thought, it critically examines how technology shapes, reinforces, or disrupts gender inequalities and seeks to envision more equitable futures through technological design and use.
The term is widely attributed to Judy Wajcman, a sociologist and feminist scholar. Wajcman introduced the concept in her influential 2004 book, TechnoFeminism.
Historically, technofeminism is closely linked to cyberfeminism, a concept which emerged in the early 1990s. The origins of cyber- and technofeminism are strongly associated with Donna Haraway's A Cyborg Manifesto. Since the 1990s, numerous feminist movements have developed, addressing feminism and technology in various ways and through different perspectives; their networks, ideas and concepts can overlap.
Technofeminism is often examined in conjunction with intersectionality, a term coined by Kimberlé Crenshaw which analyzes the relationships among various identities, such as race, socioeconomic status, sexuality, gender, and more.
TechnoFeminism book
Overview
TechnoFeminism is a book by academic sociologist Judy Wajcman which reframes the relationship between gender and technologies, and presents a feminist reading of the woman-machine relationship. It argues against a technocratic ideology, posing instead a thesis of society and technology being mutually constitutive. She supports this with examples of feminist history related to reproductive technologies and automation. It is considered a key contributor to the rise of feminist technoscience as a field.
Reception
According to a review in the American Journal of Sociology, Wajcman convincingly argues that "analyses of everything from transit systems to pap smears must include a technofeminist awareness of men's and women's often different positions as designers, manufacturing operatives, salespersons, purchasers, profiteers, and embodied users of such technologies."
In the journal Science, Technology and Human Values, Sally Wyatt notes that the "theoretical insights from feminist technoscience (can and should) be useful for empirical research as well as for political change and action" and that one way of moving towards this is "return to production and work as research sites because so much work in recent years has focused on consumption, identity, and representation."
Editions
In addition to the print edition, which has been reprinted several times, e-book editions of TechnoFeminism were released in 2013. The book has been translated into Spanish as El Tecnofeminismo.
Academic contexts
Scholars, such as Lori Beth De Hertogh, Liz Lane, and Jessica Oulette, as well as Angela Haas, have spoken out about the lack of technofeminist scholarship, especially in the context of overarching technological research.
A primary concern of technofeminism is the relationship between historical and societal norms, and technology design and implementation. Technofeminist scholars actively work to illuminate the often unnoticed inequities ingrained in systems and come up with solutions to combat them. They also research how technology can be used for positive ends, especially for marginalized groups.
Angela Haas
Angela Haas focuses on technofeminism as a predecessor of "digital cultural rhetorics research", the focus of her scholarship. The interactions between these two fields have led scholars to analyze the intersectional nature of technology, and how this intersectionality results in tools that do not serve all users.
Haas also explores how marginalized groups interact with digital technologies. Specific areas analyzed include how revealing aspects of one's identity influences one's ability to exist online. At times, digital spaces do not cater to marginalized groups; one example is the perception that someone who identifies as homosexual is "sexual in every situation", which alters how the online community they are a part of interacts with them.
However, at times, technology can be renewed to serve women and marginalized groups. Haas uses the example of the vibrator to prove this point. While it is now associated with female empowerment, the tool was originally used to control women suffering from "hysteria".
De Hertogh et al.
Lori Beth De Hertogh, Liz Lane, and Jessica Ouellette expanded upon previous scholars' work, placing it within the specific context of the "Computers and Composition" journal. In their work, the scholars analyzed frequencies of the term "technofeminism/t" and associated words in the journal. The occurrences proved limited, leading the scholars to call for increased use of the term "technofeminism" in scholarly materials and for more intersectional frameworks in mainstream technology literature.
Kerri Elise Hauman
Kerri Hauman explores technofeminist themes in her PhD dissertation, specifically discussing how feminism exists in digital spaces. Using the example of "Feministing", a blog serving those invested in "feminist activism", Hauman applies various rhetorical frameworks (such as invitational rhetoric and rhetorical ecologies) to understand how online platforms can further social justice initiatives in some ways, but promote the exclusion of disadvantaged groups in others.
See also
Digital rhetoric
Feminist technoscience
Cyberfeminism
References
Further reading
Farquharson, Karen "Book review: 'TechnoFeminism', by Judy Wajcman" Australian Journal of Emerging Technologies and Society, Vol. 2, no. 2 (2004), pp. 156–157
Sarah M. Brown "TechnoFeminism (review)" NWSA Journal Volume 19, Number 3, Fall 2007 pp. 225–227
2004 non-fiction books
Books about the Internet
Books about feminism
Women and science
Politics and technology | Technofeminism | Technology | 1,158 |
42,844,729 | https://en.wikipedia.org/wiki/Smart%20Cell | Smart Cells are radio access nodes that provide wireless connectivity across multiple spectrum ranges and technologies. As of January 2014, Macrocells, Small Cells, and Wi-Fi connections were the primary means of data connectivity. For these types of cells, the spectrum utilized is static and is based on the antenna installed. A Smart Cell may transmit multiple frequencies and technologies which are controlled by the software and not the hardware (antenna).
Smart Cells are currently in the research and development stage, but the software-defined networks that support them are already proliferating within the current mobile network structure.
Smart Cells may lower capital and operational costs by reducing the equipment and manual intervention needed to modify cell site coverage. The term Smart Cell is also used to identify other technologies that enhance cell sites by reducing the need to manually manipulate radio access equipment or to add additional carriers at a radio access node.
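Because the defining idea is that carrier and technology selection move from fixed antenna hardware into software, a software-controlled node can be pictured as a small, remotely editable configuration. The sketch below is purely illustrative; the SmartCell class, field names, and values are invented for this example and are not drawn from any standard:

```python
from dataclasses import dataclass, field

@dataclass
class SmartCell:
    """Illustrative radio access node whose carriers are set in software."""
    node_id: str
    carriers: list = field(default_factory=list)  # (technology, MHz) pairs

    def add_carrier(self, technology: str, freq_mhz: float) -> None:
        # With a static antenna this would require a site visit;
        # here coverage changes through a software update alone.
        self.carriers.append((technology, freq_mhz))

cell = SmartCell("site-042")
cell.add_carrier("LTE", 1800.0)
cell.add_carrier("Wi-Fi", 5180.0)
print(cell.carriers)   # [('LTE', 1800.0), ('Wi-Fi', 5180.0)]
```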
References
Smart devices
Mobile telecommunications
Radio communications | Smart Cell | Technology,Engineering | 191 |
55,754,856 | https://en.wikipedia.org/wiki/Pujiang%20line | The Pujiang line of Shanghai Metro () is an automated, driverless, rubber-tired Shanghai Metro line in the town of Pujiang in the Shanghainese district of Minhang. It was originally conceived as phase 3 of Shanghai Metro line 8, but afterwards was constructed as a separate line, connecting with line 8 at its southern terminus, Shendu Highway. The line opened for passenger trial operations on March 31, 2018. It is the first automated, driverless people mover line in the Shanghai Metro, and has 6 stations with a total length of . The people mover was expected to carry 73,000 passengers a day. The line is colored gray on system maps.
The line is operated by Shanghai Keolis Public Transport Operation & Management Co. Ltd. (), a joint venture owned by Keolis and Shanghai Shentong Metro Group for at least five years after opening.
History
Stations
Service routes
Important stations
- Passengers can interchange to line 8.
Future expansion
There are no plans to extend the line.
Station name change
On June 9, 2013, the Aerospace Museum was renamed (before Pujiang line began serving the station).
Headways
Technology
Signalling
The entire operation of the new line is remotely controlled from a central dispatch room. Trains operate using the Cityflo 650 communications-based train control (CBTC) system from CRRC Puzhen Bombardier Transportation Systems Limited, a joint venture between Bombardier and CRRC Nanjing Puzhen Co., Ltd. Initially, six staff members worked at each APM station, but the operator hopes to reduce that to one or two.
Rolling Stock
Pujiang line uses rubber-tyred Bombardier Innovia APM 300 trains. The trains have 4 cars each, totaling in length, with capacity for 566 passengers per train. There are large windows at each end of the train allowing passengers to look out the front and rear. The small trains with rubber tires running on concrete tracks allow for turning radii as tight as to be negotiated, compared to over for typical metro on steel rails. On 13 January 2017, Bombardier delivered the first out of 44 autonomous people movers to Shanghai.
References
2018 establishments in Shanghai
People mover systems in China
Railway lines opened in 2018
Rubber-tyred metros
Shanghai Metro lines
Siemens Mobility projects | Pujiang line | Technology,Engineering | 491 |
7,079,248 | https://en.wikipedia.org/wiki/Centrosymmetric%20matrix | In mathematics, especially in linear algebra and matrix theory, a centrosymmetric matrix is a matrix which is symmetric about its center.
Formal definition
An $n \times n$ matrix $A = [A_{i,j}]$ is centrosymmetric when its entries satisfy

$$A_{i,j} = A_{n+1-i,\,n+1-j} \quad \text{for } 1 \le i, j \le n.$$

Alternatively, if $J$ denotes the $n \times n$ exchange matrix with 1 on the antidiagonal and 0 elsewhere, then a matrix $A$ is centrosymmetric if and only if $AJ = JA$ (equivalently, $JAJ = A$).
Examples
All 2 × 2 centrosymmetric matrices have the form

$$\begin{bmatrix} a & b \\ b & a \end{bmatrix}.$$

All 3 × 3 centrosymmetric matrices have the form

$$\begin{bmatrix} a & b & c \\ d & e & d \\ c & b & a \end{bmatrix}.$$
Symmetric Toeplitz matrices are centrosymmetric.
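These definitions are easy to verify numerically; the following minimal NumPy/SciPy sketch (with an arbitrarily chosen matrix) checks both the $JAJ = A$ form and the equivalent commutation form:

```python
import numpy as np
from scipy.linalg import toeplitz

n = 4
J = np.fliplr(np.eye(n))            # exchange matrix: 1s on the antidiagonal

A = toeplitz([5.0, 1.0, 2.0, 3.0])  # symmetric Toeplitz matrix
assert np.allclose(J @ A @ J, A)    # centrosymmetric: J A J = A
assert np.allclose(A @ J, J @ A)    # equivalent commutation form

B = np.random.rand(n, n)
B = (B + J @ B @ J) / 2             # symmetrize about the center
assert np.allclose(J @ B @ J, B)    # B is centrosymmetric by construction
```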
Algebraic structure and properties
If $A$ and $B$ are centrosymmetric matrices over a field $F$, then so are $A + B$ and $cA$ for any $c$ in $F$. Moreover, the matrix product $AB$ is centrosymmetric, since $JABJ = (JAJ)(JBJ) = AB$. Since the identity matrix is also centrosymmetric, it follows that the set of centrosymmetric matrices over $F$ forms a subalgebra of the associative algebra of all $n \times n$ matrices.
If $A$ is a centrosymmetric matrix with an $n$-dimensional eigenbasis, then its eigenvectors can each be chosen so that they satisfy either $Jx = x$ or $Jx = -x$, where $J$ is the exchange matrix.
If $A$ is a centrosymmetric matrix with distinct eigenvalues, then the matrices that commute with $A$ must be centrosymmetric.
The maximum number of unique elements in an $n \times n$ centrosymmetric matrix is $\lceil n^2/2 \rceil$.
Related structures
An $n \times n$ matrix $A$ is said to be skew-centrosymmetric if its entries satisfy

$$A_{i,j} = -A_{n+1-i,\,n+1-j} \quad \text{for } 1 \le i, j \le n.$$

Equivalently, $A$ is skew-centrosymmetric if $AJ = -JA$, where $J$ is the exchange matrix defined previously.
The centrosymmetric relation $AJ = JA$ lends itself to a natural generalization, where $J$ is replaced with an involutory matrix $K$ (i.e., $K^2 = I$) or, more generally, a matrix $K$ satisfying $K^m = I$ for an integer $m > 1$. The inverse problem for the commutation relation $AK = KA$ of identifying all involutory $K$ that commute with a fixed matrix $A$ has also been studied.
Symmetric centrosymmetric matrices are sometimes called bisymmetric matrices. When the ground field is the real numbers, it has been shown that bisymmetric matrices are precisely those symmetric matrices whose eigenvalues remain the same aside from possible sign changes following pre- or post-multiplication by the exchange matrix. A similar result holds for Hermitian centrosymmetric and skew-centrosymmetric matrices.
References
Further reading
External links
Centrosymmetric matrix on MathWorld.
Linear algebra
Matrices | Centrosymmetric matrix | Mathematics | 480 |
15,227,804 | https://en.wikipedia.org/wiki/ZNF267 | Zinc finger protein 267 is a protein that in humans is encoded by the ZNF267 gene.
References
Further reading
External links
Transcription factors | ZNF267 | Chemistry,Biology | 30 |
47,566,317 | https://en.wikipedia.org/wiki/3%CE%B2-Dihydroprogesterone | 3β-Dihydroprogesterone (3β-DHP), also known as 3β-hydroxyprogesterone, or pregn-4-en-3β-ol-20-one (4-pregnenolone, δ4-pregnenolone), is an endogenous steroid. It is biosynthesized by 3β-hydroxysteroid dehydrogenase from progesterone. Unlike 3α-dihydroprogesterone (3α-DHP), 3β-DHP does not act as a positive allosteric modulator of the GABAA receptor, which is in accordance with the fact that other 3β-hydroxylated progesterone metabolites such as isopregnanolone and epipregnanolone similarly do not act as potentiators of this receptor and instead inhibit it as well as reverse the effects of potentiators like allopregnanolone. 3β-DHP has been reported to possess about the same potency as progesterone in a bioassay of progestogenic activity, whereas 3α-DHP was not assessed.
See also
5α-Dihydroprogesterone
5β-Dihydroprogesterone
3β-Androstanediol
Pregnenolone
Progesterone 3-acetyl enol ether
Quingestrone
References
Sterols
Ketones
Pregnanes
Progestogens | 3β-Dihydroprogesterone | Chemistry | 315 |
394,008 | https://en.wikipedia.org/wiki/Logical%20possibility | Logical possibility refers to a logical proposition that cannot be disproved, using the axioms and rules of a given system of logic. The logical possibility of a proposition will depend upon the system of logic being considered, rather than on the violation of any single rule. Some systems of logic restrict inferences from inconsistent propositions or even allow for true contradictions. Other logical systems have more than two truth-values instead of a binary of such values. Some assume the system in question is classical propositional logic. Similarly, the criterion for logical possibility is often based on whether or not a proposition is contradictory and as such, is often thought of as the broadest type of possibility.
In modal logic, a logical proposition is possible if it is true in some possible world. The universe of "possible worlds" depends upon the axioms and rules of the logical system in which one is working, but given some logical system, any logically consistent collection of statements is a possible world. The modal diamond operator is used to express possibility: $\Diamond P$ denotes "proposition $P$ is possible".
Logical possibility is different from other sorts of subjunctive possibilities. The relationship between modalities (if there is any) is the subject of debate and may depend upon how one views logic, as well as the relationship between logic and metaphysics. For example, many philosophers following Saul Kripke have held that discovered identities such as "Hesperus = Phosphorus" are metaphysically necessary because they pick out the same object in all possible worlds where the terms have a referent, yet it is logically possible for "Hesperus = Phosphorus" to be false, since denying it does not violate a logical rule such as consistency. Other philosophers are of the view that logical possibility is broader than metaphysical possibility, so that anything which is metaphysically possible is also logically possible.
See also
Modal logic
Paraconsistent logic
Paradox
Possibility theory
Possible world
Subjunctive possibility
References
Bibliography
Modal logic
Possibility | Logical possibility | Mathematics | 399 |
22,469,208 | https://en.wikipedia.org/wiki/Jonas%20Kubilius | Jonas Kubilius (27 July 1921 – 30 October 2011) was a Lithuanian mathematician who worked in probability theory and number theory. He was rector of Vilnius University for 32 years, and served one term in the Lithuanian parliament.
Life and education
Kubilius was born in Fermos village, Eržvilkas county, Jurbarkas District Municipality, Lithuania on 27 July 1921. He graduated from Raseiniai high school in 1940 and entered Vilnius University, from which he graduated summa cum laude in 1946 after taking off a year to teach mathematics in middle school.
Kubilius received the Candidate of Sciences degree in 1951 from Leningrad University. His thesis, written under Yuri Linnik, was titled Geometry of Prime Numbers. He received the Doctor of Sciences degree (habilitation) in 1957 from the Steklov Institute of Mathematics in Moscow.
Career
Kubilius had simultaneous careers at Vilnius University and at the Lithuanian Academy of Sciences. He continued working at the university after receiving his bachelor's degree in 1946, and worked as a lecturer and assistant professor after receiving his Candidate degree in 1951. In 1958 he was promoted to professor and was elected rector of the university. He retired from the rector's position in 1991 after serving almost 33 years, and remained a professor in the university.
During the Khrushchev Thaw in the middle 1950s there were attempts to make the university "Lithuanian" by encouraging the use of the Lithuanian language in place of Russian and to revive the Department of Lithuanian Literature. This work was started by the rector Juozas Bulavas, but Stalinists objected and Bulavas was dismissed. Kubilius replaced him as rector and was more successful in resisting pressure to Russify the University: he returned Lithuanian language and culture to the forefront of the University. Česlovas Masaitis attributes Kubilius's success to "his ability to manipulate within the complex bureaucratic system of the Soviet Union and mainly because of his international recognition due to his scientific achievements." Kubilius also encouraged the faculty to write research papers in Lithuanian, English, German, and French, as well as in Russian, and he himself wrote several textbooks in Lithuanian. Kubilius, known by the pseudonym Bernotas, was also involved in the Lithuanian partisan movement; according to some sources, the partisans urged him to continue his studies and stay alive so that he could work for Lithuania in the future.
In 1952 Kubilius became an employee of the Lithuanian Academy of Sciences in the Physics, Mathematics and Astronomy Sector. He initially promoted the development of probability theory in Lithuania, and later the development of differential equations and mathematical logic. In 1956 the Physical and Technical Institute was reorganized and Kubilius became head of the new Mathematical Sector. When he became rector of Vilnius University in 1958 he gave up his duties as head and was succeeded by Vytautas Statulevičius in 1960. In 1962 he was elected a member of the Academy. He held a position as Principal Scientific Worker at the Institute of Mathematics and Informatics, which split from the Lithuanian Academy of Sciences and is now an independent state scientific institute.
Kubilius's scientific work was in the areas of number theory and probability theory. The Turán–Kubilius inequality and the Kubilius model in probabilistic number theory are named after him. Eugenijus Manstavičius and Fritz Schweiger wrote about Kubilius's work in 1992, "the most impressive work has been done on the statistical theory of arithmetic functions which almost created a new research area called Probabilistic Number Theory. A monograph devoted to this topic was translated into English in 1964 and became very influential." (The monograph is Probabilistic Methods in the Theory of Numbers.)
Kubilius organized the first mathematical olympiad in Lithuania in 1951, and he wrote books of problems for students to use in preparing for the olympiads. He was a past president of the Lithuanian Mathematical Society.
In addition to his scientific and administrative work, Kubilius was a member of the Seimas (Lithuanian parliament) from 1992 to 1996.
Honors and awards
Order of the Lithuanian Grand Duke Gediminas
Selected publications
References
Further reading
External links
Jonas Kubilius home page at Vilnius University
Jonas Kubilius home page at the Institute of Mathematics and Informatics
1921 births
2011 deaths
20th-century Lithuanian mathematicians
21st-century Lithuanian mathematicians
People from Jurbarkas District Municipality
Academic staff of Vilnius University
Rectors of Vilnius University
Saint Petersburg State University alumni
Social Democratic Party of Lithuania politicians
Tenth convocation members of the Soviet of Nationalities
Eleventh convocation members of the Soviet of Nationalities
Commander's Crosses of the Order of the Lithuanian Grand Duke Gediminas
Heroes of Socialist Labour
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
Number theorists
Probability theorists
Soviet mathematicians
Burials at Antakalnis Cemetery | Jonas Kubilius | Mathematics | 979 |
33,692,227 | https://en.wikipedia.org/wiki/Help%20Me%20Anthea%2C%20I%27m%20Infested | Help Me Anthea, I'm Infested is a 2007 factual entertainment television show produced by RDF Television for BBC Three, presented by Anthea Turner and Mark Coltman, a professional pest control expert. The presenters visit people whose houses have pest control problems, give them advice and help them to exterminate vermin.
Originally slated for six episodes, the series was cut short by the BBC after the third episode was broadcast. According to an interview with Anthea Turner, only the first three episodes were planned to be on bug infestations, although she did not specify what later episodes would cover.
Critical reactions were very negative: James Watson at the Daily Telegraph described it as being both boring and exhibiting "grinding, excruciating pointlessness", while The Guardian's Nancy Banks-Smith described it as "frightful". Charlie Brooker thought Turner came across as "a hard, judgemental piece of work who spends most of her time haranguing the human inhabitants for living in filth", and the resulting programme feels like "a strange psychodrama in which the punters are caught between unfeeling vermin on one side, and an unfeeling former Blue Peter presenter on the other". Jeremy Paxman used it as an example of the perceived low quality and lack of public value of BBC Three programmes in an interview with the BBC chairman, Sir Michael Lyons, on Newsnight, along with My Man Boobs and Me, My Dog Is As Fat As Me, Freaky Eaters and Fat Men Can't Hunt. The novelist P.D. James listed it as one of the BBC's "most embarrassing programmes".
Rentokil Initial list the show as one of a small number of pest control-related television shows.
See also
Billy the Exterminator
References
External links
BBC Three original programming
2007 British television series debuts
2007 British television series endings
British non-fiction television series
Pest control | Help Me Anthea, I'm Infested | Biology | 400 |
564,719 | https://en.wikipedia.org/wiki/Hybrid%20system | A hybrid system is a dynamical system that exhibits both continuous and discrete dynamic behavior – a system that can both flow (described by a differential equation) and jump (described by a state machine, automaton, or a difference equation). Often, the term "hybrid dynamical system" is used instead of "hybrid system", to distinguish from other usages of "hybrid system", such as the combination neural nets and fuzzy logic, or of electrical and mechanical drivelines. A hybrid system has the benefit of encompassing a larger class of systems within its structure, allowing for more flexibility in modeling dynamic phenomena.
In general, the state of a hybrid system is defined by the values of the continuous variables and a discrete mode. The state changes either continuously, according to a flow condition, or discretely according to a control graph. Continuous flow is permitted as long as so-called invariants hold, while discrete transitions can occur as soon as given jump conditions are satisfied. Discrete transitions may be associated with events.
Examples
Hybrid systems have been used to model several cyber-physical systems, including physical systems with impact, logic-dynamic controllers, and even Internet congestion.
Bouncing ball
A canonical example of a hybrid system is the bouncing ball, a physical system with impact. Here, the ball (thought of as a point-mass) is dropped from an initial height and bounces off the ground, dissipating its energy with each bounce. The ball exhibits continuous dynamics between each bounce; however, as the ball impacts the ground, its velocity undergoes a discrete change modeled after an inelastic collision. A mathematical description of the bouncing ball follows. Let $x_1$ be the height of the ball and $x_2$ be the velocity of the ball. A hybrid system describing the ball is as follows:

When $x_1 > 0$, flow is governed by

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -g,$$

where $g$ is the acceleration due to gravity. These equations state that when the ball is above ground, it is being drawn to the ground by gravity.

When $x_1 = 0$ (and $x_2 < 0$), jumps are governed by

$$x_2 \leftarrow -\gamma x_2,$$

where $0 < \gamma < 1$ is a dissipation factor. This is saying that when the height of the ball is zero (it has impacted the ground), its velocity is reversed and decreased by a factor of $\gamma$. Effectively, this describes the nature of the inelastic collision.
The bouncing ball is an especially interesting hybrid system, as it exhibits Zeno behavior. Zeno behavior has a strict mathematical definition, but can be described informally as the system making an infinite number of jumps in a finite amount of time. In this example, each time the ball bounces it loses energy, making the subsequent jumps (impacts with the ground) closer and closer together in time.
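A crude fixed-step simulation makes this clustering of impacts visible. The sketch below is for illustration only; the step size, initial height, and dissipation factor are arbitrary choices, and simple Euler integration with a threshold jump condition stands in for a proper event-detecting hybrid solver:

```python
import numpy as np

g, gamma = 9.81, 0.8          # gravity, dissipation factor (illustrative values)
h, v, t, dt = 10.0, 0.0, 0.0, 1e-4
impact_times = []

while t < 10.0:
    h += v * dt               # continuous flow: dh/dt = v
    v += -g * dt              #                  dv/dt = -g
    if h <= 0.0 and v < 0.0:  # jump condition: ball hits the ground
        h = 0.0
        v = -gamma * v        # discrete jump: inelastic velocity reversal
        impact_times.append(t)
    t += dt

# Successive impacts cluster closer together in time (Zeno behavior):
print(np.diff(impact_times))
```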
It is noteworthy that the dynamical model is complete if and only if one adds the contact force between the ground and the ball. Indeed, without forces, one cannot properly define the bouncing ball and the model is, from a mechanical point of view, meaningless. The simplest contact model that represents the interactions between the ball and the ground is the complementarity relation between the force and the distance (the gap) between the ball and the ground. This is written as

$$0 \le x_1 \perp \lambda \ge 0,$$

that is, the gap $x_1$ and the contact force $\lambda$ are both non-negative and at least one of them is zero at every instant. Such a contact model does not incorporate magnetic forces, nor gluing effects. Once the complementarity relations are included, one can continue to integrate the system after the impacts have accumulated and vanished: the equilibrium of the system is well-defined as the static equilibrium of the ball on the ground, under the action of gravity compensated by the contact force $\lambda$. One also notices from basic convex analysis that the complementarity relation can equivalently be rewritten as the inclusion into a normal cone, so that the bouncing ball dynamics is a differential inclusion into a normal cone to a convex set. See Chapters 1, 2 and 3 in Acary-Brogliato's book cited below (Springer LNACM 35, 2008). See also the other references on non-smooth mechanics.
Hybrid systems verification
There are approaches to automatically proving properties of hybrid systems (e.g., some of the tools mentioned below). Common techniques for proving safety of hybrid systems are computation of reachable sets, abstraction refinement, and barrier certificates.
Most verification tasks are undecidable, making general verification algorithms impossible. Instead, the tools are analyzed for their capabilities on benchmark problems. A possible theoretical characterization of this is the existence of algorithms that succeed at hybrid systems verification in all robust cases, implying that many problems for hybrid systems, while undecidable, are at least quasi-decidable.
Other modeling approaches
Two basic hybrid system modeling approaches can be classified, an implicit and an explicit one. The explicit approach is often represented by a hybrid automaton, a hybrid program or a hybrid Petri net. The implicit approach is often represented by guarded equations to result in systems of differential algebraic equations (DAEs) where the active equations may change, for example by means of a hybrid bond graph.
As a unified simulation approach for hybrid system analysis, there is a method based on the DEVS formalism in which integrators for differential equations are quantized into atomic DEVS models. These methods generate traces of system behaviors in a discrete event system manner, which differs from discrete time systems. Details of this approach can be found in references [Kofman2004] [CF2006] [Nutaro2010] and the software tool PowerDEVS.
Software Tools
Simulation
HyEQ Toolbox: Hybrid system solver for MATLAB and Simulink
PowerDEVS: General-purpose tool for DEVS (Discrete Event System) modeling and simulation oriented to the simulation of hybrid systems
Reachability
Ariadne: C++ library for (numerically rigorous) reachability analysis of nonlinear hybrid systems
CORA: A MATLAB Toolbox for reachability analysis of cyber-physical systems, including hybrid systems
Flow*: A tool for reachability analysis of nonlinear hybrid systems
HyCreate: A tool for overapproximating reachability of hybrid automata
HyPro: C++ library for state set representations for hybrid systems reachability analysis
JuliaReach: A toolbox for set-based reachability
Temporal Logic and Other Verification
C2E2: Nonlinear hybrid system verifier
HyTech: Model checker for hybrid systems
HSolver: Verification tool for hybrid systems
KeYmaera: Theorem prover for hybrid systems
PHAVer: Polyhedral hybrid automaton verifier
S-TaLiRo: MATLAB toolbox for verification of hybrid systems with respect to temporal logic specifications
Other
SCOTS: Tool for the synthesis of correct-by-construction controllers for hybrid systems
SpaceEx: State-space explorer
See also
Hybrid automaton
Sliding mode control
Variable structure system
Variable structure control
Joint spectral radius
Cyber-physical system
Behavior trees (artificial intelligence, robotics and control)
Jump process (in the context of probability), an example of a (stochastic) hybrid system with zero flow component
Piecewise-deterministic Markov process (PDMP), an example of a (stochastic) hybrid system and a generalization of the jump process
Jump diffusion, an example of a (stochastic) hybrid system and a generalization of the PDMP
Further reading
[Kofman2004] Kofman, E. (2004). "Discrete Event Simulation of Hybrid Systems". SIAM Journal on Scientific Computing.
[CF2006] Cellier, F. E.; Kofman, E. (2006). Continuous System Simulation. Springer.
[Nutaro2010] Nutaro, J. (2010). Building Software for Simulation: Theory and Algorithms, with Applications in C++. Wiley.
External links
IEEE CSS Committee on Hybrid Systems
References
Systems theory
Differential equations
Dynamical systems
Control theory | Hybrid system | Physics,Mathematics | 1,484 |
24,249,235 | https://en.wikipedia.org/wiki/Comparison%20of%20radio%20systems | Many of the world's radio stations broadcast in a variety of analog and digital formats. This page will list and compare them in chart form.
Table
References
Broadcast engineering
Radio technology
Radio systems | Comparison of radio systems | Technology,Engineering | 39 |
696,817 | https://en.wikipedia.org/wiki/Roothaan%20equations | The Roothaan equations are a representation of the Hartree–Fock equation in a non orthonormal basis set which can be of Gaussian-type or Slater-type. It applies to closed-shell molecules or atoms where all molecular orbitals or atomic orbitals, respectively, are doubly occupied. This is generally called Restricted Hartree–Fock theory.
The method was developed independently by Clemens C. J. Roothaan and George G. Hall in 1951, and is thus sometimes called the Roothaan–Hall equations. The Roothaan equations can be written in a form resembling a generalized eigenvalue problem, although they are not a standard eigenvalue problem because they are nonlinear:

$$\mathbf{F}\mathbf{C} = \mathbf{S}\mathbf{C}\boldsymbol{\epsilon},$$

where F is the Fock matrix (which depends on the coefficients C due to electron-electron interactions), C is a matrix of coefficients, S is the overlap matrix of the basis functions, and $\boldsymbol{\epsilon}$ is the (diagonal, by convention) matrix of orbital energies. In the case of an orthonormalised basis set the overlap matrix, S, reduces to the identity matrix. These equations are essentially a special case of a Galerkin method applied to the Hartree–Fock equation using a particular basis set.
In contrast to the Hartree–Fock equations - which are integro-differential equations - the Roothaan–Hall equations have a matrix-form. Therefore, they can be solved using standard techniques.
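Concretely, each iteration of a self-consistent-field (SCF) loop solves this generalized eigenproblem for the current Fock matrix. The following is a minimal sketch of that single step using SciPy; the matrices here are random stand-ins, not a real Fock or overlap matrix:

```python
import numpy as np
from scipy.linalg import eigh

n = 4
rng = np.random.default_rng(0)

# Stand-in symmetric Fock matrix F and symmetric positive-definite overlap S.
F = rng.random((n, n)); F = (F + F.T) / 2
X = rng.random((n, n)); S = X @ X.T + n * np.eye(n)

# Solve F C = S C eps; eigh handles the generalized
# symmetric-definite problem directly.
eps, C = eigh(F, S)

print(eps)                                  # orbital energies
assert np.allclose(F @ C, S @ C @ np.diag(eps))
assert np.allclose(C.T @ S @ C, np.eye(n))  # S-orthonormal orbitals
```

In a full Hartree–Fock calculation, F would then be rebuilt from the new coefficients C and the step repeated until self-consistency.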
See also
Hartree–Fock method
References
Quantum chemistry | Roothaan equations | Physics,Chemistry | 310 |
33,938,036 | https://en.wikipedia.org/wiki/Boletus%20occidentalis | Boletus occidentalis is a species of bolete fungus in the family Boletaceae. Found growing under Pinus occidentalis in Jarabacoa, Dominican Republic, it was described as new to science in 2007.
See also
List of Boletus species
References
External links
occidentalis
Fungi described in 2007
Fungi of the Caribbean
Fungus species | Boletus occidentalis | Biology | 76 |
52,784,957 | https://en.wikipedia.org/wiki/I%20Carinae | The Bayer designations i Carinae and I Carinae are distinct (lower and upper case i) and refer to stars/star systems of apparent magnitude 3.96 and 3.99 respectively.
for i Carinae, see HD 79447
for I Carinae, see HR 4102
See also
ι Carinae (Iota Carinae)
Carinae, i
Carina (constellation) | I Carinae | Astronomy | 79 |
65,673,969 | https://en.wikipedia.org/wiki/Basis%20of%20a%20matroid | In mathematics, a basis of a matroid is a maximal independent set of the matroid—that is, an independent set that is not contained in any other independent set.
Examples
As an example, consider the matroid over the ground-set R2 (the vectors in the two-dimensional Euclidean plane) formed by the three vectors (0,1), (2,0) and (0,3), with the independent sets being the linearly independent subsets: {}, {(0,1)}, {(2,0)}, {(0,3)}, {(0,1),(2,0)} and {(0,3),(2,0)}. It has two bases, which are the sets {(0,1),(2,0)} and {(0,3),(2,0)}. These are the only independent sets that are maximal under inclusion ({(0,1),(0,3)} is not independent, since the two vectors are parallel).
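A short brute-force sketch can recover these bases numerically (assuming NumPy; matrix rank serves as the linear-independence test):

```python
import numpy as np
from itertools import combinations

ground = [(0, 1), (2, 0), (0, 3)]

def independent(subset):
    # A set of vectors is independent iff the matrix it forms has full rank.
    return len(subset) == 0 or np.linalg.matrix_rank(np.array(subset)) == len(subset)

independents = [s for r in range(len(ground) + 1)
                for s in combinations(ground, r) if independent(list(s))]

# Bases = independent sets not properly contained in another independent set.
bases = [s for s in independents
         if not any(set(s) < set(t) for t in independents)]
print(bases)   # [((0, 1), (2, 0)), ((2, 0), (0, 3))]
```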
The bases have specialized names in several kinds of matroids:
In a graphic matroid, where the independent sets are the forests, the bases are called the spanning forests of the graph.
In a transversal matroid, where the independent sets are endpoints of matchings in a given bipartite graph, the bases are called transversals.
In a linear matroid, where the independent sets are the linearly-independent sets of vectors in a given vector-space, the bases are just called bases of the vector space. Hence, the concept of basis of a matroid generalizes the concept of basis from linear algebra.
In a uniform matroid, where the independent sets are all sets with cardinality at most k (for some integer k), the bases are all sets with cardinality exactly k.
In a partition matroid, where elements are partitioned into categories and the independent sets are all sets containing at most $k_c$ elements from each category $c$, the bases are all sets which contain exactly $k_c$ elements from category $c$.
In a free matroid, where all subsets of the ground-set E are independent, the unique basis is E.
Properties
Exchange
All matroids satisfy the following properties, for any two distinct bases $A$ and $B$:

Basis-exchange property: if $a \in A \setminus B$, then there exists an element $b \in B \setminus A$ such that $(A \setminus \{a\}) \cup \{b\}$ is a basis.

Symmetric basis-exchange property: if $a \in A \setminus B$, then there exists an element $b \in B \setminus A$ such that both $(A \setminus \{a\}) \cup \{b\}$ and $(B \setminus \{b\}) \cup \{a\}$ are bases. Brualdi showed that it is in fact equivalent to the basis-exchange property.

Multiple symmetric basis-exchange property: if $X \subseteq A \setminus B$, then there exists a subset $Y \subseteq B \setminus A$ such that both $(A \setminus X) \cup Y$ and $(B \setminus Y) \cup X$ are bases. Brylawski, Greene, and Woodall showed (independently) that it is in fact equivalent to the basis-exchange property.

Bijective basis-exchange property: There is a bijection $f$ from $A$ to $B$, such that for every $a \in A$, $(A \setminus \{a\}) \cup \{f(a)\}$ is a basis. Brualdi showed that it is equivalent to the basis-exchange property.

Partition basis-exchange property: For each partition $(A_1, \ldots, A_m)$ of $A$ into $m$ parts, there exists a partition $(B_1, \ldots, B_m)$ of $B$ into $m$ parts, such that for every $i$, $(A \setminus A_i) \cup B_i$ is a basis.
However, a basis-exchange property that is both symmetric and bijective is not satisfied by all matroids: it is satisfied only by base-orderable matroids.

In general, in the symmetric basis-exchange property, the element $b$ need not be unique. Regular matroids have the unique exchange property, meaning that for some $a \in A \setminus B$, the corresponding $b$ is unique.
Cardinality
It follows from the basis exchange property that no basis can be a proper subset of another basis.
Moreover, all bases of a given matroid have the same cardinality. In a linear matroid, the cardinality of all bases is called the dimension of the vector space.
Neil White's conjecture
It is conjectured that all matroids satisfy the following property: for every integer t ≥ 1, if B and B' are two t-tuples of bases with the same multi-set union, then there is a sequence of symmetric exchanges that transforms B to B'.
Characterization
The bases of a matroid characterize the matroid completely: a set is independent if and only if it is a subset of a basis. Moreover, one may define a matroid to be a pair $(E, \mathcal{B})$, where $E$ is the ground-set and $\mathcal{B}$ is a collection of subsets of $E$, called "bases", with the following properties:

(B1) There is at least one base -- $\mathcal{B}$ is nonempty;

(B2) If $A$ and $B$ are distinct bases, and $a \in A \setminus B$, then there exists an element $b \in B \setminus A$ such that $(A \setminus \{a\}) \cup \{b\}$ is a basis (this is the basis-exchange property).

(B2) implies that, given any two bases $A$ and $B$, we can transform $A$ into $B$ by a sequence of exchanges of a single element. In particular, this implies that all bases must have the same cardinality.
Duality
If $M = (E, \mathcal{B})$ is a finite matroid, we can define the orthogonal or dual matroid $M^*$ by calling a set a basis in $M^*$ if and only if its complement is in $\mathcal{B}$. It can be verified that $M^*$ is indeed a matroid. The definition immediately implies that the dual of $M^*$ is $M$.

Using duality, one can prove that the property (B2) can be replaced by the following:

(B2*) If $A$ and $B$ are distinct bases, and $b \in B \setminus A$, then there exists an element $a \in A \setminus B$ such that $(A \setminus \{a\}) \cup \{b\}$ is a basis.
Circuits
A dual notion to a basis is a circuit. A circuit in a matroid is a minimal dependent set—that is, a dependent set whose proper subsets are all independent. The terminology arises because the circuits of graphic matroids are cycles in the corresponding graphs.
One may define a matroid to be a pair $(E, \mathcal{C})$, where $E$ is the ground-set and $\mathcal{C}$ is a collection of subsets of $E$, called "circuits", with the following properties:

(C1) The empty set is not a circuit;

(C2) A proper subset of a circuit is not a circuit;

(C3) If $C_1$ and $C_2$ are distinct circuits, and $x$ is an element in their intersection, then $(C_1 \cup C_2) \setminus \{x\}$ contains a circuit.

Another property of circuits is that, if a set $J$ is independent, and the set $J \cup \{w\}$ is dependent (i.e., adding the element $w$ makes it dependent), then $J \cup \{w\}$ contains a unique circuit $C(w, J)$, and it contains $w$. This circuit is called the fundamental circuit of $w$ w.r.t. $J$. It is analogous to the linear algebra fact, that if adding a vector $w$ to an independent vector set makes it dependent, then there is a unique linear combination of elements of the set that equals $w$.
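For a linear matroid this fundamental circuit can be computed directly: solve for the unique representation of $w$ in terms of the independent set and keep the vectors with nonzero coefficients. A minimal NumPy sketch with an arbitrary example:

```python
import numpy as np

def fundamental_circuit(J, w):
    """Fundamental circuit of vector w w.r.t. linearly independent rows J."""
    # Unique coefficients c with c @ J == w (exists since J + w is dependent).
    c, *_ = np.linalg.lstsq(np.array(J).T, np.array(w), rcond=None)
    support = [tuple(v) for v, ci in zip(J, c) if abs(ci) > 1e-9]
    return support + [tuple(w)]

J = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(fundamental_circuit(J, (2, 3, 0)))
# [(1, 0, 0), (0, 1, 0), (2, 3, 0)] -- (0, 0, 1) has zero coefficient
```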
See also
Matroid polytope - a polytope in Rn (where n is the number of elements in the matroid), whose vertices are indicator vectors of the bases of the matroid.
References
Matroid theory | Basis of a matroid | Mathematics | 1,281 |
2,967,256 | https://en.wikipedia.org/wiki/Engel%20expansion | The Engel expansion of a positive real number x is the unique non-decreasing sequence of positive integers such that
For instance, Euler's number e has the Engel expansion
1, 1, 2, 3, 4, 5, 6, 7, 8, ...
corresponding to the infinite series

$$e = \frac{1}{1} + \frac{1}{1 \cdot 1} + \frac{1}{1 \cdot 1 \cdot 2} + \frac{1}{1 \cdot 1 \cdot 2 \cdot 3} + \cdots$$
Rational numbers have a finite Engel expansion, while irrational numbers have an infinite Engel expansion. If x is rational, its Engel expansion provides a representation of x as an Egyptian fraction. Engel expansions are named after Friedrich Engel, who studied them in 1913.
An expansion analogous to an Engel expansion, in which alternating terms are negative, is called a Pierce expansion.
Engel expansions, continued fractions, and Fibonacci
It has been observed that an Engel expansion can also be written as an ascending variant of a continued fraction:

$$x = \cfrac{1 + \cfrac{1 + \cfrac{1 + \cdots}{a_3}}{a_2}}{a_1}.$$
It has been claimed that ascending continued fractions such as this have been studied as early as Fibonacci's Liber Abaci (1202). This claim appears to refer to Fibonacci's compound fraction notation in which a sequence of numerators and denominators sharing the same fraction bar represents an ascending continued fraction:

$$\frac{a\ b\ c}{d\ e\ f} = \cfrac{c + \cfrac{b + \cfrac{a}{d}}{e}}{f}.$$
If such a notation has all numerators 0 or 1, as occurs in several instances in Liber Abaci, the result is an Engel expansion. However, Engel expansion as a general technique does not seem to be described by Fibonacci.
Algorithm for computing Engel expansions
To find the Engel expansion of x, let

$$u_1 = x, \qquad a_k = \left\lceil \frac{1}{u_k} \right\rceil,$$

and

$$u_{k+1} = u_k a_k - 1,$$

where $\lceil r \rceil$ is the ceiling function (the smallest integer not less than r).

If $u_i = 0$ for any i, halt the algorithm.
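This procedure translates directly into a few lines of Python; using exact Fraction arithmetic avoids floating-point error (a minimal sketch):

```python
from fractions import Fraction
from math import ceil

def engel_expansion(x, max_terms=20):
    """Engel expansion digits of a positive rational x."""
    u, digits = Fraction(x), []
    while u != 0 and len(digits) < max_terms:
        a = ceil(1 / u)        # a_k = ceil(1 / u_k)
        digits.append(a)
        u = u * a - 1          # u_{k+1} = u_k * a_k - 1
    return digits

print(engel_expansion(Fraction(47, 40)))   # 1.175 -> [1, 6, 20]
```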
Iterated functions for computing Engel expansions
Another equivalent method is to consider the map

$$g(x) = x \left\lceil \frac{1}{x} \right\rceil - 1$$

and set

$$a_k = \left\lceil \frac{1}{g^{(k-1)}(x)} \right\rceil,$$

where

$$g^{(k)}(x) = g\!\left(g^{(k-1)}(x)\right)$$

and

$$g^{(0)}(x) = x.$$
Yet another equivalent method is called the modified Engel expansion.
The transfer operator of the Engel map
The Frobenius–Perron transfer operator of the Engel map $g$ acts on functions $f$ on $(0,1]$ with

$$[\mathcal{L}f](x) = \sum_{n=1}^{\infty} \frac{1}{n+1}\, f\!\left(\frac{x+1}{n+1}\right),$$

since the branch of the map with first digit $n+1$ carries the interval $\left(\tfrac{1}{n+1}, \tfrac{1}{n}\right]$ onto $(0,1]$, and the inverse of the n-th component is $\frac{x+1}{n+1}$, which is found by solving $x = y(n+1) - 1$ for $y$.
Relation to the Riemann ζ function
The Mellin transform of the map is related to the Riemann zeta function by the formula
Example
To find the Engel expansion of 1.175, we perform the following steps:

$$u_1 = 1.175, \qquad a_1 = \lceil 1/1.175 \rceil = 1, \qquad u_2 = 1.175 \cdot 1 - 1 = 0.175;$$
$$a_2 = \lceil 1/0.175 \rceil = 6, \qquad u_3 = 0.175 \cdot 6 - 1 = 0.05;$$
$$a_3 = \lceil 1/0.05 \rceil = 20, \qquad u_4 = 0.05 \cdot 20 - 1 = 0.$$

The series ends here. Thus,

$$1.175 = \frac{1}{1} + \frac{1}{1 \cdot 6} + \frac{1}{1 \cdot 6 \cdot 20}$$

and the Engel expansion of 1.175 is (1, 6, 20).
Engel expansions of rational numbers
Every positive rational number has a unique finite Engel expansion. In the algorithm for Engel expansion, if ui is a rational number x/y, then ui + 1 = (−y mod x)/y. Therefore, at each step, the numerator in the remaining fraction ui decreases and the process of constructing the Engel expansion must terminate in a finite number of steps. Every rational number also has a unique infinite Engel expansion: using the identity

$$\frac{1}{n} = \sum_{i=1}^{\infty} \frac{1}{(n+1)^{i}},$$
the final digit n in a finite Engel expansion can be replaced by an infinite sequence of (n + 1)s without changing its value. For example,

$$1.175 = (1, 6, 20) = (1, 6, 21, 21, 21, \ldots).$$
This is analogous to the fact that any rational number with a finite decimal representation also has an infinite decimal representation (see 0.999...).
An infinite Engel expansion in which all terms are equal is a geometric series.
Erdős, Rényi, and Szüsz asked for nontrivial bounds on the length of the finite Engel expansion of a rational number x/y; this question was answered by Erdős and Shallit, who proved that the number of terms in the expansion is $O(y^{1/3 + \varepsilon})$ for any $\varepsilon > 0$.
The Engel expansion for arithmetic progressions
Consider this sum:
where and .
Thus, in general
,
where represents the lower Incomplete gamma function.
Specifically, if ,
.
Engel expansion for powers of q
The Gauss identity of the q-analog can be written as:
Using this identity, we can express the Engel expansion for powers of as follows:
Furthermore, this expression can be written in closed form as:
where is the second Theta function.
Engel expansions for some well-known constants
π = (1, 1, 1, 8, 8, 17, 19, 300, 1991, 2492, ...)
√2 = (1, 3, 5, 5, 16, 18, 78, 102, 120, 144, ...)
e = (1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, ...)
More Engel expansions for constants are tabulated in the On-Line Encyclopedia of Integer Sequences.
Growth rate of the expansion terms
The coefficients ai of the Engel expansion typically exhibit exponential growth; more precisely, for almost all numbers in the interval (0,1], the limit $\lim_{n \to \infty} a_n^{1/n}$ exists and is equal to e. However, the subset of the interval for which this is not the case is still large enough that its Hausdorff dimension is one.
The same typical growth rate applies to the terms in expansion generated by the greedy algorithm for Egyptian fractions. However, the set of real numbers in the interval (0,1] whose Engel expansions coincide with their greedy expansions has measure zero, and Hausdorff dimension 1/2.
See also
Euler's continued fraction formula
Notes
References
External links
Mathematical analysis
Continued fractions
Egyptian fractions | Engel expansion | Mathematics | 1,103 |
375,113 | https://en.wikipedia.org/wiki/Othala | Othala (), also known as ēðel and odal, is a rune that represents the o and œ phonemes in the Elder Futhark and the Anglo-Saxon Futhorc writing systems respectively. Its name is derived from the reconstructed Proto-Germanic *ōþala- "heritage; inheritance, inherited estate". As it does not occur in Younger Futhark, it disappears from the Scandinavian record around the 8th century, but its usage continued in England into the 11th century, where it was sometimes further used in manuscripts as a shorthand for the word ("homeland"), similarly to how other runes were sometimes used at the time.
As with other symbols used historically in Europe such as the swastika and Celtic cross, othala has been appropriated by far-right groups such as the Nazi party and neo-Nazis, who have used it to represent ideas like Aryan heritage, a usage that is wholly modern and not attested in any ancient or medieval source. The rune also continues to be used in non-racist contexts, both in Heathenry and in wider popular culture such as the works of J.R.R. Tolkien and video games.
Name and etymology
The sole attested name of the rune is ēðel, meaning "homeland". Based on this, and cognates in other Germanic languages such as Old Norse óðal, the Proto-Germanic form *ōþala- can be reconstructed, meaning "ancestral land", "the land owned by one's kin", and by extension "property" or "inheritance". *ōþala- is in turn derived from *aþala-, meaning "nobility" and "disposition".
Terms derived from *ōþala- are formative elements in some Germanic names, notably Ulrich.
The term "odal" () refers to Scandinavian laws of inheritance which established land rights for families that had owned that parcel of land over a number of generations, restricting its sale to others. Among other aspects, this protected the inheritance rights of daughters against males from outside the immediate family. Some of these laws remain in effect today in Norway as the Odelsrett (allodial right). The tradition of Udal law found in Shetland, Orkney, and the Isle of Man, is from the same origin.
Elder Futhark o-rune
The o-rune is attested early, in inscriptions from the 3rd century, such as the Thorsberg chape (DR7) and the Vimose planer (Vimose-Høvelen, DR 206).
The corresponding Gothic letter is 𐍉 (derived from Greek Ω), which had the name oþal. The othala rune is found in some transitional inscriptions of the 6th or 7th century, such as the Gummarp, Björketorp and Stentoften runestones, but it disappears from the Scandinavian record by the 8th century. The Old Norse o phoneme at this time becomes written in Younger Futhark in the same way as the u phoneme, with the Ur rune.
It has been suggested that the othala rune on the Ring of Pietroassa is used to represent the word "*oþal", referencing the ring as hereditary treasure. Similarly, Wolfgang Krause speculated that the o rune is used as an ideograph denoting possession in the Thorsberg chape inscription, reading the inscription owlþuþewaz as O[þila] - W[u]lþu-þewaz "inherited property - the servant of Wulþuz".
Anglo-Saxon œ-rune
Usage and shape
The Anglo-Saxon runes preserve the full set of 24 Elder Futhark runes (as well as introducing innovations), but in some cases these runes are given new sound values due to Anglo-Frisian sound changes. The othala rune is such a case: the o sound in the Anglo-Saxon system is now expressed by ōs ᚩ, a derivation of the old Ansuz rune; the othala rune is known in Old English as ēðel (with umlaut due to the form ōþila-) and is used to express an œ sound, but is attested only rarely in epigraphy (outside of simply appearing in a futhark row). In some runic inscriptions, such as on the Seax of Beagnoth, and more commonly in manuscripts, othala is written with a single vertical line instead of the two diagonal legs, perhaps due to its simpler form.
The rune is also used as a shorthand for the word ēðel ("ancestral property or land") in texts such as Beowulf, Waldere and the Old English translation of Orosius' Historiae adversus paganos. This is similar to wider practices of the time, in which other runes were also used as shorthands for the words that name them.
Notable attestations
Epigraphical attestations include:
the Frisian Westeremden yew-stick, possibly as part of a given name Ƿimod (Ƿimœd)
the Harford (Norfolk) brooch, dated c. 650, in a finite verb form: luda:gibœtæsigilæ "Luda repaired the brooch"
the left panel of the Franks Casket, twice: tƿœgen gibroþær afœddæ hiæ ƿylif "two brothers (scil. Romulus and Remus), a she-wolf nourished them".
Rune poem
The Anglo-Saxon rune poem preserves the meaning "an inherited estate" for the rune name:
Modern use
Far-right iconography
Deliberate use as a far-right symbol
The symbol derived from othala with wings or feet (serifs) was the badge of the SS Race and Settlement Main Office, which was responsible for maintaining the racial purity of the Nazi Schutzstaffel (SS). It was also the emblem of the 7th SS Volunteer Mountain Division Prinz Eugen, recruited from ethnic Germans (Volksdeutsche), which operated during World War II in the Nazi Germany-sponsored Independent State of Croatia.
The rune and the winged symbol have been used by neo-Nazi organisations in Germany, by the Italian neo-fascist group National Vanguard, and in South Africa by the Anglo-Afrikaner Bond, the Afrikaner Student Federation and the far-right White Liberation Movement before the latter was disbanded (Visser, Myda Marista, Die Ideologiese Grondslae en Ontwikkeling van die Blanke Fascistiese Bewegings in Suid-Afrika, 1945–1995 [The ideological foundations and development of white fascist movements in South Africa, 1945–1995], M.A. thesis, University of Pretoria, 1999, p. 164). In November 2016, the leadership of the National Socialist Movement announced its intention to replace the Nazi-pattern swastika with the othala rune on its uniforms and party regalia in an attempt to enter mainstream politics. The rune was further used by the Christchurch mosque shooter Brenton Harrison Tarrant, along with other traditional symbols from European cultures such as the Tiwaz rune and a Celtic cross, and slogans associated with Nazism and far-right extremism. The Heathen Front, a neo-Nazi group active from the 1990s to 2005, espoused a racist form of Heathenry and described its ideology as "odalism", in reference to an alternative name for othala.
White supremacists who use the rune often claim it symbolises the heritage or land of "white" or "Aryan" people, which they argue should be free from foreigners. It has been noted, however, that this usage is a modern invention by these groups and is not attested in any source from before the modern period; the runologist Michael Barnes has described it as "spring[ing] entirely from the imagination".
Alleged use as a far-right symbol
In some cases, individuals and organisations have been accused of using the rune as a far-right symbol, such as in April 2014 when the British Topman clothing company apologised after using it in one of their clothing lines. Furthermore, at the Conservative Political Action Conference (CPAC) held in Orlando, Florida, on February 25–28, 2021, the floor layout of the main stage resembled the winged form of the othala rune, leading to speculation on social media as to why that design was chosen. CPAC chairman Matt Schlapp said comparisons were "outrageous and slanderous". Design firm Design Foundry later took responsibility for the design of the stage, saying that it "intended to provide the best use of space, given the constraints of the ballroom and social distancing requirements." Ian Walters, director of communications for the ACU and CPAC, said they would stop using Design Foundry.
The neo-folk group Death in June used othala on the cover of their 7-inch single Come Before Christ and Murder Love, alongside their "Totenkopf 6" logo. The group does not openly support far-right ideologies; however, scholars have noted the group's fascination with Nazism and its extensive use of Nazi, and more broadly fascist, imagery.
Heathenry
Othala, along with runes more generally, often features prominently in the practices of Heathens, commonly appearing as decoration on items and in tattoos. The use of runes such as othala by far-right groups has been strongly condemned by some Heathen groups, including Asatru UK, which released a public statement that "[it] is categorically opposed to fascist movements, or any movements, using the symbols of our faith for hate".
Popular culture
The Anti-Defamation League notes that because it is part of the runic alphabet, the othala rune is used widely in a non-racist manner and should be interpreted in conjunction with its context.
As with other historical runes, othala is used by J.R.R. Tolkien in The Hobbit, as seen on Thror's map of Erebor, and as a basis for the dwarvish Cirth writing system used in The Lord of the Rings and described in Tolkien's Legendarium. Othala is also used as the symbol for the "Lore" resource in Northgard, released in 2018.
The name of the rune is also used in Stargate SG-1, in which Othala is a world in the Ida Galaxy where the Asgard had lived.
See also
Troll cross – A symbol which resembles the rune
References
Bibliography
Primary
Secondary
External links
Fascist symbols
Heraldic charges
Nazi symbolism
Runes
Symbols | Othala | Mathematics | 2,154 |
26,492,417 | https://en.wikipedia.org/wiki/Hypercyclic%20operator | In mathematics, especially functional analysis, a hypercyclic operator on a topological vector space X is a continuous linear operator T: X → X such that there is a vector x ∈ X for which the sequence {Tⁿx : n = 0, 1, 2, …} is dense in the whole space X. In other words, the smallest closed invariant subset containing x is the whole space. Such an x is then called a hypercyclic vector.
There is no hypercyclic operator in finite-dimensional spaces, but the property of hypercyclicity in spaces of infinite dimension is not a rare phenomenon: many operators are hypercyclic.
Hypercyclicity is a special case of the broader notions of topological transitivity (see topological mixing) and universality. Universality in general involves a set of mappings from one topological space to another (instead of a sequence of powers of a single operator mapping from X to X), but has a similar meaning to hypercyclicity. Examples of universal objects were discovered as early as 1914 by Julius Pál, followed by Józef Marcinkiewicz in 1935 and MacLane in 1952. However, it was not until the 1980s that hypercyclic operators began to be studied more intensively.
Examples
An example of a hypercyclic operator is two times the backward shift operator on the ℓ² sequence space, that is, the operator which takes a sequence
(a₁, a₂, a₃, …) ∈ ℓ²
to the sequence
(2a₂, 2a₃, 2a₄, …) ∈ ℓ².
This was proved in 1969 by Rolewicz.
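A short calculation makes this plausible. Writing T = 2B for the doubled backward shift, the n-th iterate shifts n places to the left and amplifies by 2ⁿ:

```latex
% n-th iterate of T = 2B on \ell^2: shift left by n, scale by 2^n
T^{n}(a_1, a_2, a_3, \dots) = 2^{n}\,(a_{n+1},\, a_{n+2},\, a_{n+3},\, \dots)
```

The factor 2ⁿ offsets the decay that membership in ℓ² forces on the tail of a sequence, so (sketching the standard construction rather than the full proof) one can hide suitably scaled-down copies of a countable dense subset of ℓ² far out in the tail of a single vector x; successive iterates bring each copy to the front at full size, making the orbit of x dense.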
Known results
On every infinite-dimensional separable Fréchet space there is a hypercyclic operator. On the other hand, there is no hypercyclic operator on a finite-dimensional space, nor on a non-separable space.
If x is a hypercyclic vector, then Tⁿx is hypercyclic as well, so there is always a dense set of hypercyclic vectors.
Moreover, the set of hypercyclic vectors is a connected Gδ set when X is a metrizable space and, together with the zero vector, always contains a dense linear subspace.
Charles Read constructed an operator on ℓ¹ such that all the non-zero vectors are hypercyclic, providing a counterexample to the invariant subspace problem (and even to the invariant subset problem) in the class of Banach spaces. The problem of whether such an operator (sometimes called hypertransitive, or orbit transitive) exists on a separable Hilbert space is still open (as of 2022).
References
See also
Topological mixing
Functional analysis
Operator theory
Invariant subspaces | Hypercyclic operator | Mathematics | 557 |
66,839,757 | https://en.wikipedia.org/wiki/Astrovirology | Astrovirology is an emerging subdiscipline of astrobiology which aims to understand what role viruses played in the origin and evolution of life on Earth as well as the potential for viruses beyond Earth.
Viruses and early life on Earth
Viruses drive evolution
Viruses are a major driving force in evolution; the arms race between viruses and their hosts, described by the Red Queen hypothesis, exerts strong evolutionary pressure on both. The host evolves to evade and destroy viruses, while the virus evolves mechanisms to continue infecting the host. Evolution is also influenced by viral horizontal gene transfer. Viral genes can be inserted into the host genome (e.g., by retroviruses), and sometimes these genes are evolutionarily favorable. One common example of beneficial horizontal gene transfer in humans is the gene for syncytin, which came from an ancient virus and is important in placenta development.
Viruses influence major evolutionary events
Though unproven, some virologists posit that viruses may have played an important role in major evolutionary events, including the emergence of a DNA genome from an RNA world, the divergence from LUCA into the three domains of life (archaea, bacteria, and eukarya), and the development of multicellularity. The emergence of a DNA genome and divergence from LUCA may have been aided by horizontal gene transfer of polymerases and other gene-editing enzymes from viruses. Viral selection pressures could likewise have driven divergence from LUCA, as lineages diverged to defend against different viruses, while multicellularity provides cell populations greater protection from viruses.
Viruses and Earth's environment
Viruses influence biogeochemical cycles
Viruses cause nutrient cycling in the ocean via the viral shunt, and up to 25% of the available carbon in the upper ocean is attributed to virus-induced cell lysis.
Around 5% of Earth's oxygen is thought to be produced by cells infected by viruses encoding photosynthetic genes otherwise absent from the cell. For example, some viruses of cyanobacteria contain genes for Photosystem II, which allow the infected cyanobacteria to photosynthesize and live in a different part of the ocean than their non-infected counterparts. Some viruses encode other metabolic genes that confer new metabolic functions on their host, for example in phosphate, carbon, and sulfur metabolism.
Extremophile viruses
Viruses have been found in natural environments that are extremely hot, extremely cold, and extremely acidic (down to pH 1.5).
Viruses in space
Infectivity in space
Viruses including tobacco mosaic virus, poliovirus, and bacteriophage T1 have maintained infectivity after being exposed to space-like conditions including interstellar radiation, low temperature, and low pressure. Further studies are needed to assess the risk of viral hitchhikers, but any virus infecting an organism inside a habitable spacecraft can survive as long as that organism survives.
Effect on astronauts
Latent viruses such as herpes virus, prevalent in humans, can become reactive during spaceflight due to spaceflight stressors. While astronauts experienced few if any symptoms, the potential for other viruses to become reactivated or more virulent is a substantial threat.
Furthermore, some bacteria (Serratia marcescens) have been found to be more virulent in spaceflight conditions, leading to a question of whether viruses could also become more virulent.
Forward contamination potential
Limiting forward contamination is critical to be confident in the results of life detection efforts. Bacteria pose a significant contamination challenge in spacecraft assembly clean rooms despite decontamination procedures. However, viruses were found to be present at relatively low levels, based on a metagenomic analysis. Another metagenomic study detected viable human viruses, including herpesvirus and cycloviruses.
Back contamination potential
Life (and viruses) on other planetary bodies have two important potential origins: transport from Earth, or a second genesis (life that originated on that planet itself). Ancient viruses could have been transported from Earth to another planetary body, perhaps following a massive meteorite impact or volcanic eruption. If this occurred, these viruses would likely be biologically very similar to modern organisms. There may be minimal or no immunity among Earth life against such an ancient virus, and whatever organism it can infect may be crippled by its re-introduction.
If extraterrestrial viruses are part of a second genesis, their infectivity of Earth life depends on how they encode their genetic information. While their encoding could be incompatible with Earth life, it is also possible that RNA, DNA, or similar molecules could encode for life in the second genesis. In this case, Earth life may be a suitable host.
Potential biosignatures/detection methods
While viruses may or may not be "alive", detection of virions on another planet would be powerful indirect evidence for life. The following methods could offer biosignatures with varying levels of usefulness:
Scanning electron microscopy: SEM has potential to be integrated onto a spacecraft, but currently lacks the resolution to detect virion structure.
Transmission electron microscopy: TEM can visualize virion structure, but the imaging procedure is more difficult than SEM, and so integration onto an automated spacecraft seems unlikely.
Lipid detection in rock: Enveloped viruses may be identifiable via this method.
Chemical identification: Specific chemicals can be identified via GC-MS, NMR, or FTIR spectroscopy.
Virus-mediated event: Large-scale lysis of a given host cell can cause easily detectable effects. For example, the chalk deposits in the White Cliffs of Dover are the result of large-scale lysis of algae, which could have been virus-induced.
Proposed and current life detection missions
Astrovirologists have called for proposed missions to sample the water plumes of Enceladus and/or Europa for viruses. Others have called for virus detection as part of Mars rover missions like the Rosalind Franklin rover. However, given the lack of validated biosignatures to detect viruses in situ, sample return to Earth has been recommended, which would allow use of TEM and other detection methods requiring complex sample preparation and/or large equipment. The Mars 2020 Perseverance rover has equipment to drill regolith samples and store them for sample return on a future Mars mission.
References
Viruses
Astrobiology
Evolution | Astrovirology | Astronomy,Biology | 1,269 |
11,421,143 | https://en.wikipedia.org/wiki/IS128%20RNA | The IS128 RNA is a non-coding RNA found in bacteria such as Escherichia coli and Shigella flexneri. The RNA is 209 nucleotides in length. It is found between the sseA and sseB genes. The IS128 RNA was initially identified in a computational screen of the E. coli genome. The function of this RNA is unknown.
See also
IS061 RNA
IS102 RNA
References
External links
Non-coding RNA | IS128 RNA | Chemistry | 99 |
25,847,465 | https://en.wikipedia.org/wiki/Steensen%20Varming | Steensen Varming is an engineering firm headquartered in Copenhagen, Denmark.
History
It was founded by Niels Steensen and Jørgen Varming in Copenhagen, Denmark, in 1933. The firm specialised in civil, structural and building services engineering. During the 20th century, the practice expanded beyond Denmark, with new offices established in Australia (Steensen Varming Australia, 1973), the United Kingdom (Steensen Varming Mulcahy, 1957) and Ireland (Varming Mulcahy Reilly Associates, 1947).
Jørgen Varming was the son of a prominent Danish architect, Kristoffer Varming; Jørgen studied engineering at the University of Newcastle.
Sydney Opera House
Steensen and Varming were chosen by the Danish architect Jørn Utzon as the mechanical consulting engineers for the Sydney Opera House in 1957. The Australian branch, Steensen & Varming Australia (later known as Steensen Varming), was led by Vagn Prestmark, a partner from the Danish Steensen & Varming firm.
Prestmark established Steensen Varming in Australia in 1957, and the company was permanently established in Australia in 1973.
Steensen & Varming was not well known in Australia prior to the Sydney Opera House; however, it was well established in Europe, with offices in Dublin, Belfast, London, Edinburgh and Copenhagen, and employed over 500 people by 1973.
When Utzon resigned from the Sydney Opera House in 1966, Steensen & Varming continued as the mechanical consultants ultimately delivering the design, documentation, contract administration and detailed site supervision of all mechanical, hydraulic and fire protection services, including the controls and supervisory system.
Steensen Varming's best-known contribution to the Sydney Opera House was the design of the water heat pump system. The architects and engineers agreed that constructing a boiler chimney stack or a cooling tower would not be in keeping with the design of the Opera House, which ruled out the two conventional approaches to large-scale air conditioning. Steensen Varming provided the design solution: a heat pump system using water from the harbour as the cooling medium.
Three main considerations led to the design of the Opera House air conditioning as a heat pump system: the availability of the waters of Sydney Harbour as a heat source and sink, aesthetics, and the savings that could be achieved with a water-to-water heat pump. Three pumps draw water from Circular Quay; the water is filtered to remove debris, passes through heat-exchanger tubes, and is discharged into the harbour on the opposite side of the Opera House. Fresh water circulates between the heat exchanger shells and the shells of the condensers and evaporators of three centrifugal chiller/heat pump sets.
The design innovation and technical expertise demonstrated in this landmark project subsequently led to the awarding of other projects in Australia to the Steensen Varming practice.
The engineering construction of the Sydney Opera House was featured in a National Geographic/BBC production hosted by Richard Hammond called Engineering Connections. The programme aired in Australia on 13 March 2010. Part of the documentary featured the seawater heat rejection system originally designed by Steensen Varming, who assisted the production team as technical liaison.
Australian projects
Ian Thorpe Aquatic Centre
Steensen Varming was the first Australian organisation to win an Award of Excellence from the International Association of Lighting Designers for the lighting of the Ian Thorpe Aquatic Centre, Sydney. The Ian Thorpe Aquatic Centre was one of the last architectural designs by the architect Harry Seidler and was completed in 2008.
The Mint, Historic Houses Trust Australia
The Sydney Mint was recently named as one of 30 projects that have reshaped the built environment since 1978. "The refurbishment project is an example of the Integration of services systems (by Steensen Varming), to provide a modern, functional headquarters while minimising the impact on the heritage and archaeological fabric of a site."
References
External links
Construction and civil engineering companies established in 1933
Danish brands
Danish companies established in 1933
Engineering consulting firms
Engineering companies of Denmark
Engineering companies of Australia
Engineering companies of the United Kingdom
Engineering companies of the Republic of Ireland
Service companies based in Copenhagen
Companies based in Copenhagen Municipality | Steensen Varming | Engineering | 867 |
67,982,033 | https://en.wikipedia.org/wiki/Romeo%20V.%20Turcan | Romeo V. Turcan (born 24 April 1970) is a professor at Aalborg University Business School. His research interests include creation and legitimation of new sectors and new organizations; Late-globalization, de-globalization, de-internationalization; Bubbles, collective behavior; High impact international entrepreneurship; and Cross-disciplinary theory building.
Education and career
Turcan holds a degree in mechanical engineering from the Air Force Engineering Military Academy, Riga, Latvia (1992) and in Philology from the Department of Post-University Studies, Moldova State University, Chișinău, Moldova (1995). In 2000, he received his MSc in International Marketing from the Department of Marketing, University of Strathclyde, Glasgow, United Kingdom; and in 2006, he received his PhD in International Entrepreneurship from the Hunter Centre for Entrepreneurship, University of Strathclyde.
Prior to commencing his academic career, Turcan worked in a range of posts involving public policy intervention in restructuring, rationalizing and modernizing business and public sectors such as power, oil, military high-tech, management consulting, information and communications technology (ICT) and higher education. In addition, he is the co-founder and former Executive Director of the International Association of Business and Parliament – Moldova.
He has also been a member of various boards including the board of Enterprise and Parliamentary Dialogue International, London, UK (2013-2019) and the board of The International Society of Markets and Development (2019-). He is chairman of the Organization of Moldovans in Denmark.
He is currently the project coordinator for an ERASMUS+ Strategic Partnership project (2019-2023) and an H2020 Marie S. Curie project (2020-2024). In addition, he is the founder and coordinator of the Theory Building Research Program (2012-).
Honors
Since 2012, Turcan has been the main applicant and coordinator of four EU-funded projects, including a Marie S. Curie ITN, with a total value of more than €7.3 million:
"Legitimation of Newness and Its Impact on EU Agenda for Change", Marie S. Curie project (2020-2023, main applicant and coordinator)
"International Entrepreneurship Network for PhD and PhD Supervisor Training", Strategic Partnership (Erasmus+) project (2019-2022, main applicant and coordinator)
"PBLMD-TOPUP", ERASMUS+ Learning Mobility of Individuals (2017-2018, main applicant and coordinator)
"Introducing Problem Based Learning in Moldova: Toward Enhancing Students’ Competitiveness and Employability", ERASMUS+ Capacity Building national project (2015-2019, main applicant and coordinator)
"Enhancing University Autonomy in Moldova", ERASMUS+ Capacity Building structural project (2012-2015, main applicant and coordinator)
Publications
References
Academic staff of Aalborg University
People from Drochia District
1970 births
Living people
Moldovan engineers
Mechanical engineers | Romeo V. Turcan | Engineering | 586 |
2,590,921 | https://en.wikipedia.org/wiki/Animal%20engine | An animal engine is a machine powered by an animal. Horses, donkeys, oxen, dogs, and humans have all been used in this way. An unusual example of an animal engine was recorded at Portland, Victoria in 1866. A kangaroo had been tamed and trained to work a treadmill which drove various items of machinery.
See also
Experiment (horse powered boat)
Gin gang
Horse mill
Horse engine
Persian well
Treadwheel
Turnspit dog
Books
J. Kenneth Major, Animal Powered Machines, Shire Album 128, Shire Publications, 1985.
References
Grinding mills
Machinery | Animal engine | Physics,Technology,Engineering | 112 |
21,731,590 | https://en.wikipedia.org/wiki/RNA-Seq | RNA-Seq (named as an abbreviation of RNA sequencing) is a technique that uses next-generation sequencing to reveal the presence and quantity of RNA molecules in a biological sample, providing a snapshot of gene expression in the sample, also known as transcriptome.
Specifically, RNA-Seq facilitates the ability to look at alternatively spliced transcripts, post-transcriptional modifications, gene fusions, mutations/SNPs and changes in gene expression over time, or differences in gene expression between groups or treatments. In addition to mRNA transcripts, RNA-Seq can look at different populations of RNA, including total RNA and small RNA such as miRNA and tRNA, as well as ribosomal profiling. RNA-Seq can also be used to determine exon/intron boundaries and verify or amend previously annotated 5' and 3' gene boundaries. Recent advances in RNA-Seq include single-cell sequencing, bulk RNA sequencing, 3' mRNA-sequencing, in situ sequencing of fixed tissue, and native RNA molecule sequencing with single-molecule real-time sequencing. Other examples of emerging RNA-Seq applications, enabled by advances in bioinformatics algorithms, are detection of copy number alterations, microbial contamination and transposable elements, cell type deconvolution, and identification of neoantigens.
Prior to RNA-Seq, gene expression studies were done with hybridization-based microarrays. Issues with microarrays include cross-hybridization artifacts, poor quantification of lowly and highly expressed genes, and needing to know the sequence a priori. Because of these technical issues, transcriptomics transitioned to sequencing-based methods. These progressed from Sanger sequencing of Expressed sequence tag libraries, to chemical tag-based methods (e.g., serial analysis of gene expression), and finally to the current technology, next-gen sequencing of complementary DNA (cDNA), notably RNA-Seq.
Methods
Library preparation
The general steps to prepare a complementary DNA (cDNA) library for sequencing are described below, but often vary between platforms.
RNA Isolation: RNA is isolated from tissue and mixed with Deoxyribonuclease (DNase). DNase reduces the amount of genomic DNA. The amount of RNA degradation is checked with gel and capillary electrophoresis and is used to assign an RNA integrity number to the sample. This RNA quality and the total amount of starting RNA are taken into consideration during the subsequent library preparation, sequencing, and analysis steps.
RNA selection/depletion: To analyze signals of interest, the isolated RNA can either be kept as is, enriched for RNA with 3' polyadenylated (poly(A)) tails to include only eukaryotic mRNA, depleted of ribosomal RNA (rRNA), and/or filtered for RNA that binds specific sequences (RNA selection and depletion methods table, below). RNA molecules having 3' poly(A) tails in eukaryotes are mainly composed of mature, processed, coding sequences. Poly(A) selection is performed by mixing RNA with oligomers covalently attached to a substrate, typically magnetic beads. Poly(A) selection has important limitations in RNA biotype detection. Many RNA biotypes are not polyadenylated, including many noncoding RNA and histone-core protein transcripts, or are regulated via their poly(A) tail length (e.g., cytokines) and thus might not be detected after poly(A) selection. Furthermore, poly(A) selection may display increased 3' bias, especially with lower quality RNA. These limitations can be avoided with ribosomal depletion, removing rRNA that typically represents over 90% of the RNA in a cell. Both poly(A) enrichment and ribosomal depletion steps are labor intensive and could introduce biases, so more simple approaches have been developed to omit these steps. Small RNA targets, such as miRNA, can be further isolated through size selection with exclusion gels, magnetic beads, or commercial kits.
cDNA synthesis: RNA is reverse transcribed to cDNA because DNA is more stable and to allow for amplification (which uses DNA polymerases) and leverage more mature DNA sequencing technology. Amplification subsequent to reverse transcription results in loss of strandedness, which can be avoided with chemical labeling or single molecule sequencing. Fragmentation and size selection are performed to purify sequences that are the appropriate length for the sequencing machine. The RNA, cDNA, or both are fragmented with enzymes, sonication, divalent ions, or nebulizers. Fragmentation of the RNA reduces 5' bias of randomly primed-reverse transcription and the influence of primer binding sites, with the downside that the 5' and 3' ends are converted to DNA less efficiently. Fragmentation is followed by size selection, where either small sequences are removed or a tight range of sequence lengths are selected. Because small RNAs like miRNAs are lost, these are analyzed independently. The cDNA for each experiment can be indexed with a hexamer or octamer barcode, so that these experiments can be pooled into a single lane for multiplexed sequencing.
Complementary DNA sequencing (cDNA-Seq)
The cDNA library derived from RNA biotypes is then sequenced into a computer-readable format. There are many high-throughput sequencing technologies for cDNA sequencing, including platforms developed by Illumina, Thermo Fisher, BGI/MGI, PacBio, and Oxford Nanopore Technologies. For Illumina short-read sequencing, a common technology for cDNA sequencing, adapters are ligated to the cDNA, DNA is attached to a flow cell, clusters are generated through cycles of bridge amplification and denaturing, and sequencing-by-synthesis is performed in cycles of complementary strand synthesis and laser excitation of bases with reversible terminators. Sequencing platform choice and parameters are guided by experimental design and cost. Common experimental design considerations include deciding on the sequencing length, sequencing depth, use of single versus paired-end sequencing, number of replicates, multiplexing, randomization, and spike-ins.
Small RNA/non-coding RNA sequencing
When sequencing RNA other than mRNA, the library preparation is modified. The cellular RNA is selected based on the desired size range. For small RNA targets, such as miRNA, the RNA is isolated through size selection. This can be performed with a size exclusion gel, through size selection magnetic beads, or with a commercially developed kit. Once isolated, linkers are added to the 3' and 5' end then purified. The final step is cDNA generation through reverse transcription.
Direct RNA sequencing
Because converting RNA into cDNA, ligation, amplification, and other sample manipulations have been shown to introduce biases and artifacts that may interfere with both the proper characterization and quantification of transcripts, single molecule direct RNA sequencing has been explored by companies including Helicos (bankrupt), Oxford Nanopore Technologies, and others. This technology sequences RNA molecules directly in a massively-parallel manner.
Single-molecule real-time RNA sequencing
Massively parallel single molecule direct RNA-Seq has been explored as an alternative to traditional RNA-Seq, in which RNA-to-cDNA conversion, ligation, amplification, and other sample manipulation steps may introduce biases and artifacts. Technology platforms that perform single-molecule real-time RNA-Seq include Oxford Nanopore Technologies (ONT) Nanopore sequencing, PacBio IsoSeq, and Helicos (bankrupt). Sequencing RNA in its native form preserves modifications like methylation, allowing them to be investigated directly and simultaneously. Another benefit of single-molecule RNA-Seq is that transcripts can be covered in full length, allowing for higher confidence isoform detection and quantification compared to short-read sequencing. Traditionally, single-molecule RNA-Seq methods have higher error rates compared to short-read sequencing, but newer methods like ONT direct RNA-Seq limit errors by avoiding fragmentation and cDNA conversion. Recent uses of ONT direct RNA-Seq for differential expression in human cell populations have demonstrated that this technology can overcome many limitations of short and long cDNA sequencing.
Single-cell RNA sequencing (scRNA-Seq)
Standard methods such as microarrays and standard bulk RNA-Seq analysis analyze the expression of RNAs from large populations of cells. In mixed cell populations, these measurements may obscure critical differences between individual cells within these populations.
Single-cell RNA sequencing (scRNA-Seq) provides the expression profiles of individual cells. Although it is not possible to obtain complete information on every RNA expressed by each cell, due to the small amount of material available, patterns of gene expression can be identified through gene clustering analyses. This can uncover the existence of rare cell types within a cell population that may never have been seen before. For example, rare specialized cells in the lung called pulmonary ionocytes that express the Cystic fibrosis transmembrane conductance regulator were identified in 2018 by two groups performing scRNA-Seq on lung airway epithelia.
Experimental procedures
Current scRNA-Seq protocols involve the following steps: isolation of single cell and RNA, reverse transcription (RT), amplification, library generation and sequencing. Single cells are either mechanically separated into microwells (e.g., BD Rhapsody, Takara ICELL8, Vycap Puncher Platform, or CellMicrosystems CellRaft) or encapsulated in droplets (e.g., 10x Genomics Chromium, Illumina Bio-Rad ddSEQ, 1CellBio InDrop, Dolomite Bio Nadia). Single cells are labeled by adding beads with barcoded oligonucleotides; both cells and beads are supplied in limited amounts such that co-occupancy with multiple cells and beads is a very rare event. Once reverse transcription is complete, the cDNAs from many cells can be mixed together for sequencing; transcripts from a particular cell are identified by each cell's unique barcode. Unique molecular identifier (UMIs) can be attached to mRNA/cDNA target sequences to help identify artifacts during library preparation.
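As an illustration of how barcodes and UMIs are used downstream, the following minimal sketch collapses aligned reads into per-cell, per-gene molecule counts; it is a toy version of what droplet pipelines do, and the record layout is a simplifying assumption:

```python
from collections import defaultdict

def umi_count_matrix(records):
    """Collapse reads into molecule counts, one per unique
    (cell barcode, gene, UMI) triple.

    records: iterable of (cell_barcode, gene, umi) tuples parsed from
    aligned reads; PCR duplicates share all three fields and are
    counted only once.
    """
    seen = set()
    counts = defaultdict(int)  # (cell, gene) -> distinct molecules
    for cell, gene, umi in records:
        key = (cell, gene, umi)
        if key not in seen:
            seen.add(key)
            counts[(cell, gene)] += 1
    return counts

reads = [("ACGT", "GAPDH", "AAT"), ("ACGT", "GAPDH", "AAT"),  # PCR duplicate
         ("ACGT", "GAPDH", "CCG"), ("TTGC", "ACTB", "AAT")]
print(umi_count_matrix(reads))  # {('ACGT', 'GAPDH'): 2, ('TTGC', 'ACTB'): 1}
```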
Challenges for scRNA-Seq include preserving the initial relative abundance of mRNA in a cell and identifying rare transcripts. The reverse transcription step is critical as the efficiency of the RT reaction determines how much of the cell's RNA population will be eventually analyzed by the sequencer. The processivity of reverse transcriptases and the priming strategies used may affect full-length cDNA production and the generation of libraries biased toward the 3' or 5' end of genes.
In the amplification step, either PCR or in vitro transcription (IVT) is currently used to amplify cDNA. One of the advantages of PCR-based methods is the ability to generate full-length cDNA. However, different PCR efficiency on particular sequences (for instance, GC content and snapback structure) may also be exponentially amplified, producing libraries with uneven coverage. On the other hand, while libraries generated by IVT can avoid PCR-induced sequence bias, specific sequences may be transcribed inefficiently, thus causing sequence drop-out or generating incomplete sequences.
Several scRNA-Seq protocols have been published, including Tang et al., STRT, SMART-seq, CEL-seq, RAGE-seq, Quartz-seq and C1-CAGE. These protocols differ in terms of strategies for reverse transcription, cDNA synthesis and amplification, the possibility to accommodate sequence-specific barcodes (i.e. UMIs), and the ability to process pooled samples.
In 2017, two approaches were introduced to simultaneously measure single-cell mRNA and protein expression through oligonucleotide-labeled antibodies known as REAP-seq, and CITE-seq.
Applications
scRNA-Seq is becoming widely used across biological disciplines including development, neurology, oncology, autoimmune disease, and infectious disease.
scRNA-Seq has provided considerable insight into the development of embryos and organisms, including the worm Caenorhabditis elegans and the regenerative planarian Schmidtea mediterranea. The first vertebrate animals to be mapped in this way were zebrafish and Xenopus laevis. In each case multiple stages of the embryo were studied, allowing the entire process of development to be mapped on a cell-by-cell basis. Science recognized these advances as the 2018 Breakthrough of the Year.
Experimental considerations
A variety of parameters are considered when designing and conducting RNA-Seq experiments:
Tissue specificity: Gene expression varies within and between tissues, and RNA-Seq measures this mix of cell types. This may make it difficult to isolate the biological mechanism of interest. Single cell sequencing can be used to study each cell individually, mitigating this issue.
Time dependence: Gene expression changes over time, and RNA-Seq only takes a snapshot. Time course experiments can be performed to observe changes in the transcriptome.
Coverage (also known as depth): RNA harbors the same mutations observed in DNA, and detection requires deeper coverage. With high enough coverage, RNA-Seq can be used to estimate the expression of each allele. This may provide insight into phenomena such as imprinting or cis-regulatory effects. The depth of sequencing required for specific applications can be extrapolated from a pilot experiment.
Data generation artifacts (also known as technical variance): The reagents (e.g., library preparation kit), personnel involved, and type of sequencer (e.g., Illumina, Pacific Biosciences) can result in technical artifacts that might be mis-interpreted as meaningful results. As with any scientific experiment, it is prudent to conduct RNA-Seq in a well controlled setting. If this is not possible or the study is a meta-analysis, another solution is to detect technical artifacts by inferring latent variables (typically principal component analysis or factor analysis) and subsequently correcting for these variables.
Data management: A single RNA-Seq experiment in humans is usually 1-5 Gb (compressed), or more when including intermediate files. This large volume of data can pose storage issues. One solution is compressing the data using multi-purpose computational schemas (e.g., gzip) or genomics-specific schemas. The latter can be based on reference sequences or de novo. Another solution is to perform microarray experiments, which may be sufficient for hypothesis-driven work or replication studies (as opposed to exploratory research).
Analysis
Transcriptome assembly
Two methods are used to assign raw sequence reads to genomic features (i.e., assemble the transcriptome):
De novo: This approach does not require a reference genome to reconstruct the transcriptome, and is typically used if the genome is unknown, incomplete, or substantially altered compared to the reference. Challenges when using short reads for de novo assembly include 1) determining which reads should be joined together into contiguous sequences (contigs), 2) robustness to sequencing errors and other artifacts, and 3) computational efficiency. The primary algorithm used for de novo assembly transitioned from overlap graphs, which identify all pair-wise overlaps between reads, to de Bruijn graphs, which break reads into sequences of length k and collapse all k-mers into a hash table (see the k-mer sketch after this list). Overlap graphs were used with Sanger sequencing, but do not scale well to the millions of reads generated with RNA-Seq. Examples of assemblers that use de Bruijn graphs are Trinity, Oases (derived from the genome assembler Velvet), Bridger, and rnaSPAdes. Paired-end and long-read sequencing of the same sample can mitigate the deficits of short-read sequencing by serving as a template or skeleton. Metrics to assess the quality of a de novo assembly include median contig length, number of contigs and N50.
Genome guided: This approach relies on the same methods used for DNA alignment, with the additional complexity of aligning reads that cover non-continuous portions of the reference genome. These non-continuous reads are the result of sequencing spliced transcripts (see figure). Typically, alignment algorithms have two steps: 1) align short portions of the read (i.e., seed the genome), and 2) use dynamic programming to find an optimal alignment, sometimes in combination with known annotations. Software tools that use genome-guided alignment include Bowtie, TopHat (which builds on BowTie results to align splice junctions), Subread, STAR, HISAT2, and GMAP. The output of genome guided alignment (mapping) tools can be further used by tools such as Cufflinks or StringTie to reconstruct contiguous transcript sequences (i.e., a FASTA file). The quality of a genome guided assembly can be measured with both 1) de novo assembly metrics (e.g., N50) and 2) comparisons to known transcript, splice junction, genome, and protein sequences using precision, recall, or their combination (e.g., F1 score). In addition, in silico assessment could be performed using simulated reads.
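A toy k-mer decomposition, the basic operation behind the de Bruijn graph assembly referenced in the de novo entry above (a minimal sketch; real assemblers also handle reverse complements, error correction, and graph compaction):

```python
def kmers(read, k):
    """Break a read into its overlapping k-mers; shared k-mers are how a
    de Bruijn graph joins reads into contigs."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

print(kmers("ATGGCGT", 4))  # ['ATGG', 'TGGC', 'GGCG', 'GCGT']
print(kmers("GGCGTAC", 4))  # ['GGCG', 'GCGT', 'CGTA', 'GTAC'] -- overlaps the read above
```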
A note on assembly quality: The current consensus is that 1) assembly quality can vary depending on which metric is used, 2) assembly tools that score well in one species do not necessarily perform well in other species, and 3) combining different approaches may be the most reliable.
Gene expression quantification
Expression is quantified to study cellular changes in response to external stimuli, differences between healthy and diseased states, and other research questions. Transcript levels are often used as a proxy for protein abundance, but these are often not equivalent due to post transcriptional events such as RNA interference and nonsense-mediated decay.
Expression is quantified by counting the number of reads that mapped to each locus in the transcriptome assembly step. Expression can be quantified for exons or genes using contigs or reference transcript annotations. These observed RNA-Seq read counts have been robustly validated against older technologies, including expression microarrays and qPCR. Tools that quantify counts are HTSeq, FeatureCounts, Rcount, maxcounts, FIXSEQ, and Cuffquant. These tools determine read counts from aligned RNA-Seq data, but alignment-free counts can also be obtained with Sailfish and Kallisto. The read counts are then converted into appropriate metrics for hypothesis testing, regressions, and other analyses. Parameters for this conversion are:
Sequencing depth/coverage: Although depth is pre-specified when conducting multiple RNA-Seq experiments, it will still vary widely between experiments. Therefore, the total number of reads generated in a single experiment is typically normalized by converting counts to fragments, reads, or counts per million mapped reads (FPM, RPM, or CPM). The difference between RPM and FPM was historically derived during the evolution from single-end sequencing of fragments to paired-end sequencing. In single-end sequencing, there is only one read per fragment (i.e., RPM = FPM). In paired-end sequencing, there are two reads per fragment (i.e., RPM = 2 x FPM). Sequencing depth is sometimes referred to as library size, the number of intermediary cDNA molecules in the experiment.
Gene length: Longer genes will have more fragments/reads/counts than shorter genes if transcript expression is the same. This is adjusted by dividing the FPM by the length of a feature (which can be a gene, transcript, or exon), resulting in the metric fragments per kilobase of feature per million mapped reads (FPKM). When looking at groups of features across samples, FPKM is converted to transcripts per million (TPM) by dividing each FPKM by the sum of FPKMs within a sample and multiplying by 10⁶ (see the sketch after this list).
Total sample RNA output: Because the same amount of RNA is extracted from each sample, samples with more total RNA will have less RNA per gene. These genes appear to have decreased expression, resulting in false positives in downstream analyses. Normalization strategies including quantile, DESeq2, TMM and Median Ratio attempt to account for this difference by comparing a set of non-differentially expressed genes between samples and scaling accordingly.
Variance of each gene's expression: this is modeled to account for sampling error (important for genes with low read counts), to increase power, and to decrease false positives. Variance can be estimated as a normal, Poisson, or negative binomial distribution and is frequently decomposed into technical and biological variance.
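A minimal sketch of the depth and length conversions described in this list (illustrative only; production analyses use effective transcript lengths and more robust between-sample normalization such as TMM or DESeq2's median-of-ratios):

```python
import numpy as np

def cpm_fpkm_tpm(counts, lengths_kb):
    """Convert per-gene read counts of one sample into CPM, FPKM, and TPM.

    counts: raw reads mapped to each gene.
    lengths_kb: gene (or transcript) lengths in kilobases.
    """
    counts = np.asarray(counts, dtype=float)
    lengths_kb = np.asarray(lengths_kb, dtype=float)
    cpm = counts / counts.sum() * 1e6   # normalize for sequencing depth
    fpkm = cpm / lengths_kb             # ...then for feature length
    tpm = fpkm / fpkm.sum() * 1e6       # TPMs sum to 1e6 within a sample
    return cpm, fpkm, tpm

# Three genes of 2, 1, and 0.5 kb:
print(cpm_fpkm_tpm([200, 100, 100], [2.0, 1.0, 0.5]))
```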
Spike-ins for absolute quantification and detection of genome-wide effects
RNA spike-ins are samples of RNA at known concentrations that can be used as gold standards in experimental design and during downstream analyses for absolute quantification and detection of genome-wide effects.
Absolute quantification: Absolute quantification of gene expression is not possible with most RNA-Seq experiments, which quantify expression relative to all transcripts. It becomes possible when RNA-Seq is performed with spike-ins, samples of RNA at known concentrations. After sequencing, read counts of spike-in sequences are used to determine the relationship between each gene's read counts and absolute quantities of biological fragments (a calibration sketch follows this list). In one example, this technique was used in Xenopus tropicalis embryos to determine transcription kinetics.
Detection of genome-wide effects: Changes in global regulators including chromatin remodelers, transcription factors (e.g., MYC), acetyltransferase complexes, and nucleosome positioning are not congruent with normalization assumptions and spike-in controls can offer precise interpretation.
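A sketch of the spike-in calibration idea (the log-log linear model and the toy numbers are assumptions for illustration; real designs use commercial control mixes with many known concentrations):

```python
import numpy as np

def spikein_calibration(spike_counts, spike_molecules):
    """Fit a log-log linear relationship between spike-in read counts and
    known input molecule numbers; return a converter from gene read counts
    to estimated absolute molecule numbers."""
    slope, intercept = np.polyfit(np.log10(spike_counts),
                                  np.log10(spike_molecules), deg=1)
    return lambda c: 10 ** (intercept + slope * np.log10(np.asarray(c, float)))

to_molecules = spikein_calibration([10, 100, 1000], [20, 200, 2000])
print(to_molecules([50, 500]))  # ~[100., 1000.] on this perfectly linear toy data
```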
Differential expression
The simplest but often most powerful use of RNA-Seq is finding differences in gene expression between two or more conditions (e.g., treated vs not treated); this process is called differential expression. The outputs are frequently referred to as differentially expressed genes (DEGs) and these genes can either be up- or down-regulated (i.e., higher or lower in the condition of interest). There are many tools that perform differential expression. Most are run in R, Python, or the Unix command line. Commonly used tools include DESeq, edgeR, and voom+limma, all of which are available through R/Bioconductor. These are the common considerations when performing differential expression:
Inputs: Differential expression inputs include (1) an RNA-Seq expression matrix (M genes x N samples) and (2) a design matrix containing experimental conditions for N samples. The simplest design matrix contains one column, corresponding to labels for the condition being tested. Other covariates (also referred to as factors, features, labels, or parameters) can include batch effects, known artifacts, and any metadata that might confound or mediate gene expression. In addition to known covariates, unknown covariates can also be estimated through unsupervised machine learning approaches including principal component, surrogate variable, and PEER analyses. Hidden variable analyses are often employed for human tissue RNA-Seq data, which typically have additional artifacts not captured in the metadata (e.g., ischemic time, sourcing from multiple institutions, underlying clinical traits, collecting data across many years with many personnel).
Methods: Most tools use regression or non-parametric statistics to identify differentially expressed genes, and are either based on read counts mapped to a reference genome (DESeq2, limma, edgeR) or based on read counts derived from alignment-free quantification (sleuth, Cuffdiff, Ballgown). Following regression, most tools employ either familywise error rate (FWER) or false discovery rate (FDR) p-value adjustments to account for multiple hypotheses (in human studies, ~20,000 protein-coding genes or ~50,000 biotypes).
Outputs: A typical output consists of rows corresponding to the number of genes and at least three columns: each gene's log fold change (log-transform of the ratio in expression between conditions, a measure of effect size), p-value, and p-value adjusted for multiple comparisons (a minimal sketch of this adjustment and thresholding follows this list). Genes are defined as biologically meaningful if they pass cut-offs for effect size (log fold change) and statistical significance. These cut-offs should ideally be specified a priori, but the nature of RNA-Seq experiments is often exploratory, so it is difficult to predict effect sizes and pertinent cut-offs ahead of time.
Pitfalls: The raison d'être for these complex methods is to avoid the myriad pitfalls that can lead to statistical errors and misleading interpretations. Pitfalls include increased false positive rates (due to multiple comparisons), sample preparation artifacts, sample heterogeneity (like mixed genetic backgrounds), highly correlated samples, unaccounted-for multi-level experimental designs, and poor experimental design. One notable pitfall is viewing results in Microsoft Excel without using the import feature to ensure that the gene names remain text. Although convenient, Excel automatically converts some gene names (SEPT1, DEC1, MARCH2) into dates or floating point numbers.
Choice of tools and benchmarking: There are numerous efforts that compare the results of these tools, with DESeq2 tending to moderately outperform other methods. As with other methods, benchmarking consists of comparing tool outputs to each other and known gold standards.
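A minimal sketch of the thresholding step flagged in the Outputs entry above: given per-gene p-values (as produced by DESeq2, edgeR, or limma) and log fold changes, adjust for multiple testing with the Benjamini-Hochberg procedure and apply the cut-offs:

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR adjustment (the 'padj' column of a typical
    differential expression output)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
    adj = np.empty(n)
    adj[order] = np.clip(scaled, 0, 1)
    return adj

pvals = np.array([0.0001, 0.02, 0.03, 0.9])
log2fc = np.array([2.5, -1.4, 0.2, 3.0])
deg = (np.abs(log2fc) > 1) & (bh_adjust(pvals) < 0.05)
print(deg)  # [ True  True False False]
```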
Downstream analyses for a list of differentially expressed genes come in two flavors, validating observations and making biological inferences. Owing to the pitfalls of differential expression and RNA-Seq, important observations are replicated with (1) an orthogonal method in the same samples (like real-time PCR) or (2) another, sometimes pre-registered, experiment in a new cohort. The latter helps ensure generalizability and can typically be followed up with a meta-analysis of all the pooled cohorts. The most common method for obtaining higher-level biological understanding of the results is gene set enrichment analysis, although sometimes candidate gene approaches are employed. Gene set enrichment determines if the overlap between two gene sets is statistically significant, in this case the overlap between differentially expressed genes and gene sets from known pathways/databases (e.g., Gene Ontology, KEGG, Human Phenotype Ontology) or from complementary analyses in the same data (like co-expression networks). Common tools for gene set enrichment include web interfaces (e.g., ENRICHR, g:profiler, WEBGESTALT) and software packages. When evaluating enrichment results, one heuristic is to first look for enrichment of known biology as a sanity check and then expand the scope to look for novel biology.
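The statistic underlying most overlap-based enrichment tools is a hypergeometric (one-sided Fisher's exact) test; a minimal sketch with made-up numbers:

```python
from scipy.stats import hypergeom

def enrichment_pvalue(n_universe, n_pathway, n_degs, n_overlap):
    """P(overlap >= observed) when n_degs genes are drawn from a universe
    of n_universe genes that contains n_pathway pathway members."""
    return hypergeom.sf(n_overlap - 1, n_universe, n_pathway, n_degs)

# 40 of 200 DEGs fall in a 500-gene pathway, from a 20,000-gene universe:
print(enrichment_pvalue(20000, 500, 200, 40))  # very small p -> strong enrichment
```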
Alternative splicing
RNA splicing is integral to eukaryotes and contributes significantly to protein regulation and diversity, occurring in >90% of human genes. There are multiple alternative splicing modes: exon skipping (the most common splicing mode in humans and higher eukaryotes), mutually exclusive exons, alternative donor or acceptor sites, intron retention (the most common splicing mode in plants, fungi, and protozoa), alternative transcription start site (promoter), and alternative polyadenylation. One goal of RNA-Seq is to identify alternative splicing events and test if they differ between conditions. Long-read sequencing captures the full transcript and thus minimizes many of the issues in estimating isoform abundance, like ambiguous read mapping. For short-read RNA-Seq, there are multiple methods to detect alternative splicing that can be classified into three main groups:
Count-based (also event-based, differential splicing): estimate exon retention. Examples are DEXSeq, MATS, and SeqGSEA.
Isoform-based (also multi-read modules, differential isoform expression): estimate isoform abundance first, and then relative abundance between conditions. Examples are Cufflinks 2 and DiffSplice.
Intron excision based: calculate alternative splicing using split reads. Examples are MAJIQ and Leafcutter.
Differential gene expression tools can also be used for differential isoform expression if isoforms are quantified ahead of time with other tools like RSEM.
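As an illustration of the count-based group, an exon-skipping event is commonly summarized as a "percent spliced in" (PSI) value computed from junction reads; this sketch follows the usual convention of normalizing by the number of junctions supporting each isoform:

```python
def psi(inclusion_reads, skipping_reads, inc_junctions=2, skip_junctions=1):
    """Percent spliced in for a cassette exon: inclusion is supported by two
    junctions (upstream and downstream of the exon), skipping by one."""
    inc = inclusion_reads / inc_junctions
    skip = skipping_reads / skip_junctions
    return inc / (inc + skip)

print(psi(80, 20))  # ~0.67: the exon is included in about 2/3 of transcripts
```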
Coexpression networks
Coexpression networks are data-derived representations of genes behaving in a similar way across tissues and experimental conditions. Their main purpose lies in hypothesis generation and guilt-by-association approaches for inferring functions of previously unknown genes. RNA-Seq data has been used to infer genes involved in specific pathways based on Pearson correlation, both in plants and mammals. The main advantage of RNA-Seq data in this kind of analysis over the microarray platforms is the capability to cover the entire transcriptome, therefore allowing the possibility to unravel more complete representations of the gene regulatory networks. Differential regulation of the splice isoforms of the same gene can be detected and used to predict their biological functions.
Weighted gene co-expression network analysis has been successfully used to identify co-expression modules and intramodular hub genes based on RNA-Seq data. Co-expression modules may correspond to cell types or pathways, and highly connected intramodular hubs can be interpreted as representatives of their respective module. An eigengene is a weighted sum of the expression of all genes in a module. Eigengenes are useful biomarkers (features) for diagnosis and prognosis. Variance-stabilizing transformation approaches for estimating correlation coefficients based on RNA-Seq data have been proposed.
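A minimal sketch of the first step of a weighted co-expression analysis, soft-thresholding Pearson correlations into an unsigned adjacency matrix (full WGCNA workflows then compute topological overlap and cluster it into modules):

```python
import numpy as np

def wgcna_adjacency(expr, beta=6):
    """Unsigned WGCNA-style adjacency.

    expr: genes x samples matrix of (suitably transformed) expression values.
    beta: soft-thresholding power, normally chosen so the resulting network
    approximates scale-free topology."""
    r = np.corrcoef(expr)     # gene-gene Pearson correlations
    return np.abs(r) ** beta  # adjacency values in [0, 1]

expr = np.random.default_rng(0).normal(size=(5, 30))  # 5 genes, 30 samples
print(wgcna_adjacency(expr).round(3))
```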
Variant discovery
RNA-Seq captures DNA variation, including single nucleotide variants, small insertions/deletions, and structural variation. Variant calling in RNA-Seq is similar to DNA variant calling and often employs the same tools (including SAMtools mpileup and GATK HaplotypeCaller) with adjustments to account for splicing. One unique dimension for RNA variants is allele-specific expression (ASE): the variants from only one haplotype might be preferentially expressed due to regulatory effects including imprinting and expression quantitative trait loci, and noncoding rare variants. Limitations of RNA variant identification include that it only reflects expressed regions (in humans, <5% of the genome), could be subject to biases introduced by data processing (e.g., de novo transcriptome assemblies underestimate heterozygosity), and has lower quality when compared to direct DNA sequencing.
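One common screen for allele-specific expression is a binomial test of reference versus alternative allele read counts at a heterozygous site, against the 50/50 split expected in the absence of ASE (a sketch; real analyses also correct for reference-mapping bias):

```python
from scipy.stats import binomtest

def ase_pvalue(ref_reads, alt_reads):
    """Two-sided binomial test for allelic imbalance at a heterozygous site."""
    return binomtest(ref_reads, n=ref_reads + alt_reads, p=0.5).pvalue

print(ase_pvalue(90, 30))  # strong imbalance -> small p-value
```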
RNA editing (post-transcriptional alterations)
Having the matching genomic and transcriptomic sequences of an individual can help detect post-transcriptional edits (RNA editing). A post-transcriptional modification event is identified if the gene's transcript has an allele/variant not observed in the genomic data.
Fusion gene detection
Caused by different structural modifications in the genome, fusion genes have gained attention because of their relationship with cancer. The ability of RNA-Seq to analyze a sample's whole transcriptome in an unbiased fashion makes it an attractive tool to find these kinds of common events in cancer.
The idea follows from the process of aligning the short transcriptomic reads to a reference genome. Most of the short reads will fall within one complete exon, and a smaller but still large set would be expected to map to known exon-exon junctions. The remaining unmapped short reads would then be further analyzed to determine whether they match an exon-exon junction where the exons come from different genes. This would be evidence of a possible fusion event, however, because of the length of the reads, this could prove to be very noisy. An alternative approach is to use paired-end reads, when a potentially large number of paired reads would map each end to a different exon, giving better coverage of these events (see figure). Nonetheless, the end result consists of multiple and potentially novel combinations of genes providing an ideal starting point for further validation.
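A toy version of the paired-end signal just described: tally read pairs whose mates map to different genes to produce ranked fusion candidates for downstream filtering (the per-mate gene assignments are assumed to come from a prior alignment step):

```python
from collections import Counter

def candidate_fusions(mate_genes, min_support=2):
    """Rank gene pairs bridged by discordant read pairs.

    mate_genes: iterable of (gene_of_mate1, gene_of_mate2) tuples."""
    tally = Counter()
    for g1, g2 in mate_genes:
        if g1 != g2:  # discordant pair spanning two genes
            tally[tuple(sorted((g1, g2)))] += 1
    return [(pair, n) for pair, n in tally.most_common() if n >= min_support]

pairs = [("BCR", "ABL1"), ("ABL1", "BCR"), ("BCR", "ABL1"), ("TP53", "TP53")]
print(candidate_fusions(pairs))  # [(('ABL1', 'BCR'), 3)]
```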
Copy number alteration
Copy number alteration (CNA) analyses are commonly used in cancer studies. Gain and loss of genes have signalling pathway implications and are a key biomarker of molecular dysfunction in oncology. Calling CNA information from RNA-Seq data is not straightforward, because differences in gene expression lead to read depth variance of different magnitudes across genes. Due to these difficulties, most such analyses are performed using whole-genome or whole-exome sequencing (WGS/WES), but advanced bioinformatics tools can call CNAs from RNA-Seq data.
Other emerging analysis and applications
Applications of RNA-Seq continue to grow. Emerging applications include detection of microbial contaminants, determination of cell type abundance (cell type deconvolution), measurement of transposable element (TE) expression, and neoantigen prediction.
History
RNA-Seq was first developed in the mid-2000s with the advent of next-generation sequencing technology. The first manuscripts that used RNA-Seq, even without using the term, include those on prostate cancer cell lines (dated 2006), Medicago truncatula (2006), maize (2007), and Arabidopsis thaliana (2007), while the term "RNA-Seq" itself was first mentioned in 2008. The number of manuscripts referring to RNA-Seq in the title or abstract has risen continuously, with 6,754 published in 2018, and the number of manuscripts at the intersection of RNA-Seq and medicine is growing at a similar rate.
Applications to medicine
RNA-Seq has the potential to identify new disease biology, profile biomarkers for clinical indications, infer druggable pathways, and make genetic diagnoses. These results could be further personalized for subgroups or even individual patients, potentially highlighting more effective prevention, diagnostics, and therapy. The feasibility of this approach is in part dictated by costs in money and time; a related limitation is the required team of specialists (bioinformaticians, physicians/clinicians, basic researchers, technicians) to fully interpret the huge amount of data generated by this analysis.
Large-scale sequencing efforts
A lot of emphasis has been given to RNA-Seq data after the Encyclopedia of DNA Elements (ENCODE) and The Cancer Genome Atlas (TCGA) projects used this approach to characterize dozens of cell lines and thousands of primary tumor samples, respectively. ENCODE aimed to identify genome-wide regulatory regions in different cohorts of cell lines, and transcriptomic data are paramount to understanding the downstream effects of those epigenetic and genetic regulatory layers. TCGA, instead, aimed to collect and analyze thousands of patient samples from 30 different tumor types to understand the underlying mechanisms of malignant transformation and progression. In this context, RNA-Seq data provide a unique snapshot of the transcriptomic status of the disease and look at an unbiased population of transcripts, allowing the identification of novel transcripts, fusion transcripts and non-coding RNAs that could go undetected with different technologies.
See also
Transcriptomics
DNA microarray
List of RNA-Seq bioinformatics tools
References
Further reading
External links
A high-level guide to designing and implementing an RNA-Seq experiment.
Molecular biology
RNA
Gene expression
RNA sequencing | RNA-Seq | Chemistry,Biology | 7,374 |
63,337,149 | https://en.wikipedia.org/wiki/International%20Linear%20Algebra%20Society | The International Linear Algebra Society (ILAS) is a professional mathematical society organized to promote research and education in linear algebra, matrix theory and matrix computation. It serves the international community through conferences, publications, prizes and lectures. Membership in ILAS is open to all mathematicians and scientists interested in furthering its aims and participating in its activities.
History
ILAS was founded in 1989. Its genesis occurred at the Combinatorial Matrix Analysis Conference held at the University of Victoria in British Columbia, Canada, May 20–23, 1987, hosted by Dale Olesky and Pauline van den Driessche. ILAS was initially known as the International Matrix Group, founded in 1987. The founding officers of ILAS were Hans Schneider, President; Robert C. Thompson, Vice President; Daniel Hershkowitz, Secretary; and James R. Weaver, Treasurer.
ILAS Conferences
The inaugural meeting of ILAS took place at Brigham Young University (including one day at the Sundance Mountain Resort) in Provo, Utah, USA, from August 12–15, 1989. The organizing committee consisted of Wayne Barrett, Daniel Hershkowitz, Charles Johnson, Hans Schneider, and Robert C. Thompson. Much additional support came from Don Robinson, Chair of the BYU Mathematics Department, and James R. Weaver, ILAS Treasurer. The conference received support from Brigham Young University, the National Security Agency, and the National Science Foundation. There were 85 attendees from 15 countries, including Olga Taussky-Todd, a renowned mathematician in matrix theory. The proceedings of the conference appeared in volume 150 of the journal Linear Algebra and Its Applications.
The 2nd ILAS conference was held in Lisbon, Portugal, August 3–7, 1992. The chair of the organizing committee was José Dias da Silva. There were 150 participants from 27 countries and the conference was supported by 11 different organizations. The proceedings of the conference can be found in volumes 197-198 of Linear Algebra and Its Applications.
ILAS conferences were held the next 4 years, alternating between the United States and Europe, before beginning the standard pattern of holding the Conference two of every three years (with a few exceptions). The number of participants at each ILAS conference has grown steadily through the years.
The first ILAS conference outside of the United States and Europe was held in Haifa, Israel in 2001. The first in the Far East was in Shanghai in 2007 and the first in Latin America was in Cancun, Mexico in 2008. The complete list of locations hosting ILAS conferences follows:
1. Provo, Utah, USA (1989)
2. Lisbon, Portugal (1992)
3. Pensacola, Florida, USA (1993)
4. Rotterdam, The Netherlands (1994)
5. Atlanta, Georgia, USA (1995)
6. Chemnitz, Germany (1996)
7. Madison, Wisconsin, USA (1998)
8. Barcelona, Spain (1999)
9. Haifa, Israel (2001)
10. Auburn, Alabama, USA (2002)
11. Coimbra, Portugal (2004)
12. Regina, Saskatchewan, Canada (2005)
13. Amsterdam, the Netherlands (2006)
14. Shanghai, China (2007)
15. Cancun, Mexico (2008)
16. Pisa, Italy (2010)
17. Braunschweig, Germany (2011)
18. Providence, Rhode Island, USA (2013)
19. Seoul, Korea (2014)
20. Leuven, Belgium (2016)
21. Ames, Iowa, USA (2017)
22. Rio de Janeiro, Brazil (2019)
23. Virtual (originally planned for New Orleans, Louisiana, USA) (2021)
24. Galway, Ireland (2022)
25. Madrid, Spain (2023)
26. Kaohsiung, Taiwan (2025)
Prizes and Special Lectures
ILAS has three prizes named after giants in Linear Algebra.
The Hans Schneider Prize. A distinctive feature of the 3rd ILAS meeting held at the University of West Florida in Pensacola, Florida, March 17–20, 1993, was the institution of the Hans Schneider Prize. This prize was initiated thanks to a donation to ILAS from Hans Schneider, the first president of ILAS and a founding editor of the journal Linear Algebra and Its Applications. Typically, the prize is awarded every 3 years and has evolved as a prize to recognize a person's career.
The ILAS Taussky–Todd Prize. Olga Taussky-Todd and John Todd had a decisive impact on the development of theoretical and numerical linear algebra for over half a century. The ILAS Taussky–Todd Prize honors them for their many and varied mathematical achievements and for their efforts in promoting linear algebra and matrix theory. The prize is awarded once every three to four years, recognizing a linear algebra researcher in mid-career. The ILAS Taussky–Todd Prize was originally referred to as the Taussky–Todd Lecture, and was instituted at the 3rd ILAS meeting held at the University of West Florida in Pensacola, Florida, March 17–20, 1993.
The ILAS Richard A. Brualdi Early Career Prize. The prize is named for Richard A. Brualdi, who has had a major impact on the field, especially in combinatorial matrix theory. In addition, he has been instrumental to the success of ILAS since its inception. The ILAS Richard A. Brualdi Early Career Prize was instituted in 2021 and is awarded every three years to an outstanding early career researcher in the field of linear algebra, for distinguished contributions to the field.
In addition ILAS awards Special Lectures at ILAS conferences as well as conferences of collaborating mathematics organizations.
Publications
ILAS publishes an electronic journal, the Electronic Journal of Linear Algebra (ELA), founded in 1996. The first Editors-in-Chief were Volker Mehrmann and Daniel Hershkowitz. ELA is a platinum open access journal, meaning that it is free to all: there is no subscription and no article processing fee or page charges. ELA is an all-electronic journal that welcomes high-quality mathematical articles contributing new insights to matrix analysis and the various aspects of linear algebra and its applications. ELA sets high standards for refereeing, with conventional refereeing of articles carried out electronically.
ILAS also produces and distributes IMAGE, a semiannual electronic bulletin founded in 1988 with Robert C. Thompson as its first Editor. IMAGE contains: essays related to linear algebra activities; feature articles; interviews of linear algebra experts; book reviews; brief reports on conferences; ILAS business notices; announcements of upcoming workshops and conferences; problems and solutions; and news about individual members.
Presidents
Hans Schneider, 1987–1996
Richard A. Brualdi, 1996–2002
Daniel Hershkowitz, 2002–2008
Stephen Kirkland, 2008–2014
Peter Šemrl, 2014–2020
Daniel B. Szyld, 2020–present
Collaborations with other mathematics organizations
ILAS collaborates with the Society for Industrial and Applied Mathematics (SIAM), the American Mathematical Society (AMS) and the International Workshop on Operator Theory and its Applications (IWOTA).
The collaboration with SIAM started in 1999. The SIAM Activity Group on Linear Algebra (SIAG/LA) holds a conference every three years (when the year minus 2000 is divisible by 3). As part of the agreement, and to encourage interaction between ILAS and SIAG/LA members, the two societies do not hold conferences in the same year. As a result, ILAS holds conferences two out of every three years. In addition, the two societies exchange speakers with ILAS sponsoring two ILAS speakers at every triennial SIAM Applied Linear Algebra (SIAM ALA) meeting (organized by SIAG/LA) and with SIAM sponsoring a SIAM speaker at every ILAS conference. The first ILAS speakers at a SIAM ALA meeting were Hans Schneider and Hugo Woerdeman in 2000, and the first SIAM speakers at an ILAS conference were Michele Benzi and Misha Kilmer in 2002.
The collaboration with AMS started in late 2020 with the establishment of ILAS as a partner in the Joint Mathematics Meetings (JMM). In this capacity ILAS will support a speaker for the "ILAS Lecture" at the JMM, to be selected by ILAS. In addition, at least four special sessions at the JMM will be identified as ILAS special sessions, the contents of which will be determined by ILAS. The partnership took effect starting with the JMM 2022, held virtually.
The collaboration with IWOTA started in 2017 with the establishment of the Israel Gohberg ILAS-IWOTA Lecture, which is funded by donations. This lecture series consists of biennial lectures either at an ILAS conference or at an IWOTA meeting. Israel Gohberg was the founding president of IWOTA and an active member of ILAS. The first Israel Gohberg ILAS-IWOTA Lecturer was Vern Paulsen at the 2021 IWOTA Lancaster UK meeting.
References
External links
International Linear Algebra Society (ILAS) home page
Electronic Journal of Linear Algebra (ELA) home page
Linear algebra
Matrix theory
Mathematical societies
Mathematics conferences
Organizations established in 1989 | International Linear Algebra Society | Mathematics | 1,858 |
31,725,673 | https://en.wikipedia.org/wiki/Symplocamide%20A | Symplocamide A is a newly discovered (2008) 3-amino-6-hydroxy-2-piperidone (Ahp) cyclodepsipeptide that has been isolated from a marine cyanobacteria in Papua New Guinea, which has only been identified at the genus level (Symploca). Cyanobacteria, both freshwater and marine, are known as producers of diverse protease inhibitors that may be used to treat diseases, such as HIV, and some forms of cancer. Research on symplocamide A has shown that it is a strong serine protease inhibitor and has a high level of cytotoxicity to cancer cells when used in vitro. As of the time of this writing, its use as a treatment on human participants has not been done and future study will have to be done before any human testing can be commenced.
Origin
Symplocamide A was discovered when scientists were screening marine cyanobacteria for protease inhibitors not seen in freshwater cyanobacteria. It was obtained from a Symploca sp. in Papua New Guinea; the species has not been formally described in the literature. Using nuclear magnetic resonance (NMR) spectroscopy, the molecular formula (C46H71BrN10O13) and structure were elucidated. More data will need to be gathered: without a formally described source species, reproduction of the findings cannot be confirmed.
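As a sanity check on the reported formula, the average molecular weight implied by C46H71BrN10O13 can be computed directly with a small, self-contained sketch using standard atomic weights (the parser and mass table below are illustrative, not a cheminformatics library's API):

```python
import re

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "Br": 79.904, "N": 14.007, "O": 15.999}

def molecular_weight(formula):
    """Average molecular weight from a simple Hill-style formula string."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:  # skip the regex's trailing empty match
            total += ATOMIC_MASS[element] * (int(count) if count else 1)
    return total

print(round(molecular_weight("C46H71BrN10O13"), 1))  # ~1052.0 g/mol
```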
Biological activities
Symplocamide A is an extremely potent cytotoxin, showing activity against H460 lung cancer cells and neuro-2a neuroblastoma cells with IC50 values of 40 nM and 29 nM, respectively. Symplocamide A has also been determined to be a potent protease inhibitor, which may be useful in the treatment of infectious diseases such as HIV and HCV if it behaves similarly to protease inhibitors previously used as treatments.
References
Bacterial toxins
Macrocycles
Depsipeptides
Protease inhibitors | Symplocamide A | Chemistry | 444 |
11,570,041 | https://en.wikipedia.org/wiki/Ramulispora%20sorghicola | Ramulispora sorghicola is a plant pathogen infecting sorghum.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Sorghum diseases
Hyaloscyphaceae
Fungus species | Ramulispora sorghicola | Biology | 51 |
24,545,607 | https://en.wikipedia.org/wiki/Feathery%20degeneration | In histopathology, feathery degeneration, formally feathery degeneration of hepatocytes, is a form of liver parenchymal cell (i.e. hepatocyte) death associated with cholestasis.
Cells undergoing this form of cell death have a flocculent-appearing cytoplasm and are larger than normal hepatocytes.
Relation to ballooning degeneration
Feathery degeneration is somewhat similar in appearance to ballooning degeneration, which is due to other causes (e.g. alcohol, obesity); it also has cytoplasmic clearing and cell swelling.
See also
Elevated alkaline phosphatase (ALP)
Mallory body
Non-alcoholic fatty liver disease
Steatohepatitis
Primary sclerosing cholangitis
Additional images
References
Histopathology | Feathery degeneration | Chemistry | 177 |
61,978,638 | https://en.wikipedia.org/wiki/Ensemble%20coding | Ensemble coding, also known as ensemble perception or summary representation, is a theory in cognitive neuroscience about the internal representation of groups of objects in the human mind. Ensemble coding proposes that such information is recorded via summary statistics, particularly the average or variance. Experimental evidence tends to support the theory for low-level visual information, such as shapes and sizes, as well as some high-level features such as face gender. Nonetheless, it remains unclear the extent to which ensemble coding applies to high-level or non-visual stimuli, and the theory remains the subject of active research.
Theory
Extensive amounts of information are available to the visual system. Ensemble coding is a theory that suggests that people process the general gist of their complex visual surroundings by grouping objects together based on shared properties. The world is filled with redundant information, to which the human visual system has become particularly sensitive. The brain exploits this redundancy and condenses the information. For example, the leaves of a tree or blades of grass give rise to the percept of 'tree-ness' and 'lawn-ness'. It has been demonstrated that individuals have the ability to quickly and accurately encode ensembles of objects, like leaves on a tree, and gather summary statistical information (like the mean and variance) from groups of stimuli. Some research suggests that this process provides rough visual information from the entire visual field, giving way to a complete and accurate picture of the visual world. Although the individual details of this accurate picture might be inaccessible, the 'gist' of the scene remains accessible. Ensemble coding is an adaptive process that lightens the cognitive load in the processing and storing of visual representations through the use of heuristics.
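The statistical advantage of averaging over an ensemble can be illustrated with a toy simulation. Nothing here models a specific experiment; the noise level, set sizes, and decision rule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_size_judgement(mean_diff, set_size=8, noise=0.2, n_trials=10_000):
    """Simulate a two-interval mean-size discrimination task.

    On each trial the observer sees two ensembles of `set_size` items whose
    true mean sizes differ by `mean_diff`; each item is encoded with Gaussian
    noise and the observer picks the ensemble with the larger noisy average.
    Returns the proportion of correct choices.
    """
    a = rng.normal(1.0,             noise, (n_trials, set_size))
    b = rng.normal(1.0 + mean_diff, noise, (n_trials, set_size))
    return np.mean(b.mean(axis=1) > a.mean(axis=1))

# Averaging over more items washes out item-level noise, so accuracy rises
for k in (1, 4, 16):
    print(k, round(mean_size_judgement(0.05, set_size=k), 3))
```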
Operational definition
David Whitney and Allison Yamanashi Leib have developed an operational and flexible definition stating that ensemble coding should cover the following five concepts:
Ensemble perception is the ability to discriminate or reproduce a statistical moment.
Ensemble perception requires the integration of multiple items.
Ensemble information at each level of representation can be precise relative to the processing of single objects at that level.
Single-item recognition is not a prerequisite for ensemble coding.
Ensemble representations can be extracted with a temporal resolution at or beyond the temporal resolution of individual object recognition.
Opposing theories
Some research has found countering evidence to the theory of ensemble coding.
Limited visual capacity
Vision science has noted that although humans take in large amounts of visual information, adults are only able to process, attend to, and retain up to roughly four items from the visual environment. Furthermore, scientists have found that this visual upper limit capacity exists across various phenomena including change blindness, object tracking, and feature representation.
Low resolution representations and limited capacity
Additional theories in vision science propose that stimuli are represented in the brain individually as small, low resolution, icons stored in templates with limited capacities and are organized through associative links.
History
Throughout its history, ensemble coding has been known by many names. Interest in the theory began to emerge in the early 20th century. In its earliest years, ensemble coding was known as Gestalt grouping. In 1923, Max Wertheimer, a Gestalt psychology theorist, addressed how humans perceive their visual world holistically rather than individually. Gestaltists argued that in object perception the individual object features were either lost or difficult to perceive, and therefore the grouped object was the favored percept. Although Gestaltists helped define some of the central principles of object perception, research into modern ensemble coding did not occur until many years later.
In 1971, Norman Anderson was one of the earliest to conduct explicit ensemble coding research. Anderson's research into social ensemble coding showed that individuals described by two positive terms were rated more favorably than individuals described by two positive terms and two negative terms. This research on impression formation demonstrated that a weighted mean or average captures how information is integrated rather than the summation. Additional research during this time explored ensemble coding in group attractiveness, shopping preferences, and the perceived badness of criminals.
The current era
Findings by Dan Ariely in 2001 were the first data to support the modern theories of ensemble coding. Ariely used novel experimental paradigms, which he labeled "mean discrimination" and "member identification", to examine how sets of objects are perceived. He conducted three studies involving shape ensembles that varied in size. Across all studies, participants were able to accurately encode the mean size of the ensemble of objects, but they were inaccurate when asked if a certain object was a part of the set. Ariely's findings were the first that found statistical summary information emerge in the visual perception of grouped objects.
Consistent with Ariely's findings, follow-up research conducted by Sang Chul Chong and Anne Treisman in 2003 provided evidence that participants engage in summary statistical processes. Their research revealed that participants maintained high accuracy in encoding the mean size of the stimuli even with stimulus presentations as short as 50 milliseconds, memory delays, and differences in object distributions.
Additional research has demonstrated that ensemble coding is not limited to the mean size of objects in the ensemble, but that additional content is extracted, such as average line orientation, average spatial location, average number, and statistical summaries such as the variances are detected. Observers are also able to extract accurate perceptual summaries of high-level features such as the average direction of eye gaze of grouped faces and the average walking direction of a crowd.
Levels of ensemble coding
People have the ability to encode ensembles of objects along various dimensions. These dimensions have been divided into levels that vary from low-level to high-level feature information.
Low-level feature information
Low-level ensemble coding has been observed in various psychophysical areas of research. For example, people accurately perceive the average size of objects, motion direction of grouped dots, number, line orientation, and spatial location.
High-level feature information
High-level ensemble coding extends to more complex, higher level objects including faces.
Independence of low- and high-level information
Some findings suggest that lower-level and higher-level information may be processed by independent cognitive mechanisms.
Social vision and ensemble coding
Based on the early work of Anderson, it appears that humans integrate semantic as well as social information into memory using ensemble coding. These findings suggest that social processes may hinge on the same sort of underlying mechanisms that allow people to perceive average object orientation and average object direction of motion.
In recent years, ensemble coding in the field of social vision has emerged. Social vision is a field of research that examines how people perceive one another. With the addition of ensemble coding, the field is able to explore people perception, or how people perceive groups of other people. This specific research area focuses on how observers accurately perceive and extract social information from groups and how that extracted information influences downstream judgments and behaviors. In 2018, seminal research introducing the use of ensemble coding in the field of social vision was conducted by Briana Goodale. Goodale's research found that humans can accurately extract sex-ratio summaries from ensembles of faces and that this sex ratio provides an early visual cue signaling a sense of belonging and fit within a group. Specifically, this research found that participants felt a stronger sense of belonging to a given ensemble as members of their own sex increased in the perceived ensemble.
Additional research has uncovered that in as little as 75 milliseconds, participants are able to derive the average sex ratio of an ensemble of faces. Furthermore, within that 75 milliseconds, participants were able to form impressions based on the perceived sex ratio and make inferences about the group's perceived threat. Specifically, this research found that groups were judged as more threatening as the ratio of men to women increased.
In 2023, researchers found that people can accurately gauge the average trustworthiness of multiple faces presented together, even at very brief exposure times (as short as 250 ms). The findings suggest that our brains efficiently extract a summary statistic of facial features from crowds, enabling quick social judgments that may influence behavior.
References
Cognitive psychology
Perception | Ensemble coding | Biology | 1,619 |
67,160,105 | https://en.wikipedia.org/wiki/Dungey%20Cycle | The Dungey cycle, officially proposed by James Dungey in 1961, is a phenomenon that explains interactions between a planet's magnetosphere and solar wind. Dungey originally proposed a cyclic behavior of magnetic reconnection between Earth's magnetosphere and flux of solar wind. This reconnection explained previously observed dynamics within Earth's magnetosphere. The rate of reconnection in the beginning of the cycle is dependent on the orientation of the interplanetary magnetic field as well as the resultant plasma conditions at the site of reconnection. On Earth, the reconnection cycle takes around 1 hour, but this differs from planet to planet.
Cyclic Behavior
The Dungey cycle occurs within three stages:
In the first stage, magnetic flux carried by the solar wind connects with the magnetopause, creating an opening in the magnetopause through which the solar wind can enter the magnetosphere. This opening is called dayside reconnection and occurs on the side of the magnetosphere facing the solar wind source.
In the second stage, the flux travels in the direction of the solar wind across the magnetosphere.
In the third stage, at the magnetotail, reconnection closes the open flux, allowing for a new cycle to begin. This reconnection is called nightside reconnection.
Dungey's original proposal held that the cycle is in steady state, with the reconnection rates in stages one and three being equal. However, later work has found that the rate of reconnection is variable and affected by conditions at both the dayside reconnection site and the magnetotail.
Effect of interplanetary magnetic field orientation
The rate of reconnection at the magnetopause is heavily dependent on the orientation of the interplanetary magnetic field. Reconnection at the magnetopause occurs at higher rates when there is a stronger southward component to the field. This allows for solar wind with arbitrarily small shear angles to reconnect at the magnetopause. Under normal circumstances, the difference in field strength between the magnetopause and the surrounding fields only allow for solar winds with large shear angles to reconnect. A strong southward component normalizes the difference in field strength between the magnetopause and surrounding fields.
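One widely used quantitative proxy for this dependence, offered here as an illustrative aside rather than something from the sources above, is the Kan–Lee merging electric field, which scales the dayside coupling with the IMF clock angle:

```python
import numpy as np

def kan_lee_efield(v_sw, by, bz):
    """Kan-Lee merging electric field, a common proxy for the dayside
    reconnection rate. For v in km/s and B in nT, the 1e-3 factor
    converts the product to mV/m.

    v_sw   -- solar wind speed
    by, bz -- IMF components (GSM); southward IMF means bz < 0
    """
    b_t = np.hypot(by, bz)       # transverse field magnitude
    clock = np.arctan2(by, bz)   # IMF clock angle (0 = due north)
    return v_sw * b_t * np.sin(clock / 2.0) ** 2 * 1e-3

# A southward turning (bz: +5 -> -5 nT) sharply raises the coupling proxy
for bz in (5.0, -5.0):
    print(bz, round(kan_lee_efield(400.0, 3.0, bz), 3))
```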
References
Geomagnetism
Planetary science
Solar phenomena
Space plasmas | Dungey Cycle | Physics,Astronomy | 471 |
77,693,785 | https://en.wikipedia.org/wiki/NGC%20664 | NGC664 is a spiral galaxy in the constellation of Pisces. Its velocity with respect to the cosmic microwave background is 5137 ± 21km/s, which corresponds to a Hubble distance of . In addition, six non redshift measurements give a distance of . It was discovered by British astronomer John Herschel on 24 September 1830.
Supernovae
Three supernovae have been observed in NGC 664:
SN 1996bw (type II, mag. 17.5) was discovered by the BAO Supernova Survey on 30 November 1996.
SN 1997W (type II, mag. 18) was discovered by the Harvard–Smithsonian Center for Astrophysics on 1 February 1997.
SN 1999eb (type IIn, mag. 16.2) was discovered by the Lick Observatory Supernova Search (LOSS) on 2 October 1999.
NGC 664 Group
NGC 664 is the namesake of the four-member NGC 664 group. The other three galaxies are IC 150, UGC 1204, and UGC 1240.
See also
List of NGC objects (1–1000)
References
External links
0664
006359
+01-05-029
01210
11507+2101
Pisces_(constellation)
Astronomical objects discovered in 1830
Discoveries by John Herschel
Spiral galaxies | NGC 664 | Astronomy | 275 |
10,723,149 | https://en.wikipedia.org/wiki/Deuterated%20DMSO | Deuterated DMSO, also known as dimethyl sulfoxide-d6, is an isotopologue of dimethyl sulfoxide (DMSO, (CH3)2S=O)) with chemical formula ((CD3)2S=O) in which the hydrogen atoms ("H") are replaced with their isotope deuterium ("D"). Deuterated DMSO is a common solvent used in NMR spectroscopy.
Production
Deuterated DMSO is produced by heating DMSO in heavy water (D2O) with a basic catalyst such as calcium oxide. The reaction does not give complete conversion to the d6 product, and the water produced must be removed and replaced with D2O several times to drive the equilibrium to the fully deuterated product.
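The overall exchange can be written as the following balanced stoichiometry; the reaction actually proceeds stepwise through partially deuterated species and HDO, which is why the water must be replaced repeatedly:

```latex
\mathrm{(CH_3)_2SO} + 3\,\mathrm{D_2O}
  \;\underset{\text{CaO (cat.)}}{\rightleftharpoons}\;
\mathrm{(CD_3)_2SO} + 3\,\mathrm{H_2O}
```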
Use in NMR spectroscopy
Pure deuterated DMSO shows no peaks in 1H NMR spectroscopy and as a result is commonly used as an NMR solvent. However, commercially available samples are not 100% pure, and a residual DMSO-d5 1H NMR signal is observed at 2.50 ppm (quintet, JHD = 1.9 Hz). The 13C chemical shift of DMSO-d6 is 39.52 ppm (septet).
References
Deuterated solvents | Deuterated DMSO | Chemistry | 281 |
21,241,343 | https://en.wikipedia.org/wiki/HAT-P-11 | HAT-P-11, also designated GSC 03561-02092 and Kepler-3, is a metal-rich orange dwarf star with a planetary system, away in the constellation Cygnus. This star is notable for its relatively large rate of proper motion. The apparent magnitude of this star is about 9.6, which means it is not visible to the naked eye but can be seen with a medium-sized amateur telescope on a clear dark night. The age of this star is about 6.5 billion years.
The star has active latitudes that generate starspots. The spots are similar in distribution to those on the Sun, but HAT-P-11 is a more active star and has a starspot coverage approximately 100 times greater than the Sun. The star appears to have an unusually small radius, which can be explained by the anomalously high helium fraction.
Planetary system
An exoplanet, designated HAT-P-11b, was discovered by the HATNet Project using the transit method; it is believed to be a little larger than the planet Neptune.
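For intuition about the transit method, the fractional dimming during a transit is roughly the planet-to-star area ratio. The radii below are rough illustrative values for a Neptune-sized planet around a K dwarf, not measured parameters of HAT-P-11:

```python
# Transit depth is approximately the area ratio (Rp / R*)^2.
R_SUN_KM, R_NEPTUNE_KM = 695_700, 24_622

r_star = 0.75 * R_SUN_KM       # assumed K-dwarf radius, illustration only
r_planet = 1.0 * R_NEPTUNE_KM  # assumed Neptune-sized planet

depth = (r_planet / r_star) ** 2
print(f"{depth:.4%}")  # a ~0.2% dimming, detectable from the ground
```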
The planet orbits out of alignment with the star's spin axis, with an obliquity of about 100°. This star system was within the field of view of the Kepler Mission planet-hunting spacecraft. Water vapor and ammonia have been detected in the atmosphere of HAT-P-11b.
A trend in the radial velocity measurements taken to confirm the planet indicated a possible additional body in the system. This was confirmed in 2018 when a second planet, HAT-P-11c, was detected on an approximately nine-year orbit. In 2020, an astrometric detection of HAT-P-11c was published, along with Pi Mensae b, allowing its inclination and true mass to be determined.
Multiple 2024 studies present conflicting results about HAT-P-11c. One study suggests that the radial velocity variations attributed to HAT-P-11c may actually be caused by a magnetic activity cycle of the star. If this is the case, an outer planet may still exist given the evidence for one from astrometry, but farther from the star and with a different mass than previously thought. Another study instead claims further confirmation of the previously proposed planet. A third paper, published in response to the first, also corroborates the planetary nature of HAT-P-11c based on additional radial velocity data.
See also
HATNet Project
Kepler Mission
References
External links
Cygnus (constellation)
K-type main-sequence stars
Planetary transit variables
Planetary systems with two confirmed planets
3
097657
BD+47 2936
1144 | HAT-P-11 | Astronomy | 536 |
20,645,003 | https://en.wikipedia.org/wiki/Allylmagnesium%20bromide | Allylmagnesium bromide is a Grignard reagent used for introducing the allyl group. It is commonly available as a solution in diethyl ether. It may be synthesized by treatment of magnesium with allyl bromide while maintaining the reaction temperature below 0 °C to suppress formation of hexadiene. Allyl chloride can also be used in place of the bromide to give allylmagnesium chloride. These reagents are used to prepare metal allyl complexes.
References
Further reading
Organomagnesium compounds
Allyl compounds
Bromides | Allylmagnesium bromide | Chemistry | 117 |
42,965,137 | https://en.wikipedia.org/wiki/Encrypted%20Media%20Extensions | Encrypted Media Extensions (EME) is a W3C specification for providing a communication channel between web browsers and the Content Decryption Module (CDM) software which implements digital rights management (DRM). This allows the use of HTML video to play back DRM-wrapped content such as streaming video services without the use of heavy third-party media plugins like Adobe Flash or Microsoft Silverlight (both discontinued). The use of a third-party key management system may be required, depending on whether the publisher chooses to scramble the keys.
EME is based on the Media Source Extensions (MSE) specification, which enables adaptive bitrate streaming in HTML audio and video, e.g. using MPEG-DASH with MPEG-CENC protected content.
EME has been highly controversial because it places a necessarily proprietary, closed decryption component which requires per-browser licensing fees into what might otherwise be an entirely open and free software ecosystem. On July 6, 2017, W3C publicly announced its intention to publish an EME web standard, and did so on September 18. On the same day, the Electronic Frontier Foundation, who joined in 2014 to participate in the decision making, published an open letter resigning from W3C.
Support
In April 2013, on the Samsung Chromebook, Netflix became the first company to offer HTML video using EME.
The Encrypted Media Extensions interface has been implemented in the Google Chrome, Internet Explorer, Safari, Firefox, and Microsoft Edge browsers.
While the backers and developers of the Firefox web browser were hesitant to implement the protocol for ethical reasons, owing to its dependency on proprietary code, Firefox introduced EME support on Windows platforms in May 2015, originally using Adobe's Primetime DRM library, later replaced with the Widevine library (CDM). Firefox's implementation of EME uses an open-source sandbox to load the proprietary DRM modules, which are treated as plug-ins that are loaded when EME-encrypted content is requested. The sandbox was also designed to frustrate the ability of services and the DRM to uniquely track and identify devices. Additionally, it is always possible to disable DRM in Firefox, which not only disables EME but also uninstalls the Widevine DRM libraries.
Netflix supports HTML video using EME with a supported web browser: Chrome, Firefox, Microsoft Edge, Internet Explorer (on Windows 8.1 or newer), or Safari (on OS X Yosemite or newer). YouTube supports the MSE. Available players supporting MPEG-DASH using the MSE and EME are NexPlayer, THEOplayer by OpenTelly, the bitdash MPEG-DASH player, dash.js by DASH-IF or rx-player.
Note that in Firefox and Chrome, at least, EME does not work unless the media is supplied via Media Source Extensions.
Version 4.3 and subsequent versions of Android support EME.
Content Decryption Modules
Adobe Primetime CDM (used by old Firefox versions 47 to 51)
Widevine (used in Chrome and Firefox + their derivatives, including Opera and newest versions of Microsoft Edge)
PlayReady (used in EdgeHTML-based Microsoft Edge on Windows 10 and Internet Explorer 11 for Windows 8.1 and 10)
FairPlay (used in Safari since OS X Yosemite)
Criticism
EME has faced strong criticism from both inside and outside W3C. The major issues for criticism are implementation issues for open-source browsers, entry barriers for new browsers, lack of interoperability, concerns about security, privacy and accessibility, and possibility of legal trouble in the United States due to Chapter 12 of the DMCA.
In July 2020, Reddit started using a fingerprinting mechanism that involves loading every DRM module that browsers can support, and logs what ends up loading as part of the data collected. Users noticed this when Firefox began alerting them that Reddit "required" them to load DRM software to play media, although none of the media on the page actually needed it.
As of 2020, the ways in which EME interferes with open source have become concrete. None of the widely used CDMs are being licensed to independent open-source browser providers without paying a per-browser licensing fee (particularly to Google – for their Widevine CDM, which is used in nearly all recently developed web browsers).
See also
Media Source Extensions
World Wide Web Consortium
Digital rights management
Defective by Design
Electronic Frontier Foundation
Digital Millennium Copyright Act
Project DReaM
Protected Media Path
References
HTML5
Streaming media systems | Encrypted Media Extensions | Technology | 970 |
23,605,249 | https://en.wikipedia.org/wiki/Semantic%20decision%20table | A semantic decision table uses modern ontology engineering technologies to enhance traditional a decision table. The term "semantic decision table" was coined by Yan Tang and Prof. Robert Meersman from VUB STARLab (Free University of Brussels) in 2006. A semantic decision table is a set of decision tables properly annotated with an ontology. It provides a means to capture and examine decision makers’ concepts, as well as a tool for refining their decision knowledge and facilitating knowledge sharing in a scalable manner.
Background
A decision table is defined as a "tabular method of showing the relationship between a series of conditions and the resultant actions to be executed". Following the de facto international standard (CSA, 1970), a decision table contains three building blocks: the conditions, the actions (or decisions), and the rules.
A decision condition is constructed with a condition stub and a condition entry. A condition stub is declared as a statement of a condition. A condition entry provides a value assigned to the condition stub. Similarly, an action (or decision) composes two elements: an action stub and an action entry. One states an action with an action stub. An action entry specifies whether (or in what order) the action is to be performed.
A decision table separates the data (that is the condition entries and decision/action entries) from the decision templates (that are the condition stubs, decision/action stubs, and the relations between them). Or rather, a decision table can be a tabular result of its meta-rules.
Traditional decision tables have many advantages compared to other decision support formats, such as if-then-else programming statements, decision trees and Bayesian networks. A traditional decision table is compact and easily understandable. However, it still has several limitations. For instance, a decision table often faces the problems of conceptual ambiguity and conceptual duplication, and it is time consuming to create and maintain large decision tables. Semantic decision tables are an attempt to solve these problems.
Definition
A semantic decision table is modeled based on the framework of Developing Ontology-Grounded Methods and Applications (DOGMA). DOGMA separates an ontology into extremely simple linguistic structures (known as lexons) and a layer of lexon constraints used by applications (known as ontological commitments), aiming to achieve a degree of scalability.
According to the DOGMA framework, a semantic decision table consists of a layer of the decision binary fact types called semantic decision table lexons and a semantic decision table commitment layer that consists of the constraints and axioms of these fact types.
A lexon l is a quintuple <γ, t1, r1, r2, t2>, where t1 and t2 represent two concepts in a natural language (e.g., English); r1 (the role) and r2 (the co-role) refer to the relationships that the concepts share with respect to one another; and γ is a context identifier that refers to a context, which serves to disambiguate the terms into the intended concepts, and in which they become meaningful.
For example, a lexon <γ, driver's license, is issued to, has, driver> explains a fact that “a driver’s license is issued to a driver”, and “a driver has a driver’s license”.
The ontological commitment layer formally defines selected rules and constraints by which an application (or "agent") may make use of lexons. A commitment can contain various constraints, rules and axiomatized binary facts based on needs. It can be modeled in different modeling tools, such as object-role modeling, conceptual graph, and Unified Modeling Language.
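A toy rendering of these ideas in Python; the data structures and the sample constraint are invented for illustration and are not part of the DOGMA specification:

```python
from collections import namedtuple

# A lexon as a quintuple <context, term1, role, co-role, term2>
Lexon = namedtuple("Lexon", "context term1 role corole term2")

lexon = Lexon("γ", "driver's license", "is issued to", "has", "driver")

# A toy commitment: a constraint an application imposes on how lexons
# are used. Here: every committed driver holds exactly one license.
def satisfies_commitment(licenses_per_driver):
    return all(n == 1 for n in licenses_per_driver.values())

print(satisfies_commitment({"alice": 1, "bob": 1}))  # True
print(satisfies_commitment({"alice": 2}))            # False
```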
Semantic decision table model
A semantic decision table contains richer decision rules than a decision table. During the annotation process, the decision makers need to specify all the implicit rules, including the hidden decision rules and the meta-rules of a set of decision tables. The semantics of these rules is derived from an agreement between the decision makers observing the real-world decision problems. The process of capturing semantics within a community is a process of knowledge acquisition.
Notes
References
Software testing | Semantic decision table | Engineering | 830 |
62,415,848 | https://en.wikipedia.org/wiki/Bostwick%20Historic%20District | The Bostwick Historic District, in Bostwick, Georgia, is a historic district which was listed on the National Register of Historic Places in 2002. The listing included 64 contributing buildings, a contributing structure, and four contributing sites on .
It is centered on the intersection of Bostwick Rd. (Georgia State Route 83) and Fairplay Rd. in Bostwick.
The oldest historic resource is the Bostwick Cemetery, established around 1859.
It was deemed significant partly as it is a "good example of a rural town in Georgia which developed from the cultivation and processing of cotton. The district is significant in the areas of agriculture and industry for its excellent collection of industrial buildings associated with the processing of cotton as well as for the remaining cotton fields located within the district. John Bostwick, Sr. (1859-1929), considered the founder of the town, started Bostwick Supply Company in 1892. In 1901, he started the Bostwick Manufacturing Company that consisted of a cottonseed oil mill and other buildings (cotton gin, granary, grist mill, warehouse, guano/fertilizer building) associated with the manufacturing of goods from cotton. (All these buildings still remain.) Historically, the region surrounding the small town of Bostwick was primarily planted in cotton. Currently, much of this land has been planted with other crops, such as pine trees and peanuts, or left open to be used as pasture land for grazing by cattle. The fields planted in cotton within the district still convey the historic significant pattern in Georgia of agricultural fields abutting the town development."
It was deemed significant also for its architecture, specifically "for its excellent examples of historic residences, commercial, and community landmark buildings representing architectural types and styles popular in Georgia from the late 19th century into the early 20th century. The significant architectural types include Georgian cottage, gabled ell cottage, Queen Anne cottage, hall-parlor cottage, and bungalow. The significant architectural styles include Colonial Revival, Neoclassical Revival, Craftsman, and Folk Victorian. The John Bostwick, Sr. House, built in 1902, is an excellent representative example of a Georgian House, a two-story house with a central hallway on each floor with two rooms on either side, representing the Neoclassical Revival style. The character-defining features of the house include a full-height entry porch with lower full-width porch, truncated hipped roof, and wide cornice band. The historic stores are good examples of attached and freestanding buildings representing the Folk Victorian style. The character-defining features include a stepped parapet roof, recessed brick panels, and decorative arches over the windows and doors. The historic community landmark resources include two churches, the Susie Agnes Hotel, and the Bostwick Cemetery."
References
GA
Stepped gables
National Register of Historic Places in Morgan County, Georgia
Historic districts on the National Register of Historic Places in Georgia (U.S. state)
Buildings and structures completed in 1859 | Bostwick Historic District | Engineering | 596 |
23,142,393 | https://en.wikipedia.org/wiki/Compound%2048/80 | Compound 48/80 is a polymer produced by the condensation of N-methyl-p-methoxyphenethylamine with formaldehyde. It promotes histamine release, and in biochemical research, compound 48/80 is used to promote mast cell degranulation.
References
Organic polymers | Compound 48/80 | Chemistry,Biology | 66 |
30,844,531 | https://en.wikipedia.org/wiki/Source%20code%20virus | Source code viruses are a subset of computer viruses that make modifications to source code located on an infected machine. A source file can be overwritten such that it includes a call to some malicious code. By targeting a generic programming language, such as C, source code viruses can be very portable. Source code viruses are rare, partly due to the difficulty of parsing source code programmatically, but have been reported to exist.
One such virus (W32/Induc-A) was identified by the anti-virus firm Sophos as capable of injecting itself into the source code of any Delphi program it finds on an infected computer, so that it is compiled into the finished executable.
Notes
References
Computer viruses
Virus
Source code | Source code virus | Technology | 152 |
32,190,058 | https://en.wikipedia.org/wiki/Carboxypeptidase%20A%20inhibitor | In molecular biology, the carboxypeptidase A inhibitor family is a family of proteins which is represented by the well-characterised metallocarboxypeptidase A inhibitor (MCPI) from potatoes, which belongs to the MEROPS inhibitor family I37, clan IE. It inhibits metallopeptidases belonging to MEROPS peptidase family M14, carboxypeptidase A. In Russet Burbank potatoes, it is a mixture of approximately equal amounts of two polypeptide chains containing 38 or 39 amino acid residues. The chains differ in their amino terminal sequence only and are resistant to fragmentation by proteases. The structure of the complex between bovine carboxypeptidase A and the 39-amino-acid carboxypeptidase A inhibitor from potatoes has been determined at 2.5-Angstrom resolution.
The potato inhibitor is synthesised as a precursor, having a 29 amino acid N-terminal signal peptide, a 27 amino acid pro-peptide, the 39 amino acid mature inhibitor region and a 7 amino acid C-terminal extension. The 7 amino acid C-terminal extension is involved in inhibitor inactivation and may be required for targeting to the vacuole where the mature active inhibitor accumulates.
The N-terminal region and the mature inhibitor are weakly related to other solanaceous proteins found in this family, from potato, tomato and henbane, which have been incorrectly described as metallocarboxypeptidase inhibitors.
References
External links
MEROPS inhibitor family I37
MEROPS peptidase family M14
Protein domains | Carboxypeptidase A inhibitor | Biology | 331 |
11,127,559 | https://en.wikipedia.org/wiki/Cochliobolus%20spicifer | Cochliobolus spicifer is a fungal plant pathogen.
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Cochliobolus
Fungi described in 1964
Fungus species | Cochliobolus spicifer | Biology | 44 |
14,164,045 | https://en.wikipedia.org/wiki/Herpes%20simplex%20virus%20protein%20vmw65 | Vmw65, also known as VP16 or α-TIF (Trans Inducing Factor) is a trans-acting protein that forms a complex with the host transcription factors Oct-1 and HCF to induce immediate early gene transcription in the herpes simplex viruses.
VP16 is a strong transactivator and is often used in Y2H systems as the activation domain of the system.
References
Simplexviruses
Viral nonstructural proteins | Herpes simplex virus protein vmw65 | Biology | 95 |
32,844,232 | https://en.wikipedia.org/wiki/Saccharomyces%20eubayanus | Saccharomyces eubayanus, a cryotolerant (cold tolerant) type of yeast, is most likely the parent of the lager brewing yeast, Saccharomyces pastorianus.
Lager is a type of beer created from malted barley and fermented at low temperatures, originally in Bavaria. S. eubayanus was first discovered in Patagonia, possibly being an example of Columbian exchange, and is capable of fermenting glucose, along with the disaccharide maltose at reduced temperatures.
History
With the emergence of lager beer in the 15th century, S. eubayanus came to be considered the progenitor of S. pastorianus along with S. cerevisiae. Since 1985, the identity of the non-cerevisiae ancestor had been contentiously debated between S. eubayanus and S. bayanus, which "is not found outside the brewing environment". Upon the 2011 discovery of S. eubayanus in Argentina and the consequent genome analysis, S. eubayanus was found to be 99% identical to the non-cerevisiae portion of the S. pastorianus genome, and S. bayanus was dismissed as an ancestor.
First described in 2011, S. eubayanus was discovered in northern Patagonia, ecologically associated with Nothofagus spp. (southern beech) forests and the parasitic biotrophic fungi Cyttaria spp. With discoveries in other parts of the world shortly after, the South American origin of S. eubayanus has been challenged by genomic and phylogenetic evidence suggesting a Tibetan origin. Proponents of this theory argue that it "corresponds better with geography and world trade history" given the Eurasian land bridge. Since then, genomic analyses of South American strains have shown reduced genetic diversity, suggesting a biogeographical radiation point in Patagonia.
In 2022, a researcher team from the University College Dublin isolated Saccharomyces eubayanus from soil samples in Ireland. Further isolations from different locations in Europe can be expected.
Phylogenetically, S. eubayanus is basal in the genus Saccharomyces and is well adapted to the cooler environment of Nothofagus forests; the thermotolerance seen in other Saccharomyces species is suggested to be a derived trait.
Genomics
Population genomic analyses have identified two main populations of S. eubayanus located in Patagonia, Patagonia A and Patagonia B/Holarctic. These are the closest known wild relatives of the lager yeasts: comparing sub-genomes, the wild strains are 99.82% and 99.72% identical, respectively.
Lager yeasts consist of two distinct lineages, said to have arisen from independent hybridization events 1,000 years ago. The first type, called Saaz, contains allotriploid strains with one copy of the S. cerevisiae genome and two copies of the S. eubayanus genome. The second type, Frohberg, houses allotetraploid strains with one full diploid genome copy each of S. cerevisiae and S. eubayanus. Saaz strains, which are physiologically more similar to their S. eubayanus parent, are much more efficient at growing at low temperatures, reflecting the cryotolerant properties of S. eubayanus. S. eubayanus is said to provide the bottom-fermenting and cold-temperature genetics that distinguish lager yeasts from the top-fermenting, bread-making relative S. cerevisiae.
A de novo assembly of the S. eubayanus genome yielded 5,515 protein-coding genes, 4,993 of which were unambiguous 1:1 orthologs to S. cerevisiae, and S. uvarum.
Uses
In 2015, an interspecific hybridization of S. cerevisiae and S. eubayanus was successful in creating novel lager-brewing yeasts. However, hybrid genomes can be genetically unstable in industrial use.
In 2016, S. eubayanus was used itself to brew lager beer.
References
eubayanus
Yeasts used in brewing
Fungi described in 2011
Fungus species | Saccharomyces eubayanus | Biology | 861 |
14,755,739 | https://en.wikipedia.org/wiki/EPH%20receptor%20A4 | EPH receptor A4 (ephrin type-A receptor 4) is a protein that in humans is encoded by the EPHA4 gene.
This gene belongs to the ephrin receptor subfamily of the protein-tyrosine kinase family. EPH and EPH-related receptors have been implicated in mediating developmental events, particularly in the nervous system. Receptors in the EPH subfamily typically have a single kinase domain and an extracellular region containing a Cys-rich domain and 2 fibronectin type III repeats. The ephrin receptors are divided into 2 groups based on the similarity of their extracellular domain sequences and their affinities for binding ephrin-A and ephrin-B ligands.
In 2012, a publication in Nature Medicine revealed a connection between EPHA4 and the neurodegenerative disease amyotrophic lateral sclerosis (ALS), in which a defective copy of the gene allows ALS patients to live considerably longer than patients with an intact gene. This opens up possibilities for developing treatments for this currently untreatable disease.
References
Further reading
Tyrosine kinase receptors | EPH receptor A4 | Chemistry | 226 |
46,651,211 | https://en.wikipedia.org/wiki/NGC%20339 | NGC 339 is a globular cluster in the constellation Tucana the Toucan. It is located both visually and physically in the Small Magellanic Cloud, being only about 10,000 ± 12,000 light years (3,000 ± 3,000 parsecs) closer than the cloud. It is rather prominent, being the brightest cluster in the southern reaches of the cloud. It was discovered by John Herschel on September 18, 1835. It was observed in 2005 by the Hubble Space Telescope. Its apparent V-band magnitude is 12.12, but at this wavelength, it has 0.19 magnitudes of interstellar extinction.
NGC 339 is about 6.3 billion years old. Its estimated mass is , and its total luminosity is , leading to a mass-to-luminosity ratio of 0.79 in solar units. All else equal, older star clusters have higher mass-to-luminosity ratios; that is, they have lower luminosities for the same mass.
References
External links
Globular clusters
0339
18350918
Tucana
Discoveries by John Herschel
Small Magellanic Cloud | NGC 339 | Astronomy | 233 |
13,212,005 | https://en.wikipedia.org/wiki/List%20of%20countries%20by%20proven%20oil%20reserves | Proven oil reserves are those quantities of petroleum which, by analysis of geological and engineering data, can be estimated, with a high degree of confidence, to be commercially recoverable from a given date forward from known reservoirs and under current economic conditions.
Some statistics on this page are disputed and controversial—different sources (OPEC, CIA World Factbook, oil companies) give different figures. Some of the differences reflect different types of oil included. Different estimates may or may not include oil shale, mined oil sands or natural gas liquids.
Because proven reserves include oil recoverable under current economic conditions, nations may see large increases in proven reserves when known, but previously uneconomic deposits become economic to develop. In this way, Canada's proven reserves increased suddenly in 2003 when the oil sands of Alberta were seen to be economically viable. Similarly, Venezuela's proven reserves jumped in the late 2000s when the heavy oil of the Orinoco Belt was judged economic.
Sources
Sources sometimes differ on the volume of proven oil reserves. The differences sometimes result from different classes of oil included, and sometimes from different definitions of "proven". (The data below do not appear to include shale oil and other unconventional sources of oil such as tar sands. For instance, North America has over 3 trillion barrels of shale oil resources, and the majority of oil produced in the US is from shale, leading to the paradoxical implication below that the US would exhaust its proven reserves in about 10 years at 2024 production levels.)
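The "years of oil remaining" figures implied in the note above come from a simple reserves-to-production (R/P) ratio; the numbers in this sketch are placeholders, not sourced data:

```python
# Reserves-to-production (R/P) ratio: years of supply at the current rate.
# Both figures below are hypothetical, for illustration only.
proven_reserves_bbl = 48e9       # hypothetical proven reserves, barrels
annual_production_bbl = 4.8e9    # hypothetical production, barrels/year

print(proven_reserves_bbl / annual_production_bbl, "years")  # 10.0
```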
Countries
Reserve amounts are listed in millions of barrels.
indicates links to "Oil reserves in Country or Territory" or "Energy in Country or Territory" pages.
See also
List of countries by oil production
List of countries by oil consumption
List of countries by natural gas proven reserves
References
Oil, proven
Reserves
List of countries by proven oil reserves
Lists of countries | List of countries by proven oil reserves | Chemistry | 368 |
1,127,460 | https://en.wikipedia.org/wiki/Sylvester%27s%20law%20of%20inertia | Sylvester's law of inertia is a theorem in matrix algebra about certain properties of the coefficient matrix of a real quadratic form that remain invariant under a change of basis. Namely, if is a symmetric matrix, then for any invertible matrix , the number of positive, negative and zero eigenvalues (called the inertia of the matrix) of is constant. This result is particularly useful when is diagonal, as the inertia of a diagonal matrix can easily be obtained by looking at the sign of its diagonal elements.
This property is named after James Joseph Sylvester who published its proof in 1852.
Statement
Let A be a symmetric square matrix of order n with real entries. Any non-singular matrix S of the same size is said to transform A into another symmetric matrix B = SAS^T, also of order n, where S^T is the transpose of S. It is also said that matrices A and B are congruent. If A is the coefficient matrix of some quadratic form of R^n, then B is the matrix for the same form after the change of basis defined by S.
A symmetric matrix A can always be transformed in this way into a diagonal matrix D which has only entries 0, +1, −1 along the diagonal. Sylvester's law of inertia states that the number of diagonal entries of each kind is an invariant of A, i.e. it does not depend on the transforming matrix S used.
The number of +1s, denoted n+, is called the positive index of inertia of A, and the number of −1s, denoted n−, is called the negative index of inertia. The number of 0s, denoted n0, is the dimension of the null space of A, known as the nullity of A. These numbers satisfy an obvious relation
n0 + n+ + n− = n.
The difference, sgn(A) = n+ − n−, is usually called the signature of A. (However, some authors use that term for the triple (n0, n+, n−) consisting of the nullity and the positive and negative indices of inertia of A; for a non-degenerate form of a given dimension these are equivalent data, but in general the triple yields more data.)
If the matrix A has the property that every principal upper left k×k minor Δk is non-zero, then the negative index of inertia is equal to the number of sign changes in the sequence Δ0 = 1, Δ1, ..., Δn = det A.
Statement in terms of eigenvalues
The law can also be stated as follows: two symmetric square matrices of the same size have the same number of positive, negative and zero eigenvalues if and only if they are congruent (B = SAS^T for some non-singular S).
The positive and negative indices of a symmetric matrix A are also the numbers of positive and negative eigenvalues of A. Any symmetric real matrix A has an eigendecomposition of the form QEQ^T, where E is a diagonal matrix containing the eigenvalues of A, and Q is an orthonormal square matrix containing the eigenvectors. The matrix E can be written E = WDW^T, where D is diagonal with entries 0, +1, −1 and W is diagonal with W_ii = √|λi| (taking W_ii = 1 when λi = 0). The matrix S = QW transforms D to A.
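The invariance is easy to check numerically; a minimal NumPy sketch with random test matrices (the tolerance is an arbitrary choice):

```python
import numpy as np

def inertia(A, tol=1e-10):
    """Return (n+, n-, n0) for a real symmetric matrix A."""
    eig = np.linalg.eigvalsh(A)
    return (int((eig > tol).sum()),
            int((eig < -tol).sum()),
            int((np.abs(eig) <= tol).sum()))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = A + A.T                      # random symmetric matrix
S = rng.standard_normal((4, 4))  # almost surely non-singular
B = S @ A @ S.T                  # congruent to A

print(inertia(A), inertia(B))    # the two triples agree, as the law predicts
```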
Law of inertia for quadratic forms
In the context of quadratic forms, a real quadratic form Q in n variables (or on an n-dimensional real vector space) can, by a suitable change of basis (a non-singular linear transformation from x to y), be brought to the diagonal form
Q = a_1 y_1^2 + a_2 y_2^2 + ... + a_n y_n^2
with each a_i ∈ {0, +1, −1}. Sylvester's law of inertia states that the number of coefficients of a given sign is an invariant of Q, i.e., does not depend on a particular choice of diagonalizing basis. Expressed geometrically, the law of inertia says that all maximal subspaces on which the restriction of the quadratic form is positive definite (respectively, negative definite) have the same dimension. These dimensions are the positive and negative indices of inertia.
Generalizations
Sylvester's law of inertia is also valid if A and B have complex entries. In this case, it is said that A and B are *-congruent if and only if there exists a non-singular complex matrix S such that B = SAS*, where S* denotes the conjugate transpose. In the complex scenario, a way to state Sylvester's law of inertia is that if A and B are Hermitian matrices, then A and B are *-congruent if and only if they have the same inertia, the definition of which is still valid as the eigenvalues of Hermitian matrices are always real numbers.
Ostrowski proved a quantitative generalization of Sylvester's law of inertia: if $A$ and $B$ are $*$-congruent with $B = SAS^*$, then their eigenvalues $\lambda_i$ (ordered decreasingly) are related by
$\lambda_i(B) = \theta_i \lambda_i(A), \qquad i = 1, \ldots, n,$
where the $\theta_i$ are real numbers such that $\lambda_n(SS^*) \leq \theta_i \leq \lambda_1(SS^*)$.
A theorem due to Ikramov generalizes the law of inertia to any normal matrices $A$ and $B$: If $A$ and $B$ are normal matrices, then $A$ and $B$ are congruent if and only if they have the same number of eigenvalues on each open ray from the origin in the complex plane.
See also
Metric signature
Morse theory
Cholesky decomposition
Haynsworth inertia additivity formula
References
External links
Sylvester's law of inertia and *-congruence
Matrix theory
Quadratic forms
Theorems in linear algebra | Sylvester's law of inertia | Mathematics | 1,004 |
67,464,926 | https://en.wikipedia.org/wiki/Ditylum | Ditylum is a genus of diatoms belonging to the family Lithodesmiaceae.
The genus has cosmopolitan distribution.
Species:
Ditylum brightwellii
Ditylum buchananii
Ditylum cornutum
Ditylum ehrenbergii
Ditylum grovei
Ditylum inaequale
Ditylum pernodi
Ditylum segmentale
Ditylum sol
Ditylum trigonum
References
Diatoms
Diatom genera | Ditylum | Biology | 100 |
32,662,418 | https://en.wikipedia.org/wiki/Anomalous%20diffraction%20theory | Anomalous diffraction theory (also van de Hulst approximation, eikonal approximation, high energy approximation, soft particle approximation) is an approximation developed by Dutch astronomer van de Hulst describing light scattering for optically soft spheres.
The anomalous diffraction approximation for extinction efficiency is valid for optically soft particles and large size parameter, $x = 2\pi a/\lambda$:
$Q_{\mathrm{ext}} = 2 - \frac{4}{p} \sin p + \frac{4}{p^2} (1 - \cos p),$
where, in this derivation, the refractive index is assumed to be real, and thus there is no absorption ($Q_{\mathrm{abs}} = 0$). $Q_{\mathrm{ext}}$ is the efficiency factor of extinction, which is defined as the ratio of the extinction cross section and the geometrical cross section $\pi a^2$. $p = 4\pi a(n - 1)/\lambda$ has the physical meaning of the phase delay of the wave passing through the center of the sphere; $a$ is the sphere radius, $n$ is the ratio of refractive indices inside and outside of the sphere, and $\lambda$ the wavelength of the light.
This set of equations was first described by van de Hulst. There are extensions to more complicated geometries of scattering targets.
The anomalous diffraction approximation offers a very approximate but computationally fast technique to calculate light scattering by particles. The ratio of refractive indices has to be close to 1, and the size parameter should be large. However, semi-empirical extensions to small size parameters and larger refractive indices are possible. The main advantage of the ADT is that one can (a) calculate, in closed form, extinction, scattering, and absorption efficiencies for many typical size distributions; (b) find solution to the inverse problem of predicting size distribution from light scattering experiments (several wavelengths); (c) for parameterization purposes of single scattering (inherent) optical properties in radiative transfer codes.
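As an illustration of that computational simplicity, the extinction efficiency formula above can be evaluated directly; the following minimal Python sketch uses arbitrarily chosen, hypothetical parameter values:

```python
import numpy as np

def q_ext(p):
    """ADT extinction efficiency for a non-absorbing optically soft sphere.

    p = 4*pi*a*(n - 1)/lambda is the phase delay of the wave passing
    through the center of the sphere.
    """
    p = np.asarray(p, dtype=float)
    return 2.0 - (4.0 / p) * np.sin(p) + (4.0 / p**2) * (1.0 - np.cos(p))

# Hypothetical example: a 1 µm radius sphere, n = 1.05, at 550 nm
a, n, lam = 1.0e-6, 1.05, 550e-9
p = 4 * np.pi * a * (n - 1) / lam
print(f"p = {p:.3f}, Q_ext = {q_ext(p):.3f}")
```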
Another limiting approximation for optically soft particles is Rayleigh scattering, which is valid for small size parameters.
Notes and references
Scattering, absorption and radiative transfer (optics) | Anomalous diffraction theory | Chemistry | 400 |
59,208,283 | https://en.wikipedia.org/wiki/M%20equilibrium | M equilibrium is a set-valued solution concept in game theory that relaxes the rational choice assumptions of perfect maximization ("no mistakes") and perfect beliefs ("no surprises"). The concept can be applied to any normal-form game with finite and discrete strategies. M equilibrium was first introduced by Jacob K. Goeree and Philippos Louis.
Background
A large body of work in experimental game theory has documented systematic departures from Nash equilibrium, the cornerstone of classic game theory. The lack of empirical support for Nash equilibrium led Nash himself to return to doing research in pure mathematics. Selten, who shared the 1994 Nobel Prize with Nash, likewise concluded that "game theory is for proving theorems, not for playing games". M equilibrium is motivated by the desire for an empirically relevant game theory.
M equilibrium accomplishes this by replacing the two main assumptions underlying classical game theory, perfect maximization and rational expectations, with the weaker notions of ordinal monotonicity – players' choice probabilities are ranked the same as the expected payoffs based on their beliefs – and ordinal consistency – players' beliefs yield the same ranking of expected payoffs as their choices.
M equilibria are not derived from the fixed-point conditions, obtained by imposing rational expectations, that have long dominated economics. Instead, the mathematical machinery used to characterize M equilibria is semi-algebraic geometry. Interestingly, some of this machinery was developed by Nash himself. The characterization of M equilibria as semi-algebraic sets allows for mathematically precise and empirically testable predictions.
Definition
M equilibrium is based on the following two conditions:
Ordinal monotonicity: choice probabilities are ranked the same as the expected payoffs based on players’ beliefs. This replaces the assumption of "perfect maximization".
Ordinal consistency: players' beliefs yield the same ranking of expected payoffs as their choices. This replaces the rational expectations or perfect-beliefs assumption.
Let $\sigma$ and $\mu$ denote the concatenations of players' choice and belief profiles respectively, and let $r$ and $\pi$ denote the concatenations of players' rank correspondences and profit functions. We write $\pi(\mu)$ for the profile of expected payoffs based on players' beliefs and $\pi(\sigma)$ for the profile of expected payoffs when beliefs are correct, i.e. for $\mu = \sigma$. The set of possible choice profiles is $\Sigma$ and the set of possible belief profiles is $\mathcal{M}$.
Definition: We say $(M^{\sigma}, M^{\mu})$ form an M Equilibrium if they are the closures of the largest non-empty sets $\hat{M}^{\sigma} \subseteq \Sigma$ and $\hat{M}^{\mu} \subseteq \mathcal{M}$ that satisfy:
$r(\sigma) = r(\pi(\mu))$ and $r(\pi(\mu)) = r(\pi(\sigma))$ for all $\sigma \in \hat{M}^{\sigma}$, $\mu \in \hat{M}^{\mu}$.
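To make the two conditions concrete, the following minimal Python sketch (an illustration only: the 2×2 game, the profiles, and the helper names are hypothetical, and ties in the rankings are ignored) checks ordinal monotonicity and ordinal consistency for a single player:

```python
import numpy as np

def same_ranking(x, y):
    """True if x and y induce the same ordinal ranking (ties ignored)."""
    return np.array_equal(np.argsort(np.argsort(x)), np.argsort(np.argsort(y)))

# Hypothetical 2x2 game: row player's payoffs (rows = own actions,
# columns = opponent actions)
U = np.array([[3.0, 0.0],
              [1.0, 1.0]])

choice = np.array([0.7, 0.3])           # row player's choice probabilities
belief = np.array([0.6, 0.4])           # row player's belief about the opponent
opponent_choice = np.array([0.5, 0.5])  # opponent's actual choice probabilities

payoffs_believed = U @ belief           # expected payoffs given beliefs
payoffs_actual = U @ opponent_choice    # expected payoffs given actual play

ordinal_monotonicity = same_ranking(choice, payoffs_believed)
ordinal_consistency = same_ranking(payoffs_believed, payoffs_actual)
print(ordinal_monotonicity, ordinal_consistency)  # True True for these profiles
```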
Properties
It can be shown that, generically, M equilibria satisfy the following properties:
M equilibria have positive measure in $\Sigma \times \mathcal{M}$, the set of possible choice and belief profiles
M equilibria are "colorable" by a unique rank vector
Nash equilibria arise as boundary points of some M equilibrium
The number of M equilibria can generically be even or odd, and may be less than, equal to, or greater than the number of Nash equilibria. Also, any M equilibrium may contain zero, one, or multiple Nash equilibria. Importantly, the measure of any M equilibrium choice set is bounded and decreases exponentially with the number of players and the number of possible choices.
Meta Theory
Surprisingly, M equilibrium "minimally envelops" various parametric models based on fixed points, including Quantal Response Equilibrium (QRE). Unlike QRE, however, M equilibrium is parameter-free, easy to compute, and does not impose the rational-expectations condition of homogeneous and correct beliefs.
Behavioral stability
The interior of a colored M equilibrium set consists of choices and beliefs that are behaviorally stable. A profile is behaviorally stable when small perturbations in the game do not destroy its equilibrium nature. So an M equilibrium is behaviorally stable when it remains an M equilibrium even after the game is perturbed. Behavioral stability is a strengthening of the concept of strategic stability.
See also
Bounded rationality
Behavioral game theory
References
Game theory equilibrium concepts | M equilibrium | Mathematics | 810 |
2,672,651 | https://en.wikipedia.org/wiki/Terminal%20alkene | In organic chemistry, terminal alkenes (alpha-olefins, α-olefins, or 1-alkenes) are a family of organic compounds which are alkenes (also known as olefins) with the chemical formula CxH2x, distinguished by having a double bond at the primary, alpha (α), or 1- position. This location of a double bond enhances the reactivity of the compound and makes it useful for a number of applications.
Classification
There are two types of alpha-olefins, branched and linear (or normal). The chemical properties of branched alpha-olefins with a branch at either the second (vinylidene) or the third carbon number are significantly different from the properties of linear alpha-olefins and those with branches on the fourth carbon number and further from the start of the chain.
Examples of linear alpha-olefins are propene, but-1-ene and dec-1-ene. An example of a branched alpha-olefin is isobutylene.
Production
A variety of methods are employed for the production of alpha-olefins. One class of methods starts with ethylene, which is either dimerized or oligomerized. These conversions are respectively effected by the alphabutol process, giving 1-butene, and the Shell higher olefin process, which gives a range of alpha-olefins. The former is based on titanium-based catalysts, and the latter relies on nickel-based catalysts. A different approach to alpha-olefins, especially long chain derivatives, involves cracking of waxes:
In the PACOL process (paraffin conversion to olefins), linear alkanes are dehydrogenated over a platinum-based catalyst.
Applications
Alpha-olefins are valued building blocks for other industrial chemicals.
A major portion of medium or long chain derivatives are converted to detergents and plasticizers. A common first step in making such products is hydroformylation followed by hydrogenation of the resulting aldehydes.
Long chain alpha-olefins are also oligomerized to give medium molecular weight oils that serve as lubricants.
Alkylation of benzene with alpha-olefins followed by ring-sulfonation gives linear alkylbenzene sulfonates (LABS) which are biodegradable detergents. Competing often with these petroleum-derived products are derivatives of fatty acids, such as fatty alcohols and fatty amines.
Low molecular weight alpha-olefins (butenes, hexenes, etc.) are used as comonomers, which are incorporated into polyethylene. Some are subjected to olefin metathesis as a route to propylene.
See also
Vinyl group
References
Alkenes | Terminal alkene | Chemistry | 581 |
21,852,334 | https://en.wikipedia.org/wiki/Latter%20Day%20Saint%20movement%20and%20engraved%20metal%20plates | Engraved metal plates are significant in the Latter Day Saint movement because in 1827, the founder, Joseph Smith, claimed to have obtained a set of engraved golden plates, which he said he had found four years earlier after being directed to them by an angel. He claimed to have translated the engravings on the plates by divine power into English as the Book of Mormon, a religious text of that religious tradition.
Latter Day Saints believe that other engraved metal plates exist, many of which are mentioned in the Book of Mormon. In addition, Mormon apologists argue that the golden plates are part of a long tradition of writing on engraved metal plates in the Middle East.
The golden plates
The golden plates are a set of bound and engraved metal plates that Latter Day Saint denominations believe are the source of Joseph Smith's English translation of the Book of Mormon. Although several witnesses said they saw the plates, Smith said that he returned them to an angel after the translation was completed. Most Latter Day Saints assume their authenticity as a matter of faith.
Smith said he discovered the plates on September 22, 1823, on Cumorah hill, Manchester, New York, where he said they had been hidden in a buried box and protected for centuries by the angel Moroni, a resurrected ancient American prophet-historian, who had been last to write on them. Smith claimed that the angel required him to obey certain commandments prior to receiving them and that his inability to obey prevented him from obtaining the plates until four years later, on September 22, 1827.
During this period, Smith also began dictating written commandments in the voice of God, including a commandment to form a new church and to choose eleven men who would join Smith as witnesses of the plates. These witnesses later declared, in two separate written statements attached to the 1830 published Book of Mormon, that they had seen the plates.
The Book of Mormon is accepted by adherents of the Latter Day Saint movement as a sacred text.
Proposed secular origins
There have been a variety of secular theories proposed for the origins of the Book of Mormon's plates. These range from theories based on environmental influences to psychological theories to pranking that grew into the Mormon faith.
The most recent scholarship, by Sonia Hazard, argues that the plates were inspired by printing plates or something similar. Joseph Smith, in this theory, would have either encountered "plates" or similar objects, possibly even on the Hill Cumorah, and believed them to be ancient artifacts. Given the presence of witnesses who attested to physical encounters with the plates, Hazard argues that physical objects seem most likely to be the stimulus of the Book of Mormon and of the golden plates' narrative role.
In a similar vein, Ann Taves argues that the belief of Joseph Smith and others in the plates contributed to them perceiving a physical object. While, in Taves's view, the plates were not a material reality, they seemed to be so for the faith's eyewitnesses.
Peter Ingersoll, a contemporary of Smith, was quoted by Eber D. Howe as saying that the golden plates were in fact a bag of sand. Ingersoll then relates the story of Smith deceiving his family, the Three Witnesses, and the Eight Witnesses with said bag of sand. Ingersoll indicates that this was a joke that spiraled into the Mormon movement.
Book of Mormon
In addition to the golden plates, the Book of Mormon refers to several other sets of books written on metal plates:
The brass plates, originally in the custody of Laban, containing the writings of Old Testament prophets before the Babylonian exile, as well as the otherwise unknown prophets Zenos, Zenoch, Neum, and possibly others.
The large plates of Nephi, the source of the text abridged by Mormon and engraved on the golden plates.
The small plates of Nephi, the source of the first and second books of Nephi, and the books of Jacob, Enos, Jarom and Omni, which replaced the lost 116 pages.
The plates of Limhi
A set of twenty-four plates found by the people of Limhi containing the record of the Jaredites, translated by King Mosiah, and abridged by Moroni as the Book of Ether.
Kinderhook plates
In 1843, Smith acquired a set of six small bell-shaped plates, known as the Kinderhook Plates, found in Kinderhook, Pike County, Illinois. The plates were manufactured and buried by three men who lived in Kinderhook, and who had intended the plates as a prank against the LDS community. Although Smith did not translate the plates, William Clayton, his secretary, wrote that Smith said they contained "the history of the person with whom they were found and he was a descendant of Ham through the loins of Pharaoh king of Egypt." As Richard Bushman has written: "Joseph may not have detected the fraud, but he did not swing into a full-fledged translation as he had with the Egyptian scrolls. The trap did not quite spring shut, which foiled the conspirators' original plan." After Smith's death, the Kinderhook plates were presumed lost, and for decades the Church of Jesus Christ of Latter-day Saints (LDS Church) published facsimiles of them in its official History of the Church. In 1980, the Kinderhook Plates were tested at Brigham Young University and determined to have been manufactured during the nineteenth century. Today, the LDS Church acknowledges that the Kinderhook plates were a hoax.
Voree Plates
James J. Strang, one of many rival claimants to succeed Smith in the 1844 succession crisis, said that he had discovered and translated a set of plates known as the Voree Plates or "Voree Record." Like Smith, Strang produced witnesses to testify to his plates' authenticity. Although Strang's attempt to supplant Brigham Young as Smith's successor proved abortive, Smith's mother, Lucy Mack Smith, and for a time all living witnesses to the Book of Mormon, including the three Whitmers and Martin Harris (although perhaps excluding Oliver Cowdery), accepted "Strang's leadership, angelic call, metal plates, and his translation of these plates as authentic." Strang equally claimed to have discovered and translated the Plates of Laban spoken of in the Book of Mormon. As with the Voree Plates, Strang produced witnesses who authenticated them. Strang's purported translation of these plates was published in 1850 as the Book of the Law of the Lord, which together with the Voree Record, is accepted as Scripture by members of Strang's diminutive church, the Church of Jesus Christ of Latter Day Saints (Strangite).
Mormon studies
Mormon apologist William J. Hamblin argued that the golden plates are part of a long tradition of writing on engraved metal plates in the ancient Mediterranean.
There are many specifically Hebrew examples of writings on metal plates, including a reference in Exodus 28:36 of the Bible to the high priest wearing an engraved gold plate, excavated silver plates containing Numbers 6:24-26 of the Bible dating to the seventh century BC, a treaty with the Romans engraved on bronze, a list of hidden temple treasures on the Copper Scroll from Qumran, and a third century AD ritual text referencing writings on metal plates or amulets numerous times. In addition, there are numerous other Semitic examples of writings on metal plates, including three foundation plates of copper, silver, and gold dating to the 24th century BC and earlier, Byblos syllabic inscriptions on copper plates from the 18th century BC, the Kilamuwa gold plates (830-825 BC) containing a short prayer, Sargon II's writings on six metal plates of bronze, lead, silver, and gold from Khorsabad (714-705 BC) about temple building, and the Pyrgi gold plate from Italy (500-475 BC) of a religious dedication. Further evidence of this tradition is the stone boxes containing the large gold and silver plates of the Apadana hoard (515 BCE), excavated in 1933. Furthermore, the Mandaeans of Iran are reported to maintain their entire Book of John in a metal book made entirely of lead plates.
Nevertheless, there is no known extant example of writing on metal plates from the ancient Mediterranean longer than the eight-page Persian codex, and none from any ancient civilization in the Western Hemisphere.
See also
Reformed Egyptian
Jordan Lead Codices
List of plates (Latter Day Saint movement)
Mandaic lead rolls
Notes
References
.
Further reading
History of the Latter Day Saint movement
Joseph Smith
Book of Mormon artifacts
Mormon studies
Metals | Latter Day Saint movement and engraved metal plates | Chemistry | 1,758 |
4,634,174 | https://en.wikipedia.org/wiki/Ultra%20Port%20Architecture | The Ultra Port Architecture (UPA) bus was developed by Sun Microsystems as a high-speed interconnect between graphics cards and the CPU, beginning with the Ultra 1 workstation in 1995.
See also
List of device bandwidths
External links
UPA Bus Whitepaper
Computer buses
Sun Microsystems hardware | Ultra Port Architecture | Technology | 61 |
11,421,966 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD34 | In molecular biology, snoRNA U34 (also known as SNORD34) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA U34 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
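As an illustration of how these conserved motifs can be located in a sequence (a minimal sketch; the sequence below is invented for demonstration and is not the actual SNORD34 sequence):

```python
import re

# Hypothetical snoRNA-like sequence (not the real SNORD34 sequence)
rna = "AAUGAUGAUCCUAGGCAUUACGGAUUGCUGAUU"

c_box = re.compile("UGAUGA")  # conserved C box motif
d_box = re.compile("CUGA")    # conserved D box motif

print([m.start() for m in c_box.finditer(rna)])  # [2] for this sequence
print([m.start() for m in d_box.finditer(rna)])  # [27] for this sequence
```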
snoRNA U34 was initially characterised by a computational screen and in the human genome is encoded within intron 5 of the gene for ribosomal protein L13a. U34 is predicted to guide site-specific 2'-O-methylation of 25S rRNAs. Unusually for a snoRNA, although the selection of the target nucleotide requires the antisense element and the conserved box D or D' of the snoRNA, in the case of the U34 snoRNP the methyltransferase activity is thought to reside in one of the protein components. U34 snoRNA has homologues in mouse, Arabidopsis (annotated as snoR4) and in several copies in rice (alternatively named snoZ181).
References
External links
Small nuclear RNA | Small nucleolar RNA SNORD34 | Chemistry | 341 |
52,484,308 | https://en.wikipedia.org/wiki/Catenospegazzinia%20elegans | Catenospegazzinia elegans is a species of sac fungi. The holotype was found on a dead inflorescence stalk of Xanthorrhoea preissii in Western Australia.
References
External links
Catenospegazzinia elegans at global names
Fungi described in 1991
Enigmatic Ascomycota taxa
Fungi of Australia
Biota of Western Australia
Fungus species | Catenospegazzinia elegans | Biology | 82 |
43,285,597 | https://en.wikipedia.org/wiki/Plectania%20milleri | Plectania milleri is a species of fungus in the family Sarcosomataceae. Found in western North America, it was described as new to science in 1969. It is named in honor of mycologist Orson K. Miller.
References
External links
Pezizales
Fungi described in 1969
Fungi of North America
Fungus species | Plectania milleri | Biology | 68 |
71,856,117 | https://en.wikipedia.org/wiki/Thermosipho | Thermosipho is a genus of Gram-negative staining, anaerobic, and mostly thermophilic and hyperthermophilic bacteria in the family Thermotogaceae.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI).
Species incertae sedis:
Thermosipho ferriphilus Kendall et al. 2002
See also
List of bacteria genera
List of bacterial orders
References
Huber, R., Woese, C.R., Langworthy, T.A., Fricke, H., and Stetter, K.O. "Thermosipho africanus gen. nov., represents a new genus of thermophilic eubacteria within the 'thermotogales'." Syst. Appl. Microbiol. (1989) 12:32-37
Ravot, G., Ollivier, B., Patel, B.K.C., Magot, M., and Garcia, J.L. "Emended description of Thermosipho africanus as a carbohydrate-fermenting species using thiosulfate as an electron acceptor." Int. J. Syst. Bacteriol. (1996) 46:321-323
Thermotogota
Gram-negative bacteria
Anaerobes
Bacteria genera
Thermophiles
Bacteria described in 1989 | Thermosipho | Biology | 324 |
31,101,067 | https://en.wikipedia.org/wiki/Cognitive%20bias%20modification | Cognitive bias modification (CBM) refers to procedures used in psychology that aim to directly change biases in cognitive processes, such as biased attention toward threat (vs. benign) stimuli and biased interpretation of ambiguous stimuli as threatening. The procedures are designed to modify information processing via cognitive tasks that use basic learning principles and repeated practice to encourage a healthier thinking style in line with the training contingency.
CBM research emerged as investigators adapted the techniques originally used to assess cognitive biases, such as attention bias, into procedures for manipulating those biases. This allowed for tests of the causal relationship between cognitive biases and emotional states (e.g., does selectively attending to threatening information cause greater anxiety?). Over time, CBM paradigms were developed to modify biases in other areas of information processing, including interpretation, memory, motivation (e.g., approach–avoidance behaviors), and attributional style. The early success of the procedures in inducing change in bias led researchers to see the potential benefit of CBM as an intervention for emotional and behavioral disorders. Given that the maladaptive cognitive processes implicated in models of emotional vulnerability and dysfunction are targeted by CBM, there is considerable interest in the theoretical and applied importance of the techniques. As such, many recent studies of CBM have targeted cognitive biases in people with anxiety and depressive symptoms.
Research on the effectiveness of CBM in shifting attention and interpretation biases has indicated promising evidence in adult populations, though there are also some null results. Additionally, CBM can reduce anxiety symptoms and stress vulnerability in some cases, though these effects are more mixed. There is also some evidence of CBM's effectiveness in depression symptomatology. Researchers have pointed to the practical benefits offered by CBM, such as scalability and ease of dissemination, potential for augmentation effects with cognitive-behavioral therapy, and cost-effectiveness. Further research on CBM is needed, however, as the evidence for its long-term effects is less clear, including in children.
Types
Two common features are used in the majority of CBM methodologies. First, the cognitive bias targeted for change represents a pattern of selective information processing that is known to characterize psychopathology. For example, individuals with anxiety disorders are characterized by an automatic tendency to attend toward threat, while paying less attention to neutral stimuli. Second, the cognitive bias is altered in a manner that does not involve instructing the individual to intentionally change such information-processing selectivity. Rather, change in the cognitive bias is induced by introducing a contingency designed such that successful task performance will be enhanced by adoption of a new pattern of responding.
Two of the most common types of CBM target attention and interpretation biases. Another type of CBM, approach–avoidance training, targets motivation biases associated with approach and avoidance behaviors.
Attention bias modification
Cognitive bias modification for attention (CBM-A) or attention bias modification (ABM) cognitive tasks are typically designed to draw attention to neutral or positive stimuli, and avoid negative or threatening stimuli. The cognitive tasks utilized in ABM were originally designed for the assessment of attentional bias and later adapted as training tasks.
Common paradigms to manipulate visual attention include the spatial cueing task and visual search task, in addition to the visual probe task. In a typical visual probe trial, a central fixation cross is presented, followed by the brief appearance of a threat and non-threat cue, such as a face with an angry expression and a face with a neutral expression. One of the cues is replaced by a probe, such as a small dot, letter or arrow. The aim is to respond as quickly as possible to identify the probe with a button-press response, for example, to indicate the letter shown or direction of the arrow presented. By having the probe occur routinely in the location where the neutral (rather than negative or threatening) face appeared, the individual learns though practice that attending to the neutral stimulus will enhance their performance on the task because they will be faster to identify the probe.
The logic guiding this training task follows from the assessment version of the task in which the probe appears equally and randomly following the neutral and threat stimuli. In this case, attention bias for threat is inferred from response times to probes. If an individual has a bias to direct attention to the spatial location of the threat stimuli, this should be reflected by faster response times to probes that appear in the same location as threat cues (threat-congruent trials) than non-threat cues (threat-incongruent trials). Conversely, if an individual has a bias to direct attention away from threat stimuli, this should be reflected by slower response times to probes replacing threat than non-threat cues.
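The bias index computed from such a task is a simple difference of mean response times. A minimal sketch, using invented reaction times in milliseconds:

```python
import statistics

# Hypothetical reaction times (ms) from a visual probe assessment task
congruent_rt = [512, 498, 530, 505, 489]    # probe replaced the threat cue
incongruent_rt = [548, 561, 539, 552, 544]  # probe replaced the neutral cue

# Positive scores indicate attention bias toward threat: probes appearing
# in the threat location are identified faster.
bias_score = statistics.mean(incongruent_rt) - statistics.mean(congruent_rt)
print(f"attention bias score: {bias_score:.1f} ms")  # 42.0 ms here
```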
Interpretation bias modification
Cognitive bias modification for interpretation (CBM-I) or interpretation bias modification (IBM) involves cognitive tasks that disambiguate an otherwise ambiguous sentence, paragraph, or picture to be either positively or negatively valenced. Interpretation bias tasks typically aim to increase the extent individuals interpret ambiguous situations in benign ways to encourage more flexible thinking that is less rigidly negative.
The ambiguous situations paradigm is one of the most commonly-used protocols used to manipulate interpretation bias. In this task, individuals are typically presented with short paragraphs describing an ambiguous situation. The emotional resolution of the paragraph is not revealed until the end of the paragraph—for example, "You ask a friend to look over some work you have done. You wonder what he will think about what you've written. He comes back with some comments, which are all very positi_e [word fragment in italics]." The resolution often features a word fragment that the individual is asked to solve. By repeatedly practicing assigning non-threatening meanings to the ambiguous situations, the individual is thought to learn that uncertainty is more likely to be resolved in a benign, rather than negative, way. The resolution of the ambiguity is typically reinforced through a brief question following the word fragment completion that requires the individual to respond in a way that matches the situation's ending as determined by the word fragment.
To see whether the ambiguous situations paradigm is successful in modifying interpretation bias, a "recognition" task that consists of a series of ambiguous scenarios is typically used as an outcome measure. In this task, the scenarios remain ambiguous even after solving the word fragment—for example, "You ask a friend to look over some work you have done. You wonder what he will think about what you've written. He comes back with some comments on a Thur_day [word fragment in italics]." In the second part of the recognition task, the titles of the ambiguous scenarios are displayed, together with four sentences per scenario that reflect different ways of understanding what occurred in the scenario that weren't actually stated. These sentences represent: a) a possible positive interpretation tied to the key emotional meaning of the scenario, b) a possible negative interpretation tied to the key emotional meaning of the scenario, c) a positive sentence that is not tied to the key emotional meaning of the scenario, and d) a negative sentence that is not tied to the key emotional meaning of the scenario. Individuals rate each sentence for its similarity in meaning to the original scenario. Higher similarity ratings for the positive (vs. negative) interpretation tied to the key emotional meaning of the scenario are thought to reflect a more positive interpretation.
Approach–avoidance training
Approach–avoidance training involves cognitive tasks that are designed to induce approach or avoidance behaviors towards specific stimuli. In the approach–avoidance task, a commonly used training protocol, individuals are shown images with a certain distinguishing feature on a computer screen, to which they should react as fast as possible using a joystick. For example, all images tilted to the left are pulled and become larger, while all images tilted to the right are pushed away and shrink in size. This zooming effect creates the visual impression that the pictures are coming closer upon pulling of the joystick, and that they move away upon pushing it.
Training involves selectively inducing avoidance of one type of stimulus and/or approach of another—for example, training avoidance behavior to alcohol-related stimuli for individuals with an alcohol use disorder by repeatedly practicing pushing the joystick when alcohol stimuli appear (and pulling the joystick for comparison stimuli), or training approach behavior to spider stimuli for individuals with arachnophobia by repeatedly practicing pulling the joystick when spider pictures appear (and pushing the joystick for comparison stimuli).
To see whether the training paradigm was successful in modifying approach–avoidance bias, the reaction time when participants are instructed to push away the target stimuli (e.g., alcohol or spider cues) compared to when participants are instructed to push away the comparison stimuli are contrasted, along with the analogous contrast for pulling the target vs. comparison stimuli.
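That contrast is typically summarized as a single compatibility score. A minimal sketch with invented median reaction times (ms):

```python
# Hypothetical median reaction times (ms) in an approach-avoidance task
rt = {
    ("alcohol", "push"): 640, ("alcohol", "pull"): 590,
    ("neutral", "push"): 615, ("neutral", "pull"): 610,
}

def approach_bias(stimulus):
    # Positive values: faster pulling than pushing, i.e. an approach tendency
    return rt[(stimulus, "push")] - rt[(stimulus, "pull")]

# Bias toward the target stimuli relative to the comparison stimuli
aat_score = approach_bias("alcohol") - approach_bias("neutral")
print(f"approach-bias score: {aat_score} ms")  # 50 - 5 = 45 ms here
```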
Criticisms and limitations
One concern is whether CBM modification procedures will reliably change symptoms and achieve lasting benefits. This is not yet clear from research.
A 2015 meta-analysis of 49 trials looking at outcomes for anxiety and depression casts doubt on the value of CBM. The paper concluded that 'CBM may have small effects on mental health problems, but it is also possible that there are no significant clinically relevant effects.' It notes that research is hampered by small, low-quality trials and by risk of publication bias.
Likewise, a recent meta-analysis has found that although attention bias modification (ABM) can be used as a treatment for several primary characteristics of social anxiety disorder (SAD), the durability of treatment and the inability to treat secondary symptoms have been raised as potential issues. In this meta-analysis, the authors assessed the efficacy of ABM for SAD on symptoms, reactivity to speech challenge, attentional bias (AB) toward threat, and secondary symptoms at posttraining, as well as SAD symptoms at 4-month follow-up. A systematic search in bibliographical databases uncovered 15 randomized studies involving 1043 individuals that compared ABM to a control training procedure. Data were extracted independently by two raters. All analyses were conducted on intent-to-treat data. Results revealed that ABM produces a small but significant reduction in SAD symptoms (g = 0.27), reactivity to speech challenge (g = 0.46), and AB (g = 0.30). These effects were moderated by characteristics of the ABM procedure, the design of the study, and trait anxiety at baseline. However, effects on secondary symptoms (g = 0.09) and SAD symptoms at 4-month follow-up (g = 0.09) were not significant. Although there was no indication of significant publication bias, the authors noted that the quality of the studies was substandard, which may have inflated the effect sizes. From a clinical point of view, these findings imply that ABM is not yet ready for wide-scale dissemination as a treatment for SAD in routine care.
See also
Cognitive bias mitigation
Cognitive vulnerability
Debiasing
Unconscious bias training
References
Behavior modification
Behavior therapy | Cognitive bias modification | Biology | 2,226 |
32,628,107 | https://en.wikipedia.org/wiki/Uppercase%20%28magazine%29 | Uppercase magazine is a quarterly craft, fashion, illustration, and design journal published by Janine Vangool in Calgary, Alberta. The first issue was released in June 2009, and included articles about the history of the screw, Heini Koskinen’s fashion design, Blanca Gomez's illustrations, and Karyn Valino's Toronto crafting business The Workroom.
Book publishing
In addition to the magazine, Uppercase publishes books about specific designers and illustrators. Publications include:
The Shatner Show (2007)
The Suitcase Series Volume 1: Camilla Engman
The Elegant Cockroach by Martin & Augustine (October 2010)
Work/Life 2: the UPPERCASE directory of international illustration (February 2011)
A Collection a Day by Lisa Congdon (March 2011)
The Suitcase Series Volume 2: Dottie Angel (August 2011)
Work/Life
Uppercase published two books, Work/Life and Work/Life 2, which act as databases for illustrators. Submissions are chosen by a juried process. The books include portfolio samples, interviews, and contact details. A companion iPhone app directs users to the individual illustrators' portfolio websites.
References
External links
http://www.uppercasemagazine.com
Janine Vangool interview with Zouch Magazine
2009 establishments in Alberta
Visual arts magazines published in Canada
Crafts
Design magazines
Do it yourself
Illustration
Magazines established in 2009
Quarterly magazines published in Canada
Typography
Magazines published in Alberta
Mass media in Calgary
Arts and crafts magazines | Uppercase (magazine) | Engineering | 301 |
45,088,122 | https://en.wikipedia.org/wiki/Estonian%20units%20of%20measurement | A number of units of measurement were used in Estonia to measure length, mass, area, capacity, etc.
Units used during the first half of the 20th century
Several units were used in Estonia; these included both Russian and local units.
Length
Several units were used in Estonia to measure length. One archine (Russian) was equal to 0.7112 m.
1 elle (Kuunar) = 0.75 archine
1 Foute = 3/7 archine
1 faden = 3 archine.
Mass
A number of units were used in Estonia to measure mass. One pfund was equal to 430 g (0.430 kg). Some other units are provided below:
1 quent = 1/128 pfund
1 loth = 1/32 pfund
1 liespfund = 20 pfund
1 centner = 120 pfund
1 tonne = 240 pfund
1 schiffspfund = 400 pfund.
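These relations are simple multiples of the pfund, so conversion to metric units is a one-line lookup. A minimal Python sketch (the table and function names are arbitrary):

```python
PFUND_KG = 0.430  # 1 pfund = 430 g

IN_PFUND = {
    "quent": 1 / 128,
    "loth": 1 / 32,
    "pfund": 1,
    "liespfund": 20,
    "centner": 120,
    "tonne": 240,
    "schiffspfund": 400,
}

def to_kg(value, unit):
    """Convert a quantity in a historical Estonian mass unit to kilograms."""
    return value * IN_PFUND[unit] * PFUND_KG

print(to_kg(1, "liespfund"))  # 8.6
print(to_kg(3, "loth"))       # 0.0403125
```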
Area
Several units were used in Estonia to measure area.
Reval (now Tallinn)
Some of Reval units are given below:
1 lofstelle = 1855 m²
1 tonnland = 5462.7 m².
Livonian
Some of the Livonian units are given below:
1 lofstelle = 3710 m² (accuracy is up to 3 digits)
1 tonnaland = 5194 m².
Capacity
A number of units were used to measure capacity. 1 hulmit was equal to 11.48 L.
Reval
One lof (Reval) was equal to 3 hulmit.
Livonian
One lof (Livonian) was equal to 6 hulmit. One tonne (Livonian) was equal to 12 hulmit.
References
Culture of Estonia
Estonia | Estonian units of measurement | Mathematics | 366 |
40,558,576 | https://en.wikipedia.org/wiki/ACM%20Transactions%20on%20Information%20Systems | ACM Transactions on Information Systems (ACM TOIS) is a quarterly peer-reviewed scientific journal covering research on computer systems and their underlying technology. It was established in 1983 and is published by the Association for Computing Machinery. The editor-in-chief is Min Zhang (Tsinghua University).
The journal is abstracted and indexed in the Science Citation Index Expanded and Current Contents/Engineering, Computing & Technology. According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.797.
References
External links
Computer science journals
Information systems journals
Academic journals established in 1983
Information Systems
Quarterly journals
English-language journals | ACM Transactions on Information Systems | Technology | 128 |
36,181,451 | https://en.wikipedia.org/wiki/Malware%20research | The notion of a self-reproducing computer program can be traced back to initial theories about the operation of complex automata. John von Neumann showed that in theory a program could reproduce itself. This constituted a plausibility result in computability theory. Fred Cohen experimented with computer viruses, confirmed von Neumann's postulate, and investigated other properties of malware such as detectability and self-obfuscation using rudimentary encryption. His 1988 doctoral dissertation was on the subject of computer viruses.
Cohen's faculty advisor, Leonard Adleman, presented a rigorous proof that, in the general case, algorithmic determination of the presence of a virus is undecidable. This problem must not be mistaken for the problem of determining, within a broad class of programs, that a virus is not present; the latter problem differs in that it does not require the ability to recognize all viruses.
Adleman's proof is perhaps the deepest result in malware computability theory to date and it relies on Cantor's diagonal argument as well as the halting problem. Ironically, it was later shown by Young and Yung that Adleman's work in cryptography is ideal for constructing a virus that is highly resistant to reverse-engineering, by presenting the notion of a cryptovirus. A cryptovirus is a virus that contains and uses a public key and a randomly generated symmetric cipher initialization vector (IV) and session key (SK).
In the cryptoviral extortion attack, the virus hybrid encrypts plaintext data on the victim's machine using the randomly generated IV and SK. The IV+SK are then encrypted using the virus writer's public key. In theory the victim must negotiate with the virus writer to get the IV+SK back in order to decrypt the ciphertext (assuming there are no backups). Analysis of the virus reveals the public key, not the IV and SK needed for decryption, or the private key needed to recover the IV and SK. This result was the first to show that computational complexity theory can be used to devise malware that is robust against reverse-engineering.
A growing area of computer virus research is to mathematically model the infection behavior of worms using models such as the Lotka–Volterra equations, which have been applied in the study of biological viruses. Various virus propagation scenarios have been studied, such as the propagation of computer viruses, fighting viruses with virus-like predator code, and the effectiveness of patching.
Behavioral malware detection has been researched more recently. Most approaches to behavioral detection are based on analysis of system call dependencies. The executed binary code is traced using strace or more precise taint analysis to compute data-flow dependencies among system calls. The result is a directed graph in which nodes are system calls and edges represent dependencies: an edge is drawn from one system call to another if a result returned by the first (either directly as a return value or indirectly through output parameters) is later used as a parameter of the second. The origins of the idea to use system calls to analyze software can be found in the work of Forrest et al. Christodorescu et al. point out that malware authors cannot easily reorder system calls without changing the semantics of the program, which makes system call dependency graphs suitable for malware detection. They compute a difference between malware and goodware system call dependency graphs and use the resulting graphs for detection, achieving high detection rates. Kolbitsch et al. pre-compute symbolic expressions and evaluate them on the syscall parameters observed at runtime.
They detect dependencies by observing whether the result obtained by evaluation matches the parameter values observed at runtime. Malware is detected by comparing the dependency graphs of the training and test sets. Fredrikson et al. describe an approach that uncovers distinguishing features in malware system call dependency graphs. They extract significant behaviors using concept analysis and leap mining. Babic et al. recently proposed a novel approach for both malware detection and classification based on grammar inference of tree automata. Their approach infers an automaton from dependency graphs, and they show how such an automaton could be used for detection and classification of malware.
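The dependency graph at the heart of these approaches is straightforward to construct once a trace is available. The following minimal Python sketch uses an invented, simplified trace format (real systems derive dependencies from strace output or taint analysis):

```python
# Hypothetical, simplified syscall trace: (call_id, name, inputs, outputs)
trace = [
    (1, "open",  [],              ["fd3"]),
    (2, "read",  ["fd3"],         ["buf1"]),
    (3, "write", ["fd3", "buf1"], []),
]

# Edge (i, j): an output of call i is later used as an input of call j
producers = {}  # value -> id of the call that produced it
edges = set()
for call_id, name, inputs, outputs in trace:
    for value in inputs:
        if value in producers:
            edges.add((producers[value], call_id))
    for value in outputs:
        producers[value] = call_id

print(sorted(edges))  # [(1, 2), (1, 3), (2, 3)]
```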
Research into combining static and dynamic malware analysis techniques is also currently being conducted in an effort to minimize the shortcomings of both. Researchers such as Islam et al. are working to integrate static and dynamic techniques in order to better analyze and classify malware and malware variants.
See also
History of computer viruses
References
Malware | Malware research | Technology | 923 |
1,820,047 | https://en.wikipedia.org/wiki/Dysgenics | Dysgenics refers to any decrease in the prevalence of traits deemed to be either socially desirable or generally adaptive to their environment due to selective pressure disfavouring their reproduction.
In 1915 the term was used by David Starr Jordan to describe the supposed deleterious effects of modern warfare on group-level genetic fitness because of its tendency to kill physically healthy men while preserving the disabled at home. Similar concerns had been raised by early eugenicists and social Darwinists during the 19th century, and continued to play a role in scientific and public policy debates throughout the 20th century.
More recent concerns about supposed dysgenic effects in human populations were advanced by the controversial psychologist and self-described "scientific racist" Richard Lynn, notably in his 1996 book Dysgenics: Genetic Deterioration in Modern Populations, which argued that changes in selection pressures and decreased infant mortality since the Industrial Revolution have resulted in an increased propagation of deleterious traits and genetic disorders.
Despite these concerns, genetic studies have shown no evidence for dysgenic effects in human populations. Reviewing Lynn's book, the scholar John R. Wilmoth notes: "Overall, the most puzzling aspect of Lynn's alarmist position is that the deterioration of average intelligence predicted by the eugenicists has not occurred."
See also
References
Eugenics
Evolutionary biology
Futures studies | Dysgenics | Biology | 269 |
39,052,461 | https://en.wikipedia.org/wiki/Virtual%20home%20design%20software | Virtual home design software is a type of computer-aided design software intended to help architects, designers, and homeowners preview their design implementations on-the-fly. These products differ from traditional homeowner design software and other online design tools in that they use HTML5 to ensure that changes to the design occur rapidly. This category of software as a service puts an emphasis on usability, speed, and customization.
Background
Homeowners, contractors, and architects use virtual home exterior design software to help visualize changes to designs. Since virtual home design suites that use HTML5 are able to rapidly propagate changes to the home design, users can A/B test designs much more efficiently than with previous iterations of online design software.
Virtual home design software has found widespread usage among homeowners who have suffered property damage, as server-side, HTML5-based design software is ideal for homeowners who wish to see what certain products will look like on damaged areas of their houses.
Examples
Several manufacturers use virtual home design software to display their products online; these include GAF Materials Corporation, James Hardie, Exterior Portfolio, and CertainTeed. Some companies, such as Design My Exterior, have built virtual home design software that is not limited to particular products or brands in order to allow for greater flexibility by the end-user. Design My Exterior also uses ImageMapster in order to generate a greater range of options with less processing time.
Live Home 3D is a virtual home design software for Microsoft Windows and macOS.
Future applications
Several companies are experimenting with virtual reality for architecture. They design virtual homes and allow customers to walk around in them with the help of a VR headset (such as the Oculus Rift). This way, customers get a realistic, true-to-scale idea of the result.
References
Computer-aided design software
Architectural design
Interior design | Virtual home design software | Engineering | 387 |
13,569,791 | https://en.wikipedia.org/wiki/Autonomous%20telepresence | Autonomous telepresence is a method of offering remote healthcare in a patient's home using robots and videoconferencing systems to provide a consumer-based mobile platform. At present the existing systems have little or no autonomy and rely on remote operators.
See also
Telepresence
Open-source robotics
References
Sparky Jr. Project
Hybridity
Telepresence
Computer-mediated communication | Autonomous telepresence | Technology | 81 |