| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
275,388 | https://en.wikipedia.org/wiki/Cynology | Cynology (rarely kynology) is the study of matters related to canines or domestic dogs.
In English, it is a term sometimes used to denote a serious zoological approach to the study of dogs as well as by writers on canine subjects, dog breeders, trainers and enthusiasts who study the dog informally.
Etymology
Cynology is a classical compound word (from Greek κύων, kýōn, genitive κυνός, kynós, 'dog'; and -λογία, -logia) referring to the study of dogs. The word is not found in major English dictionaries and it is not a recognized field of study in English-speaking countries.
Similar words exist in other languages, such as German Kynologie and Dutch kynologie. Kýōn is also the source of the English word cynic, and is directly related to canine and hound.
Usage in English
The suffix '-logy' in English words refers to a study, or an academic discipline, or field of scientific study. English classical compound words of this type may confer an impression of scientific rigor on a non-scientific occupation or profession.
Usage of the word cynology in English is rare; it is occasionally found in the names of dog training academies, and cynologist is sometimes used as a title by dog trainers or handlers. People who informally study the dog may refer to themselves as 'cynologists' to imply serious study or scientific work.
The very rare term cynologist in English generally refers to "canine specialists" such as certified care professionals, certified show judges, breeders, breed enthusiasts, certified dog trainers and professional dog handlers.
Usage in other languages
Cynology may have other connotations or uses in languages other than English; see German Kynologie, Dutch kynologie and Czech kynologie.
A similar word is used to refer to dog handlers and dog trainers in Russia.
A veterinary clinic in Armenia offers a 'cynologist' to assist with dog training.
A magazine in the Baltic states, described as 'dedicated to the development of cynology in the Baltic countries', covers dog training, dog shows, and veterinary advice (it is a hobbyist magazine, not a scientific journal).
References
External links
Further reading
Suchanova, J. & Tovstucha, R. E. (2016). Problems in translating the names of dog breeds from the perspective of different nomination principles and linguistic relativity. Coactivity: Philology, Educology, 24(2), 113–121.
Mammalogy
Dogs
Subfields of zoology | Cynology | Biology | 488 |
77,902,247 | https://en.wikipedia.org/wiki/SDSS%20J135646.10%2B102609.0 | SDSS J135646.10+102609.0, also known as SDSS J1356+1026 or J1356+1026, is a low-redshift quasar and galaxy merger located in the constellation of Boötes, 1.85 billion light-years from Earth. It is an ultraluminous infrared galaxy, and it is considered radio-quiet with an unresolved radio source.
Characteristics
SDSS J135646.10+102609.0 is a merger product of two colliding galaxies, namely a disk galaxy and an elliptical galaxy. The galaxy has a luminosity of L_bol ≈ 10⁴⁶ erg s⁻¹ with an estimated black hole mass of M ~ 10⁸ M☉.
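As a rough consistency check (an illustrative calculation, not a figure from the source literature), these values can be compared with the Eddington luminosity implied by the black hole mass:
$$L_{\mathrm{Edd}} \approx 1.26 \times 10^{38} \left(\frac{M}{M_\odot}\right) \mathrm{erg\,s^{-1}} \approx 1.3 \times 10^{46}\ \mathrm{erg\,s^{-1}} \quad \text{for } M \sim 10^{8}\,M_\odot,$$
so a bolometric luminosity of ≈ 10⁴⁶ erg s⁻¹ implies the quasar is radiating near its Eddington limit.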
SDSS J135646.10+102609.0 has two merging active nuclei, with a projected separation of only about 2.5 kiloparsecs. While little is known about the southern nucleus, the northern nucleus is classified as a type 2 quasar and is the main member of the merger. Its host is a massive early-type galaxy, or ETG for short, with a position angle of 156 degrees. Although its stellar population is mainly made up of old stars, the star formation rate of the galaxy derived from its infrared luminosity is 69 M☉ yr⁻¹ according to an Atacama Large Millimeter Array (ALMA) sample; this high star formation rate is a consequence of the ongoing merger.
The northern nucleus is heavily obscured. It was originally identified in the Sloan Digital Sky Survey by its [O III] λ5007 emission, and it has an average velocity dispersion of 160 km s⁻¹. In a B−I color map of the galaxy made from HST/WFC3 images, astronomers found a dust lane crossing the nucleus, with a position angle matching that of the CO major axis; on these spatial scales the northern nucleus shows redder optical colors.
In addition, the northern nucleus contains a compact rotating disk and an extended tidal arm, with molecular gas masses of M_mol ≈ 3 × 10⁸ M☉ and M_mol ≈ 5 × 10⁸ M☉ respectively. Further investigation with ALMA also showed the tidal arm to be the largest known molecular tidal feature, implying a small chance of shock dissociation.
Further investigation also shows the presence of soft X-ray emission around the quasar nucleus of SDSS J135646.10+102609.0, extending out to 20 kiloparsecs (kpc). This is interpreted as thermal gas with a luminosity of L_X ≈ 10⁴² erg s⁻¹ and a temperature of kT ≈ 280 eV. The faintness of the X-ray emission suggests it is controlled by either photoionization or shocked emission from a quasar-driven superwind; one study describes the quasar-driven superwind as prototypical.
Galactic outflow
SDSS J135646.10+102609.0 has two symmetric outflows originating from its nucleus. The outflows measure 10 kpc and have observed projected expansion velocities of 250 km s⁻¹. A kinematic model gives deprojected expansion velocities of ~1,000 km s⁻¹ for these outflows, with an expanding-shell kinetic energy output of 10⁴⁴–10⁴⁵ erg s⁻¹.
Star formation
Based on ALMA CO observations, a low star formation rate is found in SDSS J135646.10+102609.0: 16 M☉ yr⁻¹ from the far-infrared spectral energy distribution and <16 M☉ yr⁻¹ from the molecular content. This suggests the active galactic nucleus of the galaxy is likely responsible for the high outflow rate. With an outflowing mass of M_mol ≈ 7 × 10⁷ M☉ and a short dynamical time, the outflow could potentially deplete the gas content of SDSS J135646.10+102609.0 within a few million years.
Double-peaked emission lines
SDSS J135646.10+102609.0 is known to be an interesting system. Long-slit observations show two [O III] λ5007 emission knots corresponding to the two nuclei seen in near-infrared imaging, suggesting the double peaks are produced by a dual AGN. However, subsequent observations of the extended [O III] emission and the nuclei raised the possibility that the double peaks are powered by a single AGN.
References
SDSS objects
Luminous infrared galaxies
1379498
Quasars
F13543+1040
Boötes
Galaxy mergers | SDSS J135646.10+102609.0 | Astronomy | 970 |
56,533,153 | https://en.wikipedia.org/wiki/TCMTB | (Benzothiazol-2-ylthio)methyl thiocyanate (TCMTB) is a chemical compound classified as a benzothiazole.
Properties
TCMTB is an oily, flammable, red to brown liquid with a pungent odor that is very slightly soluble in water. It decomposes on heating producing hydrogen cyanide, sulfur oxides, and nitrogen oxides. The degradation products are 2-mercaptobenzothiazole (2-MBT) and 2-benzothiazolesulfonic acid.
Uses
TCMTB is used as a broad-spectrum microbicide, including as a fungicide and algicide in paints. The active substance was approved in 1980 in the United States. It is used, for example, in leather preservation, for the protection of paper products, in wood preservatives, and against germs in industrial water.
In the US, TCMTB is used as a fungicide for seed dressing in cereals, safflower, cotton and sugar beet.
It is also used when dealing with fungal problems when extracting hydrocarbons via fracking.
Approval
TCMTB is not an authorized plant protection product in the European Union.
In Germany, Austria and Switzerland, no plant protection products containing this active substance are authorized.
TCMTB contributes to health problems in tannery workers, as it is a potential carcinogen and a hepatotoxin. It is also a skin sensitizer and may cause contact dermatitis in those exposed to the compound. Hence, it is mainly used in developing countries.
References
Benzothiazoles
Thioethers
Thiocyanates
Biocides
Fungicides | TCMTB | Chemistry,Biology,Environmental_science | 357 |
49,903,401 | https://en.wikipedia.org/wiki/AquaFed | AquaFed is the International Federation of Private Water Operators. It represents more than 400 private operators and partners providing water and sanitation services in more than 40 countries worldwide.
AquaFed advocates for its members, provides them with opportunities to network and build partnerships, and helps them develop their business. Its mission is to help governments and donors understand the expertise of its members and their innovative technology solutions, and to promote public-private partnerships.
AquaFed has four principal missions:
Promote public private partnerships (PPP)
For private water operators, a public-private partnership (PPP) is a contractual arrangement between a public agency (federal, state or local) and a private-sector entity for a water- or wastewater-related project or service. AquaFed aims to promote and encourage these partnerships to improve water and wastewater services.
Create a collaborative water community
AquaFed is a platform for its members and partners to exchange, share, learn and collaborate to improve the water sector together.
Contribute to the United Nations' 2030 Agenda for Sustainable Development
The 2030 Agenda for Sustainable Development seeks to end poverty and hunger, guarantee the human rights of all, achieve gender equality and ensure the lasting protection of the planet and its natural resources. AquaFed and its members help to contribute to these goals by working on assuring the access to safe water and sanitation in a sustainable way.
Guarantee the human right to safe water and sanitation
Drinking safe and clean water and having access to sanitation is a human right. AquaFed and its members' main mission is ensuring the implementation of systems allowing anyone to have access to this right.
Partners
AquaFed is partnered with UN-Water, the World Water Council, Sanitation and Water for All, and COP29.
Members
ABCONSINDCON (Brazilian National Association of Water and Wastewater Providers)
ACQUE
Agbar
ANDESS Chile (Chilean National Association of Health Services Companies)
Aqualia
Balibago Waterworks
Bosaq
Cambodian Water Supply Association
Eranove
Eurawasser
FP2E (Professional Federation of Water Companies)
IME (Mediterranean Water Institute)
Lydec
LYSA
Macao Water
Metro Pacific Water
NAWC (American National Association of Water Companies)
Palyja
REMONDIS
Sénégalaise des Eaux
SEEG
Sodeci
SUEZ
Veolia
Vergnet Hydro (water and energy supplier for Africa)
References
Sanitation
SAUR Sevan
SODECI
Water
Sustainability
Environment and health
| AquaFed | Environmental_science | 470 |
28,096,212 | https://en.wikipedia.org/wiki/PASTA%20domain | The PASTA domain is a small protein domain that can bind to the beta-lactam ring portion of various β-lactam antibiotics. The domain was initially discovered in 2002 by Yeats and colleagues as a region of sequence similarity found in penicillin binding proteins and PknB-like kinases found in some bacteria. The name is an acronym derived from PBP and Serine/Threonine kinase Associated domain.
Structure
The PASTA domain adopts a structure composed of an alpha-helix followed by three beta strands. Recent structural studies show that the extracellular region of PknB (protein kinase B) that is composed of four PASTA domains shows a linear arrangement of the domains.
Species distribution
PASTA domains are found in a variety of bacterial species including gram-positive Bacillota and Actinomycetota.
References
Protein domains | PASTA domain | Biology | 173 |
69,144,833 | https://en.wikipedia.org/wiki/Westerhout%2049-2 | Westerhout 49-2 (W49-2) is a very massive and luminous star in the H II region Westerhout 49. With a mass of around 250 solar masses (although with significant uncertainty) and a luminosity of more than a million times that of the Sun, it is one of the most massive and most luminous known stars.
Properties
Westerhout 49-2 is located within the H II region Westerhout 49, about 11.1 kiloparsecs from the Sun. The star is heavily reddened, by nearly 5 magnitudes in the K band, the most of any star in the region. Westerhout 49-2 is classified as an evolved slash star, with a spectral type of O2-3.5If*. It is one of the most luminous stars known; its temperature of about 35,500 K corresponds to a radius of over 55 times that of the Sun.
Uncertainties
There is significant uncertainty about Westerhout 49-2's properties. One estimate using mass-luminosity relations finds a mass of 90 solar masses or more. If its mass is higher than the theoretical upper limit of 150 M☉ for a single star, the object could instead be a binary, and an X-ray detection would support that interpretation. Westerhout 49-1, 49-2 and 49-12 are all bright X-ray sources, which means they could all be binary stars, in which case their masses would be lower than predicted for single stars.
Notes
References
O-type supergiants
Aquila (constellation)
Emission-line stars | Westerhout 49-2 | Astronomy | 327 |
23,455,097 | https://en.wikipedia.org/wiki/C2H3ClO2 | {{DISPLAYTITLE:C2H3ClO2}}
The molecular formula C2H3ClO2 may refer to:
Chloroacetic acid, an organochlorine carboxylic acid and building block in organic synthesis
Methyl chloroformate, the methyl ester of chloroformic acid | C2H3ClO2 | Chemistry | 70 |
34,957,675 | https://en.wikipedia.org/wiki/List%20of%20environmental%20social%20science%20journals | This is a list of articles about academic journals in environmental social science.
A
Antipode
Area
C
Case Studies in the Environment
Children, Youth and Environments
Conservation and Society
Cultural Geographies
D
Disasters
E
Ecological Economics
Ecology and Society
Energy & Environment
Energy Policy
Energy Research & Social Science
Environment and Behavior
Environment and Planning
Environment and Urbanization
Environmental and Resource Economics
Environmental Health Perspectives
Environmental Research Letters
Environmental Science & Technology
Environmental Sociology
Environmental Values
G
Geoforum
Global Environmental Change
Global Environmental Politics
H
Hastings West-Northwest Journal of Environmental Law and Policy
Human Ecology
I
Indoor and Built Environment
International Journal of Ecology & Development
International Regional Science Review
J
The Journal of Environment & Development
Journal of Environmental Assessment Policy and Management
Journal of Environmental Economics and Management
Journal of Environmental Studies and Sciences
Journal of Environmental Psychology
Journal of Political Ecology
L
Land Economics
N
Natural Resources Forum
Nature and Culture
O
Organization & Environment
P
Papers in Regional Science
Population and Environment
Progress in Human Geography
R
Review of Environmental Economics and Policy
S
Society & Natural Resources
W
Water Resources Research
See also
List of environmental economics journals
List of environmental journals
List of environmental periodicals
List of forestry journals
List of planning journals
Lists of academic journals
External links
Environment and Society: Scholarly Journals
Environmental social science
Journals
Environmental social science journals
Social science journals | List of environmental social science journals | Environmental_science | 245 |
41,585,002 | https://en.wikipedia.org/wiki/Mlpack | mlpack is a free, open-source, header-only software library for machine learning and artificial intelligence, written in C++ and built on top of the Armadillo library and the ensmallen numerical optimization library. mlpack emphasizes scalability, speed, and ease of use. Its aim is to make machine learning accessible to novice users by means of a simple, consistent API, while simultaneously exploiting C++ language features to provide maximum performance and maximum flexibility for expert users. mlpack also has a lightweight deployment footprint with minimal dependencies, making it well-suited to embedded systems and low-resource devices. Its intended users are scientists and engineers.
It is open-source software distributed under the BSD license, making it useful for developing both open source and proprietary software. Releases 1.0.11 and before were released under the LGPL license. The project is supported by the Georgia Institute of Technology and contributions from around the world.
Features
Classical machine learning algorithms
mlpack contains a wide range of algorithms used to solve real-world problems, from classification and regression in the supervised learning paradigm to clustering and dimensionality reduction. The following is a non-exhaustive list of algorithms and models that mlpack supports:
Collaborative Filtering
Decision stumps (one-level decision trees)
Density Estimation Trees
Euclidean minimum spanning trees
Gaussian Mixture Models (GMMs)
Hidden Markov Models (HMMs)
Kernel density estimation (KDE)
Kernel Principal Component Analysis (KPCA)
K-Means Clustering
Least-Angle Regression (LARS/LASSO)
Linear Regression
Bayesian Linear Regression
Local Coordinate Coding
Locality-Sensitive Hashing (LSH)
Logistic regression
Max-Kernel Search
Naive Bayes Classifier
Nearest neighbor search with dual-tree algorithms
Neighbourhood Components Analysis (NCA)
Non-negative Matrix Factorization (NMF)
Principal Components Analysis (PCA)
Independent component analysis (ICA)
Rank-Approximate Nearest Neighbor (RANN)
Simple Least-Squares Linear Regression (and Ridge Regression)
Sparse Coding, Sparse dictionary learning
Tree-based Neighbor Search (all-k-nearest-neighbors, all-k-furthest-neighbors), using either kd-trees or cover trees
Tree-based Range Search
Class templates for GRU and LSTM structures are available, so the library also supports recurrent neural networks.
Bindings
There are bindings to R, Go, Julia and Python, as well as a command-line interface (CLI) for use in a terminal. The binding system is extensible to other languages.
Reinforcement learning
mlpack contains several reinforcement learning (RL) algorithms implemented in C++, along with a set of examples; these algorithms can be tuned following the examples and combined with external simulators. Currently mlpack supports the following:
Q-learning
Deep Deterministic Policy Gradient
Soft Actor-Critic
Twin Delayed DDPG (TD3)
Design features
mlpack includes a range of design features that make it particularly well-suited for specialized applications, especially in the Edge AI and IoT domains. Its C++ codebase allows for seamless integration with sensors, facilitating direct data extraction and on-device preprocessing at the Edge. Below, we outline a specific set of design features that highlight mlpack's capabilities in these environments:
Low number of dependencies
mlpack is a low-dependency library, which makes software deployment easy. mlpack binaries can be linked statically and deployed to any system with minimal effort. The use of a Docker container is not necessary and is even discouraged. This makes the library suitable for low-resource devices, as it requires only ensmallen and either Armadillo or Bandicoot, depending on the target hardware. mlpack uses the Cereal library for serialization of models. Other dependencies are also header-only and part of the library itself.
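As a minimal sketch of the Cereal-backed serialization workflow (the file name and object names here are illustrative), a trained model can be written to disk and reloaded with mlpack's data::Save and data::Load helpers:
// Assume 'tree' is a trained mlpack::DecisionTree (see the example below).
mlpack::data::Save("tree.bin", "tree", tree); // serialize the model via Cereal
mlpack::DecisionTree tree2;
mlpack::data::Load("tree.bin", "tree", tree2); // restore it into a new object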
Low binary footprint
In terms of binary size, mlpack methods have a significantly smaller footprint than other popular libraries. A comparison of deployable binary sizes between mlpack, PyTorch, and scikit-learn demonstrates this; to ensure consistency, the same application, along with all its dependencies, was packaged within a single Docker container for the comparison.
Other libraries exist, such as TensorFlow Lite; however, such libraries are usually specific to one task, such as neural network inference or training.
Example
The following shows a simple example of how to train a decision tree model using mlpack and then use it for classification. You can ingest your own dataset using the Load function, but for now we show the API:
// Train a decision tree on random numeric data and predict labels on test data:
// All data and labels are uniform random; 10 dimensional data, 5 classes.
// Replace with a data::Load() call or similar for a real application.
arma::mat dataset(10, 1000, arma::fill::randu); // 1000 points.
arma::Row<size_t> labels =
arma::randi<arma::Row<size_t>>(1000, arma::distr_param(0, 4));
arma::mat testDataset(10, 500, arma::fill::randu); // 500 test points.
mlpack::DecisionTree tree; // Step 1: create model.
tree.Train(dataset, labels, 5); // Step 2: train model.
arma::Row<size_t> predictions;
tree.Classify(testDataset, predictions); // Step 3: classify points.
// Print some information about the test predictions.
std::cout << arma::accu(predictions == 2) << " test points classified as class "
<< "2." << std::endl;The above example demonstrate the simplicity behind the API design, which makes it similar to popular Python based machine learning kit (scikit-learn). Our objective is to simplify for the user the API and the main machine learning functions such as Classify and Predict. More complex examples are located in the examples repository, including documentations for the methods
Backend
Armadillo
Armadillo is the default linear algebra library used by mlpack; it provides the matrix manipulation and operations necessary for machine learning algorithms. Armadillo is known for its efficiency and simplicity. It can also be used in header-only mode, and the only libraries that need to be linked against are OpenBLAS, Intel MKL, or LAPACK.
Bandicoot
Bandicoot is a C++ linear algebra library designed for scientific computing. It has an API identical to Armadillo's, with the objective of executing the computation on a graphics processing unit (GPU); the purpose of the library is to ease the transition between CPU and GPU by requiring only minor changes to the source code (e.g. changing the namespace and the linking library). mlpack currently has partial Bandicoot support, with the objective of providing neural network training on the GPU. The following examples show two code blocks executing an identical operation. The first is Armadillo code running on the CPU, while the second can run on an OpenCL-supported GPU or an NVIDIA GPU (with the CUDA backend):
using namespace arma;
mat X, Y;
X.randu(10, 15);
Y.randu(10, 10);
mat Z = 2 * norm(Y) * (X * X.t() - Y);

using namespace coot;
mat X, Y;
X.randu(10, 15);
Y.randu(10, 10);
mat Z = 2 * norm(Y) * (X * X.t() - Y);
ensmallen
ensmallen is a high-quality C++ library for nonlinear numerical optimization. It uses Armadillo or Bandicoot for linear algebra and is used by mlpack to provide optimizers for training machine learning algorithms. Like mlpack, ensmallen is a header-only library; it supports custom behavior through callback functions, allowing users to extend the functionality of any optimizer. ensmallen is also published under the BSD license.
ensmallen contains a diverse range of optimizers, classified by the type of objective function (differentiable, partially differentiable, categorical, constrained, etc.). The following is a small selection of the optimizers available in ensmallen; for the full list, see the documentation website. A usage sketch follows the list.
Limited memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS)
GradientDescent
FrankWolfe
Covariance matrix adaptation evolution strategy (CMA-ES)
AdaBelief
AdaBound
AdaDelta
AdaGrad
AdaSqrt
Adam
AdaMax
AMSBound
AMSGrad
Big Batch SGD
Eve
FTML
IQN
Katyusha
Lookahead
Momentum SGD
Nadam
NadaMax
NesterovMomentumSGD
OptimisticAdam
QHAdam
QHSGD
RMSProp
SARAH/SARAH+
Stochastic Gradient Descent SGD
Stochastic Gradient Descent with Restarts (SGDR)
Snapshot SGDR
SMORMS3
SPALeRA
SWATS
SVRG
WNGrad
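As a minimal usage sketch (illustrative only; the objective function and variable names here are not taken from the ensmallen documentation), a differentiable objective is written as a class exposing Evaluate and Gradient methods and then handed to an optimizer:
#include <ensmallen.hpp>
// Objective f(x) = sum_i x_i^2, whose gradient is 2x.
class SquaredFunction
{
 public:
  double Evaluate(const arma::mat& x) { return arma::accu(x % x); }
  void Gradient(const arma::mat& x, arma::mat& g) { g = 2 * x; }
};
int main()
{
  SquaredFunction f;
  arma::mat coordinates(10, 1, arma::fill::randu); // random starting point
  ens::L_BFGS lbfgs;              // any optimizer for differentiable functions
  lbfgs.Optimize(f, coordinates); // coordinates now hold the minimizer (≈ 0)
}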
Support
mlpack is fiscally sponsored and supported by NumFOCUS, which accepts tax-deductible donations on behalf of the project's developers. In addition, the mlpack team participates in the Google Summer of Code program each year and mentors several students.
See also
Armadillo (C++ library)
List of numerical analysis software
List of numerical libraries
Numerical linear algebra
Scientific computing
References
External links
C++ libraries
Data mining and machine learning software
Free computer libraries
Free mathematics software
Free science software
Free software programmed in C++
Free statistical software | Mlpack | Mathematics | 2,052 |
1,176,641 | https://en.wikipedia.org/wiki/Expo.02 | Expo.02 was the 6th Swiss national exposition, which was held from 15 May to 20 October 2002. The exposition took place around the lakes of Neuchâtel, Bienne/Biel and Morat/Murten. It was divided into five sites, which were called Arteplages, due to the proximity of the water (some sites were actually partially or totally built on the water). The five arteplages were located in Neuchâtel, Yverdon-les-Bains, Morat/Murten, Biel/Bienne and on a mobile barge traveling from one site to another. The barge represented the canton of Jura, which does not have access to any one of the three lakes.
Expo.02 was the subject of controversy in Switzerland due to the many financial problems it encountered. It was first scheduled to take place in 2001 (under the name of Expo.01), but the catastrophic organization and lack of funding threatened to put an end to the project, which was saved at the last minute by the Swiss Federal Government, which put in a large amount of public money to save the exhibition. Expo.02 cost 1.6 billion Swiss francs. Most of it came from major Swiss companies which sponsored the different exhibitions on the Arteplages.
According to the organisers, more than 10 million admissions were counted, and the exhibition succeeded in achieving its sole goal: the public's enjoyment.
Each arteplage was dedicated to a different theme. There was "Nature and Artifice" in Neuchâtel, "I and the Universe" in Yverdon-les-Bains, "Instants and Eternity" in Murten/Morat, "Power and Freedom" in Biel/Bienne and "Sense and Movement" on the mobile platform.
Images
References
External links
Expo-archive.ch (via archive.org)
Official site (via archive.org)
Culture of Switzerland
2002 in Switzerland
Switzerland
Floating architecture
National exhibitions | Expo.02 | Technology,Engineering | 407 |
24,373,897 | https://en.wikipedia.org/wiki/C12H11N3 | {{DISPLAYTITLE:C12H11N3}}
The molecular formula C12H11N3 (molar mass: 197.24 g/mol, exact mass: 197.0953 u) may refer to:
Aniline Yellow, a yellow azo dye and an aromatic amine
1,3-Diphenyltriazene, organic compound | C12H11N3 | Chemistry | 79 |
1,936,530 | https://en.wikipedia.org/wiki/Adirondack%20Bank%20Center | The Adirondack Bank Center at the Utica Memorial Auditorium is a 3,860-seat multi-purpose arena in Utica, New York, with a capacity of 5,700 for concerts. Nicknamed the Aud, it is the home arena of the Utica Comets, the AHL affiliate of the NHL's New Jersey Devils, and Utica City FC of the Major Arena Soccer League.
In 2011, the Utica Memorial Auditorium was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in recognition of its innovative cable suspended roof.
History
The Utica Memorial Auditorium was conceived by then-Utica mayor John T. McKennan, who believed that the city needed a place for entertainment and sporting events. McKennan and the administration that he hired to plan out the process, led by Frank M. Romano, then hired Gilbert Seltzer, a well-known architect, to draw up plans for the building. A site was found along the old Erie Canal, and groundbreaking took place April 15, 1957. The arena was constructed using the world's first pre-stressed dual cable roof system, designed by Lev Zetlin (who would later partner with architect Philip Johnson to construct both the New York State Pavilion "Tent of Tomorrow" seen at the 1964 World's Fair and the Munson-Williams-Proctor Arts Institute, also located in Utica, NY) with "struts" between the cables. John A. Roebling's Sons Company developed the tensioning method for the project. Zetlin's design became the predecessor to the many modern dome designs seen today, and has since influenced many other tensile structures including Madison Square Garden. Seltzer would take the most pride in constructing "The Aud", saying, "This was the first successful use of cables for a roof structure."
"The Aud" was also one of the first stadiums to have telescopic seats. Telescopic bleachers (the bleachers pulled out from below higher levels) were common in stadiums, but Zetlin requested more comfortable seating for the arena.
Work continued through 1958 and into 1959. When the auditorium was finally completed, it became one of just three arenas built without obstructed views. The arena opened on March 13, 1960, with the Greater Utica Industrial Exposition as its first event, running three evenings from March 16 to 19; 96 exhibitors took part in the exposition, which drew an attendance of some 45,000. In 1962, it hosted the NCAA Division I Men's Hockey Championship (AKA the "Frozen Four"). In 2017, the arena hosted the Division III "Frozen Four".
Scenes from the 1977 film Slap Shot starring Paul Newman were shot at the auditorium. The original center-hung scoreboard, as seen in the movie, was unusual in that the game time was kept by a digital clock, while the penalty time was kept by analog clocks. This was eventually replaced by a center-hung scoreboard designed by Eversan, which includes a one-line messageboard. "The Aud" also held a Santana concert on February 22, 1973 during their Caravanserai Tour, their only concert in Utica, and the arena has the distinction of being the location of one of the last scheduled Elvis Presley concerts. The concert was scheduled to be on Friday, August 19, 1977, three days after Presley's death on August 16.
In 2011, the Utica Memorial Auditorium was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in recognition of its innovative cable suspended roof.
On June 14, 2013, it was announced that the Peoria Rivermen, the AHL farm team of the National Hockey League's Vancouver Canucks would be relocating to the Utica Memorial Auditorium for the 2013–14 season as the Utica Comets. As the AHL has a strong presence in Western and Central New York State, the league agreed to the move, citing the move would further boost the league's strength in the Northeast while further cutting down on travel expenses. On October 23, 2013, the Comets played at "The Aud", losing 4–1 to the Albany Devils in front of a sold out crowd. Frank Corrado scored the first Comet goal on home ice.
In addition to the Comets, the auditorium plays host to the Utica University Pioneers men's and women's ice hockey teams that play in the United Collegiate Hockey Conference of the NCAA Division III, the Skating Club of Utica, the Jr. Comets youth hockey program and several high school varsity ice hockey teams. It was the former home for the Mohawk Valley Comets of the North American Hockey League, the Mohawk Valley Stars/Comets of the Atlantic Coast Hockey League, the Utica Devils of the American Hockey League, the Utica Blizzard, Utica Bulldogs, and Mohawk Valley Prowlers of the United Hockey League, and the Mohawk Valley IceCats of the North Eastern Hockey League. Both Pioneer hockey teams boast the highest average attendance for a Division III hockey team in the United States, with regular season games frequently selling out.
In recent years, "The Aud" has earned high rankings from hockey circles, earning the #8 spot in "The 10 Coolest Hockey Rinks in the World" list by Complex Magazine, the #8 rank for best AHL arena by Stadium Journey, and #4 in the Pure Hockey Blog's list of the top 6 places to skate for hockey.
Photos and renderings of the Utica Memorial Auditorium are on permanent display at New York's Museum of Modern Art. The museum's collection honors the auditorium as an architectural landmark.
On September 27, 2017, the Upper Mohawk Valley Memorial Auditorium Authority announced a 10-year naming rights deal with locally based Adirondack Bank, amending the official name of "The Aud" to Adirondack Bank Center at the Utica Memorial Auditorium.
In November 2017, work was completed on the 26,000-square-foot expansion that added a new entrance, a half-dozen executive suites, a new women's bathroom, a building-wide sprinkler system and other amenities to the facility. The $10.55 million project was fully funded by the state. A restaurant named "72 Tavern & Grill" was constructed on existing foundation on the West side of the facility that supported underground areas of the Aud. The "72" is in honor of the 72 cables that have held up the roof of the Adirondack Bank Center for more than 50 years.
On June 1, 2018, the Adirondack Bank Center hosted UFC Fight Night: Rivera vs. Moraes.
On June 13, 2018, Mohawk Valley Garden CEO Rob Esche and Major Arena Soccer League (MASL) commissioner Joshua Schaub, along with other officials, announced that Utica will field a professional indoor soccer team — called Utica City Football Club, or UCFC for short — that will play home games at the Adirondack Bank Center at the Utica Memorial Auditorium beginning with the 2018–19 season. The team had previously been known as the Syracuse Silver Knights.
The Nexus Center, a new sports facility adjacent to the Aud and connected to it via an indoor walkway, was completed in 2022. It includes three playing spaces that can be used for ice hockey or turf sports such as indoor soccer. The largest of these, with 1,200 seats, a Jumbotron screen, and multiple luxury boxes, is the home venue for the Utica University women's ice hockey team, the Utica Junior Comets, and the Utica Yeti lacrosse team. In 2023, Utica University purchased its naming rights, rebranding it the Utica University Nexus Center.
The Professional Women's Hockey League (PWHL) held a five-day evaluation camp at the Nexus Center in December 2023 ahead of its inaugural season; all six PWHL teams participated. The 2024 IIHF Women's World Championship was held at the Adirondack Bank Center and Nexus Center from April 4 to 14, with Canada winning the gold. The 2024 World Lacrosse Box Championships were held at the Adirondack Bank Center and Nexus Center between September 20 and 29, 2024; Canada won the men's championship, while the United States won the women's championship.
High school sports
In addition to its regular season high school hockey games, the Utica Memorial Auditorium hosted the New York State Ice Hockey semi-finals and finals every year from its inception to the 2015 Championships. In August 2015, NYSPHSAA announced it would be moving the state tournament to Buffalo's HarborCenter. On March 9–10, 1973, Utica Memorial Auditorium hosted the 11th NYSPHSAA state wrestling tournament. The annual tournament has not returned to Utica since.
References
Further reading
External links
Basketball venues in New York (state)
College ice hockey venues in New York (state)
Historic Civil Engineering Landmarks
Indoor soccer venues in New York (state)
Mixed martial arts venues in New York (state)
Sports venues in Oneida County, New York
Sports venues completed in 1960
Utica Comets
Utica Devils
Wrestling venues in New York (state)
1960 establishments in the United States
Sports in Utica, New York
Buildings and structures in Utica, New York | Adirondack Bank Center | Engineering | 1,892 |
19,725,960 | https://en.wikipedia.org/wiki/Fractal-generating%20software | Fractal-generating software is any type of graphics software that generates images of fractals. There are many fractal generating programs available, both free and commercial. Mobile apps are available to play or tinker with fractals. Some programmers create fractal software for themselves because of the novelty and because of the challenge in understanding the related mathematics. The generation of fractals has led to some very large problems for pure mathematics.
Fractal generating software creates mathematical beauty through visualization. Modern computers may take seconds or minutes to complete a single high resolution fractal image. Images are generated for both simulation (modeling) and random fractals for art. Fractal generation used for modeling is part of realism in computer graphics. Fractal generation software can be used to mimic natural landscapes with fractal landscapes and scenery generation programs. Fractal imagery can be used to introduce irregularity to an otherwise sterile computer generated environment.
Fractals are generated in music visualization software, screensavers and wallpaper generators. Such software presents the user with a more limited range of settings and features, sometimes relying on a series of pre-programmed variables. Because complex images can be generated from simple formulas, fractals are often used in the demoscene. The generation of fractals such as the Mandelbrot set is time-consuming and requires many computations, so it is often used in benchmarking devices.
History
The generation of fractals by calculation without computer assistance was undertaken by German mathematician Georg Cantor in 1883 to create the Cantor set. Throughout the following years, mathematicians have postulated the existence of numerous fractals. Some were conceived before the naming of fractals in 1975, for example, the Pythagoras tree by Dutch mathematics teacher Albert E. Bosman in 1942.
The development of the first fractal-generating software originated in Benoit Mandelbrot's pursuit of a generalized function for a class of shapes known as Julia sets. In 1979, Mandelbrot discovered that one image of the complex plane could be created by iteration. He and programmers working at IBM generated the first rudimentary fractal printouts. This marked the first instance of the generation of fractals by non-linear creation laws, or 'escape time' fractals. Loren Carpenter created a two-minute color film called Vol Libre for presentation at SIGGRAPH in 1980. The October 1983 issue of Acorn User magazine carried a BBC BASIC listing for generating fractal shapes by Susan Stepney, now Professor of Computer Science at the University of York. She followed this up in the March 1984 Acorn User with "Snowflakes and other fractal monsters". Fractals were rendered in computer games as early as 1984 with the creation of Rescue on Fractalus!. From the early 1980s to about 1995, hundreds of different fractal types were formulated.
The generation of fractal images grew in popularity as the distribution of computers with a maths co-processor or floating-point unit in the central processing unit were adopted throughout the 1990s. At this time the rendering of high resolution VGA standard images could take many hours. Fractal generation algorithms display extreme parallelizability. Fractal-generating software was rewritten to make use of multi-threaded processing. Subsequently, the adoption of graphics processing units in computers has greatly increased the speed of rendering and allowed for real-time changes to parameters that were previously impossible due to render delay. 3D fractal generation emerged around 2009. An early list of fractal-generating software was compiled for the book titled Fractals: The Patterns of Chaos by John Briggs published in 1992. Leading writers in the field include Dietmar Saupe, Heinz-Otto Peitgen and Clifford A. Pickover.
Methods
There are two major methods of two-dimensional fractal generation. One is to apply an iterative process to simple equations by generative recursion. Dynamical systems produce a series of values; in fractal software, values for a set of points on the complex plane are calculated and then rendered as pixels. This computer-based generation of fractal objects is an endless process: in theory, images can be calculated infinitely, but in practice they are approximated to a certain level of detail. Mandelbrot used quadratic formulas described by the French mathematician Gaston Julia. The maximum fractal dimension that can be produced varies according to type and is sometimes limited by the method implemented. There are numerous coloring methods that can be applied; one of the earliest was the escape time algorithm. Color banding may appear in images depending on the coloring method used as well as the gradient color density.
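As an illustrative sketch (not drawn from any particular package), an escape-time renderer in C++ maps each pixel to a point c of the complex plane and iterates z → z² + c until the orbit escapes or an iteration cap is reached:
#include <complex>
#include <iostream>

int main() {
    const int width = 80, height = 40, maxIter = 100;
    for (int py = 0; py < height; ++py) {
        for (int px = 0; px < width; ++px) {
            // Map the pixel to a point c in the region [-2.5, 1] x [-1.25, 1.25].
            std::complex<double> c(-2.5 + 3.5 * px / width,
                                   -1.25 + 2.5 * py / height);
            std::complex<double> z = 0;
            int n = 0;
            // Iterate z -> z^2 + c until escape (|z| > 2) or the cap is hit.
            while (std::abs(z) <= 2.0 && n < maxIter) {
                z = z * z + c;
                ++n;
            }
            // A real renderer would map the escape count n to a color palette;
            // here, points that never escape are printed as '#'.
            std::cout << (n == maxIter ? '#' : ' ');
        }
        std::cout << '\n';
    }
}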
Some programs generate geometric self-similar or deterministic fractals such as the Koch curve. These programs use an initiator followed by a generator that is repeated in a pattern. These simple fractals originate from a technique first proposed in 1904 by Koch.
The other main method uses iterated function systems, consisting of a number of affine transformations. In the first method, each pixel in a fractal image is evaluated according to a function and then colored before the same process is applied to the next pixel. The former method represents the classical stochastic approach, while the latter implements a linear fractal model. Using recursion allowed programmers to create complex images through simple directions.
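A minimal iterated-function-system sketch (the "chaos game"; the vertex coordinates and iteration count are illustrative) that draws the Sierpinski triangle from three affine contractions chosen at random:
#include <cstdio>
#include <random>

int main() {
    const double vx[3] = {0.0, 1.0, 0.5};  // triangle vertices
    const double vy[3] = {0.0, 0.0, 0.87};
    double x = 0.3, y = 0.3;               // arbitrary starting point
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, 2);
    for (int i = 0; i < 100000; ++i) {
        int k = pick(rng);
        // Each affine map contracts halfway toward the chosen vertex.
        x = 0.5 * (x + vx[k]);
        y = 0.5 * (y + vy[k]);
        if (i > 20)                         // discard the initial transient
            std::printf("%f %f\n", x, y);   // plot these (x, y) points externally
    }
}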
Three-dimensional fractals are generated in a variety of ways, including by using quaternion algebra. Fractals also emerge from fluid-dynamics modelling simulations as turbulence when contour advection is used to study chaotic mixing. The Buddhabrot method was introduced in 1993. Programs may use fractal heightmaps to generate terrain. Fractals have been generated on computers using the following methods: Menger sponge, hypercomplex manifolds, Brownian trees, Brownian motion, decomposition, L-systems, Lyapunov fractals, Newton fractals, Pickover stalks and strange attractors.
Features
Many different features are included in fractal-generating software packages. A corresponding diversity in the images produced is therefore possible. Most feature some form of algorithm selection, an interactive image zoom, and the ability to save files in JPEG, TIFF, or PNG format, as well as the ability to save parameter files, allowing the user to easily return to previously created images for later modification or exploration. The formula, parameters, variables and coloring algorithms for fractal images can be exchanged between users of the same program. There is no universally adopted standard fractal file format.
One feature of most escape-time or algebraic fractal programs is a maximum iteration setting. Increasing the iteration count is required when the image is magnified so that fine detail is not lost; limiting the maximum iterations is important when a device's processing power is low. Coloring options often allow colors to be randomized. Options for color density are common because some gradients output hugely variable magnitudes, resulting in heavy repetitive banding or large areas of the same color. Because of the convenience of adding post-processing effects, layering and alpha-compositing features found in other graphics software have been included. Both 2D and 3D rendering effects, such as the plasma effect and lighting, may be included. Many packages also allow the user to input their own formula for greater control of the fractals, as well as a choice of color rendering, along with the use of filters and other image-manipulation techniques. Some fractal software packages allow for the creation of movies from a sequence of fractal images. Others display render time and offer some form of color cycling and color-palette creation tools.
Standard graphics software (such as GIMP) contains filters or plug-ins which can be used for fractal generation. Blender contains a fractal (or random) modifier. Many stand-alone fractal-generating programs can be used in conjunction with other graphics programs (such as Photoshop) to create more complex images. POV-Ray is a ray-tracing program that generates images from a text-based scene description and can generate fractals. Scripts on 3ds Max and Autodesk Maya can be used. A number of web-based interfaces for fractal generation are freely available, including Turtle Graphics Renderer. Fractal Lab can generate both 2D and 3D fractals and is available over the web using WebGL. JWildfire is a Java-based, open-source fractal flame generator. Mandelbrot Fractal is a fractal explorer written in JavaScript. Fractal Grower is software written in Java for generating Lindenmayer substitution fractals (L-systems).
Programs
Because of the butterfly effect, generating fractals can be difficult to master: a small change in a single variable can have an unpredictable effect. Some software presents the user with a steep learning curve, and an understanding of chaos theory is advantageous, including the characteristics of fractal dimension, recursion and self-similarity exhibited by all fractals.
There are many fractal generating programs available, both free and commercial. Notable fractal generating programs include:
Apophysis – open source IFS software for Microsoft Windows-based systems
Bryce – cross platform commercial software partially developed by Ken Musgrave
Chaotica – commercial IFS software for Windows, Linux and Mac OS. Free for non-commercial use.
Electric Sheep – open source distributed screensaver software, developed by Scott Draves.
Fractint – MS-DOS freeware initially released in 1988 with available source code, later ported to Linux and Windows (as WinFract)
Fyre is a cross-platform open source tool for producing images based on histograms of iterated chaotic functions
Kalles Fraktaler – Windows based fractal zoomer
Milkdrop – music visualization plugin distributed with Winamp
MojoWorld Generator – a defunct landscape generator for Windows
openPlaG – creates fractals by plotting simple functions
Picogen - a cross platform open source terrain generator
Sterling – freeware software for Windows
Terragen – a fractal terrain generator that can render animations for Windows and Mac OS X
Ultra Fractal – proprietary fractal generator for Windows and Mac OS X
Wolfram Mathematica – can be used specifically to create fractal images
XaoS – cross platform open source fractal zooming program
Most of the above programs make two-dimensional fractals, with a few creating three-dimensional fractal objects, such as mandelbulbs and mandelboxes. Mandelbulber is an experimental, cross platform open-source program that generates three-dimensional fractal images. Mandelbulber is adept at producing 3D animations. Mandelbulb 3D is free software for creating 3D images featuring many effects found in 3D rendering environments. Incendia is a 3D fractal program that uses Iterated Function Systems (IFS) for fractal generation. Visions of Chaos, Boxplorer and Fragmentarium also render 3D images.
The open source GnoFract 4D is available.
ChaosPro is a freeware fractal creation program. Fraqtive is an open source cross-platform fractal generator. MandelX is a free program for rendering fractal images on Windows. WinCIG, Chaoscope, Tierazon, Fractal Forge and Malsys also generate fractal images.
See also
Logarithmic spiral
Software art
Chaos game
References
External links
An Introduction to Fractals by Paul Bourke, May 1991
Fractals
Computer art
1979 software | Fractal-generating software | Mathematics | 2,387 |
57,786,384 | https://en.wikipedia.org/wiki/Target%20controlled%20infusion | Target-controlled infusion (TCI) automates the dosing of intravenous drugs during surgery. After the anesthetist sets the desired parameters in a computer and presses the start button, the system controls the infusion pump, while being monitored by the anesthetist. TCI is as safe and effective as manually controlled infusion.
TCI can be sub-classified according to the target. The suffix 'e', as in TCIe, indicates that the target is the effect site; in most cases this is the central nervous system or brain. Alternatively, the suffix 'p' denotes plasma, indicating that the device implementing the TCI model targets the blood plasma. There are important differences in the time taken for effect-site equilibration. Studies have demonstrated the clinical safety of the effect-site target model.
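As a minimal sketch of the distinction, in generic pharmacokinetic notation (the symbols are standard but not tied to any one published model): plasma targeting drives the predicted plasma concentration C_p to the setpoint, while effect-site targeting drives the effect-site concentration C_e, which lags C_p according to a first-order equilibration rate constant k_e0:
$$\frac{dC_e}{dt} = k_{e0}\,\bigl(C_p(t) - C_e(t)\bigr)$$
A larger k_e0 means faster effect-site equilibration, which is why the two targeting modes differ in onset behaviour.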
Popular TCI models exist for propofol and the synthetic opioid remifentanil. The models are based on pharmacokinetic studies and use software embedded in the infusion device. For propofol, the Marsh and Schnider models are available; the Minto model is commonly used for remifentanil. In 2017, a project to emulate the TCI models in the Python language was published on GitHub.
History
TCI has been used in clinical settings since 1996, initially with propofol.
See also
General anaesthesia#tci
References
Drug delivery devices
General anesthetics | Target controlled infusion | Chemistry | 301 |
37,314,553 | https://en.wikipedia.org/wiki/Deula | Deula is an architectural element in a Hindu temple in the Kalinga architecture style of the Odishan temples in Eastern India. Sometimes the whole temple is also referred to as a deula. The word "deula" in the Odia language denotes a building built in a particular style seen in most temples of Odisha. Deul is also used in English, though deul temples take a different form in the Manbhum region of West Bengal.
There are three types of deula. In terms of the general north Indian terminology, the rekha deula is the sanctuary and the tower over it (respectively the garbhagriha and the shikhara), and the pidha deula is the mandapa where the faithful gather. The khakhara deula is an alternative form of tower over the sanctuary, whose shape resembles the oblong gopuram gatehouses of southern Dravidian architecture.
Rekha Deula
Rekha in Odia means a straight line. The rekha deula is a tall building with the shape of a sugar loaf, looking like a shikhara. It covers and protects the sanctum sanctorum (garbhagriha).
Examples :
The Shikhara of the Lingaraja Temple in Bhubaneswar
The Shikhara of the Jagannath temple in Puri
Jagannath Temple in Nayagarh
Uttaresvara Siva Temple in Bhubaneswar
The Shikhara of Yameshwar Temple in Bhubaneswar
The Shikhara of the Shantinath Shiva Temple at Shihar village near Jayrambati, Bankura, West Bengal
Pidha Deula
It is a square building, typically with a pyramid-shaped roof rather like the vimana towers over the sanctuaries of temples in southern Dravidian architecture, used for the halls or service rooms of the temple.
Examples
The Jagamohana (assembly hall) of the Sun temple in Konârak
The Jagamohana of Yameshwara Temple in Bhubaneswar
The Jagamohana of the Shantinath Shiva Temple in Jayrambati, Bankura, West Bengal
Digambara Jaina Temple, Khandagiri in Bhubaneswar
Khakhara deula
Khakhara deula is a rectangular building with a truncated pyramid-shaped roof, like the gopuras. The name comes from khakharu (gourd), after the shape of the roof. Temples of feminine deities such as Shakti are of this type.
Examples :
Baitala Deula, Bhubaneswar (dedicated to Chamunda)
Varahi Deula, Chaurasi, Puri district (dedicated to Varahi)
Brahmi temple, Chaurasi
Kedara Gouri, Bhubaneswar
Narayani Temple, Khalikote (dedicated to Durga)
Durga Temple, Banki
References
External links
http://orissa.gov.in/e-magazine/Orissareview/nov2005/engpdf/Orissan_Temple_Architecture.pdf
http://www.indoarch.org/arch_glossary.php
Hindu temple architecture
Architectural elements
Monuments and memorials in India
Indian architectural styles
Cultural history of Odisha | Deula | Technology,Engineering | 687 |
3,993,428 | https://en.wikipedia.org/wiki/Maintaining%20power | In horology, a maintaining power is a mechanism for keeping a clock or watch going while it is being wound.
Huygens
The weight drive used by Christiaan Huygens in his early clocks acts as a maintaining power. In this layout, the weight which drives the clock is carried on a pulley and the cord (or chain) supporting the weight is wrapped around the main driving wheel on one side and the rewinding wheel on the other.
The chain then loops down from the rewinding wheel and up again to the main driving wheel via a second pulley carrying a small tensioning weight which ensures the loop stays taut and the chain engages well with the main driving wheel and rewinding wheel.
In the first illustration the clock is fully wound: the driving weight is up and the tensioning weight down, and a ratchet on the winding wheel prevents it from turning back. The driving weight pulls the main wheel in the direction of the arrow. In the second illustration the driving weight has reached its lowest point and the tensioning weight is now up; the clock needs to be wound by turning the winding wheel (or by pulling the chain), but during that time the main wheel continues to feel the driving force and the clock will not stop.
The principle was later applied by the French clockmaker Robert Robin who automated the re-winding in his remontoire. The drive- and tensioning-weights were made much smaller and drove the escape wheel directly. It was re-wound by the main train of the clock which turned the fourth pulley and was controlled by a lever attached to the tensioning weight. When this had risen to its upper limit, it started the re-winding process. As the drive weight rose, the tensioning weight fell and at the bottom of its travel it stopped the re-winding.
Bolt and shutter
This is a type of maintaining power which needs to be engaged before re-winding is started. It consists of a weighted arm (bolt) with a ratchet pawl on the end of it which engages with the edge of the first wheel to keep it turning while the weight or spring is wound. To make sure that it was always operated, the hole in the dial through which the clock is wound is covered with a shutter which can be moved out of the way by pushing down on a lever at the side of the dial. This lever also engages the bolt. A similar type of mechanism is sometimes used on turret clocks. Because these take much longer to wind, and are usually wound by trained staff, the bolt carries a segment of a gear wheel rather than a single pawl and is engaged manually.
Harrison
John Harrison invented a form of maintaining power around the mid-1720s. His clocks of the period used a grasshopper escapement which malfunctioned if not driven continuously—even while the clock was being wound. In essence, the maintaining power consists of a disc between the driving drum of the clock and the great wheel. The drum drives the disc, and a spring attached to the disc drives the great wheel. The spring is selected to be slightly weaker than the driving drum, so in normal operation it is fully compressed.
When the pressure from the drum is removed for winding, the ratchet teeth on the edge of the disc engage a pawl and prevent it turning backward. The spring continues to drive the great wheel forward with a force slightly less than normal. When winding is done, the drum drives the disc forward, re-compressing the maintaining spring ready for its next use. The whole mechanism is completely automatic in its operation and has remained one of Harrison's lasting contributions to horology.
References
Timekeeping components | Maintaining power | Technology | 744 |
716,975 | https://en.wikipedia.org/wiki/Metallum%20Martis | Metallum Martis, a 1665 book by Dud Dudley, is the earliest known reference to the use of coal in metallurgical smelting. The book is also referred to as Iron made with Pit-Coale, Sea-Coale, &c. And with the same Fuell to Melt and Fine Imperfect Mettals, And Refine perfect Mettals.
Many attendant difficulties had to be overcome before this fuel could be applied to the purpose of smelting iron. Dudley does not describe in his book how he was using coal, only that he was. In so doing, he described his use successively of an ironworks on Pensnett Chase and at Cradley, of a furnace at Himley, and of a furnace at Hasco Bridge near Gornal.
Dudley does mention several things that indicate what he was doing. The coal he used was the small pieces and slack which were "little or of no use in that inland country" and so brought in no money. This coal debris was left in heaps, where the "crowded moist slack heat naturally, and kindle in the middle of these great heaps, often sets the coal works on fire", and "Also from these sulphurous heaps, mixed with ironstone (for out of many of the same pits is gotten much ironstone or mine), the fires heating vast quantities of water, passing through these soughs or adits becometh as hot as the bath at Bath". Dudley describes two rival attempts to smelt iron with coal, instigated by supporters of Parliament during the Civil War and the Interregnum. Dudley visited both sites and, having examined their furnaces and production methods, informed the proprietors, when asked his opinion, that they would fail. The first attempt was by Captain Buck, with the backing of many parliamentary officers including Oliver Cromwell, and with technical help from Edward Dagney, an Italian. The second attempt, by Captain John Copley in 1656–57, also failed, despite Dudley improving the efficiency of Copley's bellows at no charge. Dudley reapplied for a patent from Charles II in 1660, stating "and seeing no man able to perform the mastery of making of iron with pit-coal or sea-coal, ... [without my] laudable inventions the author was, and is, unwilling [that they] should fall to the ground and die with him".
A significant feature of his great work Metallum Martis is a map showing Dudley Castle where he correctly identifies the order and geographic layout of strata of coal and ironstone under survey.
This map is considered to be the earliest recorded geologic map, and Metallum Martis marks a turning point in the evolution of scientific rationale concerning the recording and interpretation of geological information. The map is thought to have been made at Castle Hill in Dudley by Dud Dudley in 1665.
Notes
References
1665 books
Theories | Metallum Martis | Chemistry,Materials_science | 606 |
78,163,821 | https://en.wikipedia.org/wiki/Timi%C8%99oara%20Astronomical%20Observatory | Timișoara Astronomical Observatory is a research institute in Timișoara, Romania, founded on 7 December 1962. The scientific activity is coordinated by the Astronomical Institute of the Romanian Academy, and the administrative activity by the local branch of the Romanian Academy.
The Astronomical Observatory building has a basement, a ground floor and the equatorial instrument hall, which is covered with a rotating dome. The dome and the main instrument of the observatory were made locally in Timișoara with the institute's own resources. The optical instruments placed on the equatorial mount are:
a Cassegrain telescope with a 300/1690 mm Zeiss mirror equipped with an SBIG CCD camera, used for visual observations and CCD stellar photometry, and
a 100/2000 mm Zeiss scope.
Due to the growth of Timișoara, light pollution and the brightness of the night sky have increased, so the location of the observatory is no longer suitable. There are plans to relocate it to Muntele Mic in Caraș-Severin County.
References
Astronomical observatories in Romania
Astronomy institutes and departments
Buildings and structures in Timișoara | Timișoara Astronomical Observatory | Astronomy | 220 |
4,128,748 | https://en.wikipedia.org/wiki/Oxygen-18 | Oxygen-18 (18O, Ω) is a natural, stable isotope of oxygen and one of the environmental isotopes.
18O is an important precursor for the production of fluorodeoxyglucose (FDG) used in positron emission tomography (PET). Generally, in the radiopharmaceutical industry, enriched water (H218O) is bombarded with hydrogen ions in either a cyclotron or linear accelerator, producing fluorine-18. This is then synthesized into FDG and injected into a patient. It can also be used to make an extremely heavy version of water when combined with tritium (hydrogen-3): T218O. This compound has a density almost 30% greater than that of natural water.
Accurate measurements of 18O rely on proper procedures of analysis, sample preparation and storage.
Paleoclimatology
In ice cores, mainly Arctic and Antarctic, the ratio of 18O to 16O (known as δ18O) can be used to determine the temperature of precipitation through time. Assuming that atmospheric circulation and elevation have not changed significantly over the poles, the temperature of ice formation can be calculated, since the equilibrium fractionation between phases of water is known for different temperatures. Water molecules are also subject to Rayleigh fractionation as atmospheric water moves from the equator poleward, which results in progressive depletion of 18O, or lower δ18O values. In the 1950s, Harold Urey performed an experiment in which he mixed both normal water and water with oxygen-18 in a barrel, and then partially froze the barrel's contents.
The 18O/16O ratio (δ18O) can also be used for paleothermometry with certain types of fossils. The fossils in question have to show progressive growth in the animal or plant that the fossil represents. The fossil material used is generally calcite or aragonite, however oxygen isotope paleothermometry has also been done on phosphatic fossils using SHRIMP. For example, seasonal temperature variations may be determined from a single sea shell from a scallop. As the scallop grows, an extension is seen on the surface of the shell. Each growth band can be measured, and a calculation is used to determine the probable sea water temperature in comparison to each growth. The equation for this is of the general linear form

T = A + B·δ

where T is temperature in Celsius, δ is the measured oxygen isotope ratio, and A and B are empirically calibrated constants.
For determination of ocean temperatures over geologic time, multiple fossils of the same species in different stratigraphic layers would be measured, and the difference between them would indicate long term changes.
Plant physiology
In the study of plant photorespiration, labeling of the atmosphere with oxygen-18 allows the measurement of oxygen uptake by the photorespiration pathway. Labeling with 18O2 gives the unidirectional flux of oxygen uptake, while a net photosynthetic evolution of oxygen proceeds at the same time. It was demonstrated that, under a preindustrial atmosphere, most plants reabsorb, by photorespiration, half of the oxygen produced by photosynthesis. The yield of photosynthesis was thus halved by the presence of oxygen in the atmosphere.
18F production
Fluorine-18 is usually produced by irradiation of 18O-enriched water (H218O) with high-energy (about 18 MeV) protons prepared in a cyclotron or a linear accelerator, yielding an aqueous solution of 18F fluoride. This solution is then used for rapid synthesis of a labeled molecule, often with the fluorine atom replacing a hydroxyl group. The labeled molecules or radiopharmaceuticals have to be synthesized after the radiofluorine is prepared, as the high energy proton radiation would destroy the molecules.
Large amounts of oxygen-18 enriched water are used in positron emission tomography centers, for on-site production of 18F-labeled fludeoxyglucose (FDG).
An example of the production cycle is a 90-minute irradiation of 2 milliliters of 18O-enriched water in a titanium cell, through a 25 μm thick window made of Havar (a cobalt alloy) foil, with a proton beam having an energy of 17.5 MeV and a beam current of 30 microamperes.
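As an illustration of the activity such a run can deliver, the sketch below applies the standard saturation-activity relation A(t) = R(1 − e^(−λt)) for fluorine-18 (half-life ≈ 109.8 minutes); the saturation activity assumed for the target is an illustrative value, not a figure from the text.

```python
import math

# Hedged sketch: end-of-bombardment activity of 18F from a timed irradiation.
# A(t) = R * (1 - exp(-lambda * t)); R below is an assumed saturation activity.
HALF_LIFE_MIN = 109.77                   # 18F half-life in minutes
LAMBDA = math.log(2) / HALF_LIFE_MIN     # decay constant [1/min]

def eob_activity(saturation_gbq, minutes):
    """Activity at end of bombardment, given the saturation activity
    (the activity approached for an infinitely long irradiation)."""
    return saturation_gbq * (1 - math.exp(-LAMBDA * minutes))

# Illustrative only: assume a 100 GBq saturation activity for this target.
print(f"{eob_activity(100.0, 90):.1f} GBq after a 90-minute irradiation")
```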
The irradiated water has to be purified before another irradiation, to remove organic contaminants, traces of tritium produced by a 18O(p,t)16O reaction, and ions leached from the target cell and sputtered from the Havar foil.
See also
Willi Dansgaard – a paleoclimatologist
Isotopes of oxygen
Paleothermometry
Pâté de Foie Gras (short story)
Δ18O
Global meteoric water line
References
Environmental isotopes
Isotopes of oxygen | Oxygen-18 | Chemistry | 969 |
78,647,696 | https://en.wikipedia.org/wiki/Hafnium%28IV%29%20sulfate | Hafnium(IV) sulfate describes the inorganic chemical compounds with the formula Hf(SO4)2·nH2O, where n can range from 0 to 7. It is most commonly encountered in the anhydrous and tetrahydrate forms, which are both white solids.
Structure
Anhydrous hafnium(IV) sulfate consists of a polymeric network of sulfate-bridged hafnium atoms. It is isomorphous with zirconium(IV) sulfate.
Hafnium(IV) sulfate tetrahydrate is isomorphous with zirconium(IV) sulfate tetrahydrate and consists of repeated sheets of Hf(SO4)2(H2O)4, where the sulfate ligands are bidentate.
Preparation and properties
The tetrahydrate is produced by the reaction of hafnium metal or hafnium(IV) oxide with concentrated sulfuric acid followed by evaporation of the solution:
Hf + 2 H2SO4 → Hf(SO4)2 + 2 H2
The anhydrous form can be produced by heating the tetrahydrate to 350 °C. If the anhydrous compound is heated to 820 °C, it decomposes to hafnium(IV) oxide, sulfur oxides, and oxygen. The mechanism of decomposition has not been fully elucidated.
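For a sense of scale, the short sketch below computes the theoretical mass retained when anhydrous Hf(SO4)2 decomposes to HfO2; the atomic masses are standard values, and the snippet is illustrative arithmetic rather than part of the cited chemistry.

```python
# Hedged stoichiometry sketch: theoretical residue of HfO2 left after
# decomposing anhydrous Hf(SO4)2 at 820 C (standard atomic masses).
HF, S, O = 178.49, 32.06, 16.00          # g/mol

m_hf_so4_2 = HF + 2 * (S + 4 * O)        # 370.61 g/mol
m_hfo2 = HF + 2 * O                      # 210.49 g/mol

fraction = m_hfo2 / m_hf_so4_2
print(f"{fraction:.1%} of the starting mass remains as HfO2")  # ~56.8%
```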
Various hydrolyzed derivatives of hafnium(IV) sulfate are known.
References
Hafnium compounds
Sulfates | Hafnium(IV) sulfate | Chemistry | 321 |
20,616,738 | https://en.wikipedia.org/wiki/Partial%20element%20equivalent%20circuit | The partial element equivalent circuit (PEEC) method grew out of partial-inductance calculations applied to interconnect problems in the early 1970s and is used for numerical modeling of electromagnetic (EM) properties. The transition from a design tool to a full-wave method involves the capacitance representation, the inclusion of time retardation and the dielectric formulation. Using the PEEC method, the problem is transferred from the electromagnetic domain to the circuit domain, where conventional SPICE-like circuit solvers can be employed to analyze the equivalent circuit. Given a PEEC model, one can easily include any electrical component, e.g. passive components, sources, non-linear elements, ground, etc. Moreover, using the PEEC circuit, it is easy to exclude capacitive, inductive or resistive effects from the model when possible, in order to make the model smaller. As an example, in many applications within power electronics, the magnetic field is a dominating factor over the electric field due to the high currents in the system. The model can therefore be simplified by neglecting capacitive couplings, which is done simply by excluding the capacitors from the PEEC model.
Numerical modeling of electromagnetic properties is used by, for example, the electronics industry to:
Ensure functionality of electric systems
Ensure compliance with electromagnetic compatibility (EMC)
History
The main research activity in this area has been, and is, performed by Albert Ruehli at the IBM Thomas J. Watson Research Center, starting with a publication in 1972. At that time the foundation of the PEEC method was presented, i.e. the calculation of the partial inductances. The PEEC method was later extended to more generalized problems, including dielectric materials and the retardation effect.
The PEEC method is not among the most common techniques used in EM simulation software or as a research area, but it has started to gain recognition: for the first time, a session at the 2001 IEEE EMC Symposium was named after the technique. In the mid-1990s, two researchers from the University of L'Aquila in Italy, Professor Antonio Orlandi and Professor Giulio Antonini, published their first PEEC paper and are now, together with Dr. Ruehli, considered the top researchers in the area. Starting in 2006, several research projects have been initiated by the faculty of Computer Science and Electrical Engineering of Luleå University of Technology in Sweden in the focus area of PEEC, with an emphasis on computer-based solvers for PEEC.
Application
PEEC is widely used for combined electromagnetic and circuit problems in various areas such as power electronics, antenna design, signal integrity analysis, etc. Using PEEC the designed model of a physical structure is transferred from the electromagnetic domain into the circuit domain. Therefore, external electrical components and circuits can be connected to the equivalent circuit which consists of extracted partial elements, in a straightforward manner. Moreover, since the final model consists of circuit elements, various components can easily be excluded from the circuit to simplify the problem while the accuracy is still ensured. For instance, for low-frequency problems, one can safely remove capacitive couplings without degrading the accuracy of the results and hence reduce the problem size and complexity.
Theory
The classical PEEC method is derived from the equation for the total electric field at a point, written as

\mathbf{E}^{i}(\mathbf{r},t) = \frac{\mathbf{J}(\mathbf{r},t)}{\sigma} + \frac{\partial \mathbf{A}(\mathbf{r},t)}{\partial t} + \nabla \phi(\mathbf{r},t)

where \mathbf{E}^{i} is the incident electric field, \mathbf{J} is the current density, \mathbf{A} is the magnetic vector potential, \phi is the scalar electric potential, and \sigma the electrical conductivity, all at observation point \mathbf{r}. In the figures on the right, an orthogonal metal strip with 3 nodes and 2 cells, and the corresponding PEEC circuit, are shown.
By using the definitions of the scalar and vector potentials, the current- and charge-densities are discretized by defining pulse basis functions for the conductors and dielectric materials. Pulse functions are also used for the weighting functions, resulting in a Galerkin-type solution. By defining a suitable inner product, a weighted volume integral over the cells, the field equation can be interpreted as Kirchhoff's voltage law over a PEEC cell consisting of partial self inductances between the nodes and partial mutual inductances representing the magnetic field coupling in the equivalent circuit. The partial inductances are defined as

L_{p_{\alpha\beta}} = \frac{\mu}{4\pi} \frac{1}{a_{\alpha} a_{\beta}} \int_{v_{\alpha}} \int_{v_{\beta}} \frac{1}{\left| \mathbf{r}_{\alpha} - \mathbf{r}_{\beta} \right|} \, dv_{\alpha} \, dv_{\beta}

for volume cells v_{\alpha} and v_{\beta} with cross sections a_{\alpha} and a_{\beta}. Then, the coefficients of potential are computed as

p_{ij} = \frac{1}{S_i S_j} \frac{1}{4\pi\varepsilon_0} \int_{S_i} \int_{S_j} \frac{1}{\left| \mathbf{r}_i - \mathbf{r}_j \right|} \, dS_i \, dS_j

and a resistive term between the nodes is defined as

R_{\gamma} = \frac{l_{\gamma}}{a_{\gamma} \sigma_{\gamma}}
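A minimal numerical sketch of the partial-inductance definition above is given below, assuming free space and two parallel, x-directed rectangular volume cells; the Monte-Carlo sampling, the cell geometry and the function name are illustrative choices, not part of any standard PEEC solver.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def partial_inductance(cell_a, cell_b, n=512, seed=0):
    """Monte-Carlo estimate of
    Lp = mu0/(4*pi*aA*aB) * double-integral of 1/|rA - rB| over both cells,
    for two axis-aligned boxes (origin, size) carrying x-directed current."""
    (oa, sa), (ob, sb) = cell_a, cell_b
    rng = np.random.default_rng(seed)
    ra = oa + rng.random((n, 3)) * sa              # sample points in cell A
    rb = ob + rng.random((n, 3)) * sb              # sample points in cell B
    inv_r = 1.0 / np.linalg.norm(ra[:, None] - rb[None, :], axis=-1)
    integral = inv_r.mean() * np.prod(sa) * np.prod(sb)
    area_a, area_b = sa[1] * sa[2], sb[1] * sb[2]  # cross sections (y by z)
    return MU0 / (4 * np.pi) * integral / (area_a * area_b)

# Two 1 mm x 0.1 mm x 35 um trace cells, separated by 2 mm along x.
cell1 = (np.array([0.0, 0.0, 0.0]), np.array([1e-3, 1e-4, 35e-6]))
cell2 = (np.array([2e-3, 0.0, 0.0]), np.array([1e-3, 1e-4, 35e-6]))
print(f"Lp ~ {partial_inductance(cell1, cell2):.3e} H")
```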
PEEC model reduction
The rigorous full-wave version of the PEEC method is called (Lp,P,R,t) PEEC, where Lp is partial inductance, P is the Maxwell potential coefficient (inverse of capacitance), R is resistance, and t is the time delay. When the problem allows it, a reduced model of the full-wave version can be used. For example, if the EIP structure is electrically small, the delay term t can be omitted and the model reduces to an (Lp,P,R) PEEC model. In addition, if the angular frequency ω is sufficiently high that ω·Lp >> R, the R term can be omitted and the approximate (Lp,P) PEEC model used. Depending on the modeling situation, (Lp) and (Lp,R) models are also useful.
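The sketch below turns those reduction rules into code; the tenfold margin on ω·Lp >> R and the λ/20 electrical-size threshold are illustrative assumptions, not values from the PEEC literature.

```python
import math

# Hedged helper: pick a reduced PEEC model using the criteria above.
def choose_peec_model(freq_hz, lp_henry, r_ohm, max_dim_m, c0=3.0e8):
    w = 2 * math.pi * freq_hz
    wavelength = c0 / freq_hz
    parts = ["Lp", "P"]
    if w * lp_henry < 10 * r_ohm:        # w*Lp not >> R: keep resistance
        parts.append("R")
    if max_dim_m > wavelength / 20:      # electrically large: keep delays
        parts.append("t")
    return "(" + ",".join(parts) + ") PEEC"

# e.g. a 5 cm trace with 20 nH and 50 mOhm, analyzed at 10 MHz
print(choose_peec_model(10e6, 20e-9, 0.05, 0.05))  # -> "(Lp,P) PEEC"
```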
Model order reduction (MOR) has become an active research topic for circuit models in general and PEEC models in particular. Integrating a PEEC model directly into a circuit simulator is computationally expensive for two main reasons. One is that a large number of circuit elements are generated for complex structures at high frequencies; the other is that the circuit matrices based on modified nodal analysis (MNA) are usually dense due to full inductive and capacitive coupling. In order to model and simulate such problems efficiently, developing compact model representations via model order reduction is desirable for PEEC modeling.
Discretization
Meshing basics in PEEC
PEEC solvers
Case study
References
External links
Partial Element Equivalent Circuit (PEEC) homepage
Electromagnetic Modelling Process to Improve Cabling of Power Electronic Structures
Numerical differential equations
Computational electromagnetics | Partial element equivalent circuit | Physics | 1,241 |
48,467,240 | https://en.wikipedia.org/wiki/Nordic%20Medical%20Prize | The Nordic Medical Prize (Swedish Nordiska medicinpriset) is a Swedish prize in medicine awarded by the SalusAnsvar/Ulf Nilsonnes Foundation for Medical Research in cooperation with the insurance company Folksam. It is the second largest medical award in the Nordic countries, after the Nobel Prize in Medicine, and includes a monetary prize of one million Swedish kronor. The prize has been awarded since 1998.
Recipients
1998 – Lars Wallentin
1999 – Björn Rydevik
2000 – Jörgen Engel
2002 – Anne-Lise Børresen-Dale
2003 – Rikard Holmdahl and Andrej Tarkowski
2004 – Ulf Lerner and Jukka H. Meurman
2005 – Peter Arner
2006 – Claes Ohlsson and Kalervo Väänänen
2007 – Thomas Sandström
2010 – Markku Kaste, Perttu J. Lindsberg and Turgut Tatlisumak
2012 – Ola Didrik Saugstad
2013 – Eija Kalso and Eva Kosek
2014 – Erkki Isometsä and Gerhard Andersson
2015 – Lars Engelbretsen, Roald Bahr, Jón Karlsson and Michael Kjær
2016 – Heikki Joensuu and Lisa Rydén, Henrik Grönberg and Jonas Hugosson
2017 – Gunhild Waldemar and Kaj Blennow
2018 – Juleen R. Zierath and Patrik Rorsman
See also
List of medicine awards
References
Medicine awards
Swedish awards | Nordic Medical Prize | Technology | 311 |
29,294 | https://en.wikipedia.org/wiki/IBM%20System/360 | The IBM System/360 (S/360) is a family of mainframe computer systems announced by IBM on April 7, 1964, and delivered between 1965 and 1978. System/360 was the first family of computers designed to cover both commercial and scientific applications and a complete range of applications from small to large. The design distinguished between architecture and implementation, allowing IBM to release a suite of compatible designs at different prices. All but the only partially compatible Model 44 and the most expensive systems use microcode to implement the instruction set, featuring 8-bit byte addressing and fixed-point binary, fixed-point decimal and hexadecimal floating-point calculations. The System/360 family introduced IBM's Solid Logic Technology (SLT), which packed more transistors onto a circuit card, allowing more powerful but smaller computers.
System/360's chief architect was Gene Amdahl, and the project was managed by Fred Brooks, responsible to Chairman Thomas J. Watson Jr. The commercial release was piloted by another of Watson's lieutenants, John R. Opel, who managed the launch of IBM's System 360 mainframe family in 1964. The slowest System/360 model announced in 1964, the Model 30, could perform up to 34,500 instructions per second, with memory from 8 to 64 KB. High-performance models came later. The 1967 IBM System/360 Model 91 could execute up to 16.6 million instructions per second. The larger 360 models could have up to 8 MB of main memory, though that much memory was unusual; a large installation might have as little as 256 KB of main storage, but 512 KB, 768 KB or 1024 KB was more common. Up to 8 megabytes of slower (8 microsecond) Large Capacity Storage (LCS) was also available for some models.
The IBM 360 was extremely successful, allowing customers to purchase a smaller system knowing they could expand it, if their needs grew, without reprogramming application software or replacing peripheral devices. It influenced computer design for years to come; many consider it one of history's most successful computers. Application-level compatibility (with some restrictions) for System/360 software is maintained to the present day with the System z mainframe servers.
System/360 history
Background
By the early 1960s, IBM was struggling with the load of supporting and upgrading five separate lines of computers. These were aimed at different market segments and were entirely different from each other. A customer who purchased a machine to handle accounting, such as the IBM 1401, that was now looking for a machine for engineering calculations, such as the IBM 7040, had no reason to select IBM – the 7040 was incompatible with the 1401 and they might as well have been from different companies. Customers were frustrated that major investments, often entirely new machines and programs, were required when seemingly small performance improvements were needed.
In 1961, IBM assembled a task force to chart their developments for the 1960s, known as SPREAD, for Systems Programming, Research, Engineering and Development. In meetings at the New Englander Motor Hotel in Greenwich, Connecticut, SPREAD developed a new concept for the next generation of IBM machines. At the time, new technologies were coming into the market, including the replacement of individual transistors with small-scale integrated circuits and the move to an 8-bit byte from the former 6-bit-oriented words. These were going to lead to a new generation of machines, today known as the third generation, from all of the existing vendors.
Where SPREAD differed significantly from previous concepts was what features would be supported. Instead of machines aimed at different market niches, the new concept was effectively the union of all of these designs. A single instruction set architecture (ISA) included instructions for binary, floating-point, and decimal arithmetic, string processing, conversion between character sets (a major issue before the widespread use of ASCII) and extensive support for file handling, among many other features.
This would mean IBM would be introducing yet another line of machines, once again incompatible with their earlier machines. But the new systems would be able to run all of the programs that formerly required different machines. There was a risk that customers, facing the purchase of yet another new and incompatible platform, would simply choose some other vendor. Yet the concept steadily gained support, and six months after the task force was formed, the company decided to implement the SPREAD concept.
A new team was organized under the direction of Bob Evans, who personally persuaded CEO Thomas J. Watson Jr. to develop the new system. Gene Amdahl was the chief architect of the computers themselves, while Fred Brooks was the project lead for the software and Erich Bloch led the development of IBM's hybrid integrated circuit designs, Solid Logic Technology.
"Family" concept
Producing a single machine with support for all of these features would border on impossible. Instead, the SPREAD concept was based on the separation of the defined feature set from its internal operation, with a family of machines with different performance and different internal designs. Specifically, depending on the machine, some instructions might not be directly supported in hardware, and would instead be completed using small programs, in an internal machine-specific code, stored in read only memory, or what today is known as microcode.
So a model intended for use with accounting might choose to implement the decimal math directly in hardware, and leave the floating-point instructions to be handled by the subprograms. This would make floating point on such a system run (much) more slowly, but, critically, it would run. Likewise, a company purchasing a system for engineering support would choose a model with floating-point hardware, and might use it from time to time to run their payroll. Using previous designs, the system that performed floating point would generally not have any support for decimal math at all, and would require the customer to write such a package or buy another machine.
This meant that a single lineup could have machines tailored to match the price and performance niches that formerly demanded entirely separate computer systems. This flexibility greatly lowered barriers to entry. With most other vendors customers had to choose between machines they might outgrow or machines that were potentially too powerful and thus too costly. In practice, this meant that many companies simply did not buy computers. Now, a customer could purchase a machine that solved a particular requirement, knowing they could change the models as their needs changed, without losing support for the programs they were already running.
For instance, in the case of a firm that purchased an accounting system and was now looking to expand their computer support into engineering, this meant they could develop and test their engineering program on the machine they already used. If they ever needed more performance, they could purchase a machine with floating-point hardware, knowing that nothing else would change, it would simply get faster. Even the same peripherals could be used, allowing, for instance, data from the engineering system to be written to tape and then printed using a high-speed line printer already connected to their accounting system. Or they might replace the accounting system outright with a system with the performance to run both tasks.
The idea that a single design could address all the myriad ways that the machines could be used gave rise to the name: "360" is a reference to the 360 degrees of a circle, and circles of machines and components featured prominently in IBM's advertising.
Models
IBM initially announced a series of six computers and forty common peripherals. IBM eventually delivered fourteen models, including rare one-off models for NASA. The least expensive model was the Model 20 with as little as 4096 bytes of core memory, eight 16-bit registers instead of the sixteen 32-bit registers of other System/360 models, and an instruction set that was a subset of that used by the rest of the range.
The initial announcement in 1964 included Models 30, 40, 50, 60, 62, and 70. The first three were low- to middle-range systems aimed at the IBM 1400 series market. All three first shipped in mid-1965. The last three, intended to replace the 7000 series machines, never shipped and were replaced with the 65 and 75, which were first delivered in November 1965, and January 1966, respectively.
Later additions to the low-end included models 20 (1966, mentioned above), 22 (1971), and 25 (1968). The Model 20 had several sub-models; sub-model 5 was at the higher end of the model. The Model 22 was a recycled Model 30 with minor limitations: a smaller maximum memory configuration, and slower I/O channels, which limited it to slower and lower-capacity disk and tape devices than on the 30.
The Model 44 (1966) was a specialized model, designed for scientific computing and for real-time computing and process control, featuring some additional instructions, and with all storage-to-storage instructions and five other complex instructions eliminated.
A succession of high-end machines included the Model 67 (1966, mentioned below, briefly anticipated as the 64 and 66), 85 (1969), 91 (1967, anticipated as the 92), 95 (1968), and 195 (1971). The 85 design was intermediate between the System/360 line and the follow-on System/370 and was the basis for the 370/165. There was a System/370 version of the 195, but it did not include Dynamic Address Translation.
The implementations differed substantially, using different native data path widths, presence or absence of microcode, yet were extremely compatible. Except where specifically documented, the models were architecturally compatible. The 91, for example, was designed for scientific computing and provided out-of-order instruction execution (and could yield "imprecise interrupts" if a program trap occurred while several instructions were being read), but lacked the decimal instruction set used in commercial applications. New features could be added without violating architectural definitions: the 65 had a dual-processor version (M65MP) with extensions for inter-CPU signalling; the 85 introduced cache memory. Models 44, 75, 91, 95, and 195 were implemented with hardwired logic, rather than microcoded as all other models.
The Model 67, announced in August 1965, was the first production IBM system to offer dynamic address translation (virtual memory) hardware to support time-sharing. "DAT" is now more commonly referred to as an MMU. An experimental one-off unit was built based on a model 40. Before the 67, IBM had announced models 64 and 66, DAT versions of the 60 and 62, but they were almost immediately replaced with the 67 at the same time that the 60 and 62 were replaced with the 65. DAT hardware would reappear in the S/370 series in 1972, though it was initially absent from the series. Like its close relative, the 65, the 67 also offered dual CPUs.
IBM stopped marketing all System/360 models by the end of 1977.
Backward compatibility
IBM's existing customers had a large investment in software that ran on second-generation machines. Several System/360 models had the option of emulating the customer's existing computer using special hardware and microcode, and an emulation program that enabled existing programs to run on the new machine.
Customers initially had to halt the computer and load the emulation program.
IBM later added features and modified emulator programs to allow emulation of the 1401, 1440, 1460, 1410 and 7010 under the control of an operating system.
The Model 85 and later System/370 maintained the precedent, retaining emulation options and allowing emulators to run under OS control alongside native programs.
Successors and variants
System/360 (excepting the Models 20, 44 and 67) was replaced with the compatible System/370 range in 1970, and Model 20 users were targeted to move to the IBM System/3. (The idea of a major breakthrough with FS (Future Systems) technology was dropped in the mid-1970s for cost-effectiveness and continuity reasons.) Later compatible IBM systems include the 4300 family, the 308x family, the 3090, the ES/9000 and 9672 families (System/390 family), and the IBM Z series.
Computers that were mostly identical or compatible in terms of the machine code or architecture of the System/360 included Amdahl's 470 family (and its successors), Hitachi mainframes, the UNIVAC 9000 series, Fujitsu machines sold as the FACOM, the RCA Spectra 70 series, and the English Electric System 4. The System 4 machines were built under license to RCA. RCA sold the Spectra series to what was then UNIVAC, where they became the UNIVAC Series 70. UNIVAC also developed the UNIVAC Series 90 as successors to the 9000 series and Series 70. The Soviet Union produced a System/360 clone named the ES EVM.
The IBM 5100 portable computer, introduced in 1975, offered an option to execute the System/360's APL.SV programming language through a hardware emulator. IBM used this approach to avoid the costs and delay of creating a 5100-specific version of APL.
Special radiation-hardened and otherwise somewhat modified System/360s, in the form of the System/4 Pi avionics computer, are used in several fighter and bomber jet aircraft. In the complete 32-bit AP-101 version, 4 Pi machines were used as the replicated computing nodes of the fault-tolerant Space Shuttle computer system (in five nodes). The U.S. Federal Aviation Administration operated the IBM 9020, a special cluster of modified System/360s for air traffic control, from 1970 until the 1990s. (Some 9020 software is apparently still used via emulation on newer hardware.)
Table of System/360 models
Model summary
Six of the twenty IBM System/360 models announced either were never shipped or were never released.
Fourteen of the twenty IBM System/360 models announced shipped.
Technical description
Influential features
The System/360 introduced a number of industry standards to the marketplace, such as:
The 8-bit byte (against financial pressure during development to reduce the byte to 4 or 6 bits), rather than adopting the 7030 concept of accessing bytes of variable size at arbitrary bit addresses.
Byte-addressable memory (as opposed to bit-addressable or word-addressable memory)
32-bit words
The Bus and Tag I/O channel standardized in FIPS-60
Commercial use of microcoded CPUs
The IBM hexadecimal floating-point architecture
The EBCDIC character set
Nine-track magnetic tape
Architectural overview
The System/360 series computer architecture specification makes no assumptions on the implementation itself, but rather describes the interfaces and expected behavior of an implementation. The architecture describes mandatory interfaces that must be available on all implementations, and optional interfaces. Some aspects of this architecture are:
Big endian byte ordering
A processor with:
16 32-bit general-purpose registers (R0–R15)
A 64-bit program status word (PSW), which describes (among other things)
Interrupt masks
Privilege states
A condition code
A 24-bit instruction address
An interruption mechanism, maskable and unmaskable interruption classes and subclasses
An instruction set. Each instruction is wholly described and also defines the conditions under which an exception is recognized in the form of program interruption.
A memory (called storage) subsystem with:
8 bits per byte
A special processor communication area starting at address 0
24-bit addressing
Manual control operations that allow
A bootstrap process (a process called Initial Program Load or IPL)
Operator-initiated interrupts
Resetting the system
Basic debugging facilities
Manual display and modifications of the system's state (memory and processor)
An Input/Output mechanism which does not describe the devices themselves
Some of the optional features are:
Binary-coded decimal instructions
Floating-point instructions
Timing facilities (interval timer)
Key-controlled memory protection
All models of System/360, except for the Model 20 and Model 44, implemented that specification.
Binary arithmetic and logical operations are performed as register-to-register and as memory-to-register/register-to-memory as a standard feature. If the Commercial Instruction Set option was installed, packed decimal arithmetic could be performed as memory-to-memory with some memory-to-register operations. The Scientific Instruction Set feature, if installed, provided access to four floating-point registers that could be programmed for either 32-bit or 64-bit floating-point operations. The Models 85 and 195 could also operate on 128-bit extended-precision floating-point numbers stored in pairs of floating-point registers, and software provided emulation in other models. The System/360 used an 8-bit byte, 32-bit word, 64-bit double-word, and 4-bit nibble. Machine instructions had operators with operands, which could contain register numbers or memory addresses. This complex combination of instruction options resulted in a variety of instruction lengths and formats.
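As an illustration of the packed decimal format used by the Commercial Instruction Set, the hedged Python sketch below packs an integer into System/360 packed-decimal bytes (one digit per nibble, sign in the final nibble); the function name and example values are illustrative.

```python
def to_packed_decimal(n):
    """Pack an integer as System/360 packed decimal: one digit per nibble,
    with the sign (0xC = positive, 0xD = negative) in the final nibble."""
    sign = 0xC if n >= 0 else 0xD
    nibbles = [int(d) for d in str(abs(n))] + [sign]
    if len(nibbles) % 2:                 # pad to whole bytes
        nibbles.insert(0, 0)
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

print(to_packed_decimal(-1234).hex())    # 01234d
print(to_packed_decimal(567).hex())      # 567c
```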
Memory addressing was accomplished using a base-plus-displacement scheme, with registers 1 through F (15). A displacement was encoded in 12 bits, thus allowing a 4096-byte displacement (0–4095), as the offset from the address put in a base register.
Register 0 could not be used as a base register, an index register, or a branch address register, because "0" was reserved to indicate an address in the first 4 KB of memory. That is, if register 0 was specified as described, the value 0x00000000 was implicitly used in the effective address calculation in place of whatever value might be contained within register 0; if register 0 was specified as a branch address register, no branch was taken, the content of register 0 was ignored, and any side effect of the instruction was still performed.
This specific behavior permitted the initial execution of interrupt routines, since base registers would not necessarily have been set up during the first few instruction cycles of an interrupt routine. It is not needed for IPL ("Initial Program Load" or boot), as one can always clear a register without the need to save it.
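A minimal sketch of the base-plus-displacement effective address calculation, including the register-0 rule described above; the register contents and the function name are illustrative assumptions.

```python
def effective_address(regs, base, displacement, index=0):
    """System/360 effective address: a 12-bit displacement (0-4095) plus the
    base register contents plus, for RX-format instructions, an index
    register. Register 0 contributes zero rather than its contents."""
    assert 0 <= displacement <= 0xFFF
    b = regs[base] if base != 0 else 0        # reg 0 -> implicit zero
    x = regs[index] if index != 0 else 0
    return (b + x + displacement) & 0xFFFFFF  # addresses are 24-bit

regs = [0] * 16
regs[7] = 0x012000
print(hex(effective_address(regs, base=7, displacement=0x0A0)))  # 0x120a0
print(hex(effective_address(regs, base=0, displacement=0x0A0)))  # 0xa0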
With the exception of the Model 67, all addresses were real memory addresses. Virtual memory was not available in most IBM mainframes until the System/370 series. The Model 67 introduced a virtual memory architecture, which MTS, CP-67, and TSS/360 used—but not IBM's mainline System/360 operating systems.
The System/360 machine-code instructions are 2 bytes long (no memory operands), 4 bytes long (one operand), or 6 bytes long (two operands). Instructions are always situated on 2-byte boundaries.
Operations like MVC (Move-Characters) (Hex: D2) can only move at most 256 bytes of information. Moving more than 256 bytes of data required multiple MVC operations. (The System/370 series introduced a family of more powerful instructions such as the MVCL "Move-Characters-Long" instruction, which supports moving up to 16 MB as a single block.)
An operand is two bytes long, typically representing an address as a 4-bit nibble denoting a base register and a 12-bit displacement relative to the contents of that register, in the range 000–FFF (shown here as hexadecimal numbers). The address corresponding to that operand is the contents of the specified general-purpose register plus the displacement. For example, an MVC instruction that moves 256 bytes (with length code 255, coded in hexadecimal as FF) from an address formed from base register 7 plus a displacement, to an address formed from base register 8 plus a displacement, would be coded as a 6-byte instruction of the form operator/length/address1/address2.
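To make the SS-format layout concrete, the hedged sketch below assembles such an MVC instruction; the displacement values are hypothetical placeholders, since the original example does not specify them.

```python
def encode_mvc(length, b1, d1, b2, d2):
    """Assemble a System/360 MVC (opcode 0xD2) SS-format instruction:
    opcode, length code (bytes moved minus one), then base/displacement
    pairs for the destination (operand 1) and the source (operand 2)."""
    assert 1 <= length <= 256 and 0 <= d1 <= 0xFFF and 0 <= d2 <= 0xFFF
    return bytes([
        0xD2,                              # MVC operation code
        length - 1,                        # length code: 256 bytes -> 0xFF
        (b1 << 4) | (d1 >> 8), d1 & 0xFF,  # destination base + displacement
        (b2 << 4) | (d2 >> 8), d2 & 0xFF,  # source base + displacement
    ])

# Hypothetical displacements: move 256 bytes from R7+0x100 to R8+0x200.
print(encode_mvc(256, 8, 0x200, 7, 0x100).hex())  # d2ff82007100
```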
The System/360 was designed to separate the system state from the problem state. This provided a basic level of security and recoverability from programming errors. Problem (user) programs could not modify data or program storage associated with the system state. Addressing, data, or operation exception errors made the machine enter the system state through a controlled routine so the operating system could try to correct or terminate the program in error. Similarly, it could recover certain processor hardware errors through the machine check routines.
Channels
Peripherals interfaced to the system via channels. A channel is a specialized processor with the instruction set optimized for transferring data between a peripheral and main memory. In modern terms, this could be compared to direct memory access (DMA). The S/360 connects channels to control units with bus and tag cables; IBM eventually replaced these with Enterprise Systems Connection (ESCON) and Fibre Connection (FICON) channels, but well after the S/360 era.
Byte-multiplexor and selector channels
There were initially two types of channels; byte-multiplexer channels (known at the time simply as "multiplexor channels"), for connecting "slow speed" devices such as card readers and punches, line printers, and communications controllers, and selector channels for connecting high speed devices, such as disk drives, tape drives, data cells and drums. Every System/360 (except for the Model 20, which was not a standard 360) has a byte-multiplexer channel and 1 or more selector channels, though the model 25 has just one channel, which can be either a byte-multiplexor or selector channel. The smaller models (up to the model 50) have integrated channels, while for the larger models (model 65 and above) the channels are large separate units in separate cabinets: the IBM 2870 is the byte-multiplexor channel with up to four selector sub-channels, and the IBM 2860 is up to three selector channels.
The byte-multiplexer channel is able to handle I/O to/from several devices simultaneously at the device's highest rated speeds, hence the name, as it multiplexed I/O from those devices onto a single data path to main memory. Devices connected to a byte-multiplexer channel are configured to operate in 1-byte, 2-byte, 4-byte, or "burst" mode. The larger "blocks" of data are used to handle progressively faster devices. For example, a 2501 card reader operating at 600 cards per minute would be in 1-byte mode, while a 1403-N1 printer would be in burst mode. Also, the byte-multiplexer channels on larger models have an optional selector subchannel section that would accommodate tape drives. The byte-multiplexor's channel address was typically "0" and the selector subchannel addresses were from "C0" to "FF." Thus, tape drives on System/360 were commonly addressed at 0C0–0C7. Other common byte-multiplexer addresses are: 00A: 2501 Card Reader, 00C/00D: 2540 Reader/Punch, 00E/00F: 1403-N1 Printers, 010–013: 3211 Printers, 020–0BF: 2701/2703 Telecommunications Units. These addresses are still commonly used in z/VM virtual machines.
System/360 models 40 and 50 have an integrated 1052-7 console that is usually addressed as 01F; however, it was not connected to the byte-multiplexer channel, but rather had a direct internal connection to the mainframe. The model 30 attached a different model of 1052 through a 1051 control unit. The models 60 through 75 also use the 1052–7.
Selector channels enabled I/O to high speed devices. These storage devices were attached to a control unit and then to the channel. The control unit let clusters of devices be attached to the channels. On higher speed models, multiple selector channels, which could operate simultaneously or in parallel, improved overall performance.
Control units are connected to the channels with "bus and tag" cable pairs. The bus cables carried the address and data information and the tag cables identified what data was on the bus. The general configuration of a channel is to connect the devices in a chain, like this: Mainframe—Control Unit X—Control Unit Y—Control Unit Z. Each control unit is assigned a "capture range" of addresses that it services. For example, control unit X might capture addresses 40–4F, control unit Y: C0–DF, and control unit Z: 80–9F. Capture ranges had to be a multiple of 8, 16, 32, 64, or 128 devices and be aligned on appropriate boundaries. Each control unit in turn has one or more devices attached to it. For example, you could have control unit Y with 6 disks, that would be addressed as C0-C5.
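A small sketch of the address-capture scheme just described; the control-unit table mirrors the X/Y/Z example above, and the helper name is an illustrative choice.

```python
# Hedged sketch: route a device address to the control unit whose
# capture range contains it, per the X/Y/Z example above.
capture_ranges = {
    "X": (0x40, 0x4F),   # control unit X services addresses 40-4F
    "Y": (0xC0, 0xDF),   # control unit Y services addresses C0-DF
    "Z": (0x80, 0x9F),   # control unit Z services addresses 80-9F
}

def control_unit_for(address):
    for name, (lo, hi) in capture_ranges.items():
        if lo <= address <= hi:
            return name
    return None  # no control unit captures this address

print(control_unit_for(0xC3))  # Y: e.g. one of the six disks C0-C5
```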
There are three general types of bus-and-tag cables produced by IBM. The first is the standard gray bus-and-tag cable, followed by the blue bus-and-tag cable, and finally the tan bus-and-tag cable. Generally, newer cable revisions are capable of higher speeds or longer distances, and some peripherals specified minimum cable revisions both upstream and downstream.
The cable ordering of the control units on the channel is also significant. Each control unit is "strapped" as High or Low priority. When a device selection was sent out on a mainframe's channel, the selection was sent from X->Y->Z->Y->X. If the control unit was "high" then the selection was checked in the outbound direction, if "low" then the inbound direction. Thus, control unit X was either 1st or 5th, Y was either 2nd or 4th, and Z was 3rd in line. It is also possible to have multiple channels attached to a control unit from the same or multiple mainframes, thus providing a rich high-performance, multiple-access, and backup capability.
Typically the total cable length of a channel is limited to 200 feet, less being preferred. Each control unit accounts for about 10 "feet" of the 200-foot limit.
Block multiplexer channel
IBM first introduced a new type of I/O channel on the Model 85 and Model 195, the 2880 block multiplexer channel, and then made them standard on the System/370. This channel allowed a device to suspend a channel program, pending the completion of an I/O operation and thus to free the channel for use by another device. A block multiplexer channel can support either standard 1.5 MB/s connections or, with the 2-byte interface feature, 3 MB/s; the latter use one tag cable and two bus cables. On the S/370 there is an option for a 3.0 MB/s data streaming channel with one bus cable and one tag cable.
The initial use for this was the 2305 fixed-head disk, which has 8 "exposures" (alias addresses) and rotational position sensing (RPS).
Block multiplexer channels can operate as a selector channel to allow compatible attachment of legacy subsystems.
Basic hardware components
Being uncertain of the reliability and availability of the then new monolithic integrated circuits, IBM chose instead to design and manufacture its own custom hybrid integrated circuits. These were built on 11 mm square ceramic substrates. Resistors were silk screened on and discrete glass encapsulated transistors and diodes were added. The substrate was then covered with a metal lid or encapsulated in plastic to create a "Solid Logic Technology" (SLT) module.
A number of these SLT modules were then flip chip mounted onto a small multi-layer printed circuit "SLT card". Each card had one or two sockets on one edge that plugged onto pins on one of the computer's "SLT boards" (also referred to as a backplane). This was the reverse of how most other company's cards were mounted, where the cards had pins or printed contact areas and plugged into sockets on the computer's boards.
Up to twenty SLT boards could be assembled side-by-side (vertically and horizontally, max 4 high by 5 wide) to form a "logic gate". Several gates mounted together constituted a box-shaped "logic frame". The outer gates were generally hinged along one vertical edge so they could be swung open to provide access to the fixed inner gates. The larger machines could have more than one frame bolted together to produce the final unit, such as a multi-frame Central Processing Unit (CPU).
Operating system software
The smaller System/360 models used the Basic Operating System/360 (BOS/360), Tape Operating System (TOS/360), or Disk Operating System/360 (DOS/360, which evolved into DOS/VS, DOS/VSE, VSE/AF, VSE/SP, VSE/ESA, and then z/VSE).
The larger models used Operating System/360 (OS/360). IBM developed several levels of OS/360, with increasingly powerful features: Primary Control Program (PCP), Multiprogramming with a Fixed number of Tasks (MFT), and Multiprogramming with a Variable number of Tasks (MVT). MVT took a long time to develop into a usable system, and the less ambitious MFT was widely used. PCP was used on intermediate machines too small to run MFT well, and on larger machines before MFT was available; the final releases of OS/360 included only MFT and MVT. For the System/370 and later machines, MFT evolved into OS/VS1, while MVT evolved into OS/VS2 (SVS) (Single Virtual Storage), then various versions of MVS (Multiple Virtual Storage) culminating in the current z/OS.
When it announced the Model 67 in August 1965, IBM also announced TSS/360 (Time-Sharing System) for delivery at the same time as the 67. TSS/360, a response to Multics, was an ambitious project that included many advanced features. It had performance problems, was delayed, canceled, reinstated, and finally canceled again in 1971. Customers migrated to CP-67, MTS (Michigan Terminal System), TSO (Time Sharing Option for OS/360), or one of several other time-sharing systems.
CP-67, the original virtual machine system, was also known as CP/CMS. CP/67 was developed outside the IBM mainstream at IBM's Cambridge Scientific Center, in cooperation with MIT researchers. CP/CMS eventually won wide acceptance, and led to the development of VM/370 (Virtual Machine) which had a primary interactive "sub" operating system known as VM/CMS (Conversational Monitoring System). This evolved into today's z/VM.
The Model 20 offered a simplified and rarely used tape-based system called TPS (Tape Processing System), and DPS (Disk Processing System) that provided support for the 2311 disk drive. TPS could run on a machine with 8 KB of memory; DPS required 12 KB, which was pretty hefty for a Model 20. Many customers ran quite happily with 4 KB and CPS (Card Processing System). With TPS and DPS, the card reader was used to read the Job Control Language cards that defined the stack of jobs to run and to read in transaction data such as customer payments. The operating system was held on tape or disk, and results could also be stored on the tapes or hard drives. Stacked job processing became an exciting possibility for the small but adventurous computer user.
A little-known and little-used suite of 80-column punched-card utility programs known as Basic Programming Support (BPS) (jocularly: Barely Programming Support), a precursor of TOS, was available for smaller systems.
Component names
IBM created a new naming system for the new components created for System/360, although well-known old names, like IBM 1403 and IBM 1052, were retained. In this new naming system, components were given four-digit numbers starting with 2. The second digit described the type of component, as follows:
Peripherals
IBM developed a new family of peripheral equipment for System/360, carrying over a few from its older 1400 series. Interfaces were standardized, allowing greater flexibility to mix and match processors, controllers and peripherals than in the earlier product lines.
In addition, System/360 computers could use certain peripherals that were originally developed for earlier computers. These earlier peripherals used a different numbering system, such as the IBM 1403 chain printer. The 1403, an extremely reliable device that had already earned a reputation as a workhorse, was sold as the 1403-N1 when adapted for the System/360.
Also available were the optical character recognition (OCR) readers IBM 1287 and IBM 1288, which could read alphanumeric (A/N) and numeric hand-printed (NHP/NHW) characters from media ranging from cashiers' rolls of tape to full legal-size pages. At the time this was done with very large optical/logic readers, as software recognition was still too slow and expensive.
Models 65 and below were sold with an IBM 1052–7 as the console typewriter. The 360/85 with feature 5450 used a display console that was not compatible with anything else in the line; the later 3066 console for the 370/165 and 370/168 used the same basic display design as the 360/85.
The IBM System/360 models 91 and 195 use a graphical display similar to the IBM 2250 as their primary console.
Additional operator consoles were also available. Certain high-end machines could optionally be purchased with a 2250 graphical display, costing upwards of US$100,000; smaller machines could use the less expensive 2260 display or later the 3270.
Direct access storage devices (DASD)
The first disk drives for System/360 were IBM 2302s and IBM 2311s. The first drum for System/360 was the IBM 7320.
The 156 kbit/s 2302 was based on the earlier 1302 and was available as a model 3 with two 112.79 MB modules or as a model 4 with four such modules.
The 2311, with a removable 1316 disk pack, was based on the IBM 1311 and had a theoretical capacity of 7.2 MB, although actual capacity varied with record design. (When used with a 360/20, the 1316 pack was formatted into fixed-length 270 byte sectors, giving a maximum capacity of 5.4MB.)
In 1966, the first 2314s shipped. This device had up to eight usable disk drives with an integral control unit; there were nine drives, but one was reserved as a spare. Each drive used a removable 2316 disk pack with a capacity of nearly 28 MB. The disk packs for the 2311 and 2314 were physically large by today's standards; e.g., the 1316 disk pack was about 14 inches (36 cm) in diameter and had six platters stacked on a central spindle. The top and bottom outside platters did not store data. Data were recorded on the inner sides of the top and bottom platters and both sides of the inner platters, providing 10 recording surfaces. The 10 read/write heads moved together across the surfaces of the platters, which were formatted with 203 concentric tracks. To reduce the amount of head movement (seeking), data was written in a virtual cylinder from the inside top platter down to the inside bottom platter. These disks were not usually formatted with fixed-sized sectors as are today's hard drives (though this was done with CP/CMS). Rather, most System/360 I/O software could customize the length of the data record (variable-length records), as was the case with magnetic tapes.
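As a back-of-the-envelope check on those capacities, the sketch below multiplies out the 1316 pack geometry; the 3,625 bytes-per-track figure for the 2311 is the commonly quoted value and should be treated as an assumption here.

```python
# Hedged capacity check for the 2311 drive with a 1316 disk pack.
BYTES_PER_TRACK = 3625     # commonly quoted 2311 figure (assumption)
SURFACES = 10              # recording surfaces per 1316 pack
TRACKS_PER_SURFACE = 203   # concentric tracks, including spares

capacity = BYTES_PER_TRACK * SURFACES * TRACKS_PER_SURFACE
print(f"{capacity / 1e6:.2f} MB")   # ~7.36 MB, near the quoted 7.2 MB
```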
Some of the most powerful early System/360s used high-speed head-per-track drum storage devices. The 3,500 RPM 2301, which replaced the 7320, was part of the original System/360 announcement, with a capacity of 4 MB. The 303.8 kbit/s IBM 2303 was announced on January 31, 1966, with a capacity of 3.913 MB. These were the only drums announced for System/360 and System/370, and their niche was later filled by fixed-head disks.
The 6,000 RPM 2305 appeared in 1970, with capacities of 5 MB (2305–1) or 11 MB (2305–2) per module. Although these devices did not have large capacity, their speed and transfer rates made them attractive for high-performance needs. A typical use was overlay linkage (e.g. for OS and application subroutines) for program sections written to alternate in the same memory regions. Fixed-head disks and drums were particularly effective as paging devices on the early virtual memory systems. The 2305, although often called a "drum" was actually a head-per-track disk device, with 12 recording surfaces and a data transfer rate up to 3 MB/s.
Rarely seen was the IBM 2321 Data Cell, a mechanically complex device that contained multiple magnetic strips to hold data; strips could be randomly accessed, placed upon a cylinder-shaped drum for read/write operations; then returned to an internal storage cartridge. The IBM Data Cell [noodle picker] was among several IBM trademarked "speedy" mass online direct-access storage peripherals (reincarnated in recent years as "virtual tape" and automated tape librarian peripherals). The 2321 file had a capacity of 400 MB, at the time when the 2311 disk drive only had 7.2 MB. The IBM Data Cell was proposed to fill cost/capacity/speed gap between magnetic tapes—which had high capacity with relatively low cost per stored byte—and disks, which had higher expense per byte. Some installations also found the electromechanical operation less dependable and opted for less mechanical forms of direct-access storage.
The Model 44 was unique in offering an integrated single-disk drive as a standard feature. This drive used the 2315 "ramkit" cartridge and provided 1,171,200 bytes of storage.
Tape drives
The 2400-series of 1/2" magnetic tape units consisted of the 2401 and 2402 Models 1-6 Magnetic Tape Units, the 2403 Models 1-6 Magnetic Tape Unit and Control, the 2404 Models 1-3 Magnetic Tape Unit and Control, and the 2803/2804 Models 1 and 2 Tape Control Units. The later 2415 Magnetic Tape Unit and Control, introduced in 1967 contained two, four, or six tape drives and a control in a single unit, and was slower and cheaper. The 2415 drives and control were not marketed separately. With System/360, IBM switched from IBM 7-track to 9-track tape format. Some 2400-series drives could be purchased that read and wrote 7-track tapes for compatibility with the older IBM 729 tape drives. In 1968, the IBM 2420 tape system was released, offering much higher data rates, self-threading tape operation and 1600bpi packing density. It remained in the product line until 1979.
Unit record devices
Punched card devices included the 2501 card reader and the 2540 card reader punch. Virtually every System/360 had a 2540. The 2560 MFCM ("Multi-Function Card Machine") reader/sorter/punch, listed above, was for the Model 20 only. It was notorious for reliability problems (earning humorous acronyms often involving "...Card Muncher" or "Mal-Function Card Machine").
Line printers were the IBM 1403 and the slower IBM 1443.
A paper tape reader, the IBM 2671, was introduced in 1964. It had a rated speed of 1,000 cps. There were also a paper tape reader and a paper tape punch from an earlier era, available only as RPQs (Request Price Quotation): the 1054 (reader) and 1055 (punch), which were carried forward (like the 1052 console typewriter) from the IBM 1050 Teleprocessing System. All these devices operated at a maximum of 15.5 characters per second. The paper tape punch from the IBM 1080 System was also available by RPQ, but at a prohibitively expensive price.
Optical character recognition (OCR) devices 1287 and later the 1288 were available on the 360's. The 1287 could read handwritten numerals, some OCR fonts, and cash register OCR paper tape reels. The 1288 'page reader' could handle up to legal size OCR font typewritten pages, as well as handwritten numerals. Both of these OCR devices employed a 'flying spot' scanning principle, with the raster scan provided by a large CRT, and the reflected light density changes were picked up by a high gain photomultiplier tube.
Magnetic ink character recognition (MICR) was provided by the IBM 1412 and 1419 cheque sorters, with magnetic ink printing (for cheque books) on 1445 printers (a modified 1443 that used an MICR ribbon). 1412/1419 and 1445 were mainly used by banking institutions.
Remaining machines
Despite having been sold or leased in very large numbers for a mainframe system of its era, only a few System/360 computers remain, mainly as non-operating property of museums or collectors. Examples of existing systems include:
The Computer History Museum in Mountain View, California, has a non-working Model 30 on display, as do the Museum of Transport and Technology in Auckland, New Zealand, and the Vienna University of Technology in Austria.
The University of Western Australia Computer Club has a complete Model 40 in storage.
The KCG Computer Museum of Kyoto Computer Gakuin, Japan's first computer school in town, has an IBM System/360 Model 40 on display.
Two Model 20 processors along with numerous peripherals (forming at least one complete system) located in Nürnberg, Germany were purchased on eBay in April/May 2019 for €3710 by two UK enthusiasts who, over the course of some months, moved the machine to Creslow Park in Buckinghamshire, United Kingdom. The system was in a small, abandoned building left untouched for decades, and apparently had been used in that building since all peripherals were still fully wired and interconnected. As of September 2024 the systems have been moved on a long-term loan basis to the System Source Computer Museum in Hunt Valley, Maryland, USA for display and restoration.
The Living Computers: Museum + Labs has a 360 model 30.
A running list of remaining System/360s that are more than just 'front panels' can be found at World Inventory of remaining System/360 CPUs.
Gallery
This gallery shows the operator's console, with register value lamps, toggle switches (middle of pictures), and "emergency pull" switch (upper right of pictures) of the various models.
See also
IBM System/360 architecture
IBM 9020
History of IBM
List of IBM products
IBM System/4 Pi
Gerrit Blaauw
Bob O. Evans
Notes
References
External links
IBM System/360 System Summary 11th edition August 1969
Generations of the IBM 360/370/3090/390 by Lars Poulsen with multiple links and references
Description of a large IBM System/360 model 75 installation at JPL
Illustrations from “Introduction to IBM Data Processing Systems”, 1968: contains photographs of IBM System/360 computers and peripherals
IBM System 360 RPG Debugging Template and Keypunch Card
Video of a two-hour lecture and panel discussion entitled The IBM System/360 Revolution, from the Computer History Museum on 2004-04-07
Original vintage film from 1964 IBM System/360 Computer History Archives Project
Several photos of a dual processor IBM 360/67 at the University of Michigan's academic Computing Center in the late 1960s or early 1970s are included in Dave Mills' article describing the Michigan Terminal System (MTS)
Pictures of an IBM System/360 Model 67 at Newcastle (UK) University
From the IBM Journal of Research and Development
From IBM Systems Journal
Computing platforms
1960s software
Computer-related introductions in 1964
32-bit computers
In physics, a pregeometry is a hypothetical structure from which the geometry of the universe develops. Some cosmological models feature a pregeometric universe before the Big Bang. The term was championed by John Archibald Wheeler in the 1960s and 1970s as a possible route to a theory of quantum gravity. Since quantum mechanics allowed a metric to fluctuate, it was argued that the merging of gravity with quantum mechanics required a set of more fundamental rules regarding connectivity that were independent of topology and dimensionality. Whereas geometry could describe the properties of a known surface, or the physics of a hypothetical region with predefined properties, "pregeometry" might allow one to work with deeper underlying rules of physics that were not so strongly dependent on simplified classical assumptions about the properties of space.
No single proposal for pregeometry has gained wide consensus support in the physics community. Some notions related to pregeometry predate Wheeler, other notions depart considerably from his outline of pregeometry but are still associated with it. A 2006 paper provided a survey and critique of pregeometry or near-pregeometry proposals up to that time. A summary of these is given below:
Discrete spacetime by Hill A proposal anticipating Wheeler's pregeometry, though assuming some geometric notions embedded in quantum mechanics and special relativity. A subgroup of Lorentz transformations with only rational coefficients is deployed. Energy and momentum variables are restricted to a certain set of rational numbers. Quantum wave functions work out to be a special case of semi-periodic functions, though the nature of the wave functions is ambiguous, since the energy-momentum space cannot be uniquely interpreted.
Discrete-space structure by Dadić and Pisk Spacetime as an unlabeled graph whose topological structure entirely characterizes the graph. Spatial points are related to vertices. Operators define the creation or annihilation of lines which develop into a Fock space framework. This discrete-space structure assumes the metric of spacetime and assumes composite geometric objects so it is not a pregeometric scheme in line with Wheeler's original conception of pregeometry.
Pregeometric graph by Wilson Spacetime is described by a generalized graph consisting of a very large or infinite set of vertices paired with a very large or infinite set of edges. From that graph emerge various constructions such as vertices with multiple edges, loops, and directed edges. These in turn support formulations of the metrical foundation of space-time.
Number theory pregeometry by Volovich Spacetime as a non-Archimedean geometry over a field of rational numbers and a finite Galois field where rational numbers themselves undergo quantum fluctuations.
Causal sets by Bombelli, Lee, Meyer and Sorkin All of spacetime at very small scales is a causal set consisting of a locally finite set of elements with a partial order linked to the notion of past and future in macroscopic spacetime and causality between point-events. Derived from the causal order are the differential structure and the conformal metric of a manifold. A probability is assigned to a causal set becoming embedded in a manifold; thus there can be a transition from a discrete Planck-scale fundamental unit of volume to a classical large-scale continuous space. (A minimal code sketch of a causal set follows after this list.)
Random graphs by Antonsen Spacetime is described by dynamical graphs with points (associated with vertices) and links (of unit length) that are created or annihilated according to probability calculations. The parameterization of graphs in a metaspace gives rise to time.
Bootstrap universe by Cahill and Klinger An iterative map composed of monads and the relations between them becomes a tree-graph of nodes and links. A definition of distance between any two monads is defined and from this and probabilistic mathematical tools emerges a three-dimensional space.
Axiomatic pregeometry by Perez-Bergliaffa, Romero and Vucetich An assortment of ontological presuppositions describes spacetime as a result of relations between objectively existing entities. From these presuppositions emerge the topology and metric of Minkowski spacetime.
Cellular networks by Requardt Space is described by a graph with densely entangled sub-clusters of nodes (with differential states) and bonds (either vanishing at 0 or directed at 1). Rules describe the evolution of the graph from a chaotic patternless pre-Big Bang condition to a stable spacetime in the present. Time emerges from a deeper external-parameter "clock-time" and the graphs lead to a natural metrical structure.
Simplicial quantum gravity by Lehto, Nielsen and Ninomiya Spacetime is described as having a deeper pregeometric structure based on three dynamical variables, vertices of an abstract simplicial complex, and a real-valued field associated with every pair of vertices; the abstract simplicial complex is set to correspond with a geometric simplicial complex and then geometric simplices are stitched together into a piecewise linear space. Developed further, triangulation, link distance, a piecewise linear manifold, and a spacetime metric arise. Further, a lattice quantization is formulated resulting in a quantum gravity description of spacetime.
Quantum automaton universe by Jaroszkiewicz and Eakins Event states (elementary or entangled) are provided topological relationships via tests (Hermitian operators) endowing the event states with evolution, irreversible acquisition of information, and a quantum arrow of time. Information content in various ages of the universe modifies the tests so the universe acts as an automaton, modifying its structure. Causal set theory is then worked out within this quantum automaton framework to describe a spacetime that inherits the assumptions of geometry within standard quantum mechanics.
Rational-number spacetime by Horzela, Kapuścik, Kempczyński and Uzes A preliminary investigation into how all events might be mapped with rational number coordinates and how this might help to better understand a discrete spacetime framework.
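The causal set proposal above lends itself to a compact illustration in code. The following is a minimal sketch under stated assumptions, not an implementation from the cited papers: a causal set is stored as a transitively closed set of precedence relations between labelled elements, and the order interval between two elements (whose finiteness is what local finiteness requires) can be queried. The names CausalSet and order_interval are invented for this example.

```python
from itertools import product

class CausalSet:
    """Toy causal set: a finite set of elements with a partial order,
    stored as a transitively closed set of (earlier, later) pairs."""

    def __init__(self, elements, relations):
        self.elements = set(elements)
        self.rel = set(relations)   # (a, b) means a causally precedes b
        self._transitive_closure()

    def _transitive_closure(self):
        # Repeatedly add implied relations until nothing changes.
        changed = True
        while changed:
            changed = False
            for (a, b), (c, d) in product(tuple(self.rel), repeat=2):
                if b == c and (a, d) not in self.rel:
                    self.rel.add((a, d))
                    changed = True

    def precedes(self, a, b):
        return (a, b) in self.rel

    def order_interval(self, a, b):
        # Elements causally between a and b; local finiteness demands
        # that every such interval be finite.
        return {x for x in self.elements
                if self.precedes(a, x) and self.precedes(x, b)}

# A four-element "diamond": p precedes q and r, which both precede s.
c = CausalSet({"p", "q", "r", "s"},
              {("p", "q"), ("p", "r"), ("q", "s"), ("r", "s")})
print(c.precedes("p", "s"))         # True, by transitivity
print(c.order_interval("p", "s"))   # {'q', 'r'}
```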
Further reading
Some additional or related pregeometry proposals are:
Akama, Keiichi. "An Attempt at Pregeometry: Gravity with Composite Metric"
Requardt, Manfred; Roy, Sisir. "(Quantum) Space-Time as a Statistical Geometry of Fuzzy Lumps and the Connection with Random Metric Spaces"
Sidoni, Lorenzo. "Horizon thermodynamics in pregeometry"
References
Misner, Thorne, and Wheeler ("MTW"), Gravitation (1971) §44.4 "Not geometry, but pregeometry as the magic building material", §44.5 "Pregeometry as the calculus of propositions"
Mathematical physics
Quantum gravity
Decomposition in computer science, also known as factoring, is breaking a complex problem or system into parts that are easier to conceive, understand, program, and maintain.
Overview
Different types of decomposition are defined in computer sciences:
In structured programming, algorithmic decomposition breaks a process down into well-defined steps.
Structured analysis breaks down a software system from the system context level to system functions and data entities, as described by Tom DeMarco in Structured Analysis and System Specification (Yourdon, 1978).
Object-oriented decomposition breaks a large system down into progressively smaller classes or objects that are responsible for part of the problem domain.
According to Booch, algorithmic decomposition is a necessary part of object-oriented analysis and design, but object-oriented systems start with and emphasize decomposition into objects.
More generally, functional decomposition in computer science is a technique for mastering the complexity of the function of a model. A functional model of a system is thereby replaced by a series of functional models of subsystems.
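As a rough illustration of the distinction above (sketched with an invented payroll example; the function and class names are not drawn from the cited texts), the same task can be decomposed algorithmically into a sequence of steps, or in the object-oriented style into classes that each own part of the problem domain:

```python
# Algorithmic decomposition: a payroll run as a sequence of functions.
def read_timesheets(path):
    with open(path) as f:
        return [line.split(",") for line in f]

def compute_pay(hours, rate):
    return hours * rate

def run_payroll(path, rate):
    return [compute_pay(float(h), rate) for _, h in read_timesheets(path)]

# Object-oriented decomposition: the same domain split into classes,
# each responsible for one part of the problem.
class Employee:
    def __init__(self, name, rate):
        self.name, self.rate = name, rate

    def pay_for(self, hours):
        return hours * self.rate

class Payroll:
    def __init__(self, employees):
        self.employees = employees

    def run(self, hours_worked):
        return {e.name: e.pay_for(hours_worked[e.name])
                for e in self.employees}
```

In the first style the structure lives in the sequence of steps; in the second it lives in the assignment of responsibilities to objects.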
Decomposition topics
Decomposition paradigm
A decomposition paradigm in computer programming is a strategy for organizing a program as a number of parts, and usually implies a specific way to organize a program text. Typically the aim of using a decomposition paradigm is to optimize some metric related to program complexity, for example a program's modularity or its maintainability.
Most decomposition paradigms suggest breaking down a program into parts to minimize the static dependencies between those parts, and to maximize each part's cohesiveness. Popular decomposition paradigms include the procedural, modules, abstract data type, and object oriented paradigms.
Though the concept of decomposition paradigm is entirely distinct from that of model of computation, they are often confused. For example, the functional model of computation is often confused with procedural decomposition, and the actor model of computation is often confused with object oriented decomposition.
Decomposition diagram
A decomposition diagram shows a complex, process, organization, data subject area, or other type of object broken down into lower level, more detailed components. For example, decomposition diagrams may represent organizational structure or functional decomposition into processes. Decomposition diagrams provide a logical hierarchical decomposition of a system.
See also
Code refactoring
Component-based software engineering
Dynamization
Duplicate code
Event partitioning
How to Solve It
Integrated Enterprise Modeling
Personal information management
Readability
Subroutine
References
External links
Object Oriented Analysis and Design
On the Criteria To Be Used in Decomposing Systems into Modules
Software design
Decomposition methods
Extreme ultraviolet lithography (EUVL, also known simply as EUV) is a technology used in the semiconductor industry for manufacturing integrated circuits (ICs). It is a type of photolithography that uses 13.5 nm extreme ultraviolet (EUV) light from a laser-pulsed tin (Sn) plasma to create intricate patterns on semiconductor substrates.
ASML Holding is currently the only company that produces and sells EUV systems for chip production, targeting the 5 nanometer (nm) and 3 nm process nodes.
The EUV wavelengths used in EUVL are near 13.5 nanometers (nm): light from a laser-pulsed tin (Sn) droplet plasma is directed onto a reflective photomask, which produces the pattern exposed onto a substrate covered by photoresist. Tin ions in the ionic states Sn IX to Sn XIV give photon emission spectral peaks around 13.5 nm from 4p⁶4dⁿ – (4p⁵4dⁿ⁺¹ + 4dⁿ⁻¹4f) ionic state transitions.
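For reference, the 13.5 nm photon energy follows directly from the Planck relation:

```latex
E = \frac{hc}{\lambda}
  = \frac{(6.626\times 10^{-34}\,\mathrm{J\,s})\,(2.998\times 10^{8}\,\mathrm{m\,s^{-1}})}{13.5\times 10^{-9}\,\mathrm{m}}
  \approx 1.47\times 10^{-17}\,\mathrm{J} \approx 92\,\mathrm{eV}
```

This ~92 eV value recurs below when converting between EUV doses and electron doses.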
History and economic impact
In the 1960s, visible light was used for the production of integrated circuits, with wavelengths as small as 435 nm (mercury "g line").
Later, ultraviolet (UV) light was used, at first with a wavelength of 365 nm (mercury "i line"), then with excimer wavelengths, first of 248 nm (krypton fluoride laser), then 193 nm (argon fluoride laser), which was called deep UV.
The next step, going even smaller, was called extreme UV, or EUV. The EUV technology was considered impossible by many.
EUV light is absorbed by glass and air, so instead of using lenses to focus the beams of light as done previously, mirrors in vacuum would be needed. Reliable production of EUV light was also problematic. The leading stepper producers Canon and Nikon eventually stopped development, and some predicted the end of Moore's law.
In 1991, scientists at Bell Labs published a paper demonstrating the possibility of using a wavelength of 13.8 nm for the so-called soft X-ray projection lithography.
To address the challenge of EUV lithography, researchers at Lawrence Livermore National Laboratory, Lawrence Berkeley National Laboratory, and Sandia National Laboratories were funded in the 1990s to perform basic research into the technical obstacles. The results of this successful effort were disseminated via a public/private partnership Cooperative R&D Agreement (CRADA) with the invention and rights wholly owned by the US government, but licensed and distributed under approval by DOE and Congress. The CRADA consisted of a consortium of private companies and the Labs, manifested as an entity called the Extreme Ultraviolet Limited Liability Company (EUV LLC).
Intel, Canon, and Nikon (leaders in the field at the time), as well as the Dutch company ASML and Silicon Valley Group (SVG), all sought licensing. Congress denied the Japanese companies the necessary permission, as they were perceived as strong technical competitors at the time and should not benefit from taxpayer-funded research at the expense of American companies. In 2001 SVG was acquired by ASML, leaving ASML as the sole beneficiary of the critical technology.
By 2018, ASML succeeded in deploying the intellectual property from the EUV-LLC after several decades of developmental research, incorporating the European-funded EUCLIDES (Extreme UV Concept Lithography Development System) programme and working with its long-standing partner, the German optics manufacturer ZEISS, and the synchrotron light source supplier Oxford Instruments. This led MIT Technology Review to name it "the machine that saved Moore's law". The first prototype in 2006 produced one wafer in 23 hours. As of 2022, a scanner produces up to 200 wafers per hour. The scanner uses Zeiss optics, which that company calls "the most precise mirrors in the world", produced by locating imperfections and then knocking off individual molecules with techniques such as ion beam figuring.
This made the once small company ASML the world leader in the production of scanners and a monopolist in this cutting-edge technology, and resulted in a record turnover of 27.4 billion euros in 2021, dwarfing its competitors Canon and Nikon, who were denied IP access. Because it is such a key technology for development in many fields, the United States, as licensor, pressured Dutch authorities not to sell these machines to China. ASML has followed the guidelines of Dutch export controls and, until further notice, has no authority to ship the machines to China.
Along with multiple patterning, EUV has paved the way for higher transistor densities, allowing the production of higher-performance processors. Smaller transistors also require less power to operate, resulting in more energy-efficient electronics.
Market growth projection
According to a report by Pragma Market Research, the global extreme ultraviolet (EUV) lithography market is projected to grow from US$8,957.8 million in 2024 to US$17,350 million by 2030, at a compound annual growth rate (CAGR) of 11.7%. This significant growth reflects the rising demand for miniaturized electronics in various sectors, including smartphones, artificial intelligence, and high-performance computing.
Fab tool output
Requirements for EUV steppers, given the number of layers in the design that require EUV, the number of machines, and the desired throughput of the fab, assuming 24 hours per day operation.
Masks
EUV photomasks work by reflecting light, which is achieved by using multiple alternating layers of molybdenum and silicon. This is in contrast to conventional photomasks which work by blocking light using a single chromium layer on a quartz substrate. An EUV mask consists of 40–50 alternating silicon and molybdenum layers; this is a multilayer which acts to reflect the extreme ultraviolet light through Bragg diffraction; the reflectance is a strong function of incident angle and wavelength, with longer wavelengths reflecting more near normal incidence and shorter wavelengths reflecting more away from normal incidence. The multilayer may be protected by a thin ruthenium layer, called a capping layer. The pattern is defined in a tantalum-based absorbing layer over the capping layer.
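The connection between the 13.5 nm wavelength and the multilayer period can be sketched with the first-order Bragg condition for a periodic stack; this toy calculation ignores refraction inside the layers, so it slightly underestimates the roughly 6.9 nm Mo/Si period used in practice.

```python
import math

# First-order Bragg condition for a periodic multilayer,
# m * wavelength = 2 * d * cos(theta), with theta measured from the
# surface normal. Refraction inside the layers is neglected.

def bilayer_period_nm(wavelength_nm, angle_deg, order=1):
    return order * wavelength_nm / (2 * math.cos(math.radians(angle_deg)))

print(bilayer_period_nm(13.5, 0))   # ~6.75 nm at normal incidence
print(bilayer_period_nm(13.5, 6))   # ~6.79 nm at the 6 degree chief ray angle
```

The strong angle and wavelength dependence of this condition is what drives the shadowing and apodization effects discussed later in this article.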
Blank photomasks are mainly made by two companies: AGC Inc. and Hoya Corporation. Ion-beam deposition equipment mainly made by Veeco is often used to deposit the multilayer. A blank photomask is covered with photoresist, which is then baked (solidified) in an oven, and later the pattern is defined on the photoresist using maskless lithography with an electron beam. This step is called exposure. The exposed photoresist is developed (removed), and the unprotected areas are etched. The remaining photoresist is then removed. Masks are then inspected and later repaired using an electron beam. Etching must be done only in the absorbing layer and thus there is a need to distinguish between the capping and the absorbing layer, which is known as etch selectivity and is unlike etching in conventional photomasks, which only have one layer critical to their function.
Tool
An EUV tool (EUV photolithography machine) has a laser-driven tin (Sn) plasma light source and reflective optics comprising multilayer mirrors, contained within a hydrogen gas ambient. The hydrogen is used to keep the EUV collector mirror (the first mirror, which collects EUV emitted over a large angular range (~2π sr) from the Sn plasma) in the source free of Sn deposition. Specifically, the hydrogen buffer gas in the EUV source chamber or vessel decelerates or possibly pushes back Sn ions and Sn debris traveling toward the EUV collector (collector protection) and enables the chemical reaction Sn(s) + 4 H(g) → SnH4(g), which removes Sn deposited on the collector in the form of SnH4 gas (collector reflectivity restoration).
EUVL is a significant departure from the deep-ultraviolet lithography standard. All matter absorbs EUV radiation; hence, EUV lithography requires a vacuum. All optical elements, including the photomask, must use defect-free molybdenum/silicon (Mo/Si) multilayers (consisting of 50 Mo/Si bilayers, whose theoretical reflectivity limit at 13.5 nm is ~75%) that act to reflect light by means of interlayer wave interference; any one of these mirrors absorbs around 30% of the incident light, so mirror temperature control is important.
EUVL systems of the 2002–2009 era contained at least two condenser multilayer mirrors, six projection multilayer mirrors, and a multilayer object (mask). Since the mirrors absorb 96% of the EUV light, the ideal EUV source needs to be much brighter than its predecessors. EUV source development has focused on plasmas generated by laser or discharge pulses. The mirror responsible for collecting the light is directly exposed to the plasma and is vulnerable to damage from high-energy ions and other debris such as tin droplets, which require the costly collector mirror to be replaced every year.
Resource requirements
The required utility resources are significantly larger for EUV compared to 193 nm immersion, even with two exposures using the latter. At the 2009 EUV Symposium, Hynix reported that the wall plug efficiency was ~0.02% for EUV, i.e., to get 200 watts at intermediate focus for 100 wafers per hour, one would require 1 megawatt of input power, compared to 165 kilowatts for an ArF immersion scanner, and that even at the same throughput, the footprint of the EUV scanner was ~3× the footprint of an ArF immersion scanner, resulting in productivity loss. Additionally, to confine ion debris, a superconducting magnet may be required.
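The 1 megawatt figure quoted above follows directly from the reported wall-plug efficiency:

```latex
P_{\mathrm{input}} = \frac{P_{\mathrm{IF}}}{\eta}
                   = \frac{200\ \mathrm{W}}{2\times 10^{-4}}
                   = 1\ \mathrm{MW}
```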
A typical EUV tool weighs nearly 200 tons and costs around 180 million USD.
EUV tools consume at least 10× more energy than immersion tools.
Summary of key features
The following table summarizes key differences between EUV systems in development and ArF immersion systems which are widely used in production today:
The different degrees of resolution among the 0.33 NA tools are due to the different illumination options. Despite the potential of the optics to reach sub-20 nm resolution, secondary electrons in resist practically limit the resolution to around 20 nm (more on this below).
Light source power, throughput, and uptime
Neutral atoms or condensed matter cannot emit EUV radiation. Ionization must precede EUV emission in matter. The thermal production of multicharged positive ions is only possible in a hot dense plasma, which itself strongly absorbs EUV. As of 2025, the established EUV light source is a laser-pulsed tin plasma. The ions absorb the EUV light they emit and are easily neutralized by electrons in the plasma to lower charge states, which produce light mainly at other, unusable wavelengths, resulting in a much reduced efficiency of light generation for lithography at higher plasma power density.
The throughput is tied to the source power, divided by the dose. A higher dose requires a slower stage motion (lower throughput) if pulse power cannot be increased.
EUV collector reflectivity degrades ~0.1–0.3% per billion 50 kHz pulses (~10% in ~2 weeks), leading to loss of uptime and throughput, while even for the first few billion pulses (within one day), there is still 20% (±10%) fluctuation. This could be due to the accumulating Sn residue mentioned above which is not completely cleaned off. On the other hand, conventional immersion lithography tools for double-patterning provide consistent output for up to a year.
Recently, the NXE:3400B illuminator features a smaller pupil fill ratio (PFR) down to 20% without transmission loss. PFR is maximized and greater than 0.2 around a metal pitch of 45 nm.
Due to the use of EUV mirrors which also absorb EUV light, only a small fraction of the source light is finally available at the wafer. There are 4 mirrors used for the illumination optics and 6 mirrors for the projection optics. The EUV mask or reticle is itself an additional mirror. With 11 reflections, only ~2% of the EUV source light is available at the wafer.
The throughput is determined by the EUV resist dose, which in turn depends on the required resolution. A dose of 40 mJ/cm2 is expected to be maintained for adequate throughput.
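The relation between source power, dose, and throughput can be captured in a toy model. The figures below are illustrative values taken from this article (a 250 W source, roughly 2% optical transmission from eleven reflections at about 70% each, and a 40 mJ/cm2 resist dose); the 15 s per-wafer overhead is an invented placeholder, and real scanners expose field by field with further non-exposure time, so this is a scaling sketch rather than a production figure.

```python
import math

# Toy EUV throughput model: exposure time per wafer is the dose energy
# divided by the optical power reaching the wafer; throughput falls as
# dose rises or source power drops.

def wafers_per_hour(source_w, dose_mj_cm2, wafer_diameter_mm=300,
                    transmission=0.7 ** 11, overhead_s=15.0):
    wafer_area_cm2 = math.pi * (wafer_diameter_mm / 20.0) ** 2
    energy_per_wafer_j = dose_mj_cm2 * 1e-3 * wafer_area_cm2
    power_at_wafer_w = source_w * transmission
    exposure_s = energy_per_wafer_j / power_at_wafer_w
    return 3600.0 / (exposure_s + overhead_s)

print(round(0.7 ** 11, 3))               # ~0.02: eleven ~70% reflections
print(round(wafers_per_hour(250, 40)))   # illustrative estimate, ~170 wph
```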
Tool uptime
The EUV light source limits tool uptime besides throughput. In a two-week period, for example, over seven hours downtime may be scheduled, while total actual downtime including unscheduled issues could easily exceed a day. A dose error over 2% warrants tool downtime.
The wafer exposure throughput steadily expanded up to around 1,000 wafers per day (per system) over the 2019–2022 period, indicating substantial idle time, while at the same time running more than 120 wafers per day, on average, across a number of multipatterned EUV layers.
Comparison to other lithography light sources
EUV (10–121 nm) is the band longer than X-rays (0.1–10 nm) and shorter than the hydrogen Lyman-alpha line.
While state-of-the-art 193 nm ArF excimer lasers offer intensities of 200 W/cm2, lasers for producing EUV-generating plasmas need to be much more intense, on the order of 10^11 W/cm2. A state-of-the-art ArF immersion lithography 120 W light source requires no more than 40 kW of electrical power, while EUV sources are targeted to exceed 40 kW.
The optical power target for EUV lithography is at least 250 W, while for other conventional lithography sources, it is much less. For example, immersion lithography light sources target 90 W, dry ArF sources 45 W, and KrF sources 40 W. High-NA EUV sources are expected to require at least 500 W.
EUV-specific optical issues
Reflective optics
A fundamental aspect of EUVL tools, resulting from the use of reflective optics, is the off-axis illumination (at an angle of 6°, in different direction at different positions within the illumination slit) on a multilayer mask (reticle). This leads to shadowing effects resulting in asymmetry in the diffraction pattern that degrade pattern fidelity in various ways as described below. For example, one side (behind the shadow) would appear brighter than the other (within the shadow).
The behavior of light rays within the plane of reflection (affecting horizontal lines) is different from the behavior of light rays out of the plane of reflection (affecting vertical lines). Most conspicuously, identically sized horizontal and vertical lines on the EUV mask are printed at different sizes on the wafer.
The combination of the off-axis asymmetry and the mask shadowing effect leads to a fundamental inability of two identical features even in close proximity to be in focus simultaneously. One of EUVL's key issues is the asymmetry between the top and bottom line of a pair of horizontal lines (the so-called "two-bar"). Some ways to partly compensate are the use of assist features as well as asymmetric illumination.
An extension of the two-bar case to a grating consisting of many horizontal lines shows similar sensitivity to defocus. It is manifest in the critical dimension (CD) difference between the top and bottom edge lines of the set of 11 horizontal lines.
Polarization by reflection also leads to partial polarization of EUV light, which favors imaging of lines perpendicular to the plane of the reflections.
Pattern shift from defocus (non-telecentricity)
The EUV mask absorber, due to partial transmission, generates a phase difference between the 0th and 1st diffraction orders of a line-space pattern, resulting in image shifts (at a given illumination angle) as well as changes in peak intensity (leading to linewidth changes) which are further enhanced due to defocus. Ultimately, this results in different positions of best focus for different pitches and different illumination angles. Generally, the image shift is balanced out due to illumination source points being paired (each on opposite sides of the optical axis). However, the separate images are superposed and the resulting image contrast is degraded when the individual source image shifts are large enough. The phase difference ultimately also determines the best focus position.
The multilayer is also responsible for image shifting due to phase shifts from diffracted light within the multilayer itself. This is inevitable due to light passing twice through the mask pattern.
The use of reflection causes wafer exposure position to be extremely sensitive to the reticle flatness and the reticle clamp. Reticle clamp cleanliness is therefore required to be maintained. Small (milliradian-scale) deviations in the local slope of the mask produce image shifts when coupled with wafer defocus. More significantly, mask defocus has been found to result in large overlay errors. In particular, for a 10 nm node metal 1 layer (including 48 nm, 64 nm, 70 nm pitches, isolated, and power lines), the uncorrectable pattern placement error was 1 nm for a 40 nm mask z-position shift. This is a global pattern shift of the layer with respect to previously defined layers. However, features at different locations will also shift differently due to different local deviations from mask flatness, e.g., from defects buried under the multilayer. It can be estimated that the contribution of mask non-flatness to overlay error is roughly 1/40 times the peak-to-valley thickness variation. With the blank peak-to-valley spec of 50 nm, ~1.25 nm image placement error is possible. Blank thickness variations up to 80 nm also contribute, which lead to up to 2 nm image shift.
The off-axis illumination of the reticle is also the cause of non-telecentricity in wafer defocus, which consumes most of the 1.4 nm overlay budget of the NXE:3400 EUV scanner even for design rules as loose as 100 nm pitch. The worst uncorrectable pattern placement error for a 24 nm line was about 1.1 nm, relative to an adjacent 72 nm power line, per 80 nm wafer focus position shift at a single slit position; when across-slit performance is included, the worst error is over 1.5 nm in the wafer defocus window. In 2017, an actinic microscope mimicking a 0.33 NA EUV lithography system with 0.2/0.9 quasar 45 illumination showed that an 80 nm pitch contact array shifted −0.6 to 1.0 nm while a 56 nm pitch contact array shifted −1.7 to 1.0 nm relative to a horizontal reference line, within a ±50 nm defocus window.
Wafer defocus also leads to image placement errors due to deviations from local mask flatness. If the local slope is indicated by an angle α, the image is projected to be shifted in a 4× projection tool by approximately $4\alpha \times \mathrm{DOF}$, where DOF is the depth of focus. For a depth of focus of 100 nm, a small local deviation from flatness of 2.5 mrad (0.14°) can lead to a pattern shift of 1 nm.
Simulations as well as experiments have shown that pupil imbalances in EUV lithography can result in pitch-dependent pattern placement errors. Since the pupil imbalance changes with EUV collector mirror aging or contamination, such placement errors may not be stable over time. The situation is specifically challenging for logic devices, where multiple pitches have critical requirements at the same time. The issue is ideally addressed by multiple exposures with tailored illuminations.
Slit position dependence
The direction of illumination is also highly dependent on slit position, essentially rotated azimuthally. Nanya Technology and Synopsys found that horizontal vs. vertical bias changed across slit with dipole illumination. The rotating plane of incidence (azimuthal range within −25° to 25°) is confirmed in the SHARP actinic review microscope at CXRO which mimics the optics for EUV projection lithography systems. The reason for this is a mirror is used to transform straight rectangular fields into arc-shaped fields. In order to preserve a fixed plane of incidence, the reflection from the previous mirror would be from a different angle with the surface for a different slit position; this causes non-uniformity of reflectivity. To preserve uniformity, rotational symmetry with a rotating plane of incidence is used. More generally, so-called "ring-field" systems reduce aberrations by relying on the rotational symmetry of an arc-shaped field derived from an off-axis annulus. This is preferred, as reflective systems must use off-axis paths, which aggravate aberrations. Hence identical die patterns within different halves of the arc-shaped slit would require different OPC. This renders them uninspectable by die-to-die comparison, as they are no longer truly identical dies. For pitches requiring dipole, quadrupole, or hexapole illumination, the rotation also causes mismatch with the same pattern layout at a different slit position, i.e., edge vs. center. Even with annular or circular illumination, the rotational symmetry is destroyed by the angle-dependent multilayer reflectance described above. Although the azimuthal angle range is about ±20° (field data indicated over 18°) on 0.33 NA scanners, at 7 nm design rules (36–40 nm pitch), the tolerance for illumination can be ±15°, or even less. Annular illumination nonuniformity and asymmetry also significantly impact the imaging. Newer systems have azimuthal angle ranges going up to ±30°. On 0.33 NA systems, 30 nm pitch and lower already suffer sufficient reduction of pupil fill to significantly affect throughput.
The trend of larger incident angles across the slit for pitch-dependent dipole illumination does not affect horizontal line shadowing so much, but vertical line shadowing does increase going from center to edge. In addition, higher-NA systems may offer only limited relief from shadowing, as they target tighter pitches.
The slit position dependence is particularly difficult for the tilted patterns encountered in DRAM. Besides the more complicated effects due to shadowing and pupil rotation, tilted edges are converted to a stair shape, which may be distorted by OPC. In fact, 32 nm pitch DRAM patterned by EUV will have its cell area lengthened to at least 9F2, where F is the active-area half-pitch (traditionally, it had been 6F2). With a 2-D self-aligned double-patterning active-area cut, the cell area is still lower, at 8.9F2.
Aberrations, originating from deviations of optical surfaces from subatomic (<0.1 nm) specifications as well as thermal deformations and possibly including polarized reflectance effects, are also dependent on slit position, as will be further discussed below, with regard to source-mask optimization (SMO). The thermally induced aberrations are expected to exhibit differences among different positions across the slit, corresponding to different field positions, as each position encounters different parts of the deformed mirrors. Ironically, the use of substrate materials with high thermal and mechanical stability makes it more difficult to compensate for wavefront errors.
In combination with the range of wavelengths, the rotated plane of incidence aggravates the already severe stochastic impact on EUV imaging.
Wavelength bandwidth (chromatic aberration)
Unlike deep ultraviolet (DUV) lithography sources, based on excimer lasers, EUV plasma sources produce light across a broad range of wavelengths, roughly spanning a 2% FWHM bandwidth near 13.5 nm (13.36 nm – 13.65 nm at 50% power).
Though the EUV spectrum is not completely monochromatic, nor even as spectrally pure as DUV laser sources, the working wavelength has generally been taken to be 13.5 nm. In actuality, the reflected power is distributed mostly in the 13.3–13.7 nm range. The bandwidth of EUV light reflected by a multilayer mirror used for EUV lithography is over ±2% (>270 pm); the phase changes due to wavelength changes at a given illumination angle may be calculated and compared to the aberration budget. Wavelength dependence of reflectance also affects the apodization, or illumination distribution across the pupil (for different angles); different wavelengths effectively 'see' different illuminations as they are reflected differently by the multilayer of the mask. This effective source illumination tilt can lead to large image shifts due to defocus. Conversely, the peak reflected wavelength varies across the pupil due to different incident angles. This is aggravated when the angles span a wide radius, e.g., annular illumination. The peak reflectance wavelength increases for smaller incident angles. Aperiodic multilayers have been proposed to reduce the sensitivity at the cost of lower reflectivity, but are too sensitive to random fluctuations of layer thicknesses, such as from thickness control imprecision or interdiffusion.
A narrower bandwidth would increase sensitivity to mask absorber and buffer thickness on the 1 nm scale.
Flare
Flare is the presence of background light originating from scattering off of surface features which are not resolved by the light. In EUV systems, this light can be EUV or out-of-band (OoB) light that is also produced by the EUV source. The OoB light adds the complication of affecting the resist exposure in ways other than accounted for by the EUV exposure. OoB light exposure may be alleviated by a layer coated above the resist, as well as 'black border' features on the EUV mask. However, the layer coating inevitably absorbs EUV light, and the black border adds EUV mask processing cost.
Line tip effects
A key challenge for EUV is the counter-scaling behavior of the line tip-to-tip (T2T) distance as half-pitch (hp) is scaled down. This is in part due to lower image contrast for the binary masks used in EUV lithography, which is not encountered with the use of phase shift masks in immersion lithography. The rounding of the corners of the line end leads to line end shortening, and this is worse for binary masks. The use of phase-shift masks in EUV lithography has been studied but encounters difficulties from phase control in thin layers as well as the bandwidth of the EUV light itself. More conventionally, optical proximity correction (OPC) is used to address the corner rounding and line-end shortening. In spite of this, it has been shown that the tip-to-tip resolution and the line tip printability are traded off against each other, being effectively CDs of opposite polarity.
In unidirectional metal layers, tip-to-tip spacing is one of the more severe issues for single exposure patterning. For the 40 nm pitch vertical lines, an 18 nm nominal tip-to-tip drawn gap resulted in an actual tip-to-tip distance of 29 nm with OPC, while for 32 nm pitch horizontal lines, the tip-to-tip distance with a 14 nm nominal gap went to 31 nm with OPC. These actual tip-to-tip distances define a lower limit of the half-pitch of the metal running in the direction perpendicular to the tip. In this case, the lower limit is around 30 nm. With further optimization of the illumination (discussed in the section on source-mask optimization), the lower limit can be further reduced to around 25 nm.
For larger pitches, where conventional illumination can be used, the line tip-to-tip distance is generally larger. For the 24 nm half-pitch lines, with a 20 nm nominally drawn gap, the distance was actually 45 nm, while for 32 nm half-pitch lines, the same nominal gap resulted in a tip-to-tip distance of 34 nm. With OPC, these become 39 nm and 28 nm for 24 nm half-pitch and 32 nm half-pitch, respectively.
Enhancement opportunities for EUV patterning
Assist features
Assist features are often used to help balance asymmetry from non-telecentricity at different slit positions, due to different illumination angles, starting at the 7 nm node, where the pitch is ~ 41 nm for a wavelength ~13.5 nm and NA=0.33, corresponding to k1 ~ 0.5. However, the asymmetry is reduced but not eliminated, since the assist features mainly enhance the highest spatial frequencies, whereas intermediate spatial frequencies, which also affect feature focus and position, are not much affected. The coupling between the primary image and the self images is too strong for the asymmetry to be eliminated by assist features; only asymmetric illumination can achieve this. Assist features may also get in the way of access to power/ground rails. Power rails are expected to be wider, which also limits the effectiveness of using assist features, by constraining the local pitch. Local pitches between 1× and 2× the minimum pitch forbid assist feature placement, as there is simply no room to preserve the local pitch symmetry. In fact, for the application to the two-bar asymmetry case, the optimum assist feature placement may be less than or exceed the two-bar pitch. Depending on the parameter to be optimized (process window area, depth of focus, exposure latitude), the optimum assist feature configuration can be very different, e.g., pitch between assist feature and bar being different from two-bar pitch, symmetric or asymmetric, etc..
At pitches smaller than 58 nm, there is a tradeoff between depth of focus enhancement and contrast loss by assist feature placement. Generally, there is still a focus-exposure tradeoff as the dose window is constrained by the need to have the assist features not print accidentally.
An additional concern comes from shot noise; sub-resolution assist features (SRAFs) cause the required dose to be lower, so as not to print the assist features accidentally. This results in fewer photons defining smaller features (see discussion in section on shot noise).
As SRAFs are smaller features than primary features and are not supposed to receive doses high enough to print, they are more susceptible to stochastic dose variations causing printing errors; this is particularly prohibitive for EUV, where phase-shift masks may need to be used.
Source-mask optimization
Due to the effects of non-telecentricity, standard illumination pupil shapes, such as disc or annular, are not sufficient to be used for feature sizes of ~20 nm or below (10 nm node and beyond). Instead certain parts of the pupil (often over 50%) must be asymmetrically excluded. The parts to be excluded depend on the pattern. In particular, the densest allowed lines need to be aligned along one direction and prefer a dipole shape. For this situation, double exposure lithography would be required for 2D patterns, due to the presence of both X- and Y-oriented patterns, each requiring its own 1D pattern mask and dipole orientation. There may be 200–400 illuminating points, each contributing its weight of the dose to balance the overall image through focus. Thus the shot noise effect (to be discussed later) critically affects the image position through focus, in a large population of features.
Double- or multiple-patterning would also be required if a pattern consists of sub-patterns which require significantly different optimized illuminations, due to different pitches, orientations, shapes, and sizes.
Impact of slit position and aberrations
Largely due to the slit shape, and the presence of residual aberrations, the effectiveness of SMO varies across slit position. At each slit position, there are different aberrations and different azimuthal angles of incidence leading to different shadowing. Consequently, there could be uncorrected variations across slit for aberration-sensitive features, which may not be obviously seen with regular line-space patterns. At each slit position, although optical proximity correction (OPC), including the assist features mentioned above, may also be applied to address the aberrations, they also feedback into the illumination specification, since the benefits differ for different illumination conditions. This would necessitate the use of different source-mask combinations at each slit position, i.e., multiple mask exposures per layer.
The above-mentioned chromatic aberrations, due to mask-induced apodization, also lead to inconsistent source-mask optimizations for different wavelengths.
Pitch-dependent focus windows
The best focus for a given feature size varies as a strong function of pitch, polarity, and orientation under a given illumination. At 36 nm pitch, horizontal and vertical darkfield features have more than 30 nm difference of focus. The 34 nm pitch and 48 nm pitch features have the largest difference of best focus regardless of feature type. In the 48–64 nm pitch range, the best focus position shifts roughly linearly as a function of pitch, by as much as 10–20 nm. For the 34–48 nm pitch range, the best focus position shifts roughly linearly in the opposite direction as a function of pitch. This can be correlated with the phase difference between the zero and first diffraction orders. Assist features, if they can fit within the pitch, were found not to reduce this tendency much, for a range of intermediate pitches, or even worsened it for the case of 18–27 nm and quasar illumination. 50 nm contact holes on 100 nm and 150 nm pitches had best focus positions separated by roughly 25 nm; smaller features are expected to be worse. Contact holes in the 48–100 nm pitch range showed a 37 nm best focus range. The best focus position vs. pitch is also dependent on resist. Critical layers often contain lines at one minimum pitch of one polarity, e.g., darkfield trenches, in one orientation, e.g., vertical, mixed with spaces of the other polarity of the other orientation. This often magnifies the best focus differences, and challenges the tip-to-tip and tip-to-line imaging.
Reduction of pupil fill
A consequence of SMO and shifting focus windows has been the reduction of pupil fill. In other words, the optimum illumination is necessarily an optimized overlap of the preferred illuminations for the various patterns that need to be considered. This leads to lower pupil fill providing better results. However, throughput is affected below 20% pupil fill due to absorption.
Phase shift masks
A commonly touted advantage of EUV has been the relative ease of lithography, as indicated by the ratio of feature size to the wavelength multiplied by the numerical aperture, also known as the k1 ratio. An 18 nm metal linewidth has a k1 of 0.44 for a 13.5 nm wavelength and 0.33 NA, for example. For k1 approaching 0.5, some weak resolution enhancement, including attenuated phase shift masks, has been essential to production with the ArF laser wavelength (193 nm), whereas this resolution enhancement is not available for EUV. In particular, 3D mask effects, including scattering at the absorber edges, distort the desired phase profile. Also, the phase profile is effectively derived from the plane wave spectrum reflected from the multilayer through the absorber rather than from the incident plane wave. Without absorbers, near-field distortion also occurs at an etched multilayer sidewall due to the oblique-incidence illumination; some light traverses only a limited number of bilayers near the sidewall. Additionally, the different polarizations (TE and TM) have different phase shifts. Fundamentally, a chromeless phase shift mask enables pitch splitting by suppression of the zeroth diffracted order on the mask, but fabricating a high-quality phase shift mask for EUV is not a trivial task. One possible way to achieve this is through spatial filtering at the Fourier plane of the mask pattern: in a centrally obscured system demonstrated at Lawrence Berkeley National Lab, the zeroth-order light is blocked while the ±1 diffracted orders are captured by the clear aperture, providing a functional equivalent to the chromeless phase shift mask while using a conventional binary amplitude mask.
EUV photoresist exposure: the role of electrons
EUV light generates photoelectrons upon absorption by matter. These photoelectrons in turn generate secondary electrons, which slow down before engaging in chemical reactions. At sufficient doses, 40 eV electrons are known to penetrate 180 nm of resist, leading to development. At a dose of 160 μC/cm2, corresponding to a 15 mJ/cm2 EUV dose assuming one electron per photon, 30 eV electrons removed 7 nm of PMMA resist after standard development. For a higher 30 eV dose of 380 μC/cm2, equivalent to 36 mJ/cm2 at one electron per photon, 10.4 nm of PMMA resist were removed. These figures indicate the distances the electrons can travel in resist, regardless of direction.
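The dose equivalences quoted above can be checked with a short conversion, assuming, as those figures do, one emitted electron per absorbed 92 eV photon:

```python
E_PHOTON_J = 92 * 1.602e-19   # 13.5 nm photon energy in joules
E_CHARGE_C = 1.602e-19        # elementary charge

def electron_dose_uC_cm2(euv_dose_mJ_cm2):
    # One electron per photon: photon areal density times elementary charge.
    photons_per_cm2 = euv_dose_mJ_cm2 * 1e-3 / E_PHOTON_J
    return photons_per_cm2 * E_CHARGE_C * 1e6

print(round(electron_dose_uC_cm2(15)))   # ~163, close to the 160 quoted
print(round(electron_dose_uC_cm2(36)))   # ~391, close to the 380 quoted
```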
The degree of photoelectron emission from the layer underlying the EUV photoresist has been shown to affect the depth of focus. Unfortunately, hardmask layers tend to increase photoelectron emission, degrading the depth of focus. Electrons from defocused images in the resist can also affect the best focus image.
The randomness of the number of secondary electrons is itself a source of stochastic behavior in EUV resist images. The scale length of electron blur itself has a distribution. Intel demonstrated with a rigorous simulation that EUV-released electrons scatter distances larger than 15 nm in EUV resists.
The electron blur is also affected by total internal reflection from the top surface of the resist film.
Effect of underlying layers
Secondary electrons from layers underneath the resist can affect the resist profile as well as pattern collapse. Hence, the selection of both the underlayer and the layer beneath it is an important consideration for EUV lithography. Moreover, the electrons from defocused images can aggravate the stochastic nature of the image.
Contamination effects
Resist outgassing
Due to the high efficiency of absorption of EUV by photoresists, heating and outgassing become primary concerns. One well-known issue is contamination deposition on the resist from ambient or outgassed hydrocarbons, which results from EUV- or electron-driven reactions. Organic photoresists outgas hydrocarbons while metal oxide photoresists outgas water and oxygen and metal (in a hydrogen ambient); the last is uncleanable. The carbon contamination is known to affect multilayer reflectivity while the oxygen is particularly harmful for the ruthenium capping layers (relatively stable under EUV and hydrogen conditions) on the EUV multilayer optics.
Tin redeposition
Atomic hydrogen in the tool chambers is used to clean tin and carbon which deposit on the EUV optical surfaces. Atomic hydrogen is produced by EUV light directly photoionizing H2:
hν + H2 → H+ + H + e−.
Electrons generated in the above reaction may also dissociate H2 to form atomic hydrogen:
e− + H2 → H+ + H + 2e−.
The reaction with tin in the light source (e.g., tin on an optical surface in the source) to form volatile SnH4 (stannane) that can be pumped out from the source proceeds via the reaction
Sn(s) + 4 H(g) → SnH4(g).
The SnH4 can reach the coatings of other EUV optical surfaces, where it redeposits Sn via the reaction
SnH4 → Sn(s) + 2 H2(g).
Redeposition may also occur by other intermediate reactions.
The redeposited Sn might be subsequently removed by atomic-hydrogen exposure. However, overall, the tin cleaning efficiency (the ratio of the removed tin flux from a tin sample to the atomic-hydrogen flux to the tin sample) is less than 0.01%, due to both redeposition and hydrogen desorption, the latter leading to formation of hydrogen molecules at the expense of atomic hydrogen. The tin cleaning efficiency for tin oxide is found to be roughly twice that of tin (with a native oxide layer of ~2 nm on it). Injecting a small amount of oxygen into the light source may improve the tin cleaning rate.
Hydrogen blistering
Hydrogen also reacts with metal-containing compounds to reduce them to metal, and diffuses through the silicon and molybdenum in the multilayer, eventually causing blistering. Capping layers that mitigate hydrogen-related damage often reduce reflectivity to well below 70%. Capping layers are known to be permeable to ambient gases including oxygen and hydrogen, as well as susceptible to the hydrogen-induced blistering defects. Hydrogen may also react with the capping layer, resulting in its removal. TSMC proposed some means for mitigating hydrogen blistering defects on EUV masks, which may impact productivity.
Tin spitting
Hydrogen can penetrate molten tin (Sn), creating hydrogen bubbles inside it. If a bubble reaches the molten tin surface, it bursts, spreading tin over a large angular range. This phenomenon is called tin spitting and is one of the sources of EUV collector contamination.
Resist erosion
Hydrogen also reacts with resists to etch or decompose them. Besides photoresist, hydrogen plasmas can also etch silicon, albeit very slowly.
Membrane
To help mitigate the above effects, the latest EUV tool introduced in 2017, the NXE:3400B, features a membrane that separates the wafer from the projection optics of the tool, protecting the latter from outgassing from the resist on the wafer. The membrane contains layers which absorb DUV and IR radiation, and transmits 85–90% of the incident EUV radiation. There is, of course, accumulated contamination from wafer outgassing as well as particles in general (although the latter are out of focus, they may still obstruct light).
EUV-induced plasma
EUV lithographic systems operate in a 1–10 Pa hydrogen background gas. The plasma is a source of VUV radiation as well as electrons and hydrogen ions. This plasma is known to etch exposed materials.
In 2023, a study supported at TSMC was published which indicated net charging by electrons from the plasma as well as from electron emission. The charging was found to occur even outside the EUV exposure area, indicating that the surrounding area had been exposed to electrons.
Due to chemical sputtering of carbon by the hydrogen plasma, there can be generation of nanoparticles, which can obstruct the EUV resist exposure.
Mask defects
Reducing defects on extreme ultraviolet (EUV) masks is currently one of the most critical issues to be addressed for commercialization of EUV lithography. Defects can be buried underneath or within the multilayer stack or be on top of the multilayer stack. Mesas or protrusions form on the sputtering targets used for multilayer deposition, which may fall off as particles during the multilayer deposition. In fact, defects of atomic scale height (0.3–0.5 nm) with 100 nm FWHM can still be printable by exhibiting 10% CD impact. IBM and Toppan reported at Photomask Japan 2015 that smaller defects, e.g., 50 nm size, can have 10% CD impact even with 0.6 nm height, yet remain undetectable.
Furthermore, the edge of a phase defect will further reduce reflectivity by more than 10% if its deviation from flatness exceeds 3 degrees, due to the deviation from the target angle of incidence of 84 degrees with respect to the surface. Even if the defect height is shallow, the edge still deforms the overlying multilayer, producing an extended region where the multilayer is sloped. The more abrupt the deformation, the narrower the defect edge extension, the greater the loss in reflectivity.
EUV mask defect repair is also more complicated due to the across-slit illumination variation mentioned above. Due to the varying shadowing sensitivity across the slit, the repair deposition height must be controlled very carefully, being different at different positions across the EUV mask illumination slit.
Multilayer reflectivity random variations
GlobalFoundries and Lawrence Berkeley Labs carried out a Monte Carlo study to simulate the effects of intermixing between the molybdenum (Mo) and silicon (Si) layers in the multilayer that is used to reflect EUV light from the EUV mask. The results indicated high sensitivity to the atomic-scale variations of layer thickness. Such variations could not be detected by wide-area reflectivity measurements but would be significant on the scale of the critical dimension (CD). The local variation of reflectivity could be on the order of 10% for a few nm standard deviation.
Multilayer damage
Multiple EUV pulses at less than 10 mJ/cm2 could accumulate damage to a Ru-capped Mo/Si multilayer mirror optic element. The angle of incidence was 16° or 0.28 rads, which is within the range of angles for a 0.33 NA optical system.
Pellicles
Production EUV tools need a pellicle to protect the mask from contamination. Pellicles are normally expected to protect the mask from particles during transport, entry into or exit from the exposure chamber, as well as the exposure itself. Without pellicles, particle adders would reduce yield, which has not been an issue for conventional optical lithography with 193 nm light and pellicles. However, for EUV, the feasibility of pellicle use is severely challenged, due to the required thinness of the shielding films to prevent excessive EUV absorption. Particle contamination would be prohibitive if pellicles were not stable above 200 W, i.e., the targeted power for manufacturing.
Heating of the EUV mask pellicle (film temperature up to 750 K for 80 W incident power) is a significant concern, due to the resulting deformation and transmission decrease. ASML developed a 70 nm thick polysilicon pellicle membrane, which allows EUV transmission of 82%; however, less than half of the membranes survived expected EUV power levels. SiNx pellicle membranes also failed at 82 W equivalent EUV source power levels. At target 250 W levels, the pellicle is expected to reach 686 degrees Celsius, well over the melting point of aluminum. Alternative materials need to allow sufficient transmission as well as maintain mechanical and thermal stability. However, graphite, graphene, and other carbon nanomaterials (nanosheets, nanotubes) are damaged by EUV due to the release of electrons, and are also too easily etched in the hydrogen cleaning plasma expected to be deployed in EUV scanners. Hydrogen plasmas can etch silicon as well. A coating helps improve hydrogen resistance, but this reduces transmission and/or emissivity, and may also affect mechanical stability (e.g., bulging).
Wrinkles on pellicles can cause CD nonuniformity due to uneven absorption; this is worse for smaller wrinkles and more coherent illumination, i.e., lower pupil fill.
In the absence of pellicles, EUV mask cleanliness would have to be checked before actual product wafers are exposed, using wafers specially prepared for defect inspection. These wafers are inspected after printing for repeating defects indicating a dirty mask; if any are found, the mask must be cleaned and another set of inspection wafers are exposed, repeating the flow until the mask is clean. Any affected product wafers must be reworked.
TSMC reported starting limited use of its own pellicle in 2019 and continuing to expand afterwards, and Samsung is planning pellicle introduction in 2022.
Hydrogen bulging defects
As discussed above with regard to contamination removal, the hydrogen used in recent EUV systems can penetrate into the EUV mask layers. TSMC indicated in its patent that hydrogen would enter from the mask edge. Once trapped, it produced bulge defects or blisters, which could lead to film peeling. These are essentially the blister defects which arise after a sufficient number of EUV mask exposures in the hydrogen environment; as noted above, TSMC proposed some means for mitigating them, which may impact productivity.
EUV stochastic issues
EUV lithography is particularly sensitive to stochastic effects. In a large population of features printed by EUV, although the overwhelming majority are resolved, some suffer complete failure to print, e.g. missing holes or bridging lines. A known significant contribution to this effect is the dose used to print. This is related to shot noise, to be discussed further below. Due to the stochastic variations in arriving photon numbers, some areas designated to print actually fail to reach the threshold to print, leaving unexposed defect regions. Some areas may be overexposed, leading to excessive resist loss or crosslinking. The probability of stochastic failure increases exponentially as feature size decreases, and for the same feature size, increasing distance between features also significantly increases the probability. Line cuts which are misshapen are a significant issue due to potential arcing and shorting. Yield requires detection of stochastic failures down to below 1e-12.
The tendency to stochastic defects is worse from defocus over a large pupil fill.
Multiple failure modes may exist for the same population. For example, besides bridging of trenches, the lines separating the trenches may be broken. This can be attributed to stochastic resist loss, from secondary electrons. The randomness of the number of secondary electrons is itself a source of stochastic behavior in EUV resist images.
The coexistence of stochastically underexposed and overexposed defect regions leads to a loss of dose window at a certain post-etch defect level between the low-dose and high-dose patterning cliffs. Hence, the resolution benefit from shorter wavelength is lost.
The resist underlayer also plays an important role. This could be due to the secondary electrons generated by the underlayer. Secondary electrons may remove over 10 nm of resist from the exposed edge.
The defect level is on the order of 1K/mm2. In 2020, Samsung reported that 5 nm layouts had risks for process defects and had started implementing automated check and fixing.
Photon shot noise also leads to stochastic edge placement error. The photon shot noise is augmented to some degree by blurring factors such as secondary electrons or acids in chemically amplified resists; when significant the blur also reduces the image contrast at the edge. An edge placement error (EPE) as large as 8.8 nm was measured for a 48 nm pitch EUV-printed metal pattern.
With the natural Poisson distribution due to the random arrival and absorption times of the photons, there is an expected natural dose (photon number) variation of at least several percent (3σ), making the exposure process susceptible to stochastic variations. The dose variation leads to a variation of the feature edge position, effectively becoming a blur component. Unlike the hard resolution limit imposed by diffraction, shot noise imposes a softer limit, with the main guideline being the ITRS line width roughness (LWR) spec of 8% (3σ) of linewidth. Increasing the dose will reduce the shot noise, but this also requires higher source power.
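For a pure Poisson process the relative 3σ dose variation is 3/√N, where N is the mean number of photons absorbed in a feature-sized area. The sketch below evaluates this for illustrative numbers; the dose, pixel size, and absorbed fraction are assumptions, not measured values.

```python
import math

H_C = 1.986e-25          # Planck constant x speed of light, J*m
WAVELENGTH = 13.5e-9     # EUV wavelength, m

dose_j_cm2 = 30e-3       # assumed incident dose: 30 mJ/cm^2
pixel_cm2 = (10e-7)**2   # assumed 10 nm x 10 nm pixel, in cm^2
absorbed_fraction = 0.2  # assumed fraction of photons absorbed in the resist

photon_energy = H_C / WAVELENGTH  # ~1.47e-17 J per photon (about 92 eV)
n = dose_j_cm2 * pixel_cm2 * absorbed_fraction / photon_energy

print(f"mean absorbed photons per pixel: {n:.0f}")
print(f"relative 3-sigma dose variation: {3 / math.sqrt(n):.1%}")
```

With these assumed numbers, a few hundred absorbed photons per 10 nm pixel give a 3σ dose variation in the 10–15% range, illustrating why shot noise dominates at small feature sizes.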
The two issues of shot noise and EUV-released electrons point out two constraining factors: 1) keeping dose high enough to reduce shot noise to tolerable levels, but also 2) avoiding too high a dose due to the increased contribution of EUV-released photoelectrons and secondary electrons to the resist exposure process, increasing the edge blur and thereby limiting the resolution. Aside from the resolution impact, higher dose also increases outgassing and limits throughput, and crosslinking occurs at very high dose levels. For chemically amplified resists, higher dose exposure also increases line edge roughness due to acid generator decomposition.
Even with higher absorption at the same dose, EUV has a larger shot noise concern than the ArF (193 nm) wavelength, mainly because it is applied to thinner resists.
Due to stochastic considerations, the IRDS 2022 lithography roadmap now acknowledges increasing doses for smaller feature sizes. However, an upper limit to how much dose can be increased is imposed by resist loss.
Due to resist thinning with increased dose, EUV stochastic defectivity limits will define a narrow CD or dose window. The thinner resist at higher incident dose reduces absorption, and hence, absorbed dose.
EUV resolution will likely be compromised by stochastic effects. Stochastic defect densities have exceeded 1/cm2, at 36 nm pitch. In 2024, an EUV resist exposure by ASML revealed a missing+bridging 32 nm pitch contact hole defect density floor >0.25/cm2 (177 defects per wafer), made worse with thinner resist. ASML indicated 30 nm pitch would not use direct exposure but double patterning. Intel did not use EUV for 30 nm pitch.
Pupil fill ratio
For pitches less than half-wavelength divided by numerical aperture, dipole illumination is necessary. This illumination fills at most a leaf-shaped area at the edge of the pupil. However, due to 3D effects in the EUV mask, smaller pitches require even smaller portions of this leaf shape. Below 20% of the pupil, the throughput and dose stability begin to suffer. Higher numerical aperture allows a higher pupil fill to be used for the same pitch, but depth of focus is significantly reduced.
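As a back-of-the-envelope check of the pitch limit implied by dipole illumination (minimum full pitch ≈ λ/(2·NA), i.e., half-wavelength divided by NA), the numbers for the 0.33 NA and 0.55 NA generations work out as follows.

```python
WAVELENGTH = 13.5  # nm

# Minimum full pitch with dipole illumination at the pupil edge: lambda / (2*NA).
for na in (0.33, 0.55):
    print(f"NA = {na:.2f}: minimum pitch ~ {WAVELENGTH / (2 * na):.1f} nm")
```

This gives roughly 20 nm pitch at 0.33 NA and 12 nm at 0.55 NA as diffraction limits, consistent with the observation elsewhere in this article that ~30 nm pitch already required double patterning at 0.33 NA once stochastics and mask 3D effects are included.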
A larger pupil fill is more susceptible to stochastic fluctuations from point to point in the pupil.
Use with multiple-patterning
EUV is anticipated to use double-patterning at around 34 nm pitch with 0.33 NA. This resolution is equivalent to '1Y' for DRAM. In 2020, ASML reported that 5 nm M0 layer (30 nm minimum pitch) required double-patterning.
In H2 2018, TSMC confirmed that its 5 nm EUV scheme still used multi-patterning, also indicating that mask count did not decrease from its 7 nm node, which used extensive DUV multi-patterning, to its 5 nm node, which used extensive EUV. EDA vendors also indicated the continued use of multi-patterning flows. While Samsung introduced its own 7 nm process with EUV single-patterning, it encountered severe photon shot noise causing excessive line roughness, which required higher dose, resulting in lower throughput. TSMC's 5 nm node uses even tighter design rules. Samsung indicated smaller dimensions would have more severe shot noise.
In Intel's complementary lithography scheme at 20 nm half-pitch, EUV would be used only in a second line-cutting exposure after a first 193 nm line-printing exposure.
Multiple exposures would also be expected where two or more patterns in the same layer, e.g., different pitches or widths, must use different optimized source pupil shapes. For example, when considering a staggered bar array of 64 nm vertical pitch, changing the horizontal pitch from 64 nm to 90 nm changes the optimized illumination significantly. Source-mask optimization that is based on line-space gratings and tip-to-tip gratings only does not entail improvements for all parts of a logic pattern, e.g., a dense trench with a gap on one side.
In 2020, ASML reported that for the 3 nm node, center-to-center contact/via spacings of 40 nm or less would require double- or triple-patterning for some contact/via arrangements.
For the 24–36 nm metal pitch, it was found that using EUV as a (second) cutting exposure had a significantly wider process window than as a complete single exposure for the metal layer. However, using a second exposure in the LELE approach for double patterning does not get around the vulnerability to stochastic defects.
Multiple exposures of the same mask are also expected for defect management without pellicles, limiting productivity similarly to multiple-patterning.
Self-aligned litho-etch-litho-etch (SALELE) is a hybrid SADP/LELE technique whose implementation has started in 7 nm.
Self-aligned litho-etch-litho-etch (SALELE) has become an accepted form of double-patterning to be used with EUV.
Single-patterning extension: anamorphic high-NA
A return to extended generations of single-patterning would be possible with higher numerical aperture (NA) tools. An NA of 0.45 could require retuning of a few percent. Increasing demagnification could avoid this retuning, but the reduced field size severely affects large patterns (one die per 26 mm × 33 mm field), such as the many-core multi-billion transistor 14 nm Xeon chips, by requiring field stitching of two mask exposures.
In 2015, ASML disclosed details of its anamorphic next-generation EUV scanner, with an NA of 0.55. These machines cost around USD 360 million. The demagnification is increased from 4× to 8× only in one direction (in the plane of incidence). However, the 0.55 NA has a much smaller depth of focus than immersion lithography. Also, an anamorphic 0.52 NA tool has been found to exhibit too much CD and placement variability for 5 nm node single exposure and multi-patterning cutting.
Depth of focus being reduced by increasing NA is also a concern, especially in comparison with multi-patterning exposures using 193 nm immersion lithography:
High-NA EUV tools focus horizontal and vertical lines differently from low-NA systems, due to the different demagnification for horizontal lines.
High-NA EUV tools also suffer from obscuration, which can cause errors in the imaging of certain patterns.
The first high-NA tools are expected at Intel by 2025 at earliest.
For sub-2nm nodes, high-NA EUV systems will be affected by a host of issues: throughput, new masks, polarization, thinner resists, and secondary electron blur and randomness. Reduced depth of focus requires resist thickness less than 30 nm, which in turn increases stochastic effects, due to reduced photon absorption.
Electron blur is estimated to be at least ~2 nm, which is enough to thwart the benefit of High-NA EUV lithography.
Beyond high-NA, ASML in 2024 announced plans for the development of a hyper-NA EUV tool with an NA beyond 0.55, such as an NA of 0.75 or 0.85. These machines could cost USD 720 million each and are expected to be available in 2030. A problem with Hyper-NA is polarization of the EUV light causing a reduction in image contrast.
Beyond EUV wavelength
A much shorter wavelength (~6.7 nm) would be beyond EUV, and is often referred to as BEUV (beyond extreme ultraviolet). With current technology, BEUV wavelengths would have worse shot noise effects without ensuring sufficient dose. (The generally accepted 'border' of UV is 10 nm, below which the (soft) x-ray region begins.)
References
Further reading
Michael Purvis, An Introduction to EUV Sources for Lithography, ASML, STROBE, 2020-09-25.
Igor Fomenkov, EUV Source for Lithography in HVM - performance and prospects, ASML Fellow, Source workshop, Amsterdam, 2019-11-05.
Related links
EUV presents economic challenges
Industry mulls 6.7-nm wavelength EUV
Lithography (microfabrication)
Extreme ultraviolet | Extreme ultraviolet lithography | Chemistry,Materials_science | 12,510 |
18,149,779 | https://en.wikipedia.org/wiki/Drostanolone | Drostanolone, or dromostanolone, is an anabolic–androgenic steroid (AAS) of the dihydrotestosterone (DHT) group which was never marketed. An androgen ester prodrug of drostanolone, drostanolone propionate, was formerly used in the treatment of breast cancer in women under brand names such as Drolban, Masteril, and Masteron. This has also been used non-medically for physique- or performance-enhancing purposes.
Pharmacology
Pharmacodynamics
Like other AAS, drostanolone is an agonist of the androgen receptor (AR). It is not a substrate for 5α-reductase and is a poor substrate for 3α-hydroxysteroid dehydrogenase (3α-HSD), and therefore shows a high ratio of anabolic to androgenic activity. As a DHT derivative, drostanolone is not a substrate for aromatase and hence cannot be aromatized into estrogenic metabolites. While no data are available on the progestogenic activity of drostanolone, it is thought to have low or no such activity similarly to other DHT derivatives. Since the drug is not 17α-alkylated, it is not known to cause hepatotoxicity.
Chemistry
Drostanolone, also known as 2α-methyl-5α-dihydrotestosterone (2α-methyl-DHT) or as 2α-methyl-5α-androstan-17β-ol-3-one, is a synthetic androstane steroid and a derivative of DHT. It is specifically DHT with a methyl group at the C2α position.
History
Drostanolone and its ester drostanolone propionate were first described in 1959. Drostanolone propionate was first introduced for medical use in 1961.
Society and culture
Generic names
Drostanolone is the generic name of the drug and its INN, USAN, and BAN. It has also been referred to as dromostanolone.
Legal status
Drostanolone, along with other AAS, is a schedule III controlled substance in the United States under the Controlled Substances Act.
Potential side effects
Like other AAS, drostanolone can cause a variety of side effects, including:
Virilization: This refers to the development of masculine characteristics in women, such as deepening of the voice, increased body hair growth, and clitoral enlargement.
Acne: AAS can increase sebum production, leading to acne.
Hair loss: Drostanolone can accelerate male pattern baldness.
Cardiovascular issues: AAS can negatively affect cholesterol levels and increase the risk of cardiovascular disease.
Liver damage: Although drostanolone is not 17α-alkylated, high doses or prolonged use can still potentially damage the liver.
Mood swings: AAS can cause aggression, irritability, and mood swings.
Non-medical uses
Drostanolone is used by some bodybuilders and athletes to increase muscle mass and strength. It is often used during "cutting cycles" to help preserve muscle mass while losing body fat. However, the use of AAS for non-medical purposes is not recommended due to the potential for serious side effects.
Synthesis
Bolazine is the dimeric azine obtained when two equivalents of drostanolone react with hydrazine.
Treatment of DHT (androstan-17β-ol-3-one, stanolone) [521-18-6] (1) with methyl formate and the strong base sodium methoxide gives the 2-(hydroxymethylene) derivative [4033-95-8] (2). The newly added formyl function in the product is shown in the enol form. Catalytic hydrogenation reduces that function to a methyl group (3). The addition of hydrogen from the bottom face of the molecule leads to the formation of the β-methyl isomer, where the methyl group occupies the higher-energy axial position. Strong base-induced equilibration of the methyl group leads to the formation of the sterically favoured equatorial α-methyl isomer, affording dromostanolone (4).
References
External links
Masteron (drostanolone propionate) - William Llewellyn's Anabolic.org
Cyclopentanols
Anabolic–androgenic steroids
Androstanes
Ketones | Drostanolone | Chemistry | 927 |
44,182,725 | https://en.wikipedia.org/wiki/Composite%20Higgs%20models | In particle physics, composite Higgs models (CHM) are speculative extensions of the Standard Model (SM) where the Higgs boson is a bound state of new strong interactions. These scenarios are models for physics beyond the SM presently tested at the Large Hadron Collider (LHC) in Geneva.
In all composite Higgs models the Higgs boson is not an elementary particle (or point-like) but has finite size, perhaps around 10−18 meters. This dimension may be related to the Fermi scale (100 GeV) that determines the strength of the weak interactions such as in β-decay, but it could be significantly smaller. Microscopically the composite Higgs will be made of smaller constituents in the same way as nuclei are made of protons and neutrons.
History
Often referred to as "natural" composite Higgs models, CHMs are constructions that attempt to alleviate fine-tuning or "naturalness" problem of the Standard Model.
These typically engineer the Higgs boson as a naturally light pseudo-Goldstone boson or Nambu-Goldstone field, in analogy to the pion (or more precisely, like the K-mesons) in QCD. These ideas were introduced by Georgi and Kaplan as a clever variation on technicolor theories to allow for the presence of a physical low mass Higgs boson.
These are forerunners of Little Higgs theories.
In parallel, early composite Higgs models arose from the heavy top quark and its renormalization group infrared fixed point, which implies a strong coupling of the Higgs to top quarks at high energies.
This formed the basis of top quark condensation theories of electroweak symmetry breaking in which the Higgs boson is composite at extremely short distance scales, composed of a pair of top and anti-top quarks. This was described by Yoichiro Nambu and subsequently developed by Miransky, Tanabashi, and Yamawaki, and by Bardeen, Hill, and Lindner, who connected the theory to the renormalization group and improved its predictions.
While these ideas are still compelling, they suffer from a "naturalness problem", a large degree of fine-tuning.
To remedy the fine tuning problem, Chivukula, Dobrescu, Georgi and Hill introduced the "Top See-Saw" model in which the composite scale is reduced to several TeV (trillion electron volts, the energy scale of the LHC). A more recent version of the Top Seesaw model of Dobrescu and Cheng has an acceptable light composite Higgs boson.
Top Seesaw models have a nice geometric interpretation in theories of extra dimensions, which is most easily seen via dimensional deconstruction (the latter approach does away with the technical details of the geometry of the extra spatial dimension and gives a renormalizable D=4 field theory). These schemes also anticipate "partial compositeness". These models are discussed in the extensive review of strong dynamical theories by Hill and Simmons.
CHMs typically predict new particles with mass around a TeV (or tens of TeV as in the Little Higgs schemes) that are excitations or ingredients of the composite Higgs, analogous to the resonances in nuclear physics. The new particles could be produced and detected in collider experiments if the energy of the collision exceeds their mass or could produce deviations from the SM predictions in "low energy observables" – results of experiments at lower energies. Within the most compelling scenarios each Standard Model particle has a partner with equal quantum numbers but heavier mass. For example, the photon, W and Z bosons have heavy replicas with mass determined by the compositeness scale, expected around 1 TeV.
Though naturalness requires that new particles exist with mass around a TeV which could be discovered at LHC or future experiments, nonetheless as of 2018, no direct or indirect signs that the Higgs or other SM particles are composite has been detected.
From the LHC discovery of 2012, it is known that there exists a physical Higgs boson
(a weak iso-doublet) that condenses to break the electro-weak symmetry. This differs from the prediction of ordinary technicolor theories, where new strong dynamics directly breaks the electro-weak symmetry without the need of a physical Higgs boson.
The CHM proposed by Georgi and Kaplan was based on known gauge theory dynamics that produces the Higgs doublet as a Goldstone boson. It was later realized, as with the case of Top Seesaw models described above, that this can naturally arise in five-dimensional theories, such as the Randall–Sundrum scenario or by dimensional deconstruction. These scenarios can also be realized in hypothetical strongly coupled conformal field theories (CFT) and the AdS-CFT correspondence. This spurred activity in the field. At first the Higgs was a generic scalar bound state. In influential later work, the Higgs as a Goldstone boson was realized in CFTs. Detailed phenomenological studies showed that within this framework agreement with experimental data can be obtained with a mild tuning of parameters.
The more recent work on the holographic realization of CHM, which is based on the AdS/QCD correspondence, provided an explicit realization of the strongly coupled sector of CHM and the computation of meson masses, decay constants and the top-partner mass.
Examples
CHM can be characterized by the mass (m) of the lightest new particles and their coupling (g). The latter is expected to be larger than the SM couplings for consistency. Various realizations of CHM exist that differ for the mechanism that generates the Higgs doublet. Broadly they can be divided in two categories:
Higgs is a generic bound state of strong dynamics.
Higgs is a Goldstone boson of spontaneous symmetry breaking
In both cases the electro-weak symmetry is broken by the condensation of a Higgs scalar doublet. In the first type of scenario there is no a priori reason why the Higgs boson is lighter than the other composite states and moreover larger deviations from the SM are expected.
Higgs as Goldstone boson
These are essentially Little Higgs theories.
In this scenario the existence of the Higgs boson follows from the symmetries of the theory. This allows one to explain why this particle is lighter than the rest of the composite particles, whose mass is expected from direct and indirect tests to be around a TeV or higher. It is assumed that the composite sector has a global symmetry G spontaneously broken to a subgroup H, where G and H are compact Lie groups. Contrary to technicolor models, the unbroken symmetry H must contain the SM electro-weak group SU(2)×U(1). According to Goldstone's theorem, the spontaneous breaking of a global symmetry produces massless scalar particles known as Goldstone bosons. By appropriately choosing the global symmetries it is possible to have Goldstone bosons that correspond to the Higgs doublet in the SM. This can be done in a variety of ways
and is completely determined by the symmetries. In particular, group theory determines the quantum numbers of the Goldstone bosons. From the decomposition of the adjoint representation one finds
Adj[G] = Adj[H] ⊕ R[Π],
where R[Π] is the representation of the Goldstone bosons Π under H. The phenomenological request that a Higgs doublet exists selects the possible symmetries. A typical example is the pattern
SO(5) → SO(4)
that contains a single Higgs doublet as a Goldstone boson.
The physics of the Higgs as a Goldstone boson is strongly constrained by the symmetries and determined by the symmetry breaking scale f that controls their interactions. An approximate relation exists between the mass and coupling of the composite states,
m ≈ g·f.
In CHM one finds that deviations from the SM are proportional to
ξ = v²/f²,
where v ≈ 246 GeV is the electro-weak vacuum expectation value. By construction these models approximate the SM to arbitrary precision if ξ is sufficiently small. For example, for the model above with global symmetry SO(5), the coupling of the Higgs to W and Z bosons is modified as
g(hVV) = g(hVV, SM) √(1 − ξ).
Phenomenological studies suggest ξ ≲ 0.1, and thus f at least a factor of a few larger than v. However, the tuning of parameters required to achieve a small ξ is inversely proportional to ξ, so that viable scenarios require some degree of tuning.
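A numeric illustration of the coupling modification quoted above (κ_V = √(1 − ξ) with ξ = v²/f²); the choice of compositeness scales f below is illustrative.

```python
import math

V_EW = 246.0  # electro-weak vacuum expectation value, GeV

for f in (600.0, 800.0, 1000.0, 2000.0):   # assumed compositeness scales, GeV
    xi = (V_EW / f) ** 2
    kappa_v = math.sqrt(1.0 - xi)          # hVV coupling relative to the SM
    print(f"f = {f:6.0f} GeV   xi = {xi:.3f}   kappa_V = {kappa_v:.3f}")
```

For f around a TeV the Higgs couplings deviate from the SM at the few-percent level, below the ~20% precision quoted later for the first LHC run.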
Goldstone bosons generated from the spontaneous breaking of an exact global symmetry are exactly massless. Therefore, if the Higgs boson is a Goldstone boson, the global symmetry cannot be exact. In CHM the Higgs potential is generated by effects that explicitly break the global symmetry G. Minimally these are the SM Yukawa and gauge couplings, which cannot respect the global symmetry, but other effects can also exist. The top coupling is expected to give a dominant contribution to the Higgs potential as this is the largest coupling in the SM. In the simplest models one finds a correlation between the Higgs mass and the mass m_T of the top partners,
m_h ≈ (√N_c/π) (m_T/f) m_t.
In models with f ≈ 1 TeV, as suggested by naturalness, this indicates fermionic resonances with mass around a TeV. Spin-1 resonances are expected to be somewhat heavier. This is within the reach of future collider experiments.
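Inverting the schematic m_h–m_T correlation above gives a rough top-partner mass estimate. The numerical coefficient is model-dependent, so this is an order-of-magnitude sketch only, and the scales f are assumed.

```python
import math

N_C, M_TOP, M_HIGGS = 3, 173.0, 125.0   # QCD colors; masses in GeV

# m_h ~ (sqrt(N_c)/pi) * (m_T/f) * m_t  =>  m_T ~ pi/sqrt(N_c) * (m_h/m_t) * f
for f in (800.0, 1000.0):               # assumed symmetry-breaking scales, GeV
    m_T = math.pi / math.sqrt(N_C) * (M_HIGGS / M_TOP) * f
    print(f"f = {f:5.0f} GeV  ->  m_T ~ {m_T:.0f} GeV   (m_T/f ~ {m_T/f:.2f})")
```

With these inputs m_T comes out near 1.3·f, i.e., around a TeV for f ≈ 800 GeV, matching the statement that the fermionic resonances should be within collider reach.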
Partial compositeness
One ingredient of modern CHM is the hypothesis of partial compositeness proposed by D.B. Kaplan. This is similar to a (deconstructed) extra dimension, in which every Standard Model particle has a heavy partner(s) that can mix with it. In practice, the SM particles are linear combinations of elementary and composite states:
|SM⟩ = cos θ |elementary⟩ + sin θ |composite⟩,
where θ denotes the mixing angle.
Partial compositeness is naturally realized in the gauge sector, where an analogous phenomenon happens in quantum chromodynamics and is known as γ–ρ mixing (after the photon and rho meson – two particles with identical quantum numbers which engage in similar intermingling). For fermions it is an assumption that in particular requires the existence of heavy fermions with equal quantum numbers to S.M. quarks and leptons. These interact with the Higgs through the mixing. One schematically finds the formula for the S.M. fermion masses,
m_f ≈ ε_L g* ε_R v,
where the subscripts L and R mark the left and right mixings ε, v is the electro-weak vacuum expectation value, and g* is a composite sector coupling.
The composite particles are multiplets of the unbroken symmetry H. For phenomenological reasons this should contain the custodial symmetry SU(2)×SU(2) extending the electro-weak symmetry SU(2)×U(1). Composite fermions often belong to representations larger than the SM particles. For example, a strongly motivated representation for left-handed fermions is the (2,2) that contains particles with exotic electric charge or with special experimental signatures.
Partial compositeness ameliorates the phenomenology of CHM providing a logic why no deviations from the S.M. have been measured so far. In the so-called anarchic scenarios the hierarchies of S.M. fermion masses are generated through the hierarchies of mixings and anarchic composite sector couplings. The light fermions are almost elementary while the third generation is strongly or entirely composite. This leads to a structural suppression of all effects that involve first two generations that are the most precisely measured. In particular flavor transitions and corrections to electro-weak observables are suppressed. Other scenarios are also possible with different phenomenology.
Experiments
The main experimental signatures of CHM are:
New heavy partners of Standard Model particles, with SM quantum numbers and masses around a TeV
Modified SM couplings
New contributions to flavor observables
Supersymmetric models also predict that every Standard Model particle will have a heavier partner. However, in supersymmetry the partners have a different spin: they are bosons if the SM particle is a fermion, and vice versa. In composite Higgs models the partners have the same spin as the SM particles.
All the deviations from the SM are controlled by the tuning parameter ξ. The mixing of the SM particles determines the coupling with the known particles of the SM. The detailed phenomenology depends strongly on the flavor assumptions and is in general model-dependent. The Higgs and the top quark typically have the largest coupling to the new particles. For this reason third generation partners are the most easy to produce and top physics has the largest deviations from the SM. Top partners have also special importance given their role in the naturalness of the theory.
After the first run of the LHC direct experimental searches exclude third generation fermionic resonances up to 800 GeV. Bounds on gluon resonances are in the multi-TeV range and somewhat weaker bounds exist for electro-weak resonances.
Deviations from the SM couplings are proportional to the degree of compositeness of the particles. For this reason the largest departures from the SM predictions are expected for the third generation quarks and Higgs couplings. The former have been measured with per mille precision by the LEP experiment. After the first run of the LHC, the couplings of the Higgs with fermions and gauge bosons agree with the SM with a precision around 20%. These results pose some tension for CHM but are compatible with a compositeness scale f ~ TeV.
The hypothesis of partial compositeness allows flavor violation beyond the SM, which is severely constrained experimentally, to be suppressed. Nevertheless, within anarchic scenarios sizable deviations from the SM predictions exist in several observables. Particularly constrained are CP violation in the kaon system and lepton flavor violation, for example the rare decay μ → eγ. Overall, flavor physics provides the strongest indirect bounds on anarchic scenarios. This tension can be avoided with different flavor assumptions.
Summary
The nature of the Higgs boson remains a conundrum. Philosophically, the Higgs boson is either
a composite state, built of more fundamental constituents, or it is connected to other states in nature by a symmetry such as supersymmetry (or some blend of these concepts). So far there is no evidence of either compositeness or supersymmetry.
The fact that nature provides a single (weak isodoublet) scalar field that ostensibly uniquely generates fundamental particle masses has yet to be explained.
At present, we have no idea what mass / energy scale will reveal additional information about the Higgs boson that may shed useful light on these issues. While theorists remain busy concocting explanations, this limited insight poses a major challenge to experimental particle physics: We have no clear idea whether feasible accelerators might provide new useful information beyond the S.M. It is hoped that upgrades in luminosity and energy at the LHC may possibly provide new clues.
See also
Alternatives to the Standard Higgs Model
Two-Higgs-doublet model
Preon
References
Physics beyond the Standard Model
Hypothetical composite particles | Composite Higgs models | Physics | 3,017 |
4,326,149 | https://en.wikipedia.org/wiki/Hann%20function | The Hann function is named after the Austrian meteorologist Julius von Hann. It is a window function used to perform Hann smoothing. The function, with length and amplitude is given by:
For digital signal processing, the function is sampled symmetrically (with spacing L/N and amplitude 1):
w[n] = (1/2)(1 − cos(2πn/N)) = sin²(πn/N),  0 ≤ n ≤ N,
which is a sequence of N + 1 samples, and N can be even or odd. It is also known as the raised cosine window, Hann filter, von Hann window, etc.
Fourier transform
The Fourier transform of w₀(x) is given by:
ŵ₀(f) = (L/2) · sinc(Lf) / (1 − L²f²),
using the normalized sinc function sinc(x) = sin(πx)/(πx).
Discrete transforms
The Discrete-time Fourier transform (DTFT) of the length-(N + 1), time-shifted sequence is defined by a Fourier series, which also has a 3-term equivalent that is derived similarly to the Fourier transform derivation.
The truncated sequence is a DFT-even (aka periodic) Hann window. Since the truncated sample has value zero, it is clear from the Fourier series definition that the DTFTs are equivalent. However, the approach followed above results in a significantly different-looking, but equivalent, 3-term expression.
An N-length DFT of the window function samples the DTFT at frequencies f = k/N (cycles per sample) for integer values of k. From the expression immediately above, it is easy to see that only 3 of the N DFT coefficients are non-zero. And from the other expression, it is apparent that all are real-valued. These properties are appealing for real-time applications that require both windowed and non-windowed (rectangularly windowed) transforms, because the windowed transforms can be efficiently derived from the non-windowed transforms by convolution.
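The claim that only 3 of the N DFT coefficients of the periodic Hann window are non-zero (and all real) is easy to verify numerically; the following NumPy sketch uses N = 16.

```python
import numpy as np

N = 16
n = np.arange(N)
w = 0.5 * (1 - np.cos(2 * np.pi * n / N))   # periodic (DFT-even) Hann window

W = np.fft.fft(w)
nonzero = np.flatnonzero(np.abs(W) > 1e-9)
print(nonzero)                        # [ 0  1 15]
print(np.round(W[nonzero].real, 6))   # [ 8. -4. -4.]  -> N/2 at DC, -N/4 at k = +/-1
```

Those three real coefficients are what make windowing-by-convolution cheap: each bin of the windowed transform is obtained by combining the corresponding non-windowed bin with its two neighbours.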
Name
The function is named in honor of von Hann, who used the three-term weighted average smoothing technique on meteorological data. However, the term Hanning function is also conventionally used, derived from the paper in which the term hanning a signal was used to mean applying the Hann window to it. The confusion arose from the similar Hamming function, named after Richard Hamming.
See also
Window function
Apodization
Raised cosine distribution
Raised-cosine filter
Page citations
References
External links
Hann function at MathWorld
Signal processing | Hann function | Technology,Engineering | 438 |
4,234,894 | https://en.wikipedia.org/wiki/AP%20Calculus | Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / BC, AB / BC Calc or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions).
AP Calculus AB
AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular or honors calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams.
Purpose
According to the College Board:
Topic outline
The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus.
Analysis of graphs (predicting and explaining behavior)
Limits of functions (one and two sided)
Asymptotic and unbounded behavior
Continuity
Derivatives
Concept
At a point
As a function
Applications
Higher order derivatives
Techniques
Integrals
Interpretations
Properties
Applications
Techniques
Numerical approximations
Fundamental theorem of calculus
Antidifferentiation
L'Hôpital's rule
Separable differential equations
AP Calculus BC
AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus).
Purpose
According to the College Board,
Topic outline
AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following:
Convergence tests for series
Taylor series
Parametric equations
Polar functions (including arc length in polar coordinates and calculating area)
Arc length calculations using integration
Integration by parts
Improper integrals
Differential equations for logistic growth
Using partial fractions to integrate rational functions
Score distributions show that the pass rate (score of 3 or higher) of AP Calculus BC is higher than that of AP Calculus AB. It can also be noted that about 1/3 as many students take the BC exam as take the AB exam. A possible explanation for the higher scores on BC is that students who take AP Calculus BC are more prepared and advanced in math. The 5-rate is consistently over 40% (much higher than almost all the other AP exams).
AB sub-score distribution
AP Exam
The College Board intentionally schedules the AP Calculus AB exam at the same time as the AP Calculus BC exam to make it impossible for a student to take both tests in the same academic year, though the College Board does not make Calculus AB a prerequisite class for Calculus BC. Some schools do this, though many others only require precalculus as a prerequisite for Calculus BC. The AP awards given by College Board count both exams. However, they do not count the AB sub-score piece of the BC exam.
Format
The structures of the AB and BC exams are identical. Both exams are three hours and fifteen minutes long, comprising a total of 45 multiple choice questions and six free response questions. They are usually administered on a Monday or Tuesday morning in May.
The two parts of the multiple choice section are timed and taken independently.
Students are required to put away their calculators after 30 minutes have passed during the Free-Response section, and only at that point may begin Section II Part B. However, students may continue to work on Section II Part A during the entire Free-Response time, although without a calculator during the later two thirds.
Scoring
The multiple choice section is scored by computer, with a correct answer receiving 1 point, with omitted and incorrect answers not affecting the raw score. This total is multiplied by 1.2 to calculate the adjusted multiple-choice score.
The free response section is hand-graded by hundreds of AP teachers and professors each June. The raw score is then added to the adjusted multiple choice score to receive a composite score. This total is compared to a composite-score scale for that year's exam and converted into an AP score of 1 to 5.
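As a worked example of this arithmetic: the ×1.2 multiple-choice weighting is from the text, while the 9-point value of each free-response question is an illustrative assumption.

```python
mc_correct = 40                    # out of 45 multiple-choice questions
fr_scores = [7, 8, 6, 9, 5, 7]     # six free-response questions, assumed 0-9 each

adjusted_mc = mc_correct * 1.2     # 40 x 1.2 = 48.0
composite = adjusted_mc + sum(fr_scores)
print(f"adjusted MC = {adjusted_mc}, composite = {composite}")
# The composite is then mapped to an AP score of 1-5 using that year's scale.
```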
For the Calculus BC exam, an AB sub-score is included in the score report to reflect their proficiency in the fundamental topics of introductory calculus. The AB sub-score is based on the correct number of answers for questions pertaining to AB-material only.
See also
AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism
AP Precalculus
Glossary of calculus
Mathematics education in the United States
Stand and Deliver (1988 film)
References
External links
AP Calculus AB
College Board description of the AP Calculus AB course content
College Board description of the AP Calculus AB examination
AP Calculus BC
College Board description of the AP Calculus BC course content
College Board description of the AP Calculus BC examination
Further reading
AP courses in mathematics
Calculus
Advanced Placement
77,646,195 | https://en.wikipedia.org/wiki/Famiraprinium | Famiraprinium (also known as SR-95103) is a GABAA receptor antagonist used in scientific research.
It antagonizes certain GABAA receptors with an inhibition constant of 2.2 μM.
Effects
Like other GABA antagonists, it triggers epilepsy-like symptoms. These effects can be counteracted by GABAA receptor agonists such as muscimol, supporting its classification as an antagonist.
References
Pyridazines
Carboxylic acids
Phenyl compounds
GABAA receptor antagonists
Convulsants | Famiraprinium | Chemistry | 119 |
4,055,928 | https://en.wikipedia.org/wiki/Structure%20%28mathematical%20logic%29 | In universal algebra and in model theory, a structure consists of a set along with a collection of finitary operations and relations that are defined on it.
Universal algebra studies structures that generalize the algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures of first-order theories with no relation symbols. Model theory has a different scope that encompasses more arbitrary first-order theories, including foundational structures such as models of set theory.
From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic, cf. also Tarski's theory of truth or Tarskian semantics.
For a given theory in model theory, a structure is called a model if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a semantic model when one discusses the notion in the more general setting of mathematical models. Logicians sometimes refer to structures as "interpretations", whereas the term "interpretation" generally has a different (although related) meaning in model theory; see interpretation (model theory).
In database theory, structures with no functions are studied as models for relational databases, in the form of relational models.
History
In the context of mathematical logic, the term "model" was first applied in 1940 by the philosopher Willard Van Orman Quine, in a reference to mathematician Richard Dedekind (1831 – 1916), a pioneer in the development of set theory. Since the 19th century, one main method for proving the consistency of a set of axioms has been to provide a model for it.
Definition
Formally, a structure can be defined as a triple 𝒜 = (A, σ, I) consisting of a domain A, a signature σ, and an interpretation function I that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature σ, one can refer to it as a σ-structure.
Domain
The domain of a structure is an arbitrary set; it is also called the underlying set of the structure, its carrier (especially in universal algebra), its universe (especially in model theory, cf. universe), or its domain of discourse. In classical first-order logic, the definition of a structure prohibits the empty domain.
Sometimes the notation dom(𝒜) or |𝒜| is used for the domain of 𝒜, but often no notational distinction is made between a structure and its domain (that is, the same symbol 𝒜 refers both to the structure and its domain.)
Signature
The signature of a structure consists of:
a set S of function symbols and relation symbols, along with
a function ar that ascribes to each symbol s a natural number ar(s).
The natural number ar(s) of a symbol s is called the arity of s, because it is the arity of the interpretation of s.
Since the signatures that arise in algebra often contain only function symbols, a signature with no relation symbols is called an algebraic signature. A structure with such a signature is also called an algebra; this should not be confused with the notion of an algebra over a field.
Interpretation function
The interpretation function I of 𝒜 assigns functions and relations to the symbols of the signature. To each function symbol f of arity n is assigned an n-ary function f^𝒜 on the domain. Each relation symbol R of arity n is assigned an n-ary relation R^𝒜 ⊆ Aⁿ on the domain. A nullary (0-ary) function symbol c is called a constant symbol, because its interpretation c^𝒜 can be identified with a constant element of the domain.
When a structure (and hence an interpretation function) is given by context, no notational distinction is made between a symbol s and its interpretation I(s). For example, if f is a binary function symbol of 𝒜, one simply writes f rather than f^𝒜.
Examples
The standard signature σ_f for fields consists of two binary function symbols + and ×, where additional symbols can be derived, such as a unary function symbol − (uniquely determined by +) and the two constant symbols 0 and 1 (uniquely determined by + and × respectively).
Thus a structure (algebra) for this signature consists of a set of elements A together with two binary functions, that can be enhanced with a unary function, and two distinguished elements; but there is no requirement that it satisfy any of the field axioms. The rational numbers ℚ, the real numbers ℝ, and the complex numbers ℂ, like any other field, can be regarded as σ_f-structures in an obvious way:
In all three cases we have the standard signature given by
σ_f = (S_f, ar_f) with S_f = {+, ×, −, 0, 1}, ar_f(+) = ar_f(×) = 2, ar_f(−) = 1, and ar_f(0) = ar_f(1) = 0.
The interpretation function I_ℚ is:
+^ℚ is addition of rational numbers,
×^ℚ is multiplication of rational numbers,
−^ℚ is the function that takes each rational number x to −x, and
0^ℚ is the number 0, and
1^ℚ is the number 1;
and I_ℝ and I_ℂ are similarly defined.
But the ring ℤ of integers, which is not a field, is also a σ_f-structure in the same way. In fact, there is no requirement that any of the field axioms hold in a σ_f-structure.
A signature for ordered fields needs an additional binary relation such as < or ≤, and therefore structures for such a signature are not algebras, even though they are of course algebraic structures in the usual, loose sense of the word.
The ordinary signature for set theory includes a single binary relation ∈. A structure for this signature consists of a set of elements and an interpretation of the ∈ relation as a binary relation on these elements.
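As a concrete (if informal) rendering of these definitions, the sketch below represents a σ-structure as a domain plus a symbol-to-interpretation mapping in Python and evaluates a term in it; the tuple encoding of terms is an arbitrary choice made for illustration.

```python
# The integers with the field signature {+, *, -, 0, 1}; no field axioms are
# required of a structure, only an interpretation for each symbol.
interpretation = {
    "+": lambda a, b: a + b,   # binary function symbol
    "*": lambda a, b: a * b,   # binary function symbol
    "-": lambda a: -a,         # unary function symbol
    "0": 0,                    # constant symbol (nullary function)
    "1": 1,                    # constant symbol (nullary function)
}

def eval_term(term):
    """Evaluate a term encoded as a nested tuple, e.g. ('+', ('1',), ('1',))."""
    symbol, *args = term
    value = interpretation[symbol]
    return value(*(eval_term(a) for a in args)) if args else value

print(eval_term(("+", ("1",), ("*", ("1",), ("1",)))))   # 1 + (1 * 1) = 2
```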
Induced substructures and closed subsets
𝒜 is called an (induced) substructure of ℬ if
𝒜 and ℬ have the same signature σ;
the domain of 𝒜 is contained in the domain of ℬ: A ⊆ B; and
the interpretations of all function and relation symbols agree on A.
The usual notation for this relation is 𝒜 ⊆ ℬ.
A subset B ⊆ A of the domain of a structure 𝒜 is called closed if it is closed under the functions of 𝒜, that is, if the following condition is satisfied: for every natural number n, every n-ary function symbol f (in the signature of 𝒜) and all elements b₁, …, bₙ ∈ B, the result of applying f to the n-tuple (b₁, …, bₙ) is again an element of B: f(b₁, …, bₙ) ∈ B.
For every subset B ⊆ A there is a smallest closed subset of A that contains B. It is called the closed subset generated by B, or the hull of B, and denoted by ⟨B⟩ or ⟨B⟩_𝒜. The operator ⟨ ⟩ is a finitary closure operator on the set of subsets of A.
If 𝒜 = (A, σ, I) and B ⊆ A is a closed subset, then (B, σ, I′) is an induced substructure of 𝒜, where I′ assigns to every symbol of σ the restriction to B of its interpretation in 𝒜. Conversely, the domain of an induced substructure is a closed subset.
The closed subsets (or induced substructures) of a structure form a lattice. The meet of two subsets is their intersection. The join of two subsets is the closed subset generated by their union. Universal algebra studies the lattice of substructures of a structure in detail.
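For a finite structure, the hull ⟨B⟩ can be computed by iterating the functions to a fixed point. A minimal sketch follows; the function symbols are given with their arities, and the example structure (integers modulo 12 with successor) is assumed for illustration.

```python
from itertools import product

def hull(B, functions, max_rounds=100):
    """Smallest subset containing B and closed under the given functions."""
    closed = set(B)
    for _ in range(max_rounds):
        new = {f(*args)
               for f, arity in functions
               for args in product(closed, repeat=arity)}
        if new <= closed:
            return closed            # fixed point: closed under all functions
        closed |= new
    raise RuntimeError("closure did not stabilize (domain may be infinite)")

# In the integers modulo 12 with the successor function, {0} generates everything:
succ = lambda x: (x + 1) % 12
print(sorted(hull({0}, [(succ, 1)])))   # [0, 1, ..., 11]
```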
Examples
Let σ_f be again the standard signature for fields. When regarded as σ_f-structures in the natural way, the rational numbers form a substructure of the real numbers, and the real numbers form a substructure of the complex numbers. The rational numbers are the smallest substructure of the real (or complex) numbers that also satisfies the field axioms.
The set of integers gives an even smaller substructure of the real numbers which is not a field. Indeed, the integers are the substructure of the real numbers generated by the empty set, using this signature. The notion in abstract algebra that corresponds to a substructure of a field, in this signature, is that of a subring, rather than that of a subfield.
The most obvious way to define a graph is a structure with a signature σ consisting of a single binary relation symbol E. The vertices of the graph form the domain of the structure, and for two vertices a and b, E(a, b) means that a and b are connected by an edge. In this encoding, the notion of induced substructure is more restrictive than the notion of subgraph. For example, let G be a graph consisting of two vertices connected by an edge, and let H be the graph consisting of the same vertices but no edges. H is a subgraph of G, but not an induced substructure. The notion in graph theory that corresponds to induced substructures is that of induced subgraphs.
Homomorphisms and embeddings
Homomorphisms
Given two structures 𝒜 and ℬ of the same signature σ, a (σ-)homomorphism from 𝒜 to ℬ is a map h : A → B that preserves the functions and relations. More precisely:
For every n-ary function symbol f of σ and any elements a₁, …, aₙ ∈ A, the following equation holds:
h(f^𝒜(a₁, …, aₙ)) = f^ℬ(h(a₁), …, h(aₙ)).
For every n-ary relation symbol R of σ and any elements a₁, …, aₙ ∈ A, the following implication holds:
R^𝒜(a₁, …, aₙ) ⟹ R^ℬ(h(a₁), …, h(aₙ)),
where R^𝒜, R^ℬ is the interpretation of the relation symbol R of the object theory in the structure 𝒜, ℬ, respectively.
A homomorphism h from 𝒜 to ℬ is typically denoted as h : 𝒜 → ℬ, although technically the function h is between the domains A, B of the two structures 𝒜, ℬ.
For every signature σ there is a concrete category σ-Hom which has σ-structures as objects and σ-homomorphisms as morphisms.
A homomorphism is sometimes called strong if:
For every n-ary relation symbol R of the object theory and any elements b₁, …, bₙ ∈ B such that R^ℬ(b₁, …, bₙ), there are a₁, …, aₙ ∈ A such that R^𝒜(a₁, …, aₙ) and bₖ = h(aₖ) for k = 1, …, n.
The strong homomorphisms give rise to a subcategory of the category σ-Hom that was defined above.
Embeddings
A (σ-)homomorphism h : 𝒜 → ℬ is called a (σ-)embedding if it is one-to-one and
for every n-ary relation symbol R of σ and any elements a₁, …, aₙ ∈ A, the following equivalence holds:
R^𝒜(a₁, …, aₙ) ⟺ R^ℬ(h(a₁), …, h(aₙ))
(where as before R^𝒜, R^ℬ refers to the interpretation of the relation symbol R of the object theory σ in the structure 𝒜, ℬ, respectively).
Thus an embedding is the same thing as a strong homomorphism which is one-to-one.
The category σ-Emb of σ-structures and σ-embeddings is a concrete subcategory of σ-Hom.
Induced substructures correspond to subobjects in σ-Emb. If σ has only function symbols, σ-Emb is the subcategory of monomorphisms of σ-Hom. In this case induced substructures also correspond to subobjects in σ-Hom.
Example
As seen above, in the standard encoding of graphs as structures the induced substructures are precisely the induced subgraphs. However, a homomorphism between graphs is the same thing as a homomorphism between the two structures coding the graph. In the example of the previous section, even though the subgraph H of G is not induced, the identity map id: H → G is a homomorphism. This map is in fact a monomorphism in the category σ-Hom, and therefore H is a subobject of G which is not an induced substructure.
Homomorphism problem
The following problem is known as the homomorphism problem:
Given two finite structures 𝒜 and ℬ of a finite relational signature, find a homomorphism h : 𝒜 → ℬ or show that no such homomorphism exists.
Every constraint satisfaction problem (CSP) has a translation into the homomorphism problem. Therefore, the complexity of CSP can be studied using the methods of finite model theory.
Another application is in database theory, where a relational model of a database is essentially the same thing as a relational structure. It turns out that a conjunctive query on a database can be described by another structure in the same signature as the database model. A homomorphism from the relational model to the structure representing the query is the same thing as a solution to the query. This shows that the conjunctive query problem is also equivalent to the homomorphism problem.
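A brute-force search makes the correspondence concrete for graphs (structures with a single binary relation): each vertex of the source structure is a CSP variable and each edge a constraint. The graphs below are illustrative.

```python
from itertools import product

def homomorphisms(g_vertices, g_edges, h_vertices, h_edges):
    """Yield all maps h with (h(u), h(v)) an edge of H for every edge (u, v) of G."""
    h_edges = set(h_edges)
    for image in product(h_vertices, repeat=len(g_vertices)):
        h = dict(zip(g_vertices, image))
        if all((h[u], h[v]) in h_edges for (u, v) in g_edges):
            yield h

# Homomorphisms from a 4-cycle into a triangle are exactly the proper
# 3-colourings of the 4-cycle; there are 18 of them.
triangle = [(a, b) for a in range(3) for b in range(3) if a != b]
square = [(0, 1), (1, 2), (2, 3), (3, 0)] + [(1, 0), (2, 1), (3, 2), (0, 3)]
print(sum(1 for _ in homomorphisms(range(4), square, range(3), triangle)))  # 18
```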
Structures and first-order logic
Structures are sometimes referred to as "first-order structures". This is misleading, as nothing in their definition ties them to any specific logic, and in fact they are suitable as semantic objects both for very restricted fragments of first-order logic such as that used in universal algebra, and for second-order logic. In connection with first-order logic and model theory, structures are often called models, even when the question "models of what?" has no obvious answer.
Satisfaction relation
Each first-order structure ℳ has a satisfaction relation ℳ ⊨ φ defined for all formulas φ in the language consisting of the language of ℳ together with a constant symbol for each element of M, which is interpreted as that element.
This relation is defined inductively using Tarski's T-schema.
A structure ℳ is said to be a model of a theory T if the language of ℳ is the same as the language of T and every sentence in T is satisfied by ℳ. Thus, for example, a "ring" is a structure for the language of rings that satisfies each of the ring axioms, and a model of ZFC set theory is a structure in the language of set theory that satisfies each of the ZFC axioms.
Definable relations
An n-ary relation R on the universe (i.e. domain) M of the structure ℳ is said to be definable (or explicitly definable cf. Beth definability, or ∅-definable, or definable with parameters from ∅ cf. below) if there is a formula φ(x₁, …, xₙ) such that
R = {(a₁, …, aₙ) ∈ Mⁿ : ℳ ⊨ φ(a₁, …, aₙ)}.
In other words, R is definable if and only if there is a formula φ such that
(a₁, …, aₙ) ∈ R ⇔ ℳ ⊨ φ(a₁, …, aₙ)
is correct.
An important special case is the definability of specific elements. An element m of M is definable in ℳ if and only if there is a formula φ(x) such that ℳ ⊨ φ(n) holds if and only if n = m.
Definability with parameters
A relation R is said to be definable with parameters (or B-definable) if there is a formula φ with parameters from a subset B of M such that R is definable using φ. Every element of a structure is definable using the element itself as a parameter.
Some authors use definable to mean definable without parameters, while other authors mean definable with parameters. Broadly speaking, the convention that definable means definable without parameters is more common amongst set theorists, while the opposite convention is more common amongst model theorists.
Implicit definability
Recall from above that an n-ary relation R on the universe M of ℳ is explicitly definable if there is a formula φ such that
R = {(a₁, …, aₙ) ∈ Mⁿ : ℳ ⊨ φ(a₁, …, aₙ)}.
Here the formula φ used to define a relation R must be over the signature of ℳ, and so φ may not mention R itself, since R is not in the signature of ℳ. If there is a formula φ in the extended language containing the language of ℳ and a new symbol R, and the relation R is the only relation on ℳ such that ℳ ⊨ φ, then R is said to be implicitly definable over ℳ.
By Beth's theorem, every implicitly definable relation is explicitly definable.
Many-sorted structures
Structures as defined above are sometimes called one-sorted structures to distinguish them from the more general many-sorted structures. A many-sorted structure can have an arbitrary number of domains. The sorts are part of the signature, and they play the role of names for the different domains. Many-sorted signatures also prescribe which sorts the functions and relations of a many-sorted structure are defined on. Therefore, the arities of function symbols or relation symbols must be more complicated objects such as tuples of sorts rather than natural numbers.
Vector spaces, for example, can be regarded as two-sorted structures in the following way. The two-sorted signature of vector spaces consists of two sorts V (for vectors) and S (for scalars) and the following function symbols:
+_S and ×_S of type (S, S) → S,
−_S of type (S) → S,
0_S and 1_S of type () → S,
+_V of type (V, V) → V,
−_V of type (V) → V,
0_V of type () → V, and
× of type (S, V) → V.
If V is a vector space over a field F, the corresponding two-sorted structure 𝒱 consists of the vector domain V, the scalar domain F, and the obvious functions, such as the vector zero 0_V^𝒱 = 0 ∈ V, the scalar zero 0_S^𝒱 = 0 ∈ F, or scalar multiplication ×^𝒱 : F × V → V.
Many-sorted structures are often used as a convenient tool even when they could be avoided with a little effort. But they are rarely defined in a rigorous way, because it is straightforward and tedious (hence unrewarding) to carry out the generalization explicitly.
In most mathematical endeavours, not much attention is paid to the sorts. A many-sorted logic however naturally leads to a type theory. As Bart Jacobs puts it: "A logic is always a logic over a type theory." This emphasis in turn leads to categorical logic because a logic over a type theory categorically corresponds to one ("total") category, capturing the logic, being fibred over another ("base") category, capturing the type theory.
Other generalizations
Partial algebras
Both universal algebra and model theory study classes of (structures or) algebras that are defined by a signature and a set of axioms. In the case of model theory these axioms have the form of first-order sentences. The formalism of universal algebra is much more restrictive; essentially it only allows first-order sentences that have the form of universally quantified equations between terms, e.g. ∀x ∀y (x + y = y + x). One consequence is that the choice of a signature is more significant in universal algebra than it is in model theory. For example, the class of groups, in the signature consisting of the binary function symbol × and the constant symbol 1, is an elementary class, but it is not a variety. Universal algebra solves this problem by adding a unary function symbol ⁻¹.
In the case of fields this strategy works only for addition. For multiplication it fails because 0 does not have a multiplicative inverse. An ad hoc attempt to deal with this would be to define 0−1 = 0. (This attempt fails, essentially because with this definition 0 × 0−1 = 1 is not true.) Therefore, one is naturally led to allow partial functions, i.e., functions that are defined only on a subset of their domain. However, there are several obvious ways to generalize notions such as substructure, homomorphism and identity.
Structures for typed languages
In type theory, there are many sorts of variables, each of which has a type. Types are inductively defined; given two types δ and σ there is also a type σ → δ that represents functions from objects of type σ to objects of type δ. A structure for a typed language (in the ordinary first-order semantics) must include a separate set of objects of each type, and for a function type the structure must have complete information about the function represented by each object of that type.
Higher-order languages
There is more than one possible semantics for higher-order logic, as discussed in the article on second-order logic. When using full higher-order semantics, a structure need only have a universe for objects of type 0, and the T-schema is extended so that a quantifier over a higher-order type is satisfied by the model if and only if it is disquotationally true. When using first-order semantics, an additional sort is added for each higher-order type, as in the case of a many sorted first order language.
Structures that are proper classes
In the study of set theory and category theory, it is sometimes useful to consider structures in which the domain of discourse is a proper class instead of a set. These structures are sometimes called class models to distinguish them from the "set models" discussed above. When the domain is a proper class, each function and relation symbol may also be represented by a proper class.
In Bertrand Russell's Principia Mathematica, structures were also allowed to have a proper class as their domain.
See also
Notes
References
External links
Semantics section in Classical Logic (an entry of Stanford Encyclopedia of Philosophy)
Mathematical logic
Mathematical structures
Model theory
Universal algebra | Structure (mathematical logic) | Mathematics | 3,862 |
39,477,473 | https://en.wikipedia.org/wiki/Boletus%20variipes | Boletus variipes is a species of mycorrhizal bolete fungus in the family Boletaceae, native to North America. It was originally described by American mycologist Charles Horton Peck in 1888.
Taxonomy
First described by C. H. Peck in 1888, with Boletus variipes var. fagicola described by Smith and Thiers in 1971.
A 2010 paper analyzing the genetic relationships within Boletus found that what was classified at the time as B. variipes was not monophyletic. Populations from east of the Rocky Mountains were sister to B. hiratsukae of Japan, with those from Central America and southeastern North America were sister to that combined lineage. This required the latter group to be renamed. A third population—from the Philippines—that has been known as B. variipes was more distantly related.
Description
Boletus variipes is a dry, velvety to patchy tan or brown-gray mushroom with frequently prominent white to off-white reticulation on its darker brown stipe. It has a broad, convex to almost flat cap between , with a tendency to become cracked or finely patched in maturity. The flesh is white. The pore surface on the underside is whitish, with pores that appear stuffed when young and yellow to olive as the spores mature, at a density of 1 to 2 pores per millimetre. The stipe is between 8 and 15 cm long and from 1 to 3.5 cm thick, with slightly narrower ends or a widening base. The flesh of the cap and stipe does not discolor when cut or bruised. Spore prints are olive/brown.
Similar species
Boletus variipes is closely related to Boletus edulis.
Distribution and habitat
It is common throughout eastern North America and has been documented in Costa Rica. It is often found under oaks (Quercus) and in mixed deciduous forests of aspen, maple and beech in eastern North America.
Uses
While its odor and taste are mild, the species is a choice edible mushroom.
See also
List of Boletus species
List of North American boletes
References
variipes
Edible fungi
Fungi described in 1888
Fungi of the United States
Taxa named by Charles Horton Peck
Fungi without expected TNC conservation status
Fungus species | Boletus variipes | Biology | 461 |
14,129,063 | https://en.wikipedia.org/wiki/Bradykinin%20receptor%20B2 | Bradykinin receptor B2 is a G-protein coupled receptor for bradykinin, encoded by the BDKRB2 gene in humans.
Mechanism
The B2 receptor (B2R) is a G protein-coupled receptor, probably coupled to Gq and Gi. A 2022 Nature cryo-EM study of human B2R-Gq complexes by Jinkeng Sheng et al. investigated the proximal activation mechanisms of B2R. Sheng et al. propose that upon B2R binding bradykinin or kallidin to a "bulky orthosteric binding pocket," the phenylalanine F8 or F9 residue of bradykinin or kallidin respectively interacts with a "conserved toggle switch" W283. This hydrophobic interaction facilitates the outward movement of transmembrane domain 6 (TM6) of B2R on the cytoplasmic side of the membrane, as well as outward movement of F279, a key residue within the conserved PIF motif of GPCRs (involving proline, isoleucine and phenylalanine). This rearrangement of the PIF motif disrupts the ionic lock formed by the DRY motif and pushes the NPxxY motif towards the activated state, opening an "intracellular cleft" for insertion of the α5-helix of Gq.
Gq stimulates phospholipase C to increase intracellular free calcium and Gi inhibits adenylate cyclase. Furthermore, the receptor stimulates the mitogen-activated protein kinase pathways. It is ubiquitously and constitutively expressed in healthy tissues.
The B2 receptor forms a complex with angiotensin converting enzyme (ACE), and this is thought to play a role in cross-talk between the renin-angiotensin system (RAS) and the kinin–kallikrein system (KKS). The heptapeptide angiotensin (1-7) also potentiates bradykinin action on B2 receptors.
Kallidin also signals through the B2 receptor. An antagonist for the receptor is Hoe 140 (icatibant).
Function
The 9 amino acid bradykinin peptide elicits several responses including vasodilation, edema, smooth muscle spasm and nociceptor stimulation.
Gene
Alternate start codons result in two isoforms of the protein.
See also
Bradykinin receptor
References
Further reading
External links
G protein-coupled receptors | Bradykinin receptor B2 | Chemistry | 534 |
40,622 | https://en.wikipedia.org/wiki/Ohmmeter | An ohmmeter is an electrical instrument that measures electrical resistance (the opposition offered by a circuit or component to the flow of electric current). Multimeters also function as ohmmeters when in resistance-measuring mode. An ohmmeter applies current to the circuit or component whose resistance is to be measured. It then measures the resulting voltage and calculates the resistance using Ohm's law, R = V/I.
An ohmmeter should not be connected to a circuit or component that is carrying a current or is connected to a power source; power should be disconnected before connecting the ohmmeter. Ohmmeters can be connected either in series or in parallel, depending on whether the resistance being measured is part of the circuit or is a shunt resistance.
Micro-ohmmeters (also written microhmmeter or micro ohmmeter) make measurements of low resistance. Megohmmeters (including the trademarked Megger) measure large values of resistance. The unit of measurement for resistance is the ohm (Ω).
Design evolution
The first ohmmeters were based on a type of meter movement known as a 'ratiometer'. These were similar to the galvanometer type movement encountered in later instruments, but instead of hairsprings to supply a restoring force they used conducting 'ligaments'. These provided no net rotational force to the movement. Also, the movement was wound with two coils. One was connected via a series resistor to the battery supply. The second was connected to the same battery supply via a second resistor and the resistor under test. The indication on the meter was proportional to the ratio of the currents through the two coils. This ratio was determined by the magnitude of the resistor under test. The advantages of this arrangement were twofold. First, the indication of the resistance was completely independent of the battery voltage (as long as it actually produced some voltage) and no zero adjustment was required. Second, although the resistance scale was non-linear, the scale remained correct over the full deflection range. By interchanging the two coils a second range was provided. This scale was reversed compared to the first. A feature of this type of instrument was that it would continue to indicate a random resistance value once the test leads were disconnected (the action of which disconnected the battery from the movement). Ohmmeters of this type only ever measured resistance, as they could not easily be incorporated into a multimeter design. Insulation testers that relied on a hand-cranked generator operated on the same principle. This ensured that the indication was wholly independent of the voltage actually produced.
Subsequent designs of ohmmeter provided a small battery to apply a voltage to a resistance via a galvanometer to measure the current through the resistance (battery, galvanometer and resistance all connected in series). The scale of the galvanometer was marked in ohms, because the fixed voltage from the battery assured that as resistance is increased, the current through the meter (and hence deflection) would decrease. Ohmmeters form circuits by themselves; therefore, they cannot be used within an assembled circuit. This design is much simpler and cheaper than the former design, was simple to integrate into a multimeter design, and consequently was by far the most common form of analogue ohmmeter. This type of ohmmeter suffers from two inherent disadvantages. First, the meter needs to be zeroed by shorting the measurement points together and performing an adjustment for zero ohms indication prior to each measurement. This is because as the battery voltage decreases with age, the series resistance in the meter needs to be reduced to maintain the zero indication at full deflection. Second, and consequent on the first, the actual deflection for any given resistor under test changes as the internal resistance is altered. It remains correct at the centre of the scale only, which is why such ohmmeter designs always quote the accuracy "at centre scale only".
A more accurate type of ohmmeter has an electronic circuit that passes a constant current (I) through the resistance, and another circuit that measures the voltage (V) across the resistance. These measurements are digitized with an analog-to-digital converter (ADC), after which a microcontroller or microprocessor performs the division of voltage by current according to Ohm's law and decodes the result to a display, offering the user a reading of the resistance value being measured at that instant. Since meters of this type already measure current, voltage, and resistance at once, such circuits are often used in digital multimeters.
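As a rough sketch of the division step just described (illustrative only: the 12-bit resolution and full-scale constants are assumptions, not the parameters of any real meter), in Python:

```python
def resistance_from_adc(v_counts: int, i_counts: int,
                        v_full_scale: float = 2.0,    # volts at full ADC code (assumed)
                        i_full_scale: float = 1e-3,   # amperes at full ADC code (assumed)
                        adc_max: int = 4095) -> float:
    """Perform the Ohm's-law division a meter's microcontroller would do."""
    volts = v_counts / adc_max * v_full_scale
    amps = i_counts / adc_max * i_full_scale
    if amps == 0:
        return float("inf")   # no current flowing: open circuit
    return volts / amps

# Half-scale voltage reading at a constant 0.5 mA test current:
print(resistance_from_adc(2048, 2048))   # 2000.0 ohms
```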
Precision ohmmeters
For high-precision measurements of very small resistances, the above types of meter are inadequate. This is partly because the change in deflection itself is small when the resistance measured is too small in proportion to the intrinsic resistance of the ohmmeter (which can be dealt with through current division), but mostly because the meter's reading is the sum of the resistance of the measuring leads, the contact resistances, and the resistance being measured. To reduce this effect, a precision ohmmeter has four terminals, called Kelvin contacts. Two terminals carry the current from and to the meter, while the other two allow the meter to measure the voltage across the resistor. In this arrangement, the power source is connected in series with the resistance to be measured through the external pair of terminals, while the second pair connects in parallel with the galvanometer, which measures the voltage drop. With this type of meter, any voltage drop due to the resistance of the first pair of leads and their contact resistances is ignored by the meter. This four-terminal measurement technique is called Kelvin sensing, after William Thomson, Lord Kelvin, who invented the Kelvin bridge in 1861 to measure very low resistances.
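A small worked example of why the four-terminal arrangement matters; the lead and device resistances below are assumed values chosen only to show the scale of the error:

```python
r_dut = 0.010    # 10-milliohm device under test (assumed)
r_lead = 0.050   # resistance of each current-carrying lead (assumed)
i_test = 1.0     # test current in amperes

# Two-terminal measurement: the voltage reading includes both lead drops.
r_two_wire = (i_test * (r_dut + 2 * r_lead)) / i_test
print(r_two_wire)   # 0.110 ohm -- an 11x overestimate

# Kelvin (four-terminal): the sense pair carries negligible current, so
# the measured voltage drop is across the device alone.
r_four_wire = (i_test * r_dut) / i_test
print(r_four_wire)  # 0.010 ohm
```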
References
https://www.codrey.com/electrical/ohmmeter-working-and-types/
External links
DC Metering Circuits chapter from Lessons In Electric Circuits Vol 1 DC free ebook and Lessons In Electric Circuits series.
Electrical resistance and conductance
Electrical meters
Electronic test equipment
Impedance measurements | Ohmmeter | Physics,Mathematics,Technology,Engineering | 1,259 |
177,533 | https://en.wikipedia.org/wiki/STS-107 | STS-107 was the 113th flight of the Space Shuttle program, and the 28th and final flight of Space Shuttle Columbia. The mission ended on February 1, 2003, with the Space Shuttle Columbia disaster, which killed all seven crew members and destroyed the space shuttle. It was the 88th mission after the Space Shuttle Challenger disaster.
The flight launched from Kennedy Space Center in Florida on January 16, 2003. It spent 15 days, 22 hours, 20 minutes, 32 seconds in orbit. The crew conducted a multitude of international scientific experiments. The disaster occurred during reentry while the orbiter was over Texas.
Immediately after the disaster, NASA convened the Columbia Accident Investigation Board to determine the cause of the disintegration. The failure was determined to have been caused by a piece of foam that broke off during launch and damaged the thermal protection system (reinforced carbon-carbon panels and thermal protection tiles) on the leading edge of the orbiter's left wing. During re-entry the damaged wing slowly overheated and came apart, eventually leading to loss of control and disintegration of the vehicle. The cockpit window frame is now exhibited in a memorial inside the Space Shuttle Atlantis Pavilion at the Kennedy Space Center.
The damage to the thermal protection system on the wing was similar to that of Atlantis which had also sustained damage in 1988 during STS-27, the second mission after the Space Shuttle Challenger disaster. However, the damage on STS-27 occurred at a spot that had more robust metal (a thin steel plate near the landing gear), and that mission survived the re-entry.
Crew
Crew seat assignments
Mission highlights
STS-107 carried the SPACEHAB Research Double Module (RDM) on its inaugural flight, the Freestar experiment (mounted on a Hitchhiker Program rack), and the Extended Duration Orbiter pallet. SPACEHAB was first flown on STS-57.
On the day of the experiment, a video taken to study atmospheric dust may have detected a new atmospheric phenomenon, dubbed a "TIGER" (Transient Ionospheric Glow Emission in Red).
On board Columbia was a copy of a drawing by Petr Ginz, the editor-in-chief of the magazine Vedem, who depicted what he imagined the Earth looked like from the Moon when he was a 14-year-old prisoner in the Terezín concentration camp. The copy was in the possession of Ilan Ramon and was lost in the disintegration. Ramon also traveled with a dollar bill received from the Lubavitcher Rebbe.
An Australian experiment, created by students from Glen Waverley Secondary College, was designed to test the reaction of zero gravity on the web formation of the Australian garden orb weaver spider.
Major experiments
Examples of some of the experiments and investigations on the mission.
In SPACEHAB RDM:
9 commercial payloads with 21 investigations;
4 payloads for the European Space Agency with 14 investigations;
1 payload for ISS Risk Mitigation;
18 payloads NASA's Office of Biological and Physical Research (OBPR) with 23 investigations.
In the payload bay attached to RDM:
Combined Two-Phase Loop Experiment (COM2PLEX);
Miniature Satellite Threat Reporting System (MSTRS);
Star Navigation (STARNAV).
FREESTAR
Critical Viscosity of Xenon-2 (CVX-2);
Space Experiment Module (SEM-14);
Mediterranean Israeli Dust Experiment (MEIDEX);
Low Power Transceiver (LPT);
Solar Constant Experiment-3 (SOLCON-3);
Shuttle Ozone Limb Sounding Experiment (SOLSE-2);
Additional payloads
Shuttle Ionospheric Modification with Pulsed Local Exhaust Experiment (SIMPLEX);
Ram Burn Observation (RAMBO).
Because much of the data was transmitted during the mission, there was still a large return on the mission objectives even though Columbia was lost on re-entry. NASA estimated that 30% of the total science data was collected through telemetry to ground stations during the flight. Around 5–10% more data was saved through samples and hard drives recovered intact on the ground after the Space Shuttle Columbia disaster, increasing the total saved experiment data from 30% to 35–40%.
About five or six Columbia payloads encompassing many experiments were successfully recovered in the debris field. Scientists and engineers were able to recover 99% of the data for one of the six FREESTAR experiments, Critical Viscosity of Xenon-2 (CVX-2), which flew unpressurized in the payload bay during the mission, after the viscometer and hard drive were found damaged but largely intact in the debris field in Texas. NASA recovered a commercial payload, Commercial Instrumentation Technology Associates (ITA) Biomedical Experiments-2 (CIBX-2), and ITA was able to increase the total data saved from this payload from 0% to 50%. This experiment studied treatments for cancer, and the micro-encapsulation part of the payload was completely recovered, rising from 0% to 90% data after the samples were found fully intact. The same payload carried numerous crystal-forming experiments by hundreds of elementary and middle school students from across the United States; most of their experiments were found intact in CIBX-2, giving 100% data recovery. The BRIC-14 (moss growth experiment) and BRIC-60 (Caenorhabditis elegans roundworm experiment) samples were found intact in the debris field in east Texas, and 80–87% of these live organisms survived the catastrophe. The moss and roundworm experiments' original primary missions were not nominal, because the samples were not available immediately after landing in their original state (they were discovered many months after the crash), but the samples helped the scientific community greatly in the field of astrobiology and helped form new theories about microorganisms surviving a long trip in outer space while traveling on meteorites or asteroids.
Re-entry
Columbia began re-entry as planned, but the heat shield was compromised due to damage sustained during the ascent. The heat of re-entry was free to spread into the damaged portion of the orbiter, ultimately causing its disintegration and the death of all seven astronauts.
The accident triggered a 7-month investigation and a search for debris, and over 85,000 pieces were collected throughout the initial investigation. This amounted to roughly 38 percent of the orbiter vehicle.
Insignia
The mission insignia itself is the only patch of the shuttle program that is entirely shaped in the orbiter's outline. The central element of the patch is the microgravity symbol, μg, flowing into the rays of the astronaut symbol.
The mission inclination is portrayed by the 39-degree angle of the astronaut symbol to the Earth's horizon. The sunrise is representative of the numerous experiments that are the dawn of a new era for continued microgravity research on the International Space Station and beyond. The breadth of science and the exploration of space is illustrated by the Earth and stars. The constellation Columba (the dove) was chosen to symbolize peace on Earth and the Space Shuttle Columbia. The seven stars also represent the mission crew members and honor the original astronauts who paved the way to make research in space possible. Six stars have five points, the seventh has six points like a Star of David, symbolizing the Israeli Space Agency's contributions to the mission.
An Israeli flag is adjacent to the name of Payload Specialist Ramon, who was the first Israeli in space. The crew insignia or 'patch' design was initiated by crew members Dr. Laurel Clark and Dr. Kalpana Chawla. First-time crew member Clark provided most of the design concepts, while Chawla had led the design of the insignia for STS-87, her maiden voyage. Clark also pointed out that the dove in the Columba constellation was mythologically connected to the Argonauts, the explorers who released the dove.
Wake-up calls
Throughout the shuttle program, sleeping astronauts were often awakened each morning by songs and short pieces of music chosen by their families, friends, and Mission Control, a tradition dating back to the Gemini and Apollo programs. While the crew of STS-107 worked shifts in "red" and "blue" teams to work around the clock, on this mission each shift was still awoken with a "wake-up call"; the only other two-shift shuttle mission to do so was STS-99.
Gallery
See also
List of Space Shuttle missions
Outline of space science
Notes
References
Literature
External links
NASA's Space Shuttle Columbia and Her Crew
NASA STS-107 Crew Memorial web page
NASA's STS-107 Space Research Web Site
Spaceflight Now: STS-107 Mission Report
Press Kit
Article describing experiments which survived the disaster
Article: Astronaut Laurel Clark from Racine, WI
Status reports Detailed NASA status reports for each day of the mission.
Space accidents and incidents in the United States
Space Shuttle Columbia disaster
Space Shuttle missions
Space program fatalities
Spacecraft launched in 2003
Articles containing video clips
February 2003
2003 in Texas
January 2003
2003 in Louisiana
Kalpana Chawla | STS-107 | Engineering | 1,861 |
44,028,105 | https://en.wikipedia.org/wiki/Rhenish-Westphalian%20Coal%20Syndicate | The Rhenish-Westphalian Coal Syndicate (ger.: Rheinisch-Westfälisches Kohlen-Syndikat; abbreviated as RWKS) was a cartel established in 1893 in Essen bringing together the major coal producers in the Ruhr.
The syndicate was set up as coal producers moved towards using shipping rather than railways to deliver their coal to Rotterdam. The cartel co-operated with the Dutch Coal Trade Union, to whom they gave the sole distribution rights for Westphalian coal. Daniël George van Beuningen of the Steenkolen Handels Vereniging was a leading figure in this relationship, greatly increasing the amount of coal imported to Rotterdam and resulting in the cost of using Rhine based barges dropping as their greater use also stimulated technical innovation.
This arrangement led to Rotterdam becoming not just the leading coal transhipment port in the Netherlands but also evolving into the major bunker port in Europe. In 1913 this coal transhipment accounted for over two thirds of the total shipping on the Rhine. By this time the Rhenish-Westphalian Coal Syndicate accounted for 93% of the coal output in the Ruhr and 54% of Germany as a whole.
Emil Kirdorf, an early Nazi party member, was one of the main founders of the Rhenish-Westphalian Coal Syndicate. After the defeat of Nazi Germany in World War II, many members of the coal industry were arrested for their role in the Third Reich.
History
The RWKS was founded in February 1893 by Emil Kirdorf as the successor to various smaller mining cartels. The syndicate, as the main energy supplier to the German Reich and the main coke supplier in continental Europe, was always economically important and controversial:
In 1900, poor planning and/or excessive profit-seeking led to the so-called coal shortage, a supply crisis.
In 1901, the pricing policy of the Rhenish-Westphalian Coal Syndicate triggered the Kartellenquete, a committee of inquiry into the role of the cartels.
Between 1904 and 1911 there were greater tensions between the RWKS and the mining treasury (the Prussian state and its Ruhr mines). In 1904, the syndicate thwarted the takeover of the Hibernia mining company by the Prussian state, which in turn retaliated by being reluctant to approve new explorations.
In 1912 the Prussian state mines were associated with the RWKS, but this association was terminated in 1913.
In 1915, the syndicate threatened to collapse due to the conflicting interests of its members, which had intensified due to the war. The extension of the syndicate contract was only possible under government pressure.
After the November Revolution of 1918, the coal syndicate was transformed into a semi-public corporation with the participation of the Free State of Prussia with expanded co-determination.
In 1923, its headquarters were briefly relocated to Hamburg during the French occupation of the Ruhr area.
In 1934 the cartel was supplemented by the mines in the Aachen mining district and in 1935 by those in the Saarland, and was then sometimes called the “West German coal syndicate”.
In 1941, the RWKS, as a forced cartel, became part of the Reich Coal Association, a steering association of the National Socialist economy.
Before the end of the war in 1945, the sales areas of the Bavarian pitch coal mines within the RWKS were determined.
In 1945 the cartel was officially dissolved by the occupying forces. The military government of the British occupied zone had 44 leading representatives of the syndicate members arrested on September 7, 1945. However, the functions of the RWKS were essentially retained; they were taken over and exercised by successor organizations, which were primarily named differently. These were the Deutscher Kohlenverkauf (German Coal Sale) from 1947 to 1952 and the Gemeinschaftsorganisation Ruhrkohle (GEORG) (Ruhr Coal Community Organization) from 1952 to 1956. The later Ruhrkohle AG, which began in 1968, can be seen as a continuation of the RWKS in corporate form.
See also
F. H. Fentener van Vlissingen
References
External links
Cartels
Organizations established in 1893
1893 in economic history
Energy in Europe
International trade organizations
Coal organizations
Companies based in Essen
Companies of Prussia
Rotterdam
Coal industry | Rhenish-Westphalian Coal Syndicate | Engineering | 865 |
29,975,836 | https://en.wikipedia.org/wiki/Wave%20Motion%20%28journal%29 | Wave Motion is a peer-reviewed scientific journal published by Elsevier. It covers research on the physics of waves, with emphasis on the areas of acoustics, optics, geophysics, seismology, electromagnetic theory, solid and fluid mechanics. Original research articles on analytical, numerical and experimental aspects of wave motion are covered.
The journal was established in 1979 by editor-in-chief Jan D. Achenbach. In 2011, Andrew N. Norris joined as co-editor-in-chief, becoming sole editor-in-chief in 2012. The role of editor-in-chief passed to William J. Parnell in 2017, and K.W. Chow became deputy editor-in-chief at that time.
Abstracting and indexing
The journal is abstracted and indexed in Applied Mechanics Reviews, Current Contents/Engineering, Computing & Technology, Current Contents/Physics, Chemical, & Earth Sciences, Compendex, Inspec, Mathematical Reviews, Scopus, and Zentralblatt MATH. According to the Journal Citation Reports, Wave Motion has a 2022 five-year impact factor of 1.9 and an impact factor of 2.2.
See also
List of periodicals published by Elsevier
References
Physics journals
Waves
Academic journals established in 1979
Elsevier academic journals
English-language journals | Wave Motion (journal) | Physics | 253 |
32,441 | https://en.wikipedia.org/wiki/Video | Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media. Video was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) systems, which, in turn, were replaced by flat-panel displays of several types.
Video systems vary in display resolution, aspect ratio, refresh rate, color capabilities, and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcasts, magnetic tape, optical discs, computer files, and network streaming.
Etymology
The word video comes from the Latin verb video (I see).
History
Analog video
Video evolved from facsimile systems developed in the mid-19th century. Early mechanical video scanners, such as the Nipkow disk, were patented as early as 1884; however, it took several decades before practical video systems could be developed, long after the maturation of film. Film records using a sequence of miniature photographic images visible to the eye when the film is physically examined. Video, by contrast, encodes images electronically, turning the images into analog or digital electronic signals for transmission or recording.
Video technology was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) television systems. Video was originally exclusively a live technology. Live video cameras used an electron beam, which would scan a photoconductive plate with the desired image and produce a voltage signal proportional to the brightness in each part of the image. The signal could then be sent to televisions, where another beam would receive and display the image. Charles Ginsburg led an Ampex research team to develop one of the first practical video tape recorders (VTR). In 1951, the first VTR captured live images from television cameras by writing the camera's electrical signal onto magnetic videotape.
Video recorders were sold for $50,000 in 1956, and videotapes cost US$300 per one-hour reel. However, prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) decks and tapes into the consumer market.
Digital video
Digital video is capable of higher quality and, eventually, a much lower cost than earlier analog technology. After the commercial introduction of the DVD in 1997 and later the Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted. Advances in computer technology allow even inexpensive personal computers and smartphones to capture, store, edit, and transmit digital video, further reducing the cost of video production and allowing programmers and broadcasters to move to tapeless production. The advent of digital broadcasting and the subsequent digital television transition are in the process of relegating analog video to the status of a legacy technology in most parts of the world. The development of high-resolution video cameras with improved dynamic range and color gamuts, along with the introduction of high-dynamic-range digital intermediate data formats with improved color depth, has caused digital video technology to converge with film technology. The use of digital cameras in Hollywood has now surpassed the use of film cameras.
Characteristics of video streams
Number of frames per second
Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second (frame/s) for old mechanical cameras to 120 or more frames per second for new professional cameras. PAL standards (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa, etc.) specify 25 frame/s, while NTSC standards (United States, Canada, Japan, etc.) specify 29.97 frame/s. Film is shot at a slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a moving image is about sixteen frames per second.
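For NTSC, the 24 frame/s film-to-video transfer is commonly handled with 3:2 pulldown, which spreads four film frames over ten interlaced fields; a minimal Python sketch of the field pattern (illustrative, ignoring the 0.1% slowdown to 23.976 frame/s):

```python
def three_two_pulldown(film_frames):
    """Map 24 frame/s film onto ~60 fields/s video: frames alternately
    contribute 2 and 3 fields, so 4 frames -> 10 fields (5 video frames)."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

print(three_two_pulldown(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```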
Interlaced vs. progressive
Video can be interlaced or progressive. In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is the optimum spatial resolution of both the stationary and moving parts of the image. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second. Interlacing retains detail while requiring lower bandwidth compared to progressive scanning.
In interlaced video, the horizontal scan lines of each complete frame are treated as if numbered consecutively and captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines. Analog display devices reproduce each frame, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of rapidly moving parts of the image when viewed on an interlaced CRT display.
NTSC, PAL, and SECAM are interlaced formats. Abbreviated video resolution specifications often include an i to indicate interlacing. For example, PAL video format is often described as 576i50, where 576 indicates the number of visible horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second.
When displaying a natively interlaced signal on a progressive scan device, the overall spatial resolution is degraded by simple line doubling; artifacts such as flickering or "comb" effects in moving parts of the image appear unless special signal processing eliminates them. A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD, or satellite source on a progressive scan device such as an LCD television, digital video projector, or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material.
Aspect ratio
Aspect ratio describes the proportional relationship between the width and height of video screens and video picture elements. All popular video formats are rectangular, and this can be described by a ratio between width and height. The ratio of width to height for a traditional television screen is 4:3, or about 1.33:1. High-definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1.
Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard and the corresponding anamorphic widescreen formats. The 720 by 480 pixel raster uses thin pixels on a 4:3 aspect ratio display and fat pixels on a 16:9 display.
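The thin/fat pixel behaviour follows from dividing the display aspect ratio by the storage aspect ratio; a short Python check using exact fractions:

```python
from fractions import Fraction

storage = Fraction(720, 480)   # CCIR 601 active raster, width:height = 3:2

for display in (Fraction(4, 3), Fraction(16, 9)):
    par = display / storage    # pixel aspect ratio = display AR / storage AR
    shape = "thin" if par < 1 else "fat"
    print(display, "->", par, f"({shape} pixels)")
# 4/3  -> 8/9   (thin pixels)
# 16/9 -> 32/27 (fat pixels)
```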
The popularity of viewing video on mobile phones has led to the growth of vertical video. Mary Meeker, a partner at Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers, highlighted the growth of vertical video viewing in her 2015 Internet Trends Report, growing from 5% of video viewing in 2010 to 29% in 2015. Vertical video ads like Snapchat's are watched in their entirety nine times more frequently than landscape video ads.
Color model and depth
The color model describes the video color representation, mapping encoded color values to the visible colors reproduced by the system. There are several such representations in common use: typically, YIQ is used in NTSC television, YUV is used in PAL television, YDbDr is used by SECAM television, and YCbCr is used for digital video.
The number of distinct colors a pixel can represent depends on the color depth expressed in the number of bits per pixel. A common way to reduce the amount of data required in digital video is by chroma subsampling (e.g., 4:4:4, 4:2:2, etc.). Because the human eye is less sensitive to details in color than brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block, and the same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed, but it reduces the number of distinct points at which the color changes.
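The 50% and 75% chrominance reductions quoted above can be reproduced by counting samples; a short Python sketch (assuming 8-bit samples and a 1920×1080 frame):

```python
def frame_bytes(width, height, scheme, bits=8):
    """Approximate raw frame size for common Y'CbCr subsampling schemes."""
    chroma_per_pixel = {"4:4:4": 2.0, "4:2:2": 1.0, "4:2:0": 0.5}[scheme]
    samples = width * height * (1.0 + chroma_per_pixel)  # luma + chroma
    return samples * bits / 8

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(scheme, round(frame_bytes(1920, 1080, scheme) / 1e6, 2), "MB")
# 4:4:4 6.22 MB; 4:2:2 4.15 MB (chroma halved); 4:2:0 3.11 MB (chroma quartered)
```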
Video quality
Video quality can be measured with formal metrics like peak signal-to-noise ratio (PSNR) or through subjective video quality assessment using expert observation. Many subjective video quality methods are described in the ITU-T recommendation BT.500. One of the standardized methods is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video, followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying."
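PSNR has the closed form 10·log₁₀(MAX²/MSE), where MSE is the mean squared error between the reference and the impaired frame; a minimal Python sketch for 8-bit samples:

```python
import math

def psnr(reference, impaired, max_value=255):
    """Peak signal-to-noise ratio, in dB, between equal-length sample lists."""
    mse = sum((r - i) ** 2 for r, i in zip(reference, impaired)) / len(reference)
    if mse == 0:
        return math.inf   # identical signals
    return 10 * math.log10(max_value ** 2 / mse)

print(round(psnr([100, 120, 140, 160], [101, 119, 142, 157]), 1))  # 42.4 dB
```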
Video compression method (digital only)
Uncompressed video delivers maximum quality, but at a very high data rate. A variety of methods are used to compress video streams, with the most effective ones using a group of pictures (GOP) to reduce spatial and temporal redundancy. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as intraframe compression and is closely related to image compression. Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as interframe compression, including motion compensation and other techniques. The most common modern compression standards are MPEG-2, used for DVD, Blu-ray, and satellite television, and MPEG-4, used for AVCHD, mobile phones (3GP), and the Internet.
Stereoscopic
Stereoscopic video for 3D film and other applications can be displayed using several different methods:
Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed simultaneously by using light-polarizing filters 90 degrees off-axis from each other on two video projectors. These separately polarized channels are viewed wearing eyeglasses with matching polarization filters.
Anaglyph 3D, where one channel is overlaid with two color-coded layers. This left and right layer technique is occasionally used for network broadcasts or recent anaglyph releases of 3D movies on DVD. Simple red/cyan plastic glasses provide the means to view the images discretely to form a stereoscopic view of the content.
One channel with alternating left and right frames for the corresponding eye, using LCD shutter glasses that synchronize to the video to alternately block the image for each eye, so the appropriate eye sees the correct frame. This method is most common in computer virtual reality applications, such as in a Cave Automatic Virtual Environment, but reduces effective video framerate by a factor of two.
Formats
Different layers of video transmission and storage each provide their own set of formats to choose from.
For transmission, there is a physical connector and signal protocol (see List of video connectors). A given physical link can carry certain display standards that specify a particular refresh rate, display resolution, and color space.
Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file system as files, which have their own formats. In addition to the physical format used by the data storage device or transmission medium, the stream of ones and zeros that is sent must be in a particular digital video coding format, for which a number of formats are available.
Analog video
Analog video is a video signal represented by one or more analog signals. Analog color video signals include luminance (Y) and chrominance (C). When combined into one channel, as is the case among others with NTSC, PAL, and SECAM, it is called composite video. Analog video may be carried in separate channels, as in two-channel S-Video (YC) and multi-channel component video formats.
Analog video is used in both consumer and professional television production applications.
Digital video
Digital video signal formats have been adopted, including serial digital interface (SDI), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI) and DisplayPort Interface.
Transport medium
Video can be transmitted or transported in a variety of ways, including wireless terrestrial television as an analog or digital signal, or coaxial cable in a closed-circuit system as an analog signal. Broadcast or studio cameras use a single or dual coaxial cable system using the serial digital interface (SDI). See List of video connectors for information about physical connectors and related signal standards.
Video may be transported over networks and other shared digital communications links using, for instance, MPEG transport stream, SMPTE 2022 and SMPTE 2110.
Display standards
Digital television
Digital television broadcasts use the MPEG-2 and other video coding formats and include:
ATSC – United States, Canada, Mexico, Korea
Digital Video Broadcasting (DVB) – Europe
ISDB – Japan
ISDB-Tb – uses the MPEG-4 video coding format – Brazil, Argentina
Digital multimedia broadcasting (DMB) – Korea
Analog television
Analog television broadcast standards include:
Field-sequential color system (FCS) – US, Russia; obsolete
Multiplexed Analogue Components (MAC) – Europe; obsolete
Multiple sub-Nyquist sampling encoding (MUSE) – Japan
NTSC – United States, Canada, Japan
EDTV-II "Clear-Vision" - NTSC extension, Japan
PAL – Europe, Asia, Oceania
PAL-M – PAL variation, Brazil
PAL-N – PAL variation, Argentina, Paraguay and Uruguay
PALplus – PAL extension, Europe
RS-343 (military)
SECAM – France, former Soviet Union, Central Africa
CCIR System A
CCIR System B
CCIR System G
CCIR System H
CCIR System I
CCIR System M
An analog video format consists of more information than the visible content of the frame. Preceding and following the image are lines and pixels containing metadata and synchronization information. This surrounding margin is known as a blanking interval or blanking region; the horizontal and vertical front porch and back porch are the building blocks of the blanking interval.
Computer displays
Computer display standards specify a combination of aspect ratio, display size, display resolution, color depth, and refresh rate. A list of common resolutions is available.
Recording
Early television was almost exclusively a live medium, with some programs recorded to film for historical purposes using Kinescope. The analog video tape recorder was commercially introduced in 1951. The following list is in rough chronological order. All formats listed were sold to and used by broadcasters, video producers, or consumers; or were important historically.
VERA (BBC experimental format ca. 1952)
2" Quadruplex videotape (Ampex 1956)
1" Type A videotape (Ampex)
1/2" EIAJ (1969)
U-matic 3/4" (Sony)
1/2" Cartrivision (Avco)
VCR, VCR-LP, SVR
1" Type B videotape (Robert Bosch GmbH)
1" Type C videotape (Ampex, Marconi and Sony)
2" Helical Scan Videotape (IVC) (1975)
Betamax (Sony) (1975)
VHS (JVC) (1976)
Video 2000 (Philips) (1979)
1/4" CVC (Funai) (1980)
Betacam (Sony) (1982)
VHS-C (JVC) (1982)
HDVS (Sony) (1984)
Video8 (Sony) (1986)
Betacam SP (Sony) (1987)
S-VHS (JVC) (1987)
Pixelvision (Fisher-Price) (1987)
UniHi 1/2" HD (1988)
Hi8 (Sony) (mid-1990s)
W-VHS (JVC) (1994)
Digital video tape recorders offered improved quality compared to analog recorders.
Betacam IMX (Sony)
D-VHS (JVC)
D-Theater
D1 (Sony)
D2 (Sony)
D3
D5 HD
D6 (Philips)
Digital-S D9 (JVC)
Digital Betacam (Sony)
Digital8 (Sony)
DV (including DVC-Pro)
HDCAM (Sony)
HDV
ProHD (JVC)
MicroMV
MiniDV
Optical storage mediums offered an alternative, especially in consumer applications, to bulky tape formats.
Blu-ray Disc (Sony)
China Blue High-definition Disc (CBHD)
DVD (was Super Density Disc, DVD Forum)
Professional Disc
Universal Media Disc (UMD) (Sony)
Enhanced Versatile Disc (EVD, Chinese government-sponsored)
HD DVD (NEC and Toshiba)
HD-VMD
Capacitance Electronic Disc
Laserdisc (MCA and Philips)
Television Electronic Disc (Teldec and Telefunken)
VHD (JVC)
Video CD
Digital encoding formats
A video codec is software or hardware that compresses and decompresses digital video. In the context of video compression, codec is a portmanteau of encoder and decoder, while a device that only compresses is typically called an encoder, and one that only decompresses is a decoder. The compressed data format usually conforms to a standard video coding format. The compression is typically lossy, meaning that the compressed video lacks some information present in the original video. A consequence of this is that decompressed video has lower quality than the original, uncompressed video because there is insufficient information to accurately reconstruct the original video.
CCIR 601 (ITU-T)
H.261 (ITU-T)
H.263 (ITU-T)
H.264/MPEG-4 AVC (ITU-T + ISO)
H.265
M-JPEG (ISO)
MPEG-1 (ISO)
MPEG-2 (ITU-T + ISO)
MPEG-4 (ISO)
Ogg-Theora
VP8-WebM
VC-1 (SMPTE)
See also
General
Index of video-related articles
Sound recording and reproduction
Video editing
Videography
Video format
360-degree video
Cable television
Color television
Telecine
Timecode
Volumetric capture
Video usage
Closed-circuit television
Fulldome
Interactive video
Video art
Video feedback
Video sender
Video synthesizer
Videotelephony
Video screen recording software
Bandicam
CamStudio
Camtasia
Zight App
Fraps
References
External links
Format Descriptions for Moving Images
Digital television
High-definition television
Display technology
Television terminology
History of television
Media formats
Data compression | Video | Engineering | 3,908 |
224,389 | https://en.wikipedia.org/wiki/Direct%20insolation | Direct insolation is the insolation measured at a given location on Earth with a surface element perpendicular to the Sun's rays, excluding diffuse insolation (the solar radiation that is scattered or reflected by atmospheric components in the sky). Direct insolation is equal to the solar irradiance above the atmosphere minus the atmospheric losses due to absorption and scattering. While the solar irradiance above the atmosphere varies with the Earth–Sun distance and solar cycles, the losses depend on the time of day (length of light's path through the atmosphere depending on the solar elevation angle), cloud cover, humidity, and other impurities.
Simplified formula
A simple formula gives the approximate level of direct insolation when there are no clouds:

$$I_D = 1.353\ \text{kW/m}^2 \times 0.7^{\left(AM^{0.678}\right)}$$

where AM is the airmass, given by

$$AM = \frac{1}{\cos \theta}$$

with θ being the zenith angle (90° minus the altitude) of the sun.
For the sun at the zenith, this gives 947 W/m2. However, another source states that direct sunlight under these conditions, with 1367 W/m2 above the atmosphere, is about 1050 W/m2, and total insolation about 1120 W/m2.
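A short Python evaluation of the formula above (clear-sky conditions and the plane-parallel airmass approximation assumed):

```python
import math

def direct_insolation(zenith_deg):
    """Approximate clear-sky direct insolation in W/m^2."""
    am = 1.0 / math.cos(math.radians(zenith_deg))  # airmass
    return 1353.0 * 0.7 ** (am ** 0.678)

print(round(direct_insolation(0)))    # 947 W/m^2: sun at the zenith
print(round(direct_insolation(60)))   # airmass 2 -> about 765 W/m^2
```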
Average direct insolation
For practical purposes, a time-average of the direct insolation over the course of the year is commonly used. This averaging takes into account the absence of sunlight during the night, increased scatter in the morning and evening hours, average effects of cloud cover and smog, as well as seasonal variations of the mid-day solar elevation.
Units of measurement
Direct insolation is measured in watts per square metre (W/m2) or kilowatt-hours per square metre per day (kW·h/(m2·day)).
1 kW·h/(m2·day) = 1,000 W · 1 hour / ( 1 m2 · 24 hours) = 41.67 W/m2
In the case of photovoltaics, average direct insolation is commonly measured in terms of peak direct insolation as kWh/(kWp·y)
(kilowatt hours per year per kilowatt peak rating)
Applications
Since radiation directly from the sun can be focused with mirrors and lenses, it can be applied to concentrated solar thermal (CST) systems. Because clouds and aerosols cause the direct insolation to fluctuate throughout the day, forecasting the available resource is important in these applications.
References
External links
National Science Digital Library - Direct Insolation
Atmospheric radiation
Visibility | Direct insolation | Physics,Mathematics | 507 |
142,902 | https://en.wikipedia.org/wiki/List%20of%20historic%20houses | List of historic houses is a link page for any stately home or historic house.
Algeria
Villa Montfeld, Algiers
Australia
List of historic houses in South Australia
Houses in New South Wales
Houses in Sydney
List of heritage houses in Sydney
Belgium
List of castles and châteaux in Belgium
China
Historic houses in Hangzhou
Denmark
List of historic houses in Denmark
List of historic houses in metropolitan Copenhagen
Estonia
List of palaces and manor houses in Estonia
France
List of châteaux in France
Ireland
List of historic houses in the Republic of Ireland
Italy
List of palaces in Italy
Preserved Ancient Roman Imperial edifices are quite abundant on the Palatine Hill.
Latvia
List of palaces and manor houses in Latvia
Libya
Karamanly House Museum
Mexico
List of historic house museums in Mexico
Morocco
Dar Adiyel
Dar Ba Mohammed Chergui
Dar Cherifa
Dar Glaoui
Dar Jamai, Fez
Dar Jamai, Meknes
Dar Moqri
Dar Mnebhi, Fez
Dar Mnebhi, Marrakesh
Dar Moulay Ali
Dar Si Said
Mouassine Museum
Kasbah Amridil
Villa Taylor, Marrakesh
Netherlands
Rietveld Schröder House
Poland
Holy Father John Paul II Family Home in Wadowice
South Africa
Eerste Pastorie Winburg
Sweden
List of castles and palaces in Sweden
United Kingdom
List of country houses in the United Kingdom
United States
List of historic houses in Florida
List of historic houses in Kentucky
List of historic houses in Massachusetts
Historic houses in Missouri
Historic houses in Nebraska
Historic houses in Virginia
Historic houses in Pennsylvania
See also
United States National Register of Historic Places listings
List of abbeys and priories
List of buildings
List of castles
List of museums
References
External links
Estonian Manors Portal - the English version introduces the 438 well-preserved manors (manor-houses) in Estonia
Architecture lists
Historic preservation
List
Lists of buildings and structures | List of historic houses | Engineering | 371 |
5,190,807 | https://en.wikipedia.org/wiki/Sunway%20Pyramid | Sunway Pyramid is a shopping mall located in Bandar Sunway, Subang Jaya, Selangor which was developed by the Sunway Group.
History
Sunway Pyramid was designed by the design director of Sunway, Nelson Yong. After several architects were unable to meet expectations for the building's façade, Yong assumed responsibility for the interior design and architectural concept. According to Yong, he immediately had an inspiration and sketched the iconic lion head, the pyramid and roughly how the building facade would look like after meeting with Sunway's chairman, Tan Sri Dr. Jeffrey Cheah.
The shopping mall was opened in July 1997. The mall was constructed and designed in the Egyptian Revival architectural style, with a prominent giant lion statue in the main entrance. This statue was designed to resemble the Great Sphinx of Giza with the large pyramid behind it. According to various sources, the face of the statue was actually intended to be a face of a man, with some resemblance to the founder and current chairman of the Sunway Group, Tan Sri Dr. Jeffrey Cheah. Once the government heard about it, they rejected the proposal citing religious conflicts between the ancient religion of Egypt and modern-day Islam. This story was reportedly confirmed by Cheah himself. Therefore, a face of a lion was ultimately chosen instead.
In 2007, an expansion known as SP2 increased the net lettable area from 886,000 sq. ft. to 1,656,000 sq. ft. From October to December 2011, the 'Canopy Walk Extension Phase 3' project improved the northeast dining zone, now known as Oasis Boulevard East. Another expansion, SP3, was completed in 2015, adding a net lettable area of 62,000 sq. ft. This area is now known as Sunway Pyramid West and is linked to the main complex via an air-conditioned bridge.
The mall features two anchor tenants which are Parkson and Jaya Grocer. Former anchor tenants include AEON, which operated in the mall for 16 years and Cold Storage, which reportedly closed in 2019.
A reconfigured retail space spanning 300,000 sq. ft., themed 'The Oasis', opened on 1 November 2024, replacing the space previously occupied by AEON. The space now houses several new tenants, such as a LEGO store, a tech megastore (TMT), a Brands for Less outlet and a MUJI department store (the largest in Malaysia). The lower ground section, 'Oasis Avenue' (which replaced the former AEON supermarket), currently houses new dining establishments such as DIN by Din Tai Fung, TORII Teppanyaki, Dandy, VCR, Burger King, and The Coffee Bean and Tea Leaf, as well as the Jaya Grocer anchor tenant (the supermarket was previously located at the basement level). The area is directly linked to the shopping mall through multiple entrances, elevators, and escalators, providing seamless access.
A second expansion, 'Terrace', will be an outdoor space located near the main entrance (facing the New Pantai Expressway), and is slated for completion in the second quarter of 2025.
Management
The mall is managed by Sunway REIT Management Sdn Bhd.
Mall zones
Asian Avenue
Oasis Boulevard East
Fashion Central
Marrakesh
Orange Zone
Blue Zone
Green Zone
Red Zone
The Link
Oasis
Terrace (slated for completion in 2025)
Access
Rail
The mall is connected to the Sunway Lagoon BRT station by a pedestrian bridge and is near the Setia Jaya and USJ 7 stations.
The mall and its surrounding areas were previously served by Sunway Monorail, which operated between the year 2000 until 2007.
Car
In 2007, a dedicated ramp from the New Pantai Expressway to the mall's car park was constructed; it is located in front of the mall. The intersection of the Kuala Lumpur–Port Klang highway (Federal Route 2) and the Damansara–Puchong Expressway lies near the mall, along with an interchange with the Shah Alam Expressway. The parking lot can accommodate 10,000 cars and uses a ticketless and cashless parking system.
Walkways
In 2014, parts of the Sunway Monorail were modified into an extensive elevated walkway network with shade, now known as Canopy Walk. The Canopy Walk links the mall directly to most nearby landmarks, thus eliminating the need for pedestrians to cross the road or drive to get to places in the city. The western section of the Canopy Walk links the mall to Sunway University, Sunway Lagoon and Monash University Malaysia, as well as the SunU-Monash BRT station located in front of the university. Whereas the eastern section of the Canopy Walk, also known as the Eco Walk, links the mall to Sunway Pinnacle, Menara Sunway, Sunway Medical Centre and Sunway Geo Avenue, as well as the SunMed BRT station, which is also connected to Taylor's University via an additional 800-meter walkway completed in 2015.
As of 2024, the southern section of the Canopy Walk (from Sunway University to Monash University) is undergoing upgrading works until November 2024 due to the construction of the new Sunway Square located in the Sunway South Quay, which will include an upcoming performing arts venue as well as an extension of Sunway University. The Canopy Walk section from Sunway University to Sunway Square will be upgraded with a new design and air-conditioning.
The mall is also directly accessible from Sunway Pyramid Hotel and Sunway Resort Hotel via The Link, a retail and dining zone completed in 2022 which contains restaurant franchises such as Din Tai Fung and Haidilao Hot Pot (which replaced the area formerly occupied by Taste Enclave, a food court). The Link is also connected to the Sunway Lagoon Surf Beach entrance via a flight of escalators.
Gallery
See also
Egyptian Revival
Sunway University
Sunway Lagoon
References
1997 establishments in Malaysia
Shopping malls in Selangor
Shopping malls established in 1997
Pyramids in Asia
Subang Jaya
Sunway Group
Ice rinks | Sunway Pyramid | Engineering | 1,221 |
35,091,347 | https://en.wikipedia.org/wiki/Kano%20River%20Project | Kano River Project is a modern integrated agricultural land use development in Northern Nigeria. The Kano River is locally called Kogin Kano. The project is a large-scale irrigation project developed under the authority of the Hadejia-Jama'are River Basin Development Authority.
First phase of the Project
The project was started in 1971, and the initial research was conducted in 1976–77 and restricted to 3000 acres.
Commissioning
The Kano River Project irrigation scheme was commissioned in 2023 by the former vice president, Prof. Yemi Osinbajo.
Environment and historical development
The idea of the project may have originated in the 1960s, following extensive land use surveys and technical assistance by the British Overseas Development Authority (ODA) and USAID. The principal engineering partner of the KRP is the Netherlands Engineering and Construction Company (NEDECCO). The project started in earnest after the Nigerian civil war in the late 1960s. The Kano River Project (KRP) covers an extensive floodplain spanning the Kano River, the Challawa River, and their convergence through the Hadejia and Jama'are Rivers. The floodplains of these rivers were only locally tapped until the development of the KRP. The construction of the Tiga and Challawa Gorge dams upstream was the backbone of the KRP, a development that curtailed flooding. The maximum extent of flooding has declined from 300,000 ha in the 1960s to around 70,000 to 100,000 ha. The Federal Government of Nigeria took over the custody of the KRP through the Hadejia-Jama'are River Basin Development Authority.
Other officials who participated at the commissioning included the head of the Jigawa State Ministry of Water Resources, Hon. Ibrahim Muhammed Garba; the Birnin Kudu Local Government Executive, Magagi Yusuf; state government officials; and certain chiefs and representative heads of the Federal Ministry of Water Resources and Sanitation, together with community leaders, women, youths, and children.
Economic values
KRP is meant to be a large-scale agricultural project with a focus on irrigation. This major irrigation scheme is planned to cover 66,000 ha. The KRP is subdivided into phases; for now only 22,000 ha, or KRP 1, is being developed. The project is dependent on the Tiga Dam, Bagauda Dam, and Challawa Dam and the floodplains around them. It is suggested that the net economic benefits of the floodplain (agriculture, fishing, fuelwood) were at least US$32 per 1000 m3 of water (at 1989 prices). UNEP finds that the returns per crop grown in the Kano River Project were at most only US$1.73 per 1000 m3, and when the operational costs are included, the net benefits of the Project are reduced to US$0.04 per 1000 m3. The development of the KRP has changed the economic conditions of many local people who are actively engaged in irrigation activities. Various cash crops are produced under the KRP irrigation projects. These include tomato, pepper, rice, wheat, corn, okra and many others grown for local consumption. The produce is mainly sent to local markets in Kano and to many places in southern Nigeria.
Challenges
The KRP has been criticized for causing landscape desiccation in the Lake Chad basin through the impounding of water in dams. Release of water from dams also causes flooding downstream. Progress has been slow: since its commencement in the 1960s and 1970s, even KRP 1 is yet to be fully developed. Another challenge is land tenure; the way and manner in which land is managed is not transparent. Management of water is also one of the challenges affecting the efficiency and sustainability of the KRP. Pollution is a further critical ecological challenge; the major sources of pollution are agrochemical overdose and industrial effluents.
References
Agriculture in Nigeria
Irrigation projects | Kano River Project | Engineering | 780 |
24,316,816 | https://en.wikipedia.org/wiki/Astronomical%20Society%20Ru%C4%91er%20Bo%C5%A1kovi%C4%87 | Astronomical Society Ruđer Bošković () is an astronomical society in Belgrade, Serbia. Founded in 1934 by a group of students, it is the oldest one in the Balkans. Initially having only several members, today it gathers more than 700 astronomy lovers. It is named after Ruđer Bošković.
The main role of the Society is popularisation of astronomy. The Society also practices amateur astronomy observations. To accomplish this, in 1964, the Society founded the Public Observatory, which is still located in the adapted Despot's Tower in Kalemegdan, Belgrade. The Belgrade Planetarium, one of only two planetariums in Serbia, was also founded by the Society, in 1970. It is located in the lower part of Kalemegdan Fortress, in a former Turkish bath. The Society has published a popular science magazine called Vasiona since 1953.
History
A group of students at the Belgrade University decided in 1934 to form an astronomical society focused on amateur observations and astronomy popularization. Before the World War II, the Society published the magazine Saturn, and organized popular lectures and a few observation trips. Thanks to the Society's efforts, the translation of the book Stars and Atoms by Sir Arthur Eddington was published. All the Society's activities were banned during the German occupation in 1941.
The Society resumed its work in 1953 as the Astronomical Society Ruđer Bošković and began publishing the magazine Vasiona. Publication of books was also one of the Society's main activities: Ruđer Bošković's Eclipses of the Sun and the Moon (translated and commented by Nenad Janković) was published.
In 1964 the Society established the Public Observatory, located in the Despot's Tower in Kalemegdan. On the terrace of the Observatory a Zeiss refracting telescope (110/200) is placed. The Society also acquired a ZKP-2 planetarium projector in 1970 and opened the Belgrade Planetarium, with a dome diameter of 8 m and 80 seats. The Planetarium is mainly visited by students of primary and high schools.
During the 1960s, 1970s and 1980s, amateur astronomical observations of the Sun, occultations, binary and variable stars and planetary phenomena were regular. Excessive light pollution around the Observatory in the last 5–10 years has limited the Society's activities mainly to theoretical work and the application of computers in astronomy. Every year the Society organizes a few courses for the popularization of astronomy, such as: the Astronomy Course for beginners, held twice annually; the Belgrade Astronomy Weekend (BAW) in June, which consists of several lectures on different topics; and the Summer Astronomical Gatherings in August and September, with several lectures on a similar topic. All courses are held in the Society's planetarium.
See also
Astronomy in Serbia
Belgrade Planetarium
Vasiona
List of astronomical societies
References
External links
Official website
English main page
Astronomy in Serbia
Amateur astronomy organizations
Scientific organizations based in Serbia
1934 establishments in Yugoslavia
1934 establishments in Serbia | Astronomical Society Ruđer Bošković | Astronomy | 590 |
22,305,302 | https://en.wikipedia.org/wiki/G%C3%B6m%C3%B6ri%20trichrome%20stain | Gömöri trichrome stain is a histological stain used on muscle tissue.
It can be used to test for certain forms of mitochondrial myopathy.
It is named for George Gömöri, who developed it in 1950.
References
External links
Staining
1950 introductions | Gömöri trichrome stain | Chemistry,Biology | 54 |
13,689,691 | https://en.wikipedia.org/wiki/Whangaroa | Whangaroa, also known as Whangaroa Village to distinguish it from the larger area of the former Whangaroa County, is a settlement on Whangaroa Harbour in the Far North District of New Zealand. It is 8 km north-west of Kaeo and 35 km north-west of Kerikeri. The harbour is almost landlocked and is popular both as a fishing spot in its own right and as a base for deep-sea fishing.
History
The harbour was the scene of one of the most notorious incidents in early New Zealand history, the Boyd massacre. In December 1809 almost all the crew and 70 passengers were killed as utu (revenge) for the mistreatment of Te Ara, the son of a Ngāti Uru chief, who had been in the crew of the ship. Several days later the ship was burnt out after gunpowder was accidentally ignited. Relics of the Boyd are now in a local museum.
On 16 July 1824 on a voyage to Sydney from Tahiti, the crew and passengers of the colonial schooner Endeavour (Capt John Dibbs) stopped in Whangaroa Harbour. An altercation with the local Māori Ngāti Pou hapū (subtribe) of the Ngā Puhi iwi resulted in an incident where Maori warriors took control of the Endeavour and menaced the crew. The situation was defused by the timely arrival of the Ngāti Uru chief Te Ara, of Boyd fame.
In February 1827, the famous Ngā Puhi chief Hongi Hika was engaged in warfare against the tribes of Whangaroa. Acting contrary to the orders of Hongi Hika, some of his warriors plundered and burnt Wesleydale, the Wesleyan mission that had been established in June 1823 at Kaeo, nine kilometres from Whangaroa. The missionaries, Rev Turner and his wife and three children, together with Rev. Messrs. Hobbs and Stack, and Mr Wade and his wife, were 'compelled to flee from Whangarooa (sic) for their lives'. They were conveyed by ship to Sydney, NSW. During a skirmish Hongi Hika was shot in the chest by one of his warriors. On 6 March 1828 Hongi Hika died at Whangaroa. There is no actual evidence that Hongi himself plundered the mission; he was busy pursuing the enemy when he was wounded. Nor is there any direct evidence to implicate anybody else. An alternative idea was put forward by William Williams of the CMS: "It appears beyond doubt, though our Wesleyan Friends are loath to believe it, that it was their own chief, Tepui, [who] was the instigator of the whole business". The local Ngatiuru had made the land available to the mission. For years the missionaries had lived amongst them and grown prosperous while the tribe still ate fern root. There was no prospect of the missionaries moving on and no prospect of them becoming acceptable neighbours. They had not joined the tribe. They had set up their own tribe which was steadily wearing down the authority of the Ngatiuru leadership.
By the latter 19th century, the Whangaroa Harbour had become an important location for the kauri gum digging trade.
Demographics
Statistics New Zealand describes Whangaroa as a rural settlement. It covers and had an estimated population of as of with a population density of people per km2. Whangaroa is part of the larger Whakarara statistical area.
Whangaroa had a population of 141 in the 2023 New Zealand census, a decrease of 3 people (−2.1%) since the 2018 census, and an increase of 39 people (38.2%) since the 2013 census. There were 75 males and 66 females in 75 dwellings. The median age was 65.3 years (compared with 38.1 years nationally). There were 6 people (4.3%) aged under 15 years, 6 (4.3%) aged 15 to 29, 54 (38.3%) aged 30 to 64, and 72 (51.1%) aged 65 or older.
People could identify as more than one ethnicity. The results were 89.4% European (Pākehā); 10.6% Māori; 6.4% Pasifika; 2.1% Asian; 2.1% Middle Eastern, Latin American and African New Zealanders (MELAA); and 4.3% other, which includes people giving their ethnicity as "New Zealander". English was spoken by 97.9%, Māori language by 4.3%, and other languages by 8.5%. No language could be spoken by 2.1% (e.g. too young to talk). The percentage of people born overseas was 19.1, compared with 28.8% nationally.
Religious affiliations were 29.8% Christian, 2.1% Hindu, 2.1% Buddhist, and 2.1% other religions. People who answered that they had no religion were 57.4%, and 6.4% of people did not answer the census question.
Of those at least 15 years old, 24 (17.8%) people had a bachelor's or higher degree, 81 (60.0%) had a post-high school certificate or diploma, and 33 (24.4%) people exclusively held high school qualifications. The median income was $26,200, compared with $41,500 nationally. 9 people (6.7%) earned over $100,000 compared to 12.1% nationally. The employment status of those at least 15 was that 27 (20.0%) people were employed full-time, 15 (11.1%) were part-time, and 3 (2.2%) were unemployed.
References
Wises New Zealand Guide, 7th Edition, 1979. p. 508.
External links
Photographs of Whangaroa held in Auckland Libraries' heritage collections.
Far North District
Populated places in the Northland Region
Whaingaroa
Kauri gum | Whangaroa | Physics | 1,238 |
26,538,461 | https://en.wikipedia.org/wiki/Portuguese%20units%20of%20measurement | Portuguese units were used in Portugal, Brazil, and other parts of the Portuguese Empire until the adoption of the metric system in the 19th century and have continued in use in certain contexts since.
The various systems of weights and measures used in Portugal until the 19th century combine remote Roman influences with medieval influences from northern Europe and Islam. These influences are obvious in the names of the units. The measurement units themselves were, in many cases, inherited from a distant past. From the Romans, Portugal inherited names like (), (), , (), (), (), (). From medieval northern Europe, Portugal inherited names like (, ), (, ), (, ), (, ), (Fr. ), etc. From the Moors, Portugal received unit names like (Arabic: ), (Arabic: ), (Arabic: ), (Arabic: ), (Arabic: ), (Arabic: ), (Arabic: ), etc. The Roman and northern European influences were more present in the north. The Islamic influence was more present in the south of the country. Fundamental units like the and the were imported into the northwest of Portugal in the 11th century, before the country became independent of León.
The gradual long-term process of standardization of weights and measures in Portugal is documented mainly since the mid-14th century. In 1352, municipalities requested standardization in a parliament meeting (). In response, Afonso IV decided to set the () of Lisbon as standard for the linear measures used for color fabrics across the country. A few years later, Pedro I carried out a more comprehensive reform, as documented in the parliament meeting of 1361: the of Santarém should be used for weighing meat; the of Lisbon would be the standard for the remaining weights; cereals should be measured by the of Santarém; the of Lisbon should be used for wine. With advances, adjustments and setbacks, this framework predominated until the end of the 15th century.
In 1455, Afonso V accepted the coexistence of six regional sets of standards: Lisbon, Santarém, Coimbra, Porto, Guimarães and Ponte de Lima. Two important weight standards coexisted, one given by the colonha mark (variant of the Cologne mark), and another given by the tria mark (variant of the Troyes mark). Colonha was used for precious metals and coinage and tria was used for general weighing (avoirdupois). The weighing by the tria mark was abolished by João II in 1488.
The official system of units in use in Portugal from the 16th to the 19th century was the system introduced by Manuel I around 1499–1504. The most salient aspect of this reform was the distribution of bronze weight standards (nesting weight piles) to the cities and towns of the kingdom. The reform of weights is unparalleled in Europe until this time, due to the number of distributed standards (132 are identified), their sizes (64 to 256 marks) and their elaborate decoration. In 1575, Sebastian I distributed bronze standards of capacity measures to the main towns. The number of distributed standards was smaller and uniformity of capacity measures was never achieved.
The first proposal for the adoption of the decimal metric system in Portugal appears in Chichorro's report on weights and measures (Memória sobre Pesos e Medidas, 1795). Two decades later, in 1814, Portugal was the second country in the world – after France itself – to officially adopt the metric system. The system then adopted reused the names of the Portuguese traditional units instead of the original French names (e.g.: for metre; for litre; and for kilogram). However, several difficulties prevented the implementation of the new system and the old Portuguese customary units continued to be used, both in Portugal and in Brazil (which became an independent country in 1822). The metric system was finally adopted by Portugal and its remaining colonies in 1852, this time using the original names of the units. Brazil continued to use the Portuguese customary units until 1862, only then adopting the metric system.
Route units
Length units
Mass units
Volume units
See also
Spanish customary units
References
Barroca, M.J. (1992) «Medidas-Padrão Medievais Portuguesas», Revista da Faculdade de Letras. História, 2ªa Série, vol. 9, Porto, pp. 53–85.
Branco, Rui Miguel Carvalhinho (2005) The Cornerstones of Modern Government. Maps, Weights and Measures and Census in Liberal Portugal (19th Century), European University Institute, Florença.
Dicionário Enciclopédico Lello Universal, Porto: Lello & Irmão, 2002.
Gama Barros, H. ([1922]~1950) «Pesos e medidas», História da Administração Pública em Portugal nos Séculos XII a XV: 2ª Edição, Torquato de Sousa Soares (dir.), Tomo X, p. 13-115.
Monteverde, Emilio Achilles (1861) Manual Encyclopedico para Uzo das Escolas de Instrucção Primaria, Lisboa: Imprensa Nacional.
Paixão, Fátima & Jorge, Fátima Regina (2006) «Success and constraints in the adoption of the metric system in Portugal», The Global and the Local: The History of Science and the Cultural Integration of Europe. Proceedings of the 2nd ICESHS (Cracow, Poland 6-9, 2006).
Pinto, A.A. (1986) "Isoléxicas Portuguesas (Antigas Medidas de Capacidade)", Revista Portuguesa de Filologia, vol. XVIII (1980-86), p. 367-590.
Seabra Lopes, L. (2000) "Medidas Portuguesas de Capacidade: duas Tradições Metrológicas em Confronto Durante a Idade Média", Revista Portuguesa de História, 34, p. 535-632.
Seabra Lopes, L. (2003) "Sistemas Legais de Medidas de Peso e Capacidade, do Condado Portucalense ao Século XVI", Portugalia: Nova Série, XXIV, Faculdade de Letras, Porto, p. 113-164.
Seabra Lopes, L. (2005) "A Cultura da Medição em Portugal ao Longo da História", Educação e Matemática, nº 84, Setembro-Outubro de 2005, p. 42-48.
Seabra Lopes, L. (2018a) "As Pilhas de Pesos de Dom Manuel I: Contributo para a sua Caracterização, Inventariação e Avaliação", Portugalia: Nova Série, vol. 39, Universidade do Porto, p. 217-251; a German translation of this paper is published as: "Die Einsatzgewichte König Manuels I: Ein Beitrag zu ihrer Beschreibung, Bestandsaufnahme und Gewichtsbestimmung", Maβ und Gewicht: Zeitschrift für Metrologie, nr. 130, 2019, p. 4078-4109
Seabra Lopes, L. (2018b) A Metrologia em Portugal em Finais do Século XVIII e a 'Memória sobre Pesos e Medidas' de José de Abreu Bacelar Chichorro (1795), Revista Portuguesa de História, vol. 49, 2018, p. 157-188.
Seabra Lopes, L. (2019) "The Distribution of Weight Standards to Portuguese Cities and Towns in the Early 16th Century: Administrative, Demographic and Economic Factors", Finisterra, vol. 54 (112), Centro de Estudos Geográficos, Lisboa, p. 45-70.
Silva Lopes, João Baptista da (1849) Memoria sobre a Reforma dos Pezos e Medidas em Portugal segundo o Sistema Metrico-Decimal, Imprensa Nacional, Lisboa.
Trigoso, S.F.M. (1815) "Memória sobre os pesos e medidas portuguesas e sobre a introdução do sistema metro-decimal", Memórias Económicas da Academia Real das Ciências de Lisboa, vol. V, Lisboa, p. 336-411.
Systems of units
Obsolete units of measurement
Units of measurement by country | Portuguese units of measurement | Mathematics | 1,806 |
2,903,742 | https://en.wikipedia.org/wiki/10%20Bo%C3%B6tis | 10 Boötis is a suspected astrometric binary star system in the northern constellation of Boötes, located around 528 light years away from the Sun. It is visible to the naked eye under suitable viewing conditions as a dim, white-hued star with an apparent visual magnitude of 5.76. Its magnitude is diminished by an extinction of 0.17 due to interstellar dust. This system is moving away from the Earth with a heliocentric radial velocity of +6 km/s.
The visible component is an ordinary A-type main-sequence star with a stellar classification of A0 Vs, where the 's' notation indicates "sharp" absorption lines. It is 337 million years old with a moderate rotation rate, showing a projected rotational velocity of 75 km/s. The star has 2.87 times the mass of the Sun and about 2.7 times the Sun's radius. It is radiating 113 times the Sun's luminosity from its photosphere at an effective temperature of 9,441 K.
References
A-type main-sequence stars
Boötes
BD+22 2650
Bootis, 10
121996
068276
5255 | 10 Boötis | Astronomy | 240 |
77,537,529 | https://en.wikipedia.org/wiki/Meeting%20science | The meeting science is an emerging scientific discipline dedicated to the study, analysis, and optimization of professional meetings. Its primary goal is to enhance the effectiveness, productivity, and satisfaction of participants by applying scientific methods and principles.
History
Meetings have always been a central element of management, and interest in their optimization developed in the early 21st century with an increasing number of meetings in professional environments. This interest grew significantly after the global COVID-19 crisis, which led many organizations to adopt hybrid work modes. Previously, various economic sectors had initiated efforts to define and formalize meeting practices.
The universality of the principles and practices of meeting science facilitates its adoption beyond the corporate world. It is integrated into diverse organizations, including local governments, military, associations, and foundations.
Simultaneously, a related field called facilitation emerged. Unlike meeting science, which aims to make operators autonomous in applying best practices, facilitation involves methodological experts who intervene in a targeted manner during events to improve efficiency.
Origins
Lean management
Inspired by Toyota's practices in Japan, lean management introduced the principle of short-interval meetings to manage operations, often associated with visual management.
Agile approaches
With the publication of the Agile Manifesto in 2001, these approaches spread through the implementation of frameworks like Scrum, which includes specific meetings such as sprint planning and retrospectives, and the daily stand-up.
Sociocracy and holacracy
Sociocracy and holacracy are governance models introduced in the 1970s and early 2000s, respectively, focused on putting people at the center of performance. They define precise meeting modalities. Sociocracy is based on four principles: decision-making by consent, organization in circles, double-linking between circles, and election without candidates. Holacracy proposes governance meetings and tactical meetings.
United States
In the United States, meeting science emerged in the 2000s. Steven Rogelberg and Joseph Allen are pioneers who laid the foundations of this scientific discipline. Their academic work is summarized in The Cambridge handbook of meeting science, which explores various meeting aspects, including meeting recovery syndrome, a concept that explores the conditions individuals experience after meetings.
Many American authors have published works on meeting science. Rogelberg's The surprising science of meetings offers insights into agenda setting, participant engagement, and decision processes. Joseph Allen, a student of Rogelberg, continues research at the University of Utah on entitativity, a concept developed by Donald T. Campbell in the 1960s. Allen has also written about remote meetings in the context of hybrid work. Patrick Lencioni, in Death by meeting (2004), proposes a simple committee model for executive teams, describing necessary rituals. Elise Keith, in Where the action is, presents a periodic table of meetings with 16 different formats. Paul Axtell, in Meetings matter (2015), provides a humanistic perspective on meetings.
The Harvard Business Review is also a resource on meeting science, featuring articles by experts such as Roger Schwartz on effective agenda writing, Eunice Eun on reducing unnecessary meetings, Steven Rogelberg on improving meetings, Sabina Nawaz on creating norms for executive teams, and Paul Axtell on questions to improve meetings.
McKinsey has published articles offering insights on meeting organization and efficiency.
United Kingdom
In the United Kingdom, Alan Palmer published Talk Lean in 2014, describing an approach developed in France in the 1990s by Philippe de Lapoyade and Alain Garnier, called Discipline Interactifs. This approach emphasizes precisely formulating the goal of an exchange, whether it is a managerial act, a sales interview, or a meeting. Helen Chapman, in The meeting book (2016), presents concepts and illustrations contributing to meeting success.
France
In France, Alain Cardon proposed an original approach called delegated processes in the late 1990s to improve recurring meeting practices, particularly for executive committees and hierarchical teams. In 2001, Michel Guillou coined the term réuniologie as "the art of organizing effective meetings."
In 2017, the École Internationale de Réuniologie (International School of Meeting Science in English) was founded and registered the trademark réuniologie with the National Institute of Industrial Property in France. The school assists organizations in improving their meeting practices and combating meeting-itis. Louis Vareille, the founder, defined meeting-itis and proposed solutions in his book Meeting-itis, make it stop!.
In his work, Louis Vareille develops concepts related to meeting science from various authors:
William Schutz's human element theory: analyzes individual behavior in groups and measures to ensure active contribution.
Amy Edmondson's psychological safety: influences team dynamics and meeting functioning. Her book The fearless organization (2018) is a key reference.
Max Ringelmann's social loafing: describes the optimal number of meeting participants.
Other French authors have also contributed to the discipline. Romain David and Didier Noyé, in Réinventez vos réunions, provide a synthetic and operational vision of the levers to activate for meeting efficiency. Sacha Lopez, David Lemesle, and Marc Bourguignon offer practical perspectives in their Guide de survie aux réunions, drawing on their expertise in facilitation.
Study areas
Meeting science explores various aspects of meetings:
Planning and structure: designing, defining objectives, structuring the agenda, and preparing meetings.
Group dynamics: analyzing participant interactions, roles, and behaviors.
Technologies and tools: impact of digital tools and communication technologies.
Productivity and efficiency: measuring productivity.
Participant satisfaction: surveys on participant satisfaction and engagement, and evaluation of decisions and outcomes. Agile development approaches like return on time invested (ROTI) facilitate these practices; a small illustrative sketch follows this list.
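A ROTI poll simply asks each participant to rate the meeting's value against the time it consumed, typically on a 0–5 scale, and the team tracks the average over time. The snippet below is an illustrative sketch only; the scale, the threshold, and the function name are assumed conventions, not a fixed standard from the meeting-science literature.

```python
from statistics import mean

def roti_summary(ratings, threshold=2.5):
    """Summarize a Return On Time Invested poll (0 = wasted time, 5 = excellent)."""
    avg = mean(ratings)
    verdict = "keep this format" if avg >= threshold else "rework or drop this meeting"
    return avg, verdict

# Example poll from a five-person retrospective.
avg, verdict = roti_summary([3, 4, 2, 5, 3])
print(f"ROTI average: {avg:.1f} -> {verdict}")
```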
Methods
Meeting science uses various methodologies to improve practices:
Observation: analyzing behaviors and interactions during meetings.
Surveys: collecting data on participant perceptions and satisfaction.
Experiments: controlled conditions to test meeting techniques' effectiveness.
Training and transformation: training programs to adjust practices.
Governance: analyzing and adjusting committee structures for optimal efficiency.
Meeting science also integrates techniques to ensure participant engagement in remote and hybrid meetings, using digital tools for meeting design, facilitation, and evaluation. Since 2023, artificial intelligence offers new features for meetings, including agenda design, translation, transcription, and summary writing.
Contexts
Meeting science can be applied to various contexts, including:
Team meetings
Executive and management committees
Project meetings
Steering committees
One-to-one meetings
All hands meetings
References
Further reading
External links
École Internationale de Réuniologie (in French)
Meeting science in Welcome to the jungle (in French)
Meetings
Interdisciplinary subfields
Behavioural sciences
Management science | Meeting science | Biology | 1,334 |
50,229,484 | https://en.wikipedia.org/wiki/Photon%20scanning%20microscopy | The operation of a photon scanning tunneling microscope (PSTM) is analogous to the operation of an electron scanning tunneling microscope, with the primary distinction being that PSTM involves tunneling of photons instead of electrons from the sample surface to the probe tip. A beam of light is focused on a prism at an angle greater than the critical angle of the refractive medium in order to induce total internal reflection within the prism. Although the beam of light is not propagated through the surface of the refractive prism under total internal reflection, an evanescent field of light is still present at the surface.
The evanescent field is a standing wave which propagates along the surface of the medium and decays exponentially with increasing distance from the surface. The surface wave is modified by the topography of the sample, which is placed on the surface of the prism. By placing a sharpened, optically conducting probe tip very close to the surface (at a distance <λ), photons are able to propagate through the space between the surface and the probe (a space which they would otherwise be unable to occupy) through tunneling, allowing detection of variations in the evanescent field and thus, variations in surface topography of the sample. In this manner, PSTM is able to map the surface topography of a sample in much the same way as in electron scanning tunneling microscope.
One major advantage of PSTM is that an electrically conductive surface is no longer necessary. This makes imaging of biological samples much simpler and eliminates the need to coat samples in gold or another conductive metal. Furthermore, PSTM can be used to measure the optical properties of a sample and can be coupled with techniques such as photoluminescence, absorption, and Raman spectroscopy.
History
Conventional optical microscopy utilizing far-field illumination achieves resolution that is restricted by the Abbe diffraction limit. Modern optical microscopes with diffraction limited resolution are therefore capable of resolving features as small as λ/2.3. Researchers have long sought to break the diffraction limit of conventional optical microscopy in order to achieve super-resolution microscopes. One of the first major advances toward this goal was the development of scanning optical microscopy (SOM) by Young and Roberts in 1951. SOM involves scanning individual regions of the sample with a very small field of light illuminated through a diffraction limited aperture. Individual features as small as λ/3 are observed at each scanned point, and the image collected at each point is then compiled together into one image of the sample.
The resolution of these devices was extended beyond the diffraction limit in 1972 by Ash and Nicholls, who first demonstrated the concept of near-field scanning optical microscopy. In NSOM, the object is illuminated through a sub-wavelength sized aperture located at a distance <λ from the sample surface. The concept was first demonstrated using microwaves, however the technique was extended into the field of optical imaging in 1984 by Pohl, Denk, and Lanz, who developed a near-field scanning optical microscope capable of achieving a resolution of λ/20. Along with the development of electron scanning tunneling microscopy in 1982 by Binnig et al., this led to the development of the photon scanning tunneling microscope by Reddick and Courjon (independently) in 1989. PSTM combines the techniques of STM and NSOM by creating an evanescent field using total internal reflection in a prism under the sample and detecting sample-induced variations in the evanescent field by tunneling photons into a sharpened optical fiber probe.
Theory
Total internal reflection
A beam of light travelling through a medium of refractive index n1 incident on an interface with a second medium of refractive index n2 (with n1>n2) will be partially transmitted through the second medium and partially reflected back through the first medium if the angle of incidence is less than the critical angle. At the critical angle, the incident beam will be refracted tangent to the interface (i.e. it will travel along the boundary between the two media). At an angle greater than the critical angle (when the incident beam is nearly parallel to the interface) the light will be completely reflected within the first medium, a condition known as total internal reflection. In the case of PSTM, the first medium is a prism, typically made of glass, and the second medium is the air above the prism.
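As a numerical illustration, the critical angle follows from Snell's law as θc = arcsin(n₂/n₁). The short sketch below computes it for an assumed BK7-like glass prism (n₁ ≈ 1.52) against air (n₂ ≈ 1.00); the indices are illustrative, not taken from a particular instrument.

```python
import math

def critical_angle_deg(n1: float, n2: float) -> float:
    """Critical angle for total internal reflection, in degrees.

    Light travels from the denser medium (n1) toward the rarer one (n2);
    total internal reflection is only possible when n1 > n2.
    """
    if n1 <= n2:
        raise ValueError("total internal reflection requires n1 > n2")
    return math.degrees(math.asin(n2 / n1))

# Assumed values: glass prism vs. air.
print(f"critical angle: {critical_angle_deg(1.52, 1.00):.1f} deg")  # ~41.1 deg
```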
Evanescent field coupling
Under total internal reflection, although no energy is propagated through the second medium, a non-zero electric field is still present in the second medium near the interface. This field exponentially decays with increasing distance from the interface and is known as the evanescent field. Figure 1 shows how the optical component of the evanescent field is modulated by the presence of a dielectric sample placed on the interface (the surface of the prism); hence the field contains detailed optical information about the sample surface. Although this image is lost in the diffraction limited far field, a detailed optical image may be constructed by probing the near field region (at a distance <λ) and detecting sample induced modulation of the evanescent field.
This is accomplished through frustrated total internal reflection, also known as evanescent field coupling. This occurs when a third medium (in this case the sharpened fiber probe) of refractive index n3 (with n3>n2) is brought near the interface at a distance <λ. At this distance the third medium overlaps the evanescent field, disrupting the total reflection of light in the first medium and allowing propagation of the wave in the third medium. This process is analogous to quantum tunneling; the photons confined within the first medium are able to tunnel through the second medium (where they cannot exist) into the third medium. In PSTM, the tunneled photons are conducted through the fiber probe into a detector where a detailed image of the evanescent field can then be reconstructed. The degree of coupling between the probe and surface is highly distance dependent, as the evanescent field is an exponentially decaying function of distance from the interface. Hence, the degree of coupling is used to measure the tip to surface distance in order to obtain topographical information about the sample placed on the surface.
Probe-field interaction
The intensity of the evanescent field at a distance z from the surface is given by the relation
I ∝ exp(−γz)

where γ is the decay constant of the field and is represented by

γ = 2k₂ (n₁₂² sin²θᵢ − 1)^(1/2)

where n₁₂ = n₁/n₂, n₁ is the refractive index of the first medium, n₂ is the refractive index of the second medium, k₂ is the magnitude of the wave vector in the second medium, and θᵢ is the angle of incidence. The decay constant is used in determining the transmittance of photons from the surface to the probe tip, however the degree of coupling is also highly dependent on the properties of the probe tip such as the length of the probe tip region in contact with the evanescent field, the probe tip geometry, and the size of the aperture (in apertured probes). The degree of optical coupling to the probe tip as a function of height must therefore be determined individually for a given instrument and probe tip. In practice, this is usually determined during instrument calibration by scanning the probe perpendicular to the surface and monitoring the detector signal as a function of tip height. Thus the decay constant is found empirically and is used to interpret the signal obtained during the lateral scan and to set a feedback point for the piezoelectric transducer during constant signal scanning.
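To make the exponential decay concrete, the sketch below evaluates the decay constant and the 1/e penetration distance for assumed parameters (633 nm He–Ne illumination, a glass/air interface, 45° incidence); all of the numbers are illustrative.

```python
import math

def decay_constant(wavelength_m: float, n1: float, n2: float, theta_i_deg: float) -> float:
    """gamma = 2*k2*sqrt(n12^2 * sin^2(theta_i) - 1), valid above the critical angle."""
    k2 = 2 * math.pi * n2 / wavelength_m          # wave vector magnitude in medium 2
    n12 = n1 / n2
    radicand = n12**2 * math.sin(math.radians(theta_i_deg))**2 - 1
    if radicand <= 0:
        raise ValueError("the incidence angle must exceed the critical angle")
    return 2 * k2 * math.sqrt(radicand)

# Assumed values: 633 nm light, glass (1.52) / air (1.00), 45 degree incidence.
gamma = decay_constant(633e-9, 1.52, 1.00, 45.0)
print(f"gamma = {gamma:.2e} m^-1, 1/e distance = {1e9 / gamma:.0f} nm")  # ~128 nm
```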
Although the decay constant is typically determined through empirical methods, detailed mathematical models of probe–sample coupling interactions that account for probe tip geometry and sample distance have been published by Goumri-Said et al. In many cases the evanescent field is primarily modulated by sample surface topography, hence the detected optical signal can be interpreted as the topography of the sample. However, the refractive index and absorption properties of the sample can cause further changes to the detected evanescent field, making it necessary to separate optical data from topographical data. This is often accomplished by coupling PSTM to other techniques such as AFM (see below). Theoretical models have also been developed by Reddick to account for modulation of the evanescent field by secondary effects such as scattering and absorbance at the sample surface.
Procedure
Figure 2 shows the operation and principle of PSTM. An evanescent field is attained using a laser beam at an attenuated total reflection geometry for total internal reflection within a triangular prism. The sample is placed on a glass or quartz slide, which is affixed to the prism with an index matching gel. The sample then becomes the surface at which total internal reflection occurs. The probe consists of the sharpened tip of an optical fiber attached to a piezoelectric transducer to control fine motion of the probe tip during scanning. The end of the optical fiber is coupled to a photomultiplier tube, which acts as the detector. The probe tip and piezoelectric transducer are housed within a scanner cartridge mounted above the sample. The position of this assembly is manually adjusted to bring the probe tip within tunneling distance of the evanescent field.
As photons tunnel from the evanescent field into the probe tip, they are conducted along the optical fiber to the photomultiplier tube, where they are converted into an electrical signal. The amplitude of the electrical output of the photomultiplier tube is directly proportional to the number of photons collected by the probe, thus allowing measurement of the degree of interaction of the probe with the evanescent field at the sample surface. Since this field exponentially decays with increasing distance from the surface, the degree of intensity of the field corresponds to the height of the probe from the sample surface. The electrical signals are sent to a computer where the topography of the surface is mapped based on the corresponding changes in the detected evanescent field intensity.
The electrical output from the photomultiplier tube is used as constant feedback to the piezoelectric transducer to adjust the height of the tip according to variations in surface topography. The probe must be scanned perpendicular to the sample surface in order to calibrate the instrument and determine the decay constant of the field intensity as a function of probe height. During this scan, a feedback point is set so that the piezoelectric transducer can maintain constant signal intensity during the lateral scan.
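The constant-signal mode amounts to a feedback loop: the photomultiplier output is compared with the set-point, and the piezoelectric transducer raises or lowers the tip until the exponentially distance-dependent signal matches it again. The toy simulation below illustrates the idea only; the decay constant, gain, set-point, and step-like surface profile are all assumed, and real instrument control is considerably more involved.

```python
import math

GAMMA = 7.8e6      # assumed decay constant, m^-1
SETPOINT = 0.5     # target detector signal (arbitrary units)
GAIN = 5e-8        # proportional gain, metres of tip motion per unit signal error

def detector_signal(tip_z: float, surface_z: float) -> float:
    """Exponential coupling model: the signal grows as the tip-sample gap shrinks."""
    return math.exp(-GAMMA * (tip_z - surface_z))

tip_z = 300e-9  # initial tip height above the reference plane
for surface_z in [0.0, 20e-9, 40e-9, 10e-9]:   # toy surface profile, one lateral point each
    for _ in range(200):                        # let the feedback loop settle
        error = detector_signal(tip_z, surface_z) - SETPOINT
        tip_z += GAIN * error                   # too much signal -> retract; too little -> approach
    print(f"surface {surface_z * 1e9:5.1f} nm -> tip height {tip_z * 1e9:6.1f} nm")
```

At the set-point the gap settles at ln(2)/γ ≈ 89 nm for these assumed numbers, so the recorded tip height tracks the surface profile with a constant offset, which is exactly how the topographic image is formed in constant-signal scanning.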
Fiber probe tips
The resolution of a PSTM instrument is highly dependent on probe tip geometry and diameter. Probes are typically fabricated via chemical etching of an optical fiber in a solution of HF and can be apertured or apertureless. Using chemical etching, fiber tips with a curvature radius as low as 20 nm have been made. In apertured tips, the sides of the sharpened fiber are sputter coated in a metal or other material. This helps to limit tunneling of photons into the side of the probe in order to maintain more consistent and accurate evanescent field coupling. Due to the rigidity of the fiber probe, even brief contact with the surface will destroy the probe tip.
Larger probe tips have a greater degree of coupling to the evanescent field and will therefore have greater collection efficiency due to a larger area of the optical fiber interacting with the field. The primary limitation of a large tip is the increased probability of collision with rougher surface features as well as photon tunneling into the side of the probe. A narrower probe tip is necessary to resolve more abrupt surface features without collision, however the collection efficiency will be reduced.
Figure 3 shows a fiber probe with metal coating. In metal coated fiber probes, the diameter and geometry of the aperture, or uncoated area at the tip of the probe, determines the collection efficiency. Wider cone angles result in larger aperture diameters and shorter probe lengths, while narrower cone angles result in smaller aperture diameters and longer probes. Double tapered probe tips have been developed in which a long, narrow region of the probe tapers into a tip with a wider cone angle. This provides a wider aperture for greater collection efficiency while still maintaining a long narrow probe tip capable of resolving abrupt surface features with low risk of collision.
PSTM coupled spectroscopy techniques
Photoluminescence
It has been demonstrated that photoluminescence spectra can be recorded utilizing a modified PSTM instrument. Coupling photoluminescence spectroscopy to PSTM allows the observation of emission from local nanoscopic regions of a sample and provides an understanding of how the photoluminescent properties of a material change due to surface morphology or chemical differences in an inhomogeneous sample. In this experiment, a 442 nm He-Cd laser beam under total internal reflection was used as an excitation source. The signal from the optical fiber was first passed through a monochromator before reaching a photomultiplier tube to record the signal. Photoluminescence spectra were recorded from local regions of a ruby crystal sample. A subsequent publication successfully demonstrated the use of PSTM to record the fluorescence spectrum of a Cr3+ ion implanted sapphire cryogenically cooled under liquid nitrogen. This technique allows characterization of individual surface features of semiconductor samples whose photoluminescent properties are highly temperature dependent and must be studied at cryogenic temperatures.
Infrared
PSTM has been modified to record spectra in the infrared range. Utilizing both cascade arc and free electron laser CLIO as infrared light sources, infrared absorbance spectra were recorded from a diazoquinone resin. This mode of operation requires a fluoride glass fiber and HgCdTe detector in order to effectively collect and record the infrared wavelengths used. Furthermore, the fiber tip must be metal coated and oscillated during collection in order to sufficiently reduce background noise. The surface must first be imaged using a wavelength that will not be absorbed by the sample. Next, the light source is stepped through the infrared wavelengths of interest at each point during collection. The spectrum is acquired by analysis of the differences in the images recorded at different wavelengths.
Atomic force microscopy
Figure 4 shows the combination of a PSTM, AFM, and conventional microscope. In PSTM and AFM, a silicon nitride cantilever can be used as the optical probe tip in order to perform atomic force microscopy (AFM) and PSTM simultaneously. This allows comparison of the recorded optical signal with the higher resolution topography data obtained by AFM. Silicon nitride is a suitable material for an optical probe tip as it is optically transparent down to 300 nm. However, since it is not optically conducting, the photons collected by the probe tip must be focused through a lens to the detector instead of travelling through an optical fiber. The instrument can be operated in constant height or constant force mode and resolution is limited to 10–50 nm due to tip convolution. Since the optical signal obtained in PSTM is affected by the optical properties of the sample as well as topography, comparison of the PSTM data with AFM data allows determination of the absorbance of the sample. In one study, the 514 nm absorbance of a Langmuir-Blodgett film of 10,12-pentacosadiynoic acid (PCA) was recorded using this method.
Photo-conductive imaging with atomic force/electron scanning tunneling microscopy
PSTM can be combined with both electron scanning tunneling microscopy and AFM in order to simultaneously record optical, conductive, and topographical information of a sample. This experimental apparatus, published by Iwata et al., allows the characterization of semiconductors such as photovoltaics, as well as other photo-conductive materials. The experimental configuration utilizes a cantilever consisting of a bent optical fiber sharpened to a tip diameter of less than 100 nm and coated with an ITO layer and a thin Au layer. Hence, the fiber probe acts as the AFM cantilever for force sensing, is optically conductive to record optical data, and electrically conductive to record current from the sample. The signals from the three detection methods are recorded simultaneously and independently in order to separate topographical, optical, and electrical information.
This apparatus was used to characterize copper phthalocyanine deposited over an array of gold squares patterned on an ITO substrate affixed to a prism. The prism was illuminated under total internal reflection at 636 nm, 533 nm, and 441 nm (selected from a white light laser using optical filters), allowing photo-conductive imaging at different excitation wavelengths. Copper phthalocyanine is a semiconducting organometallic compound. The conductivity of this compound is high enough for the electric current to travel through the film and tunnel into the probe tip. The photo-conductive properties of this material cause the conductivity to increase under irradiation due to an increase in the number of photo-generated charge carriers. Optical and topographical images of the sample were obtained utilizing the novel imaging technique described above. The changes in photo-conductivity of point-contact areas of the film were observed under different excitation wavelengths.
References
Photonics
Scanning probe microscopy | Photon scanning microscopy | Chemistry,Materials_science | 3,526 |
17,670,991 | https://en.wikipedia.org/wiki/Time%20displacement | Time displacement in sociology refers to the idea that new forms of activities may replace older ones. New activities that cause time displacement are usually technology-based, most common are the information and communication technologies such as Internet and television. Those technologies are seen as responsible for declines of previously more common activities such as in- and out-of-home socializing, work, and even personal care and sleep.
For example, Internet users may spend time online using it as a substitute for other activities that served similar functions (watching television, reading printed media, face-to-face interaction, etc.). The Internet is not the first technology to result in time displacement. Earlier, television had a similar impact, as it shifted people's time away from activities such as listening to the radio, going to movie theaters, talking at home, or spending time outside it.
See also
Parkinson's law
References
Paul DiMaggio, Eszter Hargittai, W. Russell Neuman, and John P. Robinson, Social Implications of the Internet, Annual Review of Sociology, Vol. 27: 307-336 (Volume publication date August 2001).
Waipeng Lee and Eddie C. Y. Kuo, Internet and Displacement Effect: Children's Media Use and Activities in Singapore, JCMC 7 (2) January 2002
Time
Sociological terminology | Time displacement | Physics,Mathematics | 271 |
2,476,462 | https://en.wikipedia.org/wiki/Epsilon-induction | In set theory, -induction, also called epsilon-induction or set-induction, is a principle that can be used to prove that all sets satisfy a given property. Considered as an axiomatic principle, it is called the axiom schema of set induction.
The principle implies transfinite induction and recursion.
It may also be studied in a general context of induction on well-founded relations.
Statement
The schema is for any given property φ of sets and states that, if for every set x, the truth of φ(x) follows from the truth of φ(y) for all elements y of x, then this property φ holds for all sets.
In symbols:

∀x ( ∀y (y ∈ x → φ(y)) → φ(x) ) → ∀x φ(x)
Note that for the "bottom case", where x denotes the empty set ∅, the subexpression ∀y (y ∈ x → φ(y)) is vacuously true for all propositions, and so that implication is proven by just proving φ(∅).
In words, if a property is persistent when collecting any sets with that property into a new set and is true for the empty set, then the property is simply true for all sets. Said differently, persistence of a property with respect to set formation suffices to reach each set in the domain of discourse.
In terms of classes
One may use the language of classes to express schemata.
Denote the universal class by V.
Let A be the class {y ∣ φ(y)} and use the informal x ⊆ A as an abbreviation for ∀y (y ∈ x → φ(y)).
The principle then says that for any class A,

∀x (x ⊆ A → x ∈ A) → A = V

Here the quantifier ranges over all sets.
In words this says that any class that contains all of its subsets is simply just the class of all sets.
Assuming bounded separation, V is a proper class. So the property of containing all of one's subsets is exhibited only by the proper class V, and in particular by no set. Indeed, note that any set is a subset of itself, and under some more assumptions, already self-membership will be ruled out.
For comparison to another property, note that for a class A to be ∈-transitive means that every member is also a subset:

∀x (x ∈ A → x ⊆ A)
There are many transitive sets - in particular the set theoretical ordinals.
Related notions of induction
Exportation proves . If is for some predicate , it thus follows that
where is defined as .
If is the universal class, then this is again just an instance of the schema.
But indeed if is any -transitive class, then still and a version of set induction for holds inside of .
Ordinals
Ordinals may be defined as transitive sets of transitive sets. The induction situation in the first infinite ordinal ω, the set of natural numbers, is discussed in more detail below. As set induction allows for induction in transitive sets containing the ordinals in question, this gives what is called transfinite induction and definition by transfinite recursion using, indeed, the whole proper class of ordinals. With ordinals, induction proves that all sets have ordinal rank and the rank of an ordinal is itself.
The theory of von Neumann ordinals describes such sets and, there, models the order relation <, which classically is provably trichotomous and total. Of interest there is the successor operation that maps ordinals to ordinals. In the classical case, the induction step for successor ordinals can be simplified so that a property must merely be preserved between successive ordinals (this is the formulation that is typically understood as transfinite induction). The sets are ∈-well-founded.
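Written out, transfinite induction over the ordinals takes the following standard form; the familiar three-case treatment (zero, successor, limit) is a classical reformulation of the single hypothesis below.

```latex
% Transfinite induction: a property that always passes from all smaller
% ordinals to an ordinal holds of every ordinal.
\forall \alpha \,\bigl( \forall \beta \, (\beta < \alpha \to \phi(\beta)) \to \phi(\alpha) \bigr)
  \;\longrightarrow\; \forall \alpha \, \phi(\alpha)
```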
Well-founded relations
For a binary relation R on a set A, well-foundedness can be defined by requiring a tailored induction property: y ∈ x in the condition is abstracted to y R x, i.e. one always assumes ∀y (y R x → φ(y)) in place of the intersection used in the statement above. It can be shown that for a well-founded relation R, there are no infinite descending R-sequences, and so also no x with x R x.
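In the same notation, the tailored induction property for a relation R on a set A reads as follows (a standard formulation; φ ranges over properties).

```latex
% Well-founded induction along a binary relation R on a set A:
\forall x \in A \,\bigl( \forall y \in A \, ( y \mathrel{R} x \to \phi(y) ) \to \phi(x) \bigr)
  \;\longrightarrow\; \forall x \in A \; \phi(x)
```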
Further, function definition by recursion along such well-founded relations can be defined on their domains, and so on.
Classically, well-foundedness of a relation on a set can also be characterized by the strong property of minimal element existence for every subset.
With dependent choice, it can also be characterized by the weak property of non-existence of infinite descending chains.
For negative predicates
This section concerns the case of set induction and its consequences for predicates which are of a negated form, ¬S(x). Constructively, the resulting statements are generally weaker than set induction for general predicates. To establish equivalences, valid principles such as

∀x (P(x) → ¬Q(x)) ↔ ∀x (Q(x) → ¬P(x)),

are commonly made use of, both sides saying that two predicates P and Q can not, for any value, be validated simultaneously. The situation when double-negation elimination is permitted is discussed in the section right after.
Denoting the class {x ∣ S(x)} by B, this amounts to the special case of the schema above with, for any x, φ(x) equal to x ∉ B, the statement that x ∈ B is false.
One has x ∩ B = ∅ denoting ∀y (y ∈ x → y ∉ B). Writing B = ∅ for the statement that all sets are not members of the class B, the induction schema reduces to

∀x (x ∩ B = ∅ → x ∉ B) → B = ∅
In words, a property (a class) such that there is no ∈-minimal set for it is simply the false property (the empty set). (A minimal x for a relation R is one for which there does not exist another y with y R x. Here the membership relation restricted to B is considered, i.e. a minimal element with respect to B is one without a member that also lies in B.)
Infinite descending chains
The antecedent in the above implication may be expressed as ∀x (x ∈ B → ¬(x ∩ B = ∅)). It holds for the empty set vacuously. In the presence of any descending membership chain as a function on ω, the axiom of replacement proves existence of a set B that also fulfills this. So assuming the induction principle makes the existence of such a chain contradictory.
In this paragraph, assume the axiom of dependent choice in place of the induction principle. Any consequence of the above antecedent is also implied by the ∃-statement obtained by removing the double-negation, which constructively is a stronger condition. Consider a set B with this ∃-property. Assuming the set is inhabited, dependent choice implies the existence of an infinite descending membership chain as sequence, i.e. a function on the naturals. So establishing (or even postulating) the non-existence of such a chain for a set with the ∃-property implies the assumption was wrong, i.e. also B = ∅.
So set induction relates to the postulate of non-existence of infinite descending chains. But given the extra assumptions required in the latter case, the mere non-existence postulate is relatively weak in comparison.
Self-membership
For a contradiction, assume there exists an inhabited set t with the particular property that it is equal to its own singleton set, t = {t}. Formally, ∀y (y ∈ t ↔ y = t), from which it follows that t ∈ t, and also that all members of t share all its properties, e.g. being inhabited. From the previous form of the principle it follows that {t} = ∅, a contradiction.
Discussed using the other auxiliary terminologies above, one studies set induction for the class of sets that are not equal to such a t. So in terms of the negated predicate, S(x) is the predicate x = t, meaning a set that exhibits S has the defining properties of t. Using the set builder notation, one is concerned with B = {x ∣ x = t}. Assuming the special property of t, any empty intersection statement x ∩ B = ∅ simplifies to just t ∉ x. The principle in the formulation in terms of B reduces to B = ∅, again a contradiction.
Back to the very original formulation, it is concluded that ∀x (x ≠ t) and that the class of sets not equal to t is simply the domain of all sets. In a theory with set induction, a t with the described recursive property is not actually a set in the first place.
A similar analysis may be applied also to more intricate scenarios. For example, if s = {t} and t = {s} were both sets, then the inhabited set {s, t} would exist by pairing, but this also has the ∃-property.
Contrapositive
The contrapositive of the form with negation is constructively even weaker, but it is only one double negation elimination away from the regularity claim for B:

¬¬∃x (x ∈ B) → ¬¬∃x (x ∈ B ∧ x ∩ B = ∅)
With double-negations in antecedent and conclusion, the antecedent may equivalently be replaced with ∃x (x ∈ B).
Classical equivalents
Disjunctive form
The excluded middle statement for a universally quantified predicate can classically be expressed as follows: either it holds for all terms, or there exists a term for which the predicate fails

∀x φ(x) ∨ ∃x ¬φ(x)
With this, using the disjunctive syllogism, ruling out the possibility of counter-examples classically proves a property for all terms.
This purely logical principle is unrelated to other relations between terms, such as elementhood (or succession, see below).
Using that ¬∀x φ(x) ↔ ∃x ¬φ(x) is classically an equivalence, and also using double-negation elimination, the induction principle can be translated to the following statement:

∀x φ(x) ∨ ∃x (¬φ(x) ∧ ∀y (y ∈ x → φ(y)))
This expresses that, for any predicate φ, either it holds for all sets, or there is some set x for which φ does not hold while φ is at the same time true for all elements of x. Relating it back to the original formulation: If one can, for any set x, prove that ∀y (y ∈ x → φ(y)) implies φ(x), which includes a proof of the bottom case φ(∅), then the failure case is ruled out and then, by the disjunctive syllogism, the disjunct ∀x φ(x) holds.
For the task of proving by ruling out the existence of counter-examples, the induction principle thus plays a similar role as the excluded middle disjunction, but the former is commonly also adopted in constructive frameworks.
Relation to regularity
The derivation in a previous section shows that set induction classically implies

∃x φ(x) → ∃x (φ(x) ∧ ∀y (y ∈ x → ¬φ(y)))
In words, any property that is exhibited by some set is also exhibited by a "minimal set" x, as defined above. In terms of classes, this states that every non-empty class has a member that is disjoint from it.
In first-order set theories, the common framework, the set induction principle is an axiom schema, granting an axiom for any predicate (i.e. class). In contrast, the axiom of regularity is a single axiom, formulated with a universal quantifier only over elements of the domain of discourse, i.e. over sets. If B is a set and the induction schema is assumed, the above is the instance of the axiom of regularity for B. Hence, assuming set induction over a classical logic (i.e. assuming the law of excluded middle), all instances of regularity hold.
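For reference, the single axiom of regularity may be written as follows; unlike the induction schema, it quantifies only over sets.

```latex
% Axiom of regularity: every inhabited set x has an \in-minimal member,
% i.e. a member that shares no element with x.
\forall x \Bigl( \exists a \, (a \in x) \;\to\;
  \exists y \bigl( y \in x \land \lnot \exists z \, ( z \in y \land z \in x ) \bigr) \Bigr)
```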
In a context with an axiom of separation, regularity also implies excluded middle (for the predicates allowed in ones separation axiom). Meanwhile, the set induction schema does not imply excluded middle, while still being strong enough to imply strong induction principles, as discussed above. In turn, the schema is, for example, adopted in the constructive set theory CZF, which has type theoretic models. So within such a set theory framework, set induction is a strong principle strictly weaker than regularity. When adopting the axiom of regularity and full Separation, CZF equals standard ZF.
History
Because of its use in the set theoretical treatment of ordinals, the axiom of regularity was formulated by von Neumann in 1925.
Its motivation goes back to Skolem's 1922 discussion of infinite descending chains in Zermelo set theory, a theory without regularity or replacement.
The theory does not prove all set induction instances. Regularity is classically equivalent to the contrapositive of set induction for negated statements, as demonstrated. The bridge from sets back to classes is demonstrated below.
Set induction from regularity and transitive sets
Assuming regularity, one may use classical principles, like the reversal of a contrapositive. Moreover, an induction schema stated in terms of a negated predicate ¬S is then just as strong as one in terms of a predicate variable φ, as the latter simply equals ¬¬φ. As the equivalences with the contrapositive of set induction have been discussed, the task is to translate regularity back to a statement about a general class A. It works in the end because the axiom of separation allows for intersection between sets and classes. Regularity only concerns intersection inside a set and this can be flattened using transitive sets.
The proof is by manipulation of the regularity axiom instance

∃a (a ∈ S) → ∃y (y ∈ S ∧ y ∩ S = ∅)
for a particular subset S of the class A. Observe that given a class A and any transitive set t, one may define S = t ∩ A, which has S ⊆ A and also, for any y ∈ S, y ∩ S = y ∩ A by the transitivity of t. With this, the set S may always be replaced with the class A in the conclusion of the regularity instance.
The remaining aim is to obtain a statement that also has it replaced in the antecedent, that is, establish the principle holds when assuming the more general ∃a (a ∈ A). So assume there is some a ∈ A, together with the existence of some transitive set t that has {a} as subset. An intersection S = t ∩ A may be constructed as described and it also has a ∈ S. Consider excluded middle for whether or not a is disjoint from A, i.e. a ∩ A = ∅. If a ∩ A is empty, then also a ∩ S = ∅ and a itself always fulfills the principle. Otherwise, by regularity and ∃a (a ∈ S), one can proceed to manipulate the statement by replacing S with A as discussed. In this case, one even obtains a slightly stronger statement than the one in the previous section, since it carries the sharper information that y ∈ S and not just y ∈ A.
Transitive set existence
The proof above assumes the existence of some transitive set containing any given set. This may be postulated, the transitive containment axiom.
Existence of the stronger transitive closure with respect to membership, for any set, can also be derived from some stronger standard axioms. This needs the axiom of infinity for ω as a set, recursive functions on ω, the axiom of replacement on ω and finally the axiom of union. I.e. it needs many standard axioms, just sparing the axiom of powerset. In a context without strong separation, suitable function space principles may have to be adopted to enable recursive function definition.
ZF minus infinity also only proves the existence of transitive closures when Regularity is promoted to Set induction.
Comparison of epsilon and natural number induction
The transitive von Neumann model of the standard natural numbers is the first infinite ordinal. There, the binary membership relation "∈" of set theory exactly models the strict ordering of natural numbers "<". Then, the principle derived from set induction is complete induction.
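Concretely, in the von Neumann model each natural number is the set of all smaller numbers, which is exactly what makes "∈" model "<":

```latex
% Von Neumann naturals: zero is the empty set, successor adjoins the number itself.
0 = \emptyset, \qquad S\,n = n \cup \{n\} = \{0, 1, \dots, n\},
\qquad\text{so}\qquad m < n \iff m \in n .
```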
In this section, quantifiers are understood to range over the domain of first-order Peano arithmetic $\mathsf{PA}$ (or Heyting arithmetic $\mathsf{HA}$). The signature includes the constant symbol "$0$", the successor function symbol "$S$" and the addition and multiplication function symbols "$+$" resp. "$\cdot$". With it, the naturals form a semiring, which always comes with a canonical non-strict preorder "$\leq$", and the irreflexive "$<$" may be defined in terms of that. Similarly, the binary ordering relation $k<n$ is also definable as $\exists m.\,(k + Sm = n)$.
For any predicate $Q$, the complete induction principle reads

$$\Big(\forall n.\, \big((\forall k.\, k<n \to Q(k)) \to Q(n)\big)\Big) \,\to\, \forall n.\, Q(n)$$
Making use of $k < Sn \leftrightarrow (k<n \lor k=n)$, the principle is already implied by the standard form of the mathematical induction schema. The latter is expressed not in terms of the decidable order relation "$<$" but in terms of the primitive symbols,

$$\big(Q(0) \land \forall n.\,(Q(n) \to Q(Sn))\big) \,\to\, \forall n.\, Q(n)$$
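This derivation can be checked mechanically. The following is a minimal sketch in Lean 4, assuming Mathlib is available and that the lemma name `Nat.lt_succ_iff_lt_or_eq` exists as written (an assumption about the library); it derives the complete induction principle from ordinary induction, using exactly the equivalence $k < Sn \leftrightarrow (k<n \lor k=n)$ mentioned above.

```lean
import Mathlib

-- Complete induction derived from the standard induction schema.
theorem complete_induction (Q : Nat → Prop)
    (h : ∀ n, (∀ k, k < n → Q k) → Q n) : ∀ n, Q n := by
  -- Strengthen the claim to "Q holds below n", then do ordinary induction.
  have aux : ∀ n, ∀ k, k < n → Q k := by
    intro n
    induction n with
    | zero => intro k hk; exact absurd hk (Nat.not_lt_zero k)
    | succ n ih =>
      intro k hk
      -- k < S n ↔ (k < n ∨ k = n), the equivalence used in the text
      rcases Nat.lt_succ_iff_lt_or_eq.mp hk with h' | h'
      · exact ih k h'
      · rw [h']; exact h n ih
  intro n
  exact h n (aux n)
```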
Lastly, a statement may be proven that merely makes use of the successor symbol and still mirrors set induction: Define a new predicate $Q_{-}(n)$ as $\forall k.\,(Sk = n \to Q(k))$. It holds for zero by design and so, akin to the bottom case in set induction, the implication $Q_{-}(0) \to Q(0)$ is equivalent to just $Q(0)$. Using induction, the theory proves that every $n$ is either zero or has a computable unique predecessor, a $k$ with $Sk = n$. Hence $Q_{-}(Sk) \leftrightarrow Q(k)$. When $n$ is the successor of some $k$, then $Q_{-}(n)$ expresses $Q(k)$. By case analysis, one obtains

$$\big(\forall n.\, (Q_{-}(n) \to Q(n))\big) \,\to\, \forall n.\, Q(n)$$
Classical equivalents
Using the classical principles mentioned above, the above may be expressed as

$$\forall n.\, Q(n) \;\lor\; \exists n.\, \big(\neg Q(n) \land \forall k.\,(k<n \to Q(k))\big)$$
It expresses that, for any predicate $Q$, either $Q$ holds for all numbers, or there is some natural number $n$ for which $Q$ does not hold despite $Q$ holding for all predecessors.
Instead of $Q$, one may also use $\neg Q$ and obtain a related statement. It constrains the task of ruling out counter-examples for a property of natural numbers: If the bottom case $Q(0)$ is validated and one can prove, for any number $n$, that the property $Q$ is always passed on to $Sn$, then this already rules out a failure case.
Moreover, if a failure case exists, one can use the least number principle to even prove the existence of a minimal such failure case.
Least number principle
As in the set theory case, one may consider induction for negated predicates and take the contrapositive. After use of a few classical logical equivalences, one obtains a conditional existence claim.
Let $\Theta$ denote the set of natural numbers validating a property $Q$. In the von Neumann model, a natural number $n$ is extensionally equal to $\{k \mid k < n\}$, the set of numbers smaller than $n$.
The least number principle, obtained from complete induction, here expressed in terms of sets, reads

$$\neg\neg\,\exists n.\,(n \in \Theta) \;\to\; \neg\neg\,\exists n.\,\big(n \in \Theta \,\land\, n \cap \Theta = \emptyset\big)$$
In words, if it cannot be ruled out that some number has the property $Q$, then it can also not be consistently ruled out that a least such number exists. In classical terms, if there is any number validating $Q$, then there also exists a least such number validating $Q$. Least here means that no smaller number is validating $Q$. This principle should be compared with regularity.
For decidable $Q$ and any given $n$ with $Q(n)$, all $k < n$ can be tested.
Moreover, adopting Markov's principle in arithmetic allows removal of the double negation for decidable $Q$ in general.
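For a decidable predicate, this bounded search is directly computable. A minimal Python sketch (the function name and example predicate are illustrative, not from the source):

```python
def least_witness(Q, n):
    """Given a decidable predicate Q and a witness n with Q(n) == True,
    return the least k <= n with Q(k); the bounded search makes this total."""
    assert Q(n), "n must already be a witness"
    for k in range(n + 1):
        if Q(k):  # decidability: each instance Q(k) can simply be evaluated
            return k

# Example: the least multiple of 7 above 50, starting from the witness 70.
print(least_witness(lambda m: m % 7 == 0 and m > 50, 70))  # -> 56
```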
See also
Constructive set theory
Mathematical induction
Non-well-founded set theory
Transfinite induction
Well-founded induction
Mathematical induction
Wellfoundedness | Epsilon-induction | Mathematics | 3,469 |
15,503,838 | https://en.wikipedia.org/wiki/Landscape%20zodiac | A landscape zodiac (or terrestrial zodiac) is a purported map of the stars on a gigantic scale, formed by features in the landscape, such as roads, streams and field boundaries. Perhaps the best known alleged example is the Glastonbury Temple of the Stars, situated around Glastonbury in Somerset, England. The temple is thought by some to depict a colossal zodiac.
Theory
The theory was first put forward in 1935 by Katherine Maltwood, an artist who "discovered" the zodiac in a vision, and held that the "temple" was created by Sumerians about 2700 BC. Interest was re-ignited in 1969 by Mary Caine in an article in the magazine Gandalf's Garden.
The landscape zodiac plays an important role in many occult theories. It has been associated with the Celtic Saints, Grail legend and King Arthur (according to some legends buried in Glastonbury).
Criticism
The idea was examined by two independent studies, one by Ian Burrow in 1975 and the other in 1983 by Tom Williamson and Liz Bellamy, using the standard methods of landscape historical research. Both studies concluded that the evidence contradicted the idea. The eye of Capricorn identified by Maltwood was a haystack. The western wing of the Aquarius phoenix was a road laid in 1782 to run around Glastonbury, and older maps dating back to the 1620s show the road had no predecessors. The Cancer boat (not a crab as would be expected) is made up of a network of eighteenth century drainage ditches and paths. There are some Neolithic paths preserved in the peat of the bog formerly comprising most of the area, but none of the known paths match the lines of the zodiac features. There is no support for this theory, or for the existence of the "temple" in any form, from conventional archaeologists or mainstream historians.
List of landscape zodiacs
Beside the Glastonbury arrangement further zodiacs have been alleged in Britain in following years including:
Kingston upon Thames Zodiac
The Lizard Zodiac, Cornwall
Bodmin Moor Zodiac
The Pumpsaint Zodiac
Nuthampstead Terrestrial Zodiac
The Sheffield Zodiac, South Yorkshire
There is rarely a strong scientific case for these discoveries. Their nebulous existence is in many ways similar to urban myths, ufology, or ley lines. They seem to play a part in personal belief systems; see Valentine (2016). Some are intentionally fictional; for example "The Brighton Zodiac" – created by Sally Hurst, based on the streets of that town – features as a plot device in Robert Rankin's novel The Brightonomicon.
Landscape zodiacs and psychogeography
In the walks around the M25 motorway documented in psychogeographer Iain Sinclair's 2003 novel London Orbital, the walkers trace the mythical Kingston upon Thames Zodiac.
See also
Psychogeography
The Brightonomicon
References
Further reading
Brinsley le Poer Trench (1962) Temple of the Stars
Katherine E. Maltwood (1935) A Guide to Glastonbury's Temple of the Stars
Peter James and Nick Thorpe (1999) Ancient Mysteries, Ballantine Books, New York, pp 298–304
Iain Sinclair (2005) London Orbital, Penguin Books, London,
Mary Caine (2001) The Kingston Zodiac, Capall Bann Publishing
Lewis Edwards, The Welsh Temple of the Zodiac (undated mimeographed pamphlet)
John Michell (1975) The Earth Spirit - Its Ways, Shrines and Mysteries
John Michell (1979) Simulacra - with 196 Illustrations of Faces and Figures in Nature London: Thames & Hudson
Sheila Jeffries (1996) Cornwall's Landscape Zodiac, St. Keverne: Elderberry Books
R. Nichols (1993) Great Zodiac of Glastonbury, Mandrake Press, Thame, England
Nigel Ayers (2007) The Bodmin Moor Zodiac, Earthly Delights, Lostwithiel, Cornwall
Oliver L. Reiser (1975) This Holyest Erthe: Glastonbury Zodiac and King Arthur's Avalon TRSP Publications
Caroline Hall Hovey (1985) The Somerset Sanctuary, Merlin Books LTD, Devon,
Hugh Newman (2008) Earth Grids - the Secret Patterns of Gaia's Sacred Sites Wooden Books
Hypotheses
History of astrology
Pseudohistory
Pseudoarchaeology | Landscape zodiac | Astronomy | 863 |
13,653,300 | https://en.wikipedia.org/wiki/Kinetic%20proofreading | Kinetic proofreading (or kinetic amplification) is a mechanism for error correction in biochemical reactions, proposed independently by John Hopfield (1974) and Jacques Ninio (1975). Kinetic proofreading allows enzymes to discriminate between two possible reaction pathways leading to correct or incorrect products with an accuracy higher than what one would predict based on the difference in the activation energy between these two pathways.
Increased specificity is obtained by introducing an irreversible step exiting the pathway, with reaction intermediates leading to incorrect products more likely to prematurely exit the pathway than reaction intermediates leading to the correct product. If the exit step is fast relative to the next step in the pathway, the specificity can be increased by a factor of up to the ratio between the two exit rate constants. (If the next step is fast relative to the exit step, specificity will not be increased because there will not be enough time for exit to occur.) This can be repeated more than once to increase specificity further.
As an analogy, if we have a medicine assembly line that sometimes produces empty boxes, and we are unable to upgrade the assembly line, then we can increase the ratio of full boxes to empty boxes (specificity) by placing a giant fan at the end. Empty boxes are more likely to be blown off the line (a higher exit rate) than full boxes, even though both kinds' production rates are lowered. By lengthening the final section and adding more giant fans (multistep proofreading), the specificity can be increased arbitrarily, at the cost of decreasing production rate.
Specificity paradox
In protein synthesis, the error rate is on the order of $10^{-4}$. This means that when a ribosome is matching anticodons of tRNA to the codons of mRNA, it matches complementary sequences correctly nearly all the time. Hopfield noted that because of how similar the substrates are (the difference between a wrong codon and a right codon can be as small as a difference in a single base), an error rate that small is unachievable with a one-step mechanism. Both wrong and right tRNA can bind to the ribosome, and if the ribosome can only discriminate between them by complementary matching of the anticodon, it must rely on the small free energy difference between binding three matched complementary bases or only two.
A one-shot machine which tests whether the codons match or not by examining whether the codon and anticodon are bound will not be able to tell the difference between wrong and right codons with an error rate less than $10^{-4}$ unless the free energy difference is at least $9.2\,kT$, which is much larger than the free energy difference for single codon binding. This is a thermodynamic bound, so it cannot be evaded by building a different machine. However, this can be overcome by kinetic proofreading, which introduces an irreversible step through the input of energy.
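The $9.2\,kT$ figure is simply the logarithm of the target error rate, since a one-step equilibrium discrimination achieves at best an error fraction of $e^{-\Delta F/kT}$. A one-line check in Python (a sketch; the target value is the translation error rate quoted above):

```python
import math

target_error = 1e-4                   # desired misincorporation rate
delta_F = math.log(1 / target_error)  # required discrimination energy, in units of kT
print(round(delta_F, 1))              # -> 9.2 (kT), as quoted in the text
```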
Another molecular recognition mechanism, which does not require expenditure of free energy, is that of conformational proofreading. The incorrect product may also be formed but hydrolyzed at a greater rate than the correct product, giving the possibility of theoretically infinite specificity the longer the reaction is allowed to run, but at the cost of destroying large amounts of the correct product as well. (Thus there is a tradeoff between product yield and specificity.) The hydrolytic activity may be on the same enzyme, as in DNA polymerases with editing functions, or on different enzymes.
Multistep ratchet
Hopfield suggested a simple way to achieve smaller error rates using a molecular ratchet which takes many irreversible steps, each testing to see if the sequences match. At each step, energy is expended and specificity (the ratio of correct substrate to incorrect substrate at that point in the pathway) increases.
The requirement for energy in each step of the ratchet is due to the need for the steps to be irreversible; for specificity to increase, entry of substrate and analogue must occur largely through the entry pathway, and exit largely through the exit pathway. If entry were an equilibrium, the earlier steps would form a pre-equilibrium and the specificity benefits of entry into the pathway (less likely for the substrate analogue) would be lost; if the exit step were an equilibrium, then the substrate analogue would be able to re-enter the pathway through the exit step, bypassing the specificity of earlier steps altogether.
Although one test will fail to discriminate between mismatched and matched sequences a fraction $p$ of the time, two tests will both fail only $p^2$ of the time, and $N$ tests will fail $p^N$ of the time. In terms of free energy, the discrimination power of $N$ successive tests for two states with a free energy difference $\Delta F$ is the same as one test between two states with a free energy difference $N \Delta F$.
To achieve an error rate of $10^{-4}$ requires several comparison steps. Hopfield predicted on the basis of this theory that there is a multistage ratchet in the ribosome which tests the match several times before incorporating the next amino acid into the protein.
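The number of steps needed follows directly from the $p^N$ scaling. A short Python sketch, where the free-energy difference per test is an assumed illustrative value of a few $kT$ rather than a measured one:

```python
import math

dF_per_test = 3.0             # assumed free-energy difference per test, in kT
p = math.exp(-dF_per_test)    # single-test failure (error) probability
target = 1e-4                 # overall error rate to achieve

n_tests = math.ceil(math.log(target) / math.log(p))
print(p, n_tests)  # p ~ 0.05, so roughly 4 successive tests reach 1e-4
```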
Experimental examples
Charging tRNAs with their respective amino-acids – the enzyme that charges the tRNA is called aminoacyl tRNA synthetase. This enzyme utilizes a high energy intermediate state to increase the fidelity of binding the right pair of tRNA and amino-acid. In this case, energy is used to make the high-energy intermediate (making the entry pathway irreversible), and the exit pathway is irreversible by virtue of the high energy difference in dissociation.
Homologous recombination – Homologous recombination facilitates the exchange between homologous or almost homologous DNA strands. During this process, the RecA protein polymerizes along a DNA and this DNA-protein filament searches for a homologous DNA sequence. Both processes of RecA polymerization and homology search utilize the kinetic proofreading mechanism.
DNA damage recognition and repair – a certain DNA repair mechanism utilizes kinetic proofreading to discriminate damaged DNA. Some DNA polymerases can also detect when they have added an incorrect base and are able to hydrolyze it immediately; in this case, the irreversible (energy-requiring) step is addition of the base.
Antigen discrimination by T cell receptors – T cells respond to foreign antigens at low concentrations, while ignoring any self-antigens present at much higher concentration. This ability is known as antigen discrimination. T-cell receptors use kinetic proofreading to discriminate between high and low affinity antigens presented on an MHC molecule. The intermediate steps of kinetic proofreading are realized by multiple rounds of phosphorylation of the receptor and its adaptor proteins.
Theoretical considerations
Universal first passage time
Biochemical processes that use kinetic proofreading to improve specificity implement the delay-inducing multistep ratchet by a variety of distinct biochemical networks. Nonetheless, many such networks result in the times to completion of the molecular assembly and the proofreading steps (also known as the first passage time) that approach a near-universal, exponential shape for high proofreading rates and large network sizes. Since exponential completion times are characteristic of a two-state Markov process, this observation makes kinetic proofreading one of only a few examples of biochemical processes where structural complexity results in a much simpler large-scale, phenomenological dynamics.
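This near-exponential shape is easy to observe numerically. The sketch below (all rates and sizes are illustrative assumptions) simulates a linear chain in which each intermediate step either advances or resets the assembly to the start; for a fast proofreading (reset) rate and many steps, the sampled completion times have a coefficient of variation close to one, the fingerprint of an exponential distribution.

```python
import random
import statistics

def first_passage_time(n_steps=6, k_forward=1.0, k_reset=2.0):
    """Completion time of an n-step chain where each step advances at rate
    k_forward or resets the whole assembly at the proofreading rate k_reset."""
    t, step = 0.0, 0
    while step < n_steps:
        total = k_forward + k_reset
        t += random.expovariate(total)            # waiting time in current state
        if random.random() < k_forward / total:
            step += 1                             # advance one assembly step
        else:
            step = 0                              # proofreading exit: start over
    return t

times = [first_passage_time() for _ in range(2000)]
mean, sd = statistics.mean(times), statistics.stdev(times)
print(mean, sd / mean)  # coefficient of variation approaches 1 (exponential)
```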
Topology
The increase in specificity, or the overall amplification factor of a kinetic proofreading network that may include multiple pathways and especially loops is intimately related to the topology of the network: the specificity grows exponentially with the number of loops in the network. An example is homologous recombination in which the number of loops scales like the square of DNA length. The universal completion time emerges precisely in this regime of large number of loops and high amplification.
References
Further reading
Biological processes
DNA replication | Kinetic proofreading | Mathematics,Biology | 1,631 |
77,327,980 | https://en.wikipedia.org/wiki/Jordan%20Peccia | Jordan L. Peccia is an American engineer and Professor of Environmental Engineering at Yale University. He was born in Cut Bank, MT. Since 2005, Peccia has been a member of the Chemical and Environmental Engineering faculty at Yale University, where he holds the Thomas E. Golden endowed professorship., and serves as the department's Chair. He is an elected member of the Connecticut Academy of Science and Engineering. In 2023, Peccia was named Head of Yale’s Benjamin Franklin College.
Academic career
Peccia's academic work integrates the problem-solving aspects of environmental engineering with microbial genetics and public health. Contributions include determining the infectious risks associated with the land application of sewage sludge, advancing exposure science on the beneficial health impacts of the indoor microbiome, and inventing DNA sequence-based tools for classifying the mold status of a building. Early in the COVID-19 pandemic, Peccia's lab at Yale demonstrated how SARS-CoV-2 RNA concentrations in domestic wastewater could be a leading indicator (over clinical case monitoring) of COVID-19 outbreaks. Peccia is a member of a group of international scientists that advocated for recognizing the airborne route of transmission during the COVID-19 pandemic. He is the founding chair of the Gordon Research Conference on the Microbiology of the Built Environment.
Family
His brother is James D. Peccia III, Major General (retired), United States Air Force.
References
External links
Benjamin Franklin College, Yale University
Chemical and Environmental Engineering, Yale University
Peccia environmental biotechnology lab website, Yale University
People from Cut Bank, Montana
American environmental scientists
Public health researchers
Year of birth missing (living people)
Living people
Environmental engineers
University of Colorado Boulder alumni
Montana State University alumni
Yale University faculty
21st-century American engineers | Jordan Peccia | Environmental_science | 371 |
75,220,295 | https://en.wikipedia.org/wiki/Sotatercept | Sotatercept, sold under the brand name Winrevair is a medication used for the treatment of pulmonary arterial hypertension. It is an activin signaling inhibitor, based on the extracellular domain of the activin type 2 receptor expressed as a recombinant fusion protein with immunoglobulin Fc domain (ACTRIIA-Fc). It is given by subcutaneous injection.
The most common side effects include headache, epistaxis (nosebleed), rash, telangiectasia (spider veins), diarrhea, dizziness, and erythema (redness of the skin).
Sotatercept was approved for medical use in the United States in March 2024, and in the European Union in August 2024. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Medical uses
In the United States, sotatercept is indicated for the treatment of adults with pulmonary arterial hypertension (PAH, WHO Group 1).
In the European Union, sotatercept, in combination with other pulmonary arterial hypertension therapies, is indicated for the treatment of pulmonary arterial hypertension in adults with WHO Functional Class (FC) II to III, to improve exercise capacity.
Side effects
The most common adverse reactions include headache, epistaxis, rash, telangiectasia, diarrhea, dizziness, and erythema.
Sotatercept causes increases in hemoglobin (red blood cells). High concentrations of red blood cells in blood may increase the risk of blood clots. Sotatercept causes decreases in platelet count, which can result in bleeding problems.
Based on findings in animal studies, sotatercept may impair female and male fertility and cause fetal harm when administered during pregnancy.
History
The US Food and Drug Administration (FDA) approved sotatercept based on evidence of safety and effectiveness from a clinical trial of 323 participants with PAH (WHO group 1 functional class II or III). The trial was conducted at 126 sites in 21 countries: Argentina, Australia, Belgium, Brazil, Canada, the Czech Republic, France, Germany, Israel, Italy, Mexico, the Netherlands, New Zealand, Poland, Serbia, South Korea, Spain, Sweden, Switzerland, the United Kingdom, and the United States. The study included 88 participants inside the United States (43 in the sotatercept group and 45 in the placebo group).
Society and culture
Legal status
Sotatercept was approved for medical use in the United States in March 2024. The FDA granted the application breakthrough therapy designation.
In June 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Winrevair, intended for the treatment of pulmonary arterial hypertension. The applicant for this medicinal product is Merck Sharp & Dohme B.V. Sotatercept was approved for medical use in the European Union in August 2024.
Economics
Following its approval in 2024, the list price of Winrevair, supplied as single-vial and double-vial kits, was announced with an estimated annual cost of about $240,000.
Names
Sotatercept is the international nonproprietary name.
Sotatercept is sold under the brand name Winrevair.
Research
It was initially developed to increase bone density but during its early development was found to increase hemoglobin and red blood cell counts, and was subsequently studied for use in anemia associated with multiple conditions including beta thalassemia and multiple myeloma. Development of this drug for anemia was superseded by the development of luspatercept (Reblozyl), a modified activin receptor type 2B (ACTRIIB-Fc) based ligand trap with improved properties for anemia. Hypothesizing that this drug might block the effects of activin in promoting pulmonary vascular disease, researchers found that this molecule inhibits vascular obliteration in multiple models of experimental pulmonary hypertension, providing the rationale to reposition sotatercept for PAH, as tested in the PULSAR and STELLAR clinical trials.
References
Further reading
External links
Antihypertensive agents
Drugs developed by Merck & Co.
Peptides
Orphan drugs | Sotatercept | Chemistry | 897 |
64,740,557 | https://en.wikipedia.org/wiki/Lyth%20bound | In cosmological inflation, within the slow-roll paradigm, the Lyth argument places a theoretical upper bound on the amount of gravitational waves produced during inflation, given the amount of departure from the homogeneity of the cosmic microwave background (CMB).
Summary
During slow-roll inflation, the ratio of gravitational waves to inhomogeneities of the CMB is correlated with the steepness of the inflationary potential.
Temperature inhomogeneities were successfully and accurately measured in the CMB.
There are current CMB polarization experiments aimed at measuring the primordial gravitational wave signature in the CMB.
However, to date, no significant signal of primordial gravitational waves has been detected. Thus the ratio cannot exceed a certain value.
Thus the steepness of the inflationary potential is bounded.
Detail
The argument was first introduced by David H. Lyth in his 1997 paper "What Would We Learn by Detecting a Gravitational Wave Signal in the Cosmic Microwave Background Anisotropy?" The detailed argument is as follows:
The power spectrum for curvature perturbations is given by:
$$\mathcal{P}_{\zeta}(k) = \frac{1}{8\pi^{2}}\,\frac{H^{2}}{M_{p}^{2}}\,\frac{1}{\epsilon}\,,$$
Whereas the power spectrum for tensor perturbations is given by:
$$\mathcal{P}_{T}(k) = \frac{2}{\pi^{2}}\,\frac{H^{2}}{M_{p}^{2}}\,,$$
in which $H$ is the Hubble parameter, $k$ is the wave number, $M_{p}$ is the reduced Planck mass and $\epsilon$ is the first slow-roll parameter, given by $\epsilon = -\dot{H}/H^{2}$.
Thus the ratio of tensor to scalar power spectra at a certain wave number $k$, denoted as the so-called tensor-to-scalar ratio $r$, is given by:
$$r \equiv \frac{\mathcal{P}_{T}(k)}{\mathcal{P}_{\zeta}(k)} = 16\,\epsilon\,.$$
While strictly speaking $r$ is a function of $k$, during slow-roll inflation it is understood to change very mildly, thus it is customary to simply omit the wavenumber dependence.
Additionally, the numeric pre-factor is susceptible to slight changes owing to more detailed calculations but remains of order ten.
Although the slow-roll parameter is given as above, it was shown that in the slow-roll limit, this parameter can be given by the slope of the inflationary potential such that:
$$\epsilon = \frac{M_{p}^{2}}{2}\left(\frac{V'(\phi)}{V(\phi)}\right)^{2},$$ in which $V(\phi)$ is the inflationary potential over a scalar field $\phi$.
Thus, $r = 8\,M_{p}^{2}\left(V'/V\right)^{2}$, and the upper bound on $r$ placed by CMB measurements and the lack of a gravitational wave signal is translated to an upper bound on the steepness of the inflationary potential.
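As a numerical illustration, the following Python sketch turns an assumed observational limit on $r$ (the value 0.036 is used here purely as an illustrative upper limit of the kind reported by CMB polarization experiments) into bounds on $\epsilon$ and on the potential steepness, using the leading-order relations above:

```python
import math

r_max = 0.036  # assumed illustrative upper limit on the tensor-to-scalar ratio

epsilon_max = r_max / 16               # from r = 16 * epsilon
steepness_max = math.sqrt(r_max / 8)   # |V'/V| in reduced Planck units, from r = 8 (M_p V'/V)^2

print(f"epsilon < {epsilon_max:.2e}")          # 2.25e-03
print(f"|V'/V| < {steepness_max:.2e} / M_p")   # 6.71e-02
```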
Acceptance and significance
Although the Lyth bound argument was adopted relatively slowly, it has been used in many subsequent theoretical works.
The original argument deals only with the original inflationary time period that is reflected in the CMB signature, which at the time was about 5 e-folds, as opposed to about 8 e-folds to date. However, an effort was made to generalize this argument to the entire span of physical inflation, which corresponds to the order of 50 to 60 e-folds.
On the basis of these generalized arguments, an unnecessarily constraining view arose, which preferred realizations of inflation based on large-field models, as opposed to small-field models. This view was prevalent until the last decade, which saw a revival in the prevalence of small-field models due to theoretical works that pointed to possible likely small-field model candidates. The likelihood of these models was further developed and numerically demonstrated.
References
Inflation (cosmology)
Gravitational waves | Lyth bound | Physics | 664 |
53,516,779 | https://en.wikipedia.org/wiki/Multiple%20Michael/aldol%20reaction | Multiple Michael/aldol reaction (or domino Michael/aldol reaction) is a consecutive series of reactions composed of either Michael addition reactions or aldol reactions. More than two steps of reaction are usually involved. This reaction has been used for synthesis of large macrocyclic or polycyclic ring structures.
Gary Posner and co-workers were the first to report using multiple Michael/aldol reactions to construct macrolide structures. Their method utilized a Michael-Michael-Michael-ring closure (MIMI-MIRC) or a Michael-Michael-aldol-ring closure annulation sequences to assemble acrylates and/or aldehydes together to form substituted 9-, 10-, and 11-membered macrolide structures. Besides synthesis of complex ring structures, multiple Michael/aldol reaction can also be used for rapid production of complex compound libraries.
Aldolases have been used to mediate multiple aldol reactions. Chi-Huey Wong and co-workers showed that 2-deoxyribose-5-phosphate aldolase and fructose-1,6-diphosphate aldolase could be used together in a one-pot reaction to connect two aldehydes and one ketone together through sequential aldol reactions. This reaction could be used to generate a variety of carbohydrate derivatives.
See also
Robinson annulation, a classic reaction involving a Michael addition followed by an aldol condensation
References
Organic reactions | Multiple Michael/aldol reaction | Chemistry | 299 |
8,340,340 | https://en.wikipedia.org/wiki/Postmortem%20studies | Postmortem studies are a type of neurobiological research, which provides information to researchers and individuals who will have to make medical decisions in the future. Postmortem researchers conduct a longitudinal study of the brain of an individual, who has some sort of phenomenological condition (i.e. cannot speak, trouble moving left side of body, Alzheimer's, etc.) that is examined after death. Researchers look at certain lesions in the brain that could have an influence on cognitive or motor functions. These irregularities, damage, or other cerebral anomalies observed in the brain are attributed to an individual's pathophysiology and their environmental surroundings. Postmortem studies provide a unique opportunity for researchers to study different brain attributes that would be unable to be studied on a living person.
Postmortem studies allow researchers to determine causes and cure for certain diseases and functions. It is critical for researchers to develop hypotheses, in order to discover the characteristics that are meaningful to a particular disorder. The results that the researcher discovers from the study will help the researcher trace the location in the brain to specific behaviors.
When tissue from a postmortem study is obtained it is imperative that the researcher ensures the quality is adequate to study. This is specifically important when an individual is researching gene expression (i.e. DNA, RNA, and proteins). Some key ways researchers monitor the quality are by determining the pain level/time of death of the individual, pH of the tissue, refrigeration time and temperature of storage, time until the brain tissue is frozen, and the thawing conditions. As well as finding out specific information about the individual's life such as: age, sex, legal or illegal substance use, and a treatment analysis of the individual.
Background
Postmortem studies have been used to further the understanding of the brain for centuries. Before the time of the MRI, CAT Scan, or X-ray it was one of the few ways to study the relation between behavior and the brain.
Broca
Paul Broca used postmortem studies to link a specific area of the brain with speech production.
His research began when he noticed that a patient with an aphasic stroke had lesions in the left hemisphere of his brain. His research and theory continued over time.
The most notable of his research subjects was Tan (named for the only syllable he could utter). Tan had lesions in his brain caused by syphilis. These lesions were determined to cover the area of his brain that was important for speech production.
The area of the brain that Broca identified is now known as Broca's area; damage to this section of the brain can lead to Expressive aphasia.
Wernicke
Karl Wernicke also used postmortem studies to link specific areas of the brain with speech production. However his research focused more on patients who could speak, however their speech made little sense and/or had trouble understanding spoken words or sentences.
His research in language comprehension and the brain also found it to be localized in the left hemisphere, but in a different section. This area is known as Wernicke's area; damage to this section can lead to Receptive aphasia.
Benefits
Postmortem studies allows for researchers to give information that is relevant to individuals by explaining the causes of particular diseases and behaviors. This is in hopes that others can avoid some of these experiences in the future. Postmortem studies also improve medical knowledge and help to determine whether changes happen in the brain itself or in the actual disorder. By doing this researchers are then able to help prioritize experimental studies and integrate the studies into animal and cell research. Another benefit to postmortem studies is that researchers have the ability to make a wide range of discoveries, because of the many different techniques used to obtain tissue samples. Postmortem studies are extremely important and unique despite their limitations.
Limitations
Postmortem brain samples are a limited resource, because it is extremely difficult for a researcher to get hold of an individual's brain. The researchers ask their participants or the families to consent to allowing them to study the loved one's brain; however, there have been falling rates of consent in the last few years. Subsequently, researchers have to use indirect methods to study the locations and processes of the brain. Another limitation of postmortem studies is the continuous funding and the time it takes to conduct a longitudinal study. Postmortem longitudinal studies usually run from the time of assessment until the time of death, about 20–30 years.
References
Research methods
Neuroscience
Forensic pathology | Postmortem studies | Biology | 944 |
7,759,939 | https://en.wikipedia.org/wiki/Tie%20down%20strap | A tie down strap (also known as a ratchet strap, a lashing strap or a tie down) is a fastener used to hold down cargo or equipment during transport. Tie down straps are essentially webbing that is outfitted with tie down hardware. This hardware allows the tie down strap to attach to the area surrounding the cargo or equipment, loop over the cargo or equipment, and/or attach to the cargo or equipment. It usually also includes a method of tensioning the strap, such as a ratchet.
Common types
Two common types of tie-down straps are loop straps and two-piece straps.
Loop straps are a single piece of webbing that is looped around the item to be protected and the two endpoints are brought together at the tie-down fastener for fastening and providing tension.
A two-piece tie-down strap is a single assembly that is constructed out of two separate pieces of webbing each with their own hardware that is fastened at one end to the area surrounding the equipment to be protected and connected to each other, typically at the fastener.
Webbing with a linking device is used for the fastening of goods with trucks, trailers, pallets, boxes, and containers. This is also known as ratchet lashing, ratchet straps, ratchet tie downs, tie down straps and lashing with webbing.
Custom ratchet straps can have hooks such as J hooks, D hooks, E track fittings, and S hooks, the last being the most popular industry standard.
See also
Trucker's hitch
Load securing
Strapping
Webbing
References
"How to use ratchet straps", Mytee Products, 10 April 2024
Fasteners | Tie down strap | Engineering | 343 |
41,685 | https://en.wikipedia.org/wiki/Security%20kernel | In telecommunications, the term security kernel has the following meanings:
In computer and communications security, the central part of a computer or communications system hardware, firmware, and software that implements the basic security procedures for controlling access to system resources.
A self-contained usually small collection of key security-related statements that (a) works as a part of an operating system to prevent unauthorized access to, or use of, the system and (b) contains criteria that must be met before specified programs can be accessed.
Hardware, firmware, and software elements of a trusted computing base that implement the reference monitor concept.
References
National Information Systems Security Glossary
Computing terminology | Security kernel | Technology | 132 |
41,096,102 | https://en.wikipedia.org/wiki/Glyoxal-bis%28mesitylimine%29 | Glyoxal-bis(mesitylimine) is an organic compound with the formula H2C2(NC6H2Me3)2 (Me = methyl). It is a yellow solid that is soluble in organic solvents. It is classified as a diimine ligand. It is used in coordination chemistry and homogeneous catalysis. It is synthesized by condensation of 2,4,6-trimethylaniline and glyoxal. In addition to its direct use as a ligand, it is a precursor to imidazole precursors to the popular NHC ligand called IMes.
Related compounds
Glyoxal-bis(triisopropylphenylimine), which is bulkier than glyoxal-bis(mesitylimine).
References
Chelating agents
Imines | Glyoxal-bis(mesitylimine) | Chemistry | 175 |
922,382 | https://en.wikipedia.org/wiki/Smith%E2%80%93Volterra%E2%80%93Cantor%20set | In mathematics, the Smith–Volterra–Cantor set (SVC), ε-Cantor set, or fat Cantor set is an example of a set of points on the real line that is nowhere dense (in particular it contains no intervals), yet has positive measure. The Smith–Volterra–Cantor set is named after the mathematicians Henry Smith, Vito Volterra and Georg Cantor. In an 1875 paper, Smith discussed a nowhere-dense set of positive measure on the real line, and Volterra introduced a similar example in 1881. The Cantor set as we know it today followed in 1883. The Smith–Volterra–Cantor set is topologically equivalent to the middle-thirds Cantor set.
Construction
Similar to the construction of the Cantor set, the Smith–Volterra–Cantor set is constructed by removing certain intervals from the unit interval $[0, 1]$.
The process begins by removing the middle 1/4 from the interval (the same as removing 1/8 on either side of the middle point at 1/2) so the remaining set is

$$\left[0, \tfrac{3}{8}\right] \cup \left[\tfrac{5}{8}, 1\right].$$

The following steps consist of removing subintervals of width $1/4^{n}$ from the middle of each of the $2^{n-1}$ remaining intervals at the $n$th step. So for the second step the intervals $\left(\tfrac{5}{32}, \tfrac{7}{32}\right)$ and $\left(\tfrac{25}{32}, \tfrac{27}{32}\right)$ are removed, leaving

$$\left[0, \tfrac{5}{32}\right] \cup \left[\tfrac{7}{32}, \tfrac{3}{8}\right] \cup \left[\tfrac{5}{8}, \tfrac{25}{32}\right] \cup \left[\tfrac{27}{32}, 1\right].$$
Continuing indefinitely with this removal, the Smith–Volterra–Cantor set is then the set of points that are never removed.
Each subsequent iterate in the Smith–Volterra–Cantor set's construction removes proportionally less from the remaining intervals. This stands in contrast to the Cantor set, where the proportion removed from each interval remains constant. Thus, the Smith–Volterra–Cantor set has positive measure while the Cantor set has zero measure.
Properties
By construction, the Smith–Volterra–Cantor set contains no intervals and therefore has empty interior. It is also the intersection of a sequence of closed sets, which means that it is closed.
During the process, intervals of total length

$$\sum_{n=0}^{\infty} \frac{2^{n}}{4^{\,n+1}} = \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots = \frac{1}{2}$$

are removed from $[0, 1]$, showing that the set of the remaining points has a positive measure of 1/2. This makes the Smith–Volterra–Cantor set an example of a closed set whose boundary has positive Lebesgue measure.
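The construction is easy to reproduce with exact rational arithmetic. A minimal Python sketch (function name illustrative) iterates the removals and confirms the remaining length tends to 1/2:

```python
from fractions import Fraction

def svc_intervals(steps):
    """Closed intervals remaining after `steps` removal rounds of the
    Smith-Volterra-Cantor construction on [0, 1]."""
    intervals = [(Fraction(0), Fraction(1))]
    for n in range(1, steps + 1):
        width = Fraction(1, 4 ** n)   # removed from each interval's middle
        nxt = []
        for a, b in intervals:
            mid = (a + b) / 2
            nxt += [(a, mid - width / 2), (mid + width / 2, b)]
        intervals = nxt
    return intervals

remaining = svc_intervals(5)
print(sum(b - a for a, b in remaining))  # 33/64, tending to 1/2 as steps grow
```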
Other fat Cantor sets
In general, one can remove a different length $r_{n}$ from each remaining subinterval at the $n$th step of the algorithm, and end up with a Cantor-like set. The resulting set will have positive measure if and only if the sum of the sequence of removed lengths is less than the measure of the initial interval. For instance, suppose the middle intervals of length $a^{n}$ are removed from each of the intervals remaining at the $n$th iteration, for some fixed $0 < a \leq 1/3$. Then, the resulting set has Lebesgue measure

$$1 - \sum_{n=1}^{\infty} 2^{n-1} a^{n} = 1 - \frac{a}{1-2a} = \frac{1-3a}{1-2a},$$

which goes from $0$ to $1$ as $a$ goes from $1/3$ to $0$ ($a = 0$ is impossible in this construction).
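That closed form can be evaluated exactly with rational arithmetic; a short sketch (function name illustrative):

```python
from fractions import Fraction

def fat_cantor_measure(a):
    """Lebesgue measure of the set left after removing middle intervals of
    length a**n (from each of the 2**(n-1) pieces) at step n, for 0 < a <= 1/3."""
    assert 0 < a <= Fraction(1, 3)
    return 1 - a / (1 - 2 * a)   # = 1 - sum_{n>=1} 2**(n-1) * a**n

print(fat_cantor_measure(Fraction(1, 4)))  # 1/2, the Smith-Volterra-Cantor case
print(fat_cantor_measure(Fraction(1, 3)))  # 0, the middle-thirds Cantor set
```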
Cartesian products of Smith–Volterra–Cantor sets can be used to find totally disconnected sets in higher dimensions with nonzero measure. By applying the Denjoy–Riesz theorem to a two-dimensional set of this type, it is possible to find an Osgood curve, a Jordan curve such that the points on the curve have positive area.
See also
The Smith–Volterra–Cantor set is used in the construction of Volterra's function (see external link).
The Smith–Volterra–Cantor set is an example of a compact set that is not Jordan measurable, see Jordan measure#Extension to more complicated sets.
The indicator function of the Smith–Volterra–Cantor set is an example of a bounded function that is not Riemann integrable on (0,1) and moreover, is not equal almost everywhere to a Riemann integrable function, see Riemann integral#Examples.
References
Fractals
Measure theory
Sets of real numbers
Topological spaces | Smith–Volterra–Cantor set | Mathematics | 763 |
26,475,258 | https://en.wikipedia.org/wiki/Xipranolol | Xipranolol is a beta blocker.
See also
Beta blocker
References
Secondary alcohols
Antihypertensive agents
Beta blockers
Isopropylamino compounds
Ethers
Secondary amines | Xipranolol | Chemistry | 42 |
39,308,049 | https://en.wikipedia.org/wiki/ENERPOS | ENERPOS is the first educational net-zero energy building in the tropics and one of the 13 Net ZEBs in the tropics thanks to its bioclimatic design. Its name comes from the French "énergie positive" ("positive energy" in English). ENERPOS is located on Réunion Island, a French territory in the Indian Ocean. Building an energy-efficient building in such a climate is particularly challenging, but the energy expectations with regard to ENERPOS have been reached, even largely exceeded. ENERPOS is not only an energy-efficient building but also displays various passive methods to reduce energy consumption while providing a comfortable environment for its users. Classes are hosted for both undergraduate diploma and degree courses as well as for the Department of Construction and Energy at the Graduate Engineering School of Réunion Island.
Location
ENERPOS is a university building located in Saint-Pierre on the French island of La Réunion. This island, whose climate is hot and humid, is located in the Indian Ocean, to the east of Madagascar. The area is also often struck by tropical cyclones, which creates additional construction challenges.
Context of Réunion Island
Over 800,000 inhabitants of Réunion Island rely on a limited supply of energy. Electricity production on the island is mainly generated from fossil fuels such as coal and fuel oil; it is expensive and among the most polluting in the world, with close to eight times more CO2 produced per kilowatt hour than in mainland France. Unfortunately, the ever-increasing demand for energy, due to significant demographic growth, sometimes exceeds the amount of energy available. As a consequence, shortages can occur at certain times during austral summers, when air conditioning is widely used.
As is the case in other parts of the world, the electricity consumption of buildings represents a significant percentage of the energy used on Réunion Island. Buildings in French tropical zones, and especially on Réunion Island, have often been awkwardly designed, importing metropolitan building concepts contrary to former Creole vernacular architecture. Indeed, it has been common practice for decades to build as cheaply as possible, without any consideration for the environment and the climate, making buildings uncomfortable to live in without air conditioning. Moreover, air conditioning and lighting are usually oversized by designers, which leads to wasteful energy consumption.
The good news is that considerable improvements can be made in reducing energy consumption in the field of building on Réunion Island.
Goal and achievements
Under the general context of energy supply and consumption on Réunion Island, this building was expected to be net-zero energy and to export at least as much energy as it consumes to the polluting electricity grid of the island. To achieve this goal, the first issue to be addressed is the energy consumption of the building and how to build without using air conditioning, a considerable consumer, while providing a comfortable environment for the users. Air conditioning is only used in one-third of the total surface area, for computers and servers; the remaining two-thirds are cooled and ventilated naturally. Passive methods are used, requiring people to be active instead of being passive in an active building (François Garde, Ph.D., P.E. and ASHRAE member). At the beginning of the design of ENERPOS in 2005, the main aim was to demonstrate that the overall consumption of the building could be reduced to a third of that of a standard building. The result is that only one-seventh of the annual energy consumption of a standard building is used. To compensate for this, photovoltaic panels are installed over the rooftops; in 2010 their production far exceeded the building's overall consumption, making ENERPOS one of the 13 Net ZEBs in a tropical climate. Saint Pierre is usually sunny all year and receives a large amount of solar radiation, peaking in summer. Consequently, the building consumes 7 times less energy than it produces, the extra production of electricity being released into the grid.
ENERPOS meets the requirements of the following two performance labels, HQE and PERENE. PERENE is a local label in Réunion Island guiding those wishing to abide by it on how to build in harmony with the corresponding climate (four different classifications are defined depending mainly on the altitude), Saint Pierre being in the hottest zone of the island.
Principles and features
Natural cross ventilation
Human comfort depends on five criteria, two of which concern one's clothing and metabolism, and three others which focus on air temperature, air humidity and airflow speed. The faster the airflow over a person, the cooler they feel. Moreover, well-designed natural ventilation brings fresh, healthy air into the building. Thus, natural ventilation tends to increase comfort as well as health in tropical climates and aims to make air conditioning unnecessary.
The building has been orientated to prevent strong East-South-East trade winds from entering the rooms in winter and still benefit from thermal breezes in summer. The main façades are then orientated North and South, reducing the heat gain on the Western and Southern façades (which are more exposed to solar radiation) at the same time.
Successful ventilation has been obtained by creating a window-to-wall ratio of 30%, using louvres on both opposite sides of the rooms. Louvres are useful not only to regulate the airflow but also to protect against cyclones and break-ins. In addition, the surroundings of the building have been carefully designed to prevent air from heating at ground level before entering the rooms:
Planting native plants and trees in the patio and around the building to create a microclimate which is as fresh as possible.
Placing car parks under the building instead of next to it for the same reason. Furthermore, it increases soil permeability so that tropical rains do not cause floods but penetrate the ground.
Finally, large ceiling fans are installed in every room, even those using air conditioning. Ceiling fans ensure that, even in the absence of the necessary breezes, the airflow needed to feel comfortable in the room is provided for the users. This solution significantly reduces the amount of energy consumed: the overall annual consumption of the ceiling fans and the split system (the latter used to cool the technical rooms) is a small fraction of that of a classic air conditioning system in a standard building.
Solar shading
Now that the airflow parameter has been dealt with, the next issue to be addressed is air temperature. Most of the heat gained in the rooms is due to solar radiation coming through the glazing.
Barely any glazing has been placed on the Western and Eastern small façades because they are the most likely to receive solar radiation. The two main façades, including the louvres, are protected against direct sun rays thanks to vertical solar shading composed of inclined wooden strips. To be as efficient as possible, solar shading has been simulated with 3-D software. This glazing protection has two main effects: to prevent glare on the desks, which can be very annoying for the students and the employees working in ENERPOS; and to decrease indoor temperature.
Materials
Concerning the envelope, the walls are made of concrete; the roofing is insulated with a layer of polystyrene and a ventilated BIPV (Building Integrated Photovoltaic) over-roof; the solar shading systems are made of wooden strips; the east and west gables are insulated with mineral wool and wooden cladding. The paint used is completely organic and the wooden components have not undergone any specific treatment. No insulation is required on the main facades, as they are very efficient in terms of S-value due to the solar shading.
Lighting
Daylighting has also been simulated to ensure a Useful Daylight Index (UDI) of at least 90% in most places. Two classrooms facing the sea on the first floor of the building do not have any artificial lighting. Except for those two classrooms, all classrooms and offices are lit by low-energy lights producing an artificial lighting density lower than that of a standard building: low-energy T-5 lights in the classrooms and personal LED desk lamps in the offices. These lighting densities are high enough to make these workplaces comfortable to work in, yet reduce both the energy consumption and the thermal heat gain produced by the lights in so far as is possible.
There are multiple switches to control the lights and ceiling fans by the rows of tables in the classrooms, since some people may feel hot or lack light while others are comfortable. This avoids wasting electricity.
A Building Management System is used to control the active systems. In the event of people forgetting to turn off the light when leaving a room, a timer will turn off lights automatically after two hours.
Computing
This is similar to the lighting issue as computing systems can affect both energy consumption and thermal heat gain greatly. The main solution adopted in the offices is to use laptops rather than desktop computers since they usually consume less electricity. As for the computer rooms, they are only equipped with screens, mice and keyboards and all central units are located in the air-conditioned technical room. The thermal loads from the computers are thus kept outside the computing rooms.
Photovoltaic production
As previously explained, the city of Saint Pierre receives a huge amount of solar radiation throughout the year. This natural source of energy has consequently been exploited. The target is to make ENERPOS a positive energy building thanks to solar panels: the PV production must be at least equal to the energy consumption of the building.
Since the photovoltaic panels have been used as an over-roof, the overall surface of the panels has been oversized compared to the energy needs, in order to protect the whole roof from direct sun rays. The low energy consumption of ENERPOS is more than balanced by the integrated solar panels, whose annual production is several times the building's consumption. The resulting surplus of energy, not being consumed by the building, is released into the grid.
Furthermore, all of the costs and risks of this installation are borne by the manufacturer, as agreed in the contract, and not by the owner of ENERPOS (that is to say, the University of Réunion Island). In exchange, the University of Réunion Island rents the photovoltaic installation to the manufacturer, who receives the benefit of the electricity fed into the grid for 15 years. After that period, the owner of the building becomes the owner of the PV panels.
Making the users active
The energy consumption of a building relies not only on the way it has been built but also, to a large extent, on occupant behaviour. This idea is all the more true for buildings based on passive designs. Indeed, making ENERPOS a passive building implies that people need to be active to use it to its full capacity.
For example, before turning on the ceiling fans of a classroom, the students have to open the louvres first. It seems to be common sense but as a matter of fact, people do not think about that most of the time. That is why signs are displayed in the rooms explaining how to use a classroom properly to avoid wasting energy. The purpose is to educate students (and teachers) about the way to behave and to make them realize the environmental issues at stake on Réunion Island.
Post-occupancy evaluation
Since ENERPOS is a pioneer project on Réunion Island and in the tropics, it is essential to analyze the consumption distribution, the performance and how people feel in this building. Therefore, a post-occupancy evaluation has been carried out for three hot seasons to assess the comfort level in ENERPOS. Students and teachers were asked to fill in a questionnaire about how they feel in the building while environmental parameters, such as temperatures, humidity and air velocity, were collected.
The main conclusion is that out of 700 students surveyed, the vast majority feel comfortable in ENERPOS in the hot season without any air conditioning. The ultimate objective of this project has then been met.
Results
The annual consumption of ENERPOS has been monitored and broken down by energy end use.
The contribution of the plug loads to the overall energy consumption is unusually high compared to a standard building, because the air conditioning and lighting shares have been so greatly reduced.
To conclude, the ENERPOS building shows that it is possible to build an educational Net ZEB in the tropics while providing a comfortable environment for people to work and study in. Moreover, the lessons learnt about the ENERPOS project can be applied to green building and Net ZEB projects in hot climates.
See also
Autonomous buildings
Building-integrated photovoltaics
:Category:Low-energy building
Energy conservation
Green building
Home energy monitor
Plug load
Sustainable design
Net-zero energy building
References
http://www.hpbmagazine.org/case-studies/educational/university-of-la-reunions-enerpos-saintpierre-la-reunion-france
http://www.nxtbook.com/nxtbooks/ashrae/hpb_2012summer/
External links
University of Reunion Island
ESIROI
Institute of technology
Piment laboratory
Low-energy building
Sustainable building in France
Sustainable architecture | ENERPOS | Engineering,Environmental_science | 2,687 |
8,933,729 | https://en.wikipedia.org/wiki/Helium%20release%20valve | A helium release valve, helium escape valve or gas escape valve is a feature found on some diving watches intended for saturation diving using helium based breathing gas.
Gas ingress problem
When saturation divers operate at great depths, they live under pressure in a saturation habitat with an atmosphere containing helium or hydrogen. Since helium atoms are the smallest natural gas particles (the atomic radius of a helium atom is 0.49 angstrom and that of a water molecule is about 2.75 angstrom), they are able to diffuse over about five days into the watch, past the seals which are able to prevent ingress of larger molecules such as water. This is not a problem as long as the watch remains under external pressure, but when decompressing, a pressure difference builds up between the trapped gas inside the watch case and the environment. Depending on the construction of the watch case, seals and crystal, this effect can cause damage to the watch, such as the crystal popping off, as diving watches are designed primarily to withstand external pressure.
Solutions development
Some watch manufacturers manage the internal overpressure effect by simply making the case and sealed connected parts adequately sealed or strong enough to avoid or withstand the internal pressure, but Rolex and Doxa S.A. approached the problem by creating the helium escape valve in the 1960s (first introduced in the Rolex Submariner/Sea-Dweller and the Doxa Conquistador): A small, spring-loaded one-way valve is fitted in the watch case that opens when the differential between internal and external pressure is sufficient to overcome the spring force. As a result, the valve releases the gases trapped inside the watch case during decompression, preventing damage to the watch. The original idea for using a one-way valve came from Robert A. Barth, a US Navy diver who pioneered saturation diving during the US Navy Genesis and SEALAB missions led by Dr. George F. Bond. The patent for the helium escape valve was filed by Rolex on 6 November 1967 and granted on 15 June 1970.
Solutions application
Automatic helium release valves usually don't need any manual operation, but some are backed up by a screw-down crown in the side of the watch, which is unscrewed at the start of decompression to allow the valve to operate. As the decompression of saturation divers is a slow process, regulated by working-condition requirements to prevent decompression sickness and other harmful medical effects, the helium release valve does not have to cope with the extremely rapid decompression scenarios that can occur in a material/medical pass-through lock system.
Helium release valves can primarily be found on diving watches featuring a water resistance rating greater than 300 m (1000 ft). ISO 6425 defines a diver's watch for mixed-gas diving as: A watch required to be resistant during diving in water to a depth of at least 100 m and to be unaffected by the overpressure of the breathing gas. Models that feature a helium release valve include most of the Omega Seamaster series, Rolex Sea Dweller, Tudor watches Pelagos, some dive watches from the Citizen Watch Co., Ltd, Breitling, Girard-Perregaux, Anonimo, Panerai, Mühle Rasmus by Nautische Instrumente Mühle Glashütte, Deep Blue, Scurfa Watches, all watches produced by Enzo Mechana, Aegir Watches and selected Doxa, selected Victorinox models, Oris models, TAG Heuer Aquaracer models, and the DEL MAR Professional Dive 1000 watch.
Other watch manufacturers such as Seiko and Citizen Watch Co., Ltd still offer high-level dive watches that are guaranteed safe against the effects of mixed-gas diving without needing an additional opening in the case in the form of a release valve. This is normally achieved through the use of special gaskets and monocoque case construction instead of using the more common screw down case-backs.
Saturation diving water resistance management
To enable changing the time or date during their dive, saturation divers have to act somewhat counterintuitively regarding the water resistance management of their diving watches. On the initial and any later blowdown or compression, most saturation divers consciously open the water-resistant crown of their watches to allow the breathing gas inside to equalize the internal pressure to their storage/living environment. This pressure differential mitigation strategy allows them to later open the water-resistant crown at their storage pressure, to be able to adjust their watch if required during their (often weeks long) saturation period under regularly varying pressure levels between worksites. The storage pressure is generally kept equal or only slightly lower than the pressure at the intended divers' working depth. Opening a watch case (by unscrewing a crown) means expanding its internal volume. In a significantly higher external pressure environment, any expansion will be impeded by this environment. Every opening and closing action of a release valve or crown seal involves a risk of dirt, lint or other non-gaseous matter ingress, which can compromise the proper functioning of the seal and watch.
ISO 6425 divers' watches standard for mixed-gas diving decompression testing
The standards and features for diving watches are regulated by the international standard ISO 6425 – Divers' watches. ISO 6425 testing of water resistance, or water-tightness and resistance at a water overpressure as it is officially defined, is fundamentally different from that of non-dive watches, because every single watch has to be tested.
ISO 6425 provides the following specific additional requirements for testing diver's watches for mixed-gas diving:
Test of operation at a gas overpressure. The watch is subjected to the overpressure of the gas that will actually be used, i.e. 125% of the rated pressure, for 15 days. Then a rapid reduction in pressure to atmospheric pressure shall be carried out in a time not exceeding 3 minutes. After this test, the watch shall function correctly. An electronic watch shall function normally during and after the test. A mechanical watch shall function normally after the test (the power reserve normally being less than 15 days).
Test by internal pressure (simulation of decompression). Remove the crown together with the winding and/or setting stem. In its place, fit a crown of the same type with a hole. Through this hole, introduce the gas mixture which will actually be used and create an overpressure of the rated pressure/20 bar in the watch for a period of 10 hours. Then carry out the test at the rated water overpressure. In this case, the original crown with the stem shall be refitted beforehand. After this test, the watch shall function correctly.
References
Swiss patent CH492246A Montre étanche, MONTRES ROLEX SA, ANDRE ZIBACH, 6 November 1967
Technical Perspective: What Saturation Diving Really Means (And What Watchmakers Do About It). It's all about the helium, and not getting killed. Jack Forster, 11 July 2017, hodinkee.com
Underwater diving equipment components | Helium release valve | Technology | 1,471 |
54,069,873 | https://en.wikipedia.org/wiki/3-Arylpropiolonitriles | 3-Arylpropiolonitriles (APN) belong to a class of electron-deficient alkyne derivatives substituted by two electron-withdrawing groups – a nitrile and an aryl moiety. Such activation results in improved selectivity towards highly reactive thiol-containing molecules, namely cysteine residues in proteins. APN-based modification of proteins has been reported to overcome several important drawbacks of existing strategies in bioconjugation, notably the presence of side reactions with other nucleophilic amino acid residues and the relative instability of the resulting bioconjugates in the bloodstream. The latter drawback is especially important for the preparation of targeted therapies, such as antibody-drug conjugates.
Synthesis
The synthesis of 3-arylpropiolonitriles has been the subject of several studies. The most developed and frequently used approach is based on MnO2-mediated free-radical oxidation, in the presence of ammonia, of the corresponding propargylic alcohols, which are obtained by Sonogashira coupling of the corresponding iodo derivative (Figure 1).
Applications in biotechnology
In bioconjugation (forming a stable covalent link between a biomolecule and a functional payload, such as a fluorescent dye, cytotoxic agent, or tracer), linking of the payload was classically achieved using maleimide heterobifunctional reagents (for example, see SMCC). However, when administered into living organisms, maleimide-containing bioconjugates were found to be relatively unstable and to lose the payload in the blood circulation due to the reversibility of the addition reaction between the maleimide moiety and the cysteine residue of a protein (retro-Michael addition). Due to the increased stability of bioconjugates obtained with analogous APN-based payloads (a schematic reaction is shown in Figure 2 below), their use is often preferable when high selectivity and biostability are especially important, namely for the preparation of antibody–drug conjugates and other biologics. The standard procedure for APN protein labeling consists of incubating a protein containing free cysteine residues with an APN-functionalized probe in PBS buffer at pH 7.5–9.0 at room temperature for 2–12 hours, followed by an optional step of purification of the resulting bioconjugate using size-exclusion chromatography or ultrafiltration.
References
Biotechnology
Nitriles
Alkyne derivatives
Aromatic compounds | 3-Arylpropiolonitriles | Chemistry,Biology | 523 |
37,086,118 | https://en.wikipedia.org/wiki/Entoloma%20mathinnae | Entoloma mathinnae is a species of agaric fungus in the family Entolomataceae. Known only from Tasmania, Australia, it was described as new to science in 2009. Mushrooms have light yellow-brown, convex caps up to wide atop stems measuring long.
Taxonomy
The species was described in 2009 in the journal Mycotaxon by Australian mycologists Genevieve Gates, Bryony M. Horton, and Dutch Entoloma authority Machiel Noordeloos. Entoloma mathinnae is classified in the section Entoloma of the genus Entoloma. Species in this section are characterized by having a Tricholoma-like appearance, a smooth cap, and spores that are small and somewhat angular.
The type collection was made in 2008 in the small town of Mathinna, Tasmania. The specific epithet refers to not only the type locality, but also the 19th-century indigenous Australian girl Mathinna, after whom the town is named.
Description
The fruit bodies of the fungus have convex caps with a low umbo, and attain a diameter of . The caps are a light yellow-brown colour that fades somewhat approaching the margin. The cap surface is smooth or somewhat sticky, and the cap margin develops cracks in maturity. The gills are crowded together closely: there are about 80 full-length gills interspersed with 3–5 tiers of lamellulae (short gills that do not extend completely from the cap margin to the stem). The attachment of the gills to the stem ranges from adnate (broadly fused) to emarginate (having a notched edge). Gills are a bright yellow colour throughout. The cylindrical stem measures by thick, tapering slightly at the base. Its surface is fibrillose, and its colour white to pale brown, although sometimes it has grey-violet tones mixed in. Initially solid, the stem hollows with age. The flesh of the mushroom is firm and white, and lacks any distinct taste or odor.
Spores are somewhat angular, with 6 to 8 sides, and dimensions averaging 7.3 by 6.9 μm. The basidia (spore-bearing cells) are four-spored, clamped at their bases, and measure 20–34 by 7–9 μm.
Habitat and distribution
The fungus has been collected from two sites in Tasmania. The northeastern site is a rainforest located at an altitude of about , containing predominantly trees of the species Eucalyptus delegatensis with an understorey of the shrub Leptospermum lanigerum. The southwestern site, a low-altitude wet sclerophyll forest, has the trees Eucalyptus obliqua and an understorey of Leptospermum scoparium and Melaleuca squarrosa. All of the plant associates of E. mathinnae are in the family Myrtaceae. Although it is not known whether the fungus has any specific association with these plants, some Entoloma species are suspected of being mycorrhizal, and members of the Myrtaceae are known to form ectomycorrhizas with fungi. About 100 species of Entoloma are known from Tasmania, many of which have not yet been formally described.
See also
List of Entoloma species
References
External links
Entolomataceae of Tasmania (image)
Entolomataceae
Fungi described in 2009
Fungi of Australia
Taxa named by Machiel Noordeloos
Fungus species | Entoloma mathinnae | Biology | 710 |
38,904,130 | https://en.wikipedia.org/wiki/Gummadiol | Gummadiol is a lignan hemiacetal. It can be isolated from the heartwood of Gmelina arborea.
References
Lignans
Benzodioxoles
Lactols
Diols | Gummadiol | Chemistry | 43 |
9,162,240 | https://en.wikipedia.org/wiki/Nitro%20Nobel%20Gold%20Medal | The Nitro Nobel Gold Medal is an explosives industry award given by the Nitro Nobel Company of Sweden (now part of Dyno Nobel).
The medal is gold, and features the same obverse as the Nobel Prize, but a different reverse. The medal has sometimes been confused with the Nobel Prize.
The award has only been given three times since its creation in 1967. The recipients are:
1967 — Dr. Robert W. Van Dolah, for developing a theory explaining the accidental initiation of liquid explosives
1968 — Dr. Melvin A. Cook, for the discovery of slurry explosives
1990 — Dr. Per-Anders Persson for the invention of the Nonel fuze.
See also
List of engineering awards
References
Explosives engineering awards
Swedish awards | Nitro Nobel Gold Medal | Technology,Engineering | 157 |
74,393,286 | https://en.wikipedia.org/wiki/Debora%20%C5%A0ija%C4%8Dki | Debora Šijački is a computational cosmologist whose research involves computational methods for simulating the formation and development of structures in the universe, including galaxies, galaxy clusters, and dark matter; her work includes collaborations in the Illustris project. Originally from Serbia, she was educated in Italy and Germany, and works in the UK as a professor at the University of Cambridge and deputy director of the Kavli Institute for Cosmology.
Education and career
Šijački grew up in Belgrade, then the capital of Yugoslavia and today of Serbia; she is the daughter of Serbian physicist Đorđe Šijački and Serbian psychologist Jelena Vasiljevic. She was an undergraduate at the University of Padua in Italy, and completed a Ph.D. through Ludwig Maximilian University of Munich in Germany in 2007 for research performed at the Max Planck Institute for Astrophysics. Her dissertation, Non Gravitational Heating Mechanisms in Galaxy Clusters, was jointly supervised by Volker Springel and Simon White.
She first came to the University of Cambridge as a postdoctoral researcher in the Institute of Astronomy from 2007 to 2010. After continued postdoctoral research from 2010 to 2012 in the US at the Harvard–Smithsonian Center for Astrophysics, she returned to the Cambridge Institute of Astronomy in 2013 as a university lecturer. She became Reader in Astrophysics and Cosmology in 2016, and Professor of Astrophysics and Cosmology in 2021.
Recognition
Šijački received the Otto Hahn Medal for her doctoral research. She was the 2019 recipient of the Ada Lovelace Award for High Performance Computing of the Partnership for Advanced Computing in Europe (PRACE), recognizing her "numerous high-impact results in astrophysics based on numerical simulations on state-of-the-art supercomputers".
References
Year of birth missing (living people)
Living people
Cosmologists
Astrophysicists
Women astrophysicists
Computational physicists
Scientists from Belgrade
Serbian women scientists
University of Padua alumni
Ludwig Maximilian University of Munich alumni
Professors of Astrophysics (Cambridge) | Debora Šijački | Physics | 418 |
14,971,322 | https://en.wikipedia.org/wiki/Private%20biometrics | Private biometrics is a form of encrypted biometrics, also called privacy-preserving biometric authentication methods, in which the biometric payload is a one-way, homomorphically encrypted feature vector that is 0.05% the size of the original biometric template and can be searched with full accuracy, speed and privacy. The feature vector's homomorphic encryption allows search and match to be conducted in polynomial time on an encrypted dataset and the search result is returned as an encrypted match. One or more computing devices may use an encrypted feature vector to verify an individual person (1:1 verify) or identify an individual in a datastore (1:many identify) without storing, sending or receiving plaintext biometric data within or between computing devices or any other entity. The purpose of private biometrics is to allow a person to be identified or authenticated while guaranteeing individual privacy and fundamental human rights by only operating on biometric data in the encrypted space. Private biometrics include fingerprint authentication methods, face authentication methods, and identity-matching algorithms based on bodily features. Private biometrics are constantly evolving based on the changing nature of privacy needs, identity theft, and biotechnology.
Background
Biometric security strengthens user authentication but, until recently, also implied important risks to personal privacy. Indeed, while compromised passwords can be easily replaced and are not personally identifiable information (PII), biometric data is considered highly sensitive due to its personal nature, unique association with users, and the fact that compromised biometrics (biometric templates) cannot be revoked or replaced. Private biometrics have been developed to address this challenge. Private biometrics provide the necessary biometric authentication while simultaneously minimizing users' privacy exposure through the use of one-way, fully homomorphic encryption.
The Biometric Open Protocol Standard, IEEE 2410-2018, was updated in 2018 to include private biometrics and stated that the one-way fully homomorphic encrypted feature vectors, “...bring a new level of consumer privacy assurance by keeping biometric data encrypted both at rest and in transit.” The Biometric Open Protocol Standard (BOPS III) also noted a key benefit of private biometrics was the new standard allowed for simplification of the API since the biometric payload was always one-way encrypted and therefore had no need for key management.
Fully homomorphic cryptosystems for biometrics
Historically, biometric matching techniques have been unable to operate in the encrypted space and have required the biometric to be visible (unencrypted) at specific points during search and match operations. This decryption requirement made large-scale search across encrypted biometrics (“1:many identify”) infeasible due to both significant overhead issues (e.g. complex key management and significant data storage and processing requirements) and the substantial risk that the biometrics were vulnerable to loss when processed in plaintext within the application or operating system (see FIDO Alliance, for example).
Biometric security vendors complying with data privacy laws and regulations (including Apple FaceID, Samsung, Google) therefore focused their efforts on the simpler 1:1 verify problem and were unable to overcome the large computational demands required for linear scan to solve the 1:many identify problem.
Today, private biometric cryptosystems overcome these limitations and risks through the use of one-way, fully homomorphic encryption. This form of encryption allows computations to be carried out on ciphertext, allows the match to be conducted on an encrypted dataset without decrypting the reference biometric, and returns an encrypted match result. Matching in the encrypted space offers the highest levels of accuracy, speed and privacy and eliminates the risks associated with decrypting biometrics.
Accuracy: same as plaintext (99%)
The private biometric feature vector is much smaller (0.05% the size of the original biometric template) yet maintains the same accuracy as the original plaintext reference biometric. In testing using Google's unified embedding for face recognition and clustering CNN (“Facenet”), Labeled Faces in the Wild (LFW) (source), and other open source faces, private biometric feature vectors returned the same accuracy as plaintext facial recognition. Using an 8MB facial biometric, one vendor reported an accuracy rate of 98.7%. The same vendor reported accuracy increased to 99.99% when using three 8MB facial biometrics and a vote algorithm (best two out of three) to predict.
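A "best two out of three" vote of this kind can be simulated in a few lines. This is a minimal sketch under an independence assumption between the three matchers; the 98.7% per-matcher rate is the vendor figure quoted above, while everything else is simulated:

```python
import random

def vote(predictions) -> bool:
    """Accept when at least two of the three matchers report a match."""
    return sum(predictions) >= 2

random.seed(0)
trials = 100_000
correct = sum(
    vote([random.random() < 0.987 for _ in range(3)])  # three independent matchers
    for _ in range(trials)
)
print(f"voted accuracy: {correct / trials:.4%}")  # ~99.95% under independence
```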
As the quality of the facial biometric image declined, accuracy degraded very slowly. For 256kB facial images (3% the quality of an 8MB picture), the same vendor reported 96.3% accuracy and that the neural network was able to maintain similar accuracy through boundary conditions including extreme cases of light or background.
Speed: polynomial search (same as plaintext)
The private biometric feature vector is 4kB and contains 128 floating point numbers. In contrast, plaintext biometric security instances (including Apple Face ID) currently use 7MB to 8MB reference facial biometrics (templates). By using the much smaller feature vector, the resulting search performance is less than one second per prediction using a datastore of 100 million open source faces (“polynomial search”). The private biometric test model used for these results was Google's unified embedding for face recognition and clustering CNN (“Facenet”), Labeled Faces in the Wild (LFW) (source), and other open source faces.
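Because each template is only 128 floats, a 1:many search reduces to a vectorized nearest-neighbour query over a matrix of enrolled vectors. A minimal NumPy sketch with random stand-in vectors (a production system would shard this across machines and hardware-accelerate it):

```python
import numpy as np

rng = np.random.default_rng(42)
gallery = rng.random((100_000, 128), dtype=np.float32)  # stand-in enrolled vectors
probe = gallery[12_345] + rng.normal(scale=0.01, size=128).astype(np.float32)

# Squared Euclidean distance to every enrolled vector in one vectorized pass.
dists = ((gallery - probe) ** 2).sum(axis=1)
print(int(np.argmin(dists)))  # 12345: the enrolled identity nearest the probe
```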
Privacy: full compliance with privacy regulations worldwide
As with all ideal one-way cryptographic hash functions, decrypt keys do not exist for private biometrics so it is infeasible to generate the original biometric message from the private biometric feature vector (its hash value) except by trying all possible messages. Unlike passwords, however, no two instances of a biometric are exactly the same or, stated in another way, there is no constant biometric value, so a brute force attack using all possible faces would only produce an approximate (fuzzy) match. Privacy and fundamental human rights are therefore guaranteed.
Specifically, the private biometric feature vector is produced by a one-way cryptographic hash algorithm that maps plaintext biometric data of arbitrary size to a small feature vector of a fixed size (4kB) that is mathematically impossible to invert. The one-way encryption algorithm is typically achieved using a pre-trained convolutional neural network (CNN), which takes a vector of arbitrary real-valued scores and squashes it to a 4kB vector of values between zero and one that sum to one. It is mathematically impossible to reconstruct the original plaintext image from a private biometric feature vector of 128 floating point numbers.
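The "squashing" step described above corresponds to a softmax normalization over the network's final scores. A minimal sketch (random scores stand in for the output of a trained CNN):

```python
import numpy as np

def squash(scores: np.ndarray) -> np.ndarray:
    """Map arbitrary real-valued scores to values in (0, 1) that sum to one,
    using a numerically stable softmax."""
    z = scores - scores.max()  # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = np.random.default_rng(0).normal(size=128)  # stand-in CNN outputs
vec = squash(scores)
print(vec.min() > 0, vec.max() < 1, np.isclose(vec.sum(), 1.0))  # True True True
```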
One-way encryption, history and modern use
One-way encryptions offer unlimited privacy by containing no mechanism to reverse the encryption and disclose the original data. Once a value is processed through a one-way hash, it is not possible to discover the original value (hence the name “one-way”).
History
The first one-way encryptions were likely developed by James H. Ellis, Clifford Cocks, and Malcolm Williamson at the UK intelligence agency GCHQ during the 1960s and 1970s and were published independently by Diffie and Hellman in 1976 (History of cryptography). Common modern one-way encryption algorithms, including MD5 (message digest) and SHA-512 (secure hash algorithm) are similar to the first such algorithms in that they also contain no mechanism to disclose the original data. The output of these modern one-way encryptions offer high privacy but are not homomorphic, meaning that the results of the one-way encryptions do not allow high order math operations (such as match). For example, we cannot use two SHA-512 sums to compare the closeness of two encrypted documents. This limitation makes it impossible for these one-way encryptions to be used to support classifying models in machine learning—or nearly anything else.
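The difference is easy to demonstrate with Python's standard library: changing a single letter of the input scrambles a SHA-512 digest completely (the avalanche effect), so the digests of two nearly identical inputs agree only at chance level and carry no notion of closeness:

```python
import hashlib

a = hashlib.sha512(b"the quick brown fox").hexdigest()
b = hashlib.sha512(b"the quick brown fax").hexdigest()  # one letter changed

# Hex positions where the digests agree: roughly 1/16 of them by pure chance.
same = sum(x == y for x, y in zip(a, b))
print(f"{same}/{len(a)} hex digits agree")
```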
Modern use
The first one-way, homomorphically encrypted, Euclidean-measurable feature vector for biometric processing was proposed in a paper by Streit, Streit and Suffian in 2017.
In this paper, the authors theorized and also demonstrated using a small sample size (n=256 faces) that (1) it was possible to use neural networks to build a cryptosystem for biometrics that produced one-way, fully homomorphic feature vectors composed of normalized floating-point values; (2) the same neural network would also be useful for 1:1 verification (matching); and (3) the same neural network would not be useful in 1:many identification tasks, since search would occur in linear time (i.e. non-polynomial). The paper's first point was (in theory) later shown to be true, while its second and third points were later shown to hold only for small samples, not for larger ones.
A later tutorial (blog posting) by Mandel in 2018 demonstrated a similar approach to Streit, Streit and Suffian, using a Frobenius 2 distance function to determine the closeness of two feature vectors and demonstrating successful 1:1 verification. Mandel did not offer a scheme for 1:many identification, as this method would have required a non-polynomial full linear scan of the entire database. The Streit, Streit and Suffian paper had attempted a novel “banding” approach for 1:many identification in order to mitigate the full linear scan requirement, but it is now understood that this approach produced too much overlap to help in identification.
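In this setting, a 1:1 verification reduces to thresholding the distance between the enrolled and probe feature vectors. A minimal sketch; the threshold of 0.6 is a hypothetical value that would in practice be tuned on validation data:

```python
import numpy as np

def verify(enrolled: np.ndarray, probe: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Accept when the Euclidean (Frobenius) distance between the two
    feature vectors falls below a tuned threshold."""
    return float(np.linalg.norm(enrolled - probe)) < threshold

enrolled = np.random.default_rng(1).random(128)
probe = enrolled + np.random.default_rng(2).normal(scale=0.01, size=128)
print(verify(enrolled, probe))  # True: two captures of the same identity
```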
First production implementation
The first claimed commercial implementation of private biometrics, Private.id, was published by Private Identity, LLC in May 2018, using the same method to provide 1:many identification in polynomial time across a large biometrics database (100 million faces).
On the client device, Private.id transforms each reference biometric (template) into a one-way, fully homomorphic, Euclidean-measurable feature vector using matrix multiplication from the neural network that may then be stored locally or transmitted. The original biometric is deleted immediately after the feature vector is computed or, if the solution is embedded in firmware, the biometric is transient and never stored. Once the biometric is deleted, it is no longer possible to lose or compromise the biometric.
The Private.id feature vector can be used in one of two ways. If the feature vector is stored locally, it may be used to compute 1:1 verification with high accuracy (99% or greater) using linear mathematics. If the feature vector is also stored in a Cloud, the feature vector may also be used as input for a neural network to perform 1:many identification with the same accuracy, speed and privacy as the original plaintext reference biometric (template).
Compliance
Private biometrics use the following two properties in deriving compliance with biometric data privacy laws and regulations worldwide. First, the private biometrics encryption is a one-way encryption, so loss of privacy by decryption is mathematically impossible and privacy is therefore guaranteed. Second, since no two instances of a biometric are exactly the same or, stated in another way, there is no constant biometric value, the private biometrics one-way encrypted feature vector is Euclidean measurable in order to provide a mechanism to determine a fuzzy match in which two instances of the same identity are “closer” than two instances of a different identity.
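The fuzzy-match requirement can be stated formally. Writing E for the one-way embedding, d for Euclidean distance and θ for a tuned threshold (notation introduced here for illustration, not taken from the standard), two captures x1, x2 of the same identity must land closer together than any capture y of a different identity:

```latex
d\bigl(E(x_1), E(x_2)\bigr) < \theta \le d\bigl(E(x_1), E(y)\bigr),
\qquad d(u, v) = \lVert u - v \rVert_2 .
```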
IEEE Biometric Open Protocol Standard (BOPS III)
The IEEE 2410-2018 Biometric Open Protocol Standard was updated in 2018 to include private biometrics. The specification stated that one-way fully homomorphic encrypted feature vectors, “bring a new level of consumer privacy assurance by keeping biometric data encrypted both at rest and in transit.” IEEE 2410-2018 also noted a key benefit of private biometrics is that the new standard allows for simplification of the API since the biometric payload is always one-way encrypted and there is no need for key management.
Discussion: passive encryption and data security compliance
Private biometrics enable passive encryption (encryption at rest), the most difficult requirement of the US Department of Defense Trusted Computer System Evaluation Criteria (TCSEC). No other cryptosystem or method provides operations on rested encrypted data, so passive encryption, an unfulfilled requirement of the TCSEC since 1983, is no longer an issue.
Private biometrics technology is an enabling technology for applications and operating systems, but does not itself directly address the auditing and constant protection concepts introduced in the TCSEC.
US DoD Standard Trusted Computer System Evaluation Criteria (TCSEC)
Private biometrics, as implemented in a system that conforms to IEEE 2410-2018 BOPS III, satisfy the privacy requirements of the US Department of Defense Standard Trusted Computer System Evaluation Criteria (TCSEC). The TCSEC sets the basic requirements for assessing the effectiveness of computer security controls built into a computer system (“Orange Book, section B1”). Today, applications and operating systems contain features that comply with TCSEC levels C2 and B1, except that they lack homomorphic encryption and so do not process data encrypted at rest. Waivers were typically, if not always, obtained because there was no known workaround. Adding private biometrics to these operating systems and applications resolves this issue.
For example, consider a typical MySQL database. To query MySQL in a reasonable period of time, data must map to indexes, which map to queries, which map to end-user data, and this requires working with plaintext. The only way to encrypt such a data store is to encrypt it in its entirety, and to decrypt it in its entirety prior to use; since data use is constant, the data ends up never encrypted. Thus, in the past, waivers were applied for because there was no known workaround. Now, using private biometrics, matching and other operations can be performed on data that is always encrypted.
Multiple Independent Levels of Security/Safety (MILS) architecture
Private biometrics, as implemented in a system that conforms to IEEE 2410-2018 BOPS III, comply with the standards of the Multiple Independent Levels of Security/Safety (MILS) architecture. MILS builds on the Bell and La Padula theories on secure systems that represent the foundational theories of the US DoD Standard Trusted Computer System Evaluation Criteria (TCSEC), or the DoD “Orange Book.” (See paragraphs above.)
Private biometrics' high-assurance security architecture is based on the concepts of separation and controlled information flow, implemented using only mechanisms that support trustworthy components; the security solution is thus non-bypassable, evaluable, always invoked and tamperproof. This is achieved using the one-way encrypted feature vector, which allows only encrypted data (never stored or processed as plaintext) to pass between security domains and through trustworthy security monitors.
Specifically, private biometrics systems are:
Non-bypassable, as plaintext biometrics cannot use another communication path, including lower level mechanisms, to bypass the security monitor since the original biometric is transient at inception (e.g. the biometric template acquired by the client device exists only for a few seconds at inception and then is deleted or never stored).
Evaluable in that the feature vectors are modular, well designed, well specified, well implemented, small and of low complexity.
Always-invoked, in that each and every message is always one-way encrypted independent of security monitors.
Tamperproof in that the feature vector's one-way encryption prevents unauthorized changes and does not use systems that control rights to the security monitor code, configuration and data.
History
Implicit authentication and private equality testing
Unsecure biometric data are sensitive due to their nature and how they can be used. Implicit authentication is a common practice when using passwords, as a user may prove knowledge of a password without actually revealing it. However, two biometric measurements of the same person may differ, and this fuzziness of biometric measurements renders implicit authentication protocols useless in the biometrics domain.
Similarly, private equality testing, where two devices or entities want to check whether the values that they hold are the same without presenting them to each other or to any other device or entity, is well practiced and detailed solutions have been published. However, since two biometrics of the same person may not be equal, these protocols are also ineffective in the biometrics domain. For instance, if the two values differ in τ bits, then one of the parties may need to present 2^τ candidate values for checking.
Homomorphic encryption
Prior to the introduction of private biometrics, biometric techniques required the use of plaintext search for matching so each biometric was required to be visible (unencrypted) at some point in the search process. It was recognized that it would be beneficial to instead conduct matching on an encrypted dataset.
Encrypted matching is typically accomplished using one-way encryption algorithms, meaning that given the encrypted data, there is no mechanism to get to the original data. Common one-way encryption algorithms are MD5 and SHA-512. However, these algorithms are not homomorphic, meaning that there is no way to compare the closeness of two samples of encrypted data, and thus no means to compare. The inability to compare renders any form of classifying model in machine learning untenable.
Homomorphic encryption is a form of encryption that allows computations to be carried out on ciphertext, thus generating an encrypted match result. Matching in the encrypted space using a one-way encryption offers the highest level of privacy. With a payload of feature vectors one-way encrypted, there is no need to decrypt and no need for key management.
A promising method of homomorphic encryption on biometric data is the use of machine learning models to generate feature vectors. For black-box models, such as neural networks, these vectors cannot by themselves be used to recreate the initial input data and are therefore a form of one-way encryption. However, the vectors are Euclidean measurable, so similarity between vectors can be calculated. This process allows for biometric data to be homomorphically encrypted.
For instance, consider facial recognition performed with the Euclidean distance: when two face images are matched using a neural network, each face is first converted to a float vector, which in the case of Google's FaceNet is of size 128. The representation of this float vector is arbitrary and cannot be reverse-engineered back to the original face. The output of the network's matrix multiplications becomes the vector of the face: it is Euclidean measurable but unrecognizable, and cannot map back to any image.
Prior approaches used to solve private biometrics
Prior to the availability of private biometrics, research focused on ensuring the prover's biometric would be protected against misuse by a dishonest verifier through the use of partially homomorphic data or decrypted (plaintext) data coupled with a private verification function intended to shield private data from the verifier. This method introduced computational and communication overhead that was inexpensive for 1:1 verification but proved infeasible for large 1:many identification requirements.
From 1998 to 2018 cryptographic researchers pursued four independent approaches to solve the problem: cancelable biometrics, BioHashing, Biometric Cryptosystems, and two-way partially homomorphic encryption.
Feature transformation approach
The feature transformation approach “transformed” biometric feature data to random data through the use of a client-specific key or password. Examples of this approach included biohashing and cancelable biometrics. The approach offered reasonable performance but was found to be insecure if the client-specific key was compromised.
Cancelable Biometrics
The first use of indirect biometric templates (later called cancelable biometrics) was proposed in 1998 by Davida, Frankel and Matt. Three years later, Ruud Bolle, Nalini Ratha and Jonathan Connell, working in IBM's Exploratory Computer Vision Group, proposed the first concrete idea of cancelable biometrics.
Cancelable biometrics were defined in these communications as biometric templates that were unique for every application and that, if lost, could be easily cancelled and replaced. The solution was (at the time) thought to provide higher privacy levels by allowing multiple templates to be associated with the same biometric data by storing only the transformed (hashed) version of the biometric template. The solution was also promoted for its ability to prevent linkage of the user's biometric data across various databases since only a transformed version of the biometric template (and not the unencrypted (plaintext) biometric template) was stored for later use.
Cancelable biometrics were deemed useful because of their diversity, reusability and one-way encryption (which, at the time, was referred to as a one-way transformation). Specifically, no cancelable template could be used in two different applications (diversity); it was straightforward to revoke and reissue a cancelable template in the event of compromise (reusability); and the one-way hash of the template prevented recovery of sensitive biometric data. Finally, it was postulated that the transformation would not deteriorate accuracy.
BioHashing
Research into cancelable biometrics moved into BioHashing by 2004. The BioHashing feature transformation technique was first published by Jin, Ling and Goh and combined biometric features and a tokenized (pseudo-) random number (TRN). Specifically, BioHash combined the biometric template with a user-specific TRN to produce a set of non-invertible binary bit strings that were thought to be irreproducible if both the biometric and the TRN were not presented simultaneously.
Indeed, it was first claimed that the BioHashing technique had achieved perfect accuracy (equal error rates) for faces, fingerprints and palm prints, and the method gained further traction when its extremely low error rates were combined with the claim that its biometric data was secure against loss because factoring the inner products of biometrics feature and TRN was an intractable problem.
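The core of BioHashing can be sketched as a random projection followed by thresholding: the biometric feature vector is projected onto random vectors derived from the user's token, and each inner product is binarized. A minimal illustration; the dimensions and zero threshold are arbitrary choices for the sketch (the published method also orthonormalizes the projection vectors):

```python
import numpy as np

def biohash(features: np.ndarray, token_seed: int, n_bits: int = 64) -> np.ndarray:
    """Project the features onto token-derived random vectors and binarize
    each inner product, yielding a hard-to-invert bit string."""
    rng = np.random.default_rng(token_seed)           # TRN from the user's token
    basis = rng.normal(size=(n_bits, features.size))  # random projection vectors
    return (basis @ features > 0).astype(np.uint8)    # threshold inner products

features = np.random.default_rng(7).normal(size=128)  # stand-in biometric features
code = biohash(features, token_seed=12345)
print(code[:16])  # valid only for this particular biometric+token pairing
```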
By 2005, however, researchers Cheung and Kong (Hong Kong Polytechnic and University of Waterloo) asserted in two journal articles that BioHashing performance was actually based on the sole use of the TRN, and conjectured that the introduction of any form of biometric becomes meaningless since the system could be used only with the tokens. These researchers also reported that the non-invertibility of the random hash would deteriorate the biometric recognition accuracy when the genuine token was stolen and used by an impostor (“the stolen-token scenario”).
Biometric cryptosystem approach
Biometric cryptosystems were originally developed to either secure cryptographic keys using biometric features (“key-biometrics binding”) or to directly generate cryptographic keys from biometric features. Biometric cryptosystems used cryptography to provide the system with protection for cryptographic keys, and biometrics to provide the system with dynamically generated keys to secure the template and the biometric system.
The acceptance and deployment of biometric cryptosystem solutions was constrained, however, by the fuzziness associated with biometric data. Hence, error correction codes (ECCs), including fuzzy vault and fuzzy commitment schemes, were adopted to alleviate the fuzziness of the biometric data. This overall approach proved impractical, however, due to the need for accurate authentication, and it suffered from security issues due to its need for strong restriction to support authentication accuracy.
Future research on biometric cryptosystems is likely to focus on a number of remaining implementation challenges and security issues involving both the fuzzy representation of biometric identifiers and the imperfect nature of biometric feature extraction and matching algorithms. Unfortunately, since biometric cryptosystems can currently be defeated using relatively simple strategies that exploit these two weaknesses, it is unlikely that such systems will deliver acceptable end-to-end performance until suitable advances are achieved.
Two-way partially homomorphic encryption approach
The two-way partially homomorphic encryption method for private biometrics was similar to today's private biometrics in that it offered protection of biometric feature data through the use of homomorphic encryption and measured the similarity of encrypted feature data by metrics such as the Hamming and the Euclidean distances. However, the method was vulnerable to data loss due to the existence of secret keys that had to be managed by trusted parties. Widespread adoption of the approach also suffered from the encryption schemes' complex key management and large computational and data storage requirements.
See also
Homomorphic encryption
Identity management
External links
BOP - Biometrics Open Protocol
Fidoalliance.org
LFWcrop Face Dataset
Cancelable biometrics
EER - equal error rate
Technovelgy.com, Biometric Match
References
Biometrics
Information privacy | Private biometrics | Engineering | 5,370 |
333,633 | https://en.wikipedia.org/wiki/B612%20Foundation | The B612 Foundation is a private nonprofit foundation headquartered in Mill Valley, California, United States, dedicated to planetary science and planetary defense against asteroids and other near-Earth object (NEO) impacts. It is led mainly by scientists, former astronauts and engineers from the Institute for Advanced Study, Southwest Research Institute, Stanford University, NASA and the space industry.
As a non-governmental organization it has conducted two lines of related research to help detect NEOs that could one day strike the Earth, and find the technological means to divert their path to avoid such collisions. It also assisted the Association of Space Explorers in helping the United Nations establish the International Asteroid Warning Network, as well as a Space Missions Planning Advisory Group to provide oversight on proposed asteroid deflection missions.
In 2012, the foundation announced it would design and build a privately financed asteroid-finding space observatory, the Sentinel Space Telescope, to be launched in 2017–2018. Once stationed in a heliocentric orbit around the Sun similar to that of Venus, Sentinel's supercooled infrared detector would have helped identify dangerous asteroids and other NEOs that pose a risk of collision with Earth. In the absence of substantive planetary defense provided by governments worldwide, B612 attempted a fundraising campaign to cover the Sentinel Mission, estimated at $450 million for 10 years of operation. Fundraising was unsuccessful, and the program was cancelled in 2017, with the Foundation pursuing a constellation of smaller satellites instead.
The B612 Foundation is named for the asteroid home of the eponymous hero of Antoine de Saint-Exupéry's 1943 book The Little Prince.
Background
When an asteroid enters the planet's atmosphere it becomes known as a 'meteor'; those that survive and fall to the Earth's surface are then called 'meteorites'. While basketball-sized meteors occur almost daily, and compact car-sized ones about yearly, they usually burn up or explode high above the Earth as bolides, (fireballs), often with little notice. During an average 24-hour period, the Earth sweeps through some 100 million particles of interplanetary dust and pieces of cosmic debris, only a very minor amount of which arrives on the ground as meteorites.
The larger asteroids or other near-Earth objects (NEOs) are, the less frequently they impact the planet's atmosphere: large meteors seen in the skies are extremely rare, while medium-sized ones are less so, and much smaller ones are more commonplace. Although stony asteroids often explode high in the atmosphere, some objects, especially iron-nickel meteors and other types descending at a steep angle, can explode close to ground level or even directly impact onto land or sea. In the U.S. State of Arizona, the Meteor Crater (officially named Barringer Crater) formed in a fraction of a second as nearly 160 million tonnes of limestone and bedrock were uplifted, creating its crater rim on formerly flat terrain. The asteroid that produced the Barringer Crater was only about in size; however, it impacted the ground at a velocity of and struck with an impact energy of —about 625 times greater than the bomb that destroyed the city of Hiroshima during World War II. Tsunamis can also occur after a medium-sized or larger asteroid impacts an ocean surface or other large body of water.
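Such impact energies follow directly from kinetic energy, E = ½mv². A quick Python check with illustrative values (the mass and velocity below are assumptions chosen for the example, not the measured Barringer parameters):

```python
# Kinetic energy of an impactor: E = 0.5 * m * v**2.
# Illustrative inputs only -- not the measured Barringer impactor values.
TNT_JOULES_PER_KILOTON = 4.184e12

mass_kg = 5.0e8       # assumed ~500,000-tonne iron-nickel impactor
velocity_ms = 12_800  # assumed ~12.8 km/s impact velocity

energy_kt = 0.5 * mass_kg * velocity_ms**2 / TNT_JOULES_PER_KILOTON
print(f"{energy_kt:,.0f} kilotons of TNT equivalent")  # ~9,800 kt, i.e. roughly
# 600x the ~16 kt Hiroshima bomb, in line with the comparison quoted above
```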
The probability of a mid-sized asteroid (similar to the one that destroyed the Tunguska River area of Russia in 1908) hitting Earth during the 21st century has been estimated at 30%. Since the Earth is currently more populated than in previous eras, there is a greater risk of large casualties arising from a mid-sized asteroid impact. However, as of the early 2010s, only about a half of one per cent of Tunguska-type NEOs had been located by astronomers using ground-based telescope surveys.
The need for an asteroid detection program has been compared to the need for monsoon, typhoon, and hurricane preparedness. As the B612 Foundation and other organizations have publicly noted, of the different types of natural catastrophes that can occur on our planet, asteroid strikes are the only one that the world now has the technical capability to prevent.
B612 is one of several organizations to propose detailed dynamic surveys of NEOs and preventative measures such as asteroid deflection. Other groups include Chinese researchers, NASA in the United States, NEOShield in Europe, as well as the international Spaceguard Foundation. In December 2009 Roscosmos Russian Federal Space Agency director Anatoly Perminov proposed a deflection mission to the asteroid 99942 Apophis, which at the time had been thought to pose a risk of collision with Earth.
Asteroid deflection workshop
The Foundation evolved from an informal one-day workshop on asteroid deflection strategies during October 2001, organized by Dutch astrophysicist Piet Hut along with physicist and then-U.S. astronaut Ed Lu, presented at NASA's Johnson Space Center in Houston, Texas. Twenty researchers participated, principally from various NASA facilities plus the non-profit Southwest Research Institute, but also from the University of California, the University of Michigan, and the Institute for Advanced Study. All were interested in contributing to the proposed creation of an asteroid deflection capability. The seminar participants included Rusty Schweickart, a former Apollo astronaut, and Clark Chapman, a planetary scientist.
Among the proposed experimental research missions discussed were the alteration of an asteroid's spin rate, as well as changing the orbit of one part of a binary asteroid pair. Following the seminar's round-table discussions, the workshop generally agreed that the vehicle of choice (needed to deflect an asteroid) would be powered by a low-thrust ion plasma engine. Landing a nuclear-powered, plasma-engined pusher vehicle on the asteroid's surface was seen as promising, an early proposal that would later encounter a number of technical obstacles. Nuclear explosives were seen as "too risky and unpredictable" for several reasons, warranting the view that gently altering an asteroid's trajectory was the safest approach—but also a method requiring years of advance warning to successfully accomplish.
B612 Project and Foundation
The October 2001 asteroid deflection workshop participants created the "B612 Project" to further their research. Schweickart, along with Drs. Hut, Lu and Chapman, then formed the B612 Foundation on October 7, 2002, with its first goal being to "significantly alter the orbit of an asteroid in a controlled manner". Schweickart became an early public face of the foundation and served as chairman on its board of directors. In 2010, as part of an ad hoc task force on planetary defense, he advocated increasing NASA's annual budget by $250–$300 million over a 10-year period (with an operational maintenance budget of up to $75 million per year after that) in order to more fully catalog the near-Earth objects (NEOs) that can pose a threat to Earth, and to fully develop impact avoidance capabilities. That recommended level of budgetary support would permit up to 10–20 years of advance warning in order to create a sufficient window for the required trajectory deflection.
Their recommendations were made to a NASA Advisory Council, but were ultimately unsuccessful in obtaining Congressional funding because NASA, lacking a legislated mandate for planetary protection, was not permitted to request it. Feeling it would be imprudent to continue waiting for substantive government or United Nations action, B612 began a fundraising campaign in 2012 to cover the approximate US$450 million cost for the development, launch and operations of an asteroid-finding space telescope, to be called Sentinel, with a goal of raising $30 to $40 million per year. The space observatory's objective would be to accurately survey NEOs from an orbit similar to that of Venus, creating a large dynamic catalog of such objects that would help identify dangerous Earth-impactors, deemed a necessary precursor to mounting any asteroid deflection mission.
In March and April 2013, several weeks after the Chelyabinsk meteor explosion injured some 1,500 people, the U.S. Congress held hearings for "...the Risks, Impacts and Solutions for Space Threats". They received testimony from B612 head Ed Lu, as well as Dr. Donald K. Yeomans, head of NASA's NEO Program Office, Dr. Michael A'Hearn of the University of Maryland and co-chair of a 2009 U.S. National Research Council study on asteroid threats, plus others. The difficulty of quickly intercepting an imminent asteroid threat to Earth was made apparent during the testimony.
As a result of a set of hearings by the NASA Advisory Committee following the Chelyabinsk explosion in 2013, in conjunction with a White House request to double its budget, NASA's Near Earth Object Program funding was increased to $40.5 million/year in its FY2014 (Fiscal Year 2014) budget. It had previously been increased to $20.5 million/year in FY2012 (about 0.1% of NASA's annual budget at the time), from an average of about $4 million/year between 2002 and 2010.
Asteroid hazard reassessment
On Earth Day, April 22, 2014, the B612 Foundation formally presented a revised assessment on the frequency of "city-killer" type impact events, based on research led by Canadian planetary scientist Peter Brown of the University of Western Ontario's (UWO) Centre for Planetary Science and Exploration. Dr. Brown's analysis, "A 500-Kiloton Airburst Over Chelyabinsk and An Enhanced Hazard from Small Impactors", published in the journals Science and Nature, was used to produce a short computer-animated video that was presented to the media at the Seattle Museum of Flight.
The nearly one-and-a-half-minute video displayed a rotating globe with the impact points of about 25 asteroids, each releasing more than one kiloton, and up to 600 kilotons, of blast force, that struck the Earth from 2000 to 2013 (for comparison, the nuclear bomb that destroyed Hiroshima was equivalent to about 16 kilotons of TNT blast force). Of those impacts between 2000 and 2013, eight were as large as, or larger than, the Hiroshima bomb. Only one of the asteroids, 2008 TC3, was detected in advance, some 19 hours before exploding in the atmosphere. As was the case with the 2013 Chelyabinsk meteor, no warnings were issued for any of the other impacts.
At the presentation, alongside former NASA astronauts Dr. Tom Jones and Apollo 8 astronaut Bill Anders, Foundation head Ed Lu explained that the frequency of dangerous asteroid impacts hitting Earth was from three to ten times greater than previously believed a dozen or so years ago (earlier estimates had pegged the odds as one per 300,000 years). The latest reassessment is based on worldwide infrasound signatures recorded under the auspices of the Comprehensive Nuclear-Test-Ban Treaty Organization, which monitors the planet for nuclear explosions. Dr. Brown's UWO study used infrasound signals generated by asteroids that released more than a kiloton of TNT explosive force. The study suggested that "city-killer" type impact events similar to the Tunguska event of 1908 actually occur on average about once a century instead of every thousand years, as was once previously believed. The 1908 event occurred in the remote, sparsely populated Tunguska area of Siberia, Russia, and is attributed to the likely airburst explosion of an asteroid or comet that destroyed some 80 million trees over 2,150 square kilometres (830 sq mi) of forests. The higher frequency of these types of events is interpreted as meaning that "blind luck" has mainly prevented a catastrophic impact over an inhabited area that could kill millions, a point made near the video's end.
99942 Apophis
During the first decade of the 2000s, there were serious concerns the 325 metres (1,066 ft) wide asteroid 99942 Apophis posed a risk of impacting Earth in 2036. Preliminary, incomplete data by astronomers using ground-based sky surveys resulted in the calculation of a Level 4 risk on the Torino Scale impact hazard chart. In July 2005, B612 formally asked NASA to investigate the possibility that the asteroid's post-2029 orbit could be in orbital resonance with Earth, which would increase the likelihood of a future impact. The Foundation also asked NASA to investigate whether a transponder should be placed on the asteroid to enable more accurate tracking of how its orbit would be changed by the Yarkovsky effect.
By 2008, B612 had provided estimates of a 30-kilometer-wide corridor, called a "path of risk", that would extend across the Earth's surface if an impact were to occur, as part of its effort to develop viable deflection strategies. The calculated risk-path extended from Kazakhstan across southern Russia through Siberia, across the Pacific, then right between Nicaragua and Costa Rica, crossing northern Colombia and Venezuela, and ending in the Atlantic just before reaching Africa. At that time, a computer simulation estimated Apophis's hypothetical impact in countries such as Colombia and Venezuela could have resulted in more than 10 million casualties. Alternately, an impact in the Atlantic or Pacific oceans could produce a deadly tsunami over 240 metres (about 800 ft) in height, capable of destroying many coastal areas and cities.
A series of later, more accurate observations of 99942 Apophis, combined with the recovery of previously unseen data, revised the odds of a collision in 2036 as being virtually nil, and effectively ruled it out.
International involvement
B612 Foundation members assisted the Association of Space Explorers (ASE) in helping obtain United Nations (UN) oversight of NEO tracking and deflection missions through the UN's Committee On the Peaceful Uses of Outer Space (UN COPUOS) along with COPUOS's Action Team 14 (AT-14) expert group. Several members of B612, also members of the ASE, worked with COPUOS since 2001 to establish international involvement for both impact disaster responses, and on deflection missions to prevent impact events. According to Foundation Chair Emeritus Rusty Schweickart in 2013, "No government in the world today has explicitly assigned the responsibility for planetary protection to any of its agencies".
In October 2013, COPUOS's Scientific and Technical Subcommittee approved several measures, later approved by the UN General Assembly in December, to deal with terrestrial asteroid impacts, including the creation of an International Asteroid Warning Network (IAWN) plus two advisory groups: the Space Missions Planning Advisory Group (SMPAG), and the Impact Disaster Planning Advisory Group (IDPAG). The IAWN warning network will act as a clearinghouse for shared information on dangerous asteroids and for any future terrestrial impact events that are identified. The Space Missions Planning Advisory Group will coordinate joint studies of the technologies for deflection missions, as well as provide oversight of actual missions. This is due to deflection missions typically involving a progressive movement of an asteroid's predicted impact point across the surface of the Earth (and also across the territories of uninvolved countries) until the NEO is deflected either ahead of or behind the planet at the point where their orbits intersect. An initial framework of international cooperation at the UN is needed, said Schweickart, to guide the policymakers of its member nations on several important NEO-related aspects. However, as asserted by the Foundation, the new UN measures only constitute a starting point. To be effective they will need to be enhanced by further policies and resources implemented at both the national and supranational levels.
At the time of the UN's policy adoption in New York City, Schweickart and four other ASE members, including B612 head Ed Lu and strategic advisers Dumitru Prunariu and Tom Jones participated in a public forum moderated by Neil deGrasse Tyson not far from the United Nations Headquarters. The panel urged the global community to adopt further important steps for planetary defense against NEO impacts. Their recommendations included:
UN delegates briefing their home countries' policymakers on the UN's newest roles;
having each country's government create detailed asteroid disaster response plans, assigning fiscal resources to deal with asteroid impacts, and delegating a lead agency to handle its disaster response in order to create clear lines of communication from the IAWN to the affected countries;
having their governments support the ASE's and B612's efforts to identify the estimated one million "city-killer" NEOs capable of impacting Earth, by deploying a space-based asteroid telescope, and
committing member states to launching an international test deflection mission within 10 years.
Sentinel Mission
The Sentinel Mission program was the cornerstone of the B612 Foundation's earlier efforts, with its preliminary design and system architecture reviews planned for 2014, and its critical design review to be conducted in 2015. The infrared telescope would be launched atop a SpaceX Falcon 9 rocket and placed into a Venus-trailing heliocentric orbit around the Sun. With the observatory orbiting between the Sun and Earth, the Sun's rays would always be behind the telescope's lens and thus never inhibit the space observatory's ability to detect asteroids or other near-Earth objects (NEOs). From the vantage of its inner-solar-system orbit around the Sun, Sentinel would be able to "pick up objects that are currently difficult, if not impossible, to see in advance from Earth", such as occurred with the Chelyabinsk meteor of 2013 that went undetected until its explosion over Chelyabinsk Oblast, Russia. The Sentinel Mission was planned to provide an accurate dynamic catalog of asteroids and other NEOs, made available to scientists worldwide through the International Astronomical Union's Minor Planet Center; the data collected would be used to calculate the risk of impact events with our planet, allowing for asteroid deflection through the use of gravity tractors to divert trajectories away from Earth.
In order to communicate with the spacecraft while it is orbiting the Sun (at about the same distance as Venus), which can be at times as far as 270 million kilometres (170 million miles) from Earth, the B612 Foundation entered into a Space Act Agreement with NASA for the use of their deep space telecommunication network.
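That distance dominates the link budget: even at light speed, a one-way signal from Sentinel's most distant geometry takes about a quarter of an hour. A quick check:

```python
# One-way light-travel time at Sentinel's maximum distance from Earth.
distance_km = 270e6  # maximum Earth distance quoted above
SPEED_OF_LIGHT_KMS = 299_792.458

delay_min = distance_km / SPEED_OF_LIGHT_KMS / 60
print(f"one-way signal delay: {delay_min:.1f} minutes")  # ~15.0 minutes
```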
Design and operation
Sentinel was designed to perform continuous observation and analysis during its planned -year operational life, although B612 anticipated it might continue to function for up to 10 years. Using its telescope mirror with sensors built by Ball Aerospace (makers of the Hubble Space Telescope's instruments), its mission would be to catalog 90% of asteroids with diameters larger than . There were also plans to catalog smaller Solar System objects.
The space observatory would measure by with a mass of and would orbit the Sun at a distance of approximately the same orbital distance as Venus, employing infrared astronomy to identify asteroids against the cold of outer space. Sentinel would scan in the 7- to 15-micron wavelength band across a 5.5 by 2-degree field of view. Its sensor array would consist of 16 detectors with coverage scanning "a 200-degree, full-angle field of regard". B612, working in partnership with Ball Aerospace, was constructing Sentinel's 51 cm aluminum mirror, designed for a large field of view with its infrared sensors cooled to using Ball's two-stage, closed-Stirling-cycle cryocooler.
B612 aimed to produce its space telescope at a significantly lower cost than traditional space science programs by making use of space hardware systems previously developed for earlier programs, rather than designing a brand new observatory. Schweickart stated that about "80% of what we're dealing with in Sentinel is Kepler, 15% Spitzer, 5% new, higher-performance infrared sensors", thus concentrating its R&D funds on the critical area of cryogenically-cooled image sensor technology, producing what it says will be the most sensitive asteroid-finding telescope ever built.
Data gathered by Sentinel would be provided through existing scientific data-sharing networks that include NASA and academic institutions such as the Minor Planet Center in Cambridge, Massachusetts. Given the satellite's telescopic accuracy, Sentinel's data may have proven valuable for other possible future missions, such as asteroid mining.
Mission funding
B612 was attempting to raise approximately $450M to fund the development, launch and operational costs of the telescope, about the cost of a complex freeway interchange, or approximately $100M less than a single Air Force Next-Generation Bomber. The $450 million cost estimate is composed of $250 million to create Sentinel, plus another $200 million for 10 years of operations. In explaining the Foundation's bypassing of possible governmental grants for such a mission, Dr. Lu stated their public fundraising appeal is being driven by "[t]he tragedy of the commons: When it's everybody's problem, it's nobody's problem", referring to a lack of ownership, priority and funding that governments have assigned to asteroid threats, also stating on a different occasion "We're the only ones taking it seriously." According to another B612 board member, Rusty Schweickart, "The good news is, you can prevent it—not just get ready for it! The bad news is, it's hard to get anybody to pay attention to it when there are potholes in the road." After providing earlier Congressional testimony on the issue Schweickart was dismayed to hear from congressional staff members that, while U.S. lawmakers involved in the hearing understood the seriousness of the threat, they would likely not legislate funding for planetary defense as "making the deflection of asteroids a priority might backfire in [their] reelection campaigns".
The Foundation intended to launch Sentinel in 2017–2018, with initiation of data transfer for on-Earth processing anticipated no later than 6 months afterwards.
In the aftermath of the February 2013 Chelyabinsk meteor explosion—where an approximately 20-metre asteroid entered the atmosphere undetected at about Mach 60, becoming a brilliant superbolide meteor before exploding over Chelyabinsk, Russia—the B612 Foundation experienced a "surge of interest" in its project to detect asteroids, with a corresponding increase in funding donations. After providing Congressional testimony Dr. Lu noted that the many online videos recorded of the asteroid's explosion over Chelyabinsk made a significant impact on millions of viewers worldwide, saying "There's nothing like a hundred YouTube videos to do that."
Staff
Leadership
In 2014 eight key staff positions were designated, covering the offices of the chief executive officer (CEO), chief operating officer (COO), Sentinel Program Architecture (SPA), Sentinel Mission Direction (SMD), Sentinel Program Management (SPM), Sentinel Mission Science (SMS) and the Sentinel Standing Review Team (SSRT), plus Public Relations.
Ed Lu, Co-founder, B612 Foundation. Executive Director, Asteroid Institute
Edward Tsang "Ed" Lu (born July 1, 1963) is a co-founder and the chief executive officer of the B612 Foundation, as well as a U.S. physicist and former NASA astronaut. He is a veteran of two Space Shuttle missions and an extended stay aboard the International Space Station which included a six-hour spacewalk outside the station performing construction work. During his three missions he logged a total of 206 days in space.
His education includes an electrical engineering degree from Cornell University, and a Ph.D. in applied physics from Stanford University. Lu became a specialist in solar physics and astrophysics as a visiting scientist at the High Altitude Observatory based in Boulder, Colorado, from 1989 until 1992. In his final year, he held a joint appointment with the Joint Institute for Laboratory Astrophysics at the University of Colorado. Lu performed postdoctoral fellow work at the Institute for Astronomy in Honolulu, Hawaii from 1992 until 1995 before being selected for NASA's Astronaut Corps in 1994.
Lu has made a number of new theoretical advances, which provided for the first time a basic understanding of the underlying physics of solar flares. Besides his work on solar flares he has published journal articles and scientific papers on a wide range of topics including cosmology, solar oscillations, statistical mechanics, plasma physics and near-Earth asteroids, and he is also a co-inventor of the gravitational tractor concept of asteroid deflection.
In 2007 Lu retired from NASA to become the Program Manager on Google's Advanced Projects Team; he has also worked with Liquid Robotics as its Chief of Innovative Applications, and at Hover Inc. as its chief technology officer. While still at NASA, in 2002 Lu co-founded the B612 Foundation, later serving as its chair; as of 2014 he is its chief executive officer.
Lu holds a commercial pilot license with multi-engine instrument ratings, having accumulated some 1,500 hours of flight time. Among his honors are NASA's highest awards, its Distinguished Service and Exceptional Service medals, as well as the Russian Gagarin, Komarov and Beregovoy Medals.
Tom Gavin, Chairman, Sentinel Standing Review Team
Thomas R. Gavin is the chairman of the B612 Foundation's Sentinel Standing Review Team (SSRT), and a former executive-level manager at NASA. He served with NASA for 30 years, including his position as Associate Director for Flight Programs and Mission Assurance at their Jet Propulsion Laboratory (JPL) organization, and "has been at the forefront in leading many of the most successful U.S. space missions, including Galileo's mission to Jupiter, Cassini–Huygens mission to Saturn, development of Genesis, Stardust, Mars 2001 Odyssey, Mars Exploration Rovers, SPITZER and Galaxy Evolution Explorer programs."
In May 2001 he was appointed associate director for flight projects and mission success for NASA's Jet Propulsion Laboratory. This was a new position created to provide the JPL Director's Office with oversight of flight projects. He later served as interim director for Solar System exploration. Previously, he was director of JPL's Space Science Flight Projects Directorate, which oversaw the Genesis, Mars 2001 Odyssey, Mars rovers, Spitzer Space Telescope and GALEX projects. He also served as deputy director of JPL's Space and Earth Science Programs Directorate beginning in December 1997. In June 1990 he was appointed spacecraft system manager for the Cassini–Huygens mission to Saturn, and retained that position until the project's successful launch in 1997. From 1968 to 1990 he was a member of the Galileo and Voyager project offices responsible for mission assurance. He received his bachelor's degree in chemistry from Villanova University in Pennsylvania in 1961.
Gavin has been honored on a number of occasions for exceptional work, receiving NASA's Distinguished and Exceptional Service Medals in 1981 for his work on the Voyager space probes program, NASA's Medal for Outstanding Leadership in 1991 for Galileo, and again in 1999 for the Cassini–Huygens mission. In 1997 Aviation Week and Space Technology presented its Laurels Award to him for outstanding achievement in the field of space. He also earned the American Astronautical Society's 2005 Randolph Lovelace II Award for his management of all Jet Propulsion Laboratory and NASA robotic science spacecraft missions.
Scott Hubbard, Sentinel Program Architect
Dr. G. Scott Hubbard is the B612 Foundation's Sentinel Program Architect, as well as a physicist, academic and a former executive-level manager at NASA, the U.S. space agency. He is a professor of Aeronautics and Astronautics at Stanford University and has been engaged in space-related research as well as program, project and executive management for more than 35 years including 20 years with NASA, culminating his career there as director of NASA's Ames Research Center. At Ames he was responsible for overseeing the work of some 2,600 scientists, engineers and other staff. Currently on the SpaceX Safety Advisory Panel, he previously served as NASA's sole representative on the Space Shuttle Columbia Accident Investigation Board, and also as their first Mars Exploration Program director in 2000, successfully restructuring the entire Mars program in the wake of earlier serious mission failures.
Hubbard founded NASA's Astrobiology Institute in 1998; conceived the Mars Pathfinder mission with its airbag landing system and was the manager for their highly successful Lunar Prospector Mission. Prior to joining NASA, Hubbard led a small start-up high technology company in the San Francisco Bay Area and was a staff scientist at the Lawrence Berkeley National Laboratory. Hubbard has received many honors including NASA's highest award, their Distinguished Service Medal, and the American Institute of Aeronautics and Astronautics's Von Karman Medal.
Hubbard was elected to the International Academy of Astronautics, is a Fellow of the American Institute of Aeronautics and Astronautics, has authored more than 50 scientific papers on research and technology and also holds the Carl Sagan Chair at the SETI Institute. His education includes an undergraduate degree in physics and astronomy at Vanderbilt University and a graduate degree in solid state and semiconductor physics at the University of California at Berkeley.
Marc Buie, Sentinel Mission Scientist
Dr. Marc W. Buie (b. 1958) is the foundation's Sentinel Mission Scientist, and as well a U.S. astronomer at Lowell Observatory in Flagstaff, Arizona. Buie received his B.Sc. in physics from Louisiana State University in 1980 and earned his Ph.D. in Planetary Science from the University of Arizona in 1984. He was a post-doctoral fellow at the University of Hawaii from 1985 to 1988. From 1988 to 1991, he worked at the Space Telescope Science Institute where he assisted in the planning of the first planetary observations made by the Hubble Space Telescope.
Since 1983, Pluto and its moons have been a central theme of the research done by Buie, who has published over 85 scientific papers and journal articles. He is also one of the co-discoverers of Pluto's new moons, Nix and Hydra (Pluto II and Pluto III) discovered in 2005.
Buie has worked with the Deep Ecliptic Survey team, which has been responsible for the discovery of over a thousand such distant objects. He also studies the Kuiper Belt and transitional objects such as 2060 Chiron and 5145 Pholus, as well as occasional comets, as with the Deep Impact mission that travelled to Comet Tempel 1, and near-Earth asteroids, with occasional use of the Hubble and Spitzer Space Telescopes. Buie also assists in the development of advanced astronomical instrumentation.
Asteroid 7553 Buie is named in honor of the astronomer, who has also been profiled as part of an article on Pluto in Air & Space Smithsonian magazine.
Harold Reitsema, Sentinel Mission Director
Dr. Harold James Reitsema (b. January 19, 1948, Kalamazoo, Michigan) is the foundation's Sentinel Mission Director and a U.S. astronomer. Reitsema was formerly Director of Science Mission Development at Ball Aerospace & Technologies, the B612 Foundation's prime contractor for designing and building its space telescope observatory. In his early career during the 1980s he was part of the teams that discovered new moons orbiting Neptune and Saturn through ground-based telescopic observations. Using a coronagraphic imaging system with one of the first charge-coupled devices available for astronomical use, they first observed Telesto in April 1980, just two months after being one of the first groups to observe Janus, also a moon of Saturn. Reitsema, as part of a different team of astronomers, observed Larissa in May 1981, by watching the occultation of a star by the Neptune system. Reitsema is also responsible for several advances in the use of false-color techniques applied to astronomical images.
Reitsema was a member of the Halley Multicolour Camera team on the European Space Agency Giotto spacecraft that took close-up images of Comet Halley in 1986. He has been involved in many of NASA's space science missions including the Spitzer Space Telescope, Submillimeter Wave Astronomy Satellite, the New Horizons mission to Pluto and the Kepler Space Observatory project searching for Earth-like planets orbiting distant stars similar to the Sun.
Reitsema participated in the ground-based observations of the Deep Impact mission in 2005, observing the impact of the spacecraft on the Tempel 1 comet using the telescopes of the Sierra de San Pedro Mártir Observatory in Mexico, along with colleagues from the University of Maryland and the Mexican National Astronomical Observatory.
Reitsema retired from Ball Aerospace in 2008 and remains a consultant to NASA and the aerospace industry in mission design and Near-Earth Objects. His education includes his B.A. in physics from Calvin College in Grand Rapids, Michigan in 1972 and a Ph.D. in astronomy from New Mexico State University in 1977. Main-belt Asteroid 13327 Reitsema is named after him to honor his achievements.
John Troeltzsch, Sentinel Program Manager
John Troeltzsch is the B612 Foundation's Sentinel Program Manager, a senior U.S. aerospace engineer and as well a program manager with Ball Aerospace & Technologies. Ball Aerospace is the Sentinel's prime contractor responsible for its design and integration, to be later launched aboard a SpaceX Falcon 9 rocket into a Venus-trailing heliocentric orbit around the Sun. Troeltzsch's responsibilities include overseeing all requirements for the observatory's detailed design and build at Ball. As part of his 31 years of service with them, he helped create three of the Hubble Space Telescope's instruments and also managed the Spitzer Space Telescope program until its launch in 2003. Troeltzsch later became the Kepler Mission program manager at Ball in 2007.
Troeltzsch's program management abilities include experience with spacecraft systems engineering and software integration through all phases of space telescope projects, from contract definition through assembly, launch and on-station operational start up. His past project experience includes the Kepler Mission, Hubble's Goddard High Resolution Spectrograph (GHRS) and its COSTAR Space Telescope corrective optics, as well as the cryogenically-cooled instruments on the Spitzer Space Telescope.
Troeltzsch was awarded the NASA Exceptional Public Service Medal for his commitment to the success of the Kepler mission. His education includes a B.Sc. and an M.Sc. in Aerospace Engineering, both from the University of Colorado in 1983 and 1989 respectively, the latter while employed at Ball Aerospace which hired him immediately after the completion of his undergraduate degree.
David Liddle, Chair, Board of Directors
Dr. David Liddle is the foundation's board chair, a former technology industry executive and a professor of computer science. He has also chaired the boards of directors of many organizations, including research institutes, in the United States.
Liddle is a partner at the venture capital firm U.S. Venture Partners, and is a co-founder and former CEO of both the Interval Research Corporation and Metaphor Computer Systems, plus a consulting professor of computer science at Stanford University, credited with heading development of the Xerox Star computer system. He served as an executive at the Xerox Corporation and IBM and currently serves on the board of directors of Inphi Corporation, the New York Times and the B612 Foundation. In January 2012, he also joined the board of directors of SRI International.
Liddle also held the chair of the board of trustees for the Santa Fe Institute, a nonprofit theoretical research center, from 1994 to 1999, and served on the U.S.'s DARPA Information, Science and Technology Committee. Additionally, he was Chair of the Computer Science and Telecommunications Board of the U.S. National Research Council due to his work on human-computer interface designs. In a field unrelated to the sciences and technology, Liddle is a Senior Fellow of the Royal College of Art in London, England.
His education includes a B.Sc. in electrical engineering from the University of Michigan and a Ph.D. in Electrical Engineering and Computer Science from the University of Toledo.
Board of directors
As of 2014 the B612 Foundation's board includes Geoffrey Baehr (formerly with Sun Microsystems and U.S. Venture Partners), plus Doctors Chapman, Piet Hut, Ed Lu (also CEO, see Leadership, above), David Liddle (Chair, see Leadership, above), and Dan Durda, a planetary scientist.
Rusty Schweickart, co-founder and chair emeritus
Russell Louis "Rusty" Schweickart (b. October 25, 1935) is a co-founder of the B612 Foundation and chair emeritus of its board of directors. He is also a former U.S. Apollo astronaut, research scientist, Air Force pilot, plus business and government executive. Schweickart, chosen in NASA's third astronaut group, is best known as the lunar module pilot on the Apollo 9 mission, the spacecraft's first crewed flight test on which he performed the first in-space test of the portable life support system used by the Apollo astronauts who walked on the Moon. Prior to joining NASA, Schweickart was a scientist at the Massachusetts Institute of Technology's Experimental Astronomy Laboratory, where he researched upper atmospheric physics and became an expert in star tracking and the stabilization of stellar images, a crucial requirement for space navigation. Schweickart's education includes a B.Sc. in aeronautical engineering and an M.Sc. in Aeronautics–Astronautics, both from the Massachusetts Institute of Technology (MIT), in 1956 and 1963 respectively. His Master's thesis was on the validation of "theoretical models of stratospheric radiance".
After serving as the backup commander of NASA's first crewed Skylab mission (the United States' first space station), he later became director of User Affairs in their Office of Applications. Schweickart left NASA in 1977 to serve for two years as California governor Jerry Brown's assistant for science and technology, and was then appointed by Brown to California's Energy Commission for five and a half years.
Schweickart co-founded the Association of Space Explorers (ASE) with other astronauts in 1984–85 and chaired the ASE's NEO Committee, producing a benchmark report, Asteroid Threats: A Call for Global Response, and submitting it to the United Nations Committee on the Peaceful Uses of Outer Space (UN COPUOS). He then co-chaired, along with astronaut Dr. Tom Jones, NASA's Advisory Council's Task Force on Planetary Defense. In 2002 he co-founded B612, also serving as its chair.
Schweickart is a Fellow of the American Astronautical Society, the International Academy of Astronautics and the California Academy of Sciences, as well as an associate fellow of the American Institute of Aeronautics and Astronautics. Among the honors he has received are the Federation Aeronautique Internationale's De la Vaulx Medal in 1970 for his Apollo 9 flight, both of NASA's Distinguished Service and Exceptional Service medals, and, unusual for an astronaut, an Emmy Award from the U.S. National Academy of Television Arts and Sciences for transmitting the first live TV pictures from space.
Clark Chapman, co-founder and board member
Clark Chapman is a B612 board member and "a planetary scientist whose research has specialized in studies of asteroids and cratering of planetary surfaces, using telescopes, spacecraft, and computers. He is a past chair of the Division for Planetary Sciences (DPS) of the American Astronomical Society and was the first editor of the Journal of Geophysical Research: Planets. He is a winner of the Carl Sagan Award for Public Understanding of Science and has worked on the science teams of the MESSENGER, Galileo and Near-Earth Asteroid Rendezvous space missions."
Chapman has a degree from Harvard University and earned two degrees from the Massachusetts Institute of Technology, including his Ph.D., in the fields of astronomy, meteorology and the planetary sciences, and also served at the Planetary Science Institute in Tucson, Arizona. He is currently on the staff of the Southwest Research Institute in Boulder, Colorado.
Dan Durda, board member
Dr. Daniel David "Dan" Durda (b. October 26, 1965, Detroit, Michigan), is a B612 board member and "a principal scientist in the Department of Space Studies of the Southwest Research Institute (SwRI) in Boulder, Colorado. He has more than 20 years experience researching the collisional and dynamical evolution of main-belt and near-Earth asteroids, Vulcanoids, Kuiper belt comets, and interplanetary dust." He is the author of 68 journal and scientific articles and has presented his reports and findings at 22 professional symposiums. He has also taught as an adjunct professor in the Department of Sciences at Front Range Community College.
Durda is an active instrument-rated pilot who has flown numerous aircraft, including high performance F/A-18 Hornets and the F-104 Starfighters, and "was a 2004 NASA astronaut selection finalist. Dan is one of three SwRI payload specialists who will fly on multiple suborbital spaceflights on Virgin Galactic's Enterprise and XCOR Aerospace's Lynx."
His education includes a B.Sc. in astronomy from The University of Michigan, plus an M.Sc. and a Ph.D., both in astronomy at the University of Florida, in 1987, 1989 and 1993 respectively. Besides winning the University of Florida's Kerrick Prize "for outstanding contributions in astronomy", Asteroid 6141 Durda is named in his honour.
Strategic advisers
As of July 2014, the Foundation has taken on over twenty key advisers drawn from the sciences, the space industry and other professional fields. Their goals are to provide both advice and critiques, and assist in several other facets of the Sentinel Mission. Included among them are: Dr. Alexander Galitsky, a former Soviet computer scientist and B612 Founding Circle adviser; British Astronomer Royal, cosmologist and astrophysicist Lord Martin Rees, the Baron Rees of Ludlow; U.S. Star Trek director Alexander Singer; U.S. science journalist and writer Andrew Chaikin; British astrophysicist and songwriter Dr. Brian May; U.S. astronomer Carolyn Shoemaker; U.S. astrophysicist Dr. David Brin; Romanian cosmonaut Dumitru Prunariu; U.S. physicist and mathematician Dr. Freeman Dyson; U.S. astrophysicist and former Harvard-Smithsonian Center for Astrophysics head Dr. Irwin Shapiro; U.S. film director Jerry Zucker; British-U.S. balloonist Julian Nott; Dutch astrophysicist and B612 co-founder Dr. Piet Hut; former U.S. ambassador Philip Lader; British cosmologist and astrophysicist Dr. Roger Blandford; U.S. writer and Whole Earth Catalog founder Stewart Brand; U.S. media head Tim O'Reilly; and former U.S. NASA astronaut Dr. Tom Jones.
Tom Jones, strategic adviser
Dr. Thomas David "Tom" Jones (b. January 22, 1955) is a strategic adviser to B612, member of the NASA Advisory Council and a former U.S. astronaut and planetary scientist who has studied asteroids for NASA, engineered intelligence-gathering systems for the CIA, and helped develop advanced mission concepts to explore the Solar System. In his 11 years with NASA he flew on four Space Shuttle missions, logging a total of 53 days in space. His flight time included three spacewalks to install the centerpiece science module of the International Space Station (ISS). His publications include Planetology: Unlocking the Secrets of the Solar System.
After graduating from the U.S. Air Force Academy where he received his B.Sc. in 1977, Jones earned a Ph.D. in Planetary Sciences from the University of Arizona in 1988. His research interests included the remote sensing of asteroids, meteorite spectroscopy, and applications of space resources. In 1990 he joined Science Applications International Corporation in Washington, D.C. as a senior scientist. Dr. Jones performed advanced program planning for NASA's Goddard Space Flight Center's Solar System Exploration Division. His work there included the investigation of future robotic missions to Mars, asteroids, and the outer Solar System.
After a year of training following his selection by NASA he became an astronaut in July 1991. In 1994 he flew as a mission specialist on successive Space Shuttle flights, running science operations on the "night shift" during STS-59 and successfully deploying and retrieving two science satellites. While helping set a shuttle mission endurance record of nearly 18 days in orbit, Jones used Columbia's robotic Canadarm to release the Wake Shield satellite and later grapple it from orbit. His last space flight was in February 2001, helping to deliver the U.S. Destiny Laboratory Module to the ISS, where he helped install the laboratory module in a series of three space walks lasting over 19 hours. That installation marked the start of onboard scientific research on the ISS.
Among his honors are NASA's medals and awards for Space Flight, Exceptional Service and Outstanding Leadership, plus the Federation Aeronautique Internationale's (FAI) Komarov Diploma and a NASA Graduate Student Research Fellowship.
Piet Hut, co-founder and strategic adviser
Dr. Piet Hut (b. September 26, 1952, Utrecht, The Netherlands) is a co-founder of the B612 Foundation, one of its strategic advisers, and a Dutch astrophysicist, who divides his time between research in computer simulations of dense stellar systems and broadly interdisciplinary collaborations, ranging from fields in natural science to computer science, cognitive psychology and philosophy. He is currently Program Head in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, New Jersey, former home to Albert Einstein.
Hut's specialization is in "stellar and planetary dynamics; many of his more than two hundred articles are written in collaboration with colleagues from different fields, ranging from particle physics, geophysics and paleontology to computer science, cognitive psychology and philosophy." Dr. Hut was an early adviser to Lu and served as a founding member of the B612 Foundation's board of directors.
Hut has held positions in a number of faculties, including the Institute for Theoretical Physics, Utrecht University (1977–1978); the Astronomical Institute at the University of Amsterdam (1978–1981); Astronomy Department of the University of California, Berkeley (1984–1985) and in the Institute for Advanced Study, in Princeton, N.J. (1981–present). He has held honors, functions, fellowships and memberships in almost 150 different professional organizations, universities and conferences, and published over 225 papers and articles in scientific journals and symposiums, including his first in 1976 on "The Two-Body problem with a Decreasing Gravitational Constant". In 2014 he became a strategic adviser to the B612 Foundation.
His education includes an M.Sc. from the University of Utrecht and a double Ph.D. in particle physics and astrophysics from the University of Amsterdam, in 1977 and 1981 respectively. Asteroid 17031 Piethut is named after him, honoring his work in planetary dynamics and his co-founding of B612.
Dumitru Prunariu, strategic adviser
Dr. Dumitru-Dorin Prunariu (b. 27 September 1952) is a retired Romanian cosmonaut and a strategic adviser to the B612 Foundation. In 1981 he flew an eight-day mission to the Soviet Salyut 6 space station where he and his crewmates completed experiments in astrophysics, space radiation, space technology, and space medicine. He was awarded the titles of Hero of the Socialist Republic of Romania and Hero of the Soviet Union, as well as the Hermann Oberth Gold Medal, the Golden Star Medal and the Order of Lenin.
Prunariu is a member of the International Academy of Astronautics, the Romanian National COSPAR Committee, and the Association of Space Explorers (ASE). From 1993 until 2004 he was the permanent representative of the ASE at the United Nations Committee on the Peaceful Uses of Outer Space (UN COPUOS), and he has represented Romania at COPUOS sessions since 1992. He also became the vice-president of the International Institute for Risk, Security and Communication Management (EURISC), and from 1998 to 2004 was the president of the Romanian Space Agency. In 2000 he was appointed Associate Professor of Geopolitics within the Faculty of International Business and Economics, Academy of Economic Studies in Bucharest, and in 2004 he was elected COPUOS's Chairman of the Scientific and Technical Subcommittee. He was then elected as COPUOS's top-level chairman, serving from 2010 to 2012, and also elected as the president of the ASE with a three-year mandate.
Prunariu has co-authored several books on space flight and both presented and published numerous scientific papers. His education includes a degree in aerospace engineering in 1976 from the Politehnica University of Bucharest. His Ph.D. thesis led to improvements in the field of space flight dynamics.
Deflection methods
A number of methods have been devised to 'deflect' an asteroid or other NEO away from an Earth-impacting trajectory, so that it can entirely avoid entering the Earth's atmosphere. Given sufficient advance lead time, a change to the body's velocity of as little as one centimetre per second will allow it to avoid hitting the Earth. Proposed and experimental deflection methods include ion beam shepherds, focused solar energy and the use of mass drivers or solar sails.
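Simple arithmetic gives a feel for why so small a velocity change can suffice. The sketch below multiplies a 1 cm/s change by the lead time and by a factor-of-three along-track amplification, a commonly quoted rule of thumb for small along-track kicks (which alter the orbital period); both the amplification factor and the round numbers are illustrative assumptions, not figures from any particular mission study.

SECONDS_PER_YEAR = 3.156e7
EARTH_RADIUS_KM = 6371.0

dv = 0.01   # velocity change in m/s, i.e. one centimetre per second
for years in (1, 5, 10, 20):
    # along-track drift, with the assumed 3x orbital amplification
    drift_km = 3 * dv * years * SECONDS_PER_YEAR / 1000.0
    print(f"{years:2d} yr lead time: ~{drift_km:7.0f} km "
          f"({drift_km / EARTH_RADIUS_KM:4.1f} Earth radii)")

On these assumptions, a decade of lead time accumulates a drift on the order of one and a half Earth radii, enough to turn a direct hit into a miss.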
Initiating a nuclear explosive device above, on, or slightly beneath the surface of a threatening NEO is a potential deflection option, with the optimal detonation height dependent upon the NEO's composition and size. In the case of a threatening "rubble pile", a stand-off configuration (detonation at a height above the surface) has been put forth as a means to prevent the potential fracturing of the rubble pile. However, given sufficient advance warning of an asteroid's impact, most scientists avoid endorsing explosive deflection due to the number of potential issues involved. Other methods that can accomplish NEO deflections include:
Gravity tractor
An alternative to an explosive deflection is to move a dangerous asteroid slowly and consistently over time. The effect of a tiny constant thrust can accumulate to deviate an object sufficiently from its predicted course. In 2005 Drs. Ed Lu and Stanley G. Love proposed using a large, heavy uncrewed spacecraft hovering over an asteroid to gravitationally pull the latter into a non-threatening orbit. The method functions due to the mutual gravitational attraction between the spacecraft and the asteroid. When the spacecraft counters its gravitational attraction towards the asteroid by the use of, for example, an ion thruster engine, the net effect is that the asteroid is accelerated, or moved, towards the spacecraft and thus slowly deflected from the orbital path that would lead it to a collision with Earth.
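The scale of the effect can be sketched with a short calculation: the acceleration imparted to the asteroid is G·m/d², independent of the asteroid's own mass, so even a modest spacecraft accumulates a measurable delta-v over years of station-keeping. The spacecraft mass and hover distance below are assumed round numbers for illustration, not parameters from the Lu–Love proposal.

SECONDS_PER_YEAR = 3.156e7
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2

m_sc = 20_000.0          # spacecraft mass in kg (20 t, assumed)
d = 200.0                # hover distance from the asteroid's centre, m (assumed)
a = G * m_sc / d**2      # acceleration imparted to the asteroid

for years in (1, 5, 10):
    dv = a * years * SECONDS_PER_YEAR
    print(f"{years:2d} yr of towing: delta-v = {dv * 1000:6.2f} mm/s")

A decade of towing at these assumed values yields roughly a centimetre per second, the magnitude of change discussed above.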
While slow, this method has the advantage of working irrespective of an asteroid's composition. It would even be effective on a comet, loose rubble pile, or an object spinning at a high rate. However, a gravity tractor would likely have to spend several years stationed beside and tugging on the body to be effective. The Sentinel Space Telescope's mission is designed to provide the required advance lead time.
According to Rusty Schweickart, the gravitational tractor method also has a controversial aspect because during the process of changing an asteroid's trajectory, the point on Earth where it would most likely hit would slowly be shifted temporarily across the face of the planet. It means the threat for the entire planet might be minimized at a temporary cost of some specific states' security. Schweickart recognizes that choosing the manner and direction the asteroid should be "dragged" may be a difficult international decision, and one that should be made through the United Nations.
An early NASA analysis of deflection alternatives, conducted in 2007, stated: "'Slow push' mitigation techniques are the most expensive, have the lowest level of technical readiness, and their ability to both travel to and divert a threatening NEO would be limited unless mission durations of many years to decades are possible." But a year later, in 2008, the B612 Foundation released a technical evaluation of the gravity tractor concept, produced on contract to NASA. Their report confirmed that a transponder-equipped tractor "with a simple and robust spacecraft design" can provide the needed towing service for a 140-meter-diameter equivalent, Hayabusa-shaped asteroid or other NEO.
Kinetic impact
While an asteroid is still far from Earth, a means of deflecting it is to directly alter its momentum by colliding a spacecraft with the asteroid. The further away from the Earth, the smaller the required impact force becomes. Conversely, the closer a dangerous near-Earth object (NEO) is to Earth at the time of its discovery, the greater the force that is required to make it deviate from its collision trajectory with the Earth. Closer to Earth, the impact of a massive spacecraft is a possible solution to a pending NEO impact.
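A rough momentum balance shows why lead time matters: the velocity change is dv = beta·m·v/M, where m and v are the impactor's mass and speed, M is the asteroid's mass, and beta is the momentum-enhancement factor from ejecta (beta = 1 for a perfectly inelastic hit). All numbers below are illustrative assumptions; the impact speed matches the AIDA figure quoted later in this section.

import math

density = 2600.0                 # kg/m^3, assumed rocky-asteroid density
diameter = 140.0                 # m, the "city-killer" size class
radius = diameter / 2.0
M_ast = density * (4.0 / 3.0) * math.pi * radius**3   # asteroid mass

m_i = 500.0                      # impactor mass in kg (assumed)
v_i = 22_530 / 3.6               # 22,530 km/h converted to m/s

for beta in (1.0, 2.0, 4.0):
    dv = beta * m_i * v_i / M_ast
    print(f"beta = {beta:3.1f}: delta-v = {dv * 1000:.3f} mm/s")

Even with optimistic beta values the change is on the order of millimetres per second, which is why kinetic impactors rely on long lead times for the small deflection to accumulate.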
In 2005, in the wake of the successful U.S. mission that crashed its Deep Impact probe into Comet Tempel 1, China announced its plan for a more advanced version: the landing of a spacecraft probe on a small NEO in order to push it off course. In the 2000s the European Space Agency (ESA) began studying the design of a space mission named Don Quijote, which, if flown, would have been the first intentional asteroid deflection mission ever designed. ESA's Advanced Concepts Team also demonstrated theoretically that a deflection of 99942 Apophis could be achieved by sending a spacecraft weighing less than a tonne to impact against the asteroid.
ESA had originally identified two NEOs as possible targets for its Quijote mission: 2002 AT4 and (10302) 1989 ML. Neither asteroid represents a threat to Earth. In a subsequent study, two different possibilities were selected: the Amor asteroid 2003 SM84 and 99942 Apophis; the latter is of particular significance to Earth as it will make a close approach in 2029 and 2036. In 2013, ESA announced at the 44th annual Lunar and Planetary Science Conference that its mission would be combined into a joint ESA-NASA Asteroid Impact & Deflection Assessment (AIDA) mission, proposed for 2019–2022. The target selected for AIDA will be a binary asteroid, so that the deflection effect can also be observed from Earth by timing the rotation period of the binary pair. AIDA's new target, a component of the binary asteroid 65803 Didymos, will be impacted at a velocity of 22,530 km/h (14,000 mph).
A NASA analysis of deflection alternatives, conducted in 2007, stated: "Non-nuclear kinetic impactors are the most mature approach and could be used in some deflection/mitigation scenarios, especially for NEOs that consist of a single small, solid body."
Funding status
The B612 Foundation is a California 501(c)(3) non-profit, private foundation. Financial contributions to the B612 Foundation are tax-exempt in the United States. Its principal offices are in Mill Valley, California; they were previously located in Tiburon, California.
Fund raising has not gone well for B612 as of June 2015. With an overall goal of raising approximately $450 million for the project, the foundation raised only a small fraction of that amount in 2012 and 2013.
Foundation name
The B612 Foundation is named in tribute to the eponymous home asteroid of the hero of Antoine de Saint-Exupéry's best-selling philosophical fable of Le Petit Prince (The Little Prince). In aviation's early pioneer years of the 1920s, Saint-Exupéry made an emergency landing on top of an African mesa covered with crushed white limestone seashells. Walking around in the moonlight he kicked a black rock and soon deduced it was a meteorite that had fallen from space.
That experience later contributed, in 1943, to his literary creation of Asteroid B-612 in his philosophical fable of a little prince fallen to Earth, with the home planetoid's name having been adapted from one of the mail planes Saint-Exupéry once flew, bearing the registration marking A-612.
Also inspired by the story is an asteroid discovered in 1993, though not identified as posing any threat to Earth, named 46610 Bésixdouze (the numerical part of its designation represented in hexadecimal as 'B612', while the textual part is French for "B six twelve"). As well, a small asteroid moon, Petit-Prince, discovered in 1998 is named in part after The Little Prince.
See also
Qingyang event
99942 Apophis
Asteroid impact prediction
Asteroid impact avoidance
Asteroid Day
Asteroid Terrestrial-impact Last Alert System (ATLAS)
Deep Space Industries
Gravity tractor
List of impact craters on Earth
List of meteor air bursts
NEOShield
Near Earth Object Surveillance Satellite (NEOS Sat)
Planetary Resources
Potentially hazardous object
Spaceguard
Spaceguard Foundation
Tunguska event
United Nations Committee on the Peaceful Uses of Outer Space (UN COPUOS)
References
Notes
Citations
Further reading
Lewis, John S. Comet And Asteroid Impact Hazards On A Populated Earth: Computer Modeling, Volume 1, Academic Press, 2000, ,
Powell, Corey S. "How to Deflect a Killer Asteroid: Researchers Come Up With Contingency Plans That Could Help Our Planet Dodge A Cosmic Bullet", Discover, September 18, 2013, pp. 58–60 (subscription).
Schweickart, Lu, Hut and Chapman. "The Asteroid Tugboat: To Prevent An Asteroid From Hitting Earth, A Space Tug Equipped With Plasma Engines Could Give It A Push", October 13, 2003, Scientific American
Steel, Duncan. Rogue Asteroids and Doomsday Comets: The Search for the Million Megaton Menace That Threatens Life on Earth, Wiley & Sons, 1995, [1997], , .
External links
B612 Foundation: early website homepage (archived)
B612 Foundation: Sentinel Mission Factsheet (Feb. 2013, PDF)
Dr. Ed Lu at TEDxMarin: Changing the Course of the Solar System (video, 14:04)
NBC Nightly News: Early-Warning Telescope Could Detect Dangerous Asteroids, broadcast April 22, 2014 (video, 2:27)
Defending Earth from Asteroids with Neil deGrasse Tyson, public presentation and moderated panel discussion with members of the Association of Space Explorers and the B612 Foundation, at the American Museum of Natural History, New York City, October 25, 2013 (video, 58:03)
NEO Threat Detection and Warning: Plans for an International Asteroid Warning Network, Presentation to the United Nations Committee on the Peaceful Uses of Outer Space (UN COPUOS) by Dr. Timothy Spahr, Director, Minor Planet Center, Smithsonian Astrophysical Observatory, February 18, 2013 (PDF)
Dr. Ed Lu Congressional Testimony, Washington, D.C., March 20, 2013, United States Senate Sub-Committee on Science and Space: "Assessing the Risks, Impacts and Solutions for Space Threats" (video, 23:49)
Rusty's Talk: Dinosaur Syndrome Avoidance Project - How Gozit?, a July 17, 2014 presentation before an audience at NASA's Ames Research Center's Director's Colloquium, addressing the status of the three essential elements to avoiding catastrophic asteroid impacts (video, 55:34)
Charities based in California
Planetary defense organizations
Antoine de Saint-Exupéry
Non-profit organizations based in the San Francisco Bay Area
Mountain View, California
Mill Valley, California
Science and technology in the San Francisco Bay Area
Planetary science
Organizations established in 2002
2002 establishments in the United States
Astronomical surveys
Scientific research foundations in the United States
Articles containing video clips
Space science organizations
Rusty Schweickart | B612 Foundation | Astronomy | 12,272 |
36,996,546 | https://en.wikipedia.org/wiki/Chemical%20reaction%20network%20theory | Chemical reaction network theory is an area of applied mathematics that attempts to model the behaviour of real-world chemical systems. Since its foundation in the 1960s, it has attracted a growing research community, mainly due to its applications in biochemistry and theoretical chemistry. It has also attracted interest from pure mathematicians due to the interesting problems that arise from the mathematical structures involved.
History
Dynamical properties of reaction networks were studied in chemistry and physics after the invention of the law of mass action. The essential steps in this study were introduction of detailed balance for the complex chemical reactions by Rudolf Wegscheider (1901), development of the quantitative theory of chemical chain reactions by Nikolay Semyonov (1934), development of kinetics of catalytic reactions by Cyril Norman Hinshelwood, and many other results.
Three eras of chemical dynamics can be distinguished in the flux of research and publications. These eras may be associated with their leaders: the first is the van 't Hoff era, the second may be called the Semenov–Hinshelwood era, and the third is the Aris era.
The "eras" may be distinguished based on the main focuses of the scientific leaders:
van’t Hoff was searching for general laws of chemical reactions related to specific chemical properties. The term "chemical dynamics" belongs to van’t Hoff.
The Semenov–Hinshelwood focus was the explanation of critical phenomena observed in many chemical systems, in particular in flames. The concept of chain reactions elaborated by these researchers influenced many sciences, especially nuclear physics and engineering.
Aris’ activity was concentrated on the detailed systematization of mathematical ideas and approaches.
The mathematical discipline "chemical reaction network theory" was originated by Rutherford Aris, a famous expert in chemical engineering, with the support of Clifford Truesdell, the founder and editor-in-chief of the journal Archive for Rational Mechanics and Analysis. The paper of R. Aris in this journal was communicated to the journal by C. Truesdell. It opened a series of papers by other authors (which were then communicated by R. Aris himself). The well-known papers of this series are the works of Frederick J. Krambeck, Roy Jackson, Friedrich Josef Maria Horn, Martin Feinberg and others, published in the 1970s. In his second "prolegomena" paper, R. Aris mentioned the work of N.Z. Shapiro and L.S. Shapley (1965), in which an important part of his scientific program was realized.
Since then, chemical reaction network theory has been further developed by a large number of researchers internationally, including P. De Leenheer, D. Angeli and E. D. Sontag ("Monotone chemical reaction networks", J. Math. Chem., 41(3):295–314, 2007); G. Craciun and C. Pantea ("Identifiability of chemical reaction networks", J. Math. Chem., 44:1, 2008); A. N. Gorban and G. S. Yablonsky ("Extended detailed balance for systems with irreversible reactions", Chemical Engineering Science, 66:5388–5399, 2011); and I. Otero-Muras, J. R. Banga and A. A. Alonso ("Characterizing multistationarity regimes in biochemical reaction networks", PLoS ONE, 7(7):e39194, 2012).
Overview
A chemical reaction network (often abbreviated to CRN) comprises a set of reactants, a set of products (often intersecting the set of reactants), and a set of reactions. For example, the pair of combustion reactions

2 H2 + O2 → 2 H2O
C + O2 → CO2

form a reaction network. The reactions are represented by the arrows. The reactants appear to the left of the arrows; in this example they are H2 (hydrogen), O2 (oxygen) and C (carbon). The products appear to the right of the arrows; here they are H2O (water) and CO2 (carbon dioxide). In this example, since the reactions are irreversible and neither of the products is used in the reactions, the set of reactants and the set of products are disjoint.
Mathematical modelling of chemical reaction networks usually focuses on what happens to the concentrations of the various chemicals involved as time passes. Following the example above, let x1 represent the concentration of H2 in the surrounding air, x2 represent the concentration of O2, x3 represent the concentration of H2O, and so on. Since all of these concentrations will not in general remain constant, they can be written as functions of time, e.g. x1(t), etc.
These variables can then be combined into a vector x = (x1, x2, …, xn),
and their evolution with time can be written

dx/dt = f(x(t)).

This is an example of a continuous autonomous dynamical system, commonly written in the form dx/dt = f(x). The number of molecules of each reactant used up each time a reaction occurs is constant, as is the number of molecules produced of each product. These numbers are referred to as the stoichiometry of the reaction, and the difference between the two (i.e. the overall number of molecules used up or produced) is the net stoichiometry. This means that the equation representing the chemical reaction network can be rewritten as

dx/dt = Γ v(x(t)).

Here, each column of the constant matrix Γ represents the net stoichiometry of a reaction, and so Γ is called the stoichiometry matrix. v is a vector-valued function where each output value represents a reaction rate, referred to as the kinetics.
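As a concrete, hedged sketch of this formalism (illustrative only; the rate constants and initial concentrations below are arbitrary assumptions, not data from the sources cited here), the combustion network above can be simulated in the dx/dt = Γ v(x) form with mass-action kinetics:

import numpy as np
from scipy.integrate import solve_ivp

# Species order: [H2, O2, C, H2O, CO2]
# Reactions:  2 H2 + O2 -> 2 H2O   and   C + O2 -> CO2
Gamma = np.array([[-2,  0],   # H2
                  [-1, -1],   # O2
                  [ 0, -1],   # C
                  [ 2,  0],   # H2O
                  [ 0,  1]])  # CO2

k = np.array([1.0, 0.5])      # assumed rate constants

def v(x):
    h2, o2, c = x[0], x[1], x[2]
    return np.array([k[0] * h2**2 * o2,   # rate of 2 H2 + O2 -> 2 H2O
                     k[1] * c * o2])      # rate of C + O2 -> CO2

def rhs(t, x):
    return Gamma @ v(x)

x0 = [1.0, 1.0, 0.5, 0.0, 0.0]            # initial concentrations (assumed)
sol = solve_ivp(rhs, (0.0, 20.0), x0, rtol=1e-8)
print("final concentrations:", np.round(sol.y[:, -1], 4))

The two columns of Gamma are the net stoichiometries of the two reactions, and v(x) applies the mass-action assumption discussed in the next subsection.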
Common assumptions
For physical reasons, it is usually assumed that reactant concentrations cannot be negative, and that each reaction only takes place if all its reactants are present, i.e. all have non-zero concentration. For mathematical reasons, it is usually assumed that is continuously differentiable.
It is also commonly assumed that no reaction features the same chemical as both a reactant and a product (i.e. no catalysis or autocatalysis), and that increasing the concentration of a reactant increases the rate of any reactions that use it up. This second assumption is compatible with all physically reasonable kinetics, including mass action, Michaelis–Menten and Hill kinetics. Sometimes further assumptions are made about reaction rates, e.g. that all reactions obey mass action kinetics.
Other assumptions include mass balance, constant temperature, constant pressure, spatially uniform concentration of reactants, and so on.
Types of results
As chemical reaction network theory is a diverse and well-established area of research, there is a significant variety of results. Some key areas are outlined below.
Number of steady states
These results relate to whether a chemical reaction network can produce significantly different behaviour depending on the initial concentrations of its constituent reactants. This has applications in e.g. modelling biological switches—a high concentration of a key chemical at steady state could represent a biological process being "switched on" whereas a low concentration would represent being "switched off".
For example, the catalytic trigger is the simplest catalytic reaction without autocatalysis that allows multiplicity of steady states, introduced in 1976 (V.I. Bykov, V.I. Elokhin, G.S. Yablonskii, "The simplest catalytic mechanism permitting several steady states of the surface", React. Kinet. Catal. Lett., 4(2):191–198, 1976):

A2 + 2Z ⇌ 2AZ
B + Z ⇌ BZ
AZ + BZ → AB + 2Z

This is the classical adsorption mechanism of catalytic oxidation.
Here, A2, B and AB are gases (for example, O2, CO and CO2), Z is the "adsorption place" on the surface of the solid catalyst (for example, Pt), and AZ and BZ are the intermediates on the surface (adatoms, adsorbed molecules or radicals).
This system may have two stable steady states of the surface for the same concentrations of the gaseous components.
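The multiplicity can be seen by enumerating the steady states of the surface coverages numerically. The sketch below is illustrative only: the two adsorption steps are taken as irreversible, the rate constants are arbitrary assumed values (not those of Bykov et al.), and the stability of the roots is not assessed.

import numpy as np
from scipy.optimize import fsolve

a, b, k = 1.0, 0.6, 10.0       # k1*[A2], k2*[B], k3 (assumed values)

def rhs(s):
    x, y = s                   # x = coverage of AZ, y = coverage of BZ
    z = 1.0 - x - y            # fraction of free adsorption places Z
    return [2*a*z**2 - k*x*y,  # d[AZ]/dt (dissociative adsorption of A2)
            b*z - k*x*y]       # d[BZ]/dt

roots = set()
for x0 in np.linspace(0.0, 1.0, 6):
    for y0 in np.linspace(0.0, 1.0 - x0, 6):
        s, info, ok, msg = fsolve(rhs, [x0, y0], full_output=True)
        x, y = s
        if ok == 1 and -1e-9 <= x <= 1 and -1e-9 <= y <= 1 and x + y <= 1 + 1e-9:
            roots.add((round(x, 4), round(y, 4)))

for x, y in sorted(roots):
    print(f"steady state: AZ = {x:.3f}, BZ = {y:.3f}, Z = {1 - x - y:.3f}")

For these assumed constants the grid of initial guesses returns several admissible steady states, including surface states completely poisoned by A or by B.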
Stability of steady states
Stability determines whether a given steady state solution is likely to be observed in reality. Since real systems (unlike deterministic models) tend to be subject to random background noise, an unstable steady state solution is unlikely to be observed in practice. Instead, stable oscillations or other types of attractors may appear.
Persistence
Persistence has its roots in population dynamics. A non-persistent species in population dynamics can go extinct for some (or all) initial conditions. Similar questions are of interest to chemists and biochemists, i.e. if a given reactant was present to start with, can it ever be completely used up?
Existence of stable periodic solutions
Results regarding stable periodic solutions attempt to rule out "unusual" behaviour. If a given chemical reaction network admits a stable periodic solution, then some initial conditions will converge to an infinite cycle of oscillating reactant concentrations. For some parameter values it may even exhibit quasiperiodic or chaotic behaviour. While stable periodic solutions are unusual in real-world chemical reaction networks, well-known examples exist, such as the Belousov–Zhabotinsky reactions. The simplest catalytic oscillator (nonlinear self-oscillations without autocatalysis)
can be produced from the catalytic trigger by adding a "buffer" step

B + Z ⇌ (BZ)

where (BZ) is an intermediate that does not participate in the main reaction.
Network structure and dynamical properties
One of the main problems of chemical reaction network theory is the connection between network structure and the properties of dynamics. This connection is important even for linear systems; for example, the simple cycle with equal interaction weights has the slowest decay of the oscillations among all linear systems with the same number of states.
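This property of cycles can be checked directly. For the cycle A1 → A2 → … → An → A1 with equal (unit) rate constants, the kinetic matrix is K = −I + P, where P is the cyclic permutation matrix; its eigenvalues are exp(2πij/n) − 1, so the slowest non-zero decay rate is 1 − cos(2π/n), which shrinks as the cycle grows. A minimal numerical check, with unit rate constants assumed:

import numpy as np

for n in (3, 5, 10, 20):
    P = np.roll(np.eye(n), 1, axis=0)    # cyclic shift: A_j feeds A_{j+1}
    K = -np.eye(n) + P                   # kinetic matrix of the cycle
    ev = np.linalg.eigvals(K)
    decay_rates = -ev.real               # one rate is ~0 (mass conservation)
    slowest = min(d for d in decay_rates if d > 1e-9)
    print(f"n = {n:2d}: slowest non-zero decay rate = {slowest:.4f}")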
For nonlinear systems, many connections between structure and dynamics have been discovered. First of all, these are results about stability. For some classes of networks, explicit construction of Lyapunov functions is possible without a priori assumptions about special relations between rate constants. Two results of this type are well known: the deficiency zero theorem and the theorem about systems without interactions between different components.
The deficiency zero theorem gives sufficient conditions for the existence of a Lyapunov function in the classical free energy form G = Σi ci (ln(ci/ci^eq) − 1), where ci is the concentration of the i-th component and ci^eq its equilibrium value. The theorem about systems without interactions between different components states that if a network consists of reactions of the form αk Ai → Σj βkj Aj (for k = 1, …, r, where r is the number of reactions, Ai is the symbol of the ith component, and αk, βkj are non-negative integers) and admits the stoichiometric conservation law M = Σi mi ci (where all mi > 0), then the weighted L1 distance between two solutions with the same M monotonically decreases in time.
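A minimal numerical illustration of such a Lyapunov function (the reaction, rate constants and initial concentrations are assumed for illustration): for the reversible mass-action reaction A ⇌ B, the free energy G decreases monotonically along trajectories.

import numpy as np
from scipy.integrate import solve_ivp

kf, kr = 2.0, 1.0                     # assumed forward/reverse rate constants
c_eq = np.array([1.0, 2.0])           # equilibrium: kf*A_eq = kr*B_eq

def rhs(t, c):
    r = kf * c[0] - kr * c[1]         # net rate of A -> B
    return [-r, r]

def G(c):
    # classical free-energy Lyapunov function
    return float(np.sum(c * (np.log(c / c_eq) - 1.0)))

sol = solve_ivp(rhs, (0.0, 3.0), [2.5, 0.5], dense_output=True)
for t in np.linspace(0.0, 3.0, 7):
    cA, cB = sol.sol(t)
    print(f"t = {t:3.1f}: A = {cA:.3f}, B = {cB:.3f}, G = {G(sol.sol(t)):+.4f}")

The printed values of G decrease toward their equilibrium value, as the theorem guarantees for this class of networks.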
Model reduction
Modelling of large reaction networks faces various difficulties: the models include too many unknown parameters, and their high dimension makes the modelling computationally expensive. The model reduction methods were developed together with the first theories of complex chemical reactions. Three simple basic ideas have been invented:
The quasi-equilibrium (or pseudo-equilibrium, or partial equilibrium) approximation (a fraction of reactions approach their equilibrium fast enough and, after that, remain almost equilibrated).
The quasi steady state approximation or QSS (some of the species, very often intermediates or radicals, exist in relatively small amounts; they quickly reach their QSS concentrations, and then follow, as dependent quantities, the dynamics of the other species while remaining close to the QSS). The QSS is defined as the steady state under the condition that the concentrations of the other species do not change.
The limiting step or bottleneck is a relatively small part of the reaction network, in the simplest cases a single reaction, whose rate is a good approximation to the reaction rate of the whole network.
The quasi-equilibrium approximation and the quasi steady state methods were developed further into the methods of slow invariant manifolds and computational singular perturbation. The methods of limiting steps gave rise to many methods of the analysis of the reaction graph.
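The QSS idea can be made concrete with the textbook enzyme mechanism E + S ⇌ ES → E + P: setting d[ES]/dt = 0 yields the Michaelis–Menten rate v = k2·E0·S/(Km + S) with Km = (k−1 + k2)/k1. The sketch below compares the full mass-action model with its QSS reduction; all rate constants and concentrations are illustrative assumptions, chosen so that E0 is much smaller than S0 + Km, the usual validity condition.

import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 10.0, 1.0, 1.0          # assumed rate constants
E0, S0 = 0.1, 5.0                     # total enzyme, initial substrate
Km = (km1 + k2) / k1

def full(t, y):
    s, es = y                         # free substrate, enzyme-substrate complex
    e = E0 - es                       # free enzyme from the conservation law
    return [-k1*e*s + km1*es,         # dS/dt
            k1*e*s - (km1 + k2)*es]   # dES/dt

def qss(t, y):
    (s,) = y
    return [-k2 * E0 * s / (Km + s)]  # reduced Michaelis-Menten rate

T = 50.0
sol_full = solve_ivp(full, (0.0, T), [S0, 0.0], rtol=1e-8)
sol_qss = solve_ivp(qss, (0.0, T), [S0], rtol=1e-8)
print("substrate remaining, full model:", round(sol_full.y[0, -1], 4))
print("substrate remaining, QSS model: ", round(sol_qss.y[0, -1], 4))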
References
External links
Specialist wiki on the mathematics of reaction networks
Mathematical chemistry | Chemical reaction network theory | Chemistry,Mathematics | 2,458 |
4,566,476 | https://en.wikipedia.org/wiki/Pulverized%20coal-fired%20boiler | A pulverized coal-fired boiler is an industrial or utility boiler that generates thermal energy by burning pulverized coal (also known as powdered coal or coal dust since it is as fine as face powder in cosmetic makeup) that is blown into the firebox.
The basic idea of a firing system using pulverized fuel is to use the whole volume of the furnace for the combustion of solid fuels. Coal is ground to the size of a fine grain, mixed with air and burned in the flue gas flow. Biomass and other materials can also be added to the mixture. Coal contains mineral matter which is converted to ash during combustion. The ash is removed as bottom ash and fly ash. The bottom ash is removed at the furnace bottom.
This type of boiler dominates coal-fired power stations, providing steam to drive large turbines.
History
Prior to the developments leading to the use of pulverized coal, most boilers utilized grate firing, where the fuel was mechanically distributed onto a moving grate at the bottom of the firebox in a partially crushed, gravel-like form. Air for combustion was blown upward through the grate, carrying the lighter ash and smaller particles of unburned coal up with it, some of which would adhere to the sides of the firebox. In 1918, The Milwaukee Electric Railway and Light Company, later Wisconsin Electric, conducted tests in the use of pulverized coal at its Oneida Street power plant. Those experiments helped Fred L. Dornbrook to develop methods of controlling the pulverized coal's tarry ash residues with boiler feed water tube jackets, which served to reduce the surface temperature of the firebox walls and allowed the ash deposits to be easily removed. That plant became the first central power station in the United States to use pulverized fuel.
The Oneida Street power plant near Milwaukee's City Hall was decommissioned and renovated in 1987. It is now the site of the Milwaukee Repertory Theatre.
How it works
The concept of burning coal that has been pulverized into a fine powder stems from the belief that if the coal is made fine enough, it will burn almost as easily and efficiently as a gas. The feeding rate of the pulverized coal is controlled by computers, and is varied according to the boiler demand and the amount of air available for drying and transporting fuel. Pieces of coal are crushed between balls or cylindrical rollers that move between two tracks or "races." The raw coal is then fed into the pulverizer along with hot air drawn from the boiler. As the coal gets crushed by the rolling action, the hot air dries it and blows out the usable fine coal powder to be used as fuel. The powdered coal from the pulverizer is directly blown to a burner in the boiler. The burner mixes the powdered coal in the air suspension with additional pre-heated combustion air and forces it out of a nozzle, similar in action to fuel being atomized by a fuel injector in an internal combustion engine. Under operating conditions, there is enough heat in the combustion zone to ignite all the incoming fuel.
Ash removal
There are two methods of ash removal at furnace bottom:
Dry bottom boiler
Wet bottom boiler, also called Slag tap
The fly ash is carried away with the flue gas and is separated from it into various hoppers along its path, and finally in an ESP or a bag filter.
Current technologies
Pulverized coal power plants are divided into three categories: subcritical pulverized coal (SubCPC) plants, supercritical pulverized coal (SCPC) plants, and ultra-supercritical pulverized coal (USCPC) plants. The primary difference between the three types of pulverized coal boilers is their operating temperatures and pressures. Subcritical plants operate below the critical point of water (647.096 K and 22.064 MPa). Supercritical and ultra-supercritical plants operate above the critical point. As pressures and temperatures increase, so does the operating efficiency. Subcritical plants operate at about 37% efficiency, supercritical plants at about 40%, and ultra-supercritical plants in the 42–45% range.
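The practical weight of these efficiency differences can be illustrated by converting each efficiency into a heat rate and a coal consumption per kilowatt-hour. The calorific value used below is an assumed, illustrative figure; as noted next, real coals vary widely.

# Coal burned per kWh of electricity at different cycle efficiencies.
CAL_VALUE_MJ_PER_KG = 24.0   # assumed calorific value of the coal
KWH_IN_MJ = 3.6              # 1 kWh = 3.6 MJ

for name, eff in [("subcritical", 0.37),
                  ("supercritical", 0.40),
                  ("ultra-supercritical", 0.44)]:
    heat_rate = KWH_IN_MJ / eff                  # heat input per kWh out
    coal_kg = heat_rate / CAL_VALUE_MJ_PER_KG    # coal burned per kWh
    print(f"{name:20s} heat rate {heat_rate:4.1f} MJ/kWh, "
          f"coal {coal_kg * 1000:3.0f} g/kWh")

On these assumptions, moving from a subcritical to an ultra-supercritical cycle cuts coal use from roughly 405 to 341 grams per kilowatt-hour.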
There are many types of pulverized coal, having different calorific values (CV), such as Indonesian coal or steel grade coal (Indian coal).
Steam locomotives
Pulverized coal firing has been used, to a limited extent, in steam locomotives. For example, see Prussian G 12.
Merchant ships
In 1929, the United States Shipping Board evaluated a pulverized coal-fired boiler on the steamship Mercer, a 9,500-ton merchant ship. According to its report, the boiler heated with pulverized coal on the Mercer ran at 95% of the efficiency of its best oil-fuelled journey. Firing pulverized coal was also cheaper to operate and install than ship boilers using oil as fuel. First steps towards using diesel engines as a means of propulsion on smaller ships were also undertaken by the end of the 1920s (see Dieselisation).
See also
Coal-water slurry fuel
Fluidized bed combustion
Pulverizer
References
External links
Article on pulverized coal power at the World Resources Institute
University of Stuttgart: Cyclone Furnace
Chinese Coal Imports
Outdoor Wood Boilers
Power station technology
Boilers
Coal technology
Energy conversion | Pulverized coal-fired boiler | Chemistry | 1,079 |
2,839,775 | https://en.wikipedia.org/wiki/Xiaoxue | The traditional Chinese lunisolar calendar divides a year into 24 solar terms (節氣). Xiǎoxuě is the 20th solar term. It begins when the Sun reaches the celestial longitude of 240° and ends when it reaches the longitude of 255°. It more often refers in particular to the day when the Sun is exactly at the celestial longitude of 240°. In the Gregorian calendar, it usually begins around 22 November and ends around 7 December.
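Because the 24 solar terms divide the ecliptic into equal 15° segments (360° / 24), the term number follows from the Sun's longitude by simple modular arithmetic. A minimal Python sketch, assuming the traditional convention that Lichun, the first term, begins at 315°:

```python
def solar_term_number(sun_longitude_deg: float) -> int:
    """Return the 1-based solar term number for a given solar longitude.

    Each term spans 15 degrees (360 / 24); Lichun, the traditional
    first term, begins at celestial longitude 315 degrees.
    """
    return int(((sun_longitude_deg - 315.0) % 360.0) // 15.0) + 1

assert solar_term_number(240.0) == 20   # Xiaoxue begins at 240 degrees
assert solar_term_number(254.9) == 20   # ...and ends as the Sun nears 255
```

Real calendars determine the moment the Sun's apparent longitude crosses each boundary from an ephemeris; this sketch only maps a given longitude to its term.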
Pentads
虹藏不見, 'Rainbows are concealed from view'. It was believed that rainbows were the result of yin and yang energy mixing; winter, being dominated by yin, would not present rainbows.
天氣上騰地氣下降, 'The qi of the sky ascends, the qi of the earth descends'
閉塞而成冬, 'Closure and stasis create winter'. The end of mixing between sky and earth, yin and yang, leads to the dormancy of winter.
Date and time
References
External links
Gregory C. Eaves: Soseol (소설, 小雪), first day of snow, Korea.net, 17 Nov 2016.
20
Winter time | Xiaoxue | Physics | 245 |
5,964,169 | https://en.wikipedia.org/wiki/Iodosobenzene | Iodosobenzene or iodosylbenzene is an organoiodine compound with the empirical formula . This colourless solid compound is used as an oxo transfer reagent in research laboratories examining organic and coordination chemistry.
Preparation and structure
Iodosobenzene is prepared from iodobenzene, which is first oxidized with peracetic acid to the diacetate. Hydrolysis of the resulting diacetate affords "PhIO":
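Schematically, the route can be summarized with the balanced equations below (AcOH = acetic acid, OAc = acetate); this is a standard textbook sketch of the peracetic acid oxidation and hydrolysis steps, not reproduced from the sources cited here:

PhI + CH3CO3H + AcOH → PhI(OAc)2 + H2O

PhI(OAc)2 + H2O → PhIO + 2 AcOH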
The structure of iodosobenzene has been verified crystallographically. Its low solubility in most solvents and its vibrational spectroscopy indicate that it is not molecular but polymeric, consisting of –I–O–I–O– chains. Related derivatives are also oligomeric. The related diacetate, PhI(OAc)2, illustrates the ability of iodine(III) to adopt a T-shaped geometry without multiple bonds.
Theoretical studies show that the bonding between the iodine and oxygen atoms in iodosobenzene represents a single dative I-O sigma bond, confirming the absence of the double I=O bond.
A monomeric derivative of iodosylbenzene is known in the form of 2-(tert-butylsulfonyl)iodosylbenzene, a yellow solid. Its C–I–O angle is 94.78°, and its C–I and I–O distances are 2.128 and 1.848 Å, respectively.
Applications
Iodosobenzene has no commercial uses, but in the laboratory it is employed as an "oxo-transfer reagent." It epoxidizes certain alkenes and converts some metal complexes into the corresponding oxo derivatives. Although it is an oxidant, it is also mildly nucleophilic. These oxo-transfer reactions operate by the intermediacy of adducts PhI=O→M, which release PhI.
A mixture of iodosobenzene and sodium azide in acetic acid converts alkenes to vicinal diazides.
Safety
This compound is explosive and should not be heated under vacuum.
See also
Dess-Martin reagent
References
Iodanes
Phenyl compounds
Inorganic polymers
Reagents for organic chemistry | Iodosobenzene | Chemistry | 462 |
68,306,965 | https://en.wikipedia.org/wiki/Immortality%20Bus | The Immortality Bus is a 1978 Wanderlodge that has been made to appear as a 38-foot brown coffin.
The bus was used by Zoltan Istvan and various other transhumanist activists during his 2016 US presidential campaign to deliver a Transhumanist Bill of Rights to the US Capitol and to promote the idea that death can be conquered by science. The nearly four-month journey of the art vehicle from San Francisco to Washington, DC in 2015 had embedded journalists and documentarians, including those from The New York Times, Der Spiegel, The Verge, The Telegraph, and others.
On board the bus were drones, virtual reality gear, a 4-foot robot named Jethro Knights, biohacking equipment, posters about transhumanism, and nootropics for riders to try. An open invitation was extended to anyone in America to travel on the bus. The Immortality Bus has become one of the most widely recognized life extension activist projects and has been featured in several documentaries and articles on the history of the life extensionist movement.
Journey
After a successful crowdfunding campaign that raised $27,380 on Indiegogo, Zoltan Istvan bought the 1978 Wanderlodge in Sacramento, California. In his front yard in Mill Valley, California, he and his team converted the bus into an art vehicle that resembled a 38-foot casket, including plastic flowers on top.
The Immortality Bus left the San Francisco Bay Area on September 5, 2015. It headed to Tehachapi, California, where it attended GrindFest, and riders of the bus, including Vox's Dylan Matthews and Zoltan Istvan, were implanted with microchips. From there the bus headed to Las Vegas, then San Diego, and then Arizona to visit the life extension group People Unlimited and the Alcor cryonics facility.
After visiting Alcor the bus traveled to Texas for campaign events and then went to Arkansas to protest against marijuana prohibition. It stopped at events in Mississippi before illegally entering a megachurch in Alabama where activists handed out pamphlets on transhumanism. In Alabama it also visited the historic Freedom Riders museum, where Zoltan argued that cyborg rights will be another upcoming civil rights battle.
In Charlotte, North Carolina, John McAfee (then the Presidential candidate of the Cyber Party) visited the bus and debated Zoltan Istvan.
The Immortality Bus team later made speeches at Florida's Church of Perpetual Life (co-founded by William Faloon), and Zoltan lectured using his avatar in Second Life as part of a virtual event with Terasem.
In its final stages, the bus traversed up the eastern seaboard before arriving on November 14, 2015, to the US Capitol. On the steps of the Supreme Court, Zoltan Istvan wrote the original Transhumanist Bill of Rights before posting it on the US Capitol on November 15. Improved versions of the Transhumanist Bill of Rights have since been made via internet crowdsourcing organized by the Transhumanist Party, with version 2.0 published in 2017 and version 3.0 published in 2018.
After the journey
Zoltan Istvan was once a relatively unknown candidate, but the Immortality Bus and the media coverage it generated helped him place 4th (behind John McAfee, Gary Johnson, and Jill Stein) in an iQuanti survey of Google searches of all Presidential candidates who were neither Democratic nor Republican.
In a feature article on the bus, The New York Times Magazine called the Immortality Bus “the great brown sarcophagus of the American Highway. It was a metaphor of life itself.” Short video stories of the Immortality Bus were made by The Atlantic, CNET, BuzzFeed, Vocativ, RT, and Australia's Viceland.
Pulitzer Prize-winning journalist Jonathan Weiner wrote that the journey of the Immortality Bus was modeled after Ken Kesey and the Merry Pranksters' famous cross-country bus trip, which helped inspire a generation of activists. The Immortality Bus is the subject of the closing chapter of the Wellcome Prize-winning book by Mark O'Connell, To Be a Machine, and also the subject of a chapter in Radicals by Jamie Bartlett.
The documentary Immortality or Bust, which focused on the Immortality Bus campaign, won the breakout award at the 2019 Raw Science Film Festival in Los Angeles as well as the Best Biohacking Awareness Award at the 2021 GeekFest Toronto. Independent distributor Gravitas picked up the documentary, and the film is available on iTunes and Amazon Prime.
The bus is currently parked in long-term storage in Virginia, and Zoltan Istvan is working to donate the bus to a museum that will use it to promote life extension.
Criticism
Some transhumanists were dismayed with the amount of media attention the Immortality Bus received. They believed it was a stunt and sent a frivolous message about the seriousness of the life extension movement. Other transhumanists countered that such activism helps grow the movement and raise awareness. USA Today called the bus "a morbid Oscar Mayer Wienermobile".
External links
http://www.immortalitybus.com/
References
Transhumanism
Individual buses
Art vehicles
Decorated vehicles
Customised buses
Life extension | Immortality Bus | Technology,Engineering,Biology | 1,055 |
67,294,874 | https://en.wikipedia.org/wiki/WASP-80 | WASP-80 is a K-type main-sequence star about 162 light-years away from Earth. The star's age is much younger than the Sun's at 1.352 billion years. WASP-80 could be similar to the Sun in concentration of heavy elements, although this measurement is highly uncertain.
The star was named Petra in 2019 by Jordanian amateur astronomers as part of the NameExoWorlds contest.
Three multiplicity surveys in 2015–2018 did not detect any stellar companions to WASP-80, but a survey in 2020 detected a 0.07 companion candidate at an angular separation of 2.132 arcseconds, with a false alarm probability of 3%.
Planetary system
In 2013 a transiting hot Jupiter planet, WASP-80 b, was detected on a tight, circular orbit. The planet was named Wadirum by Jordanian astronomers in December 2019. The measured temperature of its dayside is 937 K and that of its nightside is 851 K. This temperature difference indicates a rather low planetary albedo and weak global transport of heat.
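For context, a planet's equilibrium temperature follows from a standard energy-balance formula, T_eq = T_star * sqrt(R_star / (2a)) * (1 - A)^(1/4). A minimal Python sketch is below; the stellar and orbital values used are rough, illustrative assumptions for a late K dwarf hosting a close-in hot Jupiter, not figures taken from this article:

```python
import math

R_SUN_M = 6.957e8   # solar radius in metres
AU_M = 1.496e11     # astronomical unit in metres

def equilibrium_temperature(t_star_k: float, r_star_rsun: float,
                            a_au: float, bond_albedo: float = 0.0) -> float:
    """Equilibrium temperature (K) assuming uniform heat redistribution."""
    return (t_star_k
            * math.sqrt(r_star_rsun * R_SUN_M / (2.0 * a_au * AU_M))
            * (1.0 - bond_albedo) ** 0.25)

# Assumed illustrative values: T_eff ~ 4150 K, R_star ~ 0.57 R_sun, a ~ 0.034 AU.
print(round(equilibrium_temperature(4150.0, 0.57, 0.034)))  # ~819 K at zero albedo
```

Comparing such a zero-albedo estimate against measured dayside and nightside temperatures is one way claims about albedo and heat transport, like the one above, are assessed.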
Measurement of the Rossiter–McLaughlin effect in 2015 revealed that WASP-80b's orbit is well-aligned with the equatorial plane of the star, with an orbital obliquity of 14°.
Although one transmission spectrum of the planetary atmosphere showed signs of ionised potassium, another measurement in 2017 yielded a gray and featureless spectrum, probably due to a high cloud deck or haze in the atmosphere of WASP-80b. The James Webb Space Telescope has characterized the atmospheric composition of WASP-80 b, detecting signs of water vapor and methane. These detections help constrain the planet's origin and evolution and enable comparative studies bridging the Solar System's gas giants and the diverse population of exoplanets.
References
Aquila (constellation)
K-type main-sequence stars
Planetary systems with one confirmed planet
Planetary transit variables
J20124017-0208391
Petra | WASP-80 | Astronomy | 409 |
206,993 | https://en.wikipedia.org/wiki/Poi%20%28food%29 | Poi is a traditional staple food in the Polynesian diet, made from taro. Traditional poi is produced by mashing cooked taro on a wooden pounding board (), with a carved pestle () made from basalt, calcite, coral, or wood. Modern methods use an industrial food processor to produce large quantities for retail distribution. This initial paste is called . Water is added to the paste during mashing, and again just before eating, to achieve the desired consistency, which can range from highly viscous to liquid. In Hawaii, this is informally classified as either "one-finger", "two-finger", or "three-finger", alluding to how many fingers are required to scoop it up (the thicker the poi, the fewer fingers required to scoop a sufficient mouthful).
Poi can be eaten immediately, when fresh and sweet, or left to ferment and become sour, developing a smell reminiscent of plain yogurt. A layer of water on top can prevent fermenting poi from developing a crust.
History and culture
Poi is thought to have originated in the Marquesas Islands, created some time after initial settlement by Polynesian explorers. While mashing food does occur in other parts of the Pacific, the method involved was more rudimentary. In western Polynesia, the cooked starch was mashed in a wooden bowl using a makeshift pounder made from either the stem of a coconut leaf or a hard, unripe breadfruit with several wooden pegs stuck into it. The origins of poi coincided with the development of basalt pounders in the Marquesas, which soon spread elsewhere in eastern Polynesia, with the exception of New Zealand and Easter Island.
Poi was considered such an important and sacred aspect of daily Hawaiian life that Hawaiians believed that the spirit of Hāloa, the legendary ancestor of the Hawaiian people, was present when a bowl of poi was uncovered for consumption at the family dinner table. Accordingly, all conflict among family members was required to come to an immediate halt.
Hawaiians traditionally cook the starchy, potato-like heart of the taro corm for hours in an underground oven called an imu, which is also used to cook other types of food such as pork, carrots, and sweet potatoes. Breadfruit can also be made into poi (i.e. poi ʻulu); however, Hawaiians consider this inferior in taste to poi made from taro.
Fermentation
Poi has a paste-like texture and a delicate flavor when freshly prepared in the traditional manner, with a pale purple color that naturally comes from the taro corm. It has a smooth, creamy texture. The flavor changes distinctly once the poi has been made; fresh poi is sweet and edible; each day thereafter, the poi loses sweetness and turns sour due to a natural fermentation that involves Lactobacillus bacteria, yeasts, and Geotrichum fungi. Therefore, some people find fermented poi more palatable if it is mixed with milk or sugar or both. The speed of this fermentation process depends upon the bacterial level present in the poi, but the souring process can be slowed by storing poi in a cool, dark location. To prepare commercial poi that has been stored in a refrigerator, it is squeezed out of the bag into a bowl (sometimes adding water), and a thin layer of water is put over the part exposed to air to keep a crust from forming on top. New commercial preparations of poi require refrigeration, but stay fresh longer and taste sweeter.
Sour poi is still edible, but may be less palatable, and is usually served with salted fish or Hawaiian lomi salmon on the side (as in the lyrics "my fish and poi"). Sourness can be prevented by freezing or dehydrating fresh poi, although the resulting poi after defrosting or rehydrating tends to taste bland when compared to the fresh product. Sour poi has an additional use as a cooking ingredient with a sour flavor (similar to buttermilk), usually in breads and rolls.
Nutrition and dietary and medical uses
Taro is low in fat, high in vitamin A, and abounds in complex carbohydrates.
Poi has been used specifically as a milk substitute for babies, or as a baby food. It is supposed to be easy to digest. It contains no gluten, making it safe to eat for people who have celiac disease or a gluten intolerance.
See also
List of ancient dishes and foods
Fufu – West African dish made from mashed cassava, yams, plantain, and taro
Nilupak – Filipino delicacies made from mashed starchy foods
References
Further reading
Sky Barnhart, "Powered by Poi Kalo, a Legendary Plant, Has Deep Roots in Hawaiian Culture", NO KA 'OI Maui Magazine, July/August 2007. Retrieved on 13 November 2012.
Amy C. Brown and Ana Valiere, "The Medicinal Uses of Poi", The National Center for Biotechnology Information, 23 June 2006. Retrieved on 13 November 2012.
Pamela Noeau Day, "Poi – The Ancient 'New' Superfood", POI, 22 December 2009. Retrieved on 11 November 2012.
Stacy Yuen Hernandez, "Got Poi? The Original Hawaiian Diet", POI, 24 March 2009. Retrieved on 11 November 2012.
Marcia Z. Mager, "What Is Poi Anyway?", POI, 24 March 2009. Retrieved on 11 November 2012.
Craig W. Walsh, "Where Can I Buy Poi?", POI, 26 May 2005. Retrieved on 12 November 2012.
External links
The History of Poi
"Powered By Poi". Maui No Ka 'Oi Magazine, Vol. 11, No. 4 (July 2007).
"Kipahulu Kitchen". Maui No Ka 'Oi Magazine, Vol. 10 No. 2 (April 2006). Article about community commercial kitchen in Kipahulu, Maui, where poi is made.
"Poi". YouTube video about the making of Poi.
Ancient dishes
Cook Islands cuisine
French Polynesian cuisine
Fermented foods
Native Hawaiian cuisine
National dishes
Oceanian cuisine
Polynesian cuisine
Porridges
Staple foods
Taro dishes | Poi (food) | Biology | 1,317 |
4,391,463 | https://en.wikipedia.org/wiki/Durchmusterung | In astronomy, Durchmusterung or Bonner Durchmusterung (BD) is an astrometric star catalogue of the whole sky, published by the Bonn Observatory in Germany from 1859 to 1863, with an extension published in Bonn in 1886. The name comes from ('run-through examination'), a German word used for a systematic survey of objects or data. The term has sometimes been used for other astronomical surveys, including not only stars, but also the search for other celestial objects. Special tasks include celestial scanning in electromagnetic wavelengths shorter or longer than visible light waves.
Original catalog
The Bonner Durchmusterung (abbreviated BD) was initiated by Friedrich Argelander and compiled largely from observations carried out by his assistants. It resulted in a catalogue of the positions and apparent magnitudes of 342,198 stars down to an approximate apparent magnitude of 9.5, covering the sky from 90°N to 2°S declination. The catalogue, published in three parts, was accompanied by charts plotting the positions of the stars, and was the basis for the Astronomische Gesellschaft Katalog (AGK) and Smithsonian Astrophysical Observatory Star Catalog (SAO) catalogues of the 20th century. In 1886 Eduard Schönfeld, also in Bonn, published an extension from 2°S to 23°S declination. (A further extension, from an observatory in Cordoba, Argentina, was published in five parts between 1892 and 1932 to cover the southern sky from 22°S to 90°S declination.) BD star numbers are still used and allow the correlation of the work with modern projects.
The format of a BD number is exemplified by "BD−16 1591", which is the BD number of Sirius. This number signifies that in the catalog, Sirius is the 1591st star listed in the declination zone between −16 and −17 degrees, counting from 0 hours right ascension. Stellar positions and zone boundaries use the equinox of epoch B1855.0.
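A small Python sketch of how such a designation can be decomposed into its parts; the regex and returned field names are this example's own conventions, not part of any formal standard:

```python
import re

# Matches designations such as "BD-16 1591" or "CPD-23 1234":
# catalogue prefix, signed declination zone, running number within the zone.
DM_PATTERN = re.compile(r"^(BD|CD|CPD)\s*([+-]\d{1,2})\s+(\d+)$")

def parse_durchmusterung(designation: str) -> dict:
    """Split a Durchmusterung designation into catalogue, zone, and number."""
    text = designation.strip().replace("\u2212", "-")  # normalise Unicode minus
    m = DM_PATTERN.match(text)
    if m is None:
        raise ValueError(f"not a Durchmusterung designation: {designation!r}")
    catalogue, zone, number = m.groups()
    return {
        "catalogue": catalogue,   # BD, CD, or CPD
        "zone": int(zone),        # e.g. -16 means the zone from -16 to -17 degrees
        "number": int(number),    # running number in the zone, from 0 h right ascension
    }

print(parse_durchmusterung("BD\u221216 1591"))
# {'catalogue': 'BD', 'zone': -16, 'number': 1591}
```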
Extension
Many astronomical research projects—from studies of celestial mechanics and the Solar System, up to the nascent field of astrophysics—were made possible by the publication of the atlas and data of the Bonner Durchmusterung. However, a deficiency of the BD was that it did not cover the whole sky, because far southern stars are not visible from Germany.
This led the scientific community to supplement the BD with two additional astrometric surveys carried out by observatories located in the Southern Hemisphere: Córdoba, Argentina, and Cape Town, South Africa. The Cordoba Durchmusterung (abbreviated CD, or, less commonly, CoD) was made visually (as was the BD), but the Cape Photographic Durchmusterung (CP or CPD) was conducted by the then-new photographic technique, which had just been shown to have sufficient accuracy. The southern stars are identified by CD and CPD numbers in a manner similar to the BD numbering system.
A few decades later, the positional accuracy of the Durchmusterung catalogues began to be insufficient for many projects. To establish a more exact reference system for the Bonner Durchmusterung, astronomers and geodesists began to work on a fundamental celestial coordinate system based on the Earth's rotation axis, the vernal equinox and the ecliptic plane in the late 19th century. This astrometric project led to the Catalogues of Fundamental Stars of the Berlin observatory, and was used as an exact coordinate frame for the BD and AGK. It was modernized in the 1920s (FK3, mean accuracy ±1″), and in 2000 (FK6, accuracy 0.1″) as successive steps of cosmic geodesy. Together with radioastronomical measurements, the FK6 accuracy was better than ±0.1″.
Modern counterparts
The Hipparcos satellite operated between 1989 and 1993 and observed around 118,000 stars over the whole sky. Three star catalogues were published from its data:
Hipparcos Catalogue (118,000 stars, average accuracy ±0.001″)
Tycho Catalogue (about 1,050,000 stars, with accuracy ±0.03″)
Tycho-2 Catalogue (about 2,500,000 stars), which was improved for double star effects and proper motions using the Astrographic Catalogue observations.
The Gaia space observatory, launched in December 2013, has catalogued a billion stars with an accuracy down to 20 microarcseconds (0.00002″).
References
Further reading
External links
Bonner Durchmusterung (Argelander 1859–1862) (clicking on "bd.gz" downloads the gzipped 10.1MB catalogue)
Cordoba Durchmusterung (Thome 1892–1932) (clicking on "cd.dat.gz" downloads the gzipped 19MB catalogue) (note: the extension might have to be removed with some text editor before opening)
Cape Photographic Durchmusterung (Gill+ 1895–1900) (clicking on "cpd.dat.gz" downloads the gzipped 14.1MB catalogue) (note: the extension might have to be removed with some text editor before opening)
Astronomical catalogues
Astronomical catalogues of stars
Astronomical surveys | Durchmusterung | Astronomy | 1,092 |
476,592 | https://en.wikipedia.org/wiki/Southern%20Illinois%20University%20Carbondale | Southern Illinois University (SIU) is a public research university in Carbondale, Illinois, United States. Chartered in 1869, SIU is the oldest and flagship campus of the Southern Illinois University system. SIU enrolls students from all 50 states and more than 100 countries. Originally founded as a normal college, the university today provides programs in a variety of disciplines, combining a strong liberal arts tradition with a focus on research. SIU was granted limited university status in 1943 and began offering graduate degrees in 1950. A separate campus was established in Edwardsville, Illinois in 1957, eventually becoming Southern Illinois University Edwardsville.
The university is classified among "R2: Doctoral Universities – High research activity". It is also known for its research partnerships, including those with the Argonne and Oak Ridge National Laboratories, the U.S. Geological Survey, the U.S. Fish and Wildlife Service, and NASA. The university is home to hundreds of student organizations, twenty-seven fraternity and sorority chapters, and a nationally-recognized competitive flight team. SIU's intercollegiate athletic teams are collectively known as the Southern Illinois Salukis.
History
Southern Illinois Normal College was chartered by an act of the Twenty-Sixth Illinois General Assembly on March 9, 1869, the second state-supported normal school to be created in Illinois. Carbondale was selected to host the university and a cornerstone-laying ceremony was held on May 17, 1870. Alternate sites considered for the university included Centralia and DuQuoin, among others. The accidental death of a site contractor and other delays prevented the university's opening until 1874. The first session of the university was a summer institute with eight faculty members and an enrollment of 53 students.
In 1876 SIU admitted its first African-American student, Alexander Lane. In 1878 SIU established a program for the Douglas Corps Cadets, beginning a relationship with ROTC programs which lasts into the present day. The original "Old Main" building was destroyed by fire in 1883, and a new one was built in the same spot. The university's first student newspaper, The Normal Gazette, was published in 1888 and its first yearbook, The Sphinx, in 1899. SIU's first sports teams, known as "the Maroons", formed in the 1913-1914 school year.
The Shryock Auditorium was completed in 1918 and dedicated by former U.S. President Taft with a speech in support of the on-going war effort. Post-war prosperity aided the university's growth, and by 1922 it enrolled over 1,000 students. Stagnation occurred with the onset of the Great Depression and the sudden deaths of university presidents Henry Shryock and Roscoe Pulliam. In 1943 SIU was granted limited university status to offer graduate degrees, and in 1947 the Illinois General Assembly officially adopted the name Southern Illinois University. Budget concerns and leadership challenges dogged the presidency of Chester F. Lay, Pulliam's successor, until his resignation in 1948. In that same year, the first formal research conducted at SIU began with Lay's appointment of geneticist Carl C. Lindegren.
Delyte W. Morris was inaugurated as SIU's president in 1949. Morris was SIU's longest-serving president, his 22-year tenure seeing the expansion and transformation of the university. New educational programs, administrative positions, and physical facilities were added, financed by a growth in student population and state-supported bonds. Housing and other amenities for students received particular focus. In 1957 a second campus of SIU was established at Edwardsville, near St. Louis. This school would develop into Southern Illinois University Edwardsville, now a public university within the SIU system.
President Morris left office in 1970. Formal explanations focused on Morris' declining health, but campus unrest due to the Vietnam War, the burning of the Old Main Building in 1969, financial scandals, and distrust amongst SIU's Board of Trustees are speculated to have played a role. The university continued to grow with the creation of law, medical, and dental schools in the early 1970s. Other achievements included the opening of the long-awaited recreation center in 1977, the foundation of Project Achieve by Barbara Kupiec in 1978, and the Saluki football team's NCAA I-AA national championship win in 1983.
SIU's enrollment reached a record 24,869 students in 1991, a time when SIU became notorious for its party school reputation. Tensions with the surrounding community resulted in a ban on Halloween celebrations in the mid-1990s, as students living in university dormitories were sent home for the holiday. Funding issues stemming from Illinois' state budget crises, including the 2015-2017 budget impasse, and declining student enrollment exacerbated a situation made worse by the unexpected deaths of university presidents Paul Sarvela and Carlo Montemagno. In recent years, a focus on research, building renovations and expansions, and stabilizing enrollment numbers has improved the university's position. Student celebrations like the ones seen in Saturday Night Live's Roadshow have now largely been replaced with the traditions of "Unofficial Halloween" and "Polar Bear". Despite this, SIU was still named ninth in a list of "The Top 10 Schools that Party All Day, Everyday" by College Magazine in 2015.
Academic programs and rankings
Rankings
SIU offers 120 undergraduate majors, with more than 200 specializations, and over 100 minors. Its programs also include 80 master's degrees and 40 doctoral degrees, in addition to professional degrees in law and medicine. The university provides general and professional training ranging from two-year associate degrees to doctoral programs, as well as certificate and non-degree programs meeting the needs of those uninterested in degree education.
SIU enrolls students from all 50 US states and over 100 other nations. The university is classified among "R2: Doctoral Universities – High research activity". In 2022, The Princeton Review included SIU Carbondale among its "Best of the Midwest".
Academic colleges and schools
The various colleges, schools, and academic departments which make up SIU have been reorganized and renamed countless times since the university's founding. SIU's original designation as a teachers' college, or normal school, means many of its current academic programs can trace their establishment to a period before the creation of the college they belong to today. Only the College of Liberal Arts can trace an unbroken lineage to the year SIU was officially granted limited university status in 1943.
The College of Agricultural, Life, and Physical Sciences, and the College of Health and Human Sciences were created from the now-defunct College of Science, College of Agriculture, and College of Applied Science and Arts. The College of Engineering, Computing, Technology, and Math was originally created as the College of Engineering and Technology; the result of a protracted effort to create an independent engineering college. The most recent reordering occurred when the College of Mass Communications and Media Arts became the College of Arts and Media.
College of Agricultural, Life, and Physical Sciences
The College of Agricultural, Life, and Physical Sciences consists of six constituent schools and several pre-health professional programs. It is based in the Agricultural Building, which was constructed in 1957. The college offers experiential opportunities for students in the form of a 2,000+ acre working farm, tree improvement center, and other hands-on activities. SIU is the only public university in Illinois to offer a zoology program, and one of only two to offer programs in botany and microbiology.
College of Arts and Media
The College of Arts and Media consists of six constituent schools, including the School of Architecture, School of Art and Design, School of Arts and Media, School of Music, and School of Journalism and Advertising. As a mixture of liberal arts and digital humanities, the College of Arts and Media combines practical education with programs catering to creative pursuits. SIU offers a number of programs associated with College of Arts and Media students, including the McLeod Summer Playhouse theatre series, the Southern Illinois Symphony Orchestra, and SIU's student newspaper, The Daily Egyptian.
College of Business and Analytics
The College of Business and Analytics consists of three constituent schools, focusing on three major areas of academic focus for the college: the School of Accountancy, the School of Analytics, Finance, and Economics, and the School of Management and Marketing. Due to its association with the College of Liberal Arts, the School of Analytics, Finance, and Economics offers a B.A. degree in economics, one of only a few B.S. programs at SIU to also offer a B.A. option. The college offers numerous research facilities, including a trading floor equipped with Bloomberg terminals. The Saluki Student Investment Fund, a student organization directed by the college, manages a $3.8 million portfolio for the university. The college also offers an online M.B.A. program that was ranked #58 in the nation in 2023.
College of Engineering, Computing, Technology, and Mathematics
The College of Engineering, Computing, Technology, and Mathematics consists of six constituent schools with a wide range of national accreditation. The college is housed in a modern four-building engineering complex located near Campus Lake. The college is one of the few institutions in the United States to offer a concurrent masters with a J.D. degree in Electrical and Computer Engineering and Law. Students in the School of Computing can choose between a B.A. and a B.S. degree in Computer Science, with the option to focus on Graphic Design and/or Game Design and Development by completing a joint minor with the College of Arts and Media.
College of Health and Human Sciences
The College of Health and Human Sciences consists of six constituent schools, with programs ranging from the School of Aviation, the School of Automotive, and Allied Health to the School of Justice and Public Safety and the School of Psychological and Behavioral Sciences. SIU's School of Aviation, which maintains separate facilities at the Transportation Education Center near Southern Illinois Airport along with the School of Automotive, hosts the nationally recognized Flying Salukis.
College of Liberal Arts
The College of Liberal Arts consists of six constituent schools. The college's programs are augmented with faculty-sponsored research experiences, the ability to mix and match majors and minors to suit preferences and needs, access to internships, study abroad opportunities, and the university honors program. Most of the college's classrooms and offices are found in Faner Hall.
Campus
At the time of SIU's first class in 1874, the university consisted of one three-story building constructed between 1870 and 1874. Many of the university's first buildings were constructed as the university expanded throughout the late 1800s and early 1900s. Major additions were built during the 1960-70s and the 2000-10s. The age of the university is reflected in the various architectural styles on display, including examples of Victorian and Brutalist designs. In addition to its physical facilities, the campus boasts several areas of natural beauty, including Thompson Woods and Campus Lake. Various memorials, monuments, artistic structures, and other sites of interest are also present throughout the campus.
Student amenities
SIU offers a number of modern amenities for the benefit of its students. These include the Student Services Building, the Student Center, Morris Library, the Student Recreation Center, and the Student Health Center.
The Student Services Building contains most of the university's student-related offices. Spread across four floors, students have easy access to help and consultation from advisors at the Undergraduate Admissions Office, Graduate School, Financial Aid Office, University Housing, Career Development Center, and numerous other offices.
The Student Center is a large building near the center of campus which serves as a hub for events held by students and community members. Containing over eight and a half acres of space, the building hosts food vendors, dining and study spaces, a bowling alley and pool room, Esports Arena, the University Bookstore, Sustainability Hub, the Craft Shop, and the Saluki Food Pantry. It is the former home of the WIDB 104.3 FM student-run radio station. It is also the main meeting space for most of SIU's RSOs, as well as the Black Affairs Office, International Student Council, Student Programming Council, and both student governments.
The Student Recreation Center, or "Rec," is the university's primary hub for intramural and fitness activities. Most of the Rec's budget is raised by a student recreation fee included in students' fees, meaning individual students do not need to pay for entrance or membership. Other revenue is generated by instructional programs, camps, and community members who pay for membership. Indoor facilities include an Olympic-sized pool, areas for basketball, volleyball, racquetball, handball, and squash, a two-story running track, rooms for weightlifting, martial arts, and aerobics, and programs for the disabled.
The Student Health Center is connected to the Student Recreation Center on the east side of campus. The 57,000-square-foot health center offers a medical clinic, pharmacy, wellness resources, psychiatry clinic, sports medicine and physical therapy, and counseling and psychological services. Community partners Southern Illinois Dermatology and the Marion Eye Center also provide services.
Instructional and research facilities
The majority of SIU's instructional and research facilities are enclosed on or within Lincoln Drive, which circles the university's main campus on three sides before connecting with South Illinois Avenue. As the university expanded, new buildings with similar academic purposes to existing buildings were often added in the same location. As such, most students of any of SIU's constituent colleges will only ever use a few of SIU's main buildings.
One of the more recognizable buildings on campus is Pulliam Hall, the home of the School of Education and the location of SIU's iconic clock tower. Pulliam was once known as Carbondale University High School, a functioning high school which served to train teachers. The College of Business occupies nearby Rehn Hall. The Neckers Building, Engineering Building, and Applied Sciences and Arts Building contain most of the university's physical and chemical laboratories as well as lecture halls. The Neckers Building hosts several large telescopes, facilitating regular viewings of astronomical events. The College of Liberal Arts primarily occupies Faner Hall, whose design and size have made it a controversial symbol of the campus. Allegations that Faner was built to be riot-proof are likely apocryphal; however, it is true that Faner is almost thirty feet longer than the Titanic. Faner is also the home of the University Museum, which holds over 70,000 unique artifacts ranging from local history to original Renaissance tapestries. Students of the agricultural sciences will spend their time in the Agricultural Building, which boasts an award-winning flower display and living wall. Students in the media arts occupy the Communications Building, which hosts the annual McLeod Summer Playhouse. SIU's Law School is situated in the Lesar Law Building at the extreme west end of the campus.
All of the buildings on the main campus are connected by footpaths, interspersed with small parks and green areas. More heavily trafficked paths are lit up with brighter lighting at night as a safety feature. Students who choose to drive on campus will need to purchase a parking sticker from SIU's Parking Division or else park at the pay station lot in front of the Student Center. Walking or biking is the preferred method of transport on-campus, although Carbondale and SIU entered into an agreement with Veo Scooters in 2022 to bring electric scooters to the campus during warmer months.
Morris Library is the main library for the Southern Illinois University Carbondale campus. The library holds over four million volumes, 53,000 current periodicals and serials, and over 3.6 million microform units. It also provides access to the statewide automated library system (I-Share) and an array of online collections such as The Lancet, JSTOR, and The Oxford Dictionary of National Biography. The library is a member of the Consortium of Academic and Research Libraries in Illinois, Association of Research Libraries, and the Greater Western Library Alliance. SIU's Special Collections Research Center, which holds unique and rare historical artifacts, and the Geospatial Resources area, which holds over 255,000 maps and 93,000 aerial photographs, are maintained in the library. The library is a registered depository for Illinois, U.S. Federal, and United Nations documents. Delyte's, a coffee shop named after former SIU President Delyte W. Morris, operates near the entrance of the library.
Old campus
SIU's "First Building" was chartered in 1869 and completed in 1874. This building burned in 1883 and was replaced by a building known as "Old Main", which itself burned in 1969. While arson related to Vietnam War unrest continues to be suspected as the primary cause for the 1969 fire, this theory has never been conclusively proven. This second building was never replaced, and a rectangular green space remains where it once stood. This space is surrounded by some of the university's earliest buildings, most of which were built throughout the early 1900s. Collectively, this area of campus is known as "Old Campus".
To the east of the former site of Old Main is Davies Hall and Wheeler Hall, the latter of which served as SIU's library until the construction of the original Morris Library. On the west side is Altgeld Hall, Shryock Auditorium, and the Allyn Building. Altgeld Hall, which served as the university's science and astronomy building before being given to the School of Music, is affectionately known as "The Castle" due to its distinctive design. Similar buildings exist on four other Illinois university campuses, having been built with the funding and direction of Illinois governor John Altgeld. Shryock Auditorium is a large performance hall capped by an iconic domed roof, which was once made entirely of stained glass. The Auditorium was completed in 1918 and is named for SIU's fifth President, Henry Shryock. On the south side of the old campus area is Anthony Hall and Parkinson Laboratory. Anthony Hall was the university's first permanent dormitory structure; today it serves as an administrative office for executive staff. Being a women's dormitory, it is named in honor of Susan B. Anthony. Parkinson Laboratory is named for the university's fourth president, Daniel Parkinson, and has served continuously as the home of SIU's geology department.
Near the Old Campus area is the Old Baptist Foundation building and Woody Hall. The former building is now used as a recital hall and the meeting place of SIU's musical fraternity, while the latter building was completed in the early 1950s to take over Anthony Hall's role as SIU's permanent women's dormitory. Woody Hall today serves as an administrative office space and alumni center.
Natural scenery
SIU's campus has been recognized for its natural beauty. The most striking natural feature of the university is Campus Lake, formerly Thompson Lake, a 40-acre spring-fed lake at the southwest end of the campus. The lake has been closed to swimmers for several years due to health concerns, but remains open to canoes and kayaks. In addition, the lake is ringed by a 2-mile walking trail popular with joggers and a large frisbee golf course. At the center of the campus is Thompson Woods, an area of natural woodland crisscrossed by walking paths. Thompson Woods was given to the university as a gift from the eponymous Thompson family, which once owned the woods and surrounding campus areas.
The Dorothy Morris Garden, Kumakura Garden, and Sculpture Garden are a collection of small gardens behind Faner Hall. They include a tea house, fish pond, and numerous student-created sculptures. The gardens are located roughly on the former site of the home of Dorothy and Delyte Morris, SIU's eighth president. SIU's Rinella Field, a large green area in front of the East Campus residential area, is named after former Director of Housing Samuel Rinella. The field is often used for impromptu soccer matches as well as by SIU's Quidditch team.
SIU's campus is located near Giant City State Park, Shawnee National Forest, and several other areas popular for hiking and camping. The campus also maintains a tagged category of its diverse tree inventory, which includes a rare Dawn Redwood planted in 1950 by William Marberry.
Former facilities
There are a number of derelict facilities on or related to the SIU campus which can still be visited by students. Just west of the Thompson Point housing area sits the remains of the Small Group Housing area, otherwise known as Greek Row. The set of two-story housing structures was originally built to provide safe housing space for the university's growing fraternities and sororities, but this system largely collapsed in later decades. Southern Hills, another abandoned housing area, can be found just south of the East Campus towers along Logan Dr.
Further south of the university at the meeting of E. Pleasant Hill Road and S. Wall St. is the abandoned Marberry Arboretum. Known today by students as the "Bamboo Forest" due to its abundance of overgrown bamboo, the Marberry Arboretum was once owned by SIU faculty member William Marberry. The site contains a wide variety of plant species, but has not been regularly maintained by the Carbondale City Council.
Two completely demolished sites include the blue barracks and the Vocational Technical Institute.
Athletics
Southern Illinois University's intercollegiate athletic teams are collectively known as the Southern Illinois Salukis. The university first sponsored athletic teams during the 1913–14 school year, when they were informally known as the Maroons. Students and faculty began lobbying for a new name and mascot during the late 1940s. On March 19, 1951, the student body voted to change the official name to the Salukis. The selection of the Saluki, a royal dog of ancient Egypt, as the university's mascot is often attributed to its reputation as a fast and tenacious hunter and the southern Illinois region's colloquial nickname, "Little Egypt". The first women's sports teams were formed in 1959, and all athletics programs were merged in 1988.
SIU is classified as an NCAA Division I school. Most varsity SIU teams compete in the Missouri Valley Conference, specifically in basketball, cross country, golf, softball, women's swimming, women's tennis, track and field, and volleyball. The football program competes in the Missouri Valley Football Conference. Men's swimming and diving is part of the Mid-American Conference.
Between the spring of 2018 and the fall of 2019, SIU athletics was led by three-time national coach-of-the-year Jerry Kill. He was replaced by Liz Jarnigan, who left the university in 2021 amid an alleged cover-up scandal. As of 2022, SIU's athletics director is Tim Leonard, former athletics director for Towson University.
Athletic highlights
8 National Championships in men's gymnastics (1963, 1966, 1967, 1972), men's golf (1964), men's tennis (1964), men's basketball (1967, NIT Championship), and football (1983)
53 Olympians including 3 silver medalists and 13 top-ten individual finishes
102 All-Time CSC Academic All-Americans, leading the Missouri Valley Conference
42 NFL players, 9 NBA players, 25 MLB players
In baseball, finished second place in the National Championship in 1968 and 1971
In men's basketball, advanced to the NCAA tournament for six straight seasons between 2002 and 2007, including two trips to the Sweet Sixteen
In women's basketball, was Missouri Valley Conference champions in 2007 and in 2022
In football, was in the playoffs for seven straight years between 2003 and 2009, and advanced to the quarterfinals of the playoffs four times in five years from 2005 to 2009
In softball, has thirteen NCAA appearances and six conference championships, the most recent of which occurred in 2021
Facilities
Saluki Stadium was opened in 2010 to replace McAndrew Stadium, which had served as SIU's principal football stadium for 73 seasons. The $29.9 million stadium has a seating capacity of over 15,000. Coors Light, the official beer of ESPN's College Gameday, began being sold in the stadium in 2017.
The Banterra Center, formerly the SIU Arena prior to 2019, is the home of SIU men's and women's basketball. The 8,284-seat arena was built in 1964 and underwent a $30 million renovation in 2010.
Charlotte West Stadium is SIU's modern softball field and stadium. It was constructed for $1.7 million and opened in 2003. It hosted the Missouri Valley Conference in 2004, 2008, 2012, and 2016.
Davies Gym was built in 1925 and is located on the original main campus of SIU. The facility has been renovated several times, and is currently the home of SIU's volleyball program.
The Dr. Edward J. Shea Natatorium opened in 1977 and is one of the most modern facilities in the Missouri Valley Conference. It features a 770,000 gallon Olympic-sized pool with three underwater viewing stations, underwater speakers, Colorado electronic timing system, rapid sand filter system, and a closed gutter filtration system. The pool is located within the Student Recreation Center near campus.
Richard "Itchy" Jones Stadium opened in 2014 for the use of the Salukis baseball team. The $4.2 million stadium replaced Abe Martin Field, which was built in 1964. The stadium is the first in the United States to install Astro Turf's new 3Di on the base paths. Richard "Itchy" Jones and Abe Martin are both commemorated for their contributions to SIU athletics.
Lew Hartzog Track & Field Complex opened in 2012 and is located directly next to Itchy Jones Stadium. The complex cost $3.96 million and its multi-event synthetic turf infield can be set to accommodate an NCAA regulation soccer pitch or football field. The field is regularly used by the women's soccer team.
Student life
SIU has a vibrant student culture and is home to more than 300 Registered Student Organizations (RSOs). Student groups include honor societies, sports clubs, fraternities and sororities, religious organizations, student governments, and other special interest groups. The largest RSO on campus is the Student Programming Council, which organizes events such as concerts, comedy shows, lectures, film showings, and homecoming celebrations.
On-campus housing
On-campus housing at SIU has developed steadily from the completion of a second women's dormitory in 1953 to the expansive system of tower blocks and apartment buildings that exists today. Housing is provided in residence halls and apartments both on and near campus. Different housing opportunities are offered to undergraduates, graduates, international students, parents, and married couples.
The two main residence hall areas are known as East Campus and West Campus. West Campus, also known as Thompson Point, consists of 11 three-story dormitory structures and was built between 1957 and 1962. East Campus, also known as the Brush Towers, consists of 3 seventeen-story high rises and was built between 1965 and 1968. Each site also includes a commons building and dining hall. The traditional housing contract includes a furnished room, WiFi, utilities, and a dining plan. Residence hall rooms are fully furnished, and many have been modified to meet the needs of specific types of disability. Apartment housing is available at Evergreen Terrace, Wall & Grand, and Elizabeth Apartments.
All single students under the age of 21, not residing with their parents or legal guardians, with fewer than 26 credit hours earned after high school are required to live in University-owned and operated residence halls per university policy. This policy can be circumvented if the student is living in the permanent home of a parent or guardian, provided the home is within 60 miles of campus. Furthermore, university apartment housing is restricted to those students who are married, parents, graduate students, or who are over the age of 21; the effect of this policy is that freshmen and sophomore students often live in dormitories, while older students reside in on- and off-campus apartments.
Student government
SIU has two primary bodies of student government responsible for advising the SIU administration on student needs. The student governments are also responsible for distributing funds collected from the student activity fee to eligible RSOs. The two student governments are:
The Undergraduate Student Government (USG)
The Graduate and Professional Student Council (GPSC)
Additionally, one student is elected as a student trustee and appointed by the governor to serve as a voting member of the SIU Board of Trustees.
Greek life
SIU is home to 17 registered fraternities and 10 registered sororities, including 7 multicultural fraternities and sororities. The Greek organizations are governed by the Interfraternity Council, The College Panhellenic Association, The Multicultural Greek Council, and the National Pan-Hellenic Council. They are responsible to the dean of students and the Office of Student Affairs. Popular events held by Greek organizations include the Go Greek Barbecue and the annual "Greek Sing" talent contest.
All members of the Greek organizations at SIU must maintain a 2.0 GPA or higher to be members. The university rigorously restricts hazing and discriminatory induction practices. The first fraternity and sorority appeared on SIU's campus in 1923, although the introduction, chartering, and growth of many of the Greek groups on-campus today occurred during or after the 1940s.
Student newspaper
SIU's student-run newspaper, The Daily Egyptian, has been printed without interruption since the spring of 1921. The Daily Egyptian is published weekly in print and online during the fall and spring semesters. It has a distribution of 7,800 copies and reaches nearly 200 locations. The paper has received more than 25 awards from the Illinois College Press Association. In 2002 it received the National Newspaper Pacemaker Award for General Excellence, and in 2017 and 2018 it received the National Online Pacemaker Award. The Daily Egyptian was one of only a few university newspapers in the United States to own and operate its own printing press. The press was retired in 2015 after nearly 50 years of continuous service.
Gus Bode, a cartoon character created to give satirical commentary on the paper's articles, has appeared regularly in the paper since 1956.
Past editions of The Daily Egyptian and other SIU student newspapers going back to 1888 are maintained on-campus by Morris Library.
Saluki patrol
Founded in 1959, the Saluki Patrol is one of the oldest student security teams in the country. Organized as a form of community policing, the Saluki Patrol assists the Department of Public Safety in their duties by performing foot patrols, conducting traffic enforcement, and serving as crowd control. Members of the Saluki Patrol can often be seen on-campus in the evenings and at major on-campus sporting events.
The Saluki Patrol has continued to evolve and become more professional, with personnel receiving some of the same police training as sworn officers. Many leaders in the law enforcement community, both locally and at the state and federal level, began their careers on the Saluki Patrol.
Cardboard boat regatta
The Great Cardboard Boat Regatta is an event held every spring semester at Campus Lake. Participants include university students and community members. The goal is to complete three trips around a 200-yard course on the lake using makeshift cardboard boats. There are three different categories for entries: canoes or kayaks, experimental boats, and instant boats (boats created on-site the day of the event).
"Commodore" Richard Archer, a professor of Art and Design, created the regatta as a final examination for students in his freshman design class in 1974. Archer was inspired by Buckminster Fuller, then a distinguished professor at SIU, who had espoused the principle of "doing the most with the least." Participation peaked in the late 1980s and 90s, drawing crowds upwards of 20,000 people and receiving coverage on CNN's Good Morning America.
Saluki startup and weeks of welcome
The Saluki Startup & Weeks of Welcome are held during the first five weeks of the fall semester and include a range of activities designed to introduce new students to campus. Events include job fairs, theater and orchestra auditions, a pep rally, paint and sips, concerts, RSO fairs, a pickleball tournament, board game nights, and organized meetups between the students and faculty of each college.
These events coincide with the DuQuoin State Fair and the annual football game between SIU and SEMO, called the "War for the Wheel". Both of these events are attended by SIU students as part of the Weeks of Welcome.
Competitive teams and professional student organizations
Flying Salukis Flight Team – The Flying Salukis is one of the premier competitive flight teams in the United States. They took first place in the National Intercollegiate Flying Association (NIFA) regional competition for 7 consecutive years (2011-2017). At the NIFA national championships in 2015, the Flying Salukis won the team's ninth national title. The team has consistently beaten or tied other nationally ranked schools, including the United States Air Force Academy. The team has qualified for the national championships in 49 of the last 50 years.
Saluki Debate Team – The Saluki Debate Team is an internationally recognized award-winning debate team. Under the direction of debate coach Todd Graham, SIU won the National Parliamentary Tournament of Excellence in 2008, 2013, and 2015. The team also won the National Parliamentary Debate Association National Tournament in 2013 and 2014. They were ranked first in the country over the course of the 2010, 2012, 2013, 2014, and 2015 seasons.
Alt.news 26:46 – SIU's award-winning half-hour alternative TV news magazine. Alt news received an Emmy in the magazine news program category at the 2010 National Academy of Television Arts and Sciences Mid-America Regional Chapter Emmy Awards in St. Louis.
Forestry Club – SIU's Forestry Club is one of the university's many competitive registered student organizations. The Forestry Club was the STIHL Timbersports Midwestern Forester's Conclave champion every year from 1992 to 2009 and once more in 2017, competing in events such as pulp toss, bolt toss, log roll, and axe throw.
American Marketing Association Team – SIU's American Marketing Association Team is a registered student organization in the College of Business and Analytics. The team won national recognition in 2020 by competing in the American Marketing Association Collegiate Case Competition.
Equestrian Team – SIU's Equestrian Team is a registered student organization for students interested in equitation activities. The Equestrian Team competes in many competitions, including those hosted by the Intercollegiate Horse Show Association.
Rover Team – SIU's Rover or "Moonbuggy" Team is a registered student organization in the College of Engineering, Computing, Technology, and Mathematics. The organization competes in the Human Exploration Rover Challenge, previously known as the Moonbuggy Race, sponsored annually by NASA in Huntsville, Alabama. The team placed in the top ten during the 2016 competition.
Saluki CFA Challenge Team – The CFA Challenge Team is a group of students chosen to compete in the CFA Institute Research Challenge. The CFA Challenge Team finished in second place at the St. Louis regional competition between 2016 and 2018 and won the competition in 2021.
Steel Bridge and Concrete Canoe Team – SIU engineering students compete in steel bridge and concrete canoe competitions hosted by the American Society of Civil Engineers and the American Institute of Steel Construction.
Medieval Combat Club – The Medieval Combat Club is a registered student organization and member of the Belegarth Medieval Combat Society. The club practices a full-contact combat sport with medieval fantasy inspiration and competes against other local universities, such as the University of Illinois Urbana-Champaign.
Saluki Student Investment Fund – The Saluki Student Investment Fund provides undergraduate students with hands-on experience in portfolio management and investment research. Since its inception in 2000, the fund has grown to manage over $3.5 million in assets as of 2021.
Leadership
Systems of administration at SIU have greatly evolved since the university's earliest days. The growth of the university after the appointment of President Delyte Morris led to shorter tenures and a speedier succession of leaders. Many of SIU's Chancellors after this period were selected to serve in an interim capacity, a problem which persists in limited cases to this day. The early deaths of Chancellors Paul D. Sarvela and Carlo Montemagno only exacerbated this issue. The hiring of Austin Lane to fill the position of Chancellor in 2020 ended the succession issues that began after Chancellor Rita Cheng left to become President of Northern Arizona University.
The discrepancy between the title of President and Chancellor began after the founding of Southern Illinois University Edwardsville in 1957, along with the proliferation of associated schools and programs that were created under the tenure of SIU President Delyte Morris. Currently, both SIU Carbondale and SIU Edwardsville are led by Chancellors, who in turn report to the President of the Southern Illinois University System. The current SIU System President is Daniel F. Mahoney.
Many of the buildings on the SIU campus are named after former Presidents and Chancellors. These include the Allyn Building, the Parkinson Laboratory, the Shryock Auditorium, the Pulliam Hall and the Pulliam Industrial Education Building, the Morris Library, the Hiram H. Lesar Law Building, and the Guyon Auditorium in Morris Library.
Notable alumni
There are currently over 250,000 alumni of Southern Illinois University Carbondale worldwide.
Notable SIU alumni include:
Lionel Antoine – former NFL offensive tackle
Houston Antwine – former NFL defensive lineman
Charles Basch – professor of health education at Teachers College, Columbia University
James Belushi – actor and comedian, star of According to Jim, Saturday Night Live, and other films
Jim Bittermann – CNN European correspondent based in Paris
Gus Bode – satirical news commentator and amateur comedian
Frederick J. Brown – artist
Amos Bullocks – former NFL running back
Hannibal Buress – stand-up comedian, actor, writer, and producer
Chris Carr – former NBA player
Jeremy Chinn – NFL safety for the Carolina Panthers
Kim Chizevsky-Nicholls – IFBB pro bodybuilder
Bill Christine – sportswriter, author, and thoroughbred horse racing executive
Sam Coonrod – MLB pitcher for the Philadelphia Phillies
Randy Daniels – former Secretary of State of New York
Don S. Davis – actor and theater professor best known for his role as "General Hammond" on the TV series Stargate SG-1
Open Mike Eagle – hip hop artist and comedian
Lee England Jr. – musician and concert violinist
Steve Finley – former Major League Baseball center fielder, 5-time Gold Glove winner, 2-time All-Star, and World Series champion
Stephen Franklin – former NFL linebacker
Dennis Franz – actor best known for his work on NYPD Blue
Walt Frazier – Basketball Hall of Fame inductee named one of the 50 Greatest Players in NBA History
Julio M. Fuentes – Circuit Judge of the United States Court of Appeals for the Third Circuit
Jerry Hairston Jr. – former MLB player
Jim Hart – former NFL quarterback and 4-time Pro Bowl selection
Joan Higginbotham – engineer and NASA astronaut
Kevin House – former NFL wide receiver
Mary Lee Hu – artist and goldsmith
Troy Hudson – former NBA point guard
Muhammad Ijaz-ul-Haq – Pakistani politician and son of former President General Zia-ul-Haq
Brandon Jacobs – NFL running back
Steve James – two-time Oscar nominated film producer
Curt Jones – founder of Dippin' Dots
Darryl Jones – bassist of The Rolling Stones
Yonel Jourdain – NFL running back for the Buffalo Bills
Deji Karim – NFL running back for the Jacksonville Jaguars
Rodney P. Kelly – retired United States Air Force Major General
Timothy Krajcir – serial killer
Tony Laubach – meteorologist and storm chaser featured on Discovery Channel's Storm Chasers as a researcher with TWISTEX
Al Levine – former MLB player
Milcho Manchevski – filmmaker of Macedonia's first Oscar-nominated film
Adrian Matejka – poet, finalist for the Pulitzer Prize, and recipient of the National Book Award in poetry
Carl Mauck – former NFL center
Jenny McCarthy – actress, model, and television host
Melissa McCarthy – actress, comedian, writer, and producer
Donald McHenry – United States ambassador to the United Nations (1979–1981)
Brett James McMullen – retired United States Air Force Brigadier General
Albert E. Mead – former Governor of Washington
Travis Morgan – former USA powerlifter
Bryan Mullins – former men's basketball star and current head coach of the Southern Illinois Salukis men's basketball team
Gary Noffke – artist and silversmith
Bob Odenkirk – actor and comedian best known for his role as Saul Goodman/Jimmy McGill on AMC's series Breaking Bad and Better Call Saul
Glenn Poshard – Illinois State Senator and United States Congressman
Sir Curtis Price, KBE – President of the Royal Academy of Music and former president of the Royal Musical Association
James F. Rea – Illinois State Representative and Senator
Jason Ringenberg – founding member of Jason & the Scorchers
Richard Roundtree – actor best known for his work in the 1971 film Shaft
Marion Rushing – former NFL linebacker
John F. "Jack" Sandner – attorney, commodities trader, and former chairman of the Chicago Mercantile Exchange
Randy Savage – professional wrestler; graduated 1971
Bart Scott – NFL Pro Bowl player
Jared Yates Sexton – author, political commentator, and creative writing professor
Derek Shelton – MLB manager for the Pittsburgh Pirates
Sam Silas – NFL Pro Bowl player
Chad Simpson – Micro Award-winning short and flash fiction author
Marilyn Skoglund – Associate Justice of the Vermont Supreme Court
Russ Smith – former NFL guard
Jackie Spinner – author, journalist, and war-time correspondent
Dave Stieb – retired MLB pitcher, 7-time All-Star, pitched no-hitter on September 2, 1990
Joe Swanberg – independent filmmaker with notable filmography in the mumblecore sub-genre
Lena Taylor – Wisconsin Democratic State Senator and member of the Wisconsin 14
Terry Taylor – former NFL cornerback
Mallica Vajrathon – United Nations senior staff member
Chico Vaughn – basketball player
George Vukovich – baseball player
Robert K. Weiss – producer of The Blues Brothers and other films
Ernie Wheelwright – former NFL running back
Adrian White – former NFL safety
Walt Willey – actor best known for his work on All My Children
David Wong – author and online personality
Notable faculty
Robert Corruccini – Distinguished Professor and 1994 Outstanding Scholar; taught from 1978 to 2011 in the College of Liberal Arts, Department of Anthropology; known for his expertise in dental anthropology and epidemiology, formulating a theory of malocclusion
David F. Duncan – professor of health education and 1984 Teacher of the Year; taught from 1978 to 1989; established the Ph.D. program in community health and the masters in health care administration; later served as a policy advisor in the Clinton White House
Buckminster Fuller – taught at SIUC 1959–1970; began as an assistant professor in the School of Art and Design and gained full professorship in 1968; known for his geodesic dome design
Robert S. Gold – professor of health education; pioneer of computer programs for health education and public health; executive vice president of Macro International; founding dean of the University of Maryland School of Public Health
Lori Stewart Gonzalez – assistant professor; 23rd president of Ohio University
Michael D. Higgins – visiting Professor; Politician, Sociologist and President of Ireland
L. Brent Kington – art educator and artist who worked in blacksmithing and sculpture; widely regarded as responsible for the blacksmithing revival in the 1970s
Harris Deller – American ceramist well known for his black and white incised porcelain; spent most of his career teaching at Southern Illinois University; his work is on display in the Museum of Contemporary Art and Design in New York, as well as in other collections
William M. Lewis Sr. – director of the Cooperative Fisheries Research Unit 1950–1983 (now called the Fisheries and Illinois Aquaculture Center); chair of the Department of Zoology; president of the American Fisheries Society; received the American Fisheries Society Award of Excellence in 1995
Fazley Bary Malik – professor of theoretical nuclear and atomic physics from 1980 to 2014; Max Planck Society Senior Fellow (1976–1977); Fellow of the Bangladesh Academy of Sciences (since 2002); received the John Wheatley Award from the American Physical Society in 2007
William Marberry – assistant professor of botany from 1939 onward; noted local conservationist and creator of the Marberry Arboretum; secured a specimen of the endangered Dawn Redwood after its discovery in China in 1945 which still thrives on the SIU campus today
Harry T. Moore – professor of English and famed biographer of D.H. Lawrence; author of several books on national literature of the 20th century; namesake of the Moore Auditorium
Richard Russo – taught in the English department when his first novel was published in 1986; wrote Nobody's Fool and the Pulitzer Prize-winning Empire Falls, both of which were adapted for the screen and starred Paul Newman
Paul Arthur Schilpp – noted philosopher and educator; instructed general studies courses in philosophy; founding editor of the Library of Living Philosophers
Alan Schoen – discoverer of the gyroid
Paul Martin Simon – taught politics, history, and journalism; Illinois state representative, senator, and lieutenant governor; United States representative and senator; director of the SIU Public Policy Institute (now the Paul Simon Public Policy Institute)
Nicholas Vergette – professor of art and noted potter and sculptor; part of the British sculpting group named the "Piccassettes"
Marianne Webb – professor in the School of Music teaching organ and music theory as a nationally recognized concert organist; chapter dean and member of the American Guild of Organists; designer and namesake of the Marianne Webb organ in Shryock Auditorium on the SIU campus
See also
Alt.news 26:46
Southern Illinois University Press
WSIU-TV
List of monuments and memorials on the SIU-C Campus
References
External links
Carbondale, Illinois
Schools in Jackson County, Illinois
Carbondale
Southern Illinois University Carbondale
Aviation schools in the United States
Forestry education
Schools of mines in the United States
Universities and colleges established in 1869
1869 establishments in Illinois
Buildings and structures in Jackson County, Illinois
Education in Jackson County, Illinois
Tourist attractions in Jackson County, Illinois
Glassmaking schools | Southern Illinois University Carbondale | Materials_science,Engineering | 9,428 |
9,870,240 | https://en.wikipedia.org/wiki/CXCL7 | Chemokine (C-X-C motif) ligand 7 (CXCL7) is a human gene.
The encoded protein, Chemokine (C-X-C motif) ligand is a small cytokine belonging to the CXC chemokine family. It is an isoform of Beta-Thromboglobulin or Pro-Platelet basic protein (PPBP).
It is a protein that is released in large amounts from platelets following their activation. It stimulates various processes including mitogenesis, synthesis of extracellular matrix, glucose metabolism and synthesis of plasminogen activator.
References
Further reading
External links
Cytokines | CXCL7 | Chemistry | 143 |
15,936,520 | https://en.wikipedia.org/wiki/RNA%20extraction | RNA extraction is the purification of RNA from biological samples. This procedure is complicated by the ubiquitous presence of ribonuclease enzymes in cells and tissues, which can rapidly degrade RNA. Several methods are used in molecular biology to isolate RNA from samples, the most common of which is guanidinium thiocyanate-phenol-chloroform extraction. Usually, the phenol-chloroform solution used for RNA extraction has a lower pH, which aids in separating DNA from RNA and leads to a purer RNA preparation. The filter-paper-based lysis and elution method offers high-throughput capacity.
RNA extraction from samples frozen in liquid nitrogen, commonly ground with a mortar and pestle (or with specialized steel devices known as tissue pulverizers), is also useful in preventing ribonuclease activity.
RNase contamination
The extraction of RNA in molecular biology experiments is greatly complicated by the presence of ubiquitous and hardy RNases that degrade RNA samples. Certain RNases can be extremely hardy, and inactivating them is difficult compared with neutralizing DNases. In addition to the cellular RNases that are released, several RNases are present in the environment. RNases have evolved to have many extracellular functions in various organisms. For example, RNase 7, a member of the RNase A superfamily, is secreted by human skin and serves as a potent antipathogen defence. For these secreted RNases, enzymatic activity may not even be necessary for the RNase's exapted function. For example, immune RNases act by destabilizing the cell membranes of bacteria.
To counter this, equipment used for RNA extraction is usually cleaned thoroughly, kept separate from common lab equipment, and treated with various harsh chemicals that destroy RNases. For the same reason, experimenters take special care not to let their bare skin touch the equipment. Broad RNase inhibitors are also commercially available and are sometimes added to in vitro transcription (RNA synthesis) reactions.
See also
Column purification
DNA extraction
Ethanol precipitation
Phenol-chloroform extraction
References
External links
Two-phase wash to solve the ubiquitous contaminant-carryover problem in commercial nucleic-acid extraction kits; by Erik Jue, Daan Witters & Rustem F. Ismagilov; Nature, Scientific reports, 2020.
Biochemical separation processes
Genetics techniques | RNA extraction | Chemistry,Engineering,Biology | 486 |
27,784,358 | https://en.wikipedia.org/wiki/GuruPlug | GuruPlug is a compact and low power plug computer running Linux. It is intended to be a device that could act as a web server, a print server or any other network service. It has local storage in NAND Flash, but also offers USB ports and a Serial ATA port to connect external hard disks.
The first versions of the GuruPlug Plus had no moving parts such as fans. Combined with the low-power ARM architecture CPU, this typically results in both lower power consumption and lower noise levels than desktop PCs. However, these units had significant heating issues and were prone to overheating (the lack of a temperature sensor posed a safety concern when the unit was left running for multiple days). Newer versions of the GuruPlug Plus manage the overheating problem by adding a 2 cm fan to the design, although this eliminates the benefit of the silent design. The fan is not software-controllable and makes a sound resembling that of a hair dryer. The standard version of the GuruPlug still has no fan and thus produces no noise.
In the area of small and low-power computing, SheevaPlug was its predecessor.
Variants and modifications
The GuruPlug comes in three variants: GuruPlug Server Standard, GuruPlug Server Plus and GuruPlug Display. The Plus version features a second Gigabit Ethernet port, an eSATA port and a MicroSD slot. The Display version features an HDMI display port.
References
External links
PlugComputer Community
GuruPlug Wiki
Internal photos and performance tests
Linux-based devices
Computer storage devices
Computer-related introductions in 2010 | GuruPlug | Technology | 328 |
1,886,716 | https://en.wikipedia.org/wiki/Murchison%20meteorite | The Murchison meteorite is a meteorite that fell in Australia in 1969 near Murchison, Victoria. It belongs to the carbonaceous chondrite class, a group of meteorites rich in organic compounds. Due to its large collected mass (over 100 kilograms) and the fact that it was an observed fall, the Murchison meteorite is one of the most studied of all meteorites.
In January 2020, cosmochemists reported that the oldest material found on Earth to date consists of silicon carbide particles from the Murchison meteorite, which have been determined to be 7 billion years old, about 2.5 billion years older than the 4.54-billion-year age of the Earth and the Solar System. The published study noted that "dust lifetime estimates mainly rely on sophisticated theoretical models. These models, however, focus on the more common small dust grains and are based on assumptions with large uncertainties."
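As a quick arithmetic check of the two figures quoted above (simple addition, not from the source):

```latex
% grain age ≈ Solar System age + reported age excess
4.54\,\mathrm{Gyr} + 2.5\,\mathrm{Gyr} \approx 7.0\,\mathrm{Gyr}
```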
History
On 28 September 1969 at approximately 10:58 a.m. local time, near Murchison, Victoria, in Australia, a bright fireball was observed to separate into three fragments before disappearing, leaving a cloud of smoke. About 30 seconds later, a tremor was heard. Many fragments were found scattered over an area larger than 13 square kilometres, with individual masses up to 7 kilograms; one, weighing 680 grams, broke through a roof and fell in hay. The total collected mass of the meteorite exceeds 100 kilograms.
Classification and composition
The meteorite belongs to the CM group of carbonaceous chondrites. Like most CM chondrites, Murchison is petrologic type 2, which means that it experienced extensive alteration by water-rich fluids on its parent body before falling to Earth. CM chondrites, together with the CI group, are rich in carbon and are among the most chemically primitive meteorites. Like other CM chondrites, Murchison contains abundant calcium-aluminium-rich inclusions. More than 15 amino acids, some of the basic components of life, have been identified during multiple studies of this meteorite.
In January 2020, astronomers reported that silicon carbide grains from the Murchison meteorite had been determined to be presolar material. The oldest of these grains was found to be 3 ± 2 billion years older than the 4.54-billion-year age of the Earth and Solar System, making it the oldest material found on Earth to date.
Organic compounds
Murchison contains common amino acids such as glycine, alanine, and glutamic acid as well as unusual ones such as isovaline and pseudoleucine. A complex mixture of alkanes was isolated as well, similar to that found in the Miller–Urey experiment. Serine and threonine, usually considered to be earthly contaminants, were conspicuously absent in the samples. A specific family of amino acids called diamino acids was identified in the Murchison meteorite as well.
The initial report in 1970 stated that the amino acids were racemic and therefore formed in an abiotic manner, because amino acids of terrestrial proteins are all of the L-configuration of chirality. Later, in 1982, it was reported that the amino acid alanine had an excess of the L-configuration, but this is a protein amino acid which led several scientists to suspect terrestrial contamination according to the argument that it would be "unusual for an abiotic stereoselective decomposition or synthesis of amino acids to occur with protein amino acids but not with non-protein amino acids". But in 1997, L-excesses were also reported for several non-protein amino acids, suggesting an extraterrestrial source for molecular asymmetry in the Solar System. Some amino acids were found to be racemic (equal quantities of right-handed and left-handed). Around the same time, an enrichment in the isotope 15N was reported, however this result and the non-racemicity of alanine (but not of the others) were explained as possibly due to analysis error.
By 2001, the list of organic materials identified in the meteorite was extended to polyols.
The meteorite contained a mixture of left-handed and right-handed amino acids; most amino acids used by living organisms are left-handed in chirality, and most sugars used are right-handed. A team of chemists in Sweden demonstrated in 2005 that this homochirality could have been triggered or catalyzed by the action of a left-handed amino acid such as proline.
Several lines of evidence indicate that the interior portions of well-preserved fragments from Murchison are pristine. A 2010 study using high resolution analytical tools including spectroscopy, identified 14,000 molecular compounds, including 70 amino acids, in a sample of the meteorite. The limited scope of the analysis by mass spectrometry provides for a potential 50,000 or more unique molecular compositions, with the team estimating the possibility of millions of distinct organic compounds in the meteorite.
Nucleobases
Purine and pyrimidine compounds have been measured in the Murchison meteorite. Carbon isotope ratios for uracil and xanthine of δ13C = +44.5‰ and +37.7‰, respectively, indicate a non-terrestrial origin for these compounds. This specimen demonstrates that many organic compounds could have been delivered by early Solar System bodies and may have played a key role in life's origin.
See also
Cosmochemistry
Glossary of meteoritics
Panspermia
Notes
References
External links
Meteorites found in Australia
Geology of Victoria (state)
1969 in science
Modern Earth impact events
September 1969 events in Australia
1960s in Victoria (state)
20th-century astronomical events | Murchison meteorite | Astronomy | 1,160 |
4,628,609 | https://en.wikipedia.org/wiki/Aprotinin | The drug aprotinin (Trasylol, previously marketed by Bayer and now by Nordic Group pharmaceuticals) is a small protein, bovine pancreatic trypsin inhibitor (BPTI), or basic trypsin inhibitor of bovine pancreas, an antifibrinolytic molecule that inhibits trypsin and related proteolytic enzymes. Under the trade name Trasylol, aprotinin was used as a medication administered by injection to reduce bleeding during complex surgery, such as heart and liver surgery. Its main effect is the slowing down of fibrinolysis, the process that leads to the breakdown of blood clots. The aim in its use was to decrease the need for blood transfusions during surgery, as well as end-organ damage due to hypotension (low blood pressure) as a result of marked blood loss. The drug was temporarily withdrawn worldwide in 2007 after studies suggested that its use increased the risk of complications or death; this was confirmed by follow-up studies. Trasylol sales were suspended in May 2008, except for very restricted research use. In February 2012, the European Medicines Agency (EMA) scientific committee reversed its previous standpoint regarding aprotinin and recommended that the suspension be lifted. Nordic became distributor of aprotinin in 2012.
Chemistry
Aprotinin is a monomeric (single-chain) globular polypeptide derived from bovine lung tissue. It has a molecular weight of 6512 Da and consists of 16 different amino acid types arranged in a chain 58 residues long that folds into a stable, compact tertiary structure of the 'small SS-rich" type, containing 3 disulfides, a twisted β-hairpin and a C-terminal α-helix.
The amino acid sequence for bovine BPTI is RPDFC LEPPY TGPCK ARIIR YFYNA KAGLC QTFVY GGCRA KRNNF KSAED CMRTC GGA. There are 10 positively charged lysine (K) and arginine (R) side chains and only 4 negative aspartate (D) and glutamate (E) side chains, making the protein strongly basic, which accounts for the "basic" in its name. (The abbreviation BPTI is read either as basic pancreatic trypsin inhibitor or, after the usual source organism, as bovine pancreatic trypsin inhibitor.)
The high stability of the molecule is due to the 3 disulfide bonds linking the 6 cysteine members of the chain (Cys5-Cys55, Cys14-Cys38 and Cys30-Cys51). The long, basic lysine 15 side chain on the exposed loop binds very tightly in the specificity pocket at the active site of trypsin and inhibits its enzymatic action. BPTI is synthesized as a longer precursor sequence, which folds up and is then cleaved into the mature sequence given above.
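The residue counts quoted above can be verified directly from the published sequence; the following is a minimal consistency check in Python (a sketch, not part of the source):

```python
# Mature BPTI sequence as given above (58 residues, one-letter codes).
SEQ = "RPDFCLEPPYTGPCKARIIRYFYNAKAGLCQTFVYGGCRAKRNNFKSAEDCMRTCGGA"
assert len(SEQ) == 58  # chain length stated in the text

basic = sum(SEQ.count(aa) for aa in "KR")    # lysine + arginine
acidic = sum(SEQ.count(aa) for aa in "DE")   # aspartate + glutamate
cysteines = [i + 1 for i, aa in enumerate(SEQ) if aa == "C"]  # 1-based positions

print(basic, acidic)  # 10 4 -> net positive charge, hence "basic"
print(cysteines)      # [5, 14, 30, 38, 51, 55] -> the three disulfide pairs
```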
BPTI is the classic member of the protein family of Kunitz-type serine protease inhibitors. Its physiological functions include the protective inhibition of the major digestive enzyme trypsin when small amounts are produced, by cleavage of the trypsinogen precursor during storage in the pancreas.
Mechanism of drug action
Aprotinin is a competitive inhibitor of several serine proteases, specifically trypsin, chymotrypsin and plasmin at a concentration of about 125,000 IU/ml, and kallikrein at 300,000 IU/ml. Its action on kallikrein leads to the inhibition of the formation of factor XIIa. As a result, both the intrinsic pathway of coagulation and fibrinolysis are inhibited. Its action on plasmin independently slows fibrinolysis.
Drug efficacy
In cardiac surgery with a high risk of significant blood loss, aprotinin significantly reduced bleeding, mortality and hospital stay. Beneficial effects were also reported in high-risk orthopedic surgery. In liver transplantation, initial reports of benefit were overshadowed by concerns about toxicity.
In a meta-analysis performed in 2004, transfusion requirements decreased by 39% in coronary artery bypass graft (CABG) surgery. In orthopedic surgery, a decrease of blood transfusions was likewise confirmed.
Drug safety
There have been concerns about the safety of aprotinin. Anaphylaxis (a severe allergic reaction) occurs at a rate of 1:200 in first-time use, but serology (measuring antibodies against aprotinin in the blood) is not carried out in practice to predict anaphylaxis risk because the correct interpretation of these tests is difficult.
Thrombosis, presumably from overactive inhibition of the fibrinolytic system, may occur at a higher rate, but until 2006 there was limited evidence for this association. Similarly, while biochemical measures of renal function were known to occasionally deteriorate, there was no evidence that this greatly influenced outcomes. A study performed in cardiac surgery patients reported in 2006 showed that there was indeed a risk of acute renal failure, myocardial infarction and heart failure, as well as stroke and encephalopathy. The study authors recommend older antifibrinolytics (such as tranexamic acid) in which these risks were not documented. The same group updated their data in 2007 and demonstrated similar findings.
In September 2006, Bayer A.G. was faulted by the FDA for not revealing during testimony the existence of a commissioned retrospective study of 67,000 patients, 30,000 of whom received aprotinin and the rest other anti-fibrinolytics. The study concluded aprotinin carried greater risks. The FDA was alerted to the study by one of the researchers involved. Although the FDA issued a statement of concern they did not change their recommendation that the drug may benefit certain subpopulations of patients. In a Public Health Advisory Update dated October 3, 2006, the FDA recommended that "physicians consider limiting Trasylol use to those situations in which the clinical benefit of reduced blood loss is necessary to medical management and outweighs the potential risks" and carefully monitor patients.
On October 25, 2007, the FDA issued a statement regarding the "Blood conservation using antifibrinolytics" (BART) randomized trial in a cardiac surgery population. The preliminary findings suggested that, compared to other antifibrinolytic drugs (epsilon-aminocaproic acid and tranexamic acid), aprotinin may increase the risk of death. On October 29, 2006, the Food and Drug Administration had issued a warning that aprotinin may have serious kidney and cardiovascular toxicity. The producer, Bayer, reported to the FDA that additional observation studies showed that it may increase the chance for death, serious kidney damage, congestive heart failure and strokes. The FDA warned clinicians to consider limiting use to those situations where the clinical benefit of reduced blood loss is essential to medical management and outweighs the potential risks. On November 5, 2007, Bayer announced that it was withdrawing aprotinin because of a Canadian study that showed it increased the risk of death when used to prevent bleeding during heart surgery.
Two studies published in early 2008, both comparing aprotinin with aminocaproic acid, found that mortality was increased by 32 and 64%, respectively. One study found an increased risk in need for dialysis and revascularisation.
No cases of bovine spongiform encephalopathy transmission by aprotinin have been reported, although the drug was withdrawn in Italy due to fears of this.
In vitro use
Small amounts of aprotinin can be added to tubes of drawn blood to enable laboratory measurement of certain rapidly degraded proteins such as glucagon.
In cell biology aprotinin is used as an enzyme inhibitor to prevent protein degradation during lysis or homogenization of cells and tissues.
Aprotinin can be labelled with fluorescein isothiocyanate. The conjugate retains its antiproteolytic and carbohydrate-binding properties and has been used as a fluorescent histochemical reagent for staining glycoconjugates (mucosubstances) that are rich in uronic or sialic acids.
History
Initially named "kallikrein inactivator", aprotinin was first isolated from cow parotid glands in 1930, and independently as a trypsin inhibitor from bovine pancreas in 1936. It was purified from bovine lung in 1964. As it inhibits pancreatic enzymes, it was initially used in the treatment of acute pancreatitis, in which destruction of the gland by its own enzymes is thought to be part of the pathogenesis. Its use in major surgery commenced in the 1960s.
BPTI is one of the most thoroughly studied proteins in terms of structural biology, experimental and computational dynamics, mutagenesis, and folding pathway. It was one of the earliest protein crystal structures solved, in 1970 in the laboratory of Robert Huber, and its substrate-like interaction mode was deciphered in the context of the bovine trypsin complex in 1974. It later also became famous as the first protein to have its structure determined by NMR spectroscopy, in the laboratory of Kurt Wüthrich at the ETH in Zurich in the early 1980s.
Because it is a small, stable protein whose structure had been determined at high resolution by 1975, it was the first macromolecule of scientific interest to be simulated using molecular dynamics computation, in 1977 by J. Andrew McCammon and Bruce Gelin, in the Karplus group at Harvard. That study confirmed the then-surprising fact found in the NMR work that even well-packed aromatic sidechains in the interior of a stable protein can flip over rather rapidly (microsecond to millisecond time scale). Rate constants were determined by NMR for the hydrogen exchange of individual peptide NH groups along the chain, ranging from too fast to measure on the most exposed surface to many months for the most buried hydrogen-bonded groups in the center of the β sheet, and those values also correlate fairly well with degree of motion seen in the dynamics simulations.
BPTI was important in the development of knowledge about the process of protein folding, the self-assembly of a polypeptide chain into a specific arrangement in 3D. The problem of achieving the correct pairings among the 6 Cys side chains was shown to be especially difficult for the two buried, close-together disulfides near the BPTI chain termini, requiring a non-native intermediate for folding the mature sequence in vitro (it was later discovered that the precursor sequence folds more easily in vivo). BPTI was the cover image on a protein folding compendium volume by Thomas Creighton in 1992.
Current findings
One scientific study in rats reported that treatment with aprotinin prevents disruption of the blood–brain barrier during C. neoformans infection. Another study in cell cultures suggests that the drug inhibits SARS-CoV-2 replication.
References
External links
The MEROPS online database for peptidases and their inhibitors: I02.001
Antifibrinolytics
Proteins | Aprotinin | Chemistry | 2,338 |
66,986,793 | https://en.wikipedia.org/wiki/Above%3A%20Space%20Development%20Corporation | Above: Space Development Corporation (formerly Orbital Assembly Corporation) is an American aerospace company that has announced several widely publicized plans to build various space stations. No funding for the projects has been announced, and construction of the stations has not started.
The Voyager Space Station or Voyager Station (previously the Von Braun Station) is a proposed rotating wheel space station, planned to start construction in 2026. The space station aims to be the first commercial space hotel.
It is proposed that the SpaceX Starship could be used to shuttle space tourists to the Voyager Station, which would accommodate 280 guests and 112 crew members. The cost of a trip to the station has not been officially published, but some estimates are that it would be approximately US$5 million and would require the passengers to undergo safety and physical training before boarding the shuttle for a day trip to the space station. The cost of the space station has been estimated to be in the "tens of billions". Voyager Station would have partial artificial gravity from its rotation to maintain lunar gravity—approximately one-sixth of Earth's gravity.
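For a sense of the rotation such a design implies, rim gravity in a spinning station follows a = ω²r. The short Python sketch below is illustrative only; the rim radius is a hypothetical figure, since no official dimensions are cited here:

```python
import math

g_moon = 1.62    # m/s^2, roughly one-sixth of Earth's 9.81 m/s^2
radius = 100.0   # m, assumed rim radius (not from the article)

omega = math.sqrt(g_moon / radius)       # required angular speed, rad/s
rpm = omega * 60 / (2 * math.pi)
print(f"{omega:.3f} rad/s, about {rpm:.2f} rpm")  # ~0.127 rad/s, ~1.22 rpm
```

A larger radius would allow a slower spin for the same rim gravity.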
Above Space has also announced a smaller Pioneer Station that can house only 28 people but could be operational earlier.
See also
Gateway Spaceport
List of space stations
References
External links
Above: Space Development Company
Crewed spacecraft
Proposed space stations
2020s in science
Proposed hotels | Above: Space Development Corporation | Astronomy | 265 |
439,227 | https://en.wikipedia.org/wiki/Folk%20devil | Folk devil is a person or group of people who are portrayed in folklore or the media as outsiders and deviant, and who are blamed for crimes or other sorts of social problems.
The pursuit of folk devils frequently intensifies into a mass movement that is called a moral panic. When a moral panic is in full swing, the folk devils are the subject of loosely organized but pervasive campaigns of hostility through gossip and the spreading of urban legends. The mass media sometimes get in on the act or attempt to create new folk devils in an effort to promote controversy. Sometimes the campaign against the folk devil influences a nation's politics and legislation.
Concept
The concept of the folk devil was introduced by sociologist Stanley Cohen in 1972, in his study Folk Devils and Moral Panics, which analysed media controversies concerning Mods and Rockers in the United Kingdom of the 1960s.
Cohen's research was based on the media storm over a violent clash between two youth subcultures, the mods and the rockers, on a bank holiday at a beach in England in 1964. Though the incident resulted only in some property damage, without serious physical injury to any of the individuals involved, several newspapers published sensationalist articles about the event.
Cohen examined articles written about the topic and noted a pattern of distorted facts and misrepresentation, as well as a distinct, simplistic depiction of the respective images of both groups involved in the disturbance. He articulated three stages in the media's reporting on folk devils:
Symbolisation: the folk devil is portrayed in one singular narrative, their appearance and overall identity oversimplified to be easily recognizable.
Exaggeration: the facts of the controversy surrounding the folk devil are distorted, or fabricated altogether, fueling the moral crusade.
Prediction: further immoral actions on the part of the folk devil are anticipated.
In the case of the mods and rockers, increased police presence the following year on the bank holiday led to another occurrence of violence. Cohen noted that the depiction of mods and rockers as violent, unruly troublemakers actually led in itself to a rise in deviant behaviour by the subcultures.
Cases
The basic pattern of agitations against folk devils can be seen in the history of witchhunts and similar manias of persecution; the histories of predominantly Catholic and Protestant European countries present examples of adherents of the rival Western Christian faith as folk devils; minorities and immigrants have often been seen as folk devils; in the long history of anti-Semitism, which frequently targets Jews with allegations of dark, murderous practices, such as blood libel; or the Roman persecution of Christians that blamed the military reverses suffered by the Roman Empire on the Christians' abandonment of paganism.
In modern times, political and religious leaders in many nations have sought to present atheists and secularists as deviant outsiders who threaten the social and moral order. The identification of folk devils may reflect the efforts of powerful institutions to displace social anxieties. Some Christian groups alleged that there were fifty million Americans who engaged in some form of devil worship within their lifetimes.
Another example of religious and ethnic discrimination associated with Cohen's folk devil theory would be Islamophobia, the discrimination of Muslims and those perceived as being Middle Eastern in origin. Post-9/11 reactions by Western countries stereotyped Muslims as violent, hateful, and of possessing fanatical extremist ideology. The group was depicted as posing a threat to social peace and safety in the Western world, and was subject to much hostility politically, from the media and from society.
Certain politicians, pundits, and media outlets are reportedly attempting to trigger that fear response by portraying transgender individuals as society’s folk devils, crafting a narrative that paints them as sexual deviants and labeling them as “groomers.” Even as such accusations are debunked or explained, multiple states have introduced or implemented anti-LGBT legislation as a response to the panic. The proposed laws include bans on gender-affirming care, limits on the participation of transgender athletes in sports, requirements for transgender individuals to use public bathrooms based on their assigned sex at birth and restrictions on public drag performances.
Columbine
In a 2014 study, Cohen's theory of the moral panic was applied to the media reaction to the Columbine massacre.
On April 20, 1999, Eric Harris and Dylan Klebold, two students from Columbine High School in Columbine, Colorado, went on a shooting spree which resulted in the deaths of 15 people. News reports in the weeks following the tragedy labelled the shooters as being “obsessed” with goth subculture, and suggested a link between Harris and Klebold's alleged identification with gothic subculture and their acts of violence.
In their attempt to make sense of the Columbine shootings, journalists and other media commentators linked goths to terrorism, Charles and Marilyn Manson, self-mutilation, hostage-taking, gang culture, the Waco cult, the Oklahoma City bombing, Satanism, mass murder, ethnic cleansing in Kosovo, suicide, the Internet, video games, skinhead music, white extremism and Adolf Hitler.
The ABC news program 20/20 aired a special entitled “The Goth Phenomenon” in which it reinforced claims that the shooters were heavily submerged in goth culture, and suggested that individuals of gothic subculture were to blame for homicidal activity in the past.
The hostility and hysteria over the perceived 'evil' goth culture amplified in the years following the shooting. Goths were stereotyped in the media as perpetrators or supporters of violence clad in black trench coats. Several high schools across the United States banned black trench coats and other apparel perceived as being linked to goth culture. Some police departments in the United States labelled gothic subculture as being “gang-based”, and as something that should be subjected to “increased police surveillance”. From the time of the Columbine shooting until 2003, there were reports of individuals wearing what was seen as gothic dress being interrogated, ticketed and arrested. In 2002, U.S. Representative Sam Graves secured a US$273,000 grant for Blue Springs, Missouri, to combat the “new gothic threat”.
The backlash against goth subculture after the Columbine shooting draws many parallels to Stanley Cohen's research on the mods and rockers, two other youth subcultures cast as folk devils by society. In both instances the groups were portrayed in one distinct, dumbed-down image, ostracized, stripped of any redeeming qualities, and blamed for wrongdoings in society.
See also
Fear mongering
Labeling theory
Moral panic
Scapegoating
References
Archetypes
Deviance (sociology)
Sociological terminology
Folklore characters
Persecution
Stereotypes
Urban legends
Villains | Folk devil | Biology | 1,385 |
69,050,900 | https://en.wikipedia.org/wiki/Estradiol%20dibenzoate | Estradiol dibenzoate (EDB), also known as estradiol 3,17β-dibenzoate, is an estrogen ester which was developed in the 1930s and was never marketed. It is the C3 and C17β benzoate diester of estradiol. Estradiol dibenzoate has a longer duration of action than estradiol benzoate (estradiol 3-benzoate) by depot injection.
See also
List of estrogen esters § Estradiol esters
References
Abandoned drugs
Benzoate esters
Estradiol esters
Synthetic estrogens | Estradiol dibenzoate | Chemistry | 131 |
38,530,507 | https://en.wikipedia.org/wiki/Pittsburgh%20Glass%20Center | The Pittsburgh Glass Center is a gallery, glass studio, and public-access school dedicated to teaching, creating and promoting studio glass art. It is located on Penn Avenue in the Friendship neighborhood of Pittsburgh. It has featured works by Paul Joseph Stankard and classes taught by Dante Marioni, Davide Salvadore, and Cesare Toffolo.
The origins of the Pittsburgh Glass Center date to 1991, when David Stephens, then visual-arts officer of the Pennsylvania Council on the Arts, approached glass artists Ron Desmett and Kathleen Mulcahy, then a professor at Carnegie Mellon University, about the idea of a center for studio glass. It was originally to have been the Elizabeth Glass Center in Elizabeth, Pennsylvania. However, by 1999, the plans had changed and the center was re-oriented to Pittsburgh. It was officially opened in 2001.
The current facility in Friendship is LEED-certified. Its development has aided the growth of Garfield, especially with the adjacent Glass Lofts residential development.
In fall 2010, the Pittsburgh Glass Center entered into talks with Pittsburgh Filmmakers/Pittsburgh Center for the Arts. By May 2011, the talks had failed, with the Pittsburgh Glass Center withdrawing from negotiations.
In 2012, the Glass Center purchased residential housing adjacent to its main gallery space to be used as student and artist-in-residence housing.
By 2012, the center had a $1 million budget, with 10 full-time employees.
References
Museums in Pittsburgh
Art schools in Pennsylvania
Educational institutions established in 2001
Glassmaking schools
Glass museums and galleries
Education in Pittsburgh
Art museums and galleries in Pennsylvania
Glass museums and galleries in the United States
Museums established in 2001
2001 establishments in Pennsylvania | Pittsburgh Glass Center | Materials_science,Engineering | 333 |
311,034 | https://en.wikipedia.org/wiki/Arthur%20Cayley | Arthur Cayley (; 16 August 1821 – 26 January 1895) was a British mathematician who worked mostly on algebra. He helped found the modern British school of pure mathematics, and was a professor at Trinity College, Cambridge for 35 years.
He postulated what is now known as the Cayley–Hamilton theorem—that every square matrix is a root of its own characteristic polynomial, and verified it for matrices of order 2 and 3. He was the first to define the concept of an abstract group, a set with a binary operation satisfying certain laws, as opposed to Évariste Galois' concept of permutation groups. In group theory, Cayley tables, Cayley graphs, and Cayley's theorem are named in his honour, as well as Cayley's formula in combinatorics.
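For a concrete numerical illustration of the theorem (a minimal sketch, not from Cayley's own papers; the matrix A below is an arbitrary example), a 2×2 matrix satisfies its characteristic polynomial p(t) = t² − tr(A)t + det(A):

```python
import numpy as np

# Cayley–Hamilton for a 2x2 matrix: p(A) = A^2 - tr(A)*A + det(A)*I
# should be the zero matrix, up to floating-point rounding.
A = np.array([[2.0, 1.0],
              [3.0, 4.0]])
p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(p_of_A)  # approximately [[0. 0.], [0. 0.]]
```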
Early life
Arthur Cayley was born in Richmond, London, England, on 16 August 1821. His father, Henry Cayley, was a distant cousin of George Cayley, the aeronautics engineer innovator, and descended from an ancient Yorkshire family. He settled in Saint Petersburg, Russia, as a merchant. His mother was Maria Antonia Doughty, daughter of William Doughty. According to some writers she was Russian, but her father's name indicates an English origin. His brother was the linguist Charles Bagot Cayley. Arthur spent his first eight years in Saint Petersburg. In 1829 his parents were settled permanently at Blackheath, London, where Arthur attended a private school.
At age 14, he was sent to King's College School. The young Cayley enjoyed complex maths problems, and the school's master observed indications of his mathematical genius. He advised the father to educate his son not for his own business, as he had intended, but at the University of Cambridge.
Education
At the age of 17 Cayley began residence at Trinity College, Cambridge, where he excelled in Greek, French, German, and Italian, as well as mathematics. The cause of the Analytical Society had now triumphed, and the Cambridge Mathematical Journal had been instituted by Gregory and Robert Leslie Ellis. To this journal, at the age of twenty, Cayley contributed three papers, on subjects that had been suggested by reading the Mécanique analytique of Joseph Louis Lagrange and some of the works of Laplace.
Cayley's tutor at Cambridge was George Peacock and his private coach was William Hopkins. He finished his undergraduate course by winning the place of Senior Wrangler, and the first Smith's prize. His next step was to take the M.A. degree, and win a Fellowship by competitive examination. He continued to reside at Cambridge University for four years; during which time he took some pupils, but his main work was the preparation of 28 memoirs to the Mathematical Journal.
Law career
Because of the limited tenure of his fellowship it was necessary to choose a profession; like De Morgan, Cayley chose law, and was admitted to Lincoln's Inn, London on 20 April 1846 at the age of 24. He made a specialty of conveyancing. It was while he was a pupil at the bar examination that he went to Dublin to hear William Rowan Hamilton's lectures on quaternions.
His friend J. J. Sylvester, his senior by five years at Cambridge, was then an actuary, resident in London; they used to walk together round the courts of Lincoln's Inn, discussing the theory of invariants and covariants. During these fourteen years, Cayley produced between two and three hundred papers.
Professorship
Around 1860, Cambridge University's Lucasian Professorship of Mathematics (Newton's chair) was supplemented by the new Sadleirian professorship, using funds bequeathed by Lady Sadleir, with the 42-year-old Cayley as its first holder. His duties were "to explain and teach the principles of pure mathematics and to apply himself to the advancement of that science." He gave up a lucrative legal practice for a modest salary, but never regretted the exchange, since it allowed him to devote his energies to the pursuit that he liked best. He at once married and settled down in Cambridge, and (unlike Hamilton) enjoyed a home life of great happiness. Sylvester, his friend from his bachelor days, once expressed his envy of Cayley's peaceful family life, whereas the unmarried Sylvester had to fight the world all his days.
At first the Sadleirian professor was paid to lecture over one of the terms of the academic year, but the university financial reform of 1886 freed funds to extend his lectures to two terms. For many years his courses were attended only by a few students who had finished their examination preparation, but after the reform the attendance numbered about fifteen. He generally lectured on his current research topic.
As for his duty to the advancement of mathematical science, he published a long and fruitful series of memoirs ranging over all of pure mathematics. He also became the standing referee on the merits of mathematical papers to many societies both at home and abroad.
In 1872, he was made an honorary fellow of Trinity College, and three years later an ordinary fellow, a paid position. About this time his friends subscribed for a presentation portrait. Maxwell wrote an address praising Cayley's principal works, including his Chapters on the Analytical Geometry of n dimensions; On the theory of Determinants; Memoir on the theory of Matrices; Memoirs on skew surfaces, otherwise Scrolls; and On the delineation of a Cubic Scroll.
In addition to his work on algebra, Cayley made fundamental contributions to algebraic geometry. Cayley and Salmon discovered the 27 lines on a cubic surface. Cayley constructed the Chow variety of all curves in projective 3-space. He founded the algebro-geometric theory of ruled surfaces. His contributions to combinatorics include counting the n^(n−2) trees on n labeled vertices by the pioneering use of generating functions.
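Cayley's formula can be checked by brute force for small n (a sketch, not from the source: it enumerates every set of n − 1 edges on n labeled vertices and counts the acyclic ones, which for n − 1 edges are exactly the labeled trees):

```python
from itertools import combinations

def is_tree(n, edges):
    # n-1 edges on n vertices form a tree iff they contain no cycle;
    # detect cycles with a simple union-find.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # this edge would close a cycle
        parent[ru] = rv
    return True

def count_labeled_trees(n):
    all_edges = list(combinations(range(n), 2))
    return sum(is_tree(n, s) for s in combinations(all_edges, n - 1))

for n in range(2, 6):
    print(n, count_labeled_trees(n), n ** (n - 2))  # the two counts agree
```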
In 1876, he published a Treatise on Elliptic Functions. He took great interest in the movement for the university education of women. At Cambridge the first women's colleges were Girton and Newnham. In the early days of Girton College he gave direct help in teaching, and for some years he was chairman of the council of Newnham College, in the progress of which he took the keenest interest to the last.
In 1881, he received from the Johns Hopkins University, Baltimore, where Sylvester was then professor of mathematics, an invitation to deliver a course of lectures. He accepted the invitation, and lectured at Baltimore during the first five months of 1882 on the subject of the Abelian and Theta Functions.
In 1893, Cayley became a foreign member of the Royal Netherlands Academy of Arts and Sciences.
British Association presidency
In 1883, Cayley was President of the British Association for the Advancement of Science. The meeting was held at Southport, in the north of England. As the President's address is one of the great popular events of the meeting, and brings out an audience of general culture, it is usually made as little technical as possible. Cayley, however, took for his subject the Progress of Pure Mathematics.
The Collected Papers
In 1889, the Cambridge University Press began the publication of his collected papers, which he appreciated very much. He edited seven of the quarto volumes himself, though suffering from a painful internal malady. He died 26 January 1895 at age 73. His funeral at Trinity Chapel was attended by the leading scientists of Britain, with official representatives from as far as Russia and America.
The remainder of his papers were edited by Andrew Forsyth, his successor as Sadleirian professor, in total thirteen quarto volumes and 967 papers. His work continues in frequent use, cited in more than 200 mathematical papers in the 21st century alone.
Cayley retained to the last his fondness for novel-reading and for travelling. He also took special pleasure in paintings and architecture, and he practiced water-colour painting, which he found useful sometimes in making mathematical diagrams.
Legacy
Cayley is buried in the Mill Road cemetery, Cambridge.
An 1874 portrait of Cayley by Lowes Cato Dickinson and an 1884 portrait by William Longmaid are in the collection of Trinity College, Cambridge.
A number of mathematical terms are named after him:
Cayley's theorem
Cayley–Hamilton theorem in linear algebra
Cayley–Bacharach theorem
Grassmann–Cayley algebra
Cayley–Menger determinant
Cayley diagrams – used for finding cognate linkages in mechanical engineering
Cayley–Dickson construction
Cayley algebra (Octonion)
Cayley graph
Cayley numbers
Cayley's sextic
Cayley table
Cayley–Purser algorithm
Cayley's formula
Cayley–Klein metric
Cayley–Klein model of hyperbolic geometry
Cayley's Ω process
Cayley surface
Cayley transform
Cayley's nodal cubic surface
Cayley's ruled cubic surface
The crater Cayley on the Moon (and consequently the Cayley Formation, a geological unit named after the crater)
Cayley's mousetrap — a card game
Cayleyan
Chasles–Cayley–Brill formula
Hyperdeterminant
Quippian
Tetrahedroid
Bibliography
See also
List of things named after Arthur Cayley
References
Sources
(complete text at Project Gutenberg)
External links
Arthur Cayley Letters to Robert Harley, 1859–1863. Available online through Lehigh University's I Remain: A Digital Archive of Letters, Manuscripts, and Ephemera.
This article incorporates text from the 1916 Lectures on Ten British Mathematicians of the Nineteenth Century by Alexander Macfarlane, which is in the public domain.
1821 births
1895 deaths
19th-century English mathematicians
Group theorists
Linear algebraists
Algebraic geometers
Graph theorists
People educated at King's College School, London
Newnham College, Cambridge
Alumni of Trinity College, Cambridge
Fellows of Trinity College, Cambridge
Fellows of the Royal Society
Fellows of the American Academy of Arts and Sciences
Foreign associates of the National Academy of Sciences
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the Prussian Academy of Sciences
Members of the Hungarian Academy of Sciences
Presidents of the British Science Association
Presidents of the Royal Astronomical Society
Recipients of the Copley Medal
Royal Medal winners
De Morgan Medallists
Magic squares
Senior Wranglers
Sadleirian Professors of Pure Mathematics
Presidents of the Cambridge Philosophical Society
Historical treatment of octonions | Arthur Cayley | Mathematics | 2,131 |
61,594,600 | https://en.wikipedia.org/wiki/Cervid%20alphaherpesvirus%201 | Cervid alphaherpesvirus 1 (CvHV-1) is a species of virus in the genus Varicellovirus, subfamily Alphaherpesvirinae, family Herpesviridae, and order Herpesvirales.
References
Alphaherpesvirinae | Cervid alphaherpesvirus 1 | Biology | 56 |
75,612,982 | https://en.wikipedia.org/wiki/Spatial%20Planning%20Act%202023 | The Spatial Planning Act 2023 (SPA), now repealed, was one of three laws introduced by the Sixth Labour Government in order to replace New Zealand's Resource Management Act 1991 (RMA). Its purpose was to provide for regional spatial strategies that assisted the purpose of the Natural and Built Environment Act 2023 (NBA) and promote integration in the performance of functions under the NBA, the Land Transport Management Act 2003, the Local Government Act 2002, and the Water Services Entities Act 2022.
The Bill passed its third reading on 15 August 2023, and received royal assent on 23 August 2023. On 23 December 2023, the SPA and NBA were both repealed by the National-led coalition government.
Key provisions
The Spatial Planning Act 2023 requires all regions to have a regional spatial strategy that must align with the geographical boundaries of the region. The Chatham Islands' regional planning committee and offshore islands administered by the Minister of Conservation were excluded from this requirement.
The Spatial Planning Act also outlined the scope, contents, preparation and implementation of the regional spatial strategies including matters of national and regional importance. The Act also entrenched Te Ture Whaimana as the primary direction-setting document for the Waikato and Waipā Rivers, along with activities within their catchments affecting the rivers.
The Spatial Planning Act also required regional spatial strategies to take into account customary marine title areas and identified Māori land. Regional planning committees were also required to comply with Māori consultation arrangements. The Act also outlined the process for consulting with Māori groups.
The Act also contained provisions for cross-regional planning committees to develop plans affecting two or more regions. The Act also outlined the responsibilities and process for the Minister responsible for managing the RMA process.
The Spatial Planning Act also amended several existing laws including the Conservation Act 1987, Environment Act 1986, the Land Transport Management Act 2003, the Local Government Act 2002 and the Water Services Entities Act 2022.
Legislative history
Introduction
In 2020, a review of the Resource Management Act 1991 (RMA) identified various problems with the existing resource management system, and concluded that it could not cope with modern environmental pressures. In January 2021, the Sixth Labour Government announced that the RMA would be replaced by three acts: the core Natural and Built Environment Act, focusing on land use and environmental regulation; the Strategic Planning Act, focusing on development laws; and the Climate Change Adaptation Act, focusing on managed retreat and climate change funding.
On 14 November 2022, the Labour Government introduced the Spatial Planning Act into the New Zealand House of Representatives alongside the companion Natural and Built Environment Act (NBA) as part of its RMA reform efforts. The opposition National and ACT parties opposed the two replacement bills, claiming that they created more centralisation, bureaucracy and did little to address the problems with the RMA process. The Green Party expressed concerns about the perceived lack of environment protection in the proposed legislation.
First reading
On 22 November 2022, Environment Minister David Parker introduced the Spatial Planning Bill at its first reading. Several Labour and Green MPs including Parker, Rachel Brooking, Tāmati Coffey, Eugenie Sage, Anahila Kanongata'a-Suisuiki, Duncan Webb, Lemauga Lydia Sosene and Angie Warren-Clark argued that the SPA would help simplify the resource consent process for housing, infrastructural development, and spatial planning. By contrast, National and ACT MPs including Scott Simpson, Stuart Smith, Simon Court, Sam Uffindell, and David Bennett expressed concerns about red tape and centralisation, and claimed that the bill would do little to address the housing shortage. The SPA passed its first reading by a margin of 74 (Labour and the Greens) to 45 votes (National, ACT, and Te Pāti Māori), and was referred to the Environment select committee.
Select committee stage
On 27 June 2023, the Environment Committee voted by a majority to progress the SPA, with amendments, to its second reading. The amendments included promoting integration in the functions of the regional spatial strategies (RSS) with the NBA, upholding te Oranga o te Taiao, promoting integration between the RSS and proposed water services entities, clarifying the role of Māori iwi (tribes) and hapū (sub-groups) in the bill, and clarifying the wording around the regional spatial planning process and the transitional process from the RMA framework. The ACT and National parties also published their minority reports. ACT claimed that the SPA would frustrate development by creating more red tape and duplication. National's minority report claimed that the SPA created legal uncertainty, increased bureaucracy, complicated decarbonisation efforts, and undermined property rights.
Second reading
During its second reading on 18 July 2023, Parliament voted by a margin of 71 (Labour, Greens) to 48 (National, ACT, Te Paati Māori, independent Members of Parliament Elizabeth Kerekere and Meka Whaitiri) to endorse the Environment Committee's amendments. The SPA passed its second reading by a margin of 72 (Labour, Greens, Kerekere) to 47 (National, ACT, Te Paati Māori, and Whaitiri). Labour MPs Parker, Brooking, Phil Twyford, Warren-Clark, Arena Williams, Tracey McLellan, and Sosene, and Green MP Sage gave speeches defending the Bill. National MPs Chris Bishop, Simpson, Barbara Kuriger, and Tama Potaka, and ACT MP Court spoke against the Bill.
Third reading
The Bill passed its third reading on 15 August 2023 by a margin of 72 (Labour, Greens, and Kerekere) to 47 (National, ACT, Te Paati Māori, and Whaitiri). Labour MPs Parker, Brooking, Twyford, Warren-Clark, Sarah Pallett, Dan Rosewarne, and Sosene and Green MP Sage spoke in favour of the Bill. National MPs Bishop, Simpson, Kuriger, Potaka, Smith and ACT MP Court opposed the Bill. The Bill received royal assent on 23 August 2023.
Repeal
Following the 2023 New Zealand general election, the National-led coalition government repealed the Spatial Planning Act and the Natural and Built Environment Act on 23 December 2023. The country reverted to the Resource Management Act 1991 while the Government worked on new replacement legislation.
Notes and references
External links
2022 in New Zealand law
2023 in New Zealand law
2022 in the environment
2023 in the environment
Environmental law in New Zealand
Environmental mitigation
Natural resource management
Repealed New Zealand legislation
Urban planning in New Zealand | Spatial Planning Act 2023 | Chemistry,Engineering | 1,325 |
402,567 | https://en.wikipedia.org/wiki/ISO/IEC%205218 | ISO/IEC 5218 Information technology — Codes for the representation of human sexes is an international standard that defines a representation of human sexes through a language-neutral single-digit code. It can be used in information systems such as database applications.
The four codes specified in ISO/IEC 5218 are:
0 = Not known;
1 = Male;
2 = Female;
9 = Not applicable.
The standard specifies that its use may be referred to by the designator "SEX".
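To make the encoding concrete, here is a minimal sketch of how the codes might be carried in an application. The enum and field names are our own illustration; the standard itself fixes only the four digit values and the "SEX" designator.

from enum import IntEnum

class Sex(IntEnum):
    """ISO/IEC 5218 code values (illustrative names, not standardized)."""
    NOT_KNOWN = 0
    MALE = 1
    FEMALE = 2
    NOT_APPLICABLE = 9

# The standard's designator "SEX" used as a field name in a record:
record = {"SEX": Sex.FEMALE}
print(int(record["SEX"]))  # stored and interchanged as the single digit 2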
The standard explicitly states that no significance is to be placed on the encoding of male as 1 and female as 2; the encoding merely reflects existing practice in the countries that initiated this standard. The standard also explains that it "meets the requirements of most applications that need to code human sexes. It does not provide codes for sexes that may be required in specific medical and scientific applications or in applications that need to code sex information other than for human beings." Since its 2022 revision, the standard also states that its scope does not cover human gender identities and therefore does not provide codes for those.
ISO/IEC 5218 was created by ISO's Data Management and Interchange Technical Committee, proposed in November 1976, and updated in June 2022. The standard is currently maintained by the ISO/IEC Joint Technical Committee (ISO/IEC JTC 1) subcommittee on Data management and interchange (ISO/IEC JTC 1/SC 32).
This standard is used in several national identification numbers. For example, the first digit of the French INSEE number and the first digit of the Republic of China National Identification Card (Chinese: 中華民國國民身分證) are based on ISO/IEC 5218 values.
References
2004 introductions
05218
Gender | ISO/IEC 5218 | Biology | 349 |
29,557,324 | https://en.wikipedia.org/wiki/John%20von%20Neumann%20Environmental%20Research%20Institute%20of%20the%20Pacific | The John von Neumann Environmental Research Institute of the Pacific is a non profit environmental and anthropological research institute of executive branch of the government of Colombia ascribed to the Ministry of Environment and Sustainable Development and charged with conducting research and investigations on the Pacific littoral and the biodiversity of the Chocó biogeographic hotspot.
References
Chocó Department
Ministry of Environment and Sustainable Development (Colombia)
Environmental research institutes
Research institutes in Colombia
Biological research institutes
Anthropological research institutes
Organizations established in 1993 | John von Neumann Environmental Research Institute of the Pacific | Environmental_science | 95 |
45,150,205 | https://en.wikipedia.org/wiki/Kepler-59b | Kepler-59b is an exoplanet orbiting the star Kepler-59, located in the constellation Lyra. It was discovered by the Kepler telescope in August 2012. It completes an orbit around its parent star once every 11.9 days. It has a radius that is 1.1 times that of the Earth.
References
Terrestrial planets
Exoplanets discovered by the Kepler space telescope
Exoplanets discovered in 2012
Lyra | Kepler-59b | Astronomy | 89 |
10,663,286 | https://en.wikipedia.org/wiki/Gliese%20436%20b | Gliese 436 b (sometimes called GJ 436 b, formally named Awohali) is a Neptune-sized exoplanet orbiting the red dwarf Gliese 436. It was the first hot Neptune discovered with certainty (in 2007) and was among the smallest-known transiting planets in mass and radius, until the much smaller Kepler exoplanet discoveries began circa 2010.
In December 2013, NASA reported that clouds may have been detected in the atmosphere of GJ 436 b.
Nomenclature
In August 2022, this planet and its host star were included among 20 systems to be named by the third NameExoWorlds project. The approved names, proposed by a team from the United States, were announced in June 2023. Gliese 436 b is named Awohali and its host star is named Noquisi, after the Cherokee words for "eagle" and "star".
Discovery
Awohali was discovered in August 2004 by R. Paul Butler and Geoffrey Marcy of the Carnegie Institution of Washington and the University of California, Berkeley, respectively, using the radial velocity method. Together with 55 Cancri e, it was the first of a new class of planets with a minimum mass (M sin i) similar to Neptune.
The planet was recorded to transit its star by an automatic process at NMSU on January 11, 2005, but this event went unheeded at the time. In 2007, Michael Gillon from Geneva University in Switzerland led a team that observed the transit, grazing the stellar disc relative to Earth. Transit observations led to the determination of its exact mass and radius, both of which are very similar to that of Neptune, making Awohali at that time the smallest known transiting extrasolar planet. The planet is about four thousand kilometers larger in diameter than Uranus and five thousand kilometers larger than Neptune and slightly more massive. Awohali orbits at a distance of four million kilometers or one-fifteenth the average distance of Mercury from the Sun.
Physical characteristics
The planet's surface temperature, estimated from measurements taken as it passes behind the star, is significantly higher than would be expected if the planet were heated only by radiation from its star; prior to this measurement, that radiation-only temperature was estimated at 520 K. Whatever energy tidal effects deliver to the planet, they do not affect its temperature significantly. A greenhouse effect would result in a much greater temperature than the predicted 520–620 K.
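For orientation, the radiation-only expectation can be checked with the standard equilibrium-temperature formula $T_{\mathrm{eq}} = T_\ast \sqrt{R_\ast / (2a)}\,(1 - A)^{1/4}$. The inputs below are rough assumed values for a mid-M dwarf, not figures from the source: taking $T_\ast \approx 3300$ K, $R_\ast \approx 0.42\,R_\odot$, $a \approx 4.3 \times 10^6$ km and an assumed Bond albedo $A \approx 0.3$ gives $T_{\mathrm{eq}} \approx 550$ K, of the order of the 520–620 K range quoted above and well below the measured dayside temperature, which is the discrepancy at issue.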
Its main constituent was initially predicted to be hot "ice" in various exotic high-pressure forms, which would remain solid despite the high temperatures, because of the planet's gravity. The planet could have formed further from its current position, as a gas giant, and migrated inwards with the other gas giants. As it approached its present position, radiation from the star would have blown off the planet's hydrogen layer via coronal mass ejection.
However, when the radius became better known, ice alone was not enough to account for the observed size. An outer layer of hydrogen and helium, accounting for up to ten percent of the mass, was needed on top of the ice to account for the observed planetary radius. This obviates the need for an ice core. Alternatively, the planet may consist of a dense rocky core surrounded by a lesser amount of hydrogen.
Observations of the planet's brightness temperature with the Spitzer Space Telescope suggest a possible thermochemical disequilibrium in the atmosphere of this exoplanet. Results published in Nature suggest that Awohali’s dayside atmosphere is abundant in CO and deficient in methane (CH4) by a factor of ~7,000. This result is unexpected because, based on current models at its temperature, atmospheric carbon should prefer CH4 over CO. In part for this reason, it has also been hypothesized to be a possible helium planet.
In June 2015, scientists reported that the atmosphere of Awohali was evaporating, resulting in a giant cloud around the planet and, due to radiation from the host star, a long trailing tail.
Orbital characteristics
One orbit around the star Noquisi takes only about two days, 15.5 hours. Awohali’s orbit is likely misaligned with Noquisi’s rotation. The eccentricity of Awohali’s orbit is inconsistent with models of planetary system evolution. To have maintained its eccentricity over time, it must be accompanied by another planet.
A study published in Nature found that the orbit of Awohali is nearly perpendicular (inclined by 103.2 degrees) to the stellar equator of Noquisi and suggests that the eccentricity and misalignment of the orbit could have resulted from interactions with a yet undetected companion. The inward migration caused by this interaction could have triggered the atmospheric escape that sustains its giant exosphere.
See also
Janssen
Gliese 581 b
Gliese 876 d
HAT-P-11b
References
Selected media articles
How Do Artists Portray Exoplanets They've Never Seen? 4/9, Scientific American October 2, 2007.
Astronomers Detect Shadow Of Water World In Front Of Nearby Star (from Science Daily).
External links
Leo (constellation)
Transiting exoplanets
Giant planets
Exoplanets discovered in 2004
Exoplanets detected by radial velocity
Hot Neptunes
4
Exoplanets with proper names | Gliese 436 b | Astronomy | 1,097 |
63,988,132 | https://en.wikipedia.org/wiki/Ordered%20topological%20vector%20space | In mathematics, specifically in functional analysis and order theory, an ordered topological vector space, also called an ordered TVS, is a topological vector space (TVS) X that has a partial order ≤ making it into an ordered vector space whose positive cone is a closed subset of X.
Ordered TVSes have important applications in spectral theory.
Normal cone
If C is a cone in a TVS X then C is normal if $\mathcal{U} = [\mathcal{U}]_C$, where $\mathcal{U}$ is the neighborhood filter at the origin, $[\mathcal{U}]_C = \{[U]_C : U \in \mathcal{U}\}$, and $[U]_C = (U + C) \cap (U - C)$ is the C-saturated hull of a subset U of X.
If C is a cone in a TVS X (over the real or complex numbers), then the following are equivalent:
C is a normal cone.
For every filter $\mathcal{F}$ in X, if $\lim \mathcal{F} = 0$ then $\lim [\mathcal{F}]_C = 0$.
There exists a neighborhood base $\mathcal{B}$ in X such that $B \in \mathcal{B}$ implies $[B \cap C]_C \subseteq B$.
and if X is a vector space over the reals then also:
There exists a neighborhood base at the origin consisting of convex, balanced, C-saturated sets.
There exists a generating family $\mathcal{P}$ of semi-norms on X such that $p(x) \leq p(x + y)$ for all $x, y \in C$ and $p \in \mathcal{P}$.
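As a minimal worked example (our illustration, not from the source), consider $X = \mathbb{R}^2$ with the usual topology and the positive quadrant $C = \{(x_1, x_2) : x_1 \geq 0,\ x_2 \geq 0\}$. The single semi-norm $p(x) = \max(|x_1|, |x_2|)$ generates the topology, and for $x, y \in C$ every coordinate satisfies $0 \leq x_i \leq x_i + y_i$, so $p(x) \leq p(x + y)$. By the semi-norm criterion above, C is a normal cone, and order intervals such as $[0, (1, 1)]$ are therefore bounded.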
If the topology on X is locally convex then the closure of a normal cone is a normal cone.
Properties
If C is a normal cone in X and B is a bounded subset of X then $[B]_C$ is bounded; in particular, every order interval $[a, b]$ is bounded.
If X is Hausdorff then every normal cone in X is a proper cone.
Properties
Let X be an ordered vector space over the reals that is finite-dimensional. Then the order of X is Archimedean if and only if the positive cone of X is closed for the unique topology under which X is a Hausdorff TVS.
Let X be an ordered vector space over the reals with positive cone C. Then the following are equivalent:
the order of X is regular.
C is sequentially closed for some Hausdorff locally convex TVS topology on X and $X^{+}$ distinguishes points in X
the order of X is Archimedean and C is normal for some Hausdorff locally convex TVS topology on X.
See also
References
Functional analysis
Order theory
Topological vector spaces | Ordered topological vector space | Mathematics | 419 |
30,389,209 | https://en.wikipedia.org/wiki/Extinction%20debt | In ecology, extinction debt is the future extinction of species due to events in the past. The phrases dead clade walking and survival without recovery express the same idea.
Extinction debt occurs because of time delays between impacts on a species, such as destruction of habitat, and the species' ultimate disappearance. For instance, long-lived trees may survive for many years even after reproduction of new trees has become impossible, and thus they may be committed to extinction. Technically, extinction debt generally refers to the number of species in an area likely to become extinct, rather than the prospects of any one species, but colloquially it refers to any occurrence of delayed extinction.
Extinction debt may be local or global, but most examples are local as these are easier to observe and model. It is most likely to be found in long-lived species and species with very specific habitat requirements (specialists). Extinction debt has important implications for conservation, as it implies that species may become extinct due to past habitat destruction, even if continued impacts cease, and that current reserves may not be sufficient to maintain the species that occupy them. Interventions such as habitat restoration may reverse extinction debt.
Immigration credit is the corollary to extinction debt. It refers to the number of species likely to migrate to an area after an event such as the restoration of an ecosystem.
Terminology
The term extinction debt was first used in 1994 in a paper by David Tilman, Robert May, Clarence Lehman and Martin Nowak, although Jared Diamond used the term "relaxation time" to describe a similar phenomenon in 1972.
Extinction debt is also known by the terms dead clade walking and survival without recovery when referring to the species affected. The phrase "dead clade walking" was coined by David Jablonski as early as 2001 as a reference to Dead Man Walking, a film whose title is based on American prison slang for a condemned prisoner's last walk to the execution chamber. "Dead clade walking" has since appeared in other scientists' writings about the aftermaths of mass extinctions.
In discussions of threats to biodiversity, extinction debt is analogous to the "climate commitment" in climate change, which states that inertia will cause the earth to continue to warm for centuries even if no more greenhouse gasses are emitted. Similarly, the current extinction may continue long after human impacts on species halt.
Causes
Extinction debt is caused by many of the same drivers as extinction. The most well-known drivers of extinction debt are habitat fragmentation and habitat destruction. These cause extinction debt by reducing the ability of species to persist via immigration to new habitats. Under equilibrium conditions, a species may become extinct in one habitat patch yet continue to survive because it can disperse to other patches. However, as other patches have been destroyed or rendered inaccessible due to fragmentation, this "insurance" effect is reduced and the species may ultimately become extinct.
Pollution may also cause extinction debt by reducing a species' birth rate or increasing its death rate so that its population slowly declines. Extinction debts may also be caused by invasive species or by climate change.
Extinction debt may also occur due to the loss of mutualist species. In New Zealand, the local extinction of several species of pollinating birds in 1870 has caused a long-term reduction in the reproduction of the shrub species Rhabdothamnus solandri, which requires these birds to produce seeds. However, as the plant is slow-growing and long-lived, its populations persist.
Jablonski recognized at least four patterns in the fossil record following mass extinctions:
(1) survival without recovery, also called “dead clade walking” – a group dwindling to extinction or relegation to precarious, minor ecological niches
(2) continuity with setbacks – patterns disturbed by the extinction event but soon continuing on the previous trajectory
(3) unbroken continuity – large-scale patterns continuing with little disruption
(4) unbridled diversification – an increase in diversity and species richness, as in the mammals following the end-Cretaceous extinction event
Rate of extinction
Jablonski found that the extinction rate of marine invertebrates was significantly higher in the stage (major subdivision of an epoch – typically 2–10 million years' duration) following a mass extinction than in the stages preceding the mass extinction. His analysis focused on marine molluscs since they constitute the most abundant group of fossils and are therefore the least likely to produce sampling errors. Jablonski suggested that two possible explanations deserved further study:
Post-extinction physical environments differed from pre-extinction environments in ways which were disadvantageous to the "dead clades walking".
Ecosystems that developed after recoveries from mass extinctions may have been less favorable for the "dead clades walking".
Time scale
The time to "payoff" of extinction debt can be very long. Islands that lost habitat at the end of the last ice age 10,000 years ago still appear to be losing species as a result. It has been shown that some bryozoans, a type of microscopic marine organism, became extinct due to the volcanic rise of the Isthmus of Panama. This event cut off the flow of nutrients from the Pacific Ocean to the Caribbean 3–4.5 million years ago. While bryozoan populations dropped severely at this time, extinction of these species took another 1–2 million years.
Extinction debts incurred due to human actions have shorter timescales. Local extinction of birds from rainforest fragmentation occurs over years or decades, while plants in fragmented grasslands show debts lasting 50–100 years. Tree species in fragmented temperate forests have debts lasting 200 years or more.
Theoretical development
Origins in metapopulation models
Tilman et al. demonstrated that extinction debt could occur using a mathematical ecosystem model of species metapopulations. Metapopulations are multiple populations of a species that live in separate habitat patches or islands but interact via immigration between the patches. In this model, species persist via a balance between random local extinctions in patches and colonization of new patches. Tilman et al. used this model to predict that species would persist long after they no longer had sufficient habitat to support them. When used to estimate extinction debts of tropical tree species, the model predicted debts lasting 50–400 years.
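The flavor of this result can be reproduced with a single-species Levins-type sketch (our simplification of the metapopulation idea, not Tilman et al.'s multispecies model; all parameter values are made up). A fraction p of patches is occupied, a fraction D has been destroyed, and dp/dt = c·p·(1 − D − p) − m·p; once D exceeds 1 − m/c the only equilibrium is p = 0, yet occupancy decays slowly, and that lingering decline is the debt.

# Levins-type sketch of extinction debt. Illustrative only: the parameter
# values are assumptions, and this is a one-species simplification of the
# multispecies model of Tilman et al. (1994).
def occupancy_trajectory(c=0.5, m=0.052, D=0.9, p0=0.08, dt=1.0, years=300):
    """Fraction of occupied patches over time (forward-Euler integration)."""
    p, traj = p0, []
    for _ in range(int(years / dt)):
        p += dt * (c * p * (1.0 - D - p) - m * p)
        p = max(p, 0.0)
        traj.append(p)
    return traj

traj = occupancy_trajectory()
# D = 0.9 puts the species past its threshold (1 - m/c = 0.896 < D), so it is
# committed to extinction, but occupancy is still nonzero centuries later:
print(f"after 50 y: {traj[49]:.3f}   after 300 y: {traj[-1]:.3f}")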
One of the assumptions underlying the original extinction debt model was a trade-off between species' competitive ability and colonization ability. That is, a species that competes well against other species, and is more likely to become dominant in an area, is less likely to colonize new habitats due to evolutionary trade-offs. One of the implications of this assumption is that better competitors, which may even be more common than other species, are more likely to become extinct than rarer, less competitive, better dispersing species. This has been one of the more controversial components of the model, as there is little evidence for this trade-off in many ecosystems, and in many empirical studies dominant competitors were least likely species to become extinct. A later modification of the model showed that these trade-off assumptions may be relaxed, but need to exist partially, in order for the theory to work.
Development in other models
Further theoretical work has shown that extinction debt can occur under many different circumstances, driven by different mechanisms and under different model assumptions. The original model predicted extinction debt as a result of habitat destruction in a system of small, isolated habitats such as islands. Later models showed that extinction debt could occur in systems where habitat destruction occurs in small areas within a large area of habitat, as in slash-and-burn agriculture in forests, and could also occur due to decreased growth of species from pollutants. Predicted patterns of extinction debt differ between models, though. For instance, habitat destruction resembling slash-and-burn agriculture is thought to affect rare species rather than poor colonizers. Models that incorporate stochasticity, or random fluctuation in populations, show extinction debt occurring over different time scales than classic models.
Most recently, extinction debts have been estimated through the use models derived from neutral theory. Neutral theory has very different assumptions than the metapopulation models described above. It predicts that the abundance and distribution of species can be predicted entirely through random processes, without considering the traits of individual species. As extinction debt arises in models under such different assumptions, it is robust to different kinds of models. Models derived from neutral theory have successfully predicted extinction times for a number of bird species, but perform poorly at both very small and very large spatial scales.
Mathematical models have also shown that extinction debt will last longer if it occurs in response to large habitat impacts (as the system will move farther from equilibrium), and if species are long-lived. Also, species just below their extinction threshold – that is, just below the population level or habitat occupancy required to sustain their population – will have long-term extinction debts. Finally, extinction debts are predicted to last longer in landscapes with a few large patches of habitat, rather than many small ones.
Detection
Extinction debt is difficult to detect and measure. Processes that drive extinction debt are inherently slow and highly variable (noisy), and it is difficult to locate or count the very small populations of near-extinct species. Because of these issues, most measures of extinction debt have a great deal of uncertainty.
Experimental evidence
Due to the logistical and ethical difficulties of inciting extinction debt, there are few studies of extinction debt in controlled experiments. However, experiments with microcosms of insects living on moss habitats demonstrated that extinction debt occurs after habitat destruction. In these experiments, it took 6–12 months for species to die out following the destruction of habitat.
Observational methods
Long-term observation
Extinction debts that reach equilibrium in relatively short time scales (years to decades) can be observed via measuring the change in species numbers in the time following an impact on habitat. For instance, in the Amazon rainforest, researchers have measured the rate at which bird species disappear after forest is cut down. As even short-term extinction debts can take years to decades to reach equilibrium, though, such studies take many years and good data are rare.
Comparing the past and present
Most studies of extinction debt compare species numbers with habitat patterns from the past and habitat patterns in the present. If the present populations of species are more closely related to past habitat patterns than to present ones, extinction debt is a likely explanation. The magnitude of extinction debt (i.e., the number of species likely to become extinct) cannot be estimated by this method.
If one has information on species populations from the past in addition to the present, the magnitude of extinction debt can be estimated. One can use the relationship between species and habitat from the past to predict the number of species expected in the present. The difference between this estimate and the actual number of species is the extinction debt.
This method requires the assumption that in the past species and their habitat were in equilibrium, which is often unknown. Also, a common relationship used to equate habitat and species number is the species-area curve, but as the species-area curve arises from very different mechanisms than those in metapopulation based models, extinction debts measured in this way may not conform with metapopulation models' predictions. The relationship between habitat and species number can also be represented by much more complex models that simulate the behavior of many species independently.
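As a hedged numerical sketch of that calibration (the exponent, areas and species counts below are invented for illustration, not data from any study):

# Estimating extinction debt from a species-area relationship S = c * A**z.
Z = 0.25                                    # assumed species-area exponent

def expected_species(area, c, z=Z):
    """Species-area curve S = c * area**z."""
    return c * area ** z

past_area, present_area = 1000.0, 400.0     # habitat area, e.g. in hectares
past_species = 120                          # richness at the assumed past equilibrium

c = past_species / past_area ** Z           # calibrate c from the past relationship
equilibrium_now = expected_species(present_area, c)

observed_now = 110                          # species still present today
debt = observed_now - equilibrium_now       # species expected to be lost eventually
print(f"new equilibrium: {equilibrium_now:.1f}; extinction debt: {debt:.1f} species")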
Comparing impacted and pristine habitats
If data on past species numbers or habitat are not available, species debt can also be estimated by comparing two different habitats: one which is mostly intact, and another which has had areas cleared and is smaller and more fragmented. One can then measure the relationship of species with the condition of habitat in the intact habitat, and, assuming this represents equilibrium, use it to predict the number of species in the cleared habitat. If this prediction is lower than the actual number of species in the cleared habitat, then the difference represents extinction debt. This method requires many of the same assumptions as methods comparing the past and present.
Examples
Grasslands
Studies of European grasslands show evidence of extinction debt through both comparisons with the past and between present-day systems with different levels of human impacts. The species diversity of grasslands in Sweden appears to be a remnant of more connected landscapes present 50 to 100 years ago. In alvar grasslands in Estonia that have lost area since the 1930s, 17–70% of species are estimated to be committed to extinction. However, studies of similar grasslands in Belgium, where similar impacts have occurred, show no evidence of extinction debt. This may be due to differences in the scale of measurement or the level of specialization of grass species.
Forests
Forests in Flemish Brabant, Belgium, show evidence of extinction debt remaining from deforestation that occurred between 1775 and 1900. Detailed modeling of species behavior, based on similar forests in England that did not experience deforestation, showed that long-lived and slow-growing species were more common than equilibrium models would predict, indicating that their presence was due to lingering extinction debt.
In Sweden, some species of lichens show an extinction debt in fragments of ancient forest. However, species of lichens that are habitat generalists, rather than specialists, do not.
Insects
Extinction debt has been found among species of butterflies living in the grasslands on Saaremaa and Muhu – islands off the western coast of Estonia. Butterfly species distributions on these islands are better explained by the habitat in the past than current habitats.
On the islands of the Azores Archipelago, more than 95% of native forests have been destroyed in the past 600 years. As a result, more than half of arthropods on these islands are believed to be committed to extinction, with many islands likely to lose more than 90% of species.
Vertebrates
Of the extinction expected from past deforestation in the Amazon, 80–90% has yet to occur, based on modeling of species–area relationships. Local extinctions of approximately 6 species are expected in each 2,500 km² region by 2050 due to past deforestation. Birds in the Amazon rainforest continued to become extinct locally for 12 years following logging that broke up contiguous forest into smaller fragments. The extinction rate slowed, however, as forest regrew in the spaces between habitat fragments.
Countries in Africa are estimated to have, on average, a local extinction debt of 30% for forest-dwelling primates. That is, they are expected to have 30% of their forest primate species to become extinct in the future due to loss of forest habitat. The time scale for these extinctions has not been estimated.
Based on historical species–area relationships, Hungary currently has approximately nine more species of raptors than its current nature reserves are thought to be able to support.
Applications to conservation
The existence of extinction debt in many different ecosystems has important implications for conservation. It implies that in the absence of further habitat destruction or other environmental impacts, many species are still likely to become extinct. Protection of existing habitats may not be sufficient to protect species from extinction. However, the long time scales of extinction debt may allow for habitat restoration in order to prevent extinction, as occurred in the slowing of extinction in Amazon forest birds above. In another example, it has been found that grizzly bears in very small reserves in the Rocky Mountains are likely to become extinct, but this finding allows the modification of reserve networks to better support their populations.
The extinction debt concept may require revision of the value of land for species conservation, as the number of species currently present in a habitat may not be a good measure of the habitat's ability to support species (see carrying capacity) in the future. As extinction debt may last longest near extinction thresholds, it may be hardest to detect the threat of extinction for species that conservation could benefit the most.
Economic analyses have shown that including extinction in management decision-making process changes decision outcomes, as the decision to destroy habitat changes conservation value in the future as well as the present. It is estimated that in Costa Rica, ongoing extinction debt may cost between $88 million and $467 million.
In popular culture
An episode of the CBS series Elementary was named "Dead Clade Walking".
See also
References
Biogeography
Ecology
Debt
Landscape ecology | Extinction debt | Biology | 3,228 |