| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
2,483,141 | https://en.wikipedia.org/wiki/Homoleptic%20and%20heteroleptic%20compounds | In inorganic chemistry, a homoleptic chemical compound is a metal compound with all ligands identical. The term uses the "homo-" prefix to indicate that something is the same for all. Any metal species which has more than one type of ligand is heteroleptic.
Some compounds with names that suggest they are homoleptic are in fact heteroleptic, because they carry ligands that are not featured in the name. For instance, dialkylmagnesium complexes, which are present in the equilibrium that exists in an ether solution of a Grignard reagent, have two ether ligands attached to each magnesium centre. Another example is a solution of trimethylaluminium in an ether solvent (such as THF); similar chemistry should be expected for a triaryl or trialkyl borane.
Some ligands, such as DMSO, can bind with two or more different coordination modes. It would still be reasonable to consider a complex that has only one type of ligand, albeit in different coordination modes, to be homoleptic. For example, the complex dichlorotetrakis(dimethyl sulfoxide)ruthenium(II) features DMSO coordinating via both sulfur and oxygen atoms (though this complex is heteroleptic, since chloride ligands are also present).
Homoleptic examples
Chromium carbonyl
Ferrocyanide
Iron pentacarbonyl
Nickel carbonyl
Tetrakis(triphenylphosphine)palladium(0)
Ferrocene
Uranium hexafluoride
Tetraethyl lead
Tetramethyl lead
Tetrabutyl tin
Trimethylaluminium
Dimethylmercury
Diethylzinc
Triethylborane
Chromate
Permanganate
Ferroin
Bis(terpyridine)iron(II)
References
Organometallic chemistry
Inorganic chemistry
Coordination chemistry | Homoleptic and heteroleptic compounds | Chemistry | 389 |
2,576,885 | https://en.wikipedia.org/wiki/Field-replaceable%20unit | A field-replaceable unit (FRU) is a printed circuit board, part, or assembly that can be quickly and easily removed from a computer or other piece of electronic equipment, and replaced by the user or a technician without having to send the entire product or system to a repair facility. FRUs allow a technician lacking in-depth product knowledge to isolate faults and replace faulty components. The granularity of FRUs in a system impacts total cost of ownership and support, including the costs of stocking spare parts, where spares are deployed to meet repair time goals, how diagnostic tools are designed and implemented, levels of training for field personnel, whether end-users can do their own FRU replacement, etc.
Other equipment
FRUs are not strictly confined to computers but are also part of many high-end, lower-volume consumer and commercial products. For example, in military aviation, electronic components of line-replaceable units, typically known as shop-replaceable units (SRUs), are repaired at field-service backshops, usually by a "remove and replace" procedure, with specialized repair performed at a centralized depot or by the original equipment manufacturer (OEM).
History
Many vacuum tube computers had FRUs:
Pluggable units containing one or more vacuum tubes and various passive components
Vacuum tubes themselves are usually FRUs.
Most transistorized and integrated circuit-based computers had FRUs:
Computer modules, circuit boards containing discrete transistors and various passive components. Examples:
IBM SMS cards
DEC System Building Blocks cards
DEC Flip-Chip cards
Circuit boards containing monolithic ICs and/or hybrid ICs, such as IBM SLT cards.
For a short period starting in the late 1960s, some television set manufacturers made solid-state televisions with FRUs instead of a single board attached to the chassis. However, modern televisions put all the electronics on one large board to reduce manufacturing costs.
Trends
As the sophistication and complexity of multi-replaceable-unit electronics in both commercial and consumer industries have increased, many design and manufacturing organizations have expanded the use of the FRU storage device. Storage is no longer limited to identification of the FRU itself, but now also comprises back-up copies of critical system information such as system serial numbers, MAC addresses and even security information. Some systems will fail to function at all unless each FRU in the system is validated at start-up. Today one cannot assume that the FRU storage device is only used to maintain the FRU ID of the part.
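As an illustration of such start-up validation, the Python sketch below models a system that refuses to boot unless every FRU's stored record matches the chassis it sits in. The record layout, field names, and check are invented for demonstration; real FRU storage formats are vendor-specific:

```python
from dataclasses import dataclass

@dataclass
class FruRecord:
    """Hypothetical contents of a FRU storage device (illustrative only)."""
    fru_id: str          # identity of the replaceable part itself
    system_serial: str   # back-up copy of the system-level serial number
    mac_address: str     # back-up copy of a system MAC address

def validate_at_startup(records: list[FruRecord], chassis_serial: str) -> bool:
    """Refuse to start unless every installed FRU carries a record
    matching the chassis it is installed in."""
    return all(r.system_serial == chassis_serial for r in records)

frus = [
    FruRecord("PSU-01", "SYS-1234", "00:1a:2b:3c:4d:5e"),
    FruRecord("FAN-02", "SYS-1234", "00:1a:2b:3c:4d:5f"),
]
print(validate_at_startup(frus, "SYS-1234"))  # True -> system may boot
```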
See also
Shop-replaceable unit
Line-replaceable unit
Notes
Electronic engineering
Maintenance | Field-replaceable unit | Technology,Engineering | 525 |
2,227,469 | https://en.wikipedia.org/wiki/Lake%20retention%20time | Lake retention time (also called the residence time of lake water, or the water age or flushing time) is a calculated quantity expressing the mean time that water (or some dissolved substance) spends in a particular lake. At its simplest, this figure is the result of dividing the lake volume by the flow in or out of the lake. It roughly expresses the amount of time taken for a substance introduced into a lake to flow out of it again. The retention time is particularly important where downstream flooding or pollutants are concerned.
Global retention time
The global retention time for a lake (the overall mean time that water spends in the lake) is calculated by dividing the lake volume by either the mean rate of inflow of all tributaries, or by the mean rate of outflow (ideally including evaporation and seepage). This metric assumes that water in the lake is well-mixed (rather than stratified), so that any portion of the lake water is much like any other. In reality, larger and deeper lakes are generally not well-mixed. Many large lakes can be divided into distinct portions with only limited flow between them. Deep lakes are generally stratified, with deeper water mixing infrequently with surface water. These are often better modeled as several distinct sub-volumes of water.
More specific residence times
It is possible to calculate more specific residence time figures for a particular lake, such as individual residence times for sub-volumes (e.g. particular arms), or a residence time distribution for the various layers of a stratified lake. These figures can often better express the hydrodynamics of the lake. However, any such approach remains a simplification and must be guided by an understanding of the processes operating in the lake.
Two approaches can be used (often in combination) to elucidate how a particular lake works: field measurements and mathematical modeling. One common technique for field measurement is to introduce a tracer into the lake and monitor its movement. This can be a solid tracer, such as a float constructed to be neutrally buoyant within a particular water layer, or sometimes a liquid. This approach is sometimes referred to as using a Lagrangian reference frame. Another field measurement approach, using an Eulerian reference frame, is to capture various properties of the lake water (including mass movement, water temperature, electrical conductivity and levels of dissolved substances, typically oxygen) at various fixed positions in the lake. From these can be constructed an understanding of the dominant processes operating in the various parts of the lake and their range and duration.
Field measurements alone are usually not a reliable basis for generating residence times, mainly because they necessarily represent a small subset of locations and conditions. Therefore, the measurements are generally used as the input for numerical models. In theory it would be possible to integrate a system of hydrodynamic equations with variable boundary conditions over a very long period sufficient for inflowing water particles to exit the lake. One could then calculate the traveling times of the particles using a Lagrangian method. However, this approach exceeds the detail available in current hydrodynamic models and the capacity of current computer resources. Instead, residence time models developed for gas and fluid dynamics, chemical engineering, and bio-hydrodynamics can be adapted to generate residence times for sub-volumes of lakes.
Renewal time
One useful mathematical model is the measurement of how quickly inflows are able to refill a lake. Renewal time is a specific measure of retention time, where the focus is on how long it takes to completely replace all the water in a lake. This modeling can only be done with an accurate budget of all water gained and lost by the system. Renewal time then simply becomes a question of how quickly the inflows of the lake could fill the entire volume of the basin (this still assumes the outflows are unchanged). For example, if Lake Michigan were emptied, it would take 99 years for its tributaries to completely refill the lake.
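The arithmetic is a single division of volume by flow rate. A minimal Python sketch, using illustrative round numbers chosen to reproduce the Lake Michigan figure above (they are not authoritative measurements):

```python
def renewal_time_years(volume_km3: float, inflow_km3_per_year: float) -> float:
    """Renewal time: how long inflows would take to refill the empty basin,
    assuming a well-mixed lake and unchanged outflows."""
    return volume_km3 / inflow_km3_per_year

# Illustrative values: a basin of ~4,900 km^3 fed by ~49.5 km^3/yr of
# tributary inflow gives the ~99-year figure quoted above.
print(renewal_time_years(4900.0, 49.5))  # -> ~99 years
```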
List of residence times of lake water
The residence time listed is taken from the infobox in the associated article unless otherwise specified.
See also
Water cycle: Residence times
References
Further reading
External links
EPA's Great Lakes Factsheet #1
EPA's Great Lakes Atlas
Relationship between residence time of lakes of New Zealand and koaro, smelt and common bully populations
Aquatic ecology
Retention time | Lake retention time | Biology,Environmental_science | 895 |
14,875,341 | https://en.wikipedia.org/wiki/40S%20ribosomal%20protein%20S10 | 40S ribosomal protein S10 is a protein that in humans is encoded by the RPS10 gene.
Function
Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 40S subunit. The protein belongs to the S10E family of ribosomal proteins. It is located in the cytoplasm. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome.
Clinical significance
Variable expression of this gene in colorectal cancers compared to adjacent normal tissues has been observed, although no correlation between the level of expression and the severity of the disease has been found.
Mutations in the RPS10 gene can cause Diamond–Blackfan anemia, a congenital anemia sometimes associated with bone marrow failure.
Interactions
RPS10 has been shown to interact with PTTG1.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Diamond–Blackfan Anemia
OMIM entries on Diamond–Blackfan Anemia
Ribosomal proteins | 40S ribosomal protein S10 | Chemistry | 256 |
30,748,681 | https://en.wikipedia.org/wiki/SHOX2 | Short-stature homeobox 2, also known as homeobox protein Og12X or paired-related homeobox protein SHOT, is a protein that in humans is encoded by the SHOX2 gene.
Function
SHOX2 is a member of the homeobox family of genes that encode proteins containing a 60-amino acid residue motif that represents a DNA-binding domain. Homeobox proteins have been characterized extensively as transcriptional regulators involved in pattern formation in both invertebrate and vertebrate species.
Clinical significance
Several human genetic disorders are caused by aberrations in human homeobox genes. The closely related SHOX locus is a pseudoautosomal homeobox gene that is thought to be responsible for idiopathic short stature, and it is implicated in the short stature phenotype of Turner syndrome patients. SHOX2 itself is considered a candidate gene for Cornelia de Lange syndrome.
Unlike SHOX, which lies in the PAR1 pseudoautosomal region of the X and Y chromosomes and therefore shows a pseudoautosomal inheritance pattern, SHOX2 is located on chromosome 3 and is autosomal.
References
Further reading
Transcription factors | SHOX2 | Chemistry,Biology | 253 |
77,638,545 | https://en.wikipedia.org/wiki/JTK-109 | JTK-109 is an antiviral drug which acts as a NS5B RNA-dependent RNA polymerase inhibitor. It was initially developed for the treatment of Hepatitis C, but also shows activity against caliciviruses such as norovirus.
References
4-Chlorophenyl compounds
Antiviral drugs
Benzoic acids
Pyrrolidones
Benzimidazoles
Fluorobenzene derivatives
Cyclohexyl compounds | JTK-109 | Biology | 92 |
38,017,461 | https://en.wikipedia.org/wiki/Extension%20Poly%28A%29%20Test | The extension Poly(A) Test (ePAT) describes a method to determine the poly(A) tail lengths of mRNA molecules. It was developed and described by A. Jänicke et al. in 2012.
The method consists of three separate steps: in the first step, the poly-adenylated RNA is hybridised to a DNA oligonucleotide featuring a poly-deoxythymidine sequence at its 5’ end. Klenow polymerase then catalyses elongation of the mRNA’s 3’ end, using the DNA oligonucleotide as a template. This reaction takes place at 25 °C. In the second step, reverse transcriptase synthesis extends the DNA oligonucleotides that have annealed to the mRNA’s extended 3’ end. In order to ensure that DNA oligomers hybridised to internal poly(A) sequences do not serve as primers for reverse transcription, the second step is carried out at 55 °C. A third and final step involves amplification of the newly synthesised cDNA via PCR. This PCR requires one gene-specific and one universal primer. Analysis of the amplicons’ lengths allows for estimation of the sequence flanked by the two primers, i.e. the poly(A) tail length of the sample mRNA.
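Because the amplicon spans the region from the gene-specific primer to the polyadenylation site, plus the poly(A) tail and the universal primer sequence, the tail length can be estimated by subtraction. A minimal Python sketch; all lengths below are invented example values, not figures from the published protocol:

```python
def estimate_poly_a_length(amplicon_len: int,
                           primer_to_poly_a_site: int,
                           universal_primer_region: int) -> int:
    """Estimate a poly(A) tail length (nt) from an ePAT amplicon size.

    amplicon_len: measured length of the PCR product
    primer_to_poly_a_site: known distance from the gene-specific primer
        to the polyadenylation site
    universal_primer_region: length contributed by the universal
        primer/anchor sequence appended during the assay
    """
    return amplicon_len - primer_to_poly_a_site - universal_primer_region

# Made-up numbers: a 260 nt amplicon, 140 nt of known 3' UTR and
# 40 nt of anchor sequence imply a tail of roughly 80 nt.
print(estimate_poly_a_length(260, 140, 40))  # -> 80
```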
According to the authors, by measuring poly(A) tail lengths and their distribution amongst different transcripts, this method can be used to determine the cell's translation state, avoiding the more tedious analysis of protein translation states.
References
Molecular biology techniques | Extension Poly(A) Test | Chemistry,Biology | 327 |
11,409,697 | https://en.wikipedia.org/wiki/Wildlife%20of%20Malawi | The wildlife of Malawi is composed of the flora and fauna of the country. Malawi is a landlocked country in southeastern Africa, with Lake Malawi taking up about a third of the country's area. It has around 187 species of mammal, some 648 species of birds have been recorded in the country and around 500 species of fish, many of them endemic, are found in its lakes and rivers. About 20% of the country has been set aside as national parks and game and forest reserves.
Geography
The flora and fauna are much influenced by the geography of the region. Malawi is a land-locked country, dominated by the Great Rift Valley which has a north–south orientation, and is long and between wide. The main feature is Lake Malawi which forms much of the eastern boundary of the country. The lake is drained by the Shire River which flows southwards to join the lower Zambezi in neighbouring Mozambique. Lake Malawi is above sea level but is deep in places. It is bordered on the west by a narrow plain, above which the land rises steeply to form high plateaux, generally between above sea level. To the north, the Nyika Plateau rises to . To the south lie the gently rolling Shire Highlands, and to the extreme south the land falls away towards the Zambezi floodplain. Lake Chilwa, the second biggest lake, is near the Mozambique border and has no outlet.
Malawi has a tropical continental climate which is somewhat influenced by the country's proximity to the sea. Temperatures rise from September until the beginning of the rainy season in November, after which the climate is warm and wet until April after which it becomes cooler and dry. The annual rainfall varies from in different parts of the country. Frosts can be experienced in the highest mountains in the north, and the temperature may reach in Shire Valley in the south.
About 21% of Malawi has been set aside for the protection of its natural flora and fauna, as national parks, forest reserves and wildlife reserves. These include the Kasungu National Park, the Nyika National Park, the Lengwe National Park, the Liwonde National Park and the Lake Malawi National Park.
Flora
The western part of the country lies in the Southern Miombo woodlands ecoregion, characterised by tall trees with a lower storey of shrubs and grasses. The natural vegetation of much of the low- and mid-level areas of Malawi is a form of deciduous forest and shrubland known as Zambezian and mopane woodland. Between these areas, the vegetation is mostly miombo woodland, dominated by Brachystegia trees which are often interspersed with Julbernardia and Isoberlinia trees.
Much of the forest has been cleared to make way for agricultural land. Mopane woodland, dominated by Colophospermum mopane, used to be abundant but only a few patches remain. Similarly, Acacia / Combretum woodland has largely been depleted, but larger areas of rainforest remain at mid to high altitudes, especially in the north of the country. The high plateaux are clad in low grasses, heathers and heaths, with many flowering plants blooming after the rainy season. Swamps are found in the Shire Valley and around Lake Chilwa. Wild date palms grow in some highland areas and near the Shire River, and raffia palms are found near upland streams and are common in the Nkhotakota Wildlife Reserve. Around four hundred species of orchid have been recorded in the country, 120 of them epiphytic. They are most abundant in Nyika National Park and growing on the surrounding mountainsides.
Fauna
Mammals
About 187 species of mammals have been recorded in Malawi. Of these, 55 are bats and 52 are rodents. The people living in rural Malawi are mostly subsistence farmers; they do not appreciate their crops being trampled and eaten and will hunt or drive off wild animals. Elephants, lions, leopards, African buffaloes, hippopotamuses and rhinoceroses are present in the country but their numbers are low except in national parks and game reserves. More numerous are jackals and spotted hyenas, African wildcats, caracal and serval. Smaller predators include mongooses, genets, civets, striped polecats, honey badgers, spotted-necked and African clawless otters.
Antelopes occurring in Malawi include the common eland, the greater kudu, the waterbuck, the sable and roan antelopes, the bushbuck, the nyala, the impala, the southern reedbuck and several smaller species of antelope. Primates present in the country include yellow and chacma baboons, vervet monkeys, blue monkeys, thick-tailed and lesser bushbabies.
Birds
Some 648 species of bird have been recorded in Malawi, of which 456 are resident and another 94 are migratory within Africa; some of these may breed in the country. Around 77 species migrate between eastern Asia and South Africa. Species of global concern which pass through in small numbers include the pallid harrier, lesser kestrel, corn crake and great snipe, as well as the lesser flamingo and Malagasy pond heron. Lake Chilwa supports 160 species, some of which are resident.
Malawi is at the southern end of the range for many East African birds, and the northern limit for some South African species. Evergreen forest provides a particularly rich list of bird species, and the miombo woodland supports many species that are found nowhere else. The lakes and marshes are rich in species, with Lake Chilwa having a greater diversity of birds than Lake Malawi.
Fish
There are about five hundred species of fish in Malawi, with over 90% of them being endemic; this is a greater number of species than occur in Europe and North America in total. Along with Lake Tanganyika, Lake Malawi contains a larger number of endemic species than any other freshwater lake in the world. The majority of species present are cichlids, which are mouthbrooders, and many of these species are found in small localised areas of the lakes and nowhere else. Other fish are also present and are hunted as part of the local fishing industry. These include the African catfish, various species of carp, and a small sardine-like fish present in large shoals which are caught by trawling. The commonest fish in Lake Chilwa are Barbus paludinosus, Oreochromis shiranus chilwae, Clarias gariepinus, Brycinus imberi and Gnathonemus.
Insects
Insects are plentiful in Malawi, including large numbers of ants, beetles, crickets, flies, bees and wasps. There are probably thousands of species of butterfly and moth in the country, including butterflies in the families Satyridae, Nymphalidae, Papilionidae and Pieridae, and moths in the Noctuidae.
References
See also
Biota of Malawi
Malawi | Wildlife of Malawi | Biology | 1,414 |
33,742,973 | https://en.wikipedia.org/wiki/ND%20experiment | Neutral Detector (ND) is a detector for particle physics experiments created by the team of physicists in the
Budker Institute of Nuclear Physics, Novosibirsk, Russia.
Experiments with the ND were conducted from 1982 to 1987 at the e+e− storage ring VEPP-2M in the energy range 2E=0.5-1.4 GeV.
Physics
By the beginning of the 1980s, the dominant cross sections of electron-positron annihilation into final states with charged particles had been measured in the energy range 2E=0.5-1.4 GeV. Processes with neutral particles in the final state were less studied. The ND was constructed to investigate the radiative decays of the ρ, ω, and φ mesons and other processes involving photons, π0, and η mesons. Its distinguishing features are defined by the specially designed electromagnetic calorimeter based on NaI(Tl) scintillation counters.
List of published analyses
Radiative decays
Rare decays of the ρ, ω, and φ mesons
Search for rare decays
light scalars and in -meson radiative decays
Non-resonant electron-positron annihilation into hadrons
Test of QED processes
(virtual Compton scattering)
Analyses of other processes
Measurement of the ω-meson parameters
Upper limits on electron width of scalar and tensor mesons , , , , and
Search for
Detector
Based on the goals of the physics program, the ND consists of:
Electromagnetic calorimeter
168 rectangular NaI(Tl) scintillation counters
total mass of NaI(Tl) is 2.6 t
solid angle coverage is 65% of 4π sr
minimum thickness is 32 cm or 12 radiation lengths
energy resolution for photons is σ/E = 4% /
Charged particle coordinate system
3 layers of coaxial cylindrical 2-d wire proportional chambers in the center of the detector
solid angle coverage is 80% of 4π sr
angular resolution is 0.5° in the azimuthal and 1.5° in the polar direction
surrounded by the 5-mm thick plastic scintillation counter for trigger
Flat (shower) coordinate 2-d wire proportional chambers
2 layers of flat 2-d wire proportional chambers.
angular resolution is 2° in the azimuthal and 3.5° in the polar direction for 0.5 GeV photons
Iron absorber & anti-coincidence counters
The electromagnetic calorimeter is covered by the 10-cm thick iron absorber and plastic scintillation anti-coincidence counters.
Results
Data collected with the ND experiment corresponds to an integrated luminosity of 19 pb−1.
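An integrated luminosity converts directly into an expected event count through N = σ · L_int. A small Python sketch; the 1 nb cross section below is an assumed illustrative value, not a measured ND result:

```python
def expected_events(cross_section_nb: float, int_luminosity_pb: float) -> float:
    """N = sigma * L_int, converting nanobarns to picobarns (1 nb = 1000 pb)."""
    return cross_section_nb * 1000.0 * int_luminosity_pb

# For an assumed 1 nb process, 19 pb^-1 of data corresponds to ~19,000
# produced events, before detection efficiency is taken into account.
print(expected_events(1.0, 19.0))  # -> 19000.0
```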
Results of the experiments with ND are presented in the references and are included in the PDG Review.
See also
References
External links
Budker Institute of Nuclear Physics (BINP)
VEPP-2M
NOVOSIBIRSK-ND experiment record on INSPIRE-HEP
Particle detectors
Particle experiments
Experimental particle physics
Particle physics facilities
Budker Institute of Nuclear Physics | ND experiment | Physics,Technology,Engineering | 610 |
2,218,269 | https://en.wikipedia.org/wiki/Polymersome | In biotechnology, polymersomes are a class of artificial vesicles, tiny hollow spheres that enclose a solution. Polymersomes are made using amphiphilic synthetic block copolymers to form the vesicle membrane, and have radii ranging from 50 nm to 5 μm or more. Most reported polymersomes contain an aqueous solution in their core and are useful for encapsulating and protecting sensitive molecules, such as drugs, enzymes, other proteins and peptides, and DNA and RNA fragments. The polymersome membrane provides a physical barrier that isolates the encapsulated material from external materials, such as those found in biological systems.
Synthosomes are polymersomes engineered to contain channels (transmembrane proteins) that allow certain chemicals to pass through the membrane, into or out of the vesicle. This allows for the collection or enzymatic modification of these substances.
The term "polymersome" for vesicles made from block copolymers was coined in 1999. Polymersomes are similar to liposomes, which are vesicles formed from naturally occurring lipids. While having many of the properties of natural liposomes, polymersomes exhibit increased stability and reduced permeability. Furthermore, the use of synthetic polymers enables designers to manipulate the characteristics of the membrane and thus control permeability, release rates, stability and other properties of the polymersome.
Preparation
Block copolymers of several different architectures have been used to create polymersomes. The most frequently used are linear diblock or triblock copolymers. In these cases, the block copolymer has one block that is hydrophobic; the other block or blocks are hydrophilic. Other architectures used include comb copolymers, where the backbone block is hydrophilic and the comb branches are hydrophobic, and dendronized block copolymers, where the dendrimer portion is hydrophilic.
In the case of diblock, comb and dendronized copolymers the polymersome membrane has the same bilayer morphology of a liposome, with the hydrophobic blocks of the two layers facing each other in the interior of the membrane. In the case of triblock copolymers the membrane is a monolayer that mimics a bilayer, the central block filling the role of the two facing hydrophobic blocks of a bilayer.
In general, polymersomes can be prepared by the methods used in the preparation of liposomes: film rehydration, direct injection, or dissolution.
Uses
Polymersomes that contain active enzymes and that provide a way to selectively transport substrates for conversion by those enzymes have been described as nanoreactors.
Polymersomes have been used to create controlled release drug delivery systems. Similar to coating liposomes with polyethylene glycol, polymersomes can be made invisible to the immune system if the hydrophilic block consists of polyethylene glycol. Thus, polymersomes are useful carriers for targeted medication.
For in vivo applications, polymersomes are de facto limited to the use of FDA-approved polymers, as most pharmaceutical firms are unlikely to develop novel polymers due to cost issues. Fortunately, there are a number of such polymers available, with varying properties, including:
Hydrophilic blocks
Poly(ethylene glycol) (PEG/PEO)
Poly(2-methyloxazoline)
Hydrophobic blocks
Polydimethylsiloxane (PDMS)
Poly(caprolactone) (PCL)
Poly(lactide) (PLA)
Poly(methyl methacrylate) (PMMA)
If enough of the block copolymer molecules that make up a polymersome are cross-linked, the polymersome can be made into a transportable powder.
Polymersomes can be used to make an artificial cell if hemoglobin and other components are added. The first artificial cell was made by Thomas Chang.
See also
Cell (biology)
Liposome
Polymer
Copolymer
Artificial cell
References
Biomolecules
Polymers
Immunology
Pharmacokinetics | Polymersome | Chemistry,Materials_science,Biology | 861 |
13,675,989 | https://en.wikipedia.org/wiki/Nikolas%20Rose | Nikolas Rose is a British sociologist and social theorist. He is Distinguished Honorary Professor at the Research School of Social Sciences, in the College of Arts and Social Sciences at the Australian National University and Honorary Professor at the Institute of Advanced Studies at University College London. From January 2012 to until his retirement in April 2021 he was Professor of Sociology in the Department of Global Health and Social Medicine (previously Social Science, Health & Medicine) at King's College London, having joined King's to found this new Department. He was the Co-Founder and Co-Director of King's ESRC Centre for Society and Mental Health. Before moving to King's College London, he was the James Martin White Professor of Sociology at the London School of Economics, director and founder of LSE's BIOS Centre for the Study of Bioscience, Biomedicine, Biotechnology and Society from 2002 to 2011, and Head of the LSE Department of Sociology (2002–2006). He was previously Professor of Sociology at Goldsmiths, University of London, where he was Head of the Department of Sociology, Pro-Warden for Research and Head of the Goldsmiths Centre for Urban and Community Research and Director of a major evaluation of urban regeneration in South East London. He is a Fellow of the British Academy, the Royal Society of Arts and the Academy of Social Sciences, and a Fellow of the Royal Danish Academy of Science and Letters. He holds honorary doctorates from the University of Sussex, England, and Aarhus University, Denmark.
Biography
Originally trained as a biologist, Nikolas Rose has done extensive research on the history and sociology of psychiatry, on mental health policy and risk, and on the social implications of recent developments in psychopharmacology. He has also published widely on the genealogy of subjectivity, on the history of empirical thought in sociology, and on changing rationalities of political power. He is particularly known for his development of the work of the French historian and philosopher Michel Foucault for the analysis of the politics of our present, and stimulating the revival of studies of governmentality in the Anglo-American world. His own approach to these issues was set out in his 1999 book Powers of Freedom: Reframing Political Thought.
His first book, The Psychological Complex, published in 1985, pioneered a new way of understanding the social history and implications of the discipline of psychology. This was followed in 1989 by Governing the Soul: The Shaping of the Private Self and in 1996 by Inventing Our Selves: Psychology, Power and Personhood. These three books are widely recognised as founding texts in a new way of understanding and analysing the links between expertise, subjectivity and political power. Rose argues that the proliferation of the 'psy' disciplines has been intrinsically linked with transformations in governmentality, in the rationalities and technologies of political power in advanced liberal democracies. (See also governmentality for a description of Rose's development of Foucault's concepts.)
In 1989, he founded the History of the Present Research Network, an international network of researchers whose work was influenced by the writings of Michel Foucault. Together with Paul Rabinow, he edited the fourth volume of Michel Foucault's Essential Works.
In November 2001, he was listed by The Guardian newspaper as one of the top five UK-based social scientists, on the basis of a twenty-year analysis of citations to research papers, and as the most cited UK-based sociologist.
For six years he was managing editor of the journal Economy & Society, one of the UK's leading interdisciplinary journals of social science, and he is a founder and co-editor of BioSocieties: An interdisciplinary journal for social studies of life sciences.
In 2007 he was awarded an ESRC Professorial Research Fellowship – a three-year project entitled 'Brain, Self and Society in the 21st Century'. In 2013, writing with Joelle Abi-Rached, he published Neuro: the new brain sciences and the management of the mind. He has long advocated for 'revitalizing' the social and human sciences through a 'critical friendship' with the life sciences, setting out the nature and implications of his 'cartography of the present' in a number of widely cited papers and in The Politics of Life Itself, published in 2007.
Throughout his academic career he has been a critical analyst of psychiatry. His first book on this topic, The Power of Psychiatry, a collection edited together with Peter Miller, was published in 1986. His book Our Psychiatric Future: the politics of mental health was published by Polity Press in October 2018. His recent work has been on the social shaping of mental distress and its biopolitical implications. His book The Urban Brain: Mental Health in the Vital City, written with Des Fitzgerald, was published by Princeton University Press in 2022. His most recent book, Questioning Humanity: Being human in a posthuman age, written with Thomas Osborne, was published in 2024.
Nikolas Rose has led many international collaborative research projects, including BIONET, a major collaboration of European and Chinese researchers on the ethical governance of biomedical research in China. He is the Chair of the Neuroscience and Society Network, an international network to encourage critical collaboration between social scientists and neuroscientists, which was funded for several years by the European Science Foundation.
He was previously a member of the Nuffield Council on Bioethics, where he was a member of the Council's Working Party on Medical profiling and online medicine: the ethics of 'personalised healthcare' in a consumer age (2008–2010) and on Novel Neurotechnologies: intervening in the human brain. He also served for several years as a member of the Royal Society's Science Policy Committee. He was Co-Director of the first publicly funded UK centre dedicated to synthetic biology, based at Imperial College, where he led a team examining the social, ethical, legal and political dimensions of this emerging field. At King's he led a team of researchers exploring the social implications of new developments in biotechnology, committed to the democratisation of scientific research and technological development, with a particular focus on synthetic biology and neurobiology. For many years he was a member of the Social and Ethical Division of the Human Brain Project, where he led the Foresight Lab based at King's College London, which aimed to identify and evaluate the potential impact of the new knowledge and technologies produced by the Human Brain Project in neuroscience, neurology, computing and robotics, and also examined such issues as artificial intelligence and the political, security, intelligence and military uses of novel brain technologies.
His work has been translated into many languages including Swedish, Danish, Finnish, German, Italian, French, Hungarian, Korean, Russian, Chinese, Japanese, Romanian, Portuguese and Spanish.
Selected publications
Books
Questioning Humanity: Being human in a posthuman age, with Thomas Osborne (Edward Elgar, 2024)
The Urban Brain: Mental Health in the Vital City, with Des Fitzgerald (Princeton University Press, 2022)
Our Psychiatric Future: the politics of mental health, (Polity, 2018)
Neuro: The New Brain Sciences and the Management of the Mind, with Joelle M. Abi-Rached (Princeton University Press, 2013)
Governing the Present: Administering Economic, Social and Personal Life, with Peter Miller (Polity, 2008)
The Politics of Life Itself: Biomedicine, Power, and Subjectivity in the Twenty-First Century (Princeton University Press, 2007)
Powers of Freedom: Reframing Political Thought (Cambridge University Press, 1999)
Inventing Our Selves: Psychology, Power and Personhood (Cambridge University Press, 1996)
Governing the Soul: The Shaping of the Private Self (Routledge, 1989, Second edition, Free Associations, 1999)
The Psychological Complex: Psychology, Politics and Society in England, 1869–1939 (Routledge, 1985)
Chapters in edited collections (selected)
'Writing the History of the Present', in Jonathan Joseph, ed., Social Theory: A Reader. Edinburgh: Edinburgh University Press, 2005 (with Andrew Barry and Thomas Osborne) (Reprint of selections from Introduction to Foucault and Political Reason, 1996.)
'Biological Citizenship', in Aihwa Ong and Stephen Collier, eds., Global Assemblages: Technology, Politics and Ethics as Anthropological Problems, pp. 439–463. Oxford: Blackwell, 2005 (with Carlos Novas)
Introduction to The Essential Foucault: Selections from Essential Works of Foucault, 1954–1984, New York: New Press, 2004 (with Paul Rabinow)
'Becoming Neurochemical Selves', in Nico Stehr, ed., Biotechnology, Commerce and Civil Society, Transaction Press, 2004
'The neurochemical self and its anomalies', in R. Ericson, ed., Risk and Morality, pp. 407–437. University of Toronto Press, 2003.
'Power and psychological techniques', in Y. Bates and R. House, eds., Ethically Challenged Professions, pp. 27–46. Ross-on-Wye: PCCS Books, 2003.
'Society, madness, and control', in A. Buchanan, ed., The Care of the Mentally Disordered Offender in the Community, pp. 3–25, Oxford: Oxford University Press (2001)
'At Risk of Madness', in T. Baker and J. Simon, eds., Embracing Risk: The Changing Culture of Insurance and Responsibility, pp. 209–237, Chicago: University of Chicago Press (2001)
Papers in refereed journals (selected)
'Towards neuroecosociality: mental health in adversity', Theory, Culture and Society, 2021: https://doi.org/10.1177%2F0263276420981614
'Revitalizing sociology: urban life and mental illness between history and the present', British Journal of Sociology, 2016, 67, 1, 138-160 (with Des Fitzgerald and Ilina Singh)
'Still like 'birds on the wire'', Economy and Society, 2017, 46, 3-4, 303-323
'Reading the Human Brain: How the Mind Became Legible', Body and Society, 2016, 22, 2, 140-177: doi:10.1177/1357034X15623363
'Spatial Phenomenotechnics: Making space with Charles Booth and Patrick Geddes', Environment and Planning D: Society and Space, 2004, 22: 209–228 (with Thomas Osborne).
'Neurochemical selves', Society, November/December 2003, 41, 1, 46–59.
'Kontroll', Fronesis, 2003, Nr. 14-15, 82–101.
'The politics of life itself', Theory, Culture and Society (2001), 18(6): 1–30.
'Genetic risk and the birth of the somatic individual', Economy and Society, Special Issue on configurations of risk (2000), 29 (4): 484–513. (with Carlos Novas).
'The biology of culpability: pathological identities in a biological culture', Theoretical Criminology (2000), 4, 1, 5–34.
Notes
External links
Nikolas Rose Personal Website
Brain, Self and Society project
Department of Global Health and Social Medicine
1947 births
Living people
British sociologists
Academics of the London School of Economics
Academics of King's College London
Foucault scholars
Synthetic biologists
Fellows of the Academy of Social Sciences | Nikolas Rose | Biology | 2,376 |
1,802,422 | https://en.wikipedia.org/wiki/Electronic%20article%20surveillance | Electronic article surveillance (EAS) is a type of system used to prevent shoplifting from retail stores, pilferage of books from libraries, or unwanted removal of properties from office buildings. EAS systems typically consist of two components: EAS antennas and EAS tags or labels. EAS tags are attached to merchandise; these tags can only be removed or deactivated by employees when the item is properly purchased or checked out. If merchandise bearing an active tag passes by an antenna installed at an entrance/exit, an alarm sounds alerting staff that merchandise is leaving the store unauthorized. Some stores also have antennas at entrances to restrooms to deter shoppers from taking unpaid-for merchandise into the restroom where they could remove the tags.
History
EAS tags that could be attached to items in stores were invented by Arthur Minasy in 1964. He filed a patent for his "Method and Apparatus for Detecting the Unauthorized Movement of Articles" in 1965 with the patent being granted in 1970.
Types
There are several major types of electronic article surveillance systems:
Electro-magnetic, also known as magneto-harmonic or Barkhausen effect
Acousto-magnetic, also known as magnetostrictive
Radio frequency (RF, 8.2 MHz)
Microwave
Video surveillance systems (to some extent)
Concealed EAS surveillance systems
Concealed EAS systems
Concealed EAS systems have no visible pedestals or other hindrance at the store facade. These systems are installed below the floor or dropped from the ceiling and can protect retailers' merchandise from being stolen. Site conditions and other parameters determine whether they can be installed, but malls often insist on concealed systems as a mandate to improve the shopping experience.
Electro-magnetic systems
These tags are made of a strip of amorphous metal (metglas), which has a very low magnetic saturation value. Except for permanent tags, this strip is also lined with a strip of ferromagnetic material with a moderate coercive field (magnetic "hardness"). Detection is achieved by sensing harmonics and sum or difference signals generated by the non-linear magnetic response of the material under a mixture of low-frequency (in the 10 Hz to 1000 Hz range) magnetic fields.
When the ferromagnetic material is magnetized, it biases the amorphous metal strip into saturation, where it no longer produces harmonics. Deactivation of these tags is therefore done with magnetization. Activation requires demagnetization.
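The harmonic-detection principle can be illustrated numerically: driving a saturating (non-linear) magnetization curve with a sinusoidal field produces strong odd harmonics, while a strip biased into saturation produces almost none. A Python sketch using tanh as a generic stand-in for the real B–H characteristic (the curve shape and numbers are illustrative assumptions):

```python
import numpy as np

fs, f0 = 100_000, 100               # sample rate (Hz), drive frequency (Hz)
t = np.arange(0, 0.1, 1 / fs)
drive = np.sin(2 * np.pi * f0 * t)  # low-frequency interrogation field

active = np.tanh(5 * drive)             # easily saturated amorphous strip
deactivated = np.tanh(5 * (drive + 3))  # strip held in saturation by bias

def harmonic_amplitude(signal, n):
    """Amplitude of the n-th harmonic of the drive frequency."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - n * f0))]

# The active tag shows a strong 3rd harmonic; the deactivated tag does not.
print(harmonic_amplitude(active, 3), harmonic_amplitude(deactivated, 3))
```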
The EM systems can be used by libraries to protect books and media. In shops, unlike AM and RF, EM can be placed on small or round items and products with foil packaging or metal objects, like cosmetics, baby milk cans, medicines, DIY tools, homeware etc. EM systems can also detect objects placed in foil bags or in metal briefcases.
A further application is the intellectual property (IP) protection against theft: Security paper with embedded microwires, which is used to detect confidential documents if they are removed from a building.
Acousto-magnetic systems
These are similar to magnetic tags in that they are made of two strips: a strip of magnetostrictive, ferromagnetic amorphous metal and a strip of a magnetically semi-hard metallic strip, which is used as a biasing magnet (to increase signal strength) and to allow deactivation. These strips are not bound together but free to oscillate mechanically.
Amorphous metals are used in such systems due to their good magnetoelastic coupling, which implies that they can efficiently convert magnetic energy into mechanical vibrations.
The detectors for such tags emit periodic tonal bursts at about 58 kHz, the same as the resonance frequency of the amorphous strips. This causes the strip to vibrate longitudinally by magnetostriction, and it continues to oscillate after the burst is over. The vibration causes a change in magnetization in the amorphous strip, which induces an AC voltage in the receiver antenna. If this signal meets the required parameters (correct frequency, repetition, etc.), the alarm is activated.
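The 58 kHz figure can be related to the strip's dimensions through the fundamental longitudinal resonance of a free strip, f = v/(2L). A sketch under stated assumptions; the sound velocity used below is an illustrative value for an amorphous ribbon, not a manufacturer's specification:

```python
def resonant_strip_length_m(sound_velocity_m_s: float,
                            frequency_hz: float) -> float:
    """Fundamental longitudinal resonance of a free strip: f = v / (2 * L)."""
    return sound_velocity_m_s / (2.0 * frequency_hz)

# Assuming a sound velocity of roughly 4,400 m/s in the ribbon (it depends
# on alloy and magnetic bias), a 58 kHz resonance implies a strip of about
# 38 mm, comparable to the size of commercial acousto-magnetic labels.
print(resonant_strip_length_m(4400.0, 58_000.0))  # -> ~0.038 m
```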
When the semi-hard magnet is magnetized, the tag is activated. The magnetized strip makes the amorphous strip respond much more strongly to the detectors, because the DC magnetic field given off by the strip offsets the magnetic anisotropy within the amorphous metal. The tag can also be deactivated by demagnetizing the strip, making the response small enough so that it will not be detected by the detectors.
AM tags are three dimensional plastic tags, much thicker than electro-magnetic strips and are thus seldom used for books.
Radio frequency (RF) systems
These tags are essentially an LC tank circuit (L for inductor, C for capacitor) that has a resonance peak anywhere from 1.75 MHz to 9.5 MHz. The standard frequency for retail use is 8.2 MHz. Sensing is achieved by sweeping around the resonant frequency and detecting the dip.
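The resonance of such a tag follows the standard LC relation f = 1/(2π√(LC)). A small sketch; the inductance and capacitance below are assumed illustrative values chosen to land near the retail standard, not measured tag parameters:

```python
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """f = 1 / (2 * pi * sqrt(L * C)) for an LC tank circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A printed spiral coil of ~1.5 uH with a ~250 pF plate capacitor resonates
# near 8.2 MHz; damaging the capacitor detunes the tag and defeats detection.
print(resonant_frequency_hz(1.5e-6, 250e-12) / 1e6)  # -> ~8.2 MHz
```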
Deactivation for 8.2 MHz label tags is typically achieved using a deactivation pad. In the absence of such a device, labels can be rendered inactive by punching a hole, or by covering the circuit with a metallic label, a "detuner". The deactivation pad functions by partially destroying the capacitor. Though this sounds violent, in reality, both the process and the result are unnoticeable to the naked eye. The deactivator causes a micro short circuit in the label. This is done by submitting the tag to a strong electromagnetic field at the resonant frequency, which induces voltages exceeding the capacitor's breakdown voltage.
In terms of deactivation, radio frequency is the most efficient of the three technologies (RF, EM, AM – there are no microwave labels) given that the reliable "remote" deactivation distance can be up to . It also benefits the user in terms of running costs, since the RF de-activator only activates to send a pulse when a circuit is present. Both EM and AM deactivation units are on all the time and consume considerably more electricity. The reliability of "remote" deactivation (i.e. non-contact or non-proximity deactivation) capability makes for a fast and efficient throughput at the checkout.
Efficiency is an important factor when choosing an overall EAS solution given that time lost attempting to deactivate labels can be an important drag of cashier productivity as well as customer satisfaction if unwanted alarms are caused by tags that have not been effectively deactivated at the point of sale.
Deactivation of RF labels is also dependent on the size of the label and the power of the deactivation pad (the larger the label, the greater the field it generates for deactivation to take place. For this reason very small labels can cause issues for consistent deactivation). It is common to find RF deactivation built into barcode flat and vertical scanners at the POS in food retail especially in Europe and Asia where RF EAS technology has been the standard for nearly a decade. In apparel retail deactivation usually takes the form of flat pads of approx. 30x30 cm.
Microwave systems
These permanent tags are made of a non-linear element (a diode) coupled to one microwave and one electrostatic antenna. At the exit, one antenna emits a low-frequency (about 100 kHz) field, and another one emits a microwave field. The tag acts as a mixer re-emitting a combination of signals from both fields. This modulated signal triggers the alarm. These tags are permanent and somewhat costly. They are mostly used in clothing stores and have practically been withdrawn from use.
Source tagging
Source tagging is the application of EAS security tags at the source, the supplier or manufacturer, instead of at the retail side of the chain. For the retailer, source tagging eliminates the labor expense needed to apply the EAS tags themselves, and reduces the time between receipt of merchandise and when the merchandise is ready for sale. For the supplier, the main benefit is the preservation of the retail packaging aesthetics by easing the application of security tags within product packaging. Source tagging allows the EAS tags to be concealed and more difficult to remove.
The high-speed application of EAS labels, suited for commercial packaging processes, was perfected via modifications to standard pressure-sensitive label applicators. Today, consumer goods are source tagged at high speeds with the EAS labels incorporated into the packaging or the product itself.
The most common source tags are AM strips and 8.2 MHz radio frequency labels. Most manufacturers use both when source tagging in the USA. In Europe there is little demand for AM tagging given that the Food and Department Store environments are dominated by RF technology.
Tag pollution
One significant problem from source tagging is something called "tag pollution", caused when non-deactivated tags carried around by customers set off unwanted alarms, decreasing the effectiveness and integrity of the EAS system. The problem is that a store deploys at most one system, so a store with an anti-shoplifting system will only deactivate labels belonging to that system. If a store does not use an EAS system, it will not deactivate any tags at all. This is often the reason why people trigger an alarm when entering a store, which can cause great frustration for both customers and staff.
Discussion
Occasional versus professional shoplifters
EAS systems can provide a solid deterrent against casual theft. The occasional shoplifter, not being familiar with these systems and their mode of operation, will either get caught by them, or preferably, will be dissuaded from attempting any theft in the first place.
Informed shoplifters are conscious of how tags can be removed or deactivated. A common method of defeating RF tags is the use of so-called booster bags. These are typically large paper bags that have been lined with multiple layers of aluminium foil to effectively shield the RF label from detection, much like a Faraday cage. A similar situation would be the loss of signal that a cell phone suffers inside an elevator: The electro-magnetic, or radio waves are effectively blocked, reducing the ability to send or receive information.
However, they may miss some tags or be unable to remove or deactivate all of them, especially if concealed or integrated tags are used. As a service to retailers, many manufacturers integrate security tags in the packaging of their products, or even inside the product itself, though the latter is rare and not especially desirable for either the retailer or the manufacturer. Practically all EAS labels are discarded with the product packaging. This is of particular relevance for everyday items that consumers might carry on their person, as it avoids the inconvenience of potentially live or reactivated EAS tags when walking in and out of retail stores.
Hard tags, typically used for clothing or ink tags, known as benefit denial tags, may reduce the rate of tag manipulation. Also, shoplifters deactivating or detaching tags may be spotted by the shop staff.
Shoplifting tools are illegal in many jurisdictions, and can, in any case, serve as evidence against the perpetrators. Hence, informed shoplifters, although they decrease their risk of being caught by the EAS, expose themselves to much greater judicial risks if they get caught with tools, booster bags, or while trying to remove tags, as this shows intent to steal.
The possession of shoplifting tools (e.g. lined bags or wire cutters to cut bottle tags) can lead to the suspect being arrested for suspicion of theft or "Going equipped for stealing, etc." within the UK judicial system.
In summary, while even the least expensive EAS systems will catch most occasional shoplifters, a broader range of measures are still required for an effective response that can protect profits without impeding sales.
Tags containing alarms
Tags can be equipped with a built-in alarm which sounds when the tag detects tampering or unauthorized removal from the store. The tag not only triggers the store's electronic article surveillance system, but also sounds an alarm attached to the merchandise. The local alarm continues to sound for several minutes after leaving the store, attracting attention to the shopper carrying the merchandise.
Installation costs
A single EAS detector, suitable for a small shop, is affordable for virtually any retail store and should form part of any coherent loss-prevention or profit-protection system.
Disposable tags cost a matter of cents and may have been embedded during manufacture. More sophisticated systems are available, which are more difficult to circumvent. These solutions tend to be product category specific as in the case of high value added electronics and consumables; consequently they are more expensive. Examples are "Safers", transparent secure boxes that completely enclose the article to be protected, Spiders that wrap around packaging and Electronic Merchandise Security Systems that allow phones and tablets to be used securely in the store before purchase. All of these require specific detachers or electronic keys at the point-of-sale desk. They have the advantages of being reusable, strong visual deterrents to potential theft.
Tag orientation
Except for microwave, the detection rate for all these tags depends on their orientation relative to the detection loops. For a pair of planar loops forming a Helmholtz coil, magnetic field lines will be approximately parallel in their center. Orienting the tag so that no magnetic flux from the coils crosses them will prevent detection, as the tag won't be coupled to the coils. This shortcoming, documented in the first EAS patents, can be solved by using multiple coils or by placing them in another arrangement such as a figure-of-eight. Sensitivity will still be orientation-dependent but detection will be possible at all orientations.
Detaching
A detacher is used to remove re-usable hard tags. The type of detacher used will depend on the type of tag. There are a variety of detachers available, with the majority using powerful magnets. Any store that uses an anti-shoplifting system and has a detacher should take care to keep it secured such that it cannot be removed. Some detachers actually have security tags inside them, to alert store personnel of them being removed from (or being brought into) the store. With increasing prevalence, stores have metal detectors at the entrance that can warn against the presence of booster bags or detachers.
Electro-magnetic activation and deactivation
Deactivation of magnetic tags is achieved by straightforward magnetization using a strong magnet. Magneto-acoustic tags require demagnetization. However, sticking a powerful magnet on them will bias disposable magnetic tags and prevent resonance in magneto-acoustic tags. Similarly, sticking a piece of metal, such as a large coin on a disposable radio-frequency tag will shield it. Non-disposable tags require stronger magnets or pieces of metal to disable or shield since the strips are inside the casing and thus further away.
Shielding
Most systems can be circumvented by placing the tagged goods in a bag lined with aluminum foil. The booster bag will act as a Faraday cage, shielding the tags from the antennas. Although some vendors claim that their acousto-magnetic systems cannot be defeated by bags shielded with aluminum foil, a sufficient amount of shielding (in the order of 30 layers of standard 20 μm foil) will defeat all standard systems.
Although the amount of shielding required depends on the system, its sensitivity, and the distance and orientation of the tags relative to its antennas, total enclosure of tags is not strictly necessary. Indeed, some shoplifters use clothes lined with aluminum foil. Low-frequency magnetic systems will require more shielding than radio-frequency systems due to their use of near-field magnetic coupling. Magnetic shielding, with steel or mu-metal, would be more effective, but also cumbersome and expensive.
The shielding technique is well known amongst shoplifters and store owners. Some countries have specific laws against it. In any case, possession of such a bag demonstrates a prior-intent to commit a crime, which in many jurisdictions raises shoplifting from misdemeanor to felony status, because they are considered a "burglary tool."
To deter the use of booster bags, some stores have add-on metal detector systems which sense metallic surfaces.
Jamming
Like most systems that rely on transmission of electromagnetic signals through a hostile medium, EAS sensors can be rendered inoperative by jamming. As the signals from tags are very low-power (their cross-section is small, and the exits are wide), jamming requires little power. Evidently, shoplifters will not feel the need to follow radio transmission regulations; hence crude, easy-to-build transmitters will be adequate for them. However, due to their high frequency of operation, building a jammer can be difficult for microwave circuits; these systems are therefore less likely to be jammed. Although jamming is easy to perform, it is also easy to detect. A simple firmware upgrade should be adequate for modern DSP-based EAS systems to detect jamming. Nevertheless, the vast majority of EAS systems do not currently detect it.
Interference and health issues
All electronic article surveillance systems emit electromagnetic energy and thus can interfere with electronics.
Magneto-harmonic systems need to bring the tags to magnetic saturation and thus create magnetic fields strong enough to be felt through a small magnet. They routinely interfere with CRT displays. Demagnetization-remagnetization units also create intense fields.
Acousto-magnetic systems use less power but their signals are pulsed in the 100 Hz range.
Radio-frequency systems tend to be the least interfering because of their lower power and operating frequency in the MHz range, making it easy to shield against them.
A March 2007 study by the Mayo Clinic in Rochester, Minnesota, reported instances where acousto-magnetic EAS systems located at the front of retail stores caused a pacemaker to fail and a defibrillator to trigger, shocking the persons in which they were implanted.
There are also concerns that some installations are intentionally reconfigured to exceed the rated specifications by the manufacturer, thereby exceeding tested and certified magnetic field levels.
See also
Active packaging
Radio-frequency identification
Barkhausen effect
Tattle-Tape
Evaluation of binary classifiers
References
External links
HowStuffWorks: EAS Security System
Retail processes and techniques
Theft
Security
American inventions
Wireless locating
Automatic identification and data capture
Packaging
Retailing-related crime | Electronic article surveillance | Technology | 3,809 |
2,312,185 | https://en.wikipedia.org/wiki/Missile%20Datcom | Missile DATCOM is a widely used semi-empirical datasheet component build-up method for the prediction of missile aerodynamic coefficients. It is an example of an engineering aeroprediction method which can be used to generate an aerodynamic database of the vehicle’s aerodynamic coefficients at various flight conditions. It has been in continual development for over twenty years, with the latest version released in December 2014. It has traditionally been supplied free of charge by the United States Air Force to American defense contractors. The code is considered restricted under International Traffic in Arms Regulations (ITAR) and should not be distributed outside the United States.
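As a minimal sketch of the component build-up idea only (these are not Missile DATCOM's actual correlations, and every number below is a placeholder), such a method estimates a configuration's coefficient by summing the contributions of its isolated components, scaled by interference factors:

```python
def normal_force_slope(cna_body, cna_wing, cna_tail,
                       k_wing_body=1.2, k_tail_body=1.1, area_ratio_tail=0.3):
    """Component build-up estimate of a missile's normal-force-curve slope.

    cna_*           -- slopes (per radian) of the isolated body, wing and tail
    k_wing_body     -- wing-in-presence-of-body interference factor (placeholder)
    k_tail_body     -- tail-in-presence-of-body interference factor (placeholder)
    area_ratio_tail -- tail-to-reference area ratio (placeholder)
    """
    return (cna_body
            + k_wing_body * cna_wing
            + k_tail_body * area_ratio_tail * cna_tail)

# Illustrative values: slender body ~2/rad, wing and tail panels ~3/rad each
print(normal_force_slope(2.0, 3.0, 3.0))  # -> 6.59
```

A real method of this kind supplies the component slopes and interference factors from semi-empirical charts as functions of Mach number, geometry and angle of attack.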
See also
Digital Datcom
Aeroprediction
References
Aerodynamics | Missile Datcom | Chemistry,Engineering | 137 |
7,138,038 | https://en.wikipedia.org/wiki/Calsequestrin | Calsequestrin is a calcium-binding protein that acts as a calcium buffer within the sarcoplasmic reticulum. The protein helps hold calcium in the cisterna of the sarcoplasmic reticulum after a muscle contraction, even though the concentration of calcium in the sarcoplasmic reticulum is much higher than in the cytosol. It also helps the sarcoplasmic reticulum store an extraordinarily high amount of calcium ions. Each molecule of calsequestrin can bind 18 to 50 Ca2+ ions. Sequence analysis has suggested that calcium is not bound in distinct pockets via EF-hand motifs, but rather via presentation of a charged protein surface. Two forms of calsequestrin have been identified. The cardiac form calsequestrin-2 (CASQ2) is present in cardiac and slow skeletal muscle, and the fast skeletal form calsequestrin-1 (CASQ1) is found in fast skeletal muscle. The release of calsequestrin-bound calcium (through a calcium release channel) triggers muscle contraction. The active protein is not highly structured, more than 50% of it adopting a random coil conformation. When calcium binds, there is a structural change whereby the alpha-helical content of the protein increases from 3 to 11%. Both forms of calsequestrin are phosphorylated by casein kinase 2, but the cardiac form is phosphorylated more rapidly and to a higher degree. Calsequestrin is also secreted in the gut, where it deprives bacteria of calcium ions.
Cardiac calsequestrin
Cardiac calsequestrin (CASQ2) plays an integral role in cardiac regulation. Mutations in the cardiac calsequestrin gene have been associated with cardiac arrhythmia and sudden death. CASQ2 is thought to have a role in regulating cardiac excitation-contraction coupling and calcium-induced calcium release (CICR) in the heart, as overexpression of CASQ2 has been shown to substantially raise the magnitude of cell-averaged ICa-induced calcium transients and spontaneous calcium sparks in isolated heart cells. Furthermore, CASQ2 modulates the CICR mechanism by lengthening the process of functionally recharging the sarcoplasmic reticulum's calcium ion stores. A lack of or mutation in CASQ2 has been directly associated with catecholaminergic polymorphic ventricular tachycardia (CPVT). A mutation can have a significant effect if it disrupts the linear polymerization ability of CASQ2, which directly accounts for its high capacity to bind Ca2+. In addition, the hydrophobic core of domain II appears to be necessary for CASQ2's function, because a single amino acid mutation that disrupts this hydrophobic core directly leads to molecular aggregates, which are unable to respond to calcium ions.
See also
Catecholaminergic polymorphic ventricular tachycardia
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Catecholaminergic Polymorphic Ventricular Tachycardia
Protein families
Endoplasmic reticulum resident proteins | Calsequestrin | Biology | 669 |
11,899,236 | https://en.wikipedia.org/wiki/Diallel%20cross | A diallel cross is a mating scheme used by plant and animal breeders, as well as geneticists, to investigate the genetic underpinnings of quantitative traits.
In a full diallel, all parents are crossed to make hybrids in all possible combinations. Variations include half diallels with and without parents, omitting reciprocal crosses. Full diallels require twice as many crosses and entries in experiments, but allow for testing for maternal and paternal effects. If such "reciprocal" effects are assumed to be negligible, then a half diallel without reciprocals can be effective.
Common analysis methods utilize general linear models to identify heterotic groups, estimate general or specific combining ability and interactions with testing environments and years, or estimate additive, dominant, and epistatic genetic effects and genetic correlations.
Mating designs
There are four main types of diallel mating design; the number of entries each requires is worked out in the sketch after this list:
Full diallel with parents and reciprocal F1 crosses
Full diallel as above, but excluding parents
Half diallel with parents, but without reciprocal crosses
Half diallel without parents or reciprocal crosses
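A minimal sketch (in Python) of the entry counts implied by these four designs; the formulas follow directly from counting ordered versus unordered pairs of parents:

```python
def diallel_entries(n, parents=True, reciprocals=True):
    """Number of entries in a diallel among n parents.

    Crosses number n*(n-1) when reciprocals are kept and n*(n-1)//2
    when they are not; the n selfed parents are added when included.
    """
    crosses = n * (n - 1) if reciprocals else n * (n - 1) // 2
    return crosses + (n if parents else 0)

n = 10
print(diallel_entries(n, parents=True,  reciprocals=True))   # 100 (= n**2)
print(diallel_entries(n, parents=False, reciprocals=True))   # 90
print(diallel_entries(n, parents=True,  reciprocals=False))  # 55
print(diallel_entries(n, parents=False, reciprocals=False))  # 45
```

With reciprocals, the full diallel indeed requires twice as many crosses as the half diallel (90 versus 45 here).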
References
Genetics
Breeding | Diallel cross | Biology | 226 |
5,989,904 | https://en.wikipedia.org/wiki/Earcon | An earcon is a brief, distinctive sound that represents a specific event or conveys other information. Earcons are a common feature of computer operating systems and applications, ranging from a simple beep to indicate an error, to the customizable sound schemes of modern operating systems that indicate startup, shutdown, and other events.
The name is a pun on the more familiar term icon in computer interfaces. Icon sounds like "eye-con" and is visual, which inspired D.A. Sumikawa to coin "earcon" as the auditory equivalent in a 1985 article, 'Guidelines for the integration of audio cues into computer user interfaces.'
The term is most commonly applied to sound cues in a computer interface, but examples of the concept occur in broadcast media such as radio and television:
The alert signal that indicates a message from the Emergency Broadcast System
The signature three-tone melody that identifies NBC in radio and television broadcasts
Earcons are generally synthesized tones or sound patterns. The similar term auditory icon refers to recorded everyday sounds that serve the same purpose.
Use in assistive technologies
Assistive technologies for computing devices—such as screen readers including ChromeOS's ChromeVox, Android's TalkBack and Apple's VoiceOver—use earcons as a convenient and fast means of conveying to blind or visually impaired users contextual information about the interface they are navigating. Earcons in screen readers largely serve as auditory cues to inform the user that they have selected a particular type of interface element, such as a button, hyperlink or text input field. They can also provide context about the current document or mode, such as whether a web page is loading.
Earcons provide an enhancement to screen reader usage due to their brevity and subtleness, which is an improvement over using much longer spoken cues to provide context: using a short, distinctive beep when an interface's button is selected can be much faster and therefore more convenient to hear than using speech synthesis to say the word "button".
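As a toy illustration of such a mapping, a minimal sketch follows; the element types, frequencies and durations are invented for the example and are not taken from any actual screen reader:

```python
import math

# Hypothetical earcon palette: element type -> (frequency in Hz, duration in ms)
EARCONS = {
    "button":       (880, 60),
    "link":         (660, 60),
    "text_field":   (440, 90),
    "page_loading": (330, 150),
}

def earcon_samples(element_type, rate=44100):
    """Synthesize a short sine tone for an element type as raw PCM floats."""
    freq_hz, ms = EARCONS[element_type]
    n = int(rate * ms / 1000)
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

samples = earcon_samples("button")  # hand off to any audio playback backend
```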
Due to being non-spoken audio sounds, users must learn to associate the earcons with their meanings to be able to fully benefit from them. To help with learning such associations, some screen readers will also speak the meanings of their respective earcons, albeit towards the end of their full description of an interface element. It is recommended that earcons be introduced early on when learning how to use a screen reader, to ensure that the associations become automatic (and eventually subconscious) through habitual usage.
See also
Jingle
References
Multimodal interaction
Display technology
Auditory displays
User interface techniques | Earcon | Technology,Engineering | 534 |
3,712,179 | https://en.wikipedia.org/wiki/Glycol%20ethers | Glycol ethers are a class of chemical compounds consisting of alkyl ethers that are based on glycols such as ethylene glycol or propylene glycol. They are commonly used as solvents in paints and cleaners. They have good solvent properties while having higher boiling points than the lower-molecular-weight ethers and alcohols.
History
The name "Cellosolve" was registered in 1924 as a United States trademark by Carbide & Carbon Chemicals Corporation (a division
of Union Carbide Corporation) for "Solvents for Gums, Resins, Cellulose Esters, and the Like". "Ethyl Cellosolve" or simply "Cellosolve" consists mainly of ethylene glycol monoethyl ether and was introduced as a lower-cost solvent alternative to ethyl lactate. "Butyl Cellosolve" (ethylene glycol monobutyl ether) was introduced in 1928, and "Methyl Cellosolve" (ethylene glycol monomethyl ether) in 1929.
Types
Glycol ethers are designated "E-series" or "P-series" for those made from ethylene oxide or propylene oxide, respectively. Typically, E-series glycol ethers are found in pharmaceuticals, sunscreens, cosmetics, inks, dyes and water-based paints, while P-series glycol ethers are used in degreasers, cleaners, aerosol paints and adhesives. Both E- and P-series glycol ethers can be used as intermediates that undergo further chemical reactions, producing glycol diethers and glycol ether acetates. P-series glycol ethers are marketed as having lower toxicity than the E-series.
Health impacts
Most glycol ethers are water-soluble and biodegradable, and only a few are considered toxic.
In the early 1990s, studies found higher than expected rates of miscarriages among women who worked in semiconductor plants, which was traced back to glycol ethers used in the photoresist substances that coat semiconductors.
One study suggests that occupational exposure to glycol ethers is related to low motile sperm count, a finding disputed by the chemical industry.
Subclasses
Solvents
Ethylene glycol monomethyl ether (2-methoxyethanol, CH3OCH2CH2OH)
Ethylene glycol monoethyl ether (2-ethoxyethanol, CH3CH2OCH2CH2OH)
Ethylene glycol monopropyl ether (2-propoxyethanol, CH3CH2CH2OCH2CH2OH)
Ethylene glycol monoisopropyl ether (2-isopropoxyethanol, (CH3)2CHOCH2CH2OH)
Ethylene glycol monobutyl ether (2-butoxyethanol, CH3CH2CH2CH2OCH2CH2OH), a widely used solvent in paints and surface coatings, cleaning products and inks
Ethylene glycol monophenyl ether (2-phenoxyethanol, C6H5OCH2CH2OH)
Ethylene glycol monobenzyl ether (2-benzyloxyethanol, C6H5CH2OCH2CH2OH)
Propylene glycol methyl ether, (1-methoxy-2-propanol, CH3OCH2CH(OH)CH3)
Diethylene glycol monomethyl ether (2-(2-methoxyethoxy)ethanol, methyl carbitol, CH3OCH2CH2OCH2CH2OH)
Diethylene glycol monoethyl ether (2-(2-ethoxyethoxy)ethanol, carbitol cellosolve, CH3CH2OCH2CH2OCH2CH2OH)
Diethylene glycol mono-n-butyl ether (2-(2-butoxyethoxy)ethanol, butyl carbitol, CH3CH2CH2CH2OCH2CH2OCH2CH2OH)
Dipropyleneglycol methyl ether
C12-15 pareth-12, a polyethylene glycol ether used as an emulsifier in cosmetics
Dialkyl ethers
Ethylene glycol dimethyl ether (dimethoxyethane, CH3OCH2CH2OCH3), a higher boiling alternative to diethyl ether and THF, also used as a solvent for polysaccharides, a reagent in organometallic chemistry and in some electrolytes of lithium batteries
Ethylene glycol diethyl ether (diethoxyethane, CH3CH2OCH2CH2OCH2CH3)
Ethylene glycol dibutyl ether (dibutoxyethane, CH3CH2CH2CH2OCH2CH2OCH2CH2CH2CH3)
Esters
Ethylene glycol methyl ether acetate (2-methoxyethyl acetate, CH3OCH2CH2OCOCH3)
Ethylene glycol monoethyl ether acetate (2-ethoxyethyl acetate, CH3CH2OCH2CH2OCOCH3)
Ethylene glycol monobutyl ether acetate (2-butoxyethyl acetate, CH3CH2CH2CH2OCH2CH2OCOCH3)
Propylene glycol methyl ether acetate (1-methoxy-2-propanol acetate)
References
Commodity chemicals | Glycol ethers | Chemistry | 1,212 |
2,449,259 | https://en.wikipedia.org/wiki/Guillermo%20Owen | Guillermo Owen (born 1938) is a Colombian mathematician, and professor of applied mathematics at the Naval Postgraduate School in Monterey, California, known for his work in game theory. He is also the son of the Mexican poet and diplomat Gilberto Owen.
Biography
Guillermo Owen was born May 4, 1938, in Bogotá, Colombia, and obtained a B.S. degree from Fordham University in 1958, and a Ph.D. degree from Princeton University under the guidance of Dr. Harold Kuhn in 1962.
Owen has taught at Fordham University (1961–1969), Rice University (1969–1977) and Los Andes University in Colombia (1978–1982, 2008), apart from having given lectures in many universities in Europe and Latin America. He is currently holding the position of Distinguished Professor of applied mathematics at the Naval Postgraduate School in Monterey, California.
Owen is member of the Colombian Academy of Sciences, The Royal Academy of Arts and Sciences of Barcelona, and the Third World Academy of Sciences. He is associate editor of , and fellow of the International Game Theory Society.
Honors and awards
The Escuela Naval Almirante Padilla of Cartagena gave him an honorary degree of Naval Science Professional in June 2004.
Owen was named Honorary President of the XIV Latin Ibero American Congress on Operations Research - CLAIO 2008. Cartagena, Colombia, September 2008.
The university of Lower Normandy, in Caen, France, gave him an honorary doctorate in October 2017.
Publications
Owen has authored, translated and/or edited thirteen books, and approximately one hundred and forty papers published in journals such as Management Science, Operations Research, American Political Science Review, and Mathematical Programming, among others. Owen's books include:
1968. Game theory. Academic Press
1970. Finite mathematics and calculus; mathematics for the social and management sciences. With M. Evans Munroe.
1983. Information pooling and group decision making : proceedings of the Second University of California, Irvine, Conference on Political Economy. Edited with Bernard Grofman.
1999. Discrete mathematics and game theory.
2001. Power indices and coalition formation. Edited with Manfred J. Holler.
References
External links
Biography. Naval Postgraduate School in Monterey, California.
1938 births
20th-century Colombian mathematicians
Colombian expatriates in the United States
Game theorists
Living people
Fordham University alumni
Princeton University alumni
Fordham University faculty
Rice University faculty
Naval Postgraduate School faculty
Operations researchers
21st-century Colombian mathematicians | Guillermo Owen | Mathematics | 487 |
1,126,110 | https://en.wikipedia.org/wiki/Photosystem%20II | Photosystem II (or water-plastoquinone oxidoreductase) is the first protein complex in the light-dependent reactions of oxygenic photosynthesis. It is located in the thylakoid membrane of plants, algae, and cyanobacteria. Within the photosystem, enzymes capture photons of light to energize electrons that are then transferred through a variety of coenzymes and cofactors to reduce plastoquinone to plastoquinol. The energized electrons are replaced by oxidizing water to form hydrogen ions and molecular oxygen.
By replenishing lost electrons with electrons from the splitting of water, photosystem II provides the electrons for all of photosynthesis to occur. The hydrogen ions (protons) generated by the oxidation of water help to create a proton gradient that is used by ATP synthase to generate ATP. The energized electrons transferred to plastoquinone are ultimately used to reduce NADP+ to NADPH or are used in non-cyclic electron flow. DCMU is a chemical often used in laboratory settings to inhibit photosynthesis. When present, DCMU inhibits electron flow from photosystem II to plastoquinone.
Structure of complex
The core of PSII consists of a pseudo-symmetric heterodimer of two homologous proteins D1 and D2. Unlike the reaction centers of all other photosystems in which the positive charge sitting on the chlorophyll dimer that undergoes the initial photoinduced charge separation is equally shared by the two monomers, in intact PSII the charge is mostly localized on one chlorophyll center (70−80%). Because of this, P680+ is highly oxidizing and can take part in the splitting of water.
Photosystem II (of cyanobacteria and green plants) is composed of around 20 subunits (depending on the organism) as well as other accessory, light-harvesting proteins. Each photosystem II contains at least 99 cofactors: 35 chlorophyll a, 12 beta-carotene, two pheophytin, two plastoquinone, two heme, one bicarbonate, 20 lipids, the Mn4CaO5 cluster (including two chloride ions), one non-heme Fe2+ and two putative Ca2+ ions per monomer. There are several crystal structures of photosystem II; among the PDB accession codes are 3BZ1 and 3BZ2, which are monomeric structures of the photosystem II dimer.
Oxygen-evolving complex (OEC)
The oxygen-evolving complex is the site of water oxidation. It is a metallo-oxo cluster comprising four manganese ions (in oxidation states ranging from +3 to +4) and one divalent calcium ion. When it oxidizes water, producing oxygen gas and protons, it sequentially delivers the four electrons from water to a tyrosine (D1-Y161) sidechain and then to P680 itself. It is composed of three protein subunits, OEE1 (PsbO), OEE2 (PsbP) and OEE3 (PsbQ); a fourth PsbR peptide is associated nearby.
The first structural model of the oxygen-evolving complex was solved using X-ray crystallography from frozen protein crystals with a resolution of 3.8 Å in 2001. Over the following years the resolution of the model was gradually increased to 2.9 Å. While obtaining these structures was in itself a great feat, they did not show the oxygen-evolving complex in full detail. In 2011 the OEC of PSII was resolved to a level of 1.9 Å, revealing five oxygen atoms serving as oxo bridges linking the five metal atoms and four water molecules bound to the cluster; more than 1,300 water molecules were found in each photosystem II monomer, some forming extensive hydrogen-bonding networks that may serve as channels for protons, water or oxygen molecules. At this stage, it has been suggested that the structures obtained by X-ray crystallography are biased, since there is evidence that the manganese atoms are reduced by the high-intensity X-rays used, altering the observed OEC structure. This incentivized researchers to take their crystals to different X-ray facilities, called X-ray free-electron lasers, such as SLAC in the USA. In 2014 the structure observed in 2011 was confirmed. Knowing the structure of photosystem II did not suffice to reveal how it works exactly, so a race has started to solve its structure at different stages of the mechanistic cycle (discussed below). Currently, structures of the S1 state and the S3 state have been published almost simultaneously by two different groups, showing the addition of an oxygen molecule designated O6 between Mn1 and Mn4, suggesting that this may be the site on the oxygen-evolving complex where oxygen is produced.
Water splitting
Photosynthetic water splitting (or oxygen evolution) is one of the most important reactions on the planet, since it is the source of nearly all the atmosphere's oxygen. Moreover, artificial photosynthetic water-splitting may contribute to the effective use of sunlight as an alternative energy-source.
The mechanism of water oxidation is understood in substantial detail. The oxidation of water to molecular oxygen requires the extraction of four electrons and four protons from two molecules of water. The experimental evidence that oxygen is released through the cyclic reaction of the oxygen-evolving complex (OEC) within one PSII was provided by Pierre Joliot et al., who showed that if dark-adapted photosynthetic material (higher plants, algae, and cyanobacteria) is exposed to a series of single-turnover flashes, oxygen evolution is detected with a typical period-four damped oscillation, with maxima on the third and the seventh flash and minima on the first and the fifth flash. Based on this experiment, Bessel Kok and co-workers introduced a cycle of five flash-induced transitions of the so-called S-states, describing the four redox states of the OEC: when four oxidizing equivalents have been stored (at the S4 state), the OEC returns to its basic S0 state. In the absence of light, the OEC will "relax" to the S1 state; the S1 state is often described as being "dark-stable". The S1 state is largely considered to consist of manganese ions with oxidation states of Mn3+, Mn3+, Mn4+, Mn4+. Finally, intermediate S-states were proposed by Jablonsky and Lazar as a regulatory mechanism and a link between the S-states and tyrosine Z.
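The damped period-four oscillation falls out of a simple population model of Kok's S-state cycle. A minimal sketch, assuming a dark-adapted mixture of 25% S0 and 75% S1 and a 10% "miss" probability per flash (typical textbook values, not figures from the text above):

```python
import numpy as np

miss = 0.10                               # chance a flash fails to advance a centre
p = np.array([0.25, 0.75, 0.0, 0.0])      # dark-adapted populations of S0..S3

for flash in range(1, 13):
    advanced = (1 - miss) * p             # fraction advancing one S-state
    o2_yield = advanced[3]                # S3 -> (S4) -> S0 releases O2
    p = miss * p + np.roll(advanced, 1)   # shift S0->S1, ..., S3 wraps to S0
    print(f"flash {flash:2d}: relative O2 yield {o2_yield:.3f}")
```

Run as written, the yield is zero on the first two flashes, peaks on the third and again, more weakly, on the seventh, and damps toward a constant value, reproducing the Joliot observation described above.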
In 2012, Renger expressed the idea of internal changes of water molecules into typical oxides in different S-states during water splitting.
Inhibitors
Inhibitors of PSII are used as herbicides. There are two main chemical families, the triazines derived from cyanuric chloride of which atrazine and simazine are the most commonly used and the aryl ureas which include chlortoluron and diuron (DCMU).
See also
Oxygen evolution
P680
Photosynthesis
Photosystem
Photosystem I
Photosystem II light-harvesting protein
Reaction Centre
Photoinhibition
References
Photosynthesis
Light reactions
Manganese enzymes
EC 1.10.3 | Photosystem II | Chemistry,Biology | 1,562 |
64,694,941 | https://en.wikipedia.org/wiki/Prix%20Paul%20Langevin | The prix Paul-Langevin is a prize created in 1956 and named in honor of Paul Langevin. It has been awarded each year since 1957 by the Société française de physique (SFP). The prize honors French physicists for work in theoretical physics.
The prix Paul Langevin should not be confused with the prize of the same name awarded in mathematics, physics, chemistry, or biology by the Académie des sciences.
Recipients
1957 Yves Ayant
1958 Jacques Winter
1959 Roland Omnès
1960 Philippe Nozières
1961 Cyrano de Dominicis
1962 Jacques Villain
1963 Claude Cohen-Tannoudji
1964 Marcel Froissart
1965 Robert Arvieu
1966 Roger Balian
1967 Jean Lascoux
1968 Émile Daniel
1969 Jean Ginibre
1970 Daniel Bessis
1971 Loup Verlet
1972 Claude Itzykson
1973 André Neveu
1974 Édouard Brézin
1975 Dominique Vautherin
1976 Gérard Toulouse
1977 Jean Zinn-Justin
1978 Jean Iliopoulos
1979 Richard Schaeffer
1980 Roland Seneor and Jacques Magnen
1981 Yves Pomeau
1982 Pierre Fayet
1983 Serge Aubry
1984 Thibault Damour
1985 Mannque Rho
1986 Bernard Julia
1987 Bernard Souillard
1988 Paul Manneville
1989 Jean Bellissard
1990 Pierre Coullet
1991 Jean-Bernard Zuber
1992 Rémy Mosseri
1993 Jean-François Joanny
1994 Dominique Escande
1995 Costas Kounnas
1996 Vincent Hakim
1997 Patrick Mora
1998 Denis Bernard
1999 Pierre Binétruy
2000 Jean-Louis Barrat
2001 Vincent Pasquier
2002 Leticia Cugliandolo and Jorge Kurchan
2004 Bart Van Tiggelen
2005 Satya Majumdar
2008 Rémi Monasson
2009 Alain Barrat
2010 Jean-Philippe Uzan
2015 François Gelis and Ubirajara van Kolck
2016 Silke Biermann and Jesper Jacobsen
2017 Olivier Bénichou and Raphaël Voituriez
2021 Mariana Grana and Cédric Deffayet
2022 Kirone Mallick
2023 Jean-Philippe Colombier
References
French science and technology awards
Physics awards
Awards established in 1956 | Prix Paul Langevin | Technology | 422 |
1,898,396 | https://en.wikipedia.org/wiki/Chrome%20yellow | Chrome yellow is a bright, warm yellow pigment that has been used in art, fashion, and industry. It has long been the premier yellow pigment for many applications.
Production of chrome yellow and related pigments
The raw pigment precipitates as a fine solid upon mixing lead(II) salts and a source of chromate. Approximately 90,000 tons of chrome yellow are produced annually as of 2001.
Chrome yellow pigments are usually encapsulated by coating with transparent oxides that protect the pigment from environmental factors that would diminish their colorant properties.
Related lead sulfochromate pigments are produced by replacing some of the chromate with sulfate, resulting in mixed lead chromate-sulfate compositions Pb(CrO4)1-x(SO4)x. This replacement is possible because sulfate and chromate are isostructural. Since sulfate is colorless, sulfochromates with high values of x are less intensely colored than lead chromate. In some cases, chromate is replaced by molybdate.
Permanence
Chrome yellow is moderately resistant to fading from exposure to light when it is chemically pure. Observations have found, though, that over time it begins to darken and discolor, turning brown. This degradation is seen in some of Van Gogh's pieces. According to Gettens, especially when mixed with organic colors, it can take on a green tone. This effect is attributed to reduction of some chromate to chromium(III) oxide. Owing to its high lead content, the pigment is prone to discoloration over time, particularly in the presence of sulfur compounds. Its low cost has doubtless contributed to its continued use as an artists' color even though some subsequently discovered yellow pigments are more permanent. Artists began using cadmium yellow instead of chrome yellow when they became aware of chrome yellow's instability.
The pigment tends to react with hydrogen sulfide and darken on exposure to air over time, forming lead sulfide, and it contains the toxic heavy metal lead plus the toxic, carcinogenic chromate. For these reasons, it was replaced by another pigment, cadmium yellow (mixed with enough cadmium orange to produce a color equivalent to chrome yellow). Darkening may also occur from reduction by sulfur dioxide. Good quality pigments have been coated to inhibit contact with gases that can change their color. Cadmium pigments in turn are increasingly replaced with organic pigments such as arylides (Pigment Yellow 65) and isoindoles (PY 110).
Notable occurrences
Vincent van Gogh used chrome yellow in many of his paintings, including his famous Sunflowers series. Studies focusing on the techniques used in Van Gogh's Sunflowers series have revealed how Van Gogh skillfully mixed various shades of chrome yellow to achieve different effects. Chrome yellow has also been used in fashion and textiles, particularly in the 1920s and 1930s. The vibrant color was a popular choice for flapper dresses, hats, and accessories, and was often paired with other bright colors, such as pink and turquoise.
History
The pigment is derived from lead chromate, a chemical compound that was first synthesized in the early 1800s. The discovery of lead chromate, the primary component of chrome yellow, is credited to the French chemist Louis Nicolas Vauquelin. Vauquelin was studying the mineral crocoite, a natural form of lead chromate, when he identified the presence of a new element, chromium. The discovery led to the synthesis of a variety of new pigments, including chrome yellow. Chrome yellow quickly gained popularity among artists and designers for its bright, sunny hue, which was particularly well-suited for use in fashion and textiles. The earliest known use of chrome yellow in a painting is a work by Sir Thomas Lawrence from before 1810. The first recorded use of chrome yellow as a color name in English was in 1818. The pigment was also widely used in industrial applications, such as in the production of paint, plastics, and ceramics.
Safety
Because it contains not only lead but also hexavalent chromium, chrome yellow has long been the focus of safety concerns. Its use is highly regulated. Its former use as a food colorant has long been discontinued. The continued wide use of this pigment is attributed to its very low solubility, which suppresses leaching of chromate and lead into biological fluids. The LD50 for rats is 5 g/kg.
See also
List of colors
List of inorganic pigments
References
Further reading
Kühn, H. and Curran, M., Chrome Yellow and Other Chromate Pigments, in Artists’ Pigments. A Handbook of Their History and Characteristics, Vol. 1, L. Feller, Ed., Cambridge University Press, London 1986
External links
Chrome yellow, Colourlex
Pichon, A. Pigment degradation: Chrome yellow’s darker side. Nature Chemistry, 5(11), 2013, 897–897. doi:10.1038/nchem.1789
Inorganic pigments
Lead(II) compounds
Chromates
Alchemical substances
Shades of yellow | Chrome yellow | Chemistry | 1,049 |
66,224,377 | https://en.wikipedia.org/wiki/Tempo%20Automation | Tempo Automation was an American electronics development and manufacturing company based in San Francisco, California.
History
Tempo was founded in 2013 by Jeff McAlvay, Jesse Koenig, and Shashank Samala. They started manufacturing customer orders through their platform in 2016.
In 2015, the company raised $8 million in Series A venture funding, led by Lux Capital. They raised $20 million in Series B funding in 2018, and an additional $45 million in Series C funding in 2019, with investors including Point72 Ventures, Lockheed Martin, and Uncork Capital, bringing their total raised to $74.6 million.
In 2018, the company opened a new factory in San Francisco to produce printed circuit boards for low-volume manufacturers and prototyping. In 2019, Joy Weiss was named president and chief executive officer, replacing founding CEO Jeff McAlvay, who became Tempo's chief process officer.
In November 2022, the company went public through a SPAC merger with ACE Convergence Acquisition Corp., raising $100 million from White Lion Capital. The company was listed on the Nasdaq stock exchange with ticker TMPO on November 23.
In July 2023, the company laid off 62 employees, leaving only 7. Their stock was delisted from Nasdaq in October 2023 after falling below the $50M market cap requirement.
References
External links
Official website
Electronics manufacturing
Printed circuit board manufacturing
Electronics companies established in 2013
Manufacturing companies based in San Francisco
2013 establishments in California | Tempo Automation | Engineering | 302 |
9,683,947 | https://en.wikipedia.org/wiki/Stone%20sculpture | A stone sculpture is an object made of stone which has been shaped, usually by carving, or assembled to form a visually interesting three-dimensional shape. Stone is more durable than most alternative materials, making it especially important in architectural sculpture on the outside of buildings.
Stone carving includes a number of techniques where pieces of rough natural stone are shaped by the controlled removal of stone. Owing to the permanence of the material, evidence can be found that even the earliest societies indulged in some form of stonework, though not all areas of the world have such an abundance of good stone for carving as Egypt, Persia (Iran), Greece, Central America, India and most of Europe. Often, as in Indian sculpture, stone is the only material in which ancient monumental sculpture has survived (along with smaller terracottas), although there was almost certainly more wooden sculpture created at the time.
Petroglyphs (also called rock engravings) are perhaps the earliest form: images created by removing part of a rock surface which remains in situ, by incising, pecking, carving, and abrading. Rock reliefs, carved into "living" rock, are a more advanced stage of this. Monumental sculpture covers large works, and architectural sculpture, which is attached to buildings. Historically, much of these types were painted, usually after a thin coat of plaster was applied. Hardstone carving is the carving for artistic purposes of semi-precious stones such as jade, agate, onyx, rock crystal, sard or carnelian, and a general term for an object made in this way. Alabaster or mineral gypsum is a soft mineral that is easy to carve for smaller works and still relatively durable. Engraved gems are small carved gems, including cameos, originally used as seal rings.
Carving stone into sculpture is an activity older than civilization itself, beginning perhaps with incised images on cave walls. Prehistoric sculptures were usually human forms, such as the Venus of Willendorf and the faceless statues of the Cycladic cultures of ancient Greece. Later cultures devised animal, human-animal and abstract forms in stone. The earliest cultures used abrasive techniques, and modern technology employs pneumatic hammers and other devices. But for most of human history, sculptors used a hammer and chisel as the basic tools for carving stone.
Types of stone used in carved sculptures
Soapstone, with a Mohs hardness of about 2, is an easily worked stone, commonly used by beginning students of stone carving.
Alabaster and softer kinds of serpentine, all about 3 on the Mohs scale, are more durable than soapstone. Alabaster, in particular, has long been cherished for its translucence.
Limestone and sandstone, at about 4 on the Mohs scale, are the only sedimentary stones commonly carved. Limestone comes in a popular oolitic variety, about twice as hard as alabaster, that is excellent for carving. The harder serpentines can also reach 4 on the Mohs scale.
Marble, travertine, and onyx are at about 6 on the Mohs scale. Marble has been the preferred stone for sculptors in the European tradition ever since the time of classical Greece. It is available in a wide variety of colors, from white through pink and red to grey and black.
The hardest stone frequently carved is granite, at about 8 on the Mohs scale. It is the most durable of sculptural stones and, correspondingly, an extremely difficult stone to work.
Basalt columns, being even harder than the granite, are less frequently carved. This stone takes on a beautiful black appearance when polished.
Rough and unfinished statues
Rough block forms of unfinished statuary are known and are held in museums. Notable are the Amarna Period statuary of Akhenaten found at Akhetaten. One known sculptor, Thutmose, had his entire workshop excavated at Akhetaten, with many unfinished block forms.
The process of stone sculpture
In the direct method of stone carving, the work usually begins with the selection of stone for carving, the qualities of which will influence the artist's choices in the design process. The artist using the direct method may use sketches but eschews the use of a physical model. The fully dimensional form or figure is created for the first time in the stone itself, as the artist removes material, sketches on the block of stone, and develops the work along the way.
The indirect method, by contrast, begins with a clearly defined model to be copied in stone. The models, usually made of plaster or modeling clay, may be fully the size of the intended sculpture and fully detailed. Once the model is complete, a suitable stone must be found to fit the intended design. The model is then copied in stone by measuring with calipers or a pointing machine. This method is frequently used when the carving is done by other sculptors, such as artisans or employees of the sculptor.
Some artists use the stone itself as inspiration; the Renaissance artist Michelangelo claimed that his job was to free the human form hidden inside the block.
Copying by "pointing"
The copying of an original statue in stone, which was very important for Ancient Greek statues, which are nearly all known from copies, was traditionally achieved by "pointing", along with more freehand methods. Pointing involved setting up a grid of string squares on a wooden frame surrounding the original, and then measuring the position on the grid and the distance between grid and statue of a series of individual points, and then using this information to carve into the block from which the copy is made. Robert Manuel Cook notes that Ancient Greek copyists seem to have used many fewer points than some later ones, and copies often vary considerably in the composition as well as the finish.
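In coordinate terms, pointing amounts to transferring measured reference points from the model into the block, with a uniform scale factor when the copy is enlarged or reduced. A minimal sketch; the point, scale and origin below are made up for illustration:

```python
def transfer_point(model_point, scale=1.0, origin=(0.0, 0.0, 0.0)):
    """Map a point measured on the model (x, y, z) into block coordinates.

    The grid of strings fixes a shared origin and axes on both model and
    block; each measured point is reproduced at the chosen scale from there.
    """
    return tuple(o + scale * c for c, o in zip(model_point, origin))

# A nose tip measured at (12.0, 30.5, 8.2) cm from the grid, copied at 2x size
print(transfer_point((12.0, 30.5, 8.2), scale=2.0))  # -> (24.0, 61.0, 16.4)
```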
Roughing out
When he or she is ready to carve, the carver usually begins by knocking off, or "pitching", large portions of unwanted stone. For this task, the carver may select a point chisel, which is a long, hefty piece of steel with a point at one end and a broad striking surface at the other. A pitching tool, a wedge-shaped chisel with a broad, flat edge, may also be used at this early stage; it is useful for splitting the stone and removing large, unwanted chunks. The sculptor also selects a mallet, which is often a hammer with a broad, barrel-shaped head. The carver places the point of the chisel or the edge of the pitching tool against a selected part of the stone, then swings the mallet at it with a controlled stroke. He must be careful to strike the end of the tool accurately; the smallest miscalculation can damage the stone, not to mention the sculptor's hand. When the mallet connects with the tool, energy is transferred along the tool, shattering the stone. Most sculptors work rhythmically, turning the tool with each blow so that the stone is removed quickly and evenly. This is the "roughing out" stage of the sculpting process.
Refining
Once the general shape of the statue has been determined, the sculptor uses other tools to refine the figure. A toothed chisel or claw chisel has multiple gouging surfaces which create parallel lines in the stone. These tools are generally used to add texture to the figure. An artist might mark out specific lines by using calipers to measure an area of stone to be addressed and marking the removal area with pencil, charcoal or chalk. The stone carver generally uses a shallower stroke at this point in the process.
Final stages
Eventually, the sculptor has changed the stone from a rough block into the general shape of the finished statue. Tools called rasps and rifflers are then used to enhance the shape into its final form. A rasp is a flat, steel tool with a coarse surface. The sculptor uses broad, sweeping strokes to remove excess stone as small chips or dust. A riffler is a smaller variation of the rasp, which can be used to create details such as folds of clothing or locks of hair.
The final stage of the carving process is polishing. Sandpaper or sand cloth can be used as a first step in the polishing process. Emery, a stone that is harder and rougher than the sculpture media, is also used in the finishing process. This abrading, or wearing away, brings out the colour of the stone, reveals patterns in the surface and adds a sheen. Tin and iron oxides are often used to give the stone a highly reflective exterior. Today, modern stone sculptors use diamond abrasives to sand in the final finishing processes. This can be achieved by hand pads in rough to fine abrasives ranging from 36 grit to 3000 grit. Also, diamond pads mounted on water-cooled rotary air or electric sanders speed the finishing process.
Contemporary techniques
In the 21st century, stone sculpture has grown to encompass technologically advanced tools including robots, super computers, and algorithms. In 2017, Karen LaMonte first displayed Cumulus, her eight-foot-tall, two-and-a-half ton sculpture of a cumulus cloud carved from Italian marble. To create the work, LaMonte collaborated with California Institute of Technology scientists to model conditions needed to create a cumulus cloud. She then replicated the resulting cloud model in marble using a combination of robot and hand carving. "Rarely does someone just start chipping away in stone," LaMonte told Caltech magazine. "Think about Michelangelo; he submerged his wax model of David in water, exposing it layer by layer and carving the marble to match the emerging figure. Three hundred years later, Antonio Canova perfected the pointing machine to transfer exact points from a model onto marble, followed by Benjamin Cheverton's patented 3-D pantograph. Only by using technology could I make the diaphanous solid and the intangible permanent." The sculpture required four weeks of robot-driven carving, followed by four weeks of hand-finishing, to complete.
Gallery
See also
Marble sculpture
Stone carving
Sculpture
List of decorative stones
List of colossal sculptures in situ
Rock-cut architecture
References
External links
Carving a stone column: Pitching, video
Demonstrating how to carve details in a stone sculpture of the Charioteer of Delphi, video
The Cesnola collection of Cypriot art: stone sculpture, a fully digitized collection catalog from The Metropolitan Museum of Art Libraries, which contains material on many stone sculptures
Sculpture materials
Stonemasonry
Sculptures by medium | Stone sculpture | Engineering | 2,142 |
31,586,643 | https://en.wikipedia.org/wiki/Policies%20promoting%20wireless%20broadband%20in%20the%20United%20States | Policies promoting wireless broadband are policies, rules, and regulations supporting the "National Wireless Initiative", a plan to bring wireless broadband Internet access to 98% of Americans.
Spectrum is limited and much of it is already in use, which raises questions about the capacity and strength available to support the network. The infrastructure has to reach across the entire United States, including areas that normally do not have Internet access. The main concept is to bring wireless service to residents in areas that may otherwise not have access to it. The public's interest in the plan is important, as the public are the ones who will use the service. Network neutrality raises issues of freedom of information and of who will control how information is released, or whether there will be any control at all.
The Memorandum on Unleashing the Wireless Broadband Revolution claimed that wireless Internet access had the potential to enhance economic competition and improve the quality of life. The Internet is considered an important part of the economy and advanced business opportunities as it is a vital infrastructure. The Code of Federal Regulations says that this is the beginning of the next transformation in information technology, as we encounter the wireless broadband revolution.
Strategy
The initial plan by President George W. Bush was to have broadband availability for all Americans by 2007. In February, 2011, President Obama announced details of the "National Wireless Initiative" or "Wireless Innovation and Infrastructure Initiative".
Federal Communications Commission (FCC) Chairperson Michael K. Powell created the Wireless Broadband Access Task Force to help bring the plan together. The members study existing wireless broadband policies and make recommendations on the FCC's policies to accelerate the deployment of wireless technologies and services. This is done by seeking out the expertise, experience, and advice of consumers, state and local governments, the industry, and other stakeholders. These recommendations are intended to assist with policy-making and to further the process of the national wireless plan. They are based on an inquiry into the state of wireless broadband as well as the FCC's policies that impact these services.
Powell commented in a statement that this broadband plan is a catalyst for positive change, bringing resources and jobs to communities across the country. CTIA - The Wireless Association encouraged legislative action that recognizes the unique and invaluable role of wireless in providing Americans Internet access.
This plan included issues such as:
Radio spectrum
How the radio spectrum will support national wireless broadband
Cost of creating the wireless network
Maintaining the network
Who will create the network and infrastructure
Who will support the network and infrastructure
Consideration of what the public interest is in this plan
The wireless plan's policies
What the current President of the United States plans to do
How the previous President George W. Bush promoted this plan.
Spectrum
Freeing space
The plan is to free up enough of the radio spectrum from licensed and unlicensed space. To free up space, the FCC intends to hold incentive auctions in which current licensees voluntarily give up their spectrum, spurring innovation. The CTIA has asked the FCC to secure more capacity on the spectrum by prioritizing additional spectrum through the national wireless plan, with the aim of ensuring enough room on the spectrum for wireless broadband to work. Legislation would be needed both to conduct the auctions and to reassign and reallocate the spectrum; under the plan, legislation would allow the FCC to hold auctions in which current spectrum holders can realize a portion of the revenues if they participate. These voluntary incentive auctions for licensees are a critical part of freeing the spectrum, as well as of encouraging the government to use the spectrum more efficiently. The auctions are also intended to bring revenue to the United States and to new licensees. As of February 2011, however, it was not clear whether these incentive auctions would take place. One goal of the plan is to reduce the national deficit by approximately $10 billion through the license auctions and other business opportunities; the auctions and increased spectrum efficiency from the government would raise $27.8 billion over the next ten years. The government would be expected to use the spectrum more efficiently, using new financial-compensation tools and commitments to advanced technologies. There is also a proposal to spur innovation by devoting $3 billion of the spectrum proceeds to research and development of newer wireless technologies and applications.
Spectrum sections to use
Wireless requires bandwidth, and because of this, enough of the spectrum would need to be obtained to sustain the bandwidth of the network. It also needs to be considered that upload links and downstream communication can require different amounts of space: in 2005, FCC News mentioned that there needs to be enough spectrum to account for the asymmetry of broadband services, which typically need a larger amount of bandwidth downstream than for upload links. Wireless will only work with adequate spectrum to support the initiative and the many devices, networks, and applications that it will run. President Obama has set a goal of freeing 500 MHz of the spectrum for any wireless device within a decade, a goal the CTIA also supports. The space would also need to be within the stronger part of the spectrum. According to the 112th United States Congress, the Public Safety Spectrum and Wireless Innovation Act calls for this to be within the 700 MHz D block spectrum for rural and urban areas, and this was originally requested to be done before the digital TV transition. The 700 MHz D block refers to the portions of the spectrum between the following frequencies:
758 MHz to 763 MHz
788 MHz to 793 MHz.
In its efforts to support the FCC, the CTIA also recommended spectrum in the following ranges:
1.7 GHz
2.1 GHz bands.
With 4G deployment rising, it is critical to have the airwave space to support future innovation and to avoid the spectrum crunch. This provides clearance in the spectrum that is already allocated to wireless carriers. The CTIA has also requested access to existing utility poles where new construction is not possible. Although the spectrum is wide, its science and physics still leave only limited amounts of usable space.
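The physical limit behind the spectrum crunch is the Shannon-Hartley theorem, C = B log2(1 + SNR): a link's capacity grows only linearly with its bandwidth B, so carrying more traffic ultimately demands more spectrum. A rough sketch, using the 10 MHz of the 700 MHz D block described above and an assumed, purely illustrative 20 dB signal-to-noise ratio:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley upper bound on the capacity of a single link."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# 10 MHz (the paired 2 x 5 MHz blocks of the 700 MHz D block) at 20 dB SNR
capacity = shannon_capacity_bps(10e6, 20)
print(f"{capacity / 1e6:.0f} Mbit/s upper bound")  # ~67 Mbit/s
```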
Reserved spectrum
President Obama also plans to increase public safety by reallocating the D block of the spectrum and $500 million within the Wireless Innovation (WIN) Fund. The 700 MHz D block spectrum would be reallocated and integrated for public safety entities. The Communications Act of 1934 would be amended to increase the electromagnetic spectrum allocated to public safety by 10 MHz. An important part of this spectrum plan is that there is already space for public safety. One piece that needs to be determined is how to integrate the reallocated 700 MHz D block with the existing public safety spectrum. How the current 20 MHz of public safety spectrum should be licensed also needs to be determined; the options under consideration are nationwide, regional, statewide, or some combination, in accordance with the public interest.
Review of the spectrum use
In Section 205 of the Public Safety Spectrum and Wireless Innovation Act of the 112th United States Congress, a review of the use of the spectrum is required after a certain period of deployment. No more than 5 years after the implementation of the wireless network plan, the Commission must conduct a survey and submit a report regarding the public safety spectrum. This includes how the spectrum is being used, recommendations on whether more spectrum is needed, and a determination of whether there is an opportunity for some of the spectrum to be returned for other commercial purposes. The report is intended to make sure that there is the right amount of spectrum and to ensure it is being used for the correct purposes, as there is only a limited amount of spectrum overall. The Obama administration would also like to test the value of underutilized spectrum to open new avenues of use. Since spectrum space is limited, utilizing other spectrum could allow other licensees to move elsewhere, perhaps freeing up more for wireless, and would allow the development of advanced technologies. The Secretary of Commerce, the National Science Foundation (NSF), the Department of Defense, the Department of Justice, NASA and other organizations have been designated to create a plan to explore these innovative spectrum-sharing technologies.
Technology and infrastructure
In the 1950s, the town of Ten Sleep, Wyoming, set up its phone service using federal subsidies, stringing copper wire to every home. In 2005, it upgraded to fiber optic cable, giving residents high-speed Internet access. This success story caught President Obama's eye, and he hopes to duplicate it with the national wireless broadband. On February 10, 2011, he pointed to this example of what he wants to replicate, hoping it will help drive economic development by providing Internet to almost all Americans. Brendan Greeley of Business Week magazine does not believe Obama's plan will create another such success story. While examples are helpful to reference when creating a plan, not all plans play out the same way: there are many different factors involved, such as geography and the type of users involved.
How a technology is designed and built is just as important as the technology itself. Without a proper infrastructure, a national wireless broadband network would not benefit the country. George Ford of the Phoenix Center commented that a reasonable target for broadband would be 95% Internet availability to Americans in five years, and questioned the need for coverage across the entire country. There has been a large increase in Internet users over the last five years. However, there are still reasons to have wired networking: as stated by Brendan Greeley, call centers and data storage facilities placed in smaller towns need the speed and capacity that a wired fiber optic network can provide. Wireless networks pose challenges that wired networks do not. One challenge is that a wireless network's infrastructure depends on the spectrum, whereas a wired network's does not. Ten Sleep installed fiber optic cable to increase its network speed; wireless has no such capability. Fiber optic cable has more capacity than the electromagnetic spectrum, meaning that even if the entire spectrum were allocated to the national wireless network, it would still not match the capacity of fiber optics.
New technologies
Technology is growing at an incredible speed, and the speed of information with it. The newest generation, "4G", is being deployed rapidly throughout the United States by the leading carriers and promises to be greatly beneficial to the economy and society. Next-generation technology is ten times faster than current speeds and is capable of benefiting all Americans, improving public safety, and further progressing innovation in wireless applications, equipment and services. The advancement of technology is intended to move the country forward and catch up with other nations that have already implemented these technologies. Technological advances in wireless broadband, like mobile broadband, provide a solid foundation for improved delivery of services.
Support of the wireless network
A supported infrastructure is important for any network, whether technologically based or socially based, and the national wireless network is no different. Under Section 105, Interoperability, of the bill S.28, the Commission must establish the technical and operational rules to ensure the national wireless networks are interoperable. It has yet to be established who will actually support the network, whether the government or a private Internet service provider. Rules are to be established to permit a public safety licensee to authorize a service provider to construct and operate the wireless network. The plan also calls for the service provider to use its licensed spectrum if the authorization would expedite broadband communications. The supporting parties will also have to ensure the safety of the network by protecting against and monitoring for cyber attacks and any other form of security threat. It is imperative to have a secure network that is accessible for the nation to use: a safe environment is needed for new capabilities to be secure and trustworthy, and to provide the necessary safeguards for the privacy of users and for public safety.
Cost
President Obama estimated a one-time investment of $5 billion and a reformation of the Universal Service Fund to help millions of Americans get access to these technologies. Another estimate puts the cost of the wireless broadband plan at $7.2 billion from stimulus funds, and another plan calls for $10.7 billion to support the development and deployment of the wireless broadband. Despite the cost, wireless would afford public safety greater levels of effectiveness and interoperability. As broadband technologies develop, equipment and services are getting faster and cheaper. Again, Obama proposes to pay for the wireless network by having broadcasters give back their privilege to the spectrum for government auction. The auction proceeds would then be mostly profit, so costs would come from the infrastructure and maintenance of the network, not from the spectrum space. It is still questioned whether the costs are too high and whether the end benefits outweigh them. In a report on Fox News Channel, George Ford stated that spending money on the last frontier of broadband has small incremental value. Obama not only estimated a one-time investment but also stated a public safety cost: he called for a $10.7 billion investment to ensure public safety benefits from the technology, with $3.2 billion to reallocate the D block of the spectrum, as mentioned before. This band of the spectrum would be reserved solely for public safety, as stated under current law. Another $7 billion would be needed to support the deployment of the network, plus $500 million from the Wireless Innovation (WIN) Fund for research and development and for tailoring the network to public safety needs. Although many billions of dollars will go towards building this plan, reducing the national deficit by billions of dollars can be considered worthwhile; again, President Obama hopes the plan will cut the national deficit by $9.6 billion over the next ten years.
The thought of whether the cost is too high also raises other points in the media. John Horrigan of the Pew Research Center commented that the high cost of broadband now is why more Americans are not already using it. There is also the consideration that not every American has access to a computer. Although smartphones with Internet capabilities have been on the rise for many years now, there is still a reason the entire nation is not "online". Whether it is the cost of an Internet-ready device or computer, or obtaining Internet service, cost is still a big factor for this technology. In terms of cost, one-third of Americans who do not have broadband access say cost prohibits them from purchasing it.
Impact on the people
Public interest is important for policies promoting wireless broadband for Americans. The interest of the public matters because, if the public does not accept the cost and utility, the nation will have to absorb the failure of the deployment. However, the public may also consider this an excellent development for the country. Part of the national wireless broadband goal is to enable businesses to grow faster, help students learn more, and give public safety officers the best, state-of-the-art technology and communications available. During his State of the Union address, President Obama announced a National Wireless Initiative to make high-speed wireless Internet service available to 98% of Americans. The plan was primarily designed to reach more rural areas that otherwise would not have the opportunity to obtain the service. On February 10, 2011, President Obama was commended for his proposal to pursue the plan on the grounds that it would greatly increase jobs and innovation. The concept of the "last mile" often comes up for Internet service providers (ISPs) as they try to expand their networks, often having to stop before the last house on the block because of cost. Even though this issue occurs throughout rural areas, 57% of Americans use broadband services and 91% already have access, according to the Pew Internet & American Life Project. Those who have Internet access can reach an incredible amount of information at any time, as long as they have an Internet-ready device. There has also been an increase in the applications that use Internet services; the proliferation of wireless applications continues to empower users and communities.
Home Internet usage
A national wireless broadband network is not only about providing Internet access for personal computers in the home, but for anyone with a wireless Internet-ready device. In 2006, the number of households passed by high-speed Internet networks was 119 million, and over the preceding two years the cable industry had invested $23 billion in its networks. As the number of unserved homes declines, broadband technology continues to develop. Commissioner Robert M. McDowell commented that broadband has achieved the fastest penetration of any technology in modern history. With broadband technology, the number of Internet-ready devices has increased year after year. Both cell phones and laptops with wireless capabilities have increased Internet usage dramatically, and each has grown more prevalent since 2009. It is no longer only working professionals who make heavy use of the Internet, but young adults as well. About 47% of adults go online with a laptop, up from 39% as of April 2009, and 40% of adults go online with a mobile phone, up from 32% in 2009. The Internet is one of the fastest-growing technologies because of the number of devices designed to utilize it. According to Pew Research, 59% of adults access the Internet wirelessly through some type of wireless device, again an increase from 51% in April 2009. Internet access has become a critical part of daily life, and the deployment and development of wireless broadband, along with other technologies, is critical to ensuring this reliable and ubiquitous service is available to Americans.
Impact on health matters and other public uses
Some have considered not only personal or business-related uses of national wireless, but also health-related uses for hospitals and their patients. As Blair Levin of President Obama's Technology, Innovation and Government Reform team states, the plan would create a world-class broadband platform allowing the modernization of health care records and the reform of education. Instant availability of health records for doctors and patients, and the ability to teach and be taught from anywhere in the United States, are possibilities some may never have considered, or thought possible. As John Horrigan of the Pew Research Center stated, people not seeing the technology as relevant to them is a bigger barrier than the cost of the technology itself. Michael Powell stated that Americans benefit most when policies enable consumers and businesses to fully utilize the benefits of emerging technology.
Public safety
Public safety has already been mentioned in terms of spectrum use and cost; it is also important in regard to public interest. The implementation of national wireless broadband would help improve public safety communications. The 9/11 Commission noted that homeland security is vulnerable due to the lack of interoperable wireless communication among first responders. The plan would allow all public safety officials to be on the same network and to get the correct information more quickly and safely. 4G networking provides a unique opportunity to deploy such a system in conjunction with the commercial infrastructure already available.
Network neutrality
Network neutrality is becoming a major issue in the policies involved in the national wireless plan. If the plan comes together, the question of restricting access to the Internet is a reason for concern. Network neutrality is the principle that ISPs should not impose rules and restrictions on what consumers are able to access through the Internet. As stated by the CTIA, current economic conditions make it hard to understand why people want to impose network neutrality rules and inject uncertainty into an industry that seems to be working well for the U.S. This applies to both wired and wireless broadband networks; these types of infrastructures cannot be managed for customers and their expectations with a one-size-fits-all approach. The debate concerns what restrictions ISPs should be allowed to impose when consumers are paying for the service they want. As with wired Internet access, the CTIA has stated that it strongly believes regulation is not necessary and may do more harm than good.
More users are obtaining access to the Internet and have the wireless devices to access it, so it is no surprise that wireless is the fastest-growing broadband service. There is also an increase in the number of users who rely solely on wireless instead of wired connections. Wireless service providers are constantly competing to create the best network with the best service and quality, and with more towers and more advanced technologies, wireless has become a convenient and widely accessible mode of communication. The drawback, however, lies in the technology on which wireless Internet depends: as previously stated, the spectrum itself is limited, and wireless data networks rely on this finite resource.
Since the FCC has developed the plan to open the spectrum for the wireless network, the issue of network neutrality is a cause for concern for some. The CTIA stated that the imposition of network neutrality would inject uncertainty into the market. Since this concept supports users having access to the information they want through the Internet, opponents argue that such regulation could nonetheless leave consumers with limited options, ultimately harming consumers and hampering innovation.
Pros vs. cons
Pros:
Provides universal broadband Internet access to the majority of Americans
Intends to reduce the national deficit by billions of dollars
Helps America catch up on economic development that other countries achieved years ago
Creates jobs that would otherwise continue to be lost
Many Americans already switching to only wireless Internet
Potential to improve public safety communications with everyone being on the same network
Allows widespread access to health records and education
Cons:
Takes away Americans' freedom to choose their own Internet Service Provider
Uses money from the U.S. government (FCC) that could be used for other products and services
Removes current Internet Service Providers from the economy as they will no longer be needed by consumers
Loss of jobs through ISPs
Users may not realize the benefit of the technology and not use it to full potential
Could threaten network neutrality
Possible security threats and cyber attacks, with one location holding all information
See also
Hotspot (Wi-Fi)
National Broadband Plan (United States)
Spectrum auction
Spectrum reallocation
Wireless broadband
References
Broadband
Computer law
Wireless networking
Internet in the United States
Federal Communications Commission | Policies promoting wireless broadband in the United States | Technology,Engineering | 4,401 |
32,634,940 | https://en.wikipedia.org/wiki/Polbase | Polbase (DNA Polymerase Database) is an open repository of DNA polymerase information. Polbase captures information from published research on polymerase activity, and presents it in context with related work. Polbase indexes over 5,000 references from the 1950s to the present and includes hundreds of polymerases and their related mutants. Polbase's collaborative model allows polymerase investigators to complete, correct and validate Polbase's representation of their work.
Content
Polbase features a listing of known polymerases categorized by organism, polymerase family, and selected properties. Each indexed polymerase has its own snapshot page containing links to all its information in the database. All results in Polbase are stored with the relevant experimental details to put them into context. If structure information is available, Polbase links to the polymerase's Protein Data Bank (PDB) entry. All information gathered in Polbase is linked to the original publication where it was reported.
Features
Polymerases by family, organism and properties
Search by author, organism, polymerase name, property, etc.
Browsing by reference
Browsing by author
Browsing by organism
Information sources
Polbase draws information from a variety of sources including PubMed, PDB, and directly from polymerase investigators.
Interconnections
Polbase is connected with various other databases. These include:
The Protein Data Bank
European Bioinformatics Institute
ExPASy Bioinformatics Resource Portal
UniProt
BRENDA
PubMed
Various Scientific Journals
History
Polbase began in March 2009 with a grant from the NIH's SBIR program and was first presented to the public at MIT's DNA and Mutagenesis Meeting.
In March 2010 Polbase was presented to a larger audience at the Evolving Polymerases 2010 Conference.
Polbase was also presented in more technical detail at the Rocky 2010 ISMB Conference.
Polbase is described in more detail in the 2012 Nucleic Acids Research Database Issue.
Polbase was built at New England Biolabs by Brad Langhorst and Nicole Nichols with the help of founding collaborators Linda Reha-Krantz, Bill Jack, Cathy Joyce, Stu Linn, Stefan Sarafianos, Sam Wilson, and Roger Woodgate.
References
External links
The DNA Polymerase Database (Polbase)
Enzyme databases
DNA replication | Polbase | Chemistry,Biology | 463 |
2,979,876 | https://en.wikipedia.org/wiki/Apollodorus%20%28painter%29 | Apollodorus Skiagraphos () was an influential Ancient Greek painter of the 5th century BC whose work has since been entirely lost. Apollodorus left a technique behind known as skiagraphia, a way to easily produce shadow, that affected the works not only of his contemporaries but also of later generations. This shading technique uses hatched areas to give the illusion of both shadow and volume.
Life and accomplishments
Little is known about the actual life of Apollodorus, although he was catalogued by the notable historians Plutarch and Pliny the Elder. It was recorded that Apollodorus was active around 480 BCE; his dates of birth and death, however, are not attested in any surviving historical works or fragments. He was given different names by those who wrote about him. To Pliny, he was the great painter Apollodorus of Athens; it can therefore be assumed that he lived and worked in the polis of Athens. But to Plutarch and Hesychius, he was known as Apollodorus Skiagraphos, "the shadow-painter", named after his greatest legacy.
None of his actual paintings survive: due to weathering, almost all ancient Greek paintings have been destroyed, and the elegance and beauty of Greek art can now mainly be glimpsed in the Macedonian tombs with their rich artistic programmes, in works such as the Derveni Krater, and in the sculptures and motifs later copied by the Romans and in the architectural ruins that remain. The subjects of some of his paintings were recorded, however, by a number of ancient Greek historians. Pliny the Elder recorded two paintings, Praying Priest and Ajax Burned by Lightning, that resided in the ancient Greek city of Pergamon, situated in modern-day Turkey. Other ancient Greek historians cited the painting Odysseus Wearing a Cap and also Heracleidae, a painting that referenced the descendants of Hercules. One of his paintings was also supposedly entitled Alcmena and the Daughters of Hercules Supplicating the Athenians.
As demonstrated by the titles of the paintings, it is probable that the majority of his works were similar to the other artists of the era in that their subject matter was most often based around the Greek gods and goddesses or other famous Greek citizens from historical epic poems that were passed on for generations in the oral tradition.
The topics of his paintings may have been unimaginative and common for the period; however, it was his ingenious technique that made him such a renowned painter. One of the major artistic techniques Apollodorus developed was called skiagraphia, or shading in English, hence his title "the shadow-painter". The historian Plutarch recorded an inscription above one of Apollodorus' paintings which read, "'Tis no hard thing to reprehend me; But let the men that blame me mend me." In other words, "You could criticize [skiagraphia] more easily than you could imitate it".
The type of shading applied by Apollodorus is highly sophisticated and even today people struggle to master skiagraphia. Apollodorus used an intricate way of “crosshatching and the thickening of inner contour lines as well as the admixture of light and dark tones” to show a form of perspective. Though it expanded the use of perspective in the ancient Greek world, skiagraphia was most effective in the depiction of stationary objects such as drapery, fruit, or faces; but it was ineffective in the painting of a body in action or a spatial setting for which perspective is usually used.
Another of Apollodorus' greatest accomplishments had to do not with his actual style or technique, but with the medium he chose. Apollodorus may well have been one of the first well-known artists to paint on an easel as opposed to a wall, which was the common practice of the day.
Effect on contemporaries
Though not much about his life is known, historians have made assumptions about Apollodorus and his works and actions through his contemporaries.
Zeuxis of Heraclea was one of Apollodorus' rivals, according to Pliny. Zeuxis was tutored in the arts by Demophilus of Himera and Neseus of Thasos. At one point, Apollodorus even accused Zeuxis of stealing art techniques from others, which may well have had some truth to it, because Zeuxis was also credited with the expansion and development of Apollodorus' prized skiagraphia. Zeuxis is said to have innovated skiagraphia by "adding highlights to shading and applying subtly different colours."
Regardless of what Zeuxis did, he was not the only painter to adapt Apollodorus' creation for his own purposes. Another painter, Parrhasius of Ephesus, also a rival of Zeuxis, helped expand skiagraphia as well. He purportedly used it in a contest against Zeuxis and won because the curtain that Parrhasius had painted looked so real that Zeuxis tried to pull it back. Whereas Zeuxis examined the technique of light and shade in skiagraphia, Parrhasius looked into the contoured lines that help express depth in a spatial way, taking the meaning of skiagraphia even further.
Not only was skiagraphia prominent in Athens, but its influence also extended beyond Athens' borders into the tomb paintings of Vergina, Aineia, and Lefkadia in northern Greece, and even into Seuthopolis, a city in what is now modern Bulgaria. Though scarce, some of the tomb frescoes in Seuthopolis used only a limited range of colours; however, others in Vergina and Aineia used six or more colours, further demonstrating the extent of the transformation of Apollodorus' skiagraphia. Skiagraphia continued to mutate and develop until the age of the Italian Renaissance, when it was given a new name: chiaroscuro.
Effect on the development of chiaroscuro
Apollodorus' development of skiagraphia was only the beginning of the gradual development of chiaroscuro. In Italian, chiaro means light and scuro means dark, so the two together denote the combination and distribution of light and dark to create a more lifelike image. No longer used simply for paintings of stationary objects on canvas, chiaroscuro came to be used in all types of art, even sculpture, frescoes, and woodcuts. Chiaroscuro is used to produce volume and relief, to unify the objects in a painting, or to differentiate them from one another. The simple creation of skiagraphia led to the invention of diverse techniques that continued to develop from the times of ancient Greece through the Gothic period, reaching a pinnacle in the Italian Renaissance beginning in the 14th century. Even today it remains important to artists.
In the 15th century, chiaroscuro was described by Cennino Cennini, a famous Italian painter. He stated that the ideas of gradation between light and dark in skiagraphia were combined with the medieval techniques known as incidendo and matizando, "layerings of white, brown, or black in linear patterns over a uniform colour", to indicate relief and volume. These two techniques had previously been used by monks in the illustration of religious manuscripts, and their addition to skiagraphia was instrumental in the evolution of chiaroscuro.
Giotto, a Florentine painter, and Cimabue, Giotto's teacher, used chiaroscuro in their late Gothic painting as well, by mixing large amounts of white into the painting, therefore creating an easy transition between tones. In frescoes, mosaics, and manuscript illuminations, artists like Master Honore, a French manuscript painter, and Pietro, a painter and mosaic designer active in the Middle Ages, modelled from underneath with black and white space to create brightness in their works. In the end, Apollodorus' master creation after years of evolution transformed into something that, though it still resembled the original and served the same purpose, was new and thoroughly necessary to all great works of art.
Notes
References
“Apollodorus.” The Columbia Electronic Encyclopedia, Sixth Edition. Columbia University Press, 2003. Answers.com. 26 Nov. 2008. http://www.answers.com/topic/apollodorus-painter.
Arafat, Karim. "Zeuxis." The Oxford Companion to Western Art. 2008. 14 May 2006. Oxford Art Online. Lucas Library, Atherton. 26 Nov. 2008. http://oxfordartonline.com/subscriber/article/opr.
Bell, Janis C. "Chiaroscuro". Grove Art Online. Oxford: Oxford UP, 2006. 25 Feb. 2007. Oxford Art Online. Lucas Library, Atherton. 26 Nov. 2008.
Pliny. The Natural History of Pliny. Trans. John Bostcock and Henry T. Riley. H.G. Bohn, 1857.
Pollitt, Jerome J. The Art of Ancient Greece : Sources and Documents. New York: Cambridge UP, 1990.
Robertson, Martin. A Shorter History of Greek Art. New York: Cambridge UP, 1981.
External links
Ancient Greek painters
5th-century BC Greek people
Artists of ancient Attica
Year of birth unknown
Year of death unknown
5th-century BC painters
Shadows
Greek male painters | Apollodorus (painter) | Physics | 1,956 |
30,075,351 | https://en.wikipedia.org/wiki/Machine-generated%20data | Machine-generated data is information automatically generated by a computer process, application, or other mechanism without the active intervention of a human. While the term dates back over fifty years, there is some current indecision as to the scope of the term. Monash Research's Curt Monash defines it as "data that was produced entirely by machines OR data that is more about observing humans than recording their choices." Meanwhile, Daniel Abadi, CS Professor at Yale, proposes a narrower definition, "Machine-generated data is data that is generated as a result of a decision of an independent computational agent or a measurement of an event that is not caused by a human action." Regardless of definition differences, both exclude data manually entered by a person. Machine-generated data crosses all industry sectors. Often and increasingly, humans are unaware their actions are generating the data.
Relevance
Machine-generated data has no single form; rather, the type, format, metadata, and frequency respond to some particular business purpose. Machines often create it on a defined time schedule or in response to a state change, action, transaction, or other event. Since the event is historical, the data is not prone to be updated or modified. Partly because of this quality, the U.S. court systems consider machine-generated data as highly reliable.
Machine-generated data is the lifeblood of the Internet of Things (IoT).
Growth
In 2009, Gartner projected that data would grow by 650% over the following five years. Most of this growth is a byproduct of machine-generated data. IDC estimated that in 2020 there would be 26 times more connected things than people. Wikibon forecast that $514 billion would be spent on the Industrial Internet in 2020.
Processing
Given the fairly static yet voluminous nature of machine-generated data, data owners rely on highly scalable tools to process and analyze the resulting dataset. Almost all machine-generated data is unstructured when produced and is then derived into a common structure. Typically, these derived structures contain many data points/columns, and with so many data points the challenge lies mostly in analyzing the data. Given high performance requirements along with large data sizes, traditional database indexing and partitioning limits the size and history of the dataset that can be processed. Alternative approaches exist with columnar databases, since only the particular "columns" of the dataset relevant to an analysis need to be accessed.
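As an illustrative sketch (not drawn from any particular product), the following Python snippet shows the typical derivation step described above: an unstructured, machine-generated web server log line is parsed into a structured record whose named fields act as columns, after which an analysis can touch only the column it needs. The log format, field names, and sample lines are assumptions made for the example.

```python
import re
from collections import Counter

# Assumed input: web server logs in the Common Log Format, i.e.
#   host ident authuser [timestamp] "request" status size
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_line(line):
    """Derive a structured record (named columns) from one raw log line."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

lines = [
    '203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326',
    '198.51.100.2 - - [10/Oct/2023:13:55:37 +0000] "GET /missing HTTP/1.1" 404 153',
]
records = [r for r in (parse_line(l) for l in lines) if r]

# Column-oriented analysis: only the "status" column is touched.
print(Counter(r["status"] for r in records))  # Counter({'200': 1, '404': 1})
```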
Examples
Web server logs
Call detail records
Financial instrument trades
Network event logs
Logs transmitted from security, network and OS sources to Security information and event management (SIEM) systems
Telemetry collected by the government
Notes
Reference List
Bibliography
Computer data | Machine-generated data | Technology | 539 |
9,548,739 | https://en.wikipedia.org/wiki/Feynman%20Prize%20in%20Nanotechnology | The Feynman Prize in Nanotechnology is an award given by the Foresight Institute for significant advances in nanotechnology. Two prizes are awarded annually, in the categories of experimental and theoretical work. There is also a separate challenge award for making a nanoscale robotic arm and 8-bit adder.
Overview
The Feynman Prize consists of annual prizes in experimental and theory categories, as well as a one-time challenge award. They are awarded by the Foresight Institute, a nanotechnology advocacy organization. The prizes are named in honor of physicist Richard Feynman, whose 1959 talk There's Plenty of Room at the Bottom is considered by nanotechnology advocates to have inspired and informed the start of the field of nanotechnology.
The annual Feynman Prize in Nanotechnology is awarded for pioneering work in nanotechnology, towards the goal of constructing atomically precise products through molecular machine systems. Input on prize candidates comes from both Foresight Institute personnel and outside academic and commercial organizations. The awardees are selected mainly by an annually changing body of former winners and other academics. The prize is considered prestigious, and authors of one study considered it to be reasonably representative of notable research in the parts of nanotechnology under its scope.
The separate Feynman Grand Prize is a $250,000 challenge award to the first persons to create both a nanoscale robotic arm capable of precise positional control, and a nanoscale 8-bit adder, conforming to given specifications. It is intended to stimulate the field of molecular nanotechnology.
History
The Feynman Prize was instituted in the context of Foresight Institute co-founder K. Eric Drexler's advocacy of funding for molecular manufacturing. The prize was first given in 1993. Before 1997, one prize was given biennially. From 1997 on, two prizes were given each year in theory and experimental categories. By awarding these prizes early in the history of the field, the prize increased awareness of nanotechnology and influenced its direction.
The Grand Prize was announced in 1995 at the Fourth Foresight Conference on Molecular Nanotechnology and was sponsored by James Von Ehr and Marc Arnold. In 2004, X-Prize Foundation founder Peter Diamandis was selected to chair the Feynman Grand Prize committee.
Recipients
Single prize
Experimental category
Theory category
See also
Kavli Prize in Nanoscience
IEEE Pioneer Award in Nanotechnology
ISNSCE Nanoscience Award
UPenn NBIC Award for Research Excellence in Nanotechnology
List of physics awards
References
External links
Nanotechnology
Awards established in 1993
Academic awards
Challenge awards
Science and technology awards
American science and technology awards | Feynman Prize in Nanotechnology | Materials_science,Engineering | 537 |
2,225,351 | https://en.wikipedia.org/wiki/Epsilon%20Leonis | Epsilon Leonis (ε Leo, ε Leonis) is the fifth-brightest star in the constellation Leo, consistent with its Bayer designation Epsilon. It is known as Algenubi or Ras Elased Australis. Both names mean "the southern star of the lion's head". Australis is Latin for "southern" and Genubi is Arabic for "south".
Properties
Epsilon Leonis has a stellar classification of G1 II, with the luminosity class of II indicating that it has evolved into a bright giant. It is much larger and brighter than the Sun, with 282 times the Sun's luminosity and 21 times its radius. Consequently, its absolute magnitude is –1.49, making it one of the more luminous stars in the constellation, significantly more luminous than Regulus. Its apparent magnitude, though, is only 2.98, as the star is more than three times as distant from the Sun as Regulus. At this distance, the visual magnitude of Epsilon Leonis is reduced by 0.03 as a result of extinction caused by intervening gas and dust.
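The quoted magnitudes and extinction are mutually consistent with the star's distance through the distance-modulus relation; as an illustrative check (an editorial calculation, not taken from the source),

m - M = 5 \log_{10}(d/\mathrm{pc}) - 5 + A_V,

so with m = 2.98, M = -1.49 and A_V = 0.03,

5 \log_{10}(d/\mathrm{pc}) = 2.98 + 1.49 + 5 - 0.03 = 9.44,

giving d ≈ 10^{1.89} ≈ 77 pc, or roughly 250 light-years.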
Epsilon Leonis exhibits the characteristics of a Cepheid-like variable, changing by an amplitude of 0.3 magnitude every few days. It has around four times the mass of the Sun. Based upon its iron abundance, the metallicity of this star's outer atmosphere is only around 52% of the Sun's; that is, the abundance of elements other than hydrogen and helium is about half that in the Sun.
See also
List of stars in Leo
Class G Stars
Variable star
References
Leo (constellation)
Leonis, Epsilon
Algenubi
Leonis, 17
Classical Cepheid variables
G-type bright giants
047908
3873
084441
Suspected variables
Durchmusterung objects | Epsilon Leonis | Astronomy | 382 |
68,407,718 | https://en.wikipedia.org/wiki/Fucophycidae | Fucophycidae is a subclass of Phaeophyceae (brown algae) which contains the most complex and evolved orders of Chromista algae. The members of this subclass have stalks with several morphological forms and distinct structures, characterized by an intercalary growth and a basic heteromorphic, sometimes secondarily iso- or sub-isomorphic life cycle.
Taxonomy
Subclass Fucophycidae Cavalier-Smith 1986
Order Ascoseirales Petrov 1964 emend. Moe & Henry 1982
Family Ascoseiraceae Skottsberg 1907
Order Asterocladales T.Silberfeld et al. 2011
Family Asterocladaceae Silberfeld et al. 2011
Order Desmarestiales Setchell & Gardner 1925
Family Arthrocladiaceae Chauvin 1842
Family Desmarestiaceae (Thuret) Kjellman 1880
Order Ectocarpales Bessey 1907 emend. Rousseau & Reviers 1999a [Chordariales Setchell & Gardner 1925; Dictyosiphonales Setchell & Gardner 1925; Scytosiphonales Feldmann 1949]
Family Acinetosporaceae Hamel ex Feldmann 1937 [Pylaiellaceae; Pilayellaceae]
Family Adenocystaceae Rousseau et al. 2000 emend. Silberfeld et al. 2011 [Chordariopsidaceae]
Family Chordariaceae Greville 1830 emend. Peters & Ramírez 2001 [Myrionemataceae]
Family Ectocarpaceae Agardh 1828 emend. Silberfeld et al. 2011
Family Petrospongiaceae Racault et al. 2009
Family Scytosiphonaceae Ardissone & Straforello 1877 [Chnoosporaceae Setchell & Gardner 1925]
Order Fucales Bory de Saint-Vincent 1827 [Notheiales Womersley 1987; Durvillaeales Petrov 1965]
Family Bifurcariopsidaceae Cho et al. 2006
Family Durvillaeaceae (Oltmanns) De Toni 1891
Family Fucaceae Adanson 1763
Family Himanthaliaceae (Kjellman) De Toni 1891
Family Hormosiraceae Fritsch 1945
Family Notheiaceae Schmidt 1938
Family Sargassaceae Kützing 1843 [Cystoseiraceae De Toni 1891]
Family Seirococcaceae Nizamuddin 1987
Family Xiphophoraceae Cho et al. 2006
Order Laminariales Migula 1909 [Phaeosiphoniellales Silberfeld, Rousseau & Reviers 2014 ord. nov. prop.]
Family Agaraceae Postels & Ruprecht 1840 [Costariaceae]
Family Akkesiphycaceae Kawai & Sasaki 2000
Family Alariaceae Setchell & Gardner 1925
Family Aureophycaceae Kawai & Ridgway 2013
Family Chordaceae Dumortier 1822
Family Laminariaceae Bory de Saint-Vincent 1827 [Arthrothamnaceae Petrov 1974]
Family Lessoniaceae Setchell & Gardner 1925
Family Pseudochordaceae Kawai & Kurogi 1985
Order Nemodermatales Parente et al. 2008
Family Nemodermataceae Kuckuck ex Feldmann 1937
Order Phaeosiphoniellales Silberfeld, Rousseau & Reviers 2014
Family Phaeosiphoniellaceae Phillips et al. 2008
Order Ralfsiales Nakamura ex Lim & Kawai 2007
Family Mesosporaceae Tanaka & Chihara 1982
Family Neoralfsiaceae Lim & Kawai 2007
Family Ralfsiaceae Farlow 1881 [Heterochordariaceae Setchell & Gardner 1925]
Order Scytothamnales Peters & Clayton 1998 emend. Silberfeld et al. 2011
Family Asteronemataceae Silberfeld et al. 2011
Family Bachelotiaceae Silberfeld et al. 2011
Family Splachnidiaceae Mitchell & Whitting 1892 [Scytothamnaceae Womersley 1987]
Order Sporochnales Sauvageau 1926
Family Sporochnaceae Greville 1830
Order Tilopteridales Bessey 1907 emend. Phillips et al. 2008 [Cutleriales Bessey 1907]
Family Cutleriaceae Griffith & Henfrey 1856
Family Halosiphonaceae Kawai & Sasaki 2000
Family Phyllariaceae Tilden 1935
Family Stschapoviaceae Kawai 2004
Family Tilopteridaceae Kjellman 1890
References
Brown algae | Fucophycidae | Biology | 927 |
24,512,738 | https://en.wikipedia.org/wiki/Russie.NEI.Visions%20in%20English | Russie.NEI.Visions (RNV )is an online collection of policy papers each in French, English and Russian produced by the Russia/New Independent States (, thus the NEI acronym) Centre of Paris-based Institut français des relations internationales. The collection promotes the publication of policy oriented analyses of events in the post-Soviet space, authored by established experts and up-and-coming analysts.
Created in 2005 by Thomas Gomart, the collection is directed today by Tatiana Kastueva-Jean, with Maxime Audinet, Anne Souin and Florian Vidal as co-editors. Since its creation, a paper collecting the year's production has been published annually.
The collection offers regular analysis of events in the former Soviet states, peer-reviewed and translated into all three languages. The collection has been cited in academic works as well as in the media.
RNV Publications
No.119, July 2020, Russia and Latin America: A Difficult Rapprochement, Andrey Pyatakov
No.118, June 2020, Transformation of Russian Strategic Culture: Impacts from Local Wars and Global Confrontation, Pavel Baev
No.117, March 2020, Russia’s Arctic Policy: A Power Strategy and Its Limits, Marlène Laruelle
No.116, October 2019, Friends in Need: Whither the Russia-India Strategic Partnership?, Aleksei Zakharov
No.115, August 2019, Russian Nuclear Modernization and Putin’s Wonder-Missiles: Real Issues and False Posturing, Pavel Baev
No.114, April 2019, Russia’s "Great Return" to Africa?, Arnaud Kalika
No.113, April 2019, Russia’s Militia Groups and their Use at Home and Abroad, Marlène Laruelle
No.112, December 2018, China’s Ambitions in Eastern Europe and the South Caucasus, Nadège Rolland
No.111, October 2018, Northern Europe’s Strategic Challenge from Russia: What Political and Military Responses?, Barbara Kunz
No.110, August 2018, Moldova between Russia and the West: Internal Divisions behind the Commitment to European Integration, Ernest Vardanean
No.109, July 2018, Moscow’s Syria Campaign: Russian Lessons for the Art of Strategy, Dima Adamsky
No.108, June 2018, Chutzpah and Realism: Vladimir Putin and the Making of Russian Foreign Policy, Bobo Lo
No.107, April 2018, From Chechnya to Syria: The Evolution of Russia’s Counter-Terrorist Policy, Pavel Baev
No.106, March 2018, Putinism: A Praetorian System?, Jean-Robert Raviot
No.105, December 2017, Russian Spetsnaz, Contractors and Volunteers in the Syrian Conflict, Sarah Fainberg
No.104, October 2017, Japan-Russia: The Limits of a Strategic Rapprochement, Céline Pajon
No.103, July 2017, “Russian World”: Russia’s Policy towards its Diaspora, Mikhail Suslov
No.102, June 2017, Minsk-Beijing: What Kind of Strategic Partnership?, Anaïs Marin
No.101, May 2017, Reforming Ukrainian Defense: No Shortage of Challenges, Isabelle Facon
No.100, April 2017, New Order for Old Triangles? The Russia-China-India Matrix, Bobo Lo
No.99, March 2017, Kadyrovism: Hardline Islam as a Tool of the Kremlin?, Marlène Laruelle
No.98, February 2017, Central Asia: Facing Radical Islam, Erlan Karin
No.97, November 2016, Russia and Central and Eastern Europe: between Confrontation and Collusion, Pavel Baev
No.96, September 2016, Russia's Economic Modernization: The Causes of a Failure, Vladislav Inozemtsev
No.95, July 2016, The Far Right in the Conflict between Russia and Ukraine, Vyacheslav Likhachev
No.94, June 2016, Russia’s Asia Strategy: Bolstering the Eagle’s Eastern Wing, Dmitri Trenin
No.93, May 2016, Russia's Diplomacy in the Middle East: Back to Geopolitics, Alexander Shumilin
No.92, March 2016, The Illusion of Convergence: Russia, China, and the BRICS, Bobo Lo
No.91, January 2016, Russia’s Immigration Policy: New Challenges and Tools, Lyubov Bisson
No.90, December 2015, "Conservatism" in Russia: Political Tool or Historical Choice?, Leonid Polyakov
No.89, December 2015, Eurasia in Russian Foreign Policy: Interests, Opportunities and Constraints, Ivan Timofeev, Elena Alekseenkova
No.88, November 2015, Russia: Business and State, Igor Bunin, Alexey Makarkin
No.87, August 2015, Leaving to Come Back: Russian Senior Officials and the State-Owned Companies, Mikhail Korostikov
No.86, July 2015, Russia's New Energy Alliances: Mythology versus Reality, Vladimir Milov
No.85, June 2015, The Kurds: A Channel of Russian Influence in the Middle East?, Igor Delanoë
No.84, April 2015, Russia's Domestic Evolution, what Impact on its Foreign Policy, Tatiana Kastueva-Jean
No.83, March 2015, The Jewish Diaspora and the Russo-Ukrainian Crisis, Olena Bagno-Moldavsky
No.82, January 2015, Frontiers New and Old: Russia’s Policy in Central Asia, Bobo Lo
No.81, November 2014, Moldova's National Minorities: Why are they Euroskeptical?, Marcin Koscienkowski & William Schreiber
No.80, September 2014, Russia and Global Climate Governance, Nina Tynkkynen
No.79, August 2014, "Green Economy": Opportunities and Constraints for Russian Companies, Pyotr Kiryushin
No.78, June 2014, The Crisis in Ukraine: An Insider's View, Oleg Grytsaienko
No.77, May 2014, Russia's Academy of Sciences' Reform: Causes and Consequences for Russian Science, Irina Dezhina
No.76, April 2014, Russia: Youth and Politics, Mikhail Korostikov
No.75, March 2014, Rosneft, Gazprom and the Government: the Decision-Making Triangle on Russia's Energy Policy, Pavel Baev
No.74, February 2014, The EU, Russia and the Eastern Partnership: What Dynamics under the New German Government?, Dominik Tolksdorf
No.73, December 2013, The Influence of the State on Expanding Russian MNEs: Advantage or Handicap?, Andrei Panibratov
No.72, September 2013, Japan-Russia: Toward a Strategic Partnership?, Céline Pajon
No.71, May 2013, Afghanistan after 2014: The Way Forward for Russia, Ekaterina Stepanova
No.70, April 2013, Russia's Eastern Energy Policy: A Chinese Puzzle for Rosneft, Nina Poussenkova
No.69, March 2013, Russia-Turkey: A Relationship Shaped by Energy, Rémi Bourgeot
No.68, February 2013, Governors, Oligarchs, and Siloviki: Oil and Power in Russia, Ahmed Mehdi & Shamil Yenikeyeff
No.67, January 2013, Deja Vu with BMD: The Improbability of Russia-NATO Missile Defense, Richard Weitz
No.66, October 2012, The WTO and the Customs Union: What Consequences for the Russian Banking Sector?, Dmitri Miroshnichenko
No.65, August 2012, Russia's Arctic Policy and the Northern Fleet Modernization, Pavel Baev
No.64, February 2012, Decoding Russia's WTO Accession, Dominic Fean
No.63, December 2011, Russian Digital Dualism: Changing Society, Manipulative State, Alexey Sidorenko
No.62, September 2011, Italy, Russia's Voice in Europe?, Nadezhda Arbatova
No.61, July 2011, What the North Caucasus Means to Russia, Aleksey Malashenko
No.60, July 2011, The Caucasus: a Hotbed of Terrorism in Metamorphosis, Pavel Baev
No.59, April 2011, "Digital Kremlin": Power and the Internet in Russia, Julien Nocetti
No.58, March 2011, Doing Business in Russia: Informal Practices and Anti-Corruption Strategies, Alena Ledeneva & Stanislav Shekshina
No.57, February 2011, Developing Research in Russian Universities, Irina Dezhina
No.56, December 2010, Israel's Immigrant Parties: An Inefficient Russia Lobby, Olena Bagno & Zvi Magen
No.55, November 2010, Syria: Russia's Best Asset in the Middle East, Andrej Kreutz
No.54, August 2010, Russia's Far East Policy: Looking Beyond China, Stephen Blank
No.53, July 2010, Results of the 'Reset' in US-Russian Relations, R. Craig Nation
No.52, June 2010, From Moscow to Mecca: Russia's Saudi Arabian Diplomacy, Julien Nocetti
No.51, May 2010, Russia and Turkey: Rethinking Europe to Contest Outsider Status, Richard Sakwa
No.50, May 2010, Europe in Russian Foreign Policy: Important but no longer Pivotal, Thomas Gomart
No.49, April 2010, Russia's Greater Middle East Policy: Securing Economic Interests, Courting Islam, Mark N. Katz
No.48, March 2010, Internal and External Impact of Russia's Economic Crisis, Jeffrey Mankoff
No.47, February 2010, Russia, China and the United States: From Strategic Triangularism to the Postmodern Triangle, Bobo Lo
No.46, January 2010, Georgia, Obama, the Economic Crisis: Shifting Ground in Russia-EU Relations, Timofei Bordachev
No.45, December 2009, What Is China To Us? Westernizers and Sinophiles in Russian Foreign Policy, Andrei Tsygankov
No.44, September 2009, Making Good Use of the EU in Georgia: “Eastern Partnership and Conflict Policy”, Dominic Fean
No.43, August 2009, Russia and the “Eastern Partnership” after the War in Georgia, Jean-Philippe Tardieu
No.42, July 2009, “Cool Neighbors”: Sweden’s EU Presidency and Russia, Eva Hagström Frisell, Ingmar Oldberg
No.41, June 2009, The Challenges of Russia’s Demographic Crisis, Anatoly Vichnevsky
No.40, May 2009, NATO and Russia: Post-Georgia Threat Perceptions, Aurel Braun
No.39, April 2009, Obama and Russia: Facing the Heritage of the Bush Years, Thomas Gomart
No.38, April 2009, Russia in Latin America: Geopolitical Games in the US’s Neighborhood, Stephen Blank
No.37, March 2009, Russia’s Armed Forces: The Power of Illusion, Roger McDermott
No.36, January 2009, China as an Emerging Donor in Tajikistan and Kyrgyzstan, Nargis Kassenova
No.35, December 2008, Islamist Terrorism in Greater Central Asia: The “Al-Qaedaization” of Uzbek Jihadism, Didier Chaudet
No.34, September 2008, Russian Chinese Relations through the Lens of the SCO, Stephen Aris
No.33, August 2008, Academic Cooperation between Russia and the US. Moving Beyond Technical Aid ?, Andrey Kortunov
No.32, July 2008, Injecting More Differentiation in European Neighbourhood Policy: What Consequences for Ukraine?, Kerry Longhurst
No.31, June 2008, Caspian Pipeline Consortium, Bellwether of Russia’s Investment Climate?, Adrian Dellecker
No.30, April 2008, The Impact of "New Public Management" on Russian Higher Education, Carole Sigman
No.29, April 2008, Higher Education in Russia: How to Overcome the Soviet Heritage?, Boris Saltykov
No.28, April 2008, Higher Education, the Key to Russia's Competitiveness, Tatiana Kastueva-Jean
No.27, February 2008, Armenia, a Russian Outpost in the Caucasus ?, Gaïdz Minassian
No.26, February 2008, EU Gas Liberalization as a Driver of Gazprom's Strategies?, Catherine Locatelli
No.25, December 2007, High Stakes in the High North. Russian-Norwegian Relations and their Implication for the EU, Jakub M. Godzimirski
No.24, November 2007, Russia and the "Gas-OPEC". Real or Perceived Threat?, Dominique Finon
No.23, October 2007, Paris and the EU-Russia Dialogue: A New Impulse with Nicolas Sarkozy?, Thomas Gomart
No.22, September 2007, Rosoboronexport, Spearhead of the Russian Arms Industry, Louis-Marie Clouet
No.21, July 2007, Russia and the Deadlock over Kosovo, Oksana Antonenko
No.20, June 2007, Russie-EU beyond 2007. Russian Domestic Debates, Nadezhda Arbatova
No.19, May 2007, The Opacity of Russian-Ukrainian Energy Relations, Arnaud Dubien
No.18, March 2007, Gazprom as a Predictable Partner. Another Reading of the Russian-Ukrainian and Russian-Belarusian Energy Crises, Jérôme Guillet
No.17, March 2007, Gazprom, the Fastest Way to Energy Suicide, Christophe-Alexandre Paillard
No.16, February 2007, Russia and the WTO: On the Finishing Stretch, Julien Vercueil
No.15, January 2007, Russia and the Council of Europe: Ten Years Wasted?, Jean-Pierre Massias
No.14, September 2006, The "Greatness and Misery" of Higher Education in Russia, Tatiana Kastueva-Jean
No.13, September 2006, The EU-Russia Energy Dialogue: Competition Versus Monopolies, Vladimir Milov
No.12, July 2006, The Shanghai Cooperation Organization as "Geopolitical Bluff?" A View from Astana, Murat Laumulin
No.11, June 2006, Abkhazia and South Ossetia: Collision of Georgian and Russian Interests, Tracey German
No.10, May 2006, Special Issue: Workshop on EU-Russia relations
"Russia, NATO and the EU: A European Security Triangle or Shades of a New Entente ?", Andrew Monaghan
"The EU and Russia: the Needed Balance Between Geopolitics and Regionalism", Thomas Gomart
"Representing Private Interests to Increase Trust in Russia-EU Relations", Timofei Bordachev
"Multiplying Sources as the Best Strategy for EU-Russia Energy Relations", Michael Thumann
No.9, March 2006, Ukraine's Scissors: between Internal Weakness and External Dependence, James Sherr
No.8, January 2006, Russia and Turkey in the Caucasus: Moving Together to Preserve the Status Quo ?, Fiona Hill and Omer Taspinar
No.7, October 2005, EU Crisis: What Opportunities for Russia?, Timofei Bordachev
No.6(a), September 2005, "Russia and Germany: Continuity and Changes", Andrei Zagorski
No.6(b), September 2005, "Germany's Policy on Russia: End of the Honeymoon?", Hannes Adomeit
No.5, August 2005, From Plans to Substance: EU-Russia Relations During the British Presidency, Andrew Monaghan
No.4, June 2005, Russian Scientists: Where Are they? Where Are They Going? Human Resources and Research Policy in Russia, Irina Dezhina
No.3, May 2005, Re-Writing Russia's Subsoil Law: from Sovereignty to Civil Law ?, William Tompson
No.2, April 2005, Shared Neighbourhood or New Frontline ? The Crossroads in Moldova, Dov Lynch
No.1, April 2005, A Fine Balance - The Strange Case of Sino-Russian Relations, Bobo Lo
Russie.Nei.Reports
Launched in September 2009, Russie.Nei.Reports is an electronic collection providing extensive analysis based on fieldwork. Papers are published either in French or in English.
No. 32, August 2020 : Un régime dans la tourmente : le système de sécurité intérieure et extérieure du Bélarus, Andrey Paratnikau (Porotnikov)
No. 31, June 2020 : Mémoire de la Seconde Guerre mondiale dans la Russie actuelle, Tatiana Kastouéva-Jean, (dir.), Olga Konkka, Nikolaï Koposov, Emilia Koustova, Denis Volkov, Tatiana Zhurzhenko
No. 30, May 2020 : Quand la guerre s’invite à l’école : la militarisation de l’enseignement en Russie, Olga Konkka
No. 29, March 2020 : The Return: Russia and the Security Landscape of Northeast Asia, Bobo Lo
No. 28, December 2019 : Russia’s Energy Strategy-2035: Struggling to Remain Relevant, Tatiana Mitrova, Vitaly Yermakov
No. 27, July 2019 : Greater Eurasia: The Emperor’s New Clothes or an Idea whose Time Has Come?, Bobo Lo
No. 26, March 2019 : Russia's Relations with South-East Asia, Dmitry Gorenburg & Paul Schwartz
No. 25, February 2019 : Kremlin-Linked Forces in Ukraine's 2019 Elections: On the Brink of Revenge?, Vladislav Inozemtsev
No. 24, September 2018 : Making Sense of Russia's Policy in Afghanistan, Stephen Blank & Younkyoo Kim
No. 23, May 2018 : Russia’s Afghan Policy in the Regional and Russia-West Contexts, Ekaterina Stepanova
No. 22, February 2018 : Russo-British Relations in the Age of Brexit, Richard Sakwa
No. 21, October 2015 : There Will Be Gas: Gazprom's Transport Strategy in Europe, Aurélie Bros
No. 20, September 2015 : Guerre de l'information : le web russe dans le conflit en Ukraine, Julien Nocetti
No. 19, May 2015 : Ukraine: a Test for Russian Military Reforms, Pavel Baev
No. 18, July 2014 : Gazprom in Europe: a Business Doomed to Fail?, Aurélie Bros
No. 17, January 2014 : Russia's Eastern Direction—Distinguishing the Real from the Virtual, Bobo Lo
No. 16, December 2013 : Russian LNG: The Long Road to Export, Tatiana Mitrova
No. 15, December 2012 : OMC: quel impact pour le secteur agricole russe ?, Pascal Grouiez
No. 14, November 2012 : Kazakhstan and Eurasian Economic Integration: Quick Start, Mixed Results and Uncertain Future, Nargis Kassenova
No. 13, October 2012 : Entreprises et universités russes : de la coopération au recrutement, Tatiana Kastouéva-Jean
No. 12, June 2012 : The Religious Diplomacy of the Russian Federation, Alicja Curanovic
No. 11, April 2012 : Ukraine at the Crossroads: Between the EU DCFTA and Customs Union, Olga Shumylo-Tapiola
No. 10, March 2012 : Le Web en Russie : de la virtualité à la réalité politique ?, Julien Nocetti
No. 9, January 2012 : Université fédérale de l'Oural, une futur "Harvard régionale"?, Tatiana Kastouéva-Jean
No. 8, June 2011 : L'université Goubkine: réservoir de cadres pour le secteur pétrolier et gazier, Tatiana Kastouéva-Jean
No. 7, May 2011 : Economic Constraint and Ukraine's Security Policy, Dominic Fean
No. 6, December 2010 : How the Chinese See Russia, Bobo Lo
No. 5, October 2010 : "Soft power" russe: discours, outils, impact, Tatiana Kastouéva-Jean
No. 4, September 2010 : Les universités privées, "mal-aimées" de l'enseignement supérieur russe, Tatiana Kastouéva-Jean
No. 3, March 2010 : L'université technique Bauman: un atout majeur de la politique industrielle russe, Carole Sigman
No. 2, October 2009 : Le Haut collège d'économie: école de commerce, université et think tank, Carole Sigman
No. 1, October 2009 : "Projet MISiS": futur modèle de l'enseignement supérieur en Russie?, Tatiana Kastouéva-Jean
See also
Foreign relations of Russia
Gazprom
Thomas Gomart
Energy policy
Foreign relations of Russia
2005 works | Russie.NEI.Visions in English | Environmental_science | 4,294 |
24,956,783 | https://en.wikipedia.org/wiki/Matching%20polynomial | In the mathematical fields of graph theory and combinatorics, a matching polynomial (sometimes called an acyclic polynomial) is a generating function of the numbers of matchings of various sizes in a graph. It is one of several graph polynomials studied in algebraic graph theory.
Definition
Several different types of matching polynomials have been defined. Let G be a graph with n vertices and let m_k be the number of k-edge matchings.

One matching polynomial of G is

m_G(x) = \sum_{k \ge 0} m_k x^k.

Another definition gives the matching polynomial as

M_G(x) = \sum_{k \ge 0} (-1)^k m_k x^{n-2k}.

A third definition is the polynomial

\mu_G(x, y) = \sum_{k \ge 0} m_k x^k y^{n-2k}.

Each type has its uses, and all are equivalent by simple transformations. For instance,

M_G(x) = x^n m_G(-x^{-2})

and

\mu_G(x, y) = y^n m_G(x y^{-2}).
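As a brief worked example (illustrative, not taken from the cited literature), consider the path graph P_4 on four vertices with edges {12, 23, 34}. It has m_0 = 1 (the empty matching), m_1 = 3, and m_2 = 1 (the only 2-edge matching is {12, 34}), so

m_{P_4}(x) = 1 + 3x + x^2, \qquad M_{P_4}(x) = x^4 - 3x^2 + 1, \qquad \mu_{P_4}(x, y) = y^4 + 3xy^2 + x^2.

The first transformation can be verified directly: x^4 m_{P_4}(-x^{-2}) = x^4 (1 - 3x^{-2} + x^{-4}) = x^4 - 3x^2 + 1 = M_{P_4}(x).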
Connections to other polynomials
The first type of matching polynomial is a direct generalization of the rook polynomial.
The second type of matching polynomial has remarkable connections with orthogonal polynomials. For instance, if G = K_{m,n}, the complete bipartite graph (with m ≥ n), then the second type of matching polynomial is related to the generalized Laguerre polynomial L_n^{(α)}(x) by the identity

M_{K_{m,n}}(x) = (-1)^n n! x^{m-n} L_n^{(m-n)}(x^2).
If G is the complete graph K_n, then M_G(x) is an Hermite polynomial:

M_{K_n}(x) = He_n(x),

where He_n(x) is the "probabilist's Hermite polynomial" in the definition of Hermite polynomials. These facts were observed in the early literature on matching polynomials.
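These identities can be verified on the smallest cases (an illustrative check, not from the source). The complete graph K_3 has m_0 = 1, m_1 = 3, m_2 = 0, and the complete bipartite graph K_{1,1} (a single edge) has m_0 = m_1 = 1, giving

M_{K_3}(x) = x^3 - 3x = He_3(x), \qquad M_{K_{1,1}}(x) = x^2 - 1 = (-1)^1 \, 1! \, x^0 \, L_1^{(0)}(x^2),

using He_3(x) = x^3 - 3x and L_1^{(0)}(t) = 1 - t.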
If G is a forest, then its matching polynomial is equal to the characteristic polynomial of its adjacency matrix.
If G is a path or a cycle, then M_G(x) is a Chebyshev polynomial. In this case μ_G(1, x) is a Fibonacci polynomial or Lucas polynomial respectively.
Complementation
The matching polynomial of a graph G with n vertices is related to that of its complement by a pair of (equivalent) formulas. One of them is a simple combinatorial identity; the other is an integral identity.
There is a similar relation for a subgraph G of K_{m,n} and its complement in K_{m,n}. This relation, due to Riordan (1958), was known in the context of non-attacking rook placements and rook polynomials.
Applications in chemical informatics
The Hosoya index of a graph G, its number of matchings, is used in chemoinformatics as a structural descriptor of a molecular graph. It may be evaluated as m_G(1).
The third type of matching polynomial was introduced as a version of the "acyclic polynomial" used in chemistry.
Computational complexity
On arbitrary graphs, or even planar graphs, computing the matching polynomial is #P-complete (a brute-force recursion is sketched after the list below). However, it can be computed more efficiently when additional structure about the graph is known. In particular,
computing the matching polynomial on n-vertex graphs of treewidth k is fixed-parameter tractable: there exists an algorithm whose running time, for any fixed constant k, is a polynomial in n with an exponent that does not depend on k.
The matching polynomial of a graph with n vertices and clique-width k may be computed in time n^O(k).
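The following Python sketch (an illustrative implementation, not taken from the cited works) computes the coefficients m_0, m_1, ... of the first matching polynomial by the standard edge recursion: for any edge e = uv, every matching either avoids e or uses it, giving m_G(x) = m_{G−e}(x) + x · m_{G−u−v}(x). Its running time is exponential in general, consistent with the #P-completeness of the problem; evaluating the result at x = 1 gives the Hosoya index.

```python
def matching_polynomial(edges):
    """Return [m_0, m_1, ...], the coefficients of m_G(x) = sum_k m_k x^k,
    for the graph whose edge list is `edges` (pairs of hashable vertices)."""
    edges = [frozenset(e) for e in edges]
    if not edges:
        return [1]  # only the empty matching remains
    e, rest = edges[0], edges[1:]
    # Matchings that avoid edge e:
    avoid = matching_polynomial(rest)
    # Matchings that use e: drop every edge touching e's endpoints,
    # then multiply by x (the leading [0] shifts coefficients up by one):
    use = [0] + matching_polynomial([f for f in rest if not (f & e)])
    # Add the two polynomials coefficient-wise:
    size = max(len(avoid), len(use))
    return [(avoid[k] if k < len(avoid) else 0) +
            (use[k] if k < len(use) else 0) for k in range(size)]

# Path P4 (vertices 1-2-3-4): m_0 = 1, m_1 = 3, m_2 = 1
coeffs = matching_polynomial([(1, 2), (2, 3), (3, 4)])
print(coeffs)       # [1, 3, 1]
print(sum(coeffs))  # 5, the Hosoya index m_G(1) of P4
```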
References
Algebraic graph theory
Matching (graph theory)
Polynomials
Graph invariants | Matching polynomial | Mathematics | 639 |
20,845,241 | https://en.wikipedia.org/wiki/Panobinostat | Panobinostat, sold under the brand name Farydak, is a medication used for the treatment of multiple myeloma. It is a hydroxamic acid and acts as a non-selective histone deacetylase inhibitor (pan-HDAC inhibitor).
Panobinostat was approved for medical use in the United States in February 2015, and in the European Union in August 2015. However, in March 2022, it was withdrawn in the United States.
Medical uses
Panobinostat is used in combination with the anti-cancer drug bortezomib and the corticoid dexamethasone for the treatment of multiple myeloma in adults who had received at least two previous treatments, including bortezomib and an immunomodulatory agent.
Contraindications
Panobinostat is contraindicated in nursing mothers. To judge from experiments in animals, there is a risk for the unborn child if used during pregnancy; still, the benefit of panobinostat may outweigh this risk.
Side effects
Common side effects (in more than 10% of patients) include low blood cell counts (pancytopenia, thrombocytopenia, anaemia, leucopenia, neutropenia, lymphopenia), airway infections, as well as unspecific reactions such as fatigue, diarrhoea, nausea, headache, and sleeping problems.
Pharmacology
Mechanism of action
Panobinostat inhibits multiple histone deacetylase enzymes, a mechanism leading to apoptosis of malignant cells via multiple pathways.
Pharmacokinetics
Panobinostat is absorbed quickly and almost completely from the gut, but has a significant first-pass effect, resulting in a total bioavailability of 21%. Highest blood plasma levels in patients with advanced cancer are reached after two hours. Plasma protein binding is about 90%. The substance is metabolised mainly through oxidation by the liver enzyme CYP3A4 and to a small extent by CYP2D6 and CYP2C19. It is also reduced, hydrolyzed and glucuronidized by unspecified enzymes. All metabolites seem to be inactive.
Biological half-life is estimated to be 37 hours. 29–51% are excreted via the urine and 44–77% via the faeces.
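Given the figures above, a rough first-order elimination calculation can be illustrated (an editorial example, not from the source): with a biological half-life of t_{1/2} = 37 h, the fraction of drug remaining after time t is

N(t)/N_0 = 2^{-t/t_{1/2}},

so about 50% remains after 37 hours and about 25% after 74 hours.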
Clinical trials
Panobinostat is or has been tested against Hodgkin's lymphoma, cutaneous T-cell lymphoma (CTCL) and other types of malignant disease in Phase III clinical trials, against myelodysplastic syndromes, breast cancer and prostate cancer in Phase II trials, and against chronic myelomonocytic leukemia (CMML) in a Phase I trial.
Panobinostat is also being used in a Phase I/II clinical trial that aims at curing AIDS in patients on highly active antiretroviral therapy (HAART). In this technique, panobinostat is used to drive the HIV DNA out of the patient's DNA, in the expectation that the patient's immune system in combination with HAART will destroy it.
Panobinostat has also been studied in a Phase II trial for relapsed and refractory diffuse large B-cell lymphoma (DLBCL).
Preclinical studies
Panobinostat has been found to synergistically act with sirolimus to kill pancreatic cancer cells in the laboratory in a Mayo Clinic study. In the study, investigators found that this combination destroyed up to 65 percent of cultured pancreatic tumor cells. The finding is significant because the three cell lines studied were all resistant to the effects of chemotherapy – as are many pancreatic tumors.
Panobinostat has also been found to significantly increase in vitro the survival of motor neuron (SMN) protein levels in cells of patients with spinal muscular atrophy.
Panobinostat was able to selectively target triple negative breast cancer (TNBC) cells by inducing hyperacetylation and cell cycle arrest at the G2-M DNA damage checkpoint; partially reversing the morphological changes characteristic of breast cancer cells.
Panobinostat, along with other HDAC inhibitors, is also being studied for potential to induce virus HIV-1 expression in latently infected cells and disrupt latency. These resting cells are not recognized by the immune system as harboring the virus and do not respond to antiretroviral drugs.
A 2015 study suggested Panobinostat was effective in preventing diffuse intrinsic pontine glioma cell growth in vitro and in vivo, identifying it as a potential drug candidate.
Panobinostat was also identified as a senolytic, effective at killing senescent cells.
References
CYP2D6 inhibitors
Tryptamines
Hydroxamic acids
Histone deacetylase inhibitors
Drugs developed by Novartis
Cancer treatments
Orphan drugs | Panobinostat | Chemistry | 1,036 |
36,518,828 | https://en.wikipedia.org/wiki/TBP-associated%20factor | The TBP-associated factors (TAF) are proteins that associate with the TATA-binding protein in transcription initiation. It is a part of the transcription initiation factor TFIID multimeric protein complex. It also makes up many other factors, including SL1. They mediate the formation of the transcription preinitiation complex, a step preceding transcription of DNA to RNA by RNA polymerase II.
TAFs have a signature N-terminal histone-like fold domain (HFD). This domain is implicated in the pairwise interaction among specific TAFs.
Function
TFIID
TFIID plays a central role in mediating promoter responses to various activators and repressors. It binds tightly to TAFII-250 and directly interacts with TAFII-40. TFIID is composed of the TATA-binding protein (TBP) and a number of TBP-associated factors (TAFs).
TAF is part of the TFIID complex, and interacts with the following:
Specific transcriptional activators
Basal transcription factors
Other TAFIIs
Specific DNA sequences, for example the downstream promoter element or gene-specific core promoter sequence
Due to such interactions, they contribute transcription activation and to promoter selectivity.
Some pairs of TAF interact with each other to form "lobes" in TFIID. Pairs known or suggested to exist in TFIID include TAF6-TAF9, TAF4-TAF12, TAF11-13, TAF8-TAF10 and TAF3-TAF10.
SL1
Selective factor 1 is composed of the TATA-binding protein and three TAF (TATA box-binding protein-associated factor) subunits (TAF1A, TAF1B, and TAF1C). These TAFs do not have a histone-like fold domain.
Other complexes
TAF is a part of SAGA (SPT-ADA-GCN5 acetylase) and related coactivation complexes. Such complexes acetylate histone tails to activate genes. Human has three SAGA-like complexes: PCAF, TFTC (TBP-free TAF-containing complex), and STAGA (SPT3-TAF9-GCN5L acetylase). PCAF (GCN5) and KAT2A (GCN5L) are two human homologs of the yeast Gcn5.
TAF8, TAF10, and SPT7L form a small TAF complex called SMAT.
Structure
The N-terminal domain of TAF has a histone-like protein fold. It contains two short alpha helices and a long central alpha helix.
Human genes
TAF1 (TAFII250)
TAF2 (CIF150)
TAF3 (TAFII140)
TAF4 (TAFII130/135)
TAF4B (TAFII105)
TAF5 (TAFII100)
TAF6 (TAFII70/80)
TAF6L (PAF65A)
TAF7 (TAFII55)
TAF8 (TAFII43)
TAF9 (TAFII31/32)
TAF9B (TAFII31L)
TAF10 (TAFII30)
TAF11 (TAFII28)
TAF12 (TAFII20/15)
TAF13 (TAFII18)
TAF15 (TAFII68)
Assorted signatures
TAF domains are spread out across many digital signatures.
References
Protein domains | TBP-associated factor | Biology | 749 |
14,286,576 | https://en.wikipedia.org/wiki/Lehr%20%28glassmaking%29 | In the manufacture of float glass, a lehr oven is a long kiln with an end-to-end temperature gradient, which is used for annealing newly made glass objects that are transported through the temperature gradient either on rollers or on a conveyor belt. The annealing renders glass into a stronger material with fewer internal stresses, and with a lower probability of breaking.
The rapid cooling of molten glass results in an uneven temperature distribution throughout the material. This temperature differential results in mechanical stresses throughout the molten glass, which may be sufficient to cause the material to crack as it cools to ambient temperature or to make it susceptible to cracking during later use, either spontaneously or due to mechanical or thermal shock. To prevent such material weaknesses, objects made from molten glass are annealed by gradual cooling in a lehr oven, from the annealing point, a temperature just below the solidification temperature of the glass. In the process of annealing glass, the temperature is first equalised by holding or "soaking" the glass at the annealing point for a period of time that depends on the maximum thickness of the glass. The glass is then slowly cooled at a rate that depends upon the maximum thickness of the glass, ranging from tens of degrees Celsius per hour (for thin slabs of glass) to fractions of a degree Celsius per hour (for thick slabs of glass).
See also
Annealing (glass)
References
Glass engineering and science
Glass art
Kilns | Lehr (glassmaking) | Chemistry,Materials_science,Engineering | 301 |
56,734,151 | https://en.wikipedia.org/wiki/HR%202562%20B | HR 2562 B is a substellar companion orbiting the star HR 2562. Discovered in 2016 by a team led by Quinn M. Konopacky by direct imaging, HR 2562 B orbits within the inner edge of HR 2562's circumstellar discas of April 2023, it is one of only two known brown dwarfs to do so. Separated by roughly from its primary companion, HR 2562 B has drawn interest for its potential dynamical interactions with the outer circumstellar disc.
Discovery
HR 2562 B was discovered using the Gemini Planet Imager (GPI), which first observed the star HR 2562 in January 2016. In the initial data set, Konopacky and collaborators identified a candidate companion object. As a result, follow-up observations were conducted within the following month in the infrared K1, K2, and J bands. In the processed data set, HR 2562 B was confirmed to share a common proper motion with HR 2562, and Konopacky and collaborators announced its discovery in a paper published on 14 September 2016.
Host star
HR 2562 B's parent star, HR 2562 (alternatively designated HD 50571 or HIP 32775), has a mass of and a radius of . With an estimated effective temperature of 6597 ± 81 K, it is a main-sequence star with the spectral type F5V. It is located from the Sun in the constellation Pictor. HR 2562 is not known to belong to a moving group or stellar cluster.
As with many mid F-type stars, the age of HR 2562 is poorly constrained. Between 1999 and 2011, estimates from various teams of astronomers determined ages ranging from roughly 300 Myr to 1.6 Gyr. In 2018, a team of astronomers led by D. Mesa derived an age of Myr using measurements of the star's lithium-temperature relationship.
Properties
Orbital properties
Initial observations of HR 2562 B by Konopacky and collaborators yielded a separation of , placing it interior to and coplanar with the inner edge of HR 2562's observed debris disc. Further observations of HR 2562 B by the Atacama Large Millimeter Array (ALMA) supported this, yielding a semi-major axis of AU, an orbital period of yr, and an orbital eccentricity of . With a probable orbital inclination of °, HR 2562 B's misalignment angle with the debris disc is either ° or °. However, the limited coverage of observations still leaves a wide range of possible orbits; both low-eccentricity, coplanar orbits and high-eccentricity, misaligned orbits would be consistent with the observational data. A highly misaligned orbit, though, would significantly perturb the disc, suggesting that low-eccentricity, coplanar solutions are likelier.
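The fitted semi-major axis and period are linked by Kepler's third law, which in solar units reads P[yr] = sqrt(a[AU]^3 / M[M_sun]). A minimal Python sketch; the input values below are hypothetical placeholders for illustration, not the measured ALMA results or the actual mass of HR 2562.

```python
import math

def orbital_period_years(a_au, m_total_msun):
    """Kepler's third law in solar units: P [yr] = sqrt(a^3 / M)."""
    return math.sqrt(a_au ** 3 / m_total_msun)

a_au = 20.0    # hypothetical semi-major axis in AU
m_total = 1.3  # hypothetical total system mass in solar masses
print(f"P ~ {orbital_period_years(a_au, m_total):.0f} yr")  # ~78 yr
```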
Any additional companions around HR 2562 with a mass on the order of 10 should be visible at separations larger than 10 AU, and any companion a few times more massive than Jupiter should be visible to SPHERE's infrared dual-band spectrograph (IRDIS) instrument, thus placing mass restrictions on any additional companions.
Physical properties
HR 2562 B's exact mass is unknown. The brown dwarf was estimated to be 29 ± 15 in 2021. However, subsequent observations placed an upper mass limit of < 18.5 . Its luminosity is about solar luminosity. Its spectral type is L7±3.
See also
PZ Telescopii B, another substellar object with mass slightly below
Notes
References
Exoplanets discovered in 2016
Exoplanets detected by direct imaging
Brown dwarfs
Pictor
L-type brown dwarfs | HR 2562 B | Astronomy | 758 |
37,691,059 | https://en.wikipedia.org/wiki/WR%20156 | WR 156 is a young massive and luminous Wolf–Rayet star in the constellation of Cepheus. Although it shows a WR spectrum, it is thought to be a young star still fusing hydrogen in its core.
Distance
WR 156 has a Hipparcos parallax of 3.16 mas, indicating a distance of about a thousand light years, although with a fairly large margin of error. Other studies indicate that it is much more distant, based on a very high luminosity and faint apparent magnitude. The Gaia DR1 parallax is 0.07 mas; the margin of error is larger than the measured parallax, but the indication is still of a very large distance. In Gaia Data Release 2, the parallax is given as but with a marker that the result may be unreliable. In Gaia Early Data Release 3, the solution was adjusted to , still with significant excess astrometric noise.
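The relation between parallax and distance used here is d[pc] = 1000 / p[mas], with 1 pc ≈ 3.2616 light years. A short Python check against the figures quoted above:

```python
def parallax_to_lightyears(p_mas):
    """Distance in light years from a parallax in milliarcseconds."""
    return (1000.0 / p_mas) * 3.2616

print(f"{parallax_to_lightyears(3.16):.0f} ly")  # ~1032 ly, "about a thousand"
# Taken at face value, 0.07 mas would imply ~47,000 ly, but since the
# quoted uncertainty exceeds the measurement, it only supports "very distant".
print(f"{parallax_to_lightyears(0.07):.0f} ly")
```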
Physical properties
WR 156 has a WR spectrum on the nitrogen sequence, indicating strong emission of helium and nitrogen, but it also shows features of hydrogen; it is therefore given the spectral type WN8h. Its outer layers are calculated to contain 30% hydrogen, one of the highest levels for any Galactic Wolf–Rayet star.
WR 156 has a low temperature and slow stellar wind by Wolf–Rayet standards, only 39,800 K and 660 km/s respectively. The wind is very dense, with total mass loss of more than /year.
WR 156 is a young hydrogen-rich star, still burning hydrogen in its core but sufficiently luminous to have convected nitrogen and helium fusion products up to its surface, where it shows 27% hydrogen. Its initial mass, several million years ago, is estimated to have been substantially higher than its present mass.
References
Wolf–Rayet stars
Cepheus (constellation)
113569
J23001010+6055385 | WR 156 | Astronomy | 380 |
247,874 | https://en.wikipedia.org/wiki/Terra%20nullius | Terra nullius (, plural terrae nullius) is a Latin expression meaning "nobody's land".
Since the nineteenth century it has occasionally been used in international law as a principle to justify claims that territory may be acquired by a state's occupation of it. There are currently three territories sometimes claimed to be terra nullius: Bir Tawil (a strip of land between Egypt and the Sudan), four pockets of land near the Danube due to the Croatia–Serbia border dispute, and parts of Antarctica, principally Marie Byrd Land.
Doctrine
In international law, terra nullius is territory which belongs to no state. Sovereignty over territory which is terra nullius can be acquired by any state by occupation. According to Oppenheim: "The only territory which can be the object of occupation is that which does not already belong to another state, whether it is uninhabited, or inhabited by persons whose community is not considered to be a state; for individuals may live on a territory without forming themselves into a state proper exercising sovereignty over such territory."
Occupation of terra nullius is one of several ways in which a state can acquire territory under international law. The other means of acquiring territory are conquest, cession by agreement, accretion through the operations of nature, and prescription through the continuous exercise of sovereignty.
History
Although the term terra nullius was not used in international law before the late nineteenth century, some writers have traced the concept to the Roman law term res nullius, meaning nobody's thing. In Roman law, things that were res nullius, such as wild animals (ferae bestiae), lost slaves and abandoned buildings could be taken as property by anyone by seizure. Benton and Straumann, however, state that the derivation of terra nullius from res nullius is "by analogy" only.
Sixteenth century writings on res nullius were in the context of European colonisation in the New World and the doctrine of discovery. In 1535, Domingo de Soto argued that Spain had no right to the Americas because the lands had not been res nullius at the time of discovery. Francisco de Vitoria, in 1539, also used the res nullius analogy to argue that the indigenous populations of the Americas, although “barbarians”, had both sovereignty and private ownership over their lands, and that the Spanish had gained no legal right to possession through mere discovery of these lands. Nevertheless, Vitoria stated that the Spanish possibly had a limited right to rule the indigenous Americans because the latter “are unsuited to setting up or administering a commonwealth both legitimate and ordered in human and civil terms.”
Alberico Gentili, in his De Jure Belli Libri Tres (1598), drew a distinction between the legitimate occupation of land that was res nullius and illegitimate claims of sovereignty through discovery and occupation of land that was not res nullius, as in the case of the Spanish claim to the Americas. Hugo Grotius, writing in 1625, also stated that discovery does not give a right to sovereignty over inhabited land, “For discovery applies to those things which belong to no one.”
By the eighteenth century, however, some writers argued that territorial rights over land could stem from the settlement and cultivation of that land. William Blackstone, in 1765, wrote, "Plantations or colonies, in distant countries, are either such where the lands are claimed by right of occupancy only, by finding them desert and uncultivated, and peopling them from the mother-country; or where, when already cultivated, they have been either gained by conquest, or ceded to us by treaties. And both these rights are founded upon the law of nature, or at least upon that of nations."
Several years before Blackstone, Emer de Vattel, in his Le droit des gens (1758), drew a distinction between land that was effectively occupied and cultivated, and the unsettled and uncultivated land of nomads which was open to colonisation.
Borch states that many commentators erroneously interpreted this to mean that any uncultivated lands, whether inhabited or not, could be claimed by a colonising state by right of occupancy. Borch places the shift towards the view that "uncultivated" but inhabited lands were terra nullius primarily in the 19th century, and argues it was a result of political developments and the rise of new intellectual currents such as scientific racism and legal positivism.
The Berlin West Africa Conference of 1884-85 endorsed the principle that sovereignty over an unclaimed territory required effective occupation, and that where native populations had established effective occupation their sovereignty could not be unilaterally overturned by a colonising state.
The term terra nullius was used in 1885 in relation to the dispute between Spain and the United States over Contoy Island. Hermann Eduard von Holst wrote, “Contoy was not, in an international sense, a desert, that is an abandoned island and hence terra nullius.” In 1888, the concept of territorium nullius (nobody’s territory) was introduced as a public law equivalent to the private law concept of res nullius.
In 1909, the Italian international jurist Camille Piccioni described the island of Spitzbergen in the Arctic Circle as terra nullius. Even though the island was inhabited by the nationals of several European countries, the inhabitants did not live under any formal sovereignty.
In subsequent decades, the term terra nullius gradually replaced territorium nullius. Fitzmaurice argues that the two concepts were initially distinct, territorium nullius applying to territory in which the inhabitants might have property rights but had not developed political sovereignty whereas terra nullius referred to an absence of property. Nevertheless, terra nullius also implied an absence of sovereignty because sovereignty required property rights acquired through the exploitation of nature. Michael Connor, however, argues that territorium nullius and terra nullius were the same concept, meaning land without sovereignty, and that property rights and cultivation of land were not part of the concept.
The term terra nullius was adopted by the International Court of Justice in its 1975 Western Sahara advisory opinion. The majority wrote, "'Occupation' being legally an original means of peaceably acquiring sovereignty over territory otherwise than by cession or succession, it was a cardinal condition of a valid 'occupation' that the territory should be terra nullius – a territory belonging to no-one – at the time of the act alleged to constitute the 'occupation'." The court found that at the time of Spanish colonisation in 1884, the inhabitants of Western Sahara were nomadic but socially and politically organised in tribes and under chiefs competent to represent them. According to State practice of the time the territory therefore was not terra nullius.
Current claims of terra nullius
There are three instances where land is sometimes claimed to be terra nullius: Bir Tawil bordering Egypt and the Sudan, four small areas along the Croatia–Serbia border, and Marie Byrd Land in Antarctica.
Bir Tawil
Between Egypt and the Sudan is the landlocked territory of Bir Tawil, which was created by a discrepancy between borders drawn in 1899 and 1902. One border placed Bir Tawil under the Sudan's control and the Halaib Triangle under Egypt's; the other border did the reverse. Each country asserts the border that would give it the much larger Halaib Triangle, to the east, which is adjacent to the Red Sea, with the side effect that Bir Tawil is unclaimed by either country (each claims the other owns it). Bir Tawil has no settled population, but the land is used by Bedouins who roam the area.
Gornja Siga and other pockets
Croatia and Serbia dispute several small areas on the east bank of the Danube. However, four pockets on the western river bank, of which Gornja Siga is the largest, are not claimed by either country. Serbia makes no claims on the land while Croatia states that the land belongs to Serbia. Croatia states that the disputed area is not terra nullius and they are negotiating with Serbia to settle the border.
Marie Byrd Land
While several countries made claims to parts of Antarctica in the first half of the 20th century, the remainder, including most of Marie Byrd Land (the portion east from 150°W to 90°W), has not been claimed by any sovereign state. Signatories to the Antarctic Treaty of 1959 agreed not to make such claims, except the Soviet Union and the United States, who reserved the right to make a claim.
An undefined area from 20°W to 45°E was historically considered potentially unclaimed; the Norwegian claim in Queen Maud Land was interpreted as covering the coastal regions, but not continuing all the way to the South Pole. In 2015, the claim was extended to reach as far as 90°S.
Historical claims of terra nullius
Several territories have been claimed to be terra nullius. In a minority of those claims, international and domestic courts have ruled on whether the territory is or was terra nullius or not.
Africa
Burkina Faso and the Niger
A narrow strip of land adjacent to two territorial markers along the Burkina Faso–Niger border was claimed by neither country until the International Court of Justice settled a more extensive territorial dispute in 2013. The former unclaimed territory was awarded to the Niger.
Western Sahara
At the request of Morocco, the International Court of Justice in 1975 addressed whether Western Sahara was terra nullius at the time of Spanish colonisation in 1884. The court found in its advisory opinion that Western Sahara was not terra nullius at that time.
Asia
Pinnacle Islands (Diaoyu Islands/Senkaku Islands)
A disputed archipelago in the East China Sea, the uninhabited Pinnacle Islands, were claimed by Japan to have become part of its territory as terra nullius in January 1895, following the Japanese victory in the First Sino-Japanese War. However, this interpretation is not accepted by the People's Republic of China (PRC) and the Republic of China (Taiwan), both of whom claim sovereignty over the islands.
Saudi–Iraqi neutral zone
The Saudi–Iraqi neutral zone was an area on the border between Saudi Arabia and Iraq within which the border between the two countries had not been settled. The neutral zone came into existence following the Uqair Protocol of 1922 that defined the border between Iraq and the Sultanate of Nejd (Saudi Arabia's predecessor state). An agreement to partition the neutral zone was reached by Iraqi and Saudi representatives on 26 December 1981, and approved by the Iraqi National Assembly on 28 January 1982. The territory was divided on an unknown date between 28 January and 30 July 1982. Notice was given to the United Nations in June 1991.
Saudi–Kuwaiti neutral zone
Scarborough Shoal (South China Sea)
The People's Republic of China, the Republic of China (Taiwan) and the Philippines claim Scarborough Shoal, also known as Panatag Shoal or Huangyan Island. The shoal lies in the South China Sea; the nearest landmass is the Philippine island of Luzon, 220 km (119 nmi) away. The Philippines claims it under the principle of terra nullius and the fact that it lies within its exclusive economic zone (EEZ). Meanwhile, both China and Taiwan claim the shoal based on historical records that Chinese fishermen had discovered and mapped the shoal since the 13th century.
Previously, the shoal was administered as part of Municipality of Masinloc, Province of Zambales, by the Philippines. Since the Scarborough Shoal standoff in 2012, the shoal has been administered as part of Xisha District, Sansha City, Hainan Province, by the People's Republic of China. Taiwan places the shoal under the administration of Cijin District, Kaohsiung City, but does not have control of the shoal.
The Permanent Court of Arbitration (PCA) denied the lawfulness of China's claim in 2016; China rejected the ruling, calling it "ill-founded". In 2019, Taiwan also rejected the ruling and has sent more naval vessels to the area.
It has been speculated that Scarborough Shoal is a prime location for the construction of an artificial island and Chinese ships have been seen in the vicinity of the shoal. However, analysis of photos has concluded that the ships lack dredging equipment and therefore represent no imminent threat of reclamation work.
Europe
Ireland
The term terra nullius has been applied by some modern academics in discussing the English colonisation of Ireland, although the term is not used in the international law sense and is often used as an analogy. Griffen and Cogliano state that the English viewed Ireland as a terra nullius. In The Irish Difference: A Tumultuous History of Ireland’s Breakup With Britain, Fergal Tobin writes that "Ireland had no tradition of unified statehood and no culturally unified establishment. Indeed, it had never known any kind of political unity until a version of it was imposed by Cromwell's sword […] So the English Protestant interest […] came to regard Ireland as a kind of terra nullius." Similarly, Bruce McLeod writes in The Geography of Empire in English Literature, 1580-1745 that "although the English were familiar with Ireland and its geography in comparison to North America, they treated Ireland as though it were terra nullius and thus easily and geometrically subdivided into territorial units." Rolston and McVeigh trace this attitude back to Gerald of Wales (13th century), who wrote "This people despises work on the land, has little use for the money-making of towns, contemns the rights and privileges of citizenship, and desires neither to abandon, nor lose respect for, the life which it has been accustomed to lead in the woods and countryside." The semi-nomadism of the native Irish meant that some English judged them not to be productive users of land. However, Rolston and McVeigh state that Gerald made it clear that Ireland was acquired by conquest and not through the occupation of terra nullius.
Rockall
According to Ian Mitchell, Rockall was terra nullius until it was claimed by the United Kingdom in 1955.
It was formally annexed in 1972.
Sealand
In 1967, Paddy Roy Bates claimed an abandoned British anti-aircraft gun tower in the North Sea as the "Principality of Sealand". The structure is now within British territorial waters and no country recognises Sealand.
Svalbard
Denmark–Norway, the Dutch Republic, the Kingdom of Great Britain, and the Kingdom of Scotland all claimed sovereignty over the archipelago of Svalbard in the seventeenth century, but none permanently occupied it. Expeditions from each of these polities visited Svalbard principally during the summer for whaling, with the first two sending a few wintering parties in the 1620s and 1630s.
During the 19th century, both Norway and Russia made strong claims to the archipelago. In 1909, Italian jurist Camille Piccioni described Spitzbergen, as it was then known, as terra nullius.
The territorial dispute was eventually resolved by the Svalbard Treaty of 9 February 1920 which recognized Norwegian sovereignty over the islands.
North America
Canada
Joseph Trutch, the first Lieutenant Governor of British Columbia, insisted that First Nations had never owned land, and thus their land claims could safely be ignored. It is for this reason that most of British Columbia remains unceded land.
In Guerin v. The Queen, a Canadian Supreme Court decision of 1984 on aboriginal rights, the Court stated that the government has a fiduciary duty toward the First Nations of Canada and established aboriginal title to be a sui generis right. Since then there has been a more complicated debate and a general narrowing of the definition of "fiduciary duty".
Eastern Greenland
Norway occupied and claimed parts of (then uninhabited) eastern Greenland in 1931, claiming that it constituted terra nullius and calling the territory Erik the Red's Land.
The Permanent Court of International Justice ruled against the Norwegian claim. The Norwegians accepted the ruling and withdrew their claim.
United States
A similar concept of "uncultivated land" was employed by John Quincy Adams to identify supposedly unclaimed wilderness.
Guano Islands
The Guano Islands Act of 18 August 1856 enabled citizens of the U.S. to take possession of islands containing guano deposits. The islands can be located anywhere, so long as they are not occupied and not within the jurisdiction of other governments. It also empowers the President of the United States to use the military to protect such interests, and establishes the criminal jurisdiction of the United States.
Oceania
Australia
The British penal colony of New South Wales, which included more than half of mainland Australia, was proclaimed by Governor Captain Arthur Phillip at Sydney in February 1788. At the time of British colonisation, Aboriginal Australians had occupied Australia for at least 50,000 years. They were complex hunter-gatherers with diverse economies and societies and about 250 different language groups. The Aboriginal population of the Sydney area was an estimated 4,000 to 8,000 people who were organised in clans which occupied land with traditional boundaries.
There is debate over whether Australia was colonised by the British from 1788 on the basis that the land was terra nullius. Frost, Attwood and others argue that even though the term terra nullius was not used in the eighteenth century, there was widespread acceptance of the concept that a state could acquire territory through occupation of land that was not already under sovereignty and was uninhabited or inhabited by peoples who had not developed permanent settlements, agriculture, property rights or political organisation recognised by European states. Borch, however, states that, "it seems much more likely that there was no legal doctrine maintaining that inhabited land could be regarded as ownerless, nor was this the basis of official policy, in the eighteenth century or before. Rather it seems to have developed as a legal theory in the nineteenth century.”
In Mabo v Queensland (No 2) (1992), Justice Dawson stated, "Upon any account, the policy which was implemented and the laws which were passed in New South Wales make it plain that, from the inception of the colony, the Crown treated all land in the colony as unoccupied and afforded no recognition to any form of native interest in the land."
Stuart Banner states that the first known Australian legal use of the concept (although not the term) terra nullius was in 1819 in a tax dispute between Barron Field and the Governor of New South Wales Lachlan Macquarie. The matter was referred to British Attorney General Samuel Shepherd and Solicitor General Robert Gifford who advised that New South Wales had not been acquired by conquest or cession, but by possession as "desert and uninhabited".
In 1835, a Proclamation by Governor Bourke stated that British subjects could not obtain title over vacant Crown land directly from Aboriginal Australians.
In R v Murrell (1836) Justice Burton of the Supreme Court of New South Wales stated, "although it might be granted that on the first taking possession of the Colony, the aborigines were entitled to be recognised as free and independent, yet they were not in such a position with regard to strength as to be considered free and independent tribes. They had no sovereignty."
In the Privy Council case Cooper v Stuart (1889), Lord Watson stated that New South Wales was, "a tract of territory practically unoccupied, without settled inhabitants or settled law, at the time when it was peacefully annexed to the British dominions."
In the Mabo Case (1992), the High Court of Australia considered the question of whether Australia had been colonised by Britain on the basis that it was terra nullius. The court did not consider the legality of the initial colonisation as this was a matter of international law and, "The acquisition of territory by a sovereign state for the first time is an act of state which cannot be challenged, controlled or interfered with by the courts of that state." The questions for decision included the implications of the initial colonisation for the transmission of the common law to New South Wales and whether the common law recognised that the Indigenous inhabitants had any form of native title to land. Dismissing a number of previous authorities, the court rejected the "enlarged notion of terra nullius", by which lands inhabited by Indigenous peoples could be considered desert and uninhabited for the purposes of Australian municipal law. The court found that the common law of Australia recognised a form of native title held by the Indigenous peoples of Australia and that this title persisted unless extinguished by a valid exercise of sovereign power inconsistent with the continued right to enjoy native title.
Clipperton Island
The sovereignty of Clipperton Island was settled by arbitration between France and Mexico. King Victor Emmanuel III of Italy rendered a decision in 1931 that the sovereignty of Clipperton Island belongs to France from the date of November 17, 1858. The Mexican claim was rejected for lack of proof of prior Spanish discovery and, in any event, no effective occupation by Mexico before 1858, when the island was therefore territorium nullius, and the French occupation then was sufficient and legally continuing.
South Island of New Zealand
In 1840, the newly appointed Lieutenant-Governor of New Zealand, Captain William Hobson of the Royal Navy, following instructions from the British government, declared sovereignty over the Middle Island (later called the South Island) and Stewart Island on the basis they were terra nullius.
South America
Patagonia
According to some views, Patagonia was regarded as terra nullius in the 19th century. This notion ignored the Spanish Crown's recognition of indigenous Mapuche sovereignty and is considered by scholars Nahuelpán and Antimil to have set the stage for an era of Chilean "republican colonialism".
See also
Appropriation concepts
Footnotes
References
Sources
Further reading
Lindqvist, Sven. Terra Nullius: A Journey Through No One's Land. svenlindqvist.net (author's website).
External links
Governor Bourke's 1835 proclamation of terra nullius.
Analysis of Michael Connor's denial of terra nullius (The Invention of Terra Nullius).
Common law
Constitutional state types
International law
Legal fictions
Latin legal terminology
Aboriginal title
Legal doctrines and principles
Colonialism
Space law
Borders | Terra nullius | Physics,Astronomy | 4,488 |
34,179,325 | https://en.wikipedia.org/wiki/Alabama%20Regional%20Communications%20System | The Alabama Regional Communications System (ARCS) is a radio/alert notification communications district with responsibility for providing user-based administration for operations and maintenance of the interoperable communications system that serves Calhoun County, Alabama and Talladega County, Alabama. The Motorola trunked radio system is licensed by the Federal Communications Commission (FCC) to operate on radio frequency spectrum in the 800 megahertz (MHz) public safety band.
Background
The Calhoun-Talladega 800 MHz communications system was originally developed and installed to enhance interoperability and to provide alert and notification for the community during the destruction of chemical weapons at the Anniston Army Depot (ANAD). The communications system was funded and maintained through a federal grant known as the Chemical Stockpile Emergency Preparedness Program (CSEPP), a joint United States Army and Federal Emergency Management Agency (FEMA) program designed to provide community education and emergency preparedness resources in the event of a chemical agent emergency. The original communications system, a legacy system with analog trunking, went on the air in 1998 and quickly became the primary means of two-way communications for most public safety agencies in Calhoun and Talladega Counties, replacing their standalone conventional radio repeaters. In 2006, the CSEPP communications system was upgraded to a digital Project 25 Motorola Type II SmartZone version 7.4 linear simulcast/multicast trunked 800 MHz communications system.
The facilities at ANAD, known as Anniston Chemical Activity (ANCA), stored approximately seven percent of the nation's chemical weapons stockpile, including the nerve agents VX and sarin (GB). ANAD was one of nine United States Army installations in the United States that stored chemical weapons. The United States Army began incinerating the stored weapons on August 9, 2003. The destruction of ANAD's chemical weapons stockpile was controversial, in part, because of the potential danger to the surrounding community should an incident occur during weapons disposal operations. In September 2011, the US Army's Chemical Materials Agency (CMA) successfully completed the safe elimination of ANAD's chemical weapons stockpile.
Funding Transition
As a result of the completion of chemical warfare disposal operations at ANAD, all federal funding for the communities surrounding the Anniston Army Depot will end on March 31, 2012. This includes funding for operations and maintenance of the CSEPP communications system. Over the years, the community has become increasingly dependent upon the capabilities of the communications system, in particular because of its reliability, interoperability, capacity and in-building coverage. It provides 24-7-365 voice and data communications services for nearly 3,000 users including law enforcement, fire and rescue, emergency medical services, school facilities and buses, hospitals, transportation, utilities and many other agencies. It also provides activation signaling for almost 200 outdoor warning sirens located throughout both counties. By the time chemical weapons disposal was completed, the system was serving nearly 100 percent of the mission-critical communications needs in Calhoun and Talladega Counties.
In preparation for the end of CSEPP grant funding, local officials explored alternative strategies to generate funds that would allow continued service and operation of the existing communications system. With the support and cooperation of local and state elected officials, FEMA, consulting firms and the users of the system, an exploratory committee crafted legislation that would allow the users to take ownership of the system.
New Beginning
On June 15, 2011, Alabama Governor Robert J. Bentley signed House Bill 389 into law as Act 2011-675 to create the radio/alert notification communications district. The purpose of this legislation was to create a mechanism for the community to transition the CSEPP communications system from the federal grant funding to locally generated funding, operations and ownership.
As provided by the new Alabama law codified in Sections 11-31-1 to 11–31–4, Code of Alabama 1975 (as amended), the Calhoun and Talladega County Commissions made their respective appointments to the communications district's board of directors (term of appointment listed in parentheses):
CALHOUN COUNTY
Mr. Mike Fincher, Director of Safety, Calhoun County Board of Education (4)
Police Chief Bill Partridge, Oxford, Alabama (3)
Ms. Melonie Carmichael, emergency management specialist, Jacksonville State University (3)
Fire Chief Mike Howard, Alexandria, Alabama (2)
TALLADEGA COUNTY
Police Chief Travis McGrady, Lincoln, Alabama (4)
Police Chief Alan Watson, Talladega, Alabama (3)
Mr. Andy McWilliams, facility operations manager, Talladega Superspeedway (3)
Fire Chief Danny Warwick, Talladega, Alabama (2)
The duly appointed board of directors convened for the first time on September 1, 2011, the first day of their appointment terms. With all members present, a motion passed unanimously to officially name the communications district as the Alabama Regional Communications System. The ARCS board of directors meetings are open to the public, with regularly scheduled monthly meetings held on the second Tuesday of every month at the Oxford, Alabama Police Department and Municipal Court, 600 Stanley Merrill Drive, Oxford, Alabama 36203.
On April 1, 2012, the ARCS will become fully responsible for administration and operation of the 800 MHz system, with assets estimated at a value of approximately 100 million dollars. At that time, the cost of operating the system will be 100 percent funded by revenue collected from user fees. Based on the current predicted costs to operate and maintain the system, the ARCS board of directors has determined the monthly user fee as $22.50 per two-way radio device (mobile, portable, modem, siren control) and $50.00 per dispatch console (rate applies to local users in Calhoun and Talladega County).
The formula for determining the user fee is based on the total estimated annual costs to operate the communications system compared to the projected total number of users subscribing to the system. The current user fees were established based on approximately 2,800 users with a projected annual cost of approximately $760,000 to operate and maintain the system.
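The quoted rate can be sanity-checked directly from those figures; a short Python calculation using only the numbers stated above:

```python
annual_cost = 760_000  # projected annual operating cost in dollars
devices = 2_800        # projected number of subscribing devices

monthly_fee = annual_cost / devices / 12
print(f"${monthly_fee:.2f} per device per month")  # ~$22.62
# Close to the adopted $22.50 device rate; the higher $50.00 console
# rate offsets the small shortfall.
```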
References
External links
ARCS website
CMA Alabama/Anniston Chemical Activity Web Page
Public safety communications
Radio communications
Interoperable communications | Alabama Regional Communications System | Engineering | 1,263 |
2,903,211 | https://en.wikipedia.org/wiki/62%20Aurigae | 62 Aurigae is a star located 559 light years away from the Sun in the northern constellation of Auriga. It is visible to the naked eye as a dim, orange-hued star with an apparent visual magnitude of 6.02. This object is moving further from the Earth with a heliocentric radial velocity of +25 km/s. It is an aging giant star with a stellar classification of K2 III, having exhausted the supply of hydrogen at its core then expanded to 22 times the Sun's radius. 62 Aurigae is radiating 167 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,389 K.
References
K-type giants
Auriga
Durchmusterung objects
Aurigae, 62
051440
033614
2600 | 62 Aurigae | Astronomy | 172 |
3,455,162 | https://en.wikipedia.org/wiki/Selenol | Selenols are organic compounds that contain the functional group with the connectivity . Selenols are sometimes also called selenomercaptans and selenothiols. Selenols are one of the principal classes of organoselenium compounds. A well-known selenol is the amino acid selenocysteine.
Structure, bonding, properties
Selenols are structurally similar to thiols, but the C–Se bond, at 196 pm, is about 8% longer than the corresponding C–S bond. The C–Se–H angle approaches 90°; the bonding involves almost pure p-orbitals on Se, hence the near-90° angles. The Se–H bond is weaker than the S–H bond, and consequently selenols are easily oxidized and serve as H-atom donors. This is reflected in the respective bond dissociation energies (BDEs): for Se–H, the BDE is 326 kJ/mol, while for S–H it is 368 kJ/mol.
Selenols are about 1000 times stronger acids than thiols: a representative selenol has a pKa of about 5.2, versus 8.3 for the corresponding thiol. Deprotonation affords the selenolate anion, RSe−, most examples of which are highly nucleophilic and rapidly oxidized by air.
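The "about 1000 times" figure follows directly from the pKa difference, since the ratio of acid dissociation constants is 10 raised to that difference:

```python
ka_ratio = 10 ** (8.3 - 5.2)  # pKa(thiol) - pKa(selenol)
print(f"~{ka_ratio:.0f}x stronger")  # ~1259x, roughly three orders of magnitude
```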
The boiling points of selenols tend to be slightly greater than for thiols. This difference can be attributed to the increased importance of stronger van der Waals bonding for larger atoms. Volatile selenols have highly offensive odors.
Applications and occurrence
Selenols have few commercial applications, being limited by the toxicity of selenium as well as the sensitivity of the Se–H bond. Their conjugate bases, the selenolates, also have limited applications in organic synthesis.
Biochemical role
Selenols are important in certain biological processes. Three enzymes found in mammals contain selenols at their active sites: glutathione peroxidase, iodothyronine deiodinase, and thioredoxin reductase. The selenols in these proteins are part of the essential amino acid selenocysteine. The selenols function as reducing agents to give selenenic acid derivatives (RSeOH), which in turn are re-reduced by thiol-containing enzymes. Methaneselenol (commonly named "methylselenol", CH3SeH), which can be produced in vitro by incubating selenomethionine with a bacterial methionine gamma-lyase (METase) enzyme, by biological methylation of selenide ion, or in vivo by reduction of methaneseleninic acid (CH3SeO2H), has been invoked to explain the anticancer activity of certain organoselenium compounds. Precursors of methaneselenol are under active investigation in cancer prevention and therapy. In these studies, methaneselenol is found to be more biologically active than ethaneselenol (C2H5SeH) or 2-propaneselenol ((CH3)2CHSeH).
Preparation
Selenols are usually prepared by the reaction of organolithium reagents or Grignard reagents with elemental Se. For example, benzeneselenol is generated by the reaction of phenylmagnesium bromide with selenium, followed by acidification:
C6H5MgBr + Se → C6H5SeMgBr
C6H5SeMgBr + HCl → C6H5SeH + MgBrCl
Another preparative route to selenols involves the alkylation of selenourea, followed by hydrolysis. Selenols are often generated by reduction of diselenides followed by protonation of the resulting selenolate:
RSeSeR + 2 e− → 2 RSe−
RSe− + H+ → RSeH
Dimethyl diselenide can be easily reduced to methaneselenol within cells.
Reactions
Selenols are easily oxidized to diselenides, compounds containing an bond. For example, treatment of benzeneselenol with bromine gives diphenyl diselenide.
In the presence of base, selenols are readily alkylated to give selenides. This relationship is illustrated by the methylation of methaneselenol to give dimethylselenide.
Safety
Organoselenium compounds (or any selenium compound) are cumulative poisons despite the fact that trace amounts of Se are required for health.
See also
Alcohol
Thiol
Tellurol
Selenol, alcohol, thiol acidity order
References
Functional groups
Selenols | Selenol | Chemistry | 881 |
63,119,437 | https://en.wikipedia.org/wiki/Transplant%20engineering | Transplant engineering (or allograft engineering) is a variant of genetic organ engineering which comprises allograft, autograft and xenograft engineering. In allograft engineering the graft is substantially modified by altering its genetic composition. The genetic modification can be permanent or transient. The aim of modifying the allograft is usually the mitigation of immunological graft rejection.
History
Transient genetic allograft engineering was pioneered by Shaf Keshavjee and Marcelo Cypel at University Health Network in Toronto, using adenoviral transduction for transgenic expression of the IL-10 gene. Permanent genetic allograft engineering was first performed by Rainer Blasczyk and Constanca Figueiredo at Hannover Medical School in Hanover, using lentiviral transduction to knock down MHC expression in pigs (lung) and rats (kidney).
References
Genetic engineering
Transplantation medicine | Transplant engineering | Chemistry,Engineering,Biology | 193 |
34,932,939 | https://en.wikipedia.org/wiki/MicroRNA%20sequencing | MicroRNA sequencing (miRNA-seq), a type of RNA-Seq, is the use of next-generation sequencing or massively parallel high-throughput DNA sequencing to sequence microRNAs, also called miRNAs. miRNA-seq differs from other forms of RNA-seq in that input material is often enriched for small RNAs. miRNA-seq allows researchers to examine tissue-specific expression patterns, disease associations, and isoforms of miRNAs, and to discover previously uncharacterized miRNAs. Evidence that dysregulated miRNAs play a role in diseases such as cancer has positioned miRNA-seq to potentially become an important tool in the future for diagnostics and prognostics as costs continue to decrease. Like other miRNA profiling technologies, miRNA-Seq has both advantages (sequence-independence, coverage) and disadvantages (high cost, infrastructure requirements, run length, and potential artifacts).
Introduction
MicroRNAs (miRNAs) are a family of small ribonucleic acids, 21-25 nucleotides in length, that modulate protein expression through transcript degradation, inhibition of translation, or sequestering transcripts. The first miRNA to be discovered, lin-4, was found in a genetic mutagenesis screen to identify molecular elements controlling post-embryonic development of the nematode Caenorhabditis elegans. The lin-4 gene encoded a 22 nucleotide RNA with conserved complementary binding sites in the 3’-untranslated region of the lin-14 mRNA transcript and downregulated LIN-14 protein expression. miRNAs are now thought to be involved in the regulation of many developmental and biological processes, including haematopoiesis (miR-181 in Mus musculus), lipid metabolism (miR-14 in Drosophila melanogaster) and neuronal development (lsy-6 in Caenorhabditis elegans). These discoveries necessitated development of techniques able to identify and characterize miRNAs, such as miRNA-seq.
History
MicroRNA sequencing (miRNA-seq) was developed to take advantage of next-generation sequencing or massively parallel high-throughput sequencing technologies in order to find novel miRNAs and their expression profiles in a given sample. Sequencing miRNA is not in itself a new idea: initial methods used Sanger sequencing. Sequencing preparation involved creating libraries by cloning DNA reverse-transcribed from endogenous small RNAs of 21–25 bp, size-selected by column and gel electrophoresis. However, this method is exhaustive in terms of time and resources, as each clone has to be individually amplified and prepared for sequencing, and it inadvertently favors highly expressed miRNAs. Next-generation sequencing eliminates the need for the sequence-specific hybridization probes required in DNA microarray analysis, as well as the laborious cloning required by the Sanger method. Additionally, the next-generation sequencing platforms used in miRNA-seq facilitate the sequencing of large pools of small RNAs in a single sequencing run.
miRNA-seq can be performed using a variety of sequencing platforms. The first analysis of small RNAs using miRNA-seq methods examined approximately 1.4 million small RNAs from the model plant Arabidopsis thaliana using Lynx Therapeutics' Massively Parallel Signature Sequencing (MPSS) platform. This study demonstrated the potential of novel, high-throughput sequencing technologies for the study of small RNAs, and it showed that genomes generate large numbers of small RNAs, with plants as particularly rich sources. Later studies used other sequencing technologies, such as a study in C. elegans which identified 18 novel miRNA genes as well as a new class of nematode small RNAs termed 21U-RNAs. Another study, comparing small RNA profiles of human cervical tumours and normal tissue, utilized the Illumina Genome Analyzer to identify 64 novel human miRNA genes as well as 67 differentially expressed miRNAs. The Applied Biosystems SOLiD platform has also been used to examine the prognostic value of miRNAs in detecting human breast cancer.
Methods
Small RNA Preparation
Sequence library construction can be performed using a variety of different kits depending on the high-throughput sequencing platform being employed. However, there are several common steps for small RNA sequencing preparation.
Total RNA Isolation
In a given sample, all the RNA is extracted and isolated using a guanidinium isothiocyanate/phenol/chloroform (GITC/phenol) method or a commercial product such as TRIzol (Invitrogen) reagent. A starting quantity of 50–100 μg of total RNA (1 g of tissue typically yields 1 mg of total RNA) is usually required for gel purification and size selection. RNA quality is also assessed, for example by running an RNA chip on a Caliper LabChipGX (Caliper Life Sciences).
Size Fractionation of small RNAs by Gel Electrophoresis
Isolated RNA is run on a denaturing polyacrylamide gel. A size ladder and an imaging method, such as radioactive 5'-32P-labeled oligonucleotides, are used to identify the section of the gel containing RNA of the appropriate size, reducing the amount of material ultimately sequenced. This step need not be carried out before the ligation and reverse transcription steps outlined below.
Ligation
The ligation step adds DNA adaptors to both ends of the small RNAs; these act as primer-binding sites during reverse transcription and PCR amplification. An adenylated single-stranded DNA 3' adaptor, followed by a 5' adaptor, is ligated to the small RNAs using a ligating enzyme such as T4 RNA ligase 2. The adaptors are also designed to capture small RNAs with a 5' phosphate group, characteristic of microRNAs, rather than RNA degradation products with a 5' hydroxyl group.
Reverse Transcription and PCR Amplification
This step converts the adaptor-ligated small RNAs into cDNA clones used in the sequencing reaction. There are many commercial kits available that will carry out this step using some form of reverse transcriptase. PCR is then carried out to amplify the pool of cDNA sequences. Primers designed with unique nucleotide tags can also be used in this step to create ID tags in pooled-library multiplex sequencing.
Sequencing
The actual RNA sequencing varies significantly depending on the platform used. Three common next-generation approaches are pyrosequencing on the 454 Life Sciences platform, polymerase-based sequencing-by-synthesis on the Illumina platform, and sequencing by ligation on the ABI SOLiD platform.
Data Analysis
Central to miRNA-seq data analysis is the ability to (1) obtain miRNA abundance levels from sequence reads, (2) discover novel miRNAs, (3) determine differentially expressed miRNAs and (4) identify their associated mRNA gene targets.
miRNA Alignment & Abundance Quantification
miRNAs may be preferentially expressed in certain cell types, tissues, stages of development, or particular disease states such as cancer. Since deep sequencing (miRNA-seq) generates millions of reads from a given sample, it allows miRNAs to be profiled, whether by quantifying their absolute abundance or by discovering their variants (known as isomiRs). Note that because the average sequence read is longer than the average miRNA (17–25 nt), the 3' and 5' ends of a miRNA should be found on the same read.
There are several miRNA abundance quantification algorithms. Their general steps are as follows (a minimal sketch of the tag-collapsing and normalization steps is shown after the list):
After sequencing, the raw sequence reads are filtered based on quality. The adaptor sequences are also trimmed off the raw sequence reads.
The resulting reads are then formatted into a fasta file where the copy number and sequence is recorded for each unique tag.
Sequences that may represent E. coli contamination are identified by a BLAST search against an E. coli database and are removed from analysis.
Each of the remaining sequences is aligned against a miRNA sequence database (such as miRBase). To account for imperfect DICER processing, a 6 nt overhang on the 3' end and a 3 nt overhang on the 5' end are allowed.
The reads that do not align to the miRNA database are then loosely aligned to miRNA precursors to detect miRNAs that might carry mutations or those that have gone through RNA editing.
The read counts for each miRNA are then normalized to the total number of mapped miRNAs to report the abundance of each miRNA.
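The tag-collapsing (step 2) and normalization (step 6) operations are simple to express in code. A minimal Python sketch; the miRNA names and counts are invented toy data, and real pipelines add quality filtering, adaptor trimming and alignment in between:

```python
from collections import Counter

def collapse_reads(trimmed_reads):
    """Collapse identical trimmed reads into unique tags with copy numbers."""
    return Counter(trimmed_reads)

def normalize_rpm(mirna_counts):
    """Convert raw per-miRNA counts to reads per million mapped miRNA reads."""
    total = sum(mirna_counts.values())
    return {name: count * 1_000_000 / total for name, count in mirna_counts.items()}

# Toy counts as they might look after alignment to a database such as miRBase.
counts = {"hsa-miR-21-5p": 50_000, "hsa-let-7a-5p": 30_000, "hsa-miR-155-5p": 20_000}
print(normalize_rpm(counts))  # e.g. hsa-miR-21-5p -> 500000.0 reads per million
```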
Novel miRNA Discovery
Another advantage of miRNA-seq is that it allows the discovery of novel miRNAs that may have eluded traditional screening and profiling methods. There are several novel miRNA discovery algorithms. Their general steps are as follows:
Obtain reads that did not align to known miRNA sequences, and map them to the genome.
RNA Folding Method
For the miRNA sequences where an exact match is found, obtain the genomic sequence including ~100 bp of flanking sequence on either side, and run the RNA through RNA-folding software such as the Vienna package (a minimal filtering sketch follows this list).
Folded sequences that lie on one arm of the miRNA hairpin and have a minimum free energy below about −25 kcal/mol are shortlisted as putative miRNAs.
The shortlisted sequences are trimmed down to include only the possible precursor sequence and are then refolded to ensure that the precursor was not artificially stabilized by neighbouring sequences.
The resulting folded sequences are considered novel miRNAs if the miRNA sequence falls within one arm of the hairpin, and are highly conserved between species.
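The fold-and-filter step can be sketched with the Vienna package's Python bindings, assuming they are installed; the energy cutoff mirrors the ~−25 kcal/mol threshold above, the candidate sequence is invented, and the additional hairpin-arm and conservation checks are omitted:

```python
import RNA  # ViennaRNA Python bindings (an assumed dependency)

def passes_mfe_filter(seq, mfe_cutoff=-25.0):
    """Fold a candidate precursor and keep it only if its minimum free
    energy is at or below the cutoff (more negative = more stable)."""
    structure, mfe = RNA.fold(seq)
    return mfe <= mfe_cutoff, structure, mfe

candidate = "GGGAUACUUUAACGAUUGGCUAAGUUCCAAAGUAUCCC"  # invented sequence
ok, structure, mfe = passes_mfe_filter(candidate)
print(structure, f"{mfe:.1f} kcal/mol", "shortlist" if ok else "reject")
```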
Star Strand Expression Method (miRDeep)
Novel miRNA sequences are identified based on the characteristic expression pattern that they display due to DICER processing: higher expression of the mature miRNA over the star strand and loop sequences.
Differential Expression Analysis
After the abundances of miRNAs are quantified for each sample, their expression levels can be compared between samples. One can then identify miRNAs that are preferentially expressed at particular time points, or in particular tissues or disease states. After normalizing for the number of mapped reads between samples, one can use a host of statistical tests (like those used in gene expression profiling) to determine differential expression.
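As a simplified illustration of a count-based comparison, a single miRNA can be tested between two samples with Fisher's exact test on a 2×2 table of mapped reads (assuming SciPy is available; the counts are invented, and published studies typically use dedicated count-model methods instead):

```python
from scipy.stats import fisher_exact

# Rows: tumour sample, normal sample.
# Columns: reads for the miRNA of interest, all other mapped reads.
table = [[900, 1_000_000],
         [300, 1_100_000]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.2e}")
```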
Target Prediction
Identifying a miRNA's mRNA targets provides an understanding of the genes, or networks of genes, whose expression it regulates. Public databases provide predictions of miRNA targets, but to better distinguish true positives from false positives, miRNA-seq data can be integrated with mRNA-seq data to observe functional miRNA:mRNA pairs. RNA22, TargetScan, miRanda, and PicTar are software designed for this purpose.
The general steps are:
Determine miRNA:mRNA binding pairs by identifying complementarity between the miRNA sequence and the 3'-UTR of the mRNA (see the seed-match sketch after this list).
Determine the degree of conservation of miRNA:mRNA binding pairs across species. Typically, more highly conserved pairs are less likely to be false-positive predictions.
Observe for evidence of miRNA targeting in mRNA-seq or protein expression data: where the miRNA expression is high, the gene and protein expression of its target gene should be low.
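Step 1 amounts to a seed-match search: the reverse complement of miRNA nucleotides 2–8 is looked up in the 3'-UTR. A minimal Python sketch; the UTR fragment is invented, while the miRNA shown is the canonical mature hsa-miR-21-5p sequence:

```python
def seed_match_positions(mirna, utr):
    """Return 0-based 3'-UTR positions matching the miRNA seed
    (reverse complement of nucleotides 2-8 of the mature miRNA)."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]                                  # nucleotides 2-8
    site = "".join(comp[nt] for nt in reversed(seed))  # reverse complement
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UAGCUUAUCAGACUGAUGUUGA"   # mature hsa-miR-21-5p
utr = "CCAAUAAGCUAACCGAUAAGCUAGG"  # invented 3'-UTR fragment
print(seed_match_positions(mirna, utr))  # [3, 15]
```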
Target Validation for Cleaved mRNA Targets
Many miRNAs function to direct cleavage of their mRNA targets; this is particularly true in plants, and thus high-throughput sequencing methods have been developed to take advantage of this property of miRNAs by sequencing the uncapped 3' ends of cleaved or degraded mRNAs. These methods are known as Degradome sequencing or PARE. Validation of target cleavage in specific mRNAs is typically performed using a modified version of 5' Rapid Amplification of cDNA Ends with a gene-specific primer.
Applications
Identification of Novel miRNAs
miRNA-seq has revealed novel miRNAs that had previously eluded traditional miRNA profiling methods. Examples of such findings are in embryonic stem cells, chicken embryos, acute lymphoblastic leukaemia, diffuse large B-cell lymphoma and B-cells, acute myeloid leukaemia, and lung cancer.
Disease biomarkers
MicroRNAs are important regulators of almost all cellular processes such as survival, proliferation, and differentiation. Consequently, it is not unexpected that miRNAs are involved in various aspects of cancer through the regulation of onco- and tumor suppressor gene expression. In combination with the development of high-throughput profiling methods, miRNAs have been identified as biomarkers for cancer classification, response to therapy, and prognosis. Additionally, because miRNAs regulate gene expression, they can also reveal perturbations in important regulatory networks that may be driving a particular disorder. Applications of miRNAs as biomarkers and predictors have been reported across a range of diseases.
Comparison With Other Methods of miRNA Profiling
The disadvantages of using miRNA-seq over other methods of miRNA profiling are that it is more expensive, generally requires a larger amount of total RNA, involves extensive amplification, and is more time-consuming than microarray and qPCR methods. In addition, miRNA-seq library preparation methods appear to systematically over-represent parts of the miRNA complement, which prevents accurate determination of miRNA abundance. At the same time, the approach is hybridization-independent and therefore does not require a priori sequence information. Because of this, one can obtain sequences of novel miRNAs and miRNA isoforms (isomiRs), distinguish sequentially similar miRNAs, and identify point mutations.
References
DNA sequencing | MicroRNA sequencing | Chemistry,Biology | 2,875 |
72,545,295 | https://en.wikipedia.org/wiki/HD%20204018 | HD 204018, also designated as HR 8202, is a visual binary located in the southern constellation Microscopium. The primary has an apparent magnitude of 5.58, making it faintly visible to the naked eye under ideal conditions. The companion has an apparent magnitude of 8.09. The system is located relatively close at a distance of 176 light years based on Gaia DR3 parallax measurements but is receding with a heliocentric radial velocity of . At its current distance, HD 204018's combined brightness is diminished by 0.13 magnitudes due to interstellar dust.
HD 204018A is an Am star with a stellar classification of kA4hF0 VmF6, indicating that it has the calcium K-line of an A4 star, the hydrogen lines of an F0 main-sequence star and the metallic lines of an F6 star. It has 1.65 times the mass of the Sun and 2.55 times the Sun's radius. It radiates 15 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it a yellowish-white hue. At an age of 1.5 billion years, HD 204018A is estimated to be on the subgiant branch. An alternate model places it on the main sequence at an age of 1.35 billion years. The object spins at a moderate speed with a projected rotational velocity of .
The companion is an F8 main-sequence star located " away along a position angle of 151°. It has an angular diameter of , which yields a radius of at its estimated distance. It has 1.02 times the mass of the Sun and an effective temperature of . HD 204018B is estimated to be 3.46 billion years old and is slightly metal-deficient.
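The conversion from angular diameter to physical radius is R = θ·d/2. A Python sketch using the system's 176-light-year distance; the 0.16 mas angular diameter below is a hypothetical placeholder, not the measured value:

```python
import math

MAS_TO_RAD = math.radians(1 / 3.6e6)   # one milliarcsecond in radians
PC_M, RSUN_M, LY_PER_PC = 3.0857e16, 6.957e8, 3.2616

def radius_in_rsun(theta_mas, dist_ly):
    """Physical radius (solar radii) from angular diameter and distance."""
    d_m = (dist_ly / LY_PER_PC) * PC_M
    return theta_mas * MAS_TO_RAD * d_m / 2 / RSUN_M

print(f"{radius_in_rsun(0.16, 176):.2f} Rsun")  # ~0.93 Rsun for 0.16 mas
```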
There is a magnitude 12 co-moving companion located 295.3" away from the system along a position angle of 75°. It is a red dwarf with an estimated spectral class of K6 and any orbit would take over a million years.
References
Binary stars
A-type subgiants
Am stars
F-type main-sequence stars
Microscopium
Microscopii, 69
CD-43 14539
204018
105913
8202 | HD 204018 | Astronomy | 463 |
15,024,188 | https://en.wikipedia.org/wiki/Bluetooth%20advertising | Bluetooth advertising is a method of mobile marketing that utilizes Bluetooth technology to deliver content such as message, information, or advertisement to mobile devices such as cellular phones or tablet computers. Bluetooth advertising can also be received via laptop or personal digital assistants (PDAs).
Bluetooth advertising is permission-based advertising, which means that when a mobile device has received a Bluetooth message, the recipient has the choice to either accept or decline it. The recipient needs to positively indicate that they wish to receive marketing messages.
While not all users of Bluetooth-enabled mobile devices leave Bluetooth activated, signage can be used to encourage them to turn on Bluetooth to receive the content. The advertiser is required to explain that marketing messages may contain information about other companies' products and services, where appropriate. It is highly recommended that the Direct Marketing Association's guidelines be followed.
Bluetooth advertising proximity range
Bluetooth advertising is generally a broadcast function. The average range of Bluetooth advertising with Class 2 transmitters is 15 to 40 meters for most Bluetooth-enabled mobile devices.
As with all wireless transmission, the range and accessibility of most Bluetooth advertising depends on the transmitter power class and the individual receiver equipment. However, with advances in mobile device technology, the usable receiving distance has increased to 250 meters or more on modern smartphones, tablet computers and other mobile devices.
Selectivity decreases as range is extended: raising transmission power or receiver sensitivity weakens the contextual connection between the receiver's actual location and the contents of the broadcast message.
There are several major types of Bluetooth advertising solutions. These generally use Bluetooth dongles as transmitter hardware, most often connected over USB to a common server.
Embedded scheduling software drives transmission through the dongles to enabled Bluetooth receivers. Because Bluetooth reception requires battery power, distribution depends on whether the owners of receiving devices are prepared to accept such transmissions.
Bluetooth advertising content types
Bluetooth advertising can send file formats like image files, ring tone files, vCard, barcodes, audio files, Java applications, mobile applications, video files, text files and theoretically any file format mobile devices can handle.
Bluetooth supports two types of communication: broadcasting and connection. Broadcasting does not require pairing (a connection).
The broadcaster sends data along with its ID, and any receiver can pick the data up by recognizing that ID. This is well suited to gaming, where one device has to continuously send its status to other devices.
Apple provides this feature through iBeacon, but developers can implement their own. This is straightforward on Android and slightly trickier on iOS, where scanning is handled by the CBCentralManager class and its didDiscoverPeripheral delegate callback, and manufacturer-specific payloads are read via the CBAdvertisementDataManufacturerDataKey; a cross-platform sketch follows.
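As an illustration of receiving such broadcasts, the following is a minimal sketch using the cross-platform Python library bleak (an assumption of this example; the paragraph above names only Apple's CoreBluetooth APIs, and bleak is not mentioned in the source). The detection_callback parameter and the manufacturer_data attribute are bleak's own API:

import asyncio
from bleak import BleakScanner

def on_advertisement(device, advertisement_data):
    # manufacturer_data maps a 16-bit company ID to the raw payload bytes;
    # this is the "ID" by which a receiver recognizes a broadcaster.
    for company_id, payload in advertisement_data.manufacturer_data.items():
        print(f"{device.address}: company 0x{company_id:04x}, data {payload.hex()}")

async def main():
    scanner = BleakScanner(detection_callback=on_advertisement)
    await scanner.start()
    await asyncio.sleep(10.0)   # listen for advertisements for ten seconds
    await scanner.stop()

if __name__ == "__main__":
    asyncio.run(main())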
Bluetooth advertising applications
Bluetooth advertising applications include:
Broadcast location-based coupons.
Contextual advertising.
Localized information.
Gaming and music.
Content on demand.
Specific and targeted campaign.
References
Advertising by medium
Bluetooth | Bluetooth advertising | Technology | 619 |
56,574,879 | https://en.wikipedia.org/wiki/Strabops | Strabops is a genus of strabopid, an extinct group of arthropods. Strabops is known from a single specimen from the Late Cambrian (Furongian age) of the Potosi Dolomite, Missouri, collected by a former professor, Arthur Thacher. It is classified in the family Strabopidae of the monotypic order Strabopida, a group closely related to the aglaspidids with uncertain affinities. The generic name is composed by the Ancient Greek words στραβός, meaning "squinting", and ὄψῐς, meaning "face" (and therefore, "squinting face").
The history of Strabops has been turbulent and confusing since its original description by Charles Emerson Beecher, who classified it as a eurypterid. Many authors do not agree with this and have classified Strabops and its allies as part of the order Aglaspidida, while others classify them in their own order. Although the latter is the taxonomic position currently accepted, other paleontologists prefer simply to omit the strabopids from their analyses due to the poor preservation of their fossils. In addition, it has been suggested that the closely related Paleomerus represents a synonym of Strabops, the two being differentiated only by the size of the telson (the posteriormost division of the body) and the position of the eyes.
Description
As the other strabopids, Strabops was a small-sized arthropod, measuring only 11 centimetres (4.3 inches) in length. However, it was the largest of the strabopids, surpassing Paleomerus (9.3 cm, 3.7 in) and Parapaleomerus (9.2 cm, 3.6 in).
Like some other arthropod groups, the strabopids possessed segmented bodies and jointed appendages (limbs) covered in a cuticle composed of proteins and chitin. The arthropod body is divided into two tagmata (sections); the anterior prosoma (head) and posterior opisthosoma (abdomen). The appendages were attached to the prosoma, and although they are unknown in strabopids (except for one undescribed specimen of Parapaleomerus), it is most likely they owned several pairs of them. Although the chemical composition of the strabopid exoskeleton is unknown, it was probably mineralized (with inorganic substances), sturdy and calcareous (containing calcium). The head of the strabopids was very short, the back was rounded and lacked trilobation (being divided into three lobes), the abdomen was composed by 11 segments and was followed by a thick tail-like spine, the telson.
In the genus Strabops, the prosoma was short and broad, with a rounded outline. The eyes were located in the middle of the front of the prosoma. These were medium-sized, ovate and narrow, and pointed obliquely inwards (hence the name Strabops). Two spots between the eyes indicate the presence of the ocelli (light-sensitive simple eyes). The abdomen comprised eleven segments, the third being the widest. The ends of the segments were rounded on the sides. In the posterior part of the segments, a row of tiny crenulations was visible. The first six segments were uniform in size, the three following ones were somewhat shorter and the last two were the longest. The telson was a broad, flat spine, and it rose slightly in the middle. The appendages are unknown, although it has been suggested that there were fewer than seven pairs (a figure that has been considered an overestimate).
Strabops differed only from Paleomerus in the position of the eyes, which were closer together and farther from the margin than in Paleomerus, and the size of the telson, being longer and narrower than in the latter.
History of research
Strabops is known from a single well-preserved specimen (YPM 9001, housed at the Peabody Museum of Natural History). It was found by Arthur Thacher, a former professor at Washington University in St. Louis, in the Potosi Dolomite of St. Francois County, Missouri. The specimen was sent to Yale University, Connecticut. The American paleontologist Charles Emerson Beecher described it as the only Cambrian eurypterid, Strabops thacheri, the generic name derived from the Ancient Greek words στραβός (strabós, squinting) and ὄψῐς (ópsis, face), referring to the inward-turning eyes. He considered Strabops different enough from the other eurypterids to erect a new genus, although he did not assign it to any family. In 1912, the American paleontologists John Mason Clarke and Rudolf Ruedemann assigned Strabops to the family Eurypteridae, as well as affirming the possession of a twelfth segment and changing the position of the eyes from anterolateral (in the middle of the front) to lateral.
In his 1939 book Cambrian Merostomata, the American paleontologist and geologist Gilbert Oscar Raasch considered the descriptions of the joint authorship of 1912 erroneous and agreed with Beecher on all aspects of his description, except for some reservations about the ocelli. Despite its initial classification as a eurypterid, Raasch noted that the description of Strabops matched the then-current conception of what an aglaspidid was. It is possible that Beecher was unaware of the similarity between it and Aglaspis because of the distorted illustration of the latter by the American zoologist and paleontologist Robert Parr Whitfield. Therefore, Raasch placed Strabops under the family Aglaspididae in the order Aglaspida.
In 1971, the Swedish geologist and paleontologist Jan Bergström tentatively removed Strabopidae (at that time containing Strabops and Neostrabops) and Paleomeridae (containing only Paleomerus) from the order Aglaspidida based on the fact that the head tagma was too short to accommodate the six pairs of appendages then assumed to be present in aglaspidids. Instead, he classified them in an uncertain order in the class Merostomoidea together with the emeraldellids. Ironically, Bergström speculated that the number of pairs of appendages present in the three genera could be fewer than seven, as well as including a possible antennal segment; this is now regarded as an overestimate. A study published by Derek Ernest Gilmor Briggs et al. in 1979 showed that Aglaspis spinifer had between four and five pairs of appendages, not six, weakening Bergström's argument.
In 1997, Bergström and Hou Xian-guang, a Chinese paleontologist, completely removed Strabopidae (recognizing Paleomeridae as a junior synonym), as well as the family Lemoneitidae (containing Lemoneites), from the order Aglaspidida to erect a new order, Strabopida, this time suggesting a number of no more than two pairs of appendages. However, this new clade (group) remained under the similarly-named Aglaspida subclass. A year later, the British paleontologists Jason Andrew Dunlop and Paul Antony Selden eliminated Strabopida from the suborder Aglaspidida and classified them as the sister taxa of the latter based on the lack of aglaspidid apomorphies (distinctive characteristics), such as the lack of genal spines (a spine placed in the posterolateral part of the prosoma). Other authors have reinforced this argument by the trapezoidal telson form of Paleomerus and Strabops in contrast to the long styliform telson of the aglaspidids. However, some authors prefer to represent the taxonomic position of the strabopids as uncertain due to the poor preservation of their fossils.
Classification
Strabops is classified in its own order, Strabopida, in the clade Arachnomorpha, along with Paleomerus, Parapaleomerus and potentially Khankaspis. It was described originally as the only Cambrian eurypterid, and later as an aglaspidid. It would not be until 1997 when the order Strabopida was described, but there is still doubt if the exclusion of them from Aglaspidida was really correct. The current status of the strabopids is of aglaspidid-like arthropods of uncertain affinities.
Strabops shares with the other strabopids a series of characteristics that distinguish them all from the other arthropods. These are an abdomen divided into eleven segments followed by a thick spine, the telson. The head was short with sessile compound eyes. The back was rounded. Like Paleomerus, Strabops possessed prominent dorsal eyes, however, there is no evidence of this in the fossils of Parapaleomerus.
The great similarity that Strabops and Paleomerus share has cast doubt on many authors about whether both genera are really synonymous or not. The Norwegian paleontologist and geologist Leif Størmer described Paleomerus as an intermediate form between Xiphosura (commonly known as horseshoe crabs) and Eurypterida, only highlighting a unique feature different from Strabops, a twelfth segment. Nevertheless, a fourth specimen found in Sweden has shown that this extra segment actually represented the telson of the animal, making them virtually indistinguishable. Although this should convert both genera into synonyms, over time, more differences have been highlighted, such as the position of the eyes (closer to each other and farther from the margin in Strabops than in Paleomerus) and the size of the telson (longer and narrower in Strabops than in Paleomerus), which keeps them as separate but closely related genera.
The cladogram below published by Jason A. Dunlop and Paul A. Selden (1998) is based on the major chelicerate groups (in bold, Aglaspida, Eurypterida and Xiphosurida, Scorpiones and other arachnid clades) and their outgroup taxa (used as a reference group). Strabops and Paleomerus are shown as the sister taxa of Aglaspida.
Note that there are several outdated elements. For example, Lemoneites was remitted to the Glyptocystitida order of echinoderms in 2005.
Paleoecology
The type and only known specimen of Strabops has been found in Furongian (Upper Cambrian) deposits in eastern Missouri. Strabops was at least an inhabitant of the sea, if not born in it. In addition, there are two specimens of the marine brachiopod Obolus lamborni and a poorly preserved trilobite head attached to the slab.
References
Strabops
Fossils of the United States
Fossil taxa described in 1901
Controversial taxa
Taxa named by Charles Emerson Beecher
Species known from a single specimen | Strabops | Biology | 2,394 |
14,476,022 | https://en.wikipedia.org/wiki/Pfeiffer%20effect | The Pfeiffer effect is an optical phenomenon whereby the presence of an optically active compound influences the optical rotation of a racemic mixture of a second compound.
Racemic mixtures do not rotate plane polarized light, but the equilibrium concentration of the two enantiomers can shift from unity in the presence of a strongly interacting chiral species. Paul Pfeiffer, a student of Alfred Werner and inventor of the salen ligand, reported this phenomenon. The first example of the effect is credited to Eligio Perucca, who observed optical rotations in the visible part of the spectrum when crystals of sodium chlorate, which are chiral and colourless, were stained with a racemic dye. The effect is attributed to the interaction of the optically pure substance with the second coordination sphere of the racemate.
References
Polarization (waves)
Stereochemistry
Transition metals
Coordination chemistry | Pfeiffer effect | Physics,Chemistry | 182 |
13,281,304 | https://en.wikipedia.org/wiki/V391%20Pegasi | V391 Pegasi, also catalogued as HS 2201+2610, is a blue-white subdwarf star approximately 4,400 light-years away in the constellation of Pegasus. The star is classified as an "extreme horizontal branch star". It is small, with only half the mass and a bit less than one quarter the diameter of the Sun. It has luminosity 34 times that of the Sun. It could be quite old, perhaps in excess of 10 Gyr. It is believed that the star's mass when it was still on the main sequence was between 0.8 and 0.9 times that of the Sun.
In 2001, Roy Østensen et al. announced that the star, then called HS 2201+2610, is a variable star. It was given its variable star designation, V391 Pegasi, in 2003. It is a pulsating variable star of the V361 Hydrae type (or also called sdBVr type).
Formation
Subdwarf B stars such as V391 Pegasi are thought to be the result of the ejection of the hydrogen envelope of a red giant star at or just before the onset of helium fusion. The ejection left only a tiny amount of hydrogen on the surface—less than 1/1000 of the total stellar mass. The star will eventually cool down to become a low-mass white dwarf. Most stars retain more of their hydrogen after the first red giant phase, and eventually become asymptotic giant branch stars. The reason that some stars, like V391 Pegasi, lose so much mass is not well understood. At the tip of the red-giant branch, the red giant precursors of the subdwarf stars reach their maximum radius, on the order of 0.7 AU. After this point, the hydrogen envelope is lost and helium fusion begins—this is known as the helium flash.
Hypothesized planetary system
In 2007, research using the variable star timing method indicated the presence of a gas giant planet orbiting V391 Pegasi. This planet was designated V391 Pegasi b. This planet around an "extreme horizontal branch" star provided clues about what could happen to the planets in the Solar System when the Sun turns into a red giant within the next 5 billion years.
However, subsequent research published in 2018, taking the large amount of new photometric time-series data amassed since the publication of the original data into account, found evidence both for and against the exoplanet's existence. Although the planet's existence was not disproved, the case for its existence was now certainly weaker, and the authors stated that it "requires confirmation with an independent method."
References
Sources
External links
B-type subdwarfs
Pegasus (constellation)
Very rapidly pulsating hot stars
Pegasi, V391 | V391 Pegasi | Astronomy | 594 |
43,070,760 | https://en.wikipedia.org/wiki/Lindsay%20Stringer | Lindsay C. Stringer is a Professor in Environment and Development at the University of York.
Stringer's research is interdisciplinary and uses theories and methods from both the natural and social sciences to understand human-environment relationships, feedbacks and trade-offs, examining the impacts for human wellbeing, equity and the environment.
Education
PhD in Geography, University of Sheffield, Department of Geography, 2005
MSc in Environmental Monitoring and Assessment in Drylands, University of Sheffield Department of Geography, 2001
BSc in Physical Geography, University of Sheffield Department of Geography, 2000
Career
Stringer has been involved in research on land, food, water, energy and climate change worth c.£42 million (total value) since 2005.
She chaired the Independent International Task Force for the Dryland Systems Programme of the Consultative Group on International Agricultural Research (CGIAR) from 2014 to 2016.
She was an Intergovernmental Panel on Climate Change (IPCC) lead author for the Special Report on Climate Change and Land Use.
She is currently IPCC lead author for the 6th Assessment Report (AR6) as well as Coordinating Lead Author for the IPCC AR6 cross-chapter paper on Deserts, Desertification and Semi-arid Areas.
She was Coordinating Lead Author for the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) Africa Regional Assessment, and Lead Author for the IPBES Land Degradation and Restoration Assessment.
Stringer is involved in the Economics of Land Degradation (ELD) Initiative, as well as being an Elected Steering Committee Member for DesertNet International.
She was competitively selected for the international Homeward Bound expedition to Antarctica: a women in climate science leadership programme in 2016.
She was Director of the Sustainability Research Institute (SRI) at the School of Earth and Environment, University of Leeds, UK from 2011–2014
She is the Director of the York Environmental Sustainability Institute (YESI). The centre was set up by the University of York to facilitate and deliver interdisciplinary research in environmental sustainability.
She is a member of the Centre for Dryland Agriculture’s International Scientific Advisory Board, Bayero University Kano
Prizes
Royal Society Wolfson Research Merit Award, 2017
Women of Achievement Award, 2015
Philip Leverhulme Prize for advancing sustainability in the world’s drylands, 2013
References
Living people
British geographers
Alumni of the University of Sheffield
Academics of the University of Leeds
Academics of the University of York
Environmental social scientists
People from Gravesend, Kent
Year of birth missing (living people) | Lindsay Stringer | Environmental_science | 498 |
937,514 | https://en.wikipedia.org/wiki/SURAnet | SURAnet was a pioneer in scientific computer networks and one of the regional backbone computer networks that made up the National Science Foundation Network (NSFNET). Many later Internet communications standards and protocols were developed by SURAnet.
How SURAnet started
The Southeastern Universities Research Association was created in December 1980 by scientists and university administrators throughout the southeastern United States, primarily led by the University of Virginia, the College of William & Mary, and the University of Maryland, College Park. The chief goal of SURA was the development of a particle accelerator for research in nuclear physics; this facility is now known as the Thomas Jefferson National Accelerator Facility. By the mid-1980s it was clear that access to high-capacity computer resources would be needed to facilitate collaboration among the SURA member institutions. A high-performance network to provide this access was essential, but no single institution could afford to develop such a system. SURA itself stepped up to the challenge and, with support from the U.S. National Science Foundation (NSF) and SURA universities, had SURAnet up and running in 1987 as part of the first phase of National Science Foundation Network (NSFNET) funding, as the agency built a network to facilitate scientific collaboration. SURAnet was one of the first and one of the largest Internet providers in the United States. SURA sites first used a 56 kbit/s backbone in 1987, which was upgraded to 1.5 Mbit/s (DS1) in 1989 and to 45 Mbit/s (DS3) in 1991. FIX East and MAE-East, both major peering points, were located at the main SURA facilities. Large-scale collaboration among SURA-affiliated scientists became an everyday reality.
Role of SURAnet in the development of the Internet
SURAnet participated in the development of Internet communications standards and telecommunications protocols that enabled researchers and federal agencies to communicate and work in this early Internet environment. SURAnet was one of the first NSFNET regional networks to become operational. SURAnet provided networking services for universities and industry, and was one of the first TCP/IP networks to sell commercial connections, when IBM Research in Raleigh-Durham, North Carolina was connected in 1987–1988. It was also the first network to attempt to convert to OSPF in 1990.
Beyond SURAnet
SURAnet was so successful that it outgrew SURA's primary mission, and the SURA Board approved its sale to Bolt, Beranek and Newman in 1995.
Many of the protocols and procedures created under SURAnet are still in use in the commercial Internet today. SURA continues to be a force in the information technology community, participating in projects such as the Extreme Science and Engineering Discovery Environment (XSEDE), Earthcube, and AtlanticWave.
References
Computer networking | SURAnet | Technology,Engineering | 578 |
8,511,009 | https://en.wikipedia.org/wiki/Pharyngeal%20artery | The pharyngeal artery is a branch of the ascending pharyngeal artery. The pharyngeal artery passes inferior-ward in between the superior margin of the superior pharyngeal constrictor muscle, and the levator veli palatini muscle. It issues branches to the constrictor muscles of the pharynx, the stylopharyngeus muscle, the pharyngotympanic tube, and palatine tonsil; a palatine branch may sometimes be present, replacing the ascending palatine branch of facial artery.
References
Physiology
Anatomy
Arteries | Pharyngeal artery | Biology | 121 |
12,040,404 | https://en.wikipedia.org/wiki/Diurnal%20air%20temperature%20variation | In meteorology, diurnal temperature variation is the variation between a high air temperature and a low temperature that occurs during the same day.
Temperature lag
Temperature lag, also known as thermal inertia, is an important factor in diurnal temperature variation. Peak daily temperature generally occurs after noon, as air keeps absorbing net heat for a period of time from morning through noon and some time thereafter. Similarly, minimum daily temperature generally occurs substantially after midnight, indeed occurring during early morning in the hour around dawn, since heat is lost all night long. The analogous annual phenomenon is seasonal lag.
As solar energy strikes the Earth's surface each morning, a shallow layer of air directly above the ground is heated by conduction. Heat exchange between this shallow layer of warm air and the cooler air above is very inefficient. On a warm summer's day, for example, air temperatures may vary by from just above the ground to chest height. Incoming solar radiation exceeds outgoing heat energy for many hours after noon and equilibrium is usually reached from 3–5 p.m., but this may be affected by a variety of factors such as large bodies of water, soil type and cover, wind, cloud cover/water vapor, and moisture on the ground.
Differences in variation
Diurnal temperature variations are greatest very near Earth's surface. The Tibetan and Andean Plateaus present one of the largest differences in daily temperature on the planet, as does the Western US and the western portion of southern Africa.
High desert regions typically have the greatest diurnal-temperature variations, while low-lying humid areas near the shores (tropical, oceanic, and arctic) typically have the least. Large cities (urban heat islands) also tend to have a lower diurnal temperature variation than surrounding areas. This explains why an area like Pinnacles National Park can have high temperatures of during a summer day, and then have lows of . At the same time, Washington D.C., which is much more humid, has temperature variations of only ;
urban Hong Kong has a diurnal temperature range of little more than .
While the National Park Service claimed that the world single-day record is a variation of (from to ) in Browning, Montana in 1916, the Montana Department of Environmental Quality claimed that Loma, Montana also had a variation of (from to ) in 1972. Both these extreme daily temperature changes were the result of sharp air-mass changes within a single day. The 1916 event was an extreme temperature drop, resulting from frigid Arctic air from Canada invading northern Montana, displacing a much warmer air mass. The 1972 event was a chinook event, where air from the Pacific Ocean overtopped mountain ranges to the west, and dramatically warmed in its descent into Montana, displacing frigid Arctic air and causing a drastic temperature rise.
In the absence of such extreme air-mass changes, diurnal temperature variations typically range from or smaller in humid, tropical areas, up to in higher-elevation, arid to semi-arid areas, such as parts of the U.S. Western states' Intermountain Plateau areas, for example Elko, Nevada, Ashton, Idaho and Burns, Oregon. The higher the humidity is, the lower the diurnal temperature variation is.
In Europe, due to its more northern latitude and close proximity to large warm water bodies (such as the Mediterranean), differences in daily temperature are not as pronounced as in other continents. However, places in Southern Europe significantly far from the Mediterranean tend to have high differences in daily temperatures, some around . These include Southwestern Iberia (e.g. Alvega or Badajoz) or the high-altitude plateaus of Turkey (if considered part of Europe) (e.g. Kayseri).
In Australia, significant diurnal temperature variations generally occur in the Red Centre around Alice Springs and Uluru.
Viticulture
Diurnal temperature variation is of particular importance in viticulture. Wine regions situated in areas of high altitude experience the most dramatic swing in temperature variation during the course of a day. In grapes, this variation has the effect of producing high acid and high sugar content as the grapes' exposure to sunlight increases the ripening qualities while the sudden drop in temperature at night preserves the balance of natural acids in the grape.
See also
Diurnal cycle
References
Meteorological phenomena
Daily events
Atmospheric temperature
2,910,814 | https://en.wikipedia.org/wiki/National%20Physical%20Laboratory%20of%20India | The CSIR- National Physical Laboratory of India, situated in New Delhi, is the measurement standards laboratory of India. It maintains standards of SI units in India and calibrates the national standards of weights and measures.
History of measurement systems in India
In the Harappan era, which is nearly 5000 years old, one finds excellent examples of town planning and architecture. The sizes of the bricks were the same all over the region. In the time of Chandragupta Maurya, some 2400 years ago, there was a well-defined system of weights and measures. The government of that time ensured that everybody used the same system. In the Indian medical system, Ayurveda, the units of mass and volume were well defined.
During the time of the Mughal emperor Akbar, the guz was the measure of length. The guz was widely used till the introduction of the metric system in India in 1956. During the British period, efforts were made to achieve uniformity in weights and measures. A compromise was reached in the system of measurements which continued till India's independence in 1947. After independence, it was realized that for fast industrial growth of the country, it would be necessary to establish a modern measurement system. The Lok Sabha in April 1955 resolved: "This house is of the opinion that the Government of India should take necessary steps to introduce uniform weights and measures throughout the country based on the metric system."
Key Functions of NPL:
Maintaining SI Units: NPL establishes and maintains the Indian standards for the International System of Units (SI), which includes units like meter, kilogram, second, ampere, kelvin, candela, and mole.
Calibrating National Standards: NPL calibrates the national standards of weights and measures to ensure their accuracy and traceability to international standards.
Conducting Research: NPL conducts research in various fields of physics, including metrology, materials science, and nanotechnology.
Providing Calibration and Testing Services: NPL offers calibration and testing services to industries and other organizations to help them maintain product quality and comply with regulatory standards.
Disseminating Time and Frequency: NPL provides accurate time and frequency signals to various users through satellite, radio, and television broadcasts.
History of the National Physical Laboratory, India
The National Physical Laboratory, India was one of the earliest national laboratories set up under the Council of Scientific & Industrial Research. Jawaharlal Nehru laid the foundation stone of NPL on 4 January 1947. Dr. K. S. Krishnan was the first Director of the laboratory. The main building of the laboratory was formally opened by Former Deputy Prime Minister Sardar Vallabhbhai Patel on 21 January 1950. Former Prime Minister Indira Gandhi, inaugurated the Silver Jubilee Celebration of the Laboratory on 23 December 1975.
NPL Charter
The main aim of the laboratory is to strengthen and advance physics-based research and development for the overall development of science and technology in the country. In particular its objectives are:
To establish, maintain and improve continuously by research, for the benefit of the nation, National Standards of Measurements and to realize the Units based on International System (Under the subordinate Legislations of Weights and Measures Act 1956, reissued in 1988 under the 1976 Act).
To identify and conduct, after due consideration, research in areas of physics which are most appropriate to the needs of the nation and for the advancement of the field
To assist industries, national and other agencies in their developmental tasks by precision measurements, calibration, development of devices, processes, and other allied problems related to physics.
To keep itself informed of and study critically the status of physics.
In 1957, India became member of the General Conference of Weight and Measures (CGPM), BIPM, an International Intergovernmental organization constituted by diplomatic treaty, i.e. ‘The Metre Convention’. Being NMI of India and to fulfil the mandate, Dr. K. S. Krishnan, the then Director, CSIR-NPL signed the ‘Metre Convention’ on behalf of Government of India. In 1958, BIPM provided CSIR-NPL the Copies No. 57 (NPK) and No. 4 of International Prototypes of the Kilogram (IPK) and the platinum-iridium (Pt–Ir) Metre bar, respectively, to realize the SI base units ‘kilogram’ and ‘metre’. This was the milestone in the foundation of quality infrastructure in independent India.
In 1960, when the metric system was officially adopted as the basis for SI units, the number of base units being maintained at the NPL increased. However, in 1963, on the recommendation of Nobel Laureate P.M.S. Blackett, these groups were brought together under a single umbrella. The objective was to bring greater coordination between the various groups and to give the standards activity a programme-based approach on a bigger scale, enabling the Laboratory to play its role more effectively. Other physical standards in the form of standard cells, standard resistance coils, standard lamps, etc. were acquired, and calibration and testing work was started in these areas also. It has since been maintaining six SI base units; namely, metre (for length), kilogram (for mass), second (for time), kelvin (for temperature), ampere (for current) and candela (for luminous intensity).
Maintenance of standards of measurements in India
Each modernized country, including India has a National Metrological Institute (NMI), which maintains the standards of measurements. This responsibility has been given to the National Physical Laboratory, New Delhi.
Metre
The standard unit of length, the metre, is realized by employing a stabilized helium-neon laser as a source of light. Its frequency is measured experimentally. From this value of the frequency f and the internationally accepted value of the speed of light c, the wavelength is determined using the relation λ = c / f.
The nominal value of the wavelength employed at NPL is 633 nanometres. With a sophisticated instrument known as an optical interferometer, any length can be measured in terms of the wavelength of laser light.
The present level of uncertainty attained at NPL in length measurements is ±3 × 10−9. However, in most measurements, an uncertainty of ±1 × 10−6 is adequate.
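As a worked example of the relation above, here is a short Python sketch; the 473.6127 THz value is the standard frequency of the iodine-stabilised helium-neon laser, assumed here for illustration rather than quoted from this article:

c = 299_792_458   # speed of light in m/s (exact, by definition)
f = 473.6127e12   # He-Ne laser frequency in Hz (assumed standard figure)

wavelength = c / f   # lambda = c / f, in metres
print(f"wavelength = {wavelength * 1e9:.3f} nm")   # ~632.991 nm, i.e. the nominal 633 nm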
Kilogramme
The Indian national standard of mass, kilogramme, is copy number 57 of the international prototype of the kilogram supplied by the International Bureau of Weights and Measures (BIPM: French – Bureau International des Poids et Mesures), Paris. This is a platinum-iridium cylinder whose mass is measured against the international prototype at BIPM. The NPL also maintains a group of transfer standard kilograms made of non-magnetic stainless steel and nickel-chromium alloy.
The uncertainty in mass measurements at NPL is ±4.6 × 10−9.
Second
The national standard of time interval, the second, as well as of frequency, is maintained through four parameters, which can be measured most accurately. Therefore, attempts are made to link other physical quantities to time and frequency. The standard maintained at NPL has to be linked to different users. This process, known as dissemination, is carried out in a number of ways. For applications requiring low levels of uncertainty, there is a satellite-based dissemination service, which utilizes the Indian national satellite, INSAT. Time is also disseminated through TV, radio, and special telephone services. The caesium atomic clocks maintained at NPL are linked to other such institutes all over the world through a set of global positioning satellites.
Ampere
The unit of electric current, ampere, is realized at NPL by measuring the volt and the ohm separately.
The uncertainty in measurement of ampere is ± 1 × 10−6.
Kelvin
The standard of temperature is based on the International Temperature Scale of 1990 (ITS-90), which rests on temperatures assigned to several fixed points. One of the most fundamental of these is the triple point of water. At this temperature, ice, water and steam are in equilibrium with each other. This temperature has been assigned the value of 273.16 kelvins. It can be realized, maintained and measured in the laboratory. At present, temperature standards maintained at NPL cover a range of 54 to 2,473 kelvins.
The uncertainty in its measure is ± 2.5 × 10−4.
Candela
The unit of luminous intensity, candela, is realized by using an absolute radiometer. For practical work, a group of tungsten incandescent lamps are used.
The level of uncertainty is ± 1.3 × 10−2.
Mole
Experimental work has been initiated to realize the mole, the SI unit for amount of substance.
Radiation
The NPL does not maintain standards of measurements for ionizing radiations. This is the responsibility of the Bhabha Atomic Research Centre, Mumbai.
Calibrator of weights and measures
The standards maintained at NPL are periodically compared with standards maintained at other National Metrological Institutes in the world as well as the BIPM in Paris. This exercise ensures that Indian national standards are equivalent to those of the rest of the world.
Any measurement made in a country should be directly or indirectly linked to the national standards of the country, For this purpose, a chain of laboratories are set up in different states of the country. The weights and measures used in daily life are tested in the laboratories and certified. It is the responsibility of the NPL to calibrate the measurement standards in these laboratories at different levels. In this manner, the measurements made in any part of the country are linked to the national standards and through them to the international standards.
The weights and balances used in local markets and other areas are expected to be certified by the Department of Weights and Measures of the local government. Working standards of these local departments should, in turn, be calibrated against the state level standards or any other laboratory which is entitled to do so. The state level laboratories are required to get their standards calibrated from the NPL at the national level which is equivalent to the international standards.
Bharatiya Nirdeshak Dravya (BND) or Indian Reference Materials
Bharatiya Nirdeshak Dravya (BND) or Indian reference materials are reference materials developed by NPL which derive their traceability from National Standards.
Research programs
NPL is also involved in research. One of the important research activities undertaken by NPL is to devise the chemical formula for the indelible ink which is being used in the Indian elections to prevent fraudulent voting. This ink, manufactured by the Mysore Paints and Varnish Limited is applied on the finger nail of the voter as an indicator that the voter has already cast his vote.
NPL also has a section working on the development of biosensors. The section is currently headed by Dr. C. Sharma and focuses primarily on the development of sensors for cholesterol measurement and on microfluidic-based biosensors. It is also developing biosensors for uric acid detection.
India’s polar research program
During the 28th Indian Scientific Expedition to Antarctica (ISEA) (2008–2009), CSIR-NPL established a state-of-the-art Indian Polar Space Physics Laboratory (IPSPL) at the Indian permanent research base Maitri (70°46′ S, 11°43′ E), Antarctica, on the occasion of the International Polar Year (IPY), for continuous and real-time monitoring of the high-latitude ionosphere, addressing the scientific interest in high-latitude ionospheric consequences caused by the modulation of near-Earth space environmental conditions. In 2011 CSIR-NPL provided leadership to the Antarctic expedition to India's newly constructed third permanent scientific base, "Bharati" (69°24′ S, 76°11′ E), to test and validate its facilities during extreme winter conditions.
CSIR-NPL is also part of India's Arctic expeditions. Himadri is India's first permanent Arctic research station, located at the International Arctic Research base, Ny-Ålesund at Spitsbergen, Svalbard, Norway. It was set up during India's second Arctic expedition in June 2008. It is located 1200 km from the North Pole.
NPL's contributions
The indelible mark/ink
During a general election, nearly 40 million people wear a CSIR mark on their fingers. The indelible ink used to mark the fingernail of a voter during general elections is a time-tested gift of CSIR to the spirit of democracy. Developed in 1952, it was first produced on campus. Subsequently, industry has been manufacturing the ink. It is also exported to Sri Lanka, Indonesia, Turkey and other democracies.
Pristine air-quality monitoring station at Palampur
National Physical Laboratory (NPL) has established an atmospheric monitoring station in the campus of Institute of Himalayan Bioresource Technology (IHBT) at Palampur (H.P.) at an altitude of 1391 m for generating the base data for atmospheric trace species & properties to serve as reference for comparison of polluted atmosphere in India. At this station, NPL has installed state of art air monitoring system, greenhouse gas measurement system and Raman Lidar. A number of parameters like CO, NO, , , , , PM, HC & BC besides & are being currently monitored at this station which is also equipped with weather station (AWS) for measurement of weather parameters.
Gold standard (BND-4201)
The BND-4201 is first Indian reference material for gold of ‘9999’ fineness (gold that is 99.99% pure with impurities of only 100 parts-per-million).
Honors and awards bestowed upon CSIR-NPL Staff
Padma Bhushan
Dr. K.S. Krishnan - 1954
Dr. A.R. Verma – 1982
Dr. A.P. Mitra - 1989
Dr. S.K. Joshi - 2003
Padma Shri
Dr. S.K. Joshi – 1991
Shanti Swarup Bhatnagar Prize
Dr. K.S. Krishnan - 1958
Dr. A.P. Mitra – 1968
Dr. Vinay Gupta - 2017
Other awards
Contributors to the Nobel Peace Prize-winning Intergovernmental Panel on Climate Change (IPCC) team: Dr. A.P. Mitra & Dr. Chhemmendra Sharma – 2007
See also
National Institute of Standards and Technology in the United States
National Physical Laboratory (United Kingdom)
Versailles project on advanced materials and standards
References
External links
Bureau International des Poids et Mesures (BIPM)
Science and technology in India
Council of Scientific and Industrial Research
India
Organisations based in Delhi
Research institutes in Delhi
Research institutes established in 1947
1947 establishments in India
Standards organisations in India | National Physical Laboratory of India | Mathematics | 2,995 |
753,099 | https://en.wikipedia.org/wiki/TSX-Plus | TSX-Plus is a multi-user operating system for the PDP-11/LSI-11 series of computers. It was developed by S&H Computer Systems, Inc. and is based on DEC's RT-11 single-user real-time operating system (TSX-Plus installs on top of RT-11).
Overview
The system is highly configurable and tunable.
Due to the constraints of the memory management system in the PDP-11/LSI-11, the entire operating system core must occupy no more than 40 kibibytes of memory, out of a maximum possible 4 mebibytes of physical memory that can actually be installed in those machines (mandated by the 22-bit address space). The strength of TSX-Plus is to simultaneously provide the services of DEC's single-user RT-11 to multiple users. Depending on the PDP-11 model and the amount of memory, the system could support at least 12 users (14–18 users on a 2 MB 11/73, depending on workload). A productivity feature called "virtual lines" "allows a single user to control several tasks from a single terminal."
The software included a word-processing package named Lex-11 and a spreadsheet from Saturn Software. The machine slowed considerably if more than 8 students used the word-processing package at the same time. There was also a decision-table language called "D" from the NCC in Manchester which worked very well on TSX-Plus.
History
Released in 1980, TSX-Plus was the successor to TSX, released in 1976. The system was popular in the 1980s. The last version of TSX-Plus had TCP/IP support.
S&H wrote the original TSX because, in the words of founder Harry Sanders, "Spending $25K on a computer that could only support one user bugged" him; the outcome was the initial four-user TSX in 1976.
Bootstrapping
TSX-Plus required bootstrapping RT-11 first before running TSX-Plus as a user program. Once TSX-Plus was running, it would take over complete control of the machine from RT-11. It provided true memory protection for users from other users, provided user accounts and maintained account separation on disk volumes and implemented a superset of the RT-11 EMT programmed requests. RT-11 programs generally ran, unmodified, under TSX-Plus and, in fact, most of the RT-11 utilities were used as-is under TSX-Plus. Device drivers generally required only slight modifications.
See also
TSX-32
References
External links
S&H Homepage
Proprietary operating systems
PDP-11 | TSX-Plus | Technology | 559 |
28,458,032 | https://en.wikipedia.org/wiki/Konrad%20Bleuler | Konrad Bleuler (; 23 September 1912, Herzogenbuchsee – 1 January 1992, Königswinter) was a Swiss physicist who worked in the field of theoretical particle physics and quantum field theory. He is known for his work on the quantisation of the photon, the Gupta–Bleuler formalism.
Education and career
Bleuler was born in Herzogenbuchsee, Switzerland on 23 September 1912. He received his doctorate for the mathematical work titled "On the Rolle's theorem for the operator Δu + λu and related properties of the Green's function" from the ETH Zurich in 1942. His thesis advisor was Michel Plancherel. From 1960 to 1980 he was a professor at the University of Bonn, where he founded the Institute of Theoretical Nuclear Physics, which is now the Helmholtz Institute for Radiation and Nuclear Physics. Even after his retirement, he was active and remained there until his death.
In 1971, Bleuler organized the first "International Conference on Differential Geometric Methods in Theoretical Physics", and thereafter he organized the conference regularly; the last he organized was the 19th Conference, held at Rapallo in 1990. In 1993, at the 22nd Conference, a "Bleuler Medal" was awarded in his honor.
Scientific works
Bleuler's most notable contribution was the introduction of the Gupta–Bleuler formalism for the quantization of the electromagnetic field, which he developed independently of Suraj N. Gupta. This was an important contribution to quantum electrodynamics; its defining condition is sketched below. Bleuler also made contributions to nuclear and particle physics, and he wrote about the work of other famous scientists, such as Wolfgang Pauli and Rolf Nevanlinna.
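For reference, the subsidiary condition of the formalism can be stated in standard textbook form (a summary from general quantum field theory accounts, not a quotation from sources on Bleuler himself): physical states are those annihilated by the positive-frequency part of the Lorenz gauge operator,

\partial^{\mu} A_{\mu}^{(+)}(x)\,\lvert\Psi\rangle = 0 ,

so that the Lorenz condition \partial^{\mu} A_{\mu} = 0 holds only as an expectation value between physical states, not as an operator identity.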
References
External links
Oral history interview transcript with Konrad Bleuler on 20 January 1984, American Institute of Physics, Niels Bohr Library and Archives - interview conducted by Lillian Hoddeson in Hischegg, Austria
Dedicated to Bleuler
1912 births
1992 deaths
People from Oberaargau District
Swiss physicists
Swiss nuclear physicists
Quantum physicists
ETH Zurich alumni
Academic staff of the University of Bonn | Konrad Bleuler | Physics | 438 |
72,566,597 | https://en.wikipedia.org/wiki/Relation%20of%20degree%20zero | A relation of degree zero, 0-ary relation, or nullary relation is a relation with zero attributes. There are exactly two relations of degree zero. One has cardinality zero; that is, contains no tuples at all. The other has cardinality 1 and contains only the unique 0-tuple.:56
The zero-degree relations represent true and false in relational algebra.:57 Under the closed-world assumption, an n-ary relation is interpreted as the extension of some n-adic predicate: all and only those n-tuples whose values, substituted for corresponding free variables in the predicate, yield propositions that hold true, appear in the relation. A zero-degree relation is therefore interpreted as the extension of the 0-adic predicate . The zero-degree relation with cardinality zero therefore represents false because it contains no tuples that yield a true proposition, and the zero-degree relation with cardinality 1 represents true because it contains the unique 0-tuple that yields a true proposition.
The zero-degree relations are also significant as identities for certain operators in the relational algebra. The zero-degree relation of cardinality 1 is the identity with respect to join (⋈); that is, when it is joined with any other relation , the result is . Defining an identity with respect to join makes it possible to extend the binary join operator into an n-ary join operator.:89
Since the relational Cartesian product is a special case of join, the zero-degree relation of cardinality 1 is also the identity with respect to the Cartesian product.:89
A projection of a relation over no attributes yields one of the relations of degree zero. If the projected relation has cardinality 0, the projection will have cardinality 0; if the projected relation has positive cardinality, the result will have cardinality 1.
Hugh Darwen refers to the zero-degree relation with cardinality 0 as TABLE_DUM and the relation with cardinality 1 as TABLE_DEE, alluding to the characters Tweedledum and Tweedledee.
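These identities are easy to demonstrate concretely. Below is a minimal Python sketch; the set-of-frozensets representation is this example's own choice, and only the names TABLE_DUM and TABLE_DEE follow Darwen's usage:

# A relation is a frozenset of tuples; each tuple is a frozenset of
# (attribute, value) pairs, so the unique 0-tuple is the empty frozenset.
TABLE_DUM = frozenset()                # degree 0, cardinality 0: "false"
TABLE_DEE = frozenset({frozenset()})   # degree 0, cardinality 1: "true"

def join(r, s):
    # Natural join: merge every pair of tuples that agree on all shared attributes.
    out = set()
    for t1 in r:
        for t2 in s:
            d1, d2 = dict(t1), dict(t2)
            if all(d1[a] == d2[a] for a in d1.keys() & d2.keys()):
                out.add(frozenset({**d1, **d2}.items()))
    return frozenset(out)

r = frozenset({frozenset({"city": "Oslo", "country": "Norway"}.items())})
assert join(r, TABLE_DEE) == r          # TABLE_DEE is the identity for join
assert join(r, TABLE_DUM) == TABLE_DUM  # joining with "false" yields "false"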
See also
Empty set
Identity element
Relational algebra
References
Relational algebra | Relation of degree zero | Mathematics | 437 |
4,142,269 | https://en.wikipedia.org/wiki/Potassium%20hydrogen%20phthalate | Potassium hydrogen phthalate, often called simply KHP, is an acidic salt compound. It forms white powder, colorless crystals, a colorless solution, and an ionic solid that is the monopotassium salt of phthalic acid. KHP is slightly acidic, and it is often used as a primary standard for acid–base titrations because it is solid and air-stable, making it easy to weigh accurately. It is not hygroscopic. It is also used as a primary standard for calibrating pH meters because, besides the properties just mentioned, its pH in solution is very stable. It also serves as a thermal standard in thermogravimetric analysis.
KHP dissociates completely in water, giving the potassium cation (K+) and hydrogen phthalate anion (HP− or Hphthalate−)
KHP → K+ + HP− (in water)
and then, acting as a weak acid, hydrogen phthalate reacts reversibly with water to give hydronium (H3O+) and phthalate ions.
HP− + H2O ⇌ P2− + H3O+
KHP can be used as a buffering agent in combination with hydrochloric acid (HCl) or sodium hydroxide (NaOH). The buffering region is dependent upon the pKa, and is typically +/- 1.0 pH units of the pKa. The pKa of KHP is 5.4, so its pH buffering range would be 4.4 to 6.4; however, due to the presence of the second acidic group that bears the potassium ion, the first pKa also contributes to the buffering range well below pH 4.0, which is why KHP is a good choice for use as a reference standard for pH 4.00.
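The quoted buffering range can be reproduced with the Henderson–Hasselbalch equation, pH = pKa + log10([base]/[acid]). A minimal Python sketch follows; the pKa of 5.4 is taken from the text, while the concentration ratios are illustrative assumptions:

import math

PKA_KHP = 5.4   # second dissociation of phthalic acid, per the text

def buffer_ph(base_conc, acid_conc, pka=PKA_KHP):
    # pH of a phthalate (base) / hydrogen phthalate (acid) buffer mixture
    return pka + math.log10(base_conc / acid_conc)

# The +/- 1.0 pH buffering range corresponds to base:acid ratios of 10:1 and 1:10.
print(buffer_ph(0.10, 0.01))   # ~6.4, upper edge of the range
print(buffer_ph(0.01, 0.10))   # ~4.4, lower edge of the range
print(buffer_ph(0.05, 0.05))   # 5.4: equal concentrations give pH = pKa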
KHP is also a useful standard for total organic carbon (TOC) testing. Most TOC analyzers are based on the oxidation of organics to carbon dioxide and water, with subsequent quantitation of the carbon dioxide. Many TOC analysts suggest testing their instruments with two standards: one typically easy for the instrument to oxidize (KHP), and one more difficult to oxidize. For the latter, benzoquinone is suggested.
References
Carboxylic acids
Phthalates
Potassium compounds | Potassium hydrogen phthalate | Chemistry | 497 |
76,883,568 | https://en.wikipedia.org/wiki/Belomorite | Belomorite ( — from the toponym), sometimes peristerite or moonstone, also murchisonite, Ceylon opal, hecatolite — a decorative variety of albite (oligoclase) of white or light gray color with a distinct iridescence effect. By composition, belomorite belongs to the feldspar family; it is a sodium aluminosilicate from the plagioclase group, in most cases belonging to the isomorphic series albite (Ab) — anorthite (An) with an approximate percentage of 70Ab-30An.
The name “belomorite” was given to this variety of albite by academician Alexander Fersman in 1925, based on the location of its discovery near the shore of the White Sea, and also by association — for the similarity of iridescence colors with the shades of sea water. The best varieties of belomorite are translucent or transparent, they have a pearl-glass luster and iridescence in blue, gray-blue, violet-blue, greenish-blue or pale violet tones. The most famous deposits of this gem are in the north, in the pegmatites of the Kola Peninsula and Karelia.
Belomorite is a spectacular and popular jewelry and ornamental material, one of the varieties of moonstone. However, due to its fragility and perfect cleavage, the mineral often breaks and is difficult to process, so it is cut in the form of simple cabochons (oval, round, teardrop-shaped), as well as balls or polished plates.
History and name
A decade and a half after the discovery of this variety of albite near the White Sea coast, Alexander Fersman described the history of his "find" in considerable detail in a short lyrical essay entitled "Belomorite". Together with a companion, he got off the train at a station in the Loukhsky District of the Republic of Karelia, and they set off together towards the "Blue Pale" — that was the name of the mined-out vein of feldspars, located in the middle of a swampy area, between hills (in Karelian, varaks), almost on the very shore of the White Sea, about six kilometers east of the station.
There, in an old working, among dark amphibole shales, there was a snow-white vein of albite at least ten meters long, it rose to the top of the neighboring hill and went with lateral branches into the dark stone of shale rocks. Alexander Fersman sat down near a stack of feldspar, folded for transportation, looked at it carefully and, as he writes, could no longer look away, — in front of him was “a white, barely bluish stone, barely translucent, barely transparent, but clean and even, like a well-ironed tablecloth.”
The stone was split along individual shiny surfaces, and some mysterious light played on these edges. These were gentle bluish-green, barely noticeable iridescences, only occasionally they flashed with a reddish light, but usually a continuous mysterious moonlight flooded the entire stone, and this light came from somewhere from the depths of the stone — well, just like the Black Sea burns with blue light in autumn evenings near Sevastopol.
The delicate pattern of the stone from some thin stripes crossed it in several directions, as if imposing a mysterious lattice on the rays emanating from the depths. I collected, selected, admired and again turned the moonstone towards the sun.
— Alexander Fersman, “Memories of a Stone”, 1940
The stone found in an old mine was called “belomorite” — because, as Fersman explains, “The White Sea shimmered with the colors of moonstone... or did the stone reflect the pale blue depths of the White Sea?..” — Geologists took several samples to the , recommending it as a new jewelry stone.
Meanwhile, the authors of the “new mineral” did not insist that they had made some kind of mineralogical discovery, noting that the decorative variety of stone received from them a new poetic name or even a trademark. On the one hand, Fersman directly writes that belomorite “was not born there, it was we who invented it there”; and on the other hand, he calls this variety of feldspar “true moonstone”.
It must be remembered that the coast, and more broadly the environs, of the White Sea are by no means the only place where Fersman's belomorite, which belongs to perhaps the most common class of rock-forming minerals on Earth, reveals itself. Deposits of this type of plagioclase, most often associated with mica-bearing and ceramic pegmatites, are located in other places in North Karelia (the vicinity of , Mica Bor), as well as in the south of the Kola Peninsula.
Another name for iridescent feldspar — peristerite — also comes from its geographical name (Peristeri — a mountain in western Greece). Other regional synonyms of belomorite are used even less frequently — murchisonite, Ceylon opal and jarisol, associated with the sites of finds. There are also names like hecatolite associated with the crystallographic orientation of the iridescence. All of them, one way or another, can be attributed to local territorial or commercial brands, one way or another connected with the trade in jewelry or ornaments made from iridescent albite.
Properties
The reasons for the iridescence of belomorite have repeatedly been the subject of study, with generally consistent conclusions. Most researchers agreed that the bluish or greenish glow is associated with specific defects in the layered structure of the mineral. Alexander Fersman notes in his memoirs that the “mysterious light” emanating from the depths of the stone played precisely “on individual shiny surfaces,” lines of cleavage or fracture, naturally passing along the boundaries of the perfect cleavage of belomorite. This effect also appears much stronger and brighter after polishing the mineral.
The perfect cleavage of belomorite (peristerite) is manifested in its structure. The mineral consists of the thinnest (parallel) plates, almost invisible to the naked eye. Light reflected from the internal cleavage planes is refracted many times, which produces the spectacular play of color. These properties of belomorite are by no means unique; their manifestations are characteristic of many minerals, collectively called "moonstones". Other iridescent plagioclases, such as many labradorites, also show a narrowly directed colored iridescent reflection (within an approximate range of 15–20°, approximately along the b axis), most often in beautiful blue and dark blue, less often in green, yellow or even reddish tones. Iridescence is a special type of pseudochromatism caused by the interference of light on spinodal decomposition structures commensurate with its wavelength. A similar effect is also occasionally typical of some potassium feldspars (orthoclases), of anthophyllite, and quite often of enstatite and bronzite.
The characteristic optical effect of moonstones is most often called iridescence. However, the nature of their colored glow has a special specificity; it would be more accurate to call it adularescence (from the name of the titular mineral: adularia, or moonstone). This effect is associated with the scattering of white light by very small submicroscopic defects in the structure of the stone, such as microperthite ingrowths, thin cleavage plates or spatial fluctuations in the internal composition. According to Rayleigh scattering theory, short-wave radiation, other things being equal, is always scattered more strongly, and therefore the reflected and scattered light has a bluer tint than the original. In addition, belomorite sometimes exhibits weak orange luminescence under ultraviolet rays.
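For a sense of scale, a worked illustration of the standard Rayleigh result (general optics, not a measurement on belomorite): scattered intensity grows as the inverse fourth power of wavelength, so blue light at about 450 nm is scattered several times more strongly than red light at about 650 nm.

```latex
I(\lambda) \propto \frac{1}{\lambda^{4}},
\qquad
\frac{I_{\mathrm{blue}}}{I_{\mathrm{red}}}
  = \left(\frac{\lambda_{\mathrm{red}}}{\lambda_{\mathrm{blue}}}\right)^{4}
  = \left(\frac{650\ \mathrm{nm}}{450\ \mathrm{nm}}\right)^{4}
  \approx 4.4
```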
As a rule, belomorite forms solid granular and crystalline masses or massive crystalline aggregates; crystals of tabular and tabular-prismatic types are much less common. Complex polysynthetic twinning is very often observed.
Mineral formation
Belomorite is of igneous origin; it is part of many granites and granite pegmatites. Albites (oligoclases) are very widespread throughout the world and are among the most common rock-forming minerals. Iridescent regional varieties quite similar in composition and properties to belomorite are naturally less common, but it would be wrong to call them rare minerals. Plagioclases with a moonlight sheen have long been known in some pegmatite veins of Shaitanka and Lipovka (Middle Urals), as well as in Utochkina Pad near Ulan-Ude (Buryatia).
High-quality, industrially significant belomorite, used in jewelry production, is mined mainly in Ceylon (where it is most often called “Ceylon opal”). Belomorites have also been found in large quantities in Madagascar, Tanzania, India, and the USA (California and Colorado). In addition, local deposits of iridescent albite are known in Australia, Austria, Germany, Italy, Kenya, Norway, Poland, Ukraine, as well as in France, Switzerland, Sweden and Japan.
The original, name-giving deposits of belomorite are located in North Karelia, where this stone was discovered by Alexander Fersman, as well as in the south of the Kola Peninsula.
Usage
Belomorite is a spectacular, inexpensive and popular ornamental stone; it is used in jewelry as one of the varieties of “moonstone”. It is typically cut into cabochons, often double-sided ones convex in both directions, which enhances its brilliance; similar labradorite, by contrast, is often cut into flat plates parallel to the cleavage lines.
At the same time, the brittleness and perfect cleavage of belomorite are a natural obstacle to its processing. The mineral often breaks and is difficult to work, which is another reason it is most often cut as simple cabochons (oval, round, teardrop-shaped) or as balls, taking existing cracks and cleavage lines into account. Moonstones top the list of gems for which synthetic minerals (counterfeits) enter the market under the guise of natural ones. Besides a brighter iridescence effect, the artificial stones are not as fragile and vulnerable as their natural counterparts.
See also
Albite
Oligoclase
Plagioclase
Feldspar
Iceland spar
Moonstone
References
External links
Belomorite (A material that is NOT an approved mineral species): information about the mineral belomorite in the Mindat database.
Moonstone (Belomorite) in the database Mineralienatlas
Sodium minerals
Aluminium minerals
Calcium minerals
Silicate minerals
Triclinic minerals
Gemstones | Belomorite | Physics | 2,297 |
15,546,777 | https://en.wikipedia.org/wiki/Learned%20non-use | Learned non-use of a limb is a learning phenomenon whereby movement is initially suppressed because attempts to use the affected limb fail or produce adverse reactions; this suppression of behavior then persists, and the individual never learns that the limb may have become potentially useful. By constraining the less-affected limb there is a change in motivation, which overcomes the learned non-use of the more-affected limb.
The principles of constraint-induced movement therapy (CIMT) used in stroke patients are based on the idea of the reversal of learned non-use. CIMT uses constrained movement of the less-affected limb and intensive training of the paretic arm to counter-condition the non-use of the more-affected arm learned in the acute and early sub-acute periods. More recently, clinical versions of CIMT, called "modified constraint-induced movement therapy" (mCIT), have been developed that are administered over a longer time period than CIMT (usually 10 weeks). While offering the same effectiveness and cortical changes as CIMT, these versions are better tolerated, and can be integrated into traditional therapy clinics and reimbursement parameters.
See also
Silver Spring monkeys
References
Limbs (anatomy)
Learning
Human power | Learned non-use | Physics | 260 |
665,433 | https://en.wikipedia.org/wiki/John%20Franklin%20Enders | John Franklin Enders (February 10, 1897 – September 8, 1985) was an American biomedical scientist and Nobel Laureate. Enders has been called "The Father of Modern Vaccines."
Life and education
Enders was born in West Hartford, Connecticut on February 10, 1897. His father, John Ostrom Enders, was CEO of the Hartford National Bank and left him a fortune of $19 million upon his death. He attended the Noah Webster School in Hartford, and St. Paul's School in Concord, New Hampshire. After attending Yale University a short time, he joined the United States Army Air Corps in 1918 as a flight instructor and a lieutenant.
After returning from World War I, he graduated from Yale, where he was a member of Scroll and Key as well as Delta Kappa Epsilon. He went into real estate in 1922, and tried several careers before choosing the biomedical field with a focus on infectious diseases, gaining a PhD at Harvard in 1930. He later joined the faculty at Children's Hospital Boston.
Enders died at his summer home in Waterford, Connecticut, aged 88, on 8 September 1985. His wife died in 2000.
Biomedical career
In 1949, Enders, Thomas Huckle Weller, and Frederick Chapman Robbins reported successful in vitro culture of an animal virus—poliovirus. The three received the 1954 Nobel Prize in Physiology or Medicine "for their discovery of the ability of poliomyelitis viruses to grow in cultures of various types of tissue".
Meanwhile, Jonas Salk applied the Enders-Weller-Robbins technique to produce large quantities of poliovirus, and then developed a polio vaccine in 1952. Upon the 1954 polio vaccine field trial, whose success Salk announced on the radio, Salk became a public hero but failed to credit the many other researchers that his effort rode upon, and was somewhat shunned by America's scientific establishment.
In 1954, Enders and Thomas C. Peebles isolated measles virus from an 11-year-old boy, David Edmonston. Disappointed by the polio vaccine's involvement in some cases of polio and death, which Enders attributed to Salk's technique, he began development of a measles vaccine. In October 1960, an Enders team began trials on 1,500 mentally retarded children in New York City and on 4,000 children in Nigeria.
Refusing credit for merely himself when The New York Times announced the measles vaccine effective on September 17, 1961, Enders wrote to the newspaper to acknowledge the work of various colleagues and the collaborative nature of the research. In 1963, a deactivated measles vaccine and an attenuated measles vaccine were introduced by Pfizer and Merck & Co., respectively.
He continued to work in virology research until the late 1970s and retired from the laboratory at the age of 80.
Honors
1946: Fellow of the American Academy of Arts and Sciences
1953: Member of the American Philosophical Society
1954: Nobel Prize in Physiology or Medicine (together with Frederick Chapman Robbins and Thomas Huckle Weller)
1954: Albert Lasker Award for Basic Medical Research
1955: Kyle Award from the U.S. Public Health Service
1955: Member of the American Philosophical Society
1958: inducted into the Polio Hall of Fame
1960: Cameron Prize for Therapeutics of the University of Edinburgh
1962: Robert Koch Prize
1963: Presidential Medal of Freedom
1963: Science Achievement Award from the American Medical Association
1967: Foreign Member, The Royal Society
Enders also held honorary doctoral degrees from 13 universities.
See also
Anna Mitus
References
External links
including the Nobel Lecture, December 11, 1954 The Cultivation of the Poliomyelitis Viruses in Tissue Culture
John Franklin Enders Papers (MS 1478). Manuscripts and Archives, Yale University Library.
1897 births
1985 deaths
American chief executives of financial services companies
American Nobel laureates
American virologists
Harvard Medical School alumni
Nobel laureates in Physiology or Medicine
People from West Hartford, Connecticut
Polio
Recipients of the Albert Lasker Award for Basic Medical Research
St. Paul's School (New Hampshire) alumni
Yale University alumni
American medical researchers
Foreign members of the Royal Society
United States Army Air Forces soldiers
United States Army Air Service pilots of World War I
Fellows of the American Academy of Arts and Sciences
Measles
Presidential Medal of Freedom recipients
Vaccination advocates
Time Person of the Year
Members of the American Philosophical Society | John Franklin Enders | Biology | 894 |
41,237,117 | https://en.wikipedia.org/wiki/Poly%28amidoamine%29 | Poly(amidoamine), or PAMAM, is a class of dendrimer which is made of repetitively branched subunits of amide and amine functionality. PAMAM dendrimers, sometimes referred to by the trade name Starburst, have been extensively studied since their synthesis in 1985, and represent the most well-characterized dendrimer family as well as the first to be commercialized. Like other dendrimers, PAMAMs have a sphere-like shape overall, and are typified by an internal molecular architecture consisting of tree-like branching, with each outward 'layer', or generation, containing exponentially more branching points. This branched architecture distinguishes PAMAMs and other dendrimers from traditional polymers, as it allows for low polydispersity and a high level of structural control during synthesis, and gives rise to a large number of surface sites relative to the total molecular volume. Moreover, PAMAM dendrimers exhibit greater biocompatibility than other dendrimer families, perhaps due to the combination of surface amines and interior amide bonds; these bonding motifs are highly reminiscent of innate biological chemistry and endow PAMAM dendrimers with properties similar to that of globular proteins. The relative ease/low cost of synthesis of PAMAM dendrimers (especially relative to similarly-sized biological molecules such as proteins and antibodies), along with their biocompatibility, structural control, and functionalizability, have made PAMAMs viable candidates for application in drug development, biochemistry, and nanotechnology.
Synthesis
Divergent synthesis
Divergent synthesis refers to the sequential "growth" of a dendrimer layer by layer, starting with a core "initiator" molecule which contains functional groups capable of acting as active sites in the initial reaction. Each subsequent reaction in the series increases the number of available surface groups exponentially. Core molecules which give rise to PAMAM dendrimers can vary, but the most basic initiators are ammonia and ethylene diamine. Outward growth of PAMAM dendrimers is accomplished by alternating between two reactions:
Michael addition of the amino-terminated surface onto methyl acrylate, resulting in an ester-terminated outer layer, and
Coupling with ethylene diamine to achieve a new amino-terminated surface.
Each round of reactions forms a new "generation", and PAMAM dendrimers are often classified by generation number; the common shorthand for this classification is "GX" or "GX PAMAM", where X is a number referring to the generation number. The first full cycle of Michael addition followed by coupling with ethylene diamine forms Generation 0 PAMAM, with subsequent Michael additions giving rise to "half" generations, and subsequent amide coupling giving rise to "full" (integer) generations.
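Since every branching point splits once (branch multiplicity 2) and the common ethylenediamine core starts four branches, the surface-group count doubles with each full generation. The sketch below tabulates this bookkeeping; the molecular weights are approximate literature values for amine-terminated, ethylenediamine-core PAMAM, included only for illustration.

```python
# Minimal sketch: exponential growth of PAMAM surface groups, assuming
# an ethylenediamine core (4 initial branches) and one split per
# branching point. APPROX_MW holds approximate literature molecular
# weights (Da) for amine-terminated PAMAM, for illustration only.

APPROX_MW = {0: 517, 1: 1_430, 2: 3_256, 3: 6_909, 4: 14_215, 5: 28_826}

def surface_amines(generation: int, n_core: int = 4, n_branch: int = 2) -> int:
    """Number of surface primary amines on a full-generation dendrimer."""
    return n_core * n_branch ** generation

for g in range(6):
    print(f"G{g}: {surface_amines(g):3d} surface -NH2 groups, ~{APPROX_MW[g]:,} Da")
# G0:   4 surface -NH2 groups, ~517 Da ... G5: 128 surface -NH2 groups, ~28,826 Da
```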
With divergent synthesis of dendrimers, it is extremely important to allow each reaction to proceed to completion; any defects caused by incomplete reaction or intramolecular coupling of new surface amines with unreacted methyl ester surface groups could cause "trailing" generations, stunting further growth for certain branches. These impurities are difficult to remove when using the divergent synthetic approach because the molecular weight, physical size, and chemical properties of the defective dendrimers are very similar in nature to the desired product. As generation number increases, it becomes more difficult to produce pure products in a timely fashion due to steric constraints. As a result, synthesis of higher-generation PAMAM dendrimers can take months.
Convergent synthesis
Convergent synthesis of a dendrimer begins with what will eventually become the surface of the dendrimer and proceeds inward. The convergent synthetic approach makes use of orthogonal protecting groups (two protecting groups whose deprotection conditions will not remove one another); this is an additional consideration not present when using a divergent approach. The figure below depicts a general scheme for a convergent synthetic approach.
Convergent synthesis as shown above begins with the dendritic subunit composed of reactive "focal group" A and branched group B (B can be multiply branched in the most generalized scenario, but PAMAMs only split once at each branching point). First, A is orthogonally protected and set aside for further reactions. B is also orthogonally protected, leaving the unprotected A on this molecule to couple with each of the unprotected B groups from the initial compound. This results in a new higher-generation species that is protected on both A and B. Selective deprotection of A yields a new molecule which can again be coupled onto the original monomer, thus forming another new generation. This process can then be repeated to form more and more layers.
Note that the black protecting groups for group B represent what will become the outermost layer of the final molecule, and remain attached throughout the synthetic process; their purpose is to guarantee that propagation of dendrimer growth can take place in a controlled fashion by preventing unwanted side reactions.
In forming each new layer, the number of AB couplings is restricted to two, in sharp contrast to the divergent synthetic approach, which involves exponentially more couplings per layer.
Incomplete reaction products (single addition adduct, unreacted starting materials) will have a markedly different molecular weight from the desired product, especially for higher-generation compounds, making purification more straightforward.
The reactive focal group A must be terminated onto a final acceptor at some point during the synthetic process; until then, each compound can only be considered a dendron and not a full dendrimer.
An advantage to synthesizing dendrons with focal group A as a chemical handle is the ability to attach multiple equivalents of the dendron to a polyfunctional core molecule; changing the core element does not require rebuilding the entire dendrimer. In the case of PAMAM, the focal points of convergently synthesized fragments have been used to create unsymmetrical dendrimers as well as dendrimers with various core functionalization.
Since each successive generation of dendron becomes bulkier, with final attachment to the dendrimer core being the most prohibitive step of all, steric constraints can severely impact yield.
Toxicity
in vitro
It has been established that cationic macromolecules in general destabilize the cell membrane, which can lead to lysis and cell death. The common conclusion present in current work echoes this observation: increasing dendrimer molecular weight and surface charge (both being generation-dependent) increases their cytotoxic behavior.
Initial studies on PAMAM toxicity showed that PAMAM was less toxic (in some cases, much less so) than related dendrimers, exhibiting minimal cytotoxicity across multiple toxicity screens, including tests of metabolic activity (MTT assay), cell breakdown (LDH assay), and nucleus morphology (DAPI staining). However, in other cell lines, the MTT assay and several other assays revealed some cytotoxicity. These disparate observations could be due to differences in sensitivity of the various cell lines used in each study to PAMAM; although cytotoxicity for PAMAM varies among cell lines, they remain less toxic than other dendrimer families overall.
More recently, a series of studies by Mukherjee et al. have shed some light on the mechanism of PAMAM cytotoxicity, providing evidence that the dendrimers break free of their encapsulating membrane (endosome) after being absorbed by the cell, causing harm to the cell's mitochondria and eventually leading to cell death. Further elucidation of the mechanism of PAMAM cytotoxicity would help resolve the dispute as to precisely how toxic the dendrimers are.
In relation to neuronal toxicity, fourth-generation PAMAM has been shown to disrupt calcium transients, altering neurotransmitter vesicle dynamics and synaptic transmission. All of these effects can be prevented by replacing the surface amines with folate or polyethylene glycol.
It has also been shown that PAMAM dendrimers cause rupturing of red blood cells, or hemolysis. Thus, if PAMAM dendrimers are to be considered in biological applications that involve dendrimers or dendrimer complexes traveling through the bloodstream, the concentration and generation number of unmodified PAMAM in the bloodstream should be taken into account.
in vivo
To date, few in-depth studies on the in vivo behavior of PAMAM dendrimers have been carried out. This could be in part due to the diverse behavior of PAMAMs depending on surface modification (see below), which make characterization of their in vivo properties largely case-dependent. Nonetheless, the fate and transport of unmodified PAMAM dendrimers is an important case study as any biological applications could involve unmodified PAMAM as a metabolic byproduct. In the only major systematic study of in vivo PAMAM behavior, injections of high levels of bare PAMAMs over extended periods of time in mice showed no evidence of toxicity up through G5 PAMAM, and for G3-G7 PAMAM, low immunogenicity was observed. These systemic-level observations seem to align with the observation that PAMAM dendrimers are not extremely cytotoxic overall; however, more in-depth studies of the pharmacokinetics and biodistribution of PAMAM are required before a move toward in vivo applications can be made.
Surface modification
One unique property of dendrimers such as PAMAM is the high density of surface functional groups, which allow many alterations to be made to the surface of each dendrimer molecule. In unmodified PAMAM dendrimers, the surface is rife with primary amines, with higher generations expressing exponentially greater densities of amino groups. Although the potential to attach many things to each dendrimer is one of their greatest advantages, the presence of highly localized positive charges can be toxic to cells. Surface modification via attachment of acetyl and lauroyl groups helps mask these positive charges, attenuating cytotoxicity and increasing permeability to cells. Thus, these types of modifications are especially beneficial for biological applications. Secondary and tertiary amino surface groups are also found to be less toxic than primary amino surface groups, suggesting it is charge shielding which has major bearing on cytotoxicity and not some secondary effect from a particular functional group. Furthermore, other studies point to a delicate balance in charge which must be achieved to obtain minimal cytotoxicity. Hydrophobic interactions can also cause cell lysis, and PAMAM dendrimers whose surfaces are saturated with nonpolar modifications such as lipids or polyethylene glycol (PEG) suffer from higher cytotoxicity than their partially substituted analogues. PAMAM dendrimers with nonpolar internal components have also been shown to induce hemolysis.
Applications
Applications involving dendrimers in general take advantage of either stuffing cargo into the interior of the dendrimer (sometimes referred to as the "dendritic box"), or attaching cargo onto the dendrimer surface. PAMAM dendrimer applications have generally focused on surface modification, taking advantage of both electrostatic and covalent methods for binding cargo. Currently, major areas of study using PAMAM dendrimers and their functionalized derivatives involve drug delivery and gene delivery.
Drug delivery
Since PAMAM dendrimers have shown penetration capability to a wide range of cell lines, simple PAMAM-drug complexes would affect a broad spectrum of cells upon introduction to a living system. Thus, additional targeting ligands are required for the selective penetration of cell types. For example, PAMAM derivatized with folic acid is preferentially taken up by cancer cells, which are known to overexpress the folate receptor on their surfaces. Attaching additional treatment methods along with the folic acid, such as boron isotopes, cisplatin, and methotrexate have proven quite effective. In the future, as synthetic control over dendrimer surface chemistry becomes more robust, PAMAM and other dendrimer families may rise to prominence alongside other major approaches to targeted cancer therapy.
In a study of folic acid functionalized PAMAM, methotrexate was combined either as an inclusion complex within the dendrimer or as a covalent surface attachment. In the case of the inclusion complex, the drug was released from the dendrimer interior almost immediately when subjected to biological conditions and acted similarly to the free drug. The surface attachment approach yielded stable, soluble complexes which were able to selectively target cancer cells and did not prematurely release their cargo. Drug release in the case of the inclusion complex could be explained by the protonation of surface and interior amines under biological conditions, leading to unpacking of the dendrimer conformation and consequent release of the inner cargo. A similar phenomenon was observed with complexes of PAMAM and cisplatin.
PAMAM dendrimers have also demonstrated intrinsic drug properties. One quite notable example is the ability for PAMAM dendrimers to remove prion protein aggregates, the deadly protein aggregates responsible for bovine spongiform encephalopathy ("mad cow disease") and Creutzfeldt–Jakob disease in humans. The solubilization of prions is attributed to the polycationic and dendrimeric nature of the PAMAMs, with higher generation (>G3) dendrimers being the most efficient; hydroxy-terminated PAMAMs as well as linear polymers showed little to no effect. Since there are no other known compounds capable of dissolving prions which have already aggregated, PAMAM dendrimers have offered a bit of reprieve in the study of such fatal diseases, and may offer additional insight into the mechanism of prion formation.
Gene therapy
The discovery that mediating positive charge on PAMAM dendrimer surfaces decreases their cytotoxicity has interesting implications for DNA transfection applications. Because the cell membrane has a negatively charged exterior, and the DNA phosphate backbone is also negatively charged, the transfection of free DNA is not very efficient simply due to charge repulsion. However, it would be reasonable to expect charged interactions between the anionic phosphate backbone of DNA and the amino-terminated surface groups of PAMAM dendrimers, which are positively ionized under physiological conditions. This could result in a PAMAM-DNA complex, which would make DNA transfection more efficient due to neutralization of the charges on both elements, while the cytotoxicity of the PAMAM dendrimer would also be reduced. Indeed, several reports have confirmed PAMAM dendrimers as effective DNA transfection agents.
When the charge balance between DNA phosphates and PAMAM surface amines is slightly positive, the maximum transfection efficiency is obtained; this finding supports the idea that the complex binds to the cell surface via charge interactions. A striking observation is that "activation" of PAMAM by partial degradation via hydrolysis improves transfection efficiency by 2-3 orders of magnitude, providing further evidence supporting the existence of an electrostatically coupled complex. The fragmentation of some branches of the dendrimer is thought to loosen up the overall structure (fewer amide bonds and space constraints), which would theoretically result in better contact between the dendrimer and DNA substrate because the dendrimer is not forced into a rigid spherical conformation due to sterics. This in turn results in more compact DNA complexes which are more easily endocytosed. After endocytosis, the complexes are subjected to the acidic conditions of the cellular endosome. The PAMAM dendrimers act as a buffer in this environment, soaking up the excess protons with multitudes of amine residues, leading to the inhibition of pH-dependent endosomal nuclease activity and thus protecting the cargo DNA. The tertiary amines on the interior of the dendrimer can also participate in the buffering activity, causing the molecule to puff up; additionally, as the PAMAMs take on more and more positive charge, fewer of them are required for the optimal PAMAM-DNA interaction, and free dendrimers are released from the complex. Dendrimer release and swelling can eventually lyse the endosome, resulting in release of the cargo DNA. The activated PAMAM dendrimers have less spatial barrier to interior amine protonation, which is thought to be a major source of their advantage over non-activated PAMAM.
In the context of existing approaches to gene transfer, PAMAM dendrimers hold a strong position relative to major classical technologies such as electroporation, microinjection, and viral methods. Electroporation, which involves pulsing electricity through cells to create holes in the membrane through which DNA can enter, has obvious cytotoxic effects and is not appropriate for in vivo applications. On the other hand, microinjection, the use of fine needles to physically inject genetic material into the cell nucleus, offers more control but is a high-skill, meticulous task in which a relatively low number of cells can be transfected. Although viral vectors can offer highly specific, high-efficiency transfection, the generation of such viruses is costly and time-consuming; furthermore, the inherent viral nature of the gene transfer often triggers an immune response, thus limiting in vivo applications. In fact, many modern transfection technologies are based on artificially assembled liposomes (both liposomes and PAMAMs are positively charged macromolecules). Since PAMAM dendrimers and their complexes with DNA exhibit low cytotoxicity, higher transfection efficiencies than liposome-based methods, and are effective across a broad range of cell lines, they have taken an important place in modern gene therapy methodologies. The biotechnology company Qiagen currently offers two DNA transfection product lines (SuperFect and PolyFect) based on activated PAMAM dendrimer technology.
Much work lies ahead before activated PAMAM dendrimers can be used as in vivo gene therapy agents. Although the dendrimers have proved to be highly efficient and non-toxic in vitro, the stability, behavior, and transport of the transfection complex in biological systems has yet to be characterized and optimized. As with drug delivery applications, specific targeting of the transfection complex is ideal and must be explored as well.
See also
Amidoamine
References
Bibliography
Dendrimers
Materials science | Poly(amidoamine) | Physics,Chemistry,Materials_science,Engineering | 3,890 |
13,795,221 | https://en.wikipedia.org/wiki/Wildlife%20disease | Disease is described as a decrease in the performance of the normal functions of an individual, and can be caused by many factors, not only infectious agents. A wildlife disease is a disease in which one of the hosts is a wildlife species. In many cases, wildlife hosts can act as a reservoir of diseases that spill over into domestic animals, people and other species. Wildlife diseases spread through direct contact between two individual animals or indirectly through the environment. Additionally, human industry has created the possibility for cross-species transmission through the wildlife trade. The many relationships that must be considered when discussing wildlife disease are represented through the Epidemiological Triad Model, which describes the relationship between a pathogen, a host and the environment. There are many routes by which a pathogen can infect a susceptible host, and once infected, that host has the potential to infect other hosts; meanwhile, environmental factors affect pathogen persistence and spread through host movement and interactions with other species. An example to apply to the triad is Lyme disease, where changes in the environment have changed the distribution of Lyme disease and its vector, the Ixodes tick. The recent increase in wildlife disease occurrences is cause for concern among conservationists, as many vulnerable species lack the population sizes needed to recover from devastating disease outbreaks.
Transmission
Indirect
Wildlife may come in contact with pathogens through indirect vectors such as their environment by consuming infected food and water, breathing contaminated air, or encountering virulent urine or feces from an infected organism. This type of transmission is typically associated with pathogens that are able to survive prolonged periods, with or without a host organism.
The most recognizable indirectly spread wildlife diseases are the prion diseases. Prion diseases spread indirectly because of the prions' longevity in the environment, where they can last for several months once released from a host via its excretions (urine or feces). Notable animal prion diseases include chronic wasting disease in cervids, scrapie in sheep and goats, and various types of spongiform encephalopathy including bovine (also known as mad cow disease), mink, feline, and ungulate.
Direct
Disease can be spread from organism to organism through direct contact such as exposure to infected blood, mucus, milk (in mammals), saliva, or sexual fluids such as vaginal secretions and semen.
A prominent example of direct infection is facial tumor disease in Tasmanian devils, as these marsupials will repeatedly bite other individuals in the face during the breeding season. These open wounds allow transmission via blood and saliva in the devil's orifices.
Wildlife Trade
A major recent driver of transmission between species is the wildlife trade, as many organisms that would not typically encounter each other naturally are brought into close proximity. This includes settings such as wet markets as well as the illegal trade of both live and dead animals and their body parts.
The most notable example of the wildlife trade impacting both animal and human health is COVID-19, which originated in a wet market in Wuhan, China. The originating species has been a topic of debate, as it is unclear due to the variety of species found at the market; however, pangolins and bats have both been absolved of blame despite initial claims.
Wildlife Disease Management
The challenges associated with wildlife disease management include environmental factors, the free movement of wildlife, and the effects of anthropogenic factors. Anthropogenic factors have driven significant changes in ecosystems and species distribution globally; such changes can be caused by the introduction of invasive species, habitat loss and fragmentation, and overall changes in the function of ecosystems. Because of these significant human-driven changes in the environment, wildlife management is needed to manage the interactions between wildlife and domestic animals and humans.
Wildlife species move freely within and between areas, coming into contact with domestic animals and humans and even invading new areas. These interactions can allow for disease transmission and disease spillover into new populations. Disease spillover is of great concern in outbreaks, not only in humans but also in other wildlife species, raising concerns for species preservation.
Detection
Wildlife disease is detected primarily through surveys, for example by taking samples from wildlife populations in an area to determine the prevalence of disease within a population. Prevalence is defined as the percentage of a population that is diseased at a particular time. There are limitations to this approach: not all hosts may show signs of disease, and both the sample distribution and the disease distribution affect the results. Diseases in wildlife tend to occur in patches across a population, which can affect the estimated prevalence. Sampling is assumed to be random, but is often opportunistic. Another form of disease detection is observation of diseased hosts; however, if some hosts within a species do not show signs of disease, this can bias the detected prevalence within a wildlife population.
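As a sketch of the survey arithmetic under the (often violated) assumption of random sampling, apparent prevalence and a binomial confidence interval can be computed as below; the counts are hypothetical, and a real survey would also need to correct for imperfect detection.

```python
# Minimal sketch: apparent prevalence from a wildlife disease survey
# with a 95% Wilson score interval. Counts are hypothetical; real
# surveys must also account for imperfect test sensitivity/specificity
# and for non-random (opportunistic) sampling.
import math

def prevalence_wilson(positive: int, sampled: int, z: float = 1.96):
    """Return (apparent prevalence, (low, high) Wilson interval)."""
    p = positive / sampled
    denom = 1 + z**2 / sampled
    centre = (p + z**2 / (2 * sampled)) / denom
    half = z * math.sqrt(p * (1 - p) / sampled + z**2 / (4 * sampled**2)) / denom
    return p, (centre - half, centre + half)

p, (lo, hi) = prevalence_wilson(positive=12, sampled=150)
print(f"apparent prevalence {p:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
# apparent prevalence 8.0%, 95% CI 4.6%-13.5%
```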
The reservoir of a wildlife disease can also pose a challenge for detection. An example of the difficulty of identifying a pathogen is the mass mortality event among bald eagles in the southeastern United States in 1994: the causative neurotoxin could be isolated from the areas of outbreak, but was not produced in laboratory culture until a brominated metabolite was identified. The management of wildlife diseases involves many factors, all of which are important to consider when determining the persistence of a pathogen within a population.
Surveillance and Monitoring
Programs have begun to survey wildlife populations to better understand transmission and health impacts in the affected wildlife communities. Tools such as geographic information systems (GIS) can be utilized to keep track of individual occurrences of disease in order to create an overall picture of disease prevalence and spread in a given area. Major zoonotic diseases such as rabies, COVID-19, influenza, and hemorrhagic fever are monitored to ensure human health and safety as well as to mitigate impacts on wildlife. Proactive intervention can increase the likelihood of species survival while simultaneously preventing emerging pathogens from escalating to an epidemic.
Prevention
Culling
Disease outbreaks in wild animals are sometimes controlled by killing infected individuals to prevent transmission to domestic and economically important animals. While easy and quick for disease management, culling has the consequence of disrupting ecosystem function and reducing biodiversity of the population due to the loss of individuals. Animal rights advocates argue against culling, as they consider individual wild animals to be intrinsically valuable and believe that they have a right to live. Activists favor humane methods of prevention such as vaccination or treatment via rehab centers, as these are non-lethal forms of management.
Vaccination programs
Some authors have drawn attention to wild animal suffering as a result of disease, arguing that we should alleviate this form of suffering through vaccination programs. Such programs are also deemed beneficial for reducing the exposure of humans and domestic animals to disease and for species conservation.
The oral rabies vaccine has been used successfully in multiple countries to control the spread of rabies among populations of wild animals and reduce human exposure. Australia, the UK, Spain and New Zealand have all conducted successful vaccination programs to prevent Bovine Tuberculosis, by vaccinating badgers, possums and wild boar.
In response to the COVID-19 pandemic, it has been proposed that, in the future, wild animals could be vaccinated against coronaviruses to relieve the suffering of the affected animals, prevent disease transmission and inform future vaccination efforts.
Zoonoses
Wild animals, domestic animals and humans share a large and increasing number of infectious diseases, known as zoonoses. The continued globalization of society, human population growth, and associated landscape change further increase the interactions between humans and other animals, thereby facilitating additional infectious disease emergence. Contemporary diseases of zoonotic origin include SARS, Lyme disease and West Nile virus.
Disease emergence and resurgence in populations of wild animals are considered an important topic for conservationists, as these diseases can affect the sustainability of affected populations and the long-term survival of some species. Examples of such diseases include chytridiomycosis in amphibians, chronic wasting disease in deer, white-nose syndrome in bats, and devil facial tumour disease in Tasmanian devils.
Conservation
Populations on the Decline
When an epidemic strikes a population of organisms, the loss of individuals can be detrimental to already fragile or fragmented populations. Many disease epidemics have largely reduced the population of their host organisms, some even increasing the possibility of an endangered or extinct status.
Notable Epidemics Impacting Species
Chronic wasting disease in cervids, both wild and captive
Influenza and its variations (avian and swine) impact birds and pigs respectively
White-nose syndrome in bats
Chytrid fungi in amphibians
Facial tumors in Tasmanian devils
Fibro-papillomatosis in sea turtles
Whirling disease in various fish such as trout and salmon
Recovery
While disease can ravage a population, many wildlife populations are resilient and can recover from their losses. Human intervention can also increase the chances of species recovering from epidemics via various prevention and treatment methods. Individuals that survive epidemics can repopulate, now with disease resistance present in the gene pool of that population, resulting in future generations of the species that are less susceptible to the disease.
Notable Species that Recovered From Epidemics
Canids such as foxes and coyotes steadily recovered from mange
Black-footed ferrets recovered from Sylvatic Plague
Sea urchins recovered from a ciliate parasite known as Philaster apodigitformis
See also
Epizootic
Threshold host density
Wildlife management
References
Further reading
External links
Wildlife Disease Association
Animal welfare
Human–animal interaction | Wildlife disease | Biology | 1,978 |
81,196 | https://en.wikipedia.org/wiki/Computing%20platform | A computing platform, digital platform, or software platform is the infrastructure on which software is executed. While the individual components of a computing platform may be obfuscated under layers of abstraction, the summation of the required components comprises the computing platform.
Sometimes, the most relevant layer for a specific piece of software is itself called the computing platform to facilitate communication, referring to the whole using only one of its attributes – i.e. using a metonymy.
For example, in a single computer system, this would be the computer's architecture, operating system (OS), and runtime libraries. In the case of an application program or a computer video game, the most relevant layer is the operating system, so it can be called a platform itself (hence the term cross-platform for software that can be executed on multiple OSes, in this context).
In a multi-computer system, such as in the case of offloading processing, it would encompass both the host computer's hardware, operating system (OS), and runtime libraries along with other computers utilized for processing that are accessed via application programming interfaces or a web browser. As long as it is a required component for the program code to execute, it is part of the computing platform.
Components
Platforms may also include:
Hardware alone, in the case of small embedded systems. Embedded systems can access hardware directly, without an OS; this is referred to as running on "bare metal".
Device drivers and firmware.
A browser in the case of web-based software. The browser itself runs on a hardware+OS platform, but this is not relevant to software running within the browser.
An application, such as a spreadsheet or word processor, which hosts software written in an application-specific scripting language, such as an Excel macro. This can be extended to writing fully-fledged applications with the Microsoft Office suite as a platform.
Software frameworks that provide ready-made functionality.
Cloud computing and Platform as a Service. Extending the idea of a software framework, these allow application developers to build software out of components that are hosted not by the developer, but by the provider, with internet communication linking them together. The social networking sites Twitter and Facebook are also considered development platforms.
An application virtual machine (VM) such as the Java virtual machine or .NET CLR. Applications are compiled into a format similar to machine code, known as bytecode, which is then executed by the VM.
A virtualized version of a complete system, including virtualized hardware, OS, software, and storage. These allow, for instance, a typical Windows program to run on what is physically a Mac.
Some architectures have multiple layers, with each layer acting as a platform for the one above it. In general, a component only has to be adapted to the layer immediately beneath it. For instance, a Java program has to be written to use the Java virtual machine (JVM) and associated libraries as a platform but does not have to be adapted to run on the Windows, Linux or Macintosh OS platforms. However, the JVM, the layer beneath the application, does have to be built separately for each OS.
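The same layering can be illustrated, by analogy, with Python, itself a cross-platform runtime (this example is an illustration, not drawn from the article): the script targets only the Python layer, while the interpreter beneath it is the component that must be built separately for each OS and hardware platform.

```python
# Minimal illustration of platform layering: this source targets only
# the Python runtime; the interpreter underneath is what gets built
# separately for each OS/hardware platform.
import platform
import sys

print("runtime  :", platform.python_implementation(), platform.python_version())
print("built for:", sys.platform)        # e.g. 'linux', 'win32', 'darwin'
print("hardware :", platform.machine())  # e.g. 'x86_64', 'arm64'
```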
Operating system examples
Desktop, laptop, server
Unix and Unix-like
BSD
SunOS
NeXTSTEP
Darwin
macOS
OpenDarwin
386BSD
NetBSD
OpenBSD
FreeBSD
DragonFly BSD
System V
HP-UX
IBM AIX
A/UX
Solaris
OpenSolaris
illumos
OpenIndiana
MINIX
GNU Hurd
Linux
Android
ChromeOS
OSF/1
Tru64 UNIX
z/OS
VM
OpenVMS
DOS
MS-DOS / IBM PC DOS
Windows 9x
FreeDOS
QNX
Classic Mac OS
AmigaOS
OS/2
IBM i
Windows NT
BeOS
Haiku
HarmonyOS
Mobile
Newton OS
Palm OS
Symbian
BlackBerry OS
Windows Mobile
Unix and Unix-like
iOS
iPadOS
watchOS
Linux
Android
Fire OS
LineageOS
webOS
Bada
Ubuntu Touch
Tizen
Firefox OS
KaiOS
Sailfish OS
LuneOS
postmarketOS
Windows
Windows Phone
Windows 10 Mobile
BlackBerry 10
HarmonyOS
Fuchsia
Software examples
Shockwave
Binary Runtime Environment for Wireless (BREW)
Cocoa
Cocoa Touch
.NET
Mono
.NET Framework
Silverlight
Flash
AIR
Java
Java ME
Java SE
Java EE
JavaFX
JavaFX Mobile
LiveCode
Microsoft XNA
Mozilla Prism, XUL and XULRunner
Mozilla WebExtensions API is modeled after Google Chrome's API. Thus Firefox extensions are now largely compatible with their Chrome counterparts.
Web platform
Oracle Database
Qt
SAP NetWeaver
Smartface
Universal Windows Platform
Windows Runtime
HMS Core
Cangjie
ArkTS
ArkUI
ArkUI-X
Huawei Phoenix Engine
Phoenix Engine Ray Shop
Hardware examples
ARM architecture based devices
Raspberry Pi or Gumstix full function miniature computers
ARM servers with Unix-like systems such as Linux or BSD variants
ChromeBooks from various manufacturers
IBM PC compatible systems
IBM System p and IBM Power Systems computers
IBM z/Architecture mainframes
CP/M computers based on the S-100 bus, perhaps the earliest microcomputer platform
Video game consoles, any variety (PlayStation, Xbox, Nintendo)
3DO Interactive Multiplayer, that was licensed to manufacturers
Apple Pippin, a multimedia player platform for video game console development
Supercomputer architectures
See also
Cross-platform software
Hardware virtualization
Third platform
Platform ecosystem
References
External links
Ryan Sarver: What is a platform? | Computing platform | Technology | 1,109 |
27,879,368 | https://en.wikipedia.org/wiki/Functional%20divergence | Functional divergence is the process by which genes, after gene duplication, shift in function from an ancestral function. Functional divergence can result in either subfunctionalization, where a paralog specializes one of several ancestral functions, or neofunctionalization, where a totally new functional capability evolves. It is thought that this process of gene duplication and functional divergence is a major originator of molecular novelty and has produced the many large protein families that exist today.
Functional divergence is just one possible outcome of gene duplication events. Other fates include nonfunctionalization where one of the paralogs acquires deleterious mutations and becomes a pseudogene and superfunctionalization (reinforcement), where both paralogs maintain original function. While gene, chromosome, or whole genome duplication events are considered the canonical sources of functional divergence of paralogs, orthologs (genes descended from speciation events) can also undergo functional divergence and horizontal gene transfer can also result in multiple copies of a gene in a genome, providing the opportunity for functional divergence.
Many well-known protein families are the result of this process, such as the ancient gene duplication event that led to the divergence of hemoglobin and myoglobin, the more recent duplication events that led to the various subunit expansions (alpha and beta) of vertebrate hemoglobins, and the expansion of G-protein alpha subunits.
See also
Heterotachy
Subfunctionalization
Neofunctionalization
References
Genetics
Evolution | Functional divergence | Biology | 322 |
56,520,290 | https://en.wikipedia.org/wiki/Neutral%20Point%20Clamped | Neutral point clamped (NPC) inverters are a widely used topology of multilevel inverter in high-power applications. Inverters of this kind can be used in applications of up to several megawatts.
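As a hedged illustration of what "multilevel" means here, the sketch below models an idealized three-level NPC phase leg driven by level-shifted carrier PWM: with the DC link split by two capacitors, the output can be connected to +Vdc/2, the clamped neutral point (0), or -Vdc/2. Switches are treated as ideal and all values are normalised.

```python
# Minimal sketch of a three-level NPC leg under level-shifted carrier
# PWM (ideal switches, normalised DC link). The output takes exactly
# three values: +Vdc/2, 0 (the clamped neutral point) and -Vdc/2.
import math

VDC = 1.0  # normalised DC-link voltage

def npc_leg_output(t: float, f_ref: float = 50.0, f_car: float = 2000.0) -> float:
    ref = math.sin(2 * math.pi * f_ref * t)      # sinusoidal reference, -1..1
    tri = 2 * abs((f_car * t) % 1.0 - 0.5)       # triangular carrier, 0..1
    if ref >= 0:                                 # upper carrier band
        return +VDC / 2 if ref > tri else 0.0
    else:                                        # lower carrier band
        return -VDC / 2 if -ref > tri else 0.0

samples = {npc_leg_output(k / 100_000) for k in range(2_000)}  # one 50 Hz cycle
print(sorted(samples))  # [-0.5, 0.0, 0.5] -> three distinct output levels
```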
See also
Active power filter
Synchronverter
References
Power electronics | Neutral Point Clamped | Engineering | 68 |
69,642,523 | https://en.wikipedia.org/wiki/Salazinic%20acid | Salazinic acid is a depsidone with a lactone ring. It is found in some lichens, and is especially prevalent in Parmotrema and Bulbothrix, where its presence or absence is often used to help classify species in those genera.
History
In 1897, Friedrich Wilhelm Zopf gave the name salazinic acid to a chemical he isolated from the African species Stereocaulon salazinum. Later studies showed that the compound he named was actually norstictic acid.
In 1933, Yasuhiko Asahina and J. Asano studied salazinic acid they had isolated from Parmelia cetrata, and found a unique ring system with seven members containing two phenolic components. The fundamental structure was named depsidone, that is, a seven-membered ring with an oxygen bridge binding two aromatic rings. Japanese chemists demonstrated in the late 1960s that the isolated mycobiont of the lichen Ramalina crassa could produce salazinic acid when grown in laboratory culture. Subsequent studies tried to determine the influence of environmental factors on the production of salazinic acid in culture. For example, two studies in the late 1980s showed that only 4-O-demethylbarbatic acid (a precursor of salazinic acid) was produced by the isolated mycobiont of Ramalina siliquosa when grown in malt yeast extract medium supplemented with low amounts of sucrose. When extra sucrose was added to the growth medium, the production of salazinic acid was observed; the increased osmolality enhances the reaction from 4-O-demethylbarbatic acid to salazinic acid.
Properties
Salazinic acid has the molecular formula C18H12O10 and a molecular mass of 388.3 grams per mole. In its purified form, it exists as colourless needles with a melting point range between , undergoing a colour change to brown at about . Its solubility in water is about 27 milligrams per litre.
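As a quick check (standard atomic masses; a worked verification rather than a value from the source), the quoted molecular mass follows directly from the formula:

```latex
M(\mathrm{C_{18}H_{12}O_{10}})
  = 18(12.011) + 12(1.008) + 10(15.999)
  = 216.20 + 12.10 + 159.99
  \approx 388.3\ \mathrm{g\,mol^{-1}}
```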
The compound has been shown in in vitro studies to have antimicrobial properties, but it did not have any substantive antimycobacterial effects when tested against Mycobacterium aurum. Recent (2021) research indicates that salazinic acid is a potent modulator of Nrf2, NF-κB and STAT3 signaling pathways in colorectal cancer cells.
The complete 1H and 13C NMR spectral assignments for salazinic acid were reported in 2000.
Occurrence
Salazinic acid is derived via the acetyl polymalonyl pathway, a metabolic pathway that uses acetyl-CoA and malonyl-CoA (derivatives of coenzyme A). The compound is common in the large lichen genus Parmotrema, and plays an important role in the chemotaxonomy and systematics of that genus. A 2020 revision included 66 salazinic acid-containing species. The presence or absence of the compound is also important in the classification of genus Bulbothrix.
In nature, salazinic acid serves as an antioxidant as well as a photoprotectant, helping the lichen to survive conditions of both abiotic and biotic stress. A study of three foliose lichen species showed higher quantities of salazinic acid correlating with increases in altitude. An earlier study demonstrated other possible effects of environmental conditions on salazinic acid content: the salazinic acid content of Ramalina siliquosa was shown to be higher where the annual mean temperature is higher, and the content of lichens growing on dark-coloured rock or on southern rock faces is higher than that of lichens growing on light-coloured rock or on northern rock faces.
Related compounds
The depsidones chalybaeizanic acid and quaesitic acid, isolated from the lichens Xanthoparmelia amphixanthoides and Hypotrachyna quaesita, respectively, are structurally similar to salazinic acid. In consalazinic acid, the aldehyde group of salazinic acid is replaced with a benzyl alcohol functional group.
8'-O-Methylsalazinic acid was isolated from Parmotrema dilatatum. Several new synthesised derivatives of salazinic acid were reported in 2021 using bromination, nucleophilic addition, Friedel-Crafts alkylation, and esterification.
Eponyms
Several authors have explicitly named salazinic acid in the specific epithets of their published lichen species, thereby acknowledging the presence of this compound as an important taxonomic characteristic. These eponyms are listed here, followed by their taxonomic authority and year of publication:
Acanthothecis salazinica
Bryoria salazinica
Graphina salazinica
Karoowia salazinica
Lepraria salazinica
Myelochroa salazinica
Ocellularia salazinica
Pertusaria salazinica
Phaeographina salazinica
Psiloparmelia salazinica
Diorygma salazinicum
Oropogon salazinicus
References
Lactones
Lichen products
Heterocyclic compounds with 4 rings
Hydroxymethyl compounds
Polyphenols
Benzodioxepines | Salazinic acid | Chemistry | 1,105 |
49,256,303 | https://en.wikipedia.org/wiki/EcoRI | EcoRI (pronounced "eco R one") is a restriction endonuclease enzyme isolated from the species E. coli. It is a restriction enzyme that cleaves DNA double helices into fragments at specific sites, and is also part of the restriction modification system. The Eco part of the enzyme's name derives from the species from which it was isolated: "E" denotes the genus name, "Escherichia", and "co" the species name, "coli". The R represents the particular strain, in this case RY13, and the I denotes that it was the first enzyme isolated from this strain.
In molecular biology it is used as a restriction enzyme. EcoRI creates 4 nucleotide sticky ends with 5' end overhangs of AATT. The nucleic acid recognition sequence where the enzyme cuts is G↓AATTC, which has a palindromic complementary sequence of CTTAA↓G. Other restriction enzymes, depending on their cut sites, can also leave 3' overhangs or blunt ends with no overhangs.
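The cut geometry can be simulated on a text sequence (a minimal sketch, not a laboratory protocol): every GAATTC site is cut after the G on the top strand, so each downstream fragment begins with the single-stranded AATT overhang.

```python
# Minimal sketch: simulate an EcoRI digest of a linear DNA sequence.
# EcoRI recognizes 5'-GAATTC-3' and cuts between G and A (G^AATTC),
# leaving 4-nucleotide 5' overhangs of AATT.

def ecori_digest(seq: str) -> list:
    """Return top-strand fragments produced by cutting at every GAATTC."""
    seq = seq.upper()
    fragments, start = [], 0
    pos = seq.find("GAATTC")
    while pos != -1:
        fragments.append(seq[start:pos + 1])  # cut after the G
        start = pos + 1
        pos = seq.find("GAATTC", start)
    fragments.append(seq[start:])
    return fragments

print(ecori_digest("ATGGAATTCTTAGCGGAATTCAA"))
# ['ATGG', 'AATTCTTAGCGG', 'AATTCAA'] -- later fragments start with AATT
```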
History
EcoRI is an example of the type II restriction enzymes, a class that now includes more than 300 enzymes with more than 200 different sequence-specificities and that has transformed molecular biology and medicine.
EcoRI, discovered in 1970, was isolated by PhD student Robert Yoshimori, who investigated clinical E. coli isolates that carried restriction systems on their plasmids. The purified enzyme became known as EcoRI, which cleaves G↓AATTC.
Structure
Primary structure
EcoRI contains the PD..D/EXK motif within its active site like many restriction endonucleases.
Tertiary and quaternary structure
The enzyme is a homodimer of a 31 kilodalton subunit consisting of one globular domain of the α/β architecture. Each subunit contains a loop which sticks out from the globular domain and wraps around the DNA when bound.
EcoRI has been cocrystallized with the sequence it normally cuts. This crystal was used to solve the structure of the complex (). The solved crystal structure shows that the subunits of the enzyme homodimer interact with the DNA symmetrically. In the complex, two α-helices from each subunit come together to form a four-helix bundle. On the interacting helices are residues Glu144 and Arg145, which interact together, forming a crosstalk ring that is believed to allow the enzyme's two active sites to communicate.
Uses
Restriction enzymes are used in a wide variety of molecular genetics techniques including cloning, DNA screening and deleting sections of DNA in vitro. Restriction enzymes, like EcoRI, that generate sticky ends of DNA are often used to cut DNA prior to ligation, as sticky ends make the ligation reaction more efficient. One example of this use is in recombinant DNA production, when joining donor and vector DNA. EcoRI can exhibit non-site-specific cutting, known as star activity, depending on the conditions present in the reaction. Conditions that can induce star activity when using EcoRI include low salt concentration, high glycerol concentration, excessive amounts of enzyme present in the reaction, high pH and contamination with certain organic solvents.
See also
EcoRII, another nuclease enzyme from E. coli.
EcoRV, another nuclease enzyme from E. coli.
References
External links
EC 3.1.21
Bacterial enzymes
Restriction enzymes
Nucleases | EcoRI | Biology | 716 |
38,687,102 | https://en.wikipedia.org/wiki/Mechanical%20power%20%28medicine%29 | In medicine, mechanical power is a measure of the amount of energy imparted to a patient by a mechanical ventilator.
While in many cases mechanical ventilation is a life-saving or life-preserving intervention, it also has the potential to cause harm to the patient via ventilator-associated lung injury. A number of stresses may be induced by the ventilator on the patient's lung. These include barotrauma caused by pressure, volutrauma caused by distension of the lungs, rheotrauma caused by fast-flowing delivery of gases and atelectotrauma resulting from repeated collapse and re-opening of the lung.
The purpose of mechanical power is to provide a single quantity that can account for all of these stresses and therefore predict the amount of lung injury likely to be seen in the patient. There is, however, no single agreed-upon equation for calculating mechanical power.
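One published bedside approximation for volume-controlled ventilation is the simplified power equation of Gattinoni and colleagues (2016); the sketch below implements that single proposal with hypothetical example values, and should not be read as a consensus definition.

```python
# Minimal sketch of one published simplified formula (Gattinoni et al.,
# 2016) for mechanical power in volume-controlled ventilation. Other
# formulations exist; no single equation is universally agreed upon.
# All input values below are hypothetical examples.

def mechanical_power(rr: float, vt_l: float, p_peak: float,
                     p_plat: float, peep: float) -> float:
    """Mechanical power in J/min.

    rr     -- respiratory rate, breaths/min
    vt_l   -- tidal volume, litres
    p_peak -- peak airway pressure, cmH2O
    p_plat -- plateau pressure, cmH2O
    peep   -- positive end-expiratory pressure, cmH2O
    """
    delta_p_insp = p_plat - peep
    return 0.098 * rr * vt_l * (p_peak - delta_p_insp / 2)

print(round(mechanical_power(rr=15, vt_l=0.45, p_peak=25, p_plat=20, peep=5), 1))
# 11.6 J/min with these example settings
```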
References
Respiratory therapy
Pulmonology
Emergency medicine
Medical equipment
Intensive care medicine
Lung disorders
| Mechanical power (medicine) | Biology | 201 |
955,298 | https://en.wikipedia.org/wiki/Biopunk | Biopunk (a portmanteau of "biotechnology" or "biology" and "punk") is a subgenre of science fiction that focuses on biotechnology. It is derived from cyberpunk, but focuses on the implications of biotechnology rather than mechanical cyberware and information technology, and is concerned with synthetic biology. Biopunk stories involve bio-hackers, biotech megacorporations, and oppressive government agencies that manipulate human DNA. Most often keeping with the dark atmosphere of cyberpunk, biopunk generally examines the dark side of genetic engineering and depicts the potential perils of biotechnology.
Description
Biopunk is a subgenre of science fiction closely related to cyberpunk that focuses on the near-future (most often unintended) consequences of the biotechnology revolution following the invention of recombinant DNA. Biopunk stories explore the struggles of individuals or groups, often the product of human experimentation, against a typically dystopian backdrop of totalitarian governments and megacorporations which misuse biotechnologies as means of social control and profiteering. Unlike cyberpunk, it builds not on information technology, but on synthetic biology. As in postcyberpunk fiction, individuals are usually modified and enhanced not with cyberware, but by genetic manipulation. A common feature of biopunk fiction is the "black clinic", which is a laboratory, clinic, or hospital that performs illegal, unregulated, or ethically dubious biological modification and genetic engineering procedures. Many features of biopunk fiction have their roots in William Gibson's Neuromancer, one of the first cyberpunk novels.
One of the prominent writers in this field is Paul Di Filippo, though he called his collection of such stories ribofunk, a blend of "ribosome" and "funk". Di Filippo suggests that precursors of biopunk fiction include H. G. Wells' The Island of Doctor Moreau; Julian Huxley's The Tissue-Culture King; some of David H. Keller's stories; Damon Knight's Natural State and Other Stories; Frederik Pohl and Cyril M. Kornbluth's Gravy Planet; the novels of T. J. Bass and John Varley; Greg Bear's Blood Music; and Bruce Sterling's Schismatrix. The stories of Cordwainer Smith, including his first and most famous, Scanners Live in Vain, also foreshadow biopunk themes. Another example is the New Jedi Order series published from 1999 to 2003, which prominently features the Yuuzhan Vong, a species that exclusively uses biotechnology.
See also
List of biopunk works
Cyberpunk
Cyberpunk derivatives
Nanopunk
Dieselpunk
Steampunk
Solarpunk
Seapunk
Genetic engineering in fiction
Grinder (biohacking)
Human enhancement
Transhumanism
References
External links
Hackteria.org, a community for bio-artists
Biology and culture
Biocybernetics
Bioinformatics
Molecular genetics
Postmodernism
Science fiction genres
Synthetic biology
Systems biology
Subcultures
Transhumanism
1990s neologisms | Biopunk | Chemistry,Technology,Engineering,Biology | 643 |
69,995,164 | https://en.wikipedia.org/wiki/The%20Secret%20Guide%20to%20Computers | The Secret Guide to Computers is a book on computer hardware and software techniques by Russ Walter.
The book was written to be useful in both teaching and professional environments. Its goal is to describe everything necessary to become a "computer expert," covering philosophies, technicalities, hardware, software, theory, and practice.
Walter shares his telephone number for readers of the book to ask questions 24 hours a day.
Editions
As of 2022, there are 34 editions of the book.
Details:
The original edition, now called "edition zero," was written in 1972. It was 17 pages about how to write programs in BASIC. The 7th edition, written in 1976, was the first edition to actually use the title "The Secret Guide to Computers."
Some editions are multi-volume sets.
Table of contents
All editions are self-published by the author, Russell M. Walter (nicknamed "Russ"), but other publishers have reprinted their own versions. For example, the 11th edition, written in 1983, was a 2-volume set; Birkhäuser Boston published a reprint of volume 1 of the 11th edition, with a different cover and different advertising material than Russ's version. It was Birkhäuser Boston's first edition but a reprint of just part of Russ's 11th edition.
The 31st edition had an expanded title: "Secret Guide to Computers & Tricky Living." That's because it combined "The Secret Guide to Computers" with Russ's other book, "Tricky Living," to form a huge book, 703 pages. That expanded title was used on the 31st edition and all later editions (the 32nd, 33rd, and 34th).
The current edition, the 34th, was published in 2022. Its 703 pages include 42 chapters, organized into 7 mega-chapters: buying (use this book, how to shop, chips, disks, I/O devices, software, complete systems), Windows (Windows 10&11, Web, email, security, maintenance, repairs, command prompt), handhelds (pure Android, Samsung's Android, iPad), tricky living (health, daily survival, intellectuals, language, places, Donna's comments, arts, math, government, morals, sex), Microsoft Office (Word, Excel, PowerPoint), programming (Basic, Python, Web-page design, challenges, Visual Basic, Visual C#, exotic languages, assembler), and parting (computer past, your future, resources). Though most of the book was written by Russ, the "Donna's comments" chapter was written instead by his wife (Guang Chun Walter, nicknamed "Donna") and edited by him, so she's a coauthor.
Many earlier editions are still available from Russ, at reduced prices and including many topics omitted from the 34th edition, such as a dozen big 33rd-edition topics (Windows 7 & 8 & 8.1, Internet Explorer, Yahoo Mail, iPhone, Microsoft Publisher&Access, QB64, and Java&APL) and prostitution (most thoroughly in Tricky Living's first edition).
The book's website, SecretFun.com, includes links to free PDFs of the entire 33rd & 34th editions, plus many other topics, such as how to get printed books directly from Russ and how to phone him at 603-666-6644 for free help about computers and everything else in life.
References
Computer books
1984 non-fiction books
Birkhäuser books | The Secret Guide to Computers | Technology | 726 |
48,551,373 | https://en.wikipedia.org/wiki/Li%20Shanlan%20identity | In mathematics, in combinatorics, the Li Shanlan identity (also called Li Shanlan's summation formula) is a certain combinatorial identity attributed to the nineteenth century Chinese mathematician Li Shanlan. Since Li Shanlan is also known as Li Renshu (his courtesy name), this identity is also referred to as the Li Renshu identity. This identity appears in the third chapter of Duoji bilei (垛积比类 / 垛積比類, meaning summing finite series), a mathematical text authored by Li Shanlan and published in 1867 as part of his collected works. A Czech mathematician Josef Kaucky published an elementary proof of the identity along with a history of the identity in 1964. Kaucky attributed the identity to a certain Li Jen-Shu. From the account of the history of the identity, it has been ascertained that Li Jen-Shu is in fact Li Shanlan. Western scholars had been studying Chinese mathematics for its historical value; but the attribution of this identity to a nineteenth century Chinese mathematician sparked a rethink on the mathematical value of the writings of Chinese mathematicians.
The identity
The Li Shanlan identity states that

$$\sum_{k=0}^{p} \binom{p}{k}^{2} \binom{n+2p-k}{2p} = \binom{n+p}{p}^{2}.$$
Li Shanlan did not present the identity in this way. He presented it in the traditional Chinese algorithmic and rhetorical way.
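In modern notation the identity is easy to check numerically. The short Python sketch below, given purely as an illustration, verifies it exhaustively for small values of n and p using the standard-library binomial coefficient math.comb.

```python
from math import comb

def li_shanlan_holds(n, p):
    """Check the Li Shanlan identity for one pair (n, p)."""
    lhs = sum(comb(p, k) ** 2 * comb(n + 2 * p - k, 2 * p) for k in range(p + 1))
    rhs = comb(n + p, p) ** 2
    return lhs == rhs

# Exhaustive check over a small grid of parameters
assert all(li_shanlan_holds(n, p) for n in range(20) for p in range(10))
print("Li Shanlan identity verified for all n < 20, p < 10")
```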
Proofs of the identity
Li Shanlan had not given a proof of the identity in Duoji bilei. The first proof using differential equations and Legendre polynomials, concepts foreign to Li, was published by Pál Turán in 1936, and the proof appeared in Chinese in Yung Chang's paper published in 1939. Since then at least fifteen different proofs have been found. The following is one of the simplest proofs.
The proof begins by using Vandermonde's convolution, together with standard binomial-coefficient manipulations, to establish the auxiliary identity

$$\sum_{j \ge 0} \binom{p}{j} \binom{q}{j} \binom{n+j}{p+q} = \binom{n}{p} \binom{n}{q}.$$

Setting p = q and replacing j by k,

$$\sum_{k \ge 0} \binom{p}{k}^{2} \binom{n+k}{2p} = \binom{n}{p}^{2}.$$

Li's identity follows from this by replacing n by n + p and then rearranging terms via the substitution k → p − k, using the symmetry $\binom{p}{p-k} = \binom{p}{k}$:

$$\sum_{k=0}^{p} \binom{p}{k}^{2} \binom{n+2p-k}{2p} = \binom{n+p}{p}^{2}.$$
On Duoji bilei
The term duoji denotes a certain traditional Chinese method of computing sums of piles. Most of the mathematics that was developed in China since the sixteenth century is related to the duoji method. Li Shanlan was one of the greatest exponents of this method and Duoji bilei is an exposition of his work related to this method. Duoji bilei consists of four chapters: Chapter 1 deals with triangular piles, Chapter 2 with finite power series, Chapter 3 with triangular self-multiplying piles and Chapter 4 with modified triangular piles.
References
Combinatorics
Chinese mathematics
Chinese mathematical discoveries
Science and technology in China | Li Shanlan identity | Mathematics | 599 |
57,731,238 | https://en.wikipedia.org/wiki/Estrone%20benzoate | Estrone benzoate, or estrone 3-benzoate, is a synthetic estrogen and estrogen ester – specifically, the C3 benzoate ester of estrone – which was first reported in 1932 and was never marketed. It led to the development in 1933 of the more active estradiol benzoate, the first estradiol ester to be introduced for medical use.
See also
List of estrogen esters § Estrone esters
References
Abandoned drugs
Benzoate esters
Estrone esters
Ketones
Sex hormone esters and conjugates
Synthetic estrogens | Estrone benzoate | Chemistry | 125 |
2,325,729 | https://en.wikipedia.org/wiki/Gratification | Gratification is the pleasurable emotional reaction of happiness in response to a fulfillment of a desire or goal. It is also identified as a response stemming from the fulfillment of social needs such as affiliation, socializing, social approval, and mutual recognition.
Gratification, like all emotions, is a motivator of behavior and plays a role in the entire range of human social systems.
Causes
The emotion of gratification is the result of accomplishing a certain goal or achieving a reward. Gratification is an outcome of specific situations and is induced through, and as a consequence of, the completion of these situations. Specifically, gratification may be experienced after achieving a long-term goal, such as graduating from college, buying one's first house, or getting one's dream job.
Immediate and delayed gratification
The term immediate gratification is often used to label the satisfactions gained by more impulsive behaviors: choosing now over tomorrow. The skill of giving preference to long-term goals over more immediate ones is known as deferred gratification or patience, and it is usually considered a virtue, producing rewards in the long term. There are sources who claim that the prefrontal cortex plays a part in the incidence of these two types of gratification, particularly in the case of delayed gratification since one of its functions involve predicting future events.
Walter Mischel developed the well-known marshmallow experiment to test gratification patterns in four-year-olds, offering one marshmallow now or two after a delay. He discovered in long-term follow-up that the ability to resist eating the marshmallow immediately was a good predictor of success in later life. However, Tyler W. Watts, Greg J. Duncan, and Haonan Quan published "Revisiting the Marshmallow Test: A Conceptual Replication Investigating Links Between Early Delay of Gratification and Later Outcomes", which challenged the original marshmallow experiment, concluding that "This bivariate correlation was only half the size of those reported in the original studies and was reduced by two thirds in the presence of controls for family background, early cognitive ability, and the home environment. Most of the variation in adolescent achievement came from being able to wait at least 20 s. Associations between delay time and measures of behavioral outcomes at age 15 were much smaller and rarely statistically significant."
Criticism
While one might say that those who lack the skill to delay are immature, an excess of this skill can create problems as well; i.e. an individual becomes inflexible, or unable to take pleasure in life (anhedonia) and seize opportunities for fear of adverse consequences.
There are also circumstances, in an uncertain/negative environment, when seizing gratification is the rational approach, as in wartime.
Emotional gratification
Emotional gratification is a motivating force that results from the gratifying effects of emotions. The emotional reaction of emotional gratification is itself caused by emotions, resulting in a circular model of this complex interaction. Emotions themselves can instigate different varieties of gratification, ranging from hedonic outcomes to more psychologically beneficial outcomes.
Bipolar disorder
Gratification is a major issue in bipolar disorder. One sign of the onset of depression is a spreading loss of the sense of gratification in such immediate things as friendship, jokes, conversation, food and sex. Long-term gratification seems even more meaningless.
By contrast, the manic can find gratification in almost anything, even a leaf falling or seeing their crush. There is also the case of the so-called manic illusion of gratification, which is analogous to an infant's illusion of obtaining food. Here, if the food is not given right away, the infant fantasizes about it, and this eventually gives way to stronger emotions such as anger and depression.
See also
References
Further reading
- An academic paper treating gratification and self-control problems
Happiness
Motivation
Positive mental attitude
| Gratification | Biology | 864 |
1,311,015 | https://en.wikipedia.org/wiki/Gates%20of%20Heaven | Gates of Heaven is a 1978 American independent documentary film produced, directed, and edited by Errol Morris about the pet cemetery business. It was made when Morris was unknown and did much to launch his career.
Production
After a trip to Florida where he tried and failed to make a film about the residents of the town of Vernon, Errol Morris read a San Francisco Chronicle article with the headline: "450 Dead Pets Going to Napa Valley." This story about dead pets being exhumed from one pet cemetery and reburied in another became the basis for Gates of Heaven. For financing, Morris borrowed money from family and friends, and the film was shot throughout the spring and summer of 1977, with the total budget estimated at $125,000. Production was difficult at times, with Morris frequently clashing with his cinematographer over the film's visual style. Morris ultimately ended up firing three cinematographers before finally settling on Ned Burgess, with whom he would work again on his second film, Vernon, Florida. Morris had a falling out with his sound-woman when one of his subjects, Florence Rasmussen, said "Here today, gone tomorrow, right?" and the sound-woman replied "Wrong." Morris couldn't decide which had offended him more: that she had interrupted Rasmussen, or that she had told Rasmussen she was wrong.
Release
Gates of Heaven had its premiere at the 1978 New York Film Festival, and would play at various other festivals around the world before being picked up for a limited theatrical run by New Yorker Films in 1981.
Synopsis
The film, like Morris's other works, is unnarrated and the stories are told purely through interviews. It is divided into two main sections. The first concerns Floyd "Mac" McClure and his lifelong quest to allow pets to have a graceful burial. McClure's business associates and his competitor, a manager of a rendering plant, are interviewed. Morris reveals that McClure's business has failed. Dividing the two sections is an interview with Florence Rasmussen, an elderly woman whose home overlooked the cemetery. After this, Morris follows the 450 dead pets to the Bubbling Well Pet Memorial Park. This operation is run by John "Cal" Harberts and his two sons, Dan and Phil. This business is far more successful, and continues to operate today, run by Cal's son Dan Harberts. Throughout the film, the speakers touch on philosophical themes, as when McClure says "Death is for the living and not for the dead so much" or a grieving pet owner says "There's your dog, your dog's dead. But where's the thing that made it move? It had to be something, didn't it?"
Reception and legacy
Noted director Werner Herzog pledged that he would eat the shoe he was wearing if Morris's film on this improbable subject was completed and shown in a public theater. When the film was released, Herzog lived up to his wager and the consumption of his footwear was made into the short film Werner Herzog Eats His Shoe. At a seminar at the Telluride Film Festival, Herzog praised Gates of Heaven as "a very, very fine film, and it was made with no money, only guts." Morris recalls showing a rough cut of the movie to Wim Wenders, who called it a masterpiece. It also aired as an episode of P.O.V.
In an interview on the Criterion DVD, Morris recalls that he showed Gates of Heaven to Douglas Sirk at the Berlin Film Festival. Sirk warned Morris that "There's a danger that somebody might find this movie to be ironic." People are often unsure of the film's tone: is it sincere or satirical? Morris says he "loves the absurd" and that "to love the absurdity of people is not to ridicule them, it's to embrace, on some level, how desperate life is for each and every one of us, including me."
Gates of Heaven launched Morris's career and is now considered a classic. In 1991, film critic Roger Ebert named it one of the ten best films ever made in his list for the Sight & Sound poll. Ebert's television partner Gene Siskel shared his enthusiasm for the film. Ebert wrote that the film is an "underground legend," and in 1997 put it in his list of The Great Movies. Ebert wrote that Gates of Heaven "is surrounded by layer upon layer of comedy, pathos, irony, and human nature. I have seen this film perhaps 30 times, and am still not anywhere near the bottom of it: All I know is, it's about a lot more than pet cemeteries."
Home media
The film was initially released on DVD by MGM in 2005. In 2015 The Criterion Collection made it available as part of a new special edition DVD and Blu-Ray that also included Morris's second film Vernon, Florida.
References
External links
Gates of Heaven from ErrolMorris.com
Bubbling Well Pet Memorial Park.
Gates of Heaven and Vernon, Florida: Bullshitting a Bullshitter an essay by Eric Hynes at the Criterion Collection
POV trailer of Gates of Heaven on PBS
1978 films
American documentary films
Films directed by Errol Morris
Films produced by Errol Morris
Documentary films about death
Human–animal interaction
Films about pets
Animal cemeteries
1978 directorial debut films
1980s English-language films
1970s English-language films
1970s American films
1980s American films
American independent films
English-language documentary films | Gates of Heaven | Biology | 1,122 |
56,837,096 | https://en.wikipedia.org/wiki/Caenorhabditis%20elegans%20Cer1%20virus | Caenorhabditis elegans Cer1 virus is a species of retroviruses in the genus Metavirus.
References
External links
ICTVdB Index of Viruses
Descriptions of Plant Viruses
Metaviridae
RNA reverse-transcribing viruses
Caenorhabditis elegans | Caenorhabditis elegans Cer1 virus | Biology | 61 |
11,322,517 | https://en.wikipedia.org/wiki/Myriogenospora%20aciculispora | Myriogenospora aciculispora is a fungal plant pathogen. It has been reported to cause disease among sugarcane in Brazil.
References
Fungal plant pathogens and diseases
Clavicipitaceae
Fungi described in 1926
Fungus species | Myriogenospora aciculispora | Biology | 49 |
69,997,187 | https://en.wikipedia.org/wiki/Jinhua%20Ye | Jinhua Ye is a Chinese chemist who is a professor at the National Institute for Materials Science in Tsukuba. Her research considers high-temperature superconductors for photocatalysis. She was elected Fellow of the Royal Society of Chemistry in 2016 and has been included in the Clarivate Analytics Highly Cited Researcher every year since then.
Early life and education
Ye became interested in science fiction as a child. She was particularly interested in a story by Ye Yonglie that included a castle made from diamond. Ye learned that photocatalysis could split water into hydrogen and oxygen. She then became inspired by Jules Verne's The Mysterious Island: "I believe that water will one day be employed as fuel, that hydrogen and oxygen which constitute it, used singly or together, will furnish an inexhaustible source of heat and light, of an intensity of which coal is not capable." She studied chemistry at Zhejiang University. After completing her undergraduate degree, she moved to Japan, where she joined the University of Tokyo. After earning her doctorate in 1990, she joined Osaka University as a research associate.
Research and career
In 1991, Ye joined the National Institute for Materials Science. She was made Director of Photocatalytic Materials Center in 2006 and Director of Environmental Remediation Materials in 2011.
Ye has dedicated her career to the realization of artificial photosynthesis. She is particularly interested in the development of materials that harvest the most sunlight. Ye has studied the reaction mechanisms, and, in an effort to overcome harsh reaction kinetics, has worked on the careful construction of interfaces. In particular, Ye has developed nano-structured surfaces that enhance reactivities, and, using localized surface plasmon resonance, broaden the spectral range of her photocatalytic materials.
Ye was elected Fellow of the Royal Society of Chemistry in 2016. In 2022, she was included by the American Chemical Society Energy Letters in their list of the world's leading women scientists in energy research.
Selected publications
References
Chinese women chemists
Living people
Year of birth missing (living people)
University of Tokyo alumni
Zhejiang University alumni
Fellows of the Royal Society of Chemistry
Photochemists
Osaka University | Jinhua Ye | Chemistry | 440 |
17,387,312 | https://en.wikipedia.org/wiki/Cation-anion%20radius%20ratio | In condensed matter physics and inorganic chemistry, the cation-anion radius ratio can be used to predict the crystal structure of an ionic compound based on the relative size of its atoms. It is defined as the ratio of the ionic radius of the positively charged cation to the ionic radius of the negatively charged anion in a cation-anion compound. Anions are larger than cations. Large sized anions occupy lattice sites, while small sized cations are found in voids.
In a given structure, the ratio of cation radius to anion radius is called the radius ratio. This is simply given by r+ / r−, where r+ is the ionic radius of the cation and r− is the ionic radius of the anion.
Ratio rule and stability
The radius ratio rule defines a critical radius ratio for different crystal structures, based on their coordination geometry. The idea is that the anions and cations can be treated as incompressible spheres, meaning the crystal structure can be seen as a kind of unequal sphere packing. The allowed size of the cation for a given structure is determined by the critical radius ratio. If the cation is too small, it will pull the anions into contact with one another, and the compound will be unstable due to anion-anion repulsion; this occurs when the radius ratio drops below the critical radius ratio for that particular structure. At the stability limit the cation touches all the anions, and the anions just touch at their edges. For radius ratios greater than the critical radius ratio, the structure is expected to be stable.
The rule is not obeyed for all compounds. By one estimate, the crystal structure can only be guessed about 2/3 of the time. Errors in prediction are partly due to the fact that real chemical compounds are not purely ionic, they display some covalent character.
The relation between the critical radius ratio, r+ / r−, and the coordination number may be obtained from a simple geometrical proof: critical ratios of 0.155, 0.225, 0.414, 0.732, and 1.000 correspond to coordination numbers of 3 (trigonal planar), 4 (tetrahedral), 6 (octahedral), 8 (cubic), and 12 (close-packed), respectively.
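These thresholds amount to a simple lookup, sketched below in Python. The critical ratios are the standard textbook values given above; the function name is invented for this example, and the usage line applies typical Shannon ionic radii (in picometres) for Na+ and Cl−.

```python
# (critical radius ratio, coordination number, geometry), in increasing order
CRITICAL_RATIOS = [
    (0.155, 3, "trigonal planar"),
    (0.225, 4, "tetrahedral"),
    (0.414, 6, "octahedral"),
    (0.732, 8, "cubic"),
    (1.000, 12, "close-packed"),
]

def predict_coordination(r_cation, r_anion):
    """Apply the radius ratio rule: return (ratio, coordination number, geometry)
    for the highest critical ratio the pair satisfies, or None below 0.155."""
    ratio = r_cation / r_anion
    best = None
    for critical, cn, geometry in CRITICAL_RATIOS:
        if ratio >= critical:
            best = (round(ratio, 3), cn, geometry)
    return best

# NaCl: r(Na+) ~ 102 pm, r(Cl-) ~ 181 pm
print(predict_coordination(102, 181))  # (0.564, 6, 'octahedral') -- the rock salt structure
```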
History
The radius ratio rule was first proposed by Gustav F. Hüttig in 1920. In 1926, Victor Goldschmidt extended the use to ionic lattices. In 1929, the rule was incorporated as the first of Pauling's rules for crystal structures.
See also
Goldschmidt tolerance factor
Pauling's rules
Cubic crystal system
Sphere packing
References
Crystallography
Inorganic chemistry
Ratios
Atomic radius | Cation-anion radius ratio | Physics,Chemistry,Materials_science,Mathematics,Engineering | 468 |
17,793,469 | https://en.wikipedia.org/wiki/Calcium%20pump | Calcium pumps are a family of ion transporters found in the cell membrane of all animal cells. They are responsible for the active transport of calcium out of the cell for the maintenance of the steep Ca2+ electrochemical gradient across the cell membrane. Calcium pumps play a crucial role in proper cell signalling by keeping the intracellular calcium concentration roughly 10,000 times lower than the extracellular concentration. Failure to do so is one cause of muscle cramps.
The plasma membrane Ca2+ ATPase and the sodium-calcium exchanger are together the main regulators of cytoplasmic Ca2+ concentrations.
Biological role
Ca2+ has many important roles as an intracellular messenger. The release of a large amount of free Ca2+ can trigger a fertilized egg to develop, skeletal muscle cells to contract, secretion by secretory cells and interactions with Ca2+ -responsive proteins like calmodulin. To maintain low concentrations of free Ca2+ in the cytosol, cells use membrane pumps like calcium ATPase found in the membranes of sarcoplasmic reticulum of skeletal muscle. These pumps are needed to provide the steep electrochemical gradient that allows Ca2+ to rush into the cytosol when a stimulus signal opens the Ca2+ channels in the membrane. The pumps are also necessary to actively pump the Ca2+ back out of the cytoplasm and return the cell to its pre-signal state.
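The steepness of this gradient can be made concrete with the Nernst equation, E = (RT/zF)·ln([Ca2+]out/[Ca2+]in). The Python sketch below computes the equilibrium potential implied by roughly a 10,000-fold concentration difference; the concentration figures are typical textbook values for mammalian cells, assumed here purely for illustration.

```python
from math import log

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

def nernst_potential(c_out, c_in, z=2, temp_k=310.0):
    """Equilibrium (Nernst) potential in volts for an ion of charge z at temp_k."""
    return (R * temp_k) / (z * F) * log(c_out / c_in)

# Roughly 1.2 mM extracellular vs 100 nM cytosolic free Ca2+ (illustrative values)
e_ca = nernst_potential(1.2e-3, 100e-9)
print(f"E_Ca = {e_ca * 1000:.0f} mV")  # ~ +125 mV at 37 degrees C
```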
Crystallography of calcium pumps
The structure of the calcium pump found in the sarcoplasmic reticulum of skeletal muscle was elucidated in 2000 by Toyoshima et al. using microscopy of tubular crystals and 3D microcrystals. The pump has a molecular mass of 110,000 amu and shows three well-separated cytoplasmic domains, with a transmembrane domain consisting of ten alpha helices and two transmembrane Ca2+ binding sites.
Mechanism
The mechanism follows the classical theory of active transport for P-type ATPases; data from the crystallography studies by Chikashi Toyoshima described above have been applied to this transport cycle.
References
Cell biology
Transport proteins
| Calcium pump | Biology | 438 |
20,821 | https://en.wikipedia.org/wiki/Miller%E2%80%93Urey%20experiment | The Miller–Urey experiment, or Miller experiment, was an experiment in chemical synthesis carried out in 1952 that simulated the conditions thought at the time to be present in the atmosphere of the early, prebiotic Earth. It is seen as one of the first successful experiments demonstrating the synthesis of organic compounds from inorganic constituents in an origin of life scenario. The experiment used methane (CH4), ammonia (NH3), hydrogen (H2), in ratio 2:2:1, and water (H2O). Applying an electric arc (simulating lightning) resulted in the production of amino acids.
It is regarded as a groundbreaking experiment, and the classic experiment investigating the origin of life (abiogenesis). It was performed in 1952 by Stanley Miller, supervised by Nobel laureate Harold Urey at the University of Chicago, and published the following year. At the time, it supported Alexander Oparin's and J. B. S. Haldane's hypothesis that the conditions on the primitive Earth favored chemical reactions that synthesized complex organic compounds from simpler inorganic precursors.
After Miller's death in 2007, scientists examining sealed vials preserved from the original experiments were able to show that more amino acids were produced in the original experiment than Miller was able to report with paper chromatography. While evidence suggests that Earth's prebiotic atmosphere might have typically had a composition different from the gas used in the Miller experiment, prebiotic experiments continue to produce racemic mixtures of simple-to-complex organic compounds, including amino acids, under varying conditions. Moreover, researchers have shown that transient, hydrogen-rich atmospheres – conducive to Miller-Urey synthesis – would have occurred after large asteroid impacts on early Earth.
History
Foundations of organic synthesis and the origin of life
Until the 19th century, there was considerable acceptance of the theory of spontaneous generation, the idea that "lower" animals, such as insects or rodents, arose from decaying matter. However, several experiments in the 19th century – particularly Louis Pasteur's swan neck flask experiment in 1859 – disproved the theory that life arose from decaying matter. Charles Darwin published On the Origin of Species that same year, describing the mechanism of biological evolution. While Darwin never publicly wrote about the first organism in his theory of evolution, in a letter to Joseph Dalton Hooker, he speculated: "But if (and oh what a big if) we could conceive in some warm little pond with all sorts of ammonia and phosphoric salts, light, heat, electricity etcetera present, that a protein compound was chemically formed, ready to undergo still more complex changes [...]"
At this point, it was known that organic molecules could be formed from inorganic starting materials, as Friedrich Wöhler had described Wöhler synthesis of urea from ammonium cyanate in 1828. Several other early seminal works in the field of organic synthesis followed, including Alexander Butlerov's synthesis of sugars from formaldehyde and Adolph Strecker's synthesis of the amino acid alanine from acetaldehyde, ammonia, and hydrogen cyanide. In 1913, Walther Löb synthesized amino acids by exposing formamide to silent electric discharge, so scientists were beginning to produce the building blocks of life from simpler molecules, but these were not intended to simulate any prebiotic scheme or even considered relevant to origin of life questions.
But the scientific literature of the early 20th century contained speculations on the origin of life. In 1903, physicist Svante Arrhenius hypothesized that the first microscopic forms of life, driven by the radiation pressure of stars, could have arrived on Earth from space in the panspermia hypothesis. In the 1920s, Leonard Troland wrote about a primordial enzyme that could have formed by chance in the primitive ocean and catalyzed reactions, and Hermann J. Muller suggested that the formation of a gene with catalytic and autoreplicative properties could have set evolution in motion. Around the same time, Alexander Oparin's and J. B. S. Haldane's "Primordial soup" ideas were emerging, which hypothesized that a chemically-reducing atmosphere on early Earth would have been conducive to organic synthesis in the presence of sunlight or lightning, gradually concentrating the ocean with random organic molecules until life emerged. In this way, frameworks for the origin of life were coming together, but at the mid-20th century, hypotheses lacked direct experimental evidence.
Stanley Miller and Harold Urey
At the time of the Miller–Urey experiment, Harold Urey was a Professor of Chemistry at the University of Chicago who had a well-renowned career, including receiving the Nobel Prize in Chemistry in 1934 for his isolation of deuterium and leading efforts to use gaseous diffusion for uranium isotope enrichment in support of the Manhattan Project. In 1952, Urey postulated that the high temperatures and energies associated with large impacts in Earth's early history would have provided an atmosphere of methane (CH4), water (H2O), ammonia (NH3), and hydrogen (H2), creating the reducing environment necessary for the Oparin-Haldane "primordial soup" scenario.
Stanley Miller arrived at the University of Chicago in 1951 to pursue a PhD under nuclear physicist Edward Teller, another prominent figure in the Manhattan Project. Miller began to work on how different chemical elements were formed in the early universe, but, after a year of minimal progress, Teller was to leave for California to establish Lawrence Livermore National Laboratory and further nuclear weapons research. Miller, having seen Urey lecture on his 1952 paper, approached him about the possibility of a prebiotic synthesis experiment. While Urey initially discouraged Miller, he agreed to allow Miller to try for a year. By February 1953, Miller had mailed a manuscript as sole author reporting the results of his experiment to Science. Urey refused to be listed on the manuscript because he believed his status would cause others to underappreciate Miller's role in designing and conducting the experiment and so encouraged Miller to take full credit for the work. Despite this, the experimental set-up is still most commonly referred to by both their names. After not hearing from Science for a few weeks, a furious Urey wrote to the editorial board demanding an answer, stating, "If Science does not wish to publish this promptly we will send it to the Journal of the American Chemical Society." Miller's manuscript was eventually published in Science in May 1953.
Experiment
In the original 1952 experiment, methane (CH4), ammonia (NH3), and hydrogen (H2) were all sealed together in a 2:2:1 ratio (1 part H2) inside a sterile 5-L glass flask connected to a 500-mL flask half-full of water (H2O). The gas chamber was intended to represent Earth's prebiotic atmosphere, while the water simulated an ocean. The water in the smaller flask was boiled such that water vapor entered the gas chamber and mixed with the "atmosphere". A continuous electrical spark was discharged between a pair of electrodes in the larger flask. The spark passed through the mixture of gases and water vapor, simulating lightning. A condenser below the gas chamber allowed aqueous solution to accumulate into a U-shaped trap at the bottom of the apparatus, which was sampled.
After a day, the solution that had collected at the trap was pink, and after a week of continuous operation the solution was deep red and turbid, which Miller attributed to organic matter adsorbed onto colloidal silica. The boiling flask was then removed, and mercuric chloride (a poison) was added to prevent microbial contamination. The reaction was stopped by adding barium hydroxide and sulfuric acid, and evaporated to remove impurities. Using paper chromatography, Miller identified five amino acids present in the solution: glycine, α-alanine and β-alanine were positively identified, while aspartic acid and α-aminobutyric acid (AABA) were less certain, due to the spots being faint.
Materials and samples from the original experiments remained in 2017 under the care of Miller's former student Jeffrey Bada, a professor at the UCSD Scripps Institution of Oceanography who also conducts origin of life research. The apparatus used to conduct the experiment was later put on display at the Denver Museum of Nature and Science.
Chemistry of experiment
In 1957 Miller published research describing the chemical processes occurring inside his experiment. Hydrogen cyanide (HCN) and aldehydes (e.g., formaldehyde) were demonstrated to form as intermediates early on in the experiment due to the electric discharge. This agrees with current understanding of atmospheric chemistry, as HCN can generally be produced from reactive radical species in the atmosphere that arise when CH4 and nitrogen break apart under ultraviolet (UV) light. Similarly, aldehydes can be generated in the atmosphere from radicals resulting from CH4 and H2O decomposition and other intermediates like methanol. Several energy sources in planetary atmospheres can induce these dissociation reactions and subsequent hydrogen cyanide or aldehyde formation, including lightning, ultraviolet light, and galactic cosmic rays.
For example, here is a set of photochemical reactions of species in the Miller-Urey atmosphere that can result in formaldehyde:
H2O + hv → H + OH
CH4 + OH → CH3 + H2O
CH3 + OH → CH3OH
CH3OH + hv → CH2O (formaldehyde) + H2
A photochemical path to HCN from NH3 and CH4 is:
NH3 + hv → NH2 + H
NH2 + CH4 → NH3 + CH3
NH2 + CH3 → CH5N
CH5N + hv → HCN + 2H2
Other active intermediate compounds (acetylene, cyanoacetylene, etc.) have been detected in the aqueous solution of Miller–Urey-type experiments, but the immediate HCN and aldehyde production, the production of amino acids accompanying the plateau in HCN and aldehyde concentrations, and slowing of amino acid production rate during HCN and aldehyde depletion provided strong evidence that Strecker amino acid synthesis was occurring in the aqueous solution.
Strecker synthesis describes the reaction of an aldehyde, ammonia, and HCN to a simple amino acid through an aminoacetonitrile intermediate:
CH2O + HCN + NH3 → NH2-CH2-CN (aminoacetonitrile) + H2O
NH2-CH2-CN + 2H2O → NH3 + NH2-CH2-COOH (glycine)
Furthermore, water and formaldehyde can react via Butlerov's reaction to produce various sugars like ribose.
The experiments showed that simple organic compounds, including the building blocks of proteins and other macromolecules, can abiotically be formed from gases with the addition of energy.
Related experiments and follow-up work
Contemporary experiments
There were a few similar spark discharge experiments contemporaneous with Miller-Urey. An article in The New York Times (March 8, 1953) titled "Looking Back Two Billion Years" describes the work of Wollman M. MacNevin at Ohio State University, before the Miller Science paper was published in May 1953. MacNevin was passing 100,000V sparks through methane and water vapor and produced "resinous solids" that were "too complex for analysis." Furthermore, K. A. Wilde submitted a manuscript to Science on December 15, 1952, before Miller submitted his paper to the same journal in February 1953. Wilde's work, published on July 10, 1953, used voltages up to only 600V on a binary mixture of carbon dioxide (CO2) and water in a flow system and did not note any significant reduction products. According to some, the reports of these experiments explain why Urey was rushing Miller's manuscript through Science and threatening to submit to the Journal of the American Chemical Society.
By introducing an experimental framework to test prebiotic chemistry, the Miller–Urey experiment paved the way for future origin of life research. In 1961, Joan Oró produced milligrams of the nucleobase adenine from a concentrated solution of HCN and NH3 in water. Oró found that several amino acids were also formed from HCN and ammonia under those conditions. Experiments conducted later showed that the other RNA and DNA nucleobases could be obtained through simulated prebiotic chemistry with a reducing atmosphere. Other researchers also began using UV-photolysis in prebiotic schemes, as the UV flux would have been much higher on early Earth. For example, UV-photolysis of water vapor with carbon monoxide was found to yield various alcohols, aldehydes, and organic acids. In the 1970s, Carl Sagan used Miller-Urey-type reactions to synthesize and experiment with complex organic particles dubbed "tholins", which likely resemble particles formed in hazy atmospheres like that of Titan.
Modified Miller–Urey experiments
Much work has been done since the 1950s toward understanding how Miller-Urey chemistry behaves in various environmental settings. In 1983, testing different atmospheric compositions, Miller and another researcher repeated experiments with varying proportions of H2, H2O, N2, CO2 or CH4, and sometimes NH3. They found that the presence or absence of NH3 in the mixture did not significantly impact amino acid yield, as NH3 was generated from N2 during the spark discharge. Additionally, CH4 proved to be one of the most important atmospheric ingredients for high yields, likely due to its role in HCN formation. Much lower yields were obtained with more oxidized carbon species in place of CH4, but similar yields could be reached with a high H2/CO2 ratio. Thus, Miller-Urey reactions work in atmospheres of other compositions as well, depending on the ratio of reducing and oxidizing gases. More recently, Jeffrey Bada and H. James Cleaves, graduate students of Miller, hypothesized that the production of nitrites, which destroy amino acids, in CO2 and N2-rich atmospheres may explain low amino acids yields. In a Miller-Urey setup with a less-reducing (CO2 + N2 + H2O) atmosphere, when they added calcium carbonate to buffer the aqueous solution and ascorbic acid to inhibit oxidation, yields of amino acids greatly increased, demonstrating that amino acids can still be formed in more neutral atmospheres under the right geochemical conditions. In a prebiotic context, they argued that seawater would likely still be buffered and ferrous iron could inhibit oxidation.
In 1999, after Miller suffered a stroke, he donated the contents of his laboratory to Bada. In an old cardboard box, Bada discovered unanalyzed samples from modified experiments that Miller had conducted in the 1950s. In a "volcanic" apparatus, Miller had amended an aspirating nozzle to shoot a jet of steam into the reaction chamber. Using high-performance liquid chromatography and mass spectrometry, Bada's lab analyzed old samples from a set of experiments Miller conducted with this apparatus and found some higher yields and a more diverse suite of amino acids. Bada speculated that injecting the steam into the spark could have split water into H and OH radicals, leading to more hydroxylated amino acids during Strecker synthesis. In a separate set of experiments, Miller added hydrogen sulfide (H2S) to the reducing atmosphere, and Bada's analyses of the products suggested order-of-magnitude higher yields, including some amino acids with sulfur moieties.
A 2021 work highlighted the importance of the high-energy free electrons present in the experiment. It is these electrons that produce ions and radicals, and represent an aspect of the experiment that needs to be better understood.
After comparing Miller–Urey experiments conducted in borosilicate glassware with those conducted in Teflon apparatuses, a 2021 paper suggests that the glass reaction vessel acts as a mineral catalyst, implicating silicate rocks as important surfaces in prebiotic Miller-Urey reactions.
Early Earth's prebiotic atmosphere
While there is a lack of geochemical observations to constrain the exact composition of the prebiotic atmosphere, recent models point to an early "weakly reducing" atmosphere; that is, early Earth's atmosphere was likely dominated by CO2 and N2 and not CH4 and NH3 as used in the original Miller–Urey experiment. This is explained, in part, by the chemical composition of volcanic outgassing. Geologist William Rubey was one of the first to compile data on gases emitted from modern volcanoes and concluded that they are rich in CO2, H2O, and likely N2, with varying amounts of H2, sulfur dioxide (SO2), and H2S. Therefore, if the redox state of Earth's mantle — which dictates the composition of outgassing – has been constant since formation, then the atmosphere of early Earth was likely weakly reducing, but there are some arguments for a more-reducing atmosphere for the first few hundred million years.
While the prebiotic atmosphere could have had a different redox condition than that of the Miller–Urey atmosphere, the modified Miller–Urey experiments described in the above section demonstrated that amino acids can still be abiotically produced in less-reducing atmospheres under specific geochemical conditions. Furthermore, harkening back to Urey's original hypothesis of a "post-impact" reducing atmosphere, a recent atmospheric modeling study has shown that an iron-rich impactor with a minimum mass around 4×10^20 – 5×10^21 kg would be enough to transiently reduce the entire prebiotic atmosphere, resulting in a Miller-Urey-esque H2-, CH4-, and NH3-dominated atmosphere that persists for millions of years. Previous work has estimated from the lunar cratering record and composition of Earth's mantle that between four and seven such impactors reached the Hadean Earth.
A large factor controlling the redox budget of early Earth's atmosphere is the rate of atmospheric escape of H2 after Earth's formation. Atmospheric escape – common to young, rocky planets — occurs when gases in the atmosphere have sufficient kinetic energy to overcome gravitational energy. It is generally accepted that the timescale of hydrogen escape is short enough such that H2 made up < 1% of the atmosphere of prebiotic Earth, but, in 2005, a hydrodynamic model of hydrogen escape predicted escape rates two orders of magnitude lower than previously thought, maintaining a hydrogen mixing ratio of 30%. A hydrogen-rich prebiotic atmosphere would have large implications for Miller-Urey synthesis in the Hadean and Archean, but later work suggests solutions in that model might have violated conservation of mass and energy. That said, during hydrodynamic escape, lighter molecules like hydrogen can "drag" heavier molecules with them through collisions, and recent modeling of xenon escape has pointed to a hydrogen atmospheric mixing ratio of at least 1% or higher at times during the Archean.
Taken together, the view that early Earth's atmosphere was weakly reducing, with transient instances of highly-reducing compositions following large impacts is generally supported.
Extraterrestrial sources of amino acids
Conditions similar to those of the Miller–Urey experiments are present in other regions of the Solar System, often substituting ultraviolet light for lightning as the energy source for chemical reactions. The Murchison meteorite that fell near Murchison, Victoria, Australia in 1969 was found to contain an amino acid distribution remarkably similar to Miller-Urey discharge products. Analysis of the organic fraction of the Murchison meteorite with Fourier-transform ion cyclotron resonance mass spectrometry detected over 10,000 unique compounds, albeit at very low (ppb–ppm) concentrations. In this way, the organic composition of the Murchison meteorite is seen as evidence of Miller-Urey synthesis outside Earth.
Comets and other icy outer-solar-system bodies are thought to contain large amounts of complex carbon compounds (such as tholins) formed by processes akin to Miller-Urey setups, darkening surfaces of these bodies. Some argue that comets bombarding the early Earth could have provided a large supply of complex organic molecules along with the water and other volatiles, however very low concentrations of biologically-relevant material combined with uncertainty surrounding the survival of organic matter upon impact make this difficult to determine.
Relevance to the origin of life
The Miller–Urey experiment was proof that the building blocks of life could be synthesized abiotically from gases, and introduced a new prebiotic chemistry framework through which to study the origin of life. Simulations of protein sequences present in the last universal common ancestor (LUCA), or the last shared ancestor of all extant species today, show an enrichment in simple amino acids that were available in the prebiotic environment according to Miller-Urey chemistry. This suggests that the genetic code from which all life evolved was rooted in a smaller suite of amino acids than those used today. Thus, while creationist arguments focus on the fact that Miller–Urey experiments have not generated all 22 genetically-encoded amino acids, this does not actually conflict with the evolutionary perspective on the origin of life.
Another common criticism is that the racemic (containing both L and D enantiomers) mixture of amino acids produced in a Miller–Urey experiment is not exemplary of abiogenesis theories, as life on Earth today uses almost exclusively L-amino acids. While it is true that Miller-Urey setups produce racemic mixtures, the origin of homochirality is a separate area in origin of life research.
Recent work demonstrates that magnetic mineral surfaces like magnetite can be templates for the enantioselective crystallization of chiral molecules, including RNA precursors, due to the chiral-induced spin selectivity (CISS) effect. Once an enantioselective bias is introduced, homochirality can then propagate through biological systems in various ways. In this way, enantioselective synthesis is not required of Miller-Urey reactions if other geochemical processes in the environment are introducing homochirality.
Finally, Miller-Urey and similar experiments primarily deal with the synthesis of monomers; polymerization of these building blocks to form peptides and other more complex structures is the next step of prebiotic chemistry schemes. Polymerization requires condensation reactions, which are thermodynamically unfavored in aqueous solutions because they expel water molecules. Scientists as far back as John Desmond Bernal in the late 1940s thus speculated that clay surfaces would play a large role in abiogenesis, as they might concentrate monomers. Several such models for mineral-mediated polymerization have emerged, such as the interlayers of layered double hydroxides like green rust over wet-dry cycles. Some scenarios for peptide formation have been proposed that are even compatible with aqueous solutions, such as the hydrophobic air-water interface and a novel "sulfide-mediated α-aminonitrile ligation" scheme, where amino acid precursors come together to form peptides. Polymerization of life's building blocks is an active area of research in prebiotic chemistry.
Amino acids identified
Below is a table of amino acids produced and identified in the "classic" 1952 experiment, as analyzed by Miller in 1952 and more recently by Bada and collaborators with modern mass spectrometry, the 2008 re-analysis of vials from the volcanic spark discharge experiment, and the 2010 re-analysis of vials from the H2S-rich spark discharge experiment. While not all proteinogenic amino acids have been produced in spark discharge experiments, it is generally accepted that early life used a simpler set of prebiotically-available amino acids.
References
External links
A simulation of the Miller–Urey Experiment along with a video Interview with Stanley Miller by Scott Ellis from CalSpace (UCSD)
Origin-Of-Life Chemistry Revisited: Reanalysis of famous spark-discharge experiments reveals a richer collection of amino acids were formed.
Miller–Urey experiment explained
Miller experiment with Lego bricks
"Stanley Miller's Experiment: Sparking the Building Blocks of Life" on PBS
The Miller-Urey experiment website
Details of 2008 re-analysis
Articles containing video clips
Biology experiments
Chemical synthesis of amino acids
Chemistry experiments
Origin of life
1952 in biology
1953 in biology
2008 in science | Miller–Urey experiment | Chemistry,Biology | 5,085 |
22,722,870 | https://en.wikipedia.org/wiki/Emotions%20in%20the%20workplace | Emotions in the workplace play a large role in how an entire organization communicates within itself and to the outside world. "Events at work have real emotional impact on participants. The consequences of emotional states in the workplace, both behaviors and attitudes, have substantial significance for individuals, groups, and society". "Positive emotions in the workplace help employees obtain favorable outcomes including achievement, job enrichment and higher quality social context". "Negative emotions, such as fear, anger, stress, hostility, sadness, and guilt, however increase the predictability of workplace deviance,", and how the outside world views the organization.
"Emotions normally are associated with specific events or occurrences and are intense enough to disrupt thought processes.". Moods on the other hand, are more "generalized feelings or states that are not typically identified with a particular stimulus and not sufficiently intense to interrupt ongoing thought processes". There can be many consequences for allowing negative emotions to affect your general attitude or mood at work. "Emotions and emotion management are a prominent feature of organizational life. It is crucial "to create a publicly observable and desirable emotional display as a part of a job role."
"The starting point for modern research on emotion in organizations seems to have been sociologist Arlie Russell Hochschild's (1983) seminal book on emotional labor: The Managed Heart". Ever since then the study of emotions in the workplace has been seen as a near science, with seminars being held on it and books being writing about it every year to help us understand the role it plays, especially via the Emonet website and Listserv, founded by Neal M. Ashkanasy in 1997.
Positive
Positive emotions at work such as high achievement and excitement have "desirable effect independent of a person's relationships with others, including greater task activity, persistence, and enhanced cognitive function." "Strong positive emotions of emotionally intelligent people [include] optimism, positive mood, self-efficacy, and emotional resilience to persevere under adverse circumstances. ". "Optimism rests on the premise that failure is not inherent in the individual; it may be attributed to circumstances that may be changed with a refocusing of effort." Those who express positive emotions in the workplace are better equipped to influence their coworkers favorably. "They are also more likable, and a halo effect may occur when warm or satisfied employees are rated favorably on other desirable attributes." It is likely that these people will inspire cooperation in others to carry out a task. It is said that "employees experience fewer positive emotions when interacting with their supervisors as compared with interactions with coworkers and customers." Specific workers such as "service providers are expected to react to aggressive behaviors directed toward them with non-aggressive and even courteous behavior…also to engage in what has been termed emotional labor by demonstrating polite and pleasant manners regardless of the customer's behavior."
Being aware of whether or not you are showing positive emotions will cause ripple effects in the workplace. A manager or co-worker who displays positive emotions consistently is more likely to motivate those around him/her and have more opportunities within the company. Being able to bring out positive emotions and aware of how to do this can be an incredibly useful tool in the workplace. "Positive mood also elicits more exploration and enjoyment of new ideas and can enhance creativity" (Isen, 2000). A manager who is able to reward and speak to his employees in a way that brings out their positive emotions will be much more successful than one who lacks these skills.
Emotional labor/ emotional work
"As the nature of the U.S. and global economies is increasingly transforming from manufacturing to service, organizational participants are coping with new challenges, and those challenges often involve complex processes of emotion in the workplace. The initial shift in the economy involved a move to customer service (including industries such as retailing, restaurants and the travel industry), leading to scholarly consideration of the way emotional communication is used in the service of customers and in the advancement of organizational goals. This type of work has come to be labeled as emotional labor...the emotions and displays in emotional labor are largely inauthentic and are seen by management as a commodity that can be controlled, trained and set down in employee handbooks." "This relates to the induction or suppression of feeling in order to sustain an outward appearance that produces a sense in others of being cared for in a convivial safe place.". Emotional labor refers to effort to show emotions that may not be genuinely felt but must be displayed in order to "express organizationally desired emotion during inter-personal transaction." "Commercialization of emotional labor and the trends towards the homogenization of industrial and service-sector labor processes have, in turn, been shaped by the adoption of new management practices designed to promote feeling rules and personal patterns of behavior that enhance the institutions or enterprises performance or competitive edge". In order to define the image that they want their organizations to portray, leaders use a "core component of "emotional intelligence" to recognize emotions.". that appear desirable. Organizations have begun using their employee's "emotion as a commodity used for the sake of profit".
Emotional labor inhibits workers from being able to participate in authentic emotional work. Emotional work is described as "emotion that is authentic, not emotion that is manufactured through surface acting…rarely seen as a profit center for management". "The person whose feelings are easily aroused (but not necessarily easily controlled) is going to have far more difficulty in dealing with emotionally stressful situations. In contrast, empathetic concern is hypothesized to have positive effects on responsiveness in international and on outcomes for the worker. A worker with empathetic concern will have feelings for the client but will be able to deal more effectively with the client's problems because there is not a direct sharing of the client's emotions". "Although emotional labor may be helpful to the organizational bottom line, there has been recent work suggesting that managing emotions for pay may be detrimental to the employee". Emotional labor and emotional work both have negative aspects to them including the feelings of stress, frustration or exhaustion that all lead to burnout. "Burnout is related to serious negative consequences such as deterioration in the quality of service, job turnover, absenteeism and low morale…[It] seems to be correlated with various self report indices of personal distress, including physical exhaustion, insomnia, increased use of alcohol and drugs and marital and family problems". Ironically, innovations that increase employee empowerment — such as conversion into worker cooperatives, co-managing schemes, or flattened workplace structures — have been found to increase workers’ levels of emotional labor as they take on more workplace responsibilities.
Negative
Negative emotions at work can arise from "work overload, lack of rewards, and social relations which appear to be the most stressful work-related factors". "Cynicism is a negative affective reaction to the organization. Cynics feel contempt, distress, shame, and even disgust when they reflect upon their organizations" (Abraham, 1999). Negative emotions are caused by "a range of workplace issues, including aggression, verbal abuse, sexual harassment, computer flaming, blogging, assertiveness training, grapevines, and non verbal behavior". Negative workplace stress can be caused by "poor leadership, lack of guidance, lack of support and backup", and it can cause employee performance to decrease, in turn lowering the performance of the organization. "Employees' lack of confidence in their abilities to deal with work demands… and their lack of confidence in coworkers… can also create prolonged negative stress". Because showing stress can be read as weakness, employees often suppress their negative emotions at work and at home. "People who continually inhibit their emotions have been found to be more prone to disease than those who are emotionally expressive". Negative emotions can act like a disease in the workplace: those who exhibit them negatively affect those around them and can change the entire environment. A co-worker might demotivate those around them; a manager might cause his employees to feel contempt. Recognizing negative emotions and learning how to handle them can be a tool for personal success as well as the success of one's team. Managing your emotions in a way that does not show negativity will cause you to be seen more favorably in the workplace and can help with your personal productivity and development.
Consequences
Psychological and emotional: "Individuals experiencing job insecurity have an increased risk for anxiety, depression, substance abuse, and somatic complaints".
Marital and family: Spouses and children can feel the crossover effects of burnout brought home from the workplace. Depleted levels of energy, which affect home management, are another consequence.
Organizational: Negative feelings at work affect "employee morale, turnover rate, commitment to the organization".
Not being able to control personal emotions and recognize emotional cues in others can be disastrous in the workplace. It can cause conflict between you and others, or simply cause you to be seen in a negative light and result in missed opportunities.
Not having a firm stance on things like drama and gossip can also disrupt a functioning business. Lisa McQuerrey gives a definition for drama: "Drama is usually defined as spreading unverified information, discussing personal matters at work, antagonizing colleagues or blowing minor issues out of proportion to get attention." McQuerrey wrote an article giving solutions to stop drama and conflict between coworkers. According to McQuerrey, there are eight important solutions to ending conflict in a workplace, the first being to set a policy in the employee handbook making drama unacceptable, together with a list of consequences. The second is that the roles of employees need to be clarified. Other examples in her article include: stopping gossip before it makes its rounds, confronting employees about changes at work yourself instead of relying on a rumor mill, and reporting drama if there is a regular instigator. McQuerrey goes on to say that if such situations persist, there should be a meeting at which management mediates between the people who gossip. It is also important to follow up on the policy and give warnings about the consequences. Employees may be unaware of how their actions impact their coworkers, so bringing a behavioral expert into the business can be a positive step when nothing else works.
Conclusion
Being able not only to control your emotions but also to gauge the emotions of those around you, and to influence them effectively, is imperative to success in the workplace. "Toxicity in the workplace is a regular occurrence and an occupational hazard. That is why the success of many projects, and the organization itself, depends on the success of 'handlers', the people (usually managers) whose interventions either assuage individuals' pain from toxicity or eliminate it completely." "One can conclude that the ability to effectively deal with emotions and emotional information in the workplace assists employees in managing occupational stress and maintaining psychological well-being. This indicates that stress reduction and health protection could be achieved not only by decreasing work demands (stressors), but also by increasing the personal resources of employees, including emotional intelligence. The increasing of EI skills (empathy, impulse control) necessary for successful job performance can help workers to deal more effectively with their feelings, and thus directly decrease the level of job stress and indirectly protect their health".
References
Works cited
Abraham, Rebecca. (1999). Emotional Intelligence in Organizations: A Conceptualization. Genetic, Social, and General Psychology Monographs, 125(2), 209–224. Retrieved from PsycINFO database.
Anand, N., Ginka Toegel, and Martin Kilduff. (2007). Emotion Helpers: The Role of High Positive Affectivity and High Self-Monitoring Managers. Personnel Psychology, 60(2), 337–365. Retrieved from PsycINFO database.
Ben-Zur, H. and Yagil, D. (2005). The relationship between empowerment, aggressive behaviours of customers, coping, and burnout. European Journal of Work and Organizational Psychology, 14(1) 81–99. Retrieved from PsycINFO database.
Bono, Joyce E, Hannah Jackson Foldes, Gregory Vinson, and John P. Muros. (2007). Workplace Emotions: The Role of Supervision and Leadership. Journal of Applied Psychology, 92(5), 1357–1367. Retrieved from PsycINFO database.
Brescoll, V.L. and Uhlmann, E.L. (2008). Can an angry woman get ahead? : Status conferral, gender, and expression of emotion in the workplace. Association for Psychological Science, 19(3) 268–275. Retrieved from PsycINFO database.
Brief, Arthur P., and Howard M. Weiss. (2002). Organizational Behavior: Affect in the Workplace. Annu. Rev. Psychol. 53, 279–307. Retrieved from PsycINFO database.
Canaff, Audrey L., and Wanda Wright. (2004). High Anxiety: Counseling the Job- Insecure Client. Journal of Employment Counseling, 41(1), 2-10. Retrieved from PsycINFO database.
Elfenbein, H.A. and Ambady, N. (2002). Predicting workplace outcomes from the ability to eavesdrop on feelings. Journal of Applied Psychology, 87 (5) 963–971. Retrieved from PsycINFO database.
Fong, Christina C., and Larissa Z. Tiedens. (2002). Dueling Experiences and Dual Ambivalences: Emotional and Motivational Ambivalence of Women in High Status Positions. Motivation and Emotion, 26(1), 105–121. Retrieved from PsycINFO database.
Grandey, A. A. (2000). Emotion regulation in the workplace: A new way to conceptualize emotional labor. Journal of Occupational Health Psychology, 5(1), 95-110. Retrieved from PsycINFO database.
Lee, Kibeom, & Allen, Natalie J. (2002). Organizational Citizenship Behavior and Workplace Deviance: The Role of Affect and Cognitions. Journal of Applied Psychology, 87(1), 131–142. Retrieved from PsycINFO database.
Mann, S. (1999). Emotion at work: to what extent are we expressing, suppressing, or faking it? European Journal of Work and Organizational Psychology, 8(3) 347–369. Retrieved from PsycINFO database.
Martin, Dick. (2012). OtherWise: The Wisdom You Need to Succeed in a Diverse World Organization. Published by AMACOM Books, a division of American Management Association, 1601 Broadway, New York, NY 10019.
Miller, Katherine. (2007). Compassionate Communication in the Workplace: Exploring Processes of Noticing, Connecting, and Responding. Journal of Applied Communication Research, 35(3), 223–245. Retrieved from PsycINFO database.
Miller, Kathrine, & Koesten, Joy. (2008). Financial Feeling: An Investigation of Emotion and Communication in the Workplace. Journal of Applied Communication Research, 36(1), 8-32. Retrieved from PsycINFO database.
Muir, Clive. (2006). Emotions At Work. Business Communication Quarterly, 69(4). Retrieved from PsycINFO database.
Oginska-Bulik, Nina. (2005). Emotional Intelligence in the Workplace: Exploring its Effects on Occupational Stress and Health Outcomes in Human Service Workers. International Journal of Occupational Medicine & Environmental Health, 28(2), 167–175. Retrieved from PsycINFO database.
Olofsson, B., Bengtsson, C., Brink, E. (2003). Absence of response: a study of nurses' experience of stress in the workplace. Journal of Nursing Management, 11, 351–358. Retrieved from PsycINFO database.
Poynter, Gavin. (2002). Emotions in the Labour Process. European Journal of Psychotherapy, Counseling and Health, 5(3), 247–261. Retrieved from PsycINFO database.
Staw, B.M., Sutton, R. S., Pelled, L.H. (1994). Employee positive emotion and favorable outcomes at the workplace. Organization Science, 5(1) 51–70. Retrieved from PsycINFO database.
Weiss, Howard. (2002). Introductory comments: Antecedents of Emotional Experiences at Work. Motivation and Emotion, 26(1), 1–2. Retrieved from PsycINFO database.
Pearson, Christine M. "The Smart Way to Respond to Negative Emotions at Work". MIT Sloan Management Review. Retrieved 2018-12-10.
Seaton, Cherisse L.; Bottorff, Joan L.; Jones-Bricker, Margaret; Lamont, Sonia (2018-10-18). "The Role of Positive Emotion and Ego-Resilience in Determining Men's Physical Activity Following a Workplace Health Intervention". American Journal of Men's Health. 12 (6): 1916–1928. doi:10.1177/1557988318803744. ISSN 1557-9883. PMC 6199438. PMID 30334492.
Mérida-López, Sergio; Extremera, Natalio; Quintana-Orts, Cirenia; Rey, Lourdes (2018-09-21). "In pursuit of job satisfaction and happiness: Testing the interactive contribution of emotion-regulation ability and workplace social support". Scandinavian Journal of Psychology. doi:10.1111/sjop.12483. ISSN 0036-5564.
Balducci, Cristian (2012). "Exploring the relationship between workaholism and workplace aggressive behaviour: The role of job-related emotion." Personality and Individual Differences, 53: 629–634.
Hochschild, Arlie Russell (1983). The Managed Heart: Commercialization of Human Feeling. University of California Press.
Emotion
Workplace | Emotions in the workplace | Biology | 3,714 |
70,195,045 | https://en.wikipedia.org/wiki/Phaeotremella | Phaeotremella is a genus of fungi in the family Phaeotremellaceae. All Phaeotremella species are parasites of other fungi and produce anamorphic yeast states. Basidiocarps (fruit bodies), when produced, are gelatinous and are colloquially classed among the "jelly fungi". Fifteen or so species of Phaeotremella are currently recognized worldwide. Tremella sanguinea, shown to be a Phaeotremella species by DNA sequencing, is cultivated in China as an ingredient in traditional Chinese medicine.
Taxonomy
History
The genus Phaeotremella was originally created by British mycologist Carleton Rea to accommodate Phaeotremella pseudofoliacea, a fungus that resembled a Tremella species but had brown rather than white basidiospores. Later authors considered this to be a mistaken observation and placed Phaeotremella in synonymy with Tremella and its type species in synonymy with Tremella foliacea.
Molecular research, based on cladistic analysis of DNA sequences, has however shown that Tremella is paraphyletic (and hence artificial). A different generic name was therefore required for a group of species not closely related to Tremella mesenterica (the type species of Tremella) and Phaeotremella was selected as the earliest such name available. As a result, the current definition of Phaeotremella is not the same as Rea's original concept. The type species, P. pseudofoliacea, has been placed in synonymy with Phaeotremella frondosa.
Description
Fruit bodies (when present) are gelatinous. In some species they are small (under 5 mm across) and pustular to pulvinate (cushion-shaped). In others they are much larger (up to 150 mm across) and may be variously lobed or foliose (with leaf-like or seaweed-like fronds). Several Phaeotremella species are, however, only known from their yeast states.
Microscopic characters
Phaeotremella species produce hyphae that are typically (but not always) clamped and have haustorial cells from which hyphal filaments seek out and penetrate the hyphae of the host. The basidia are "tremelloid" (globose to ellipsoid and vertically or diagonally septate), giving rise to long, sinuous sterigmata or epibasidia on which the basidiospores are produced. These spores are smooth, globose to ellipsoid, and germinate by hyphal tube or by yeast cells. Conidiophores are often present, producing conidiospores that are similar to yeast cells.
Habitat and distribution
Most species are parasitic on members of the corticioid fungi, specifically species of Aleurodiscus and Stereum, with one species on the ascomycetous genus Lophodermium. Those on Aleurodiscus, including Phaeotremella mycophaga, parasitize the fruit bodies of their hosts; those on Stereum, such as Phaeotremella foliacea, P. frondosa, and P. fimbriata, parasitize the host mycelium within the wood.
As a group, Phaeotremella species occur worldwide, though individual species may have a more restricted distribution.
Species and hosts
Only species producing basidiocarps (fruit bodies) are listed. Not all hosts are known.
References
Tremellomycetes
Basidiomycota genera
Taxa described in 1912
Yeasts | Phaeotremella | Biology | 767 |
54,237,348 | https://en.wikipedia.org/wiki/Trifluoromethanol | Trifluoromethanol is a synthetic organic compound with the formula . It is also referred to as perfluoromethanol or trifluoromethyl alcohol. The compound is the simplest perfluoroalcohol. The substance is a colorless gas, which is unstable at room temperature.
Synthesis
Like all primary and secondary perfluoroalcohols, trifluoromethanol eliminates hydrogen fluoride in an endothermic reaction and forms carbonyl fluoride.
CF3OH ⇌ COF2 + HF (I)
At temperatures in the range of -120 °C, trifluoromethanol can be prepared from trifluoromethyl hypochlorite and hydrogen chloride:
CF3OCl + HCl → CF3OH + Cl2 (II)
The reaction is driven by the recombination of a partially positively charged chlorine atom (in trifluoromethyl hypochlorite) with a partially negatively charged chlorine atom (in hydrogen chloride), which is released as elemental chlorine. The undesired by-products chlorine, hydrogen chloride, and chlorotrifluoromethane can be removed by evaporation at -110 °C. Trifluoromethanol has a melting point of -82 °C and a calculated boiling point of about -20 °C. The boiling point is thus about 85 K lower than that of methanol, which can be explained by the absence of intermolecular hydrogen bonds; these are likewise not visible in the infrared gas-phase spectrum.
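A quick arithmetic check of that 85 K figure, taking methanol's normal boiling point as roughly 65 °C:

```latex
T_b(\mathrm{CH_3OH}) \approx 65\,^\circ\mathrm{C} = 338\,\mathrm{K}, \qquad
T_b(\mathrm{CF_3OH}) \approx -20\,^\circ\mathrm{C} = 253\,\mathrm{K} \\
\Delta T_b \approx 338\,\mathrm{K} - 253\,\mathrm{K} = 85\,\mathrm{K}
```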
A simpler synthesis uses reaction (I): the equilibrium can be shifted towards the thermodynamically preferred trifluoromethanol at lower temperatures. If the synthesized trifluoromethanol is protonated by a superacid, for example fluoroantimonic acid, the equilibrium can be shifted further to the left, towards the desired product.
Similar to reaction (I), trifluoromethoxides can be prepared from salt-like (ionic) fluorides and carbonyl fluoride. However, if the trifluoromethoxide ion is displaced by an acid, for example in aqueous solution, the resulting trifluoromethanol decomposes at room temperature.
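A sketch of that fluoride route, using caesium fluoride and hydrogen chloride as assumed examples of a salt-like fluoride and a displacing acid (neither is specified above):

```latex
\mathrm{CsF} + \mathrm{COF_2} \longrightarrow \mathrm{Cs^+[OCF_3]^-} \\
\mathrm{Cs^+[OCF_3]^-} + \mathrm{HCl} \longrightarrow \mathrm{CF_3OH} + \mathrm{CsCl}
```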
Occurrence in upper layers of atmosphere
While trifluoromethanol is unstable under normal conditions, it is generated in the stratosphere from trifluoromethyl and trifluoromethoxy radicals by reaction with hydrogen-containing species. Decomposition of trifluoromethanol is negligible under the conditions prevailing in the atmosphere due to the high activation energy of the reaction. The expected lifetime of trifluoromethanol is several million years at altitudes below 40 km.
See also
Trifluoroethanol
References
Trifluoromethyl compounds
Primary alcohols
Trifluoromethoxy compounds
Organic compounds with 1 carbon atom
Substances discovered in the 1970s | Trifluoromethanol | Chemistry | 608 |
24,163,440 | https://en.wikipedia.org/wiki/HD%2086264%20b | HD 86264 b is an extrasolar planet which orbits the F-type main sequence star HD 86264, located approximately 237 light years away in the constellation Hydra. The planet is considered to orbit in an eccentric path around the star with a period of about four years. This planet can be as close as 0.86 AU to as far as 4.86 AU. It has minimum mass seven Jupiter masses and orbits at a distance of 2.86 astronomical units. This planet was detected by radial velocity method on August 13, 2009.
An estimate of the planet's inclination and true mass via astrometry, though with high error, was published in 2022.
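Radial velocities alone constrain only the product m sin i (hence "minimum mass"); an inclination i from astrometry converts this to a true mass:

```latex
m_\mathrm{true} = \frac{m \sin i}{\sin i}
```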
References
External links
PlanetQuest, HD 86264 b
Exoplanets discovered in 2009
Giant planets
Hydra (constellation)
Exoplanets detected by radial velocity
Exoplanets detected by astrometry | HD 86264 b | Astronomy | 179 |
18,324,880 | https://en.wikipedia.org/wiki/Peter%20Schwerdtfeger | Peter Schwerdtfeger (born 1 September 1955) is a German scientist. He holds a chair in theoretical chemistry at Massey University in Auckland, New Zealand, serves as director of the Centre for Theoretical Chemistry and Physics, is the head of the New Zealand Institute for Advanced Study, and is a former president of the Alexander von Humboldt Foundation.
Academic career
Schwerdtfeger took his first degree in chemical engineering at Aalen University in 1976, after completing a qualification as a chemical-technical assistant at the Institute Dr. Flad in Stuttgart in 1973. He studied chemistry, physics and mathematics at Stuttgart University, where he received his PhD in theoretical chemistry in 1986. He received a Feodor Lynen Fellowship of the Alexander von Humboldt Foundation to join the chemistry department, and later the School of Engineering, at the University of Auckland in 1987. After a two-year research fellowship at the Research School of Chemistry (Australian National University), he returned to Auckland University in 1991 for a lectureship in chemistry. He received his habilitation and venia legendi (Privatdozent) in 1995 from the Philipps University of Marburg. He held a personal chair in physical chemistry for five years until moving to Massey University Albany in 2004, where he established the Centre for Theoretical Chemistry and Physics. He became a founding member of the New Zealand Institute for Advanced Study in 2007. In 2007 he received the Royal Society Australasian Chemistry Lectureship, and he was the Källen Lecturer in Physics at Lund University (Sweden) in 2015. From 2017 to 2018 he was a member of the Centre for Advanced Study at the Norwegian Academy of Science and Letters. He has published 350 papers in international journals and was awarded eight consecutive Marsden awards by the Royal Society of New Zealand.
One of Schwerdtfeger's notable doctoral students is Patricia Hunt, professor at Victoria University of Wellington.
Fellowships and awards
2001 James Cook Fellowship
2011 Fukui Medal
2012 Fellow of the International Academy of Quantum Molecular Science.
2014 Royal Society of New Zealand's Rutherford Medal.
2019 Dan Walls Medal
Selected publications
References
External links
Official web site
1955 births
Living people
20th-century German chemists
Academic staff of Massey University
Scientists from Stuttgart
Recipients of the Rutherford Medal
21st-century New Zealand chemists
Fellows of the Australian Academy of Technological Sciences and Engineering
Theoretical chemists
James Cook Research Fellows | Peter Schwerdtfeger | Chemistry | 472 |
2,184,818 | https://en.wikipedia.org/wiki/Bischler%E2%80%93M%C3%B6hlau%20indole%20synthesis | The Bischler–Möhlau indole synthesis, also often referred to as the Bischler indole synthesis, is a chemical reaction that forms a 2-aryl-indole from an α-bromo-acetophenone and excess aniline; it is named after August Bischler and
.
Despite its long history, this classical reaction has received relatively little attention in comparison with other methods for indole synthesis, owing to the reaction's harsh conditions, poor yields and unpredictable regioselectivity. Recently, milder methods have been developed, including the use of lithium bromide as a catalyst and an improved procedure involving microwave irradiation.
History
What is now known as the Bischler–Möhlau indole synthesis was discovered and formulated through the separate, but complementary, findings of the German scientist Richard Möhlau in 1882 and the Russian-born German chemist August Bischler (with partner H. Brion) in 1892. These two researchers did not collaborate with each other, but instead independently developed very similar procedures starting from an aromatic ketone with an excess of an aniline, ultimately producing a 2-aryl-indole. Their original publications each depict the respective indole synthesis equations of Möhlau and Bischler.
Because both scientists published their work on indole synthesis within the same decade, the general process was given the name Bischler–Möhlau indole synthesis.
The original procedure is known for its inconsistent results and yields, but it has been modified into newer indole synthesis procedures:
Buu-Hoï Modified Indole Synthesis
Blackhall and Thomson Modified Indole Synthesis
Japp and Murray Modified Indole Synthesis
Reaction mechanism
The first two steps involve the reaction of the α-bromo-acetophenone with molecules of aniline to form intermediate 4. The charged aniline forms a sufficiently good leaving group for an electrophilic cyclization to form intermediate 5, which quickly aromatizes and tautomerizes to give the desired indole 7.
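In overall stoichiometric terms, taking α-bromoacetophenone itself and aniline as an illustrative substrate pair (the excess aniline also serves as the base that takes up the liberated HBr):

```latex
\mathrm{PhCOCH_2Br} + 2\,\mathrm{PhNH_2} \longrightarrow \text{2-phenylindole} + \mathrm{PhNH_3^+Br^-} + \mathrm{H_2O}
```

This balances the bromide displacement, cyclization, and loss of water described in the mechanism above.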
See also
Fischer indole synthesis
Bischler–Napieralski reaction
References
Indole forming reactions
Name reactions | Bischler–Möhlau indole synthesis | Chemistry | 445 |
6,659,320 | https://en.wikipedia.org/wiki/Design%20computing | The terms design computing and other relevant terms including design and computation and computational design refer to the study and practice of design activities through the application and development of novel ideas and techniques in computing. One of the early groups to coin this term was the Key Centre of Design Computing and Cognition at the University of Sydney in Australia, which for more than fifty years (since the late 1960s) pioneered the research, teaching, and consulting of design and computational technologies. This group organised the academic conference series "Artificial Intelligence in Design (AID)" published by Springer during that period. AID was later renamed "Design Computing and Cognition (DCC)" and is currently a leading biannual conference in the field. Other notable groups in this area are the Design and Computation group at Massachusetts Institute of Technology's School of Architecture + Planning and the Computational Design group at Georgia Tech.
Whilst these terms share in general an interest in computational technologies and design activity, there are important differences in the various approaches, theories, and applications. For example, while in some circles the term "computational design" refers in general to the creation of new computational tools and methods in the context of computational thinking, design computing is concerned with bridging these two fields in order to build an increased understanding of design.
The Bachelor of Design Computing (BDesComp) was created in 2003 at the University of Sydney and continues to be a leading programme in interaction design and creative technologies, now hosted by the Design Lab. In that context, design computing is defined to be the use and development of computational models of design processes and digital media to assist and/or automate various aspects of the design process with the goal of producing higher quality and new design forms.
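As an illustration of what a computational model of a design process can look like, here is a minimal, hypothetical generate-and-evaluate loop in Python; the parameter names and the scoring rule are invented for the example and do not come from any particular design-computing system.

```python
import random

def generate(n_candidates=50):
    """Propose candidate designs as simple parameter dictionaries."""
    return [
        {"width": random.uniform(1.0, 10.0), "height": random.uniform(1.0, 10.0)}
        for _ in range(n_candidates)
    ]

def evaluate(design):
    """Score a candidate: reward floor area, penalise deviation from a 2:1 ratio."""
    area = design["width"] * design["height"]
    ratio_penalty = abs(design["width"] / design["height"] - 2.0)
    return area - 10.0 * ratio_penalty

# The computer proposes candidates and a model of the design goal selects
# among them; in practice a human designer steers both the generation and
# the evaluation criteria interactively.
candidates = generate()
best = max(candidates, key=evaluate)
print(f"best candidate: {best}, score: {evaluate(best):.2f}")
```

The point of the sketch is the division of labour: the generator encodes the space of possible designs, while the evaluator encodes (part of) the design intent, which is exactly the kind of model design computing studies and automates.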
Areas
In recent years a number of research and education areas have been grouped under the umbrella term "design computing", namely:
Artificial intelligence in design
Expert systems and knowledge-based systems
Computational creativity
Computer-aided design
Responsive computer-aided design
Digital architecture
Digital morphogenesis
Visual and spatial modelling
Computational analogy
Automated design systems
Design support systems
Computer-supported cooperative work (CSCW)
Building information modeling (BIM)
Extended reality (XR) and spatial computing
Digital place-making
Research groups
The main research groups working in this area span Faculties of Architecture, Engineering and Computer Science. Australia has been a pioneer in this area: for the last five decades the Key Centre of Design Computing and Cognition (KCDC), currently known as the Design Lab, at the University of Sydney has been active in establishing this area of research and teaching. The University of Sydney offers a Bachelor of Design Computing, and the University of New South Wales, also in Sydney, a Bachelor of Computational Design. In the US this research area is also known as "Design and Computation", notably at the Massachusetts Institute of Technology (MIT). Other relevant research groups include:
Critical Research in Digital Architecture (CRIDA), Faculty of Architecture, Building and Planning, University of Melbourne
School of Architecture, Carnegie Mellon
Department of Computer Science, University College London
Department of Informatics Engineering, Universidade de Coimbra
Department of Computer Science, Vrije University, Amsterdam
Creativity and Cognition Studios, University of Technology Sydney
Department of Computer Science, University of Colorado at Boulder
Department of Architecture, Tokyo Institute of Technology
Department of Architecture, MIT
Department of Computer Science, Helsinki University of Technology
College of Architecture, Georgia Institute of Technology
Design Machine Group, University of Washington College of Built Environments, Seattle
Design Computing Program, Georgia Institute of Technology College of Architecture
School of Interactive Arts + Technology, Simon Fraser University
Department of Architecture, Technical University of Delft, The Netherlands
Institute of Computational Design, University of Stuttgart
Architectural Design Computing, Istanbul Technical University
Faculty of Architecture, Istanbul Bilgi University, Turkey
Centre of IT and Architecture (CITA), The Royal Danish Academy of Fine Arts, Copenhagen
Institute of Architectural Algorithms & Applications (Inst.AAA), Southeast University, Nanjing
Department of Experimental and Digital Design and Construction, University of Kassel, Germany
Computational Media Design (CMD) program, University of Calgary, Canada
School of Civil Engineering, Architecture and Urbanism (FEC-Unicamp), University of Campinas, Brazil
Computation, Appearance, and Manufacturing group (CAM), Max Planck Institute for Informatics, Saarbrücken, Germany
Conferences
The biannual International Conference on Design Computing and Cognition (DCC) brings together high quality research on this area, as do annual conferences by the Association for Computer Aided Design In Architecture and others.
References
Graphic design
Computer-aided design | Design computing | Engineering | 915 |
551,666 | https://en.wikipedia.org/wiki/IBM%20Research | IBM Research is the research and development division for IBM, an American multinational information technology company headquartered in Armonk, New York, with operations in over 170 countries. IBM Research is the largest industrial research organization in the world and has twelve labs on six continents.
IBM employees have garnered six Nobel Prizes, six Turing Awards, 20 inductees into the U.S. National Inventors Hall of Fame, 19 National Medals of Technology, five National Medals of Science and three Kavli Prizes. The company has generated more patents than any other business in each of 25 consecutive years, a record.
History
The roots of today's IBM Research began with the 1945 opening of the Watson Scientific Computing Laboratory at Columbia University. This was the first IBM laboratory devoted to pure science and later expanded into additional IBM Research locations in Westchester County, New York, starting in the 1950s, including the Thomas J. Watson Research Center in 1961.
Notable company inventions include the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, the Universal Product Code (UPC), the financial swap, the Fortran programming language, SABRE airline reservation system, DRAM, copper wiring in semiconductors, the smartphone, the portable computer, the Automated Teller Machine (ATM), the silicon-on-insulator (SOI) semiconductor manufacturing process, Watson artificial intelligence and the Quantum Experience.
Advances in nanotechnology include "IBM in atoms", in which a scanning tunneling microscope was used to arrange 35 individual xenon atoms on a substrate of chilled nickel crystal to spell out the three-letter company acronym. It was the first time atoms had been precisely positioned on a flat surface.
Major undertakings at IBM Research have included the invention of innovative materials and structures, high-performance microprocessors and computers, analytical methods and tools, algorithms, software architectures, methods for managing, searching and deriving meaning from data and in turning IBM's advanced services methodologies into reusable assets.
IBM Research's numerous contributions to physical and computer sciences include the Scanning Tunneling Microscope and high-temperature superconductivity, both of which were awarded the Nobel Prize. IBM Research was behind the inventions of the SABRE travel reservation system, the technology of laser eye surgery, magnetic storage, the relational database, UPC barcodes and Watson, the question-answering computing system that won a match against human champions on the Jeopardy! television quiz show. The Watson technology is now being commercialized as part of a project with healthcare company Anthem Inc. Other notable developments include the Data Encryption Standard (DES), fast Fourier transform (FFT), Benoît Mandelbrot's introduction of fractals, magnetic disk storage (hard disks), the MELD-Plus risk score, the one-transistor dynamic random-access memory (DRAM), the reduced instruction set computer (RISC) architecture, relational databases, and Deep Blue (grandmaster-level chess-playing computer).
Notable IBM researchers
There are a number of computer scientists "who made IBM Research famous." These include Frances E. Allen, Marc Auslander, John Backus, Charles H. Bennett (computer scientist), Erich Bloch, Grady Booch,
Fred Brooks (known for his book The Mythical Man-Month), Peter Brown, Larry Carter, Gregory Chaitin, John Cocke, Alan Cobham, Edgar F. Codd, Don Coppersmith, Wallace Eckert, Ronald Fagin, Horst Feistel, Jeanne Ferrante, Zvi Galil, Ralph E. Gomory, Jim Gray, Joseph Halpern, Kenneth E. Iverson, Frederick Jelinek, Reynold B. Johnson, Benoit Mandelbrot, Robert Mercer, C. Mohan, Kirsten Moselund, Michael O. Rabin, Arthur Samuel, Barbara Simons, Alfred Spector, Gardiner Tucker, Moshe Vardi, John Vlissides, Mark N. Wegman and Shmuel Winograd.
Laboratories
IBM currently has 19 research facilities spread across 12 laboratories on six continents:
Africa (Nairobi, Kenya, and Johannesburg, South Africa)
Almaden (San Jose)
Australia (Melbourne)
Brazil (São Paulo and Rio de Janeiro)
Cambridge – IBM Research and MIT-IBM Watson AI Lab (Cambridge, US)
China (Beijing)
Israel (Haifa)
Ireland (Dublin)
India (Delhi and Bengaluru)
Japan (Tokyo and Shin-Kawasaki)
Switzerland (Zürich)
IBM Thomas J. Watson Research Center (Yorktown Heights and Albany)
Historic research centers for IBM also include IBM La Gaude (Nice), the Cambridge Scientific Center, the IBM New York Scientific Center, 330 North Wabash (Chicago), IBM Austin Research Laboratory, and IBM Laboratory Vienna.
In 2017, IBM invested $240 million to create the MIT–IBM Watson AI Lab. Headquartered in Cambridge, MA, the Lab is a unique joint research venture in artificial intelligence established by IBM and MIT and brings together researchers in academia and industry to advance AI that has a real world impact for business, academic and society. The Lab funds approximately 50 projects per year, which are co-led by principal investigators from MIT and IBM Research, with results published regularly at top peer-reviewed journals and conferences. Projects range from computer vision, natural language processing and reinforcement learning, to devising new ways to ensure that AI systems are fair, reliable and secure.
Almaden in Silicon Valley
IBM Research – Almaden is in Almaden Valley, San Jose, California. Its scientists perform basic and applied research in computer science, services, storage systems, physical sciences, and materials science and technology.
Almaden occupies part of a site owned by IBM at 650 Harry Road on nearly of land in the Santa Teresa Hills above Silicon Valley. The site, built in 1985 for the research center, was chosen because of its close proximity to Stanford University, UC Santa Cruz, UC Berkeley and other collaborative academic institutions. Today, the research division is still the largest tenant of the site, but the majority of occupants work for other divisions of IBM.
IBM opened its first West Coast research center, the San Jose Research Laboratory in 1952, managed by Reynold B. Johnson. Among its first developments was the IBM 350, the first commercial moving head hard disk drive. Launched in 1956, this saw use in the IBM 305 RAMAC computer system. Subdivisions included the Advanced Systems Development Division. Directors of the center include hard disc drive developer Jack Harker.
Prompted by a need for additional space, the center moved to its present Almaden location in 1986.
Scientists at IBM Almaden have contributed to several scientific discoveries such as the development of photoresists and the quantum mirage effect.
The following are some of the famous scientists who have worked in the past or are currently working in this laboratory: Rakesh Agrawal, Miklos Ajtai, Rama Akkiraju, John Backus, Raymond F. Boyce, Donald D. Chamberlin, Ashok K. Chandra, Edgar F. Codd, Mark Dean, Cynthia Dwork, Don Eigler, Ronald Fagin, Jim Gray, Laura M. Haas, Jean Paul Jacob, Joseph Halpern, Andreas J. Heinrich, Reynold B. Johnson, Maria Klawe, Jaishankar Menon, Dharmendra Modha, William E. Moerner, C. Mohan, Stuart Parkin, Nick Pippenger, Dan Russell, Patricia Selinger, Ted Selker, Barbara Simons, Malcolm Slaney, Arnold Spielberg, Ramakrishnan Srikant, Larry Stockmeyer, Moshe Vardi, Jennifer Widom, Shumin Zhai.
Australia
IBM Research – Australia was a research and development laboratory established by IBM Research in 2009 in Melbourne. It was involved in social media, interactive content, healthcare analytics and services research, multimedia analytics, and genomics. The lab was headed by several directors over its 10-year lifespan, including Vice President Joanna Batstone and Professor Iven Mareels. It was to be the company's first laboratory combining research and development in a single organisation.
The opening of the Melbourne lab in 2011 received an injection of $22 million in Australian Federal Government funding and an undisclosed amount provided by the State Government.
The Melbourne Research lab was closed in 2021, approximately at the same time as the deal for tax breaks from the State Government ended. Approximately 80 full-time researchers were made redundant.
Brazil
IBM Research – Brazil is one of twelve research laboratories comprising IBM Research, its first in South America. It was established in 2011, with locations in São Paulo and Rio de Janeiro. Research focuses on Industrial Technology and Science, Systems of Engagement and Insight, Social Data Analytics and Natural Resources Solutions.
The new lab, IBM's ninth at the time of opening and first in 12 years, underscores the growing importance of emerging markets and the globalization of innovation. In collaboration with Brazil's government, it will help IBM to develop technology systems around natural resource development and large-scale events such as the 2016 Summer Olympics.
Engineer and associate lab director Ulisses Mello explains that IBM has four priority areas in Brazil: "The main area is related to natural resources management, involving oil and gas, mining and agricultural sectors. The second is the social data analytics segment that comprises the analysis of data generated from social networking sites [such as Twitter or Facebook], which can be applied, for example, to financial analysis. The third strategic area is nanotechnology applied to the development of the smarter devices for the intermittent production industry. This technology can be applied to, for example, blood testing or recovering oil from existing fields. And the last one is smarter cities."
Japan
The IBM Research – Tokyo, which was called IBM Tokyo Research Laboratory (TRL) before January 2009, is one of IBM's twelve major worldwide research laboratories. It is a branch of IBM Research, and about 200 researchers work for TRL. Established in 1982 as the Japan Science Institute (JSI) in Tokyo, it was renamed to IBM Tokyo Research Laboratory in 1986, and moved to Yamato in 1992 and back to Tokyo in 2012.
IBM Tokyo Research Laboratory was established in 1982 as the Japan Science Institute (JSI) in Sanbanchō, Tokyo. It was IBM's first research laboratory in Asia. Hisashi Kobayashi was appointed the founding director of TRL in 1982; he served as director until 1986. JSI was renamed to the IBM Tokyo Research Laboratory in 1986. In 1988, English-to-Japanese machine translation system called "System for Human-Assisted Language Translation" (SHALT) was developed at TRL. It was used to translate IBM manuals.
History
TRL moved from downtown Tokyo to the suburbs to share a building with the IBM Yamato Facility in Yamato, Kanagawa Prefecture, in 1993. That year, a world record was achieved for the generation of continuous coherent ultraviolet light. In 1996, a Java JIT compiler was developed at TRL and released for major IBM platforms. Numerous other technological breakthroughs were made at TRL.
The team led by Chieko Asakawa, IBM Fellow since 2009, provided basic technology for IBM's software programs for the visually handicapped: IBM Home Page Reader in 1997 and IBM aiBrowser in 2007. TRL moved back to Tokyo in 2012, this time to the IBM Toyosu Facility.
Research
TRL researchers are responsible for numerous breakthroughs in sciences and engineering. The researchers have presented multiple papers at international conferences, and published numerous papers in international journals. They have also contributed to the products and services of IBM, and patent filings. TRL conducts research in microdevices, system software, security and privacy, analytics and optimization, human computer interaction, embedded systems, and services sciences.
Other activities
TRL collaborates with the Japanese universities, and support their research programs. IBM donates its equipment such as servers, storage systems, and so forth to the Japanese universities to support their research programs under the Shared University Research (SUR) program.
In 1987, IBM Japan Science Prize was created to recognize researchers, who are not over 45 years old, working at Japanese universities or public research institutes. It is awarded in physics, chemistry, computer science, and electronics.
Israel
IBM Research – Haifa, previously known as the Haifa Research Lab (HRL) was founded as a small scientific center in 1972. Since then, it has grown into a major lab that leads the development of innovative technologies and solutions for the IBM corporation. The lab’s offices are situated in three locations across Israel: Haifa, Tel Aviv, and Beer Sheva.
IBM Research – Haifa employs researchers in a range of areas. Research projects are being executed today in areas such as artificial intelligence, hybrid cloud, quantum computing, blockchain, IoT, quality, cybersecurity, and industry domains such as healthcare.
Aya Soffer is IBM Vice President of AI Technology and serves as the Director of the IBM Research Lab in Haifa, Israel.
History
In its 30th year, the IBM Haifa Research Lab in Israel moved to a new home on the University of Haifa campus.
The researchers at the Lab are involved in special projects with academic institutions across Israel, the United States, and Europe, and actively participate in numerous consortiums as part of the EU Horizon 2020 programme. Today in 2020, the Lab describes itself as having the highest number of employees in Israel's hi-tech industry who hold advanced degrees in science, electrical engineering, mathematics, or related fields. Researchers participate in international conferences and are published in professional publications.
In 2014, IBM Research announced the Cybersecurity Center of Excellence (CCoE) in Beer Sheva in collaboration with Ben-Gurion University of the Negev.
Switzerland
IBM Research – Zurich (previously called IBM Zurich Research Laboratory, ZRL) is the European branch of IBM Research. It was opened in 1956 and is located in Rüschlikon near Zürich, Switzerland.
In 1956, IBM opened their first European research laboratory in Adliswil, Switzerland. The lab moved to its own campus in neighboring Rüschlikon in 1962. The Zürich lab is staffed by a multicultural and interdisciplinary team of a few hundred permanent research staff members, graduate students and post-doctoral fellows, representing about 45 nationalities. Collocated with the lab is a Client Center (formerly the Industry Solutions Lab), an executive briefing facility demonstrating technology prototypes and solutions.
The Zürich lab is world-renowned for its scientific achievements—most notably Nobel Prizes in physics in 1986 and 1987 for the invention of the scanning tunneling microscope and the discovery of high-temperature superconductivity, respectively. Other key inventions include trellis modulation, which revolutionized data transmission over telephone lines; Token Ring, which became a standard for local area networks and a highly successful IBM product; the Secure Electronic Transaction (SET) standard used for highly secure payments; and the Java Card OpenPlatform (JCOP), a smart card operating system. Most recently the lab was involved in the development of SuperMUC, a supercomputer that is cooled using hot water.
The Zürich lab focus areas are future chip technologies; nanotechnology; data storage; quantum computing, brain-inspired computing; security and privacy; risk and compliance; business optimization and transformation; server systems. The Zürich laboratory is involved in many joint projects with universities throughout Europe, in research programs established by the European Union and the Swiss government, and in cooperation agreements with research institutes of industrial partners. One of the lab's most high-profile projects is called DOME, which is based on developing an IT roadmap for the Square Kilometer Array.
The research projects pursued at the IBM Zürich lab are organized into four scientific and technical departments: Science & Technology, Cloud and AI Systems Research, Cognitive Computing & Industry Solutions and Security Research. The lab is currently managed by Alessandro Curioni.
On 17 May 2011, IBM and the Swiss Federal Institute of Technology (ETH) Zurich opened the Binnig and Rohrer Nanotechnology Center, which is located on the same campus in Rüschlikon.
IBM Scientific Centers
In addition to the IBM Research Division, the IBM Scientific Centers, which were active in various functions from 1964 to the early 1990s, were another remarkable research unit. In contrast to the central control of the Research Division from the headquarters in Armonk in the USA, the IBM Scientific Centers were structured in a decentralized manner. Each center functioned as an integral part of the IBM organization in its respective region or country. This organization also financed the center and ultimately determined its content and strategic direction. The task of an IBM Scientific Center was to contribute with its research, its expertise and its cooperation projects for the benefit of the respective country and thus to contribute to the reputation of IBM in this country or this region.
While the research laboratories of the IBM Research Division had to be very restrictive with regard to scientific cooperation projects with non-IBM institutions for patent reasons and other reasons, technical-scientific and application-oriented cooperation projects with universities and other public research institutions were an important part of IBM's mission for the scientific centers. Because of this, the spectrum of activities of such a center was often very broad. For example, some research groups could deal with topics that can be assigned to basic or product-oriented research, while others dealt with application-oriented research topics, for example satellite-based soil classification.
Descriptions of the thematic focus and research projects, as well as a selection of references to the scientific publications of the individual centers still active in 1989, have been published, as has a comprehensive description of the evolution, projects, and success stories of the IBM Heidelberg Scientific Center from its very beginning until shortly before its end.
The history of the IBM Scientific Centers began in 1964 with the founding of the first four centers in the USA (marked with * in the list below) and has subsequently grown to 26 centers worldwide in 1989. Their story ended in the early 1990s.
Bari, Italy (1969–1979)
Bergen, Norway (since 1986)
Brasilia, Brazil (1980–1986)
Cairo, Egypt (since 1983)
Cambridge, Massachusetts, USA (since 1964) *
Caracas, Venezuela (since 1983)
Grenoble, France (1967–1973)
Haifa, Israel (since 1972)
Heidelberg, Germany (since 1968)
Houston, Texas (1966–1974)
Kuwait City, Kuwait (since 1980)
Los Angeles, California, USA (since 1964) *
Madrid, Spain (since 1972)
Mexico City, Mexico (since 1971)
New York City (1964–1972) *
Palo Alto, California, USA (since 1964) *
Paris, France (since 1977)
Peterlee, United Kingdom (1969–1979)
Pisa, Italy (since 1971)
Philadelphia, Pennsylvania, USA (1972–1974)
Rio de Janeiro, Brazil (since 1986)
Rome, Italy (since 1979)
Tokyo, Japan (since 1970)
Venice, Italy (1969–1979)
Wheaton, Maryland, USA (1967–1969)
Winchester, United Kingdom (since 1979)
Publications
IBM Journal of Research and Development
References
Further reading
External links
Official website of IBM Research
Projects (archived 10 December 2005)
Research History Highlights (Top Innovations)
Research history by year
Oral history interview with Martin Schwarzschild head of Watson Scientific Computation Laboratory at Columbia University, Charles Babbage Institute, University of Minnesota (archived 12 August 2002)
IBM Research's technical journals
IBM facilities
Computer science organizations
Research and development organizations | IBM Research | Technology | 3,969 |
4,613,861 | https://en.wikipedia.org/wiki/Abstract%20algebraic%20logic | In mathematical logic, abstract algebraic logic is the study of the algebraization of deductive systems
arising as an abstraction of the well-known Lindenbaum–Tarski algebra, and how the resulting algebras are related to logical systems.
History
The archetypal association of this kind, one fundamental to the historical origins of algebraic logic and lying at the heart of all subsequently developed subtheories, is the association between the class of Boolean algebras and classical propositional calculus. This association was discovered by George Boole in the 1850s, and then further developed and refined by others, especially C. S. Peirce and Ernst Schröder, from the 1870s to the 1890s. This work culminated in Lindenbaum–Tarski algebras, devised by Alfred Tarski and his student Adolf Lindenbaum in the 1930s. Later, Tarski and his American students (whose ranks include Don Pigozzi) went on to discover cylindric algebra, whose representable instances algebraize all of classical first-order logic, and revived relation algebra, whose models include all well-known axiomatic set theories.
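To make this archetypal association concrete, the Lindenbaum–Tarski construction for classical propositional calculus can be sketched as follows (a textbook formulation, not tied to any particular axiomatization):

```latex
\varphi \equiv \psi \ \stackrel{\text{def}}{\iff}\ \vdash \varphi \leftrightarrow \psi
```

On the quotient of the set of formulas by this provable equivalence, the connectives induce well-defined operations, e.g. [φ] ∧ [ψ] = [φ ∧ ψ] and ¬[φ] = [¬φ], and the resulting structure is a Boolean algebra: the Lindenbaum–Tarski algebra of the calculus.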
Classical algebraic logic, which comprises all work in algebraic logic until about 1960, studied the properties of specific classes of algebras used to "algebraize" specific logical systems of particular interest to specific logical investigations. Generally, the algebra associated with a logical system was found to be a type of lattice, possibly enriched with one or more unary operations other than lattice complementation.
Abstract algebraic logic is a modern subarea of algebraic logic that emerged in Poland during the 1950s and 60s with the work of Helena Rasiowa, Roman Sikorski, Jerzy Łoś, and Roman Suszko (to name but a few). It reached maturity in the 1980s with the seminal publications of the Polish logician Janusz Czelakowski, the Dutch logician Wim Blok and the American logician Don Pigozzi. The focus of abstract algebraic logic shifted from the study of specific classes of algebras associated with specific logical systems (the focus of classical algebraic logic), to the study of:
Classes of algebras associated with classes of logical systems whose members all satisfy certain abstract logical properties;
The process by which a class of algebras becomes the "algebraic counterpart" of a given logical system;
The relation between metalogical properties satisfied by a class of logical systems, and the corresponding algebraic properties satisfied by their algebraic counterparts.
The passage from classical algebraic logic to abstract algebraic logic may be compared to the passage from "modern" or abstract algebra (i.e., the study of groups, rings, modules, fields, etc.) to universal algebra (the study of classes of algebras of arbitrary similarity types (algebraic signatures) satisfying specific abstract properties).
The two main motivations for the development of abstract algebraic logic are closely connected to (1) and (3) above. With respect to (1), a critical step in the transition was initiated by the work of Rasiowa. Her goal was to abstract results and methods known to hold for the classical propositional calculus and Boolean algebras and some other closely related logical systems, in such a way that these results and methods could be applied to a much wider variety of propositional logics.
(3) owes much to the joint work of Blok and Pigozzi exploring the different forms that the well-known deduction theorem of classical propositional calculus and first-order logic takes on in a wide variety of logical systems. They related these various forms of the deduction theorem to the properties of the algebraic counterparts of these logical systems.
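For reference, the classical deduction theorem that these investigations generalize takes the familiar form (Γ a set of formulas, φ and ψ formulas):

```latex
\Gamma \cup \{\varphi\} \vdash \psi \quad\Longleftrightarrow\quad \Gamma \vdash \varphi \rightarrow \psi
```

In other logical systems the role of the single formula φ → ψ may be played by a set of formulas in two variables, and which form occurs is among the features that correlate with the algebraic behavior of the logic's counterpart algebras.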
Abstract algebraic logic has become a well established subfield of algebraic logic, with many deep and interesting results. These results explain many properties of different classes of logical systems previously explained only on a case-by-case basis or shrouded in mystery. Perhaps the most important achievement of abstract algebraic logic has been the classification of propositional logics in a hierarchy, called the abstract algebraic hierarchy or Leibniz hierarchy, whose different levels roughly reflect the strength of the ties between a logic at a particular level and its associated class of algebras. The position of a logic in this hierarchy determines the extent to which that logic may be studied using known algebraic methods and techniques. Once a logic is assigned to a level of this hierarchy, one may draw on the powerful arsenal of results, accumulated over the past 30-odd years, governing the algebras situated at the same level of the hierarchy.
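A central tool behind this classification is the Leibniz operator, stated here in its standard form for propositional languages: for a theory T of a logic, Ω(T) is the largest congruence of the formula algebra compatible with T, characterized by indiscernibility in all formula contexts δ(x, z̄):

```latex
\langle \varphi, \psi \rangle \in \Omega(T) \iff
\forall \delta(x, \bar{z})\ \forall \bar{\chi} :
\big( \delta(\varphi, \bar{\chi}) \in T \Leftrightarrow \delta(\psi, \bar{\chi}) \in T \big)
```

The levels of the Leibniz hierarchy correspond, roughly, to how well-behaved the map T ↦ Ω(T) is (for example, whether it is monotone or commutes with unions of theories).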
The similar terms 'general algebraic logic' and 'universal algebraic logic' refer to the approach of the Hungarian School, including Hajnal Andréka, István Németi and others.
Examples
See also
Abstract algebra
Algebraic logic
Abstract model theory
Hierarchy (mathematics)
Model theory
Variety (universal algebra)
Universal logic
Notes
References
Blok, W., Pigozzi, D., 1989. Algebraizable logics. Memoirs of the AMS, 77(396). Also available for download from Pigozzi's home page.
Czelakowski, J., 2001. Protoalgebraic Logics. Kluwer. Considered "an excellent and very readable introduction to the area of abstract algebraic logic" by Mathematical Reviews.
Czelakowski, J. (editor), 2018, Don Pigozzi on Abstract Algebraic Logic, Universal Algebra, and Computer Science, Outstanding Contributions to Logic Volume 16, Springer International Publishing.
Font, J. M., 2003. An Abstract Algebraic Logic view of some multiple-valued logics. In M. Fitting & E. Orlowska (eds.), Beyond two: theory and applications of multiple-valued logic, Springer-Verlag, pp. 25–57.
Font, J. M., Jansana, R., 1996. A General Algebraic Semantics for Sentential Logics. Lecture Notes in Logic 7, Springer-Verlag. (2nd edition published by ASL in 2009) Also open access at Project Euclid
--------, and Pigozzi, D., 2003, A survey of abstract algebraic logic, Studia Logica 74: 13-79.
Andréka, H., Németi, I.: General algebraic logic: A perspective on "what is logic", in D. Gabbay (ed.): What is a logical system?, Clarendon Press, 1994, pp. 485–569.
External links
Stanford Encyclopedia of Philosophy: "Algebraic Propositional Logic"—by Ramon Jansana.
Algebraic logic | Abstract algebraic logic | Mathematics | 1,326 |
49,987,432 | https://en.wikipedia.org/wiki/Metal%20stitching | Metal stitching is an industrial technique for repairing cracked and broken cast iron, steel, bronze or aluminium structures and their components. The process is carried out cold, without welding. It allows the repair of cast iron and cast steel, often in-situ, without the distortion from welding, and can be used in other situations where heat cannot be used to achieve a repair.
Background
The metal stitching process was developed in the late 1930s as an option for repairing cast iron components and equipment on the Texas oil fields. The process was developed to provide a permanent, stress-free repair, and was used where heat or open flame was limited or not allowed. Four men have been credited with the development of this new metal locking technique: Lawrence B. Scott, Fred Lewis, Earl Reynolds and Hal W. Harman. However, it was Hal Harman who initially invented the metal stitching technique, filing a patent on it on 7 August 1937.
In 1938 L.B. Scott was officially credited with the invention of the Metalock variation of metal stitching, whilst he was still working for Harman. Scott was given patent rights to the repair technique and materials used. Scott used his patents to secure the repair process, called it the ‘Metalock Repair Process’, and began to offer franchises under the Metalock Corporation trade name after starting his own operation in Long Island City, NY. Shortly thereafter, Thomas O. Oliver Ltd. (based in Ontario, Canada) was the first company to purchase a franchise.
Fred Lewis (a partner in the development process) purchased a franchise and began operation in Chicago, IL in 1942. The same year, George Jackman Sr. left T.O. Oliver Ltd. and formed Metal Locking Service, Inc. as a stand-alone company.
Hal Harman took his own method forward under the name Chainlock. Initially, Harman and Scott both offered competing metal stitching repairs; then, for several years, they took each other to court to contest patent infringements and design rights. Ultimately Harman succeeded, and Scott had to concede ownership of the process to others.
The first repairs, in the 1930s, were in hazardous oilfields. Just prior to and during WWII, the process was used secretly on US Naval vessels, the process becoming a standard repair method approved by the US Navy after the war. It was in this time period that the process was verified as a credible alternative to welding.
Over the years, alternative variations of the metal stitching process were developed, using terms such as Metal Stitch, Metal Locking, and Metalock to describe their repair processes. Lock-N-Stitch is a slightly different stitching method, developed from the original stitching concept by Gary J. Reed.
Development of the process
Major Edward Peckham, a Canadian engineer originally of the Canadian army, was so impressed by what he saw in Texas that he brought the metalock process to Europe, and in 1947 opened an office in London, England. Peckham registered the Metalock Casting Repair Service, which became Metalock (Britain) Ltd. In 1953, to coordinate the expansion of the new process, the Metalock International Association was started in London. An engineering standard was developed to ensure the best possible outcome for a metalock repair.
During the early years, from 1953 to the 1970s, research to improve the process was carried out in Sweden, Germany and the UK. This resulted in improvements in two main areas: the creation of a key material designed for maximum strength under operating conditions, and the development of key design and dimensions, including how best to lay out the keys.
During the mid-20th century, the process rapidly gained popularity among engineers and in manufacturing, as evidenced by coverage in specialist engineering publications.
Process description
The metalock process consists of a series of steps in which metal alloy 'locks' or 'keys' are inserted into the cast iron across, and at right angles to, the fracture. The process is applied to a fracture or to a complete break in the material. Related damage around the break often has to be cut out prior to repair.
In outline, the steps are: a pattern of holes is drilled across the fracture and joined to form an aperture; a metal lock (key) is inserted into the aperture and peened in place; holes are then drilled and tapped along the line of the fracture and filled with overlapping threaded studs to seal it; finally, the whole repair is peened and ground or machined flush.
Once complete, the repair resembles a 'stitch' in sewn cloth, hence the common term 'metal stitching'. The method has also been called 'metal locking', as it locks the broken parts of the machine together. The durability of the repair is normally high, as the technician lays out the keys to preserve the strength pattern of the original equipment design.
Applications
As a cold repair process, metal stitching is applicable where heat should not be used, and in situations where the material cannot be successfully repaired by welding.
Situations where application of heat would be problematic are particularly appropriate for cold metal stitching; examples include oil installations and engine rooms. Large equipment often cannot be easily dismantled and removed for repair, and the metalock repair process can often be performed in situ with little or no dismantling. It is this feature that created the foundation for the development of the process. More unusual on-site applications have included the repair of ship propellers while still fitted to the ship, large mining equipment located underground, and underwater repairs.
Welding introduces thermal stresses into the base metal and changes the grain structure of the metal crystals, altering the characteristics and strength of that part of the equipment. Heat also distorts the alignment of the original surfaces. Once the equipment is machined and returned to use, the parent metal is always significantly weaker, and the site of the original repair often subsequently fails.
The metal stitching repair process, however, tends to maintain the alignment of the original surfaces, since the absence of heat during the repair produces no distortion. In addition, the parent metal is not weakened by material changes. Metal stitching dampens and absorbs compression stresses, providing a good 'expansion joint' for castings subject to thermal stresses. It distributes the tensional load away from fatigue points and maintains the relieved condition of the inherent internal stresses where the rupture initially occurred. Where the repair involves a pressurized interface, the process can also seal the joint.
References
Mechanical fasteners
Maintenance | Metal stitching | Engineering | 1,274 |
7,688,216 | https://en.wikipedia.org/wiki/Australia%20New%20Zealand%20Therapeutic%20Products%20Authority | The Australia New Zealand Therapeutic Products Authority (ANZTPA) is a proposed authority which, if adopted in both Australia and New Zealand, would be the sole authority regulating therapeutic goods in both countries. The authority would replace the Therapeutic Goods Administration in Australia and Medsafe in New Zealand.
It has been proposed that the ANZTPA regulate:
Complementary medicines;
Over-the-counter medicines;
Prescription medicines;
Medical devices;
Blood & blood products;
Tissues;
Cellular therapies.
The establishment of the joint authority was postponed by the New Zealand government on 16 July 2007 until such time as there is agreement in the New Zealand parliament to resume the establishment process. The Australian government was informed and agreed to the postponement.
Despite the completion of the scheme's first harmonization activity in 2014, Australian and New Zealand officials decided to end the ANZTPA effort, saying a "comprehensive review of progress and [an] assessment of the costs and benefits to each country" had concluded the effort was no longer worth pursuing.
"While work on ANZTPA will cease, our two countries will continue to co-operate on the regulation of therapeutic products where there are mutual benefits for consumers, businesses and regulators in each country," the two countries said in a joint statement.
References
External links
Australia New Zealand Therapeutic Products Authority
Therapeutic Goods Administration
Medsafe
Government agencies of New Zealand
Commonwealth Government agencies of Australia
National agencies for drug regulation
Proposed international organizations
Medical and health organisations based in Australia
Medical and health organisations based in New Zealand | Australia New Zealand Therapeutic Products Authority | Chemistry | 305 |
42,333,823 | https://en.wikipedia.org/wiki/Dirac%20hole%20theory | Dirac hole theory is a theory in quantum mechanics, named after the English theoretical physicist Paul Dirac, who introduced it in 1929. The theory posits that the continuum of negative energy states that are solutions to the Dirac equation is filled with electrons, and that vacancies in this continuum (holes) are manifested as positrons with energy and momentum that are the negative of those of the state. The discovery of the positron in 1932 gave considerable support to the Dirac hole theory.
While Enrico Fermi, Niels Bohr and Wolfgang Pauli were skeptical about the theory, other physicists, like Guido Beck and Kurt Sitte, made use of Dirac hole theory in alternative theories of beta decay. Gian Wick extended Dirac hole theory to cover neutrinos, introducing the anti-neutrino as a hole in a neutrino Dirac sea.
Pair production and annihilation
Hole theory provides an alternative perspective on the processes of pair production and annihilation: when a photon of sufficient energy is incident upon an occupied state in the negative energy 'sea', it can excite an electron into the positive energy region, creating an observable electron while leaving a vacant state (hole) in the negative energy region – an anti-electron or, more commonly, a positron.
Conversely, when an electron and a positron come into close proximity, the electron can drop into the vacant negative energy state, releasing the energy difference as radiation and reducing the overall energy of the system – a process observationally identical to annihilation.
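The energy bookkeeping is the same in either picture. As a standard numerical anchor (general pair-production kinematics, not specific to hole theory), the incident photon must supply at least the rest energy of both particles:

$$E_\gamma \geq 2 m_e c^2 \approx 2 \times 0.511\ \text{MeV} = 1.022\ \text{MeV},$$

so pair production has a hard threshold near 1.022 MeV, and momentum conservation further requires a nearby nucleus or other body to take up recoil.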
References
Dirac equation
Paul Dirac | Dirac hole theory | Physics | 338 |
8,422,003 | https://en.wikipedia.org/wiki/Journal%20of%20Number%20Theory | The Journal of Number Theory (JNT) is a monthly peer-reviewed scientific journal covering all aspects of number theory. The journal was established in 1969 by R.P. Bambah, P. Roquette, A. Ross, A. Woods, and H. Zassenhaus (Ohio State University). It is currently published monthly by Elsevier and the editor-in-chief is Dorian Goldfeld (Columbia University). According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.7.
David Goss prize
The David Goss Prize in Number Theory, founded by the Journal of Number Theory, is awarded every two years to mathematicians under the age of 35 for outstanding contributions to number theory. The prize is dedicated to the memory of David Goss, a former editor-in-chief of the Journal of Number Theory. The current award is 10,000 USD.
The winners are selected and chosen by the scientific organizing committee of the JNT Biennial Conference and announced during the JNT Biennial Conference.
List of winners
References
External links
JNT 2019 Biennial: https://www.math.columbia.edu/~goldfeld/JNTBiennial2019.html
JNT 2022 Biennial: https://www.math.columbia.edu/~goldfeld/JNTBiennial2022.html
JNT 2024 Biennial: https://www.math.columbia.edu/~goldfeld/JNTBiennial2024.html
Number theory
Number theory journals
Academic journals established in 1969
Elsevier academic journals
Monthly journals
English-language journals | Journal of Number Theory | Mathematics | 331 |
252,081 | https://en.wikipedia.org/wiki/Tsung-Dao%20Lee | Tsung-Dao Lee (; November 24, 1926 – August 4, 2024) was a Chinese-American physicist, known for his work on parity violation, the Lee–Yang theorem, particle physics, relativistic heavy ion (RHIC) physics, nontopological solitons, and soliton stars. He was a university professor emeritus at Columbia University in New York City, where he taught from 1953 until his retirement in 2012.
In 1957, at the age of 30, Lee won the Nobel Prize in Physics with Chen Ning Yang for their work on the violation of the parity law in weak interactions, which Chien-Shiung Wu experimentally confirmed in 1956–57 with her well-known Wu experiment.
Lee remains the youngest Nobel laureate in the sciences after World War II, and the third-youngest Nobel science laureate in history after William L. Bragg (who won the prize at 25 with his father William H. Bragg in 1915) and Werner Heisenberg (who won in 1932, also at 30). Lee and Yang were the first Chinese laureates. Having become a naturalized American citizen in 1962, Lee is also the youngest American ever to have won a Nobel Prize.
Biography
Family
Lee was born in Shanghai, China, with his ancestral home in nearby Suzhou. His father Chun-kang Lee (), one of the first graduates of the University of Nanking, was a chemical industrialist and merchant who was involved in China's early development of modern synthesized fertilizer. Lee's grandfather Chong-tan Lee () was the first Chinese Methodist Episcopal senior pastor of St. John's Church in Suzhou (蘇州聖約翰堂).
Lee had four brothers and one sister. Educator Robert C. T. Lee was one of T. D.'s brothers. Lee's mother Chang and brother Robert C. T. moved to Taiwan in the 1950s.
Early life
Lee received his secondary education in Shanghai (High School Affiliated to Soochow University, 東吳大學附屬中學) and Jiangxi (Jiangxi Joint High School, 江西聯合中學). Because the Second Sino-Japanese War interrupted his high school education, he never obtained a secondary diploma. Nevertheless, in 1943, Lee applied directly to the National Chekiang University (now Zhejiang University) and was admitted. He initially registered as a student in the Department of Chemical Engineering, but his talent was quickly discovered and his interest in physics grew rapidly. Several physics professors, including Shu Xingbei and Wang Ganchang, largely guided Lee, and he soon transferred into the Department of Physics of National Chekiang University, where he studied in 1943–1944.
However, again disrupted by a further Japanese invasion, Lee continued at the National Southwestern Associated University in Kunming the next year in 1945, where he studied with Professor Wu Ta-You.
Life and research in the U.S.
Professor Wu nominated Lee for a Chinese government fellowship for graduate study in the United States. In 1946, Lee went to the University of Chicago and was selected by Professor Enrico Fermi to become his PhD student. Lee received his PhD under Fermi in 1950 for his research work Hydrogen Content of White Dwarf Stars. Lee served as research associate and lecturer in physics at the University of California at Berkeley from 1950 to 1951.
In 1953, Lee joined Columbia University, where he remained until retirement. His first work at Columbia was on a solvable model of quantum field theory better known as the Lee model. Soon his focus turned to particle physics and the developing puzzle of K meson decays. Lee realized in early 1956 that the key to the puzzle was parity non-conservation. At Lee's suggestion, the first experimental test was on hyperon decay by the Steinberger group. At the time, the experimental result gave only an indication of a possible parity-violation effect at two standard deviations. Encouraged by this feasibility study, Lee made a systematic study of possible Time reversal (T), Parity (P), Charge Conjugation (C), and CP violations in weak interactions with collaborators, including C. N. Yang. After the definitive experiments by Chien-Shiung Wu and her assistants showed that parity was not conserved, Lee and Yang were awarded the 1957 Nobel Prize in Physics. Wu was not awarded the Nobel Prize, which is considered one of the biggest controversies in Nobel committee history.
In the early 1960s, Lee and collaborators initiated the important field of high-energy neutrino physics. In 1964, Lee, with M. Nauenberg, analyzed the divergences connected with particles of zero rest mass, and described a general method known as the KLN theorem for dealing with these divergences, which still plays an important role in contemporary work in QCD, with its massless, self-interacting gluons. In 1974–75, Lee published several papers on "A New Form of Matter in High Density", which led to the modern field of RHIC physics, now dominating the entire high-energy nuclear physics field.
Besides particle physics, Lee was active in statistical mechanics, astrophysics, hydrodynamics, many-body systems, solid state physics, and lattice QCD. In 1983, Lee wrote a paper entitled "Can Time Be a Discrete Dynamical Variable?", which led to a series of publications by Lee and collaborators on the formulation of fundamental physics in terms of difference equations, but with exact invariance under continuous groups of translational and rotational transformations. Beginning in 1975, Lee and collaborators established the field of non-topological solitons, which led to his work on soliton stars and black holes throughout the 1980s and 1990s.
From 1997 to 2003, Lee was director of the RIKEN-BNL Research Center (now director emeritus), which together with other researchers from Columbia, completed a 1 teraflops supercomputer QCDSP for lattice QCD in 1998 and a 10 teraflops QCDOC machine in 2001. Most recently, Lee and Richard M. Friedberg developed a new method to solve the Schrödinger equation, leading to convergent iterative solutions for the long-standing quantum degenerate double-wall potential and other instanton problems. They also did work on the neutrino mapping matrix.
Lee was one of the 20 American recipients of the Nobel Prize in Physics to sign a letter addressed to President George W. Bush in May 2008, urging him to "reverse the damage done to basic science research in the Fiscal Year 2008 Omnibus Appropriations Bill" by requesting additional emergency funding for the Department of Energy's Office of Science, the National Science Foundation, and the National Institute of Standards and Technology.
Educational activities
Soon after the re-establishment of China-American relations with the PRC, Lee and his wife, Jeannette Hui-Chun Chin (), were able to go to the PRC, where Lee gave a series of lectures and seminars, and organized the CUSPEA (China-U.S. Physics Examination and Application).
In 1998, Lee established the Chun-Tsung Endowment (秦惠䇹—李政道中国大学生见习基金) in memory of his wife, who had died three years earlier. The Chun-Tsung scholarships, supervised by the United Board for Christian Higher Education in Asia (New York), are awarded to undergraduates, usually in their 2nd or 3rd year, at six universities, which are Shanghai Jiaotong University, Fudan University, Lanzhou University, Soochow University, Peking University, and Tsinghua University. Students selected for such scholarships are named "Chun-Tsung Scholars" (䇹政学者).
Personal life and death
Lee and Jeannette Hui-Chun Chin married in 1950 and had two sons: James Lee () and Stephen Lee (). His wife died in 1996.
Tsung-Dao Lee died in San Francisco on August 4, 2024, at the age of 97.
Honours and awards
Awards
Nobel Prize in Physics (1957)
G. Bude Medal, Collège de France (1969, 1977)
Galileo Galilei Medal (1979)
Order of Merit, Grande Ufficiale, Italy (1986)
Oskar Klein Memorial Lecture and Medal (1993)
Science for Peace Prize (1994)
China National-International Cooperation Award (1995)
Matteucci Medal (1995)
Naming of Small Planet 3443 as the 3443 Leetsungdao (1997)
New York City Science Award (1997)
Pope Joannes Paulus Medal (1999)
Ministero dell'Interno Medal of the Government of Italy (1999)
New York Academy of Science Award (2000)
The Order of the Rising Sun, Gold and Silver Star, Japan (2007)
Marcel Grossmann Awards (2015), "for his work on white dwarfs motivating Enrico Fermi’s return to astrophysics and guiding the basic understanding of neutron star matter and fields"
Memberships
National Academy of Sciences
American Academy of Arts and Sciences
American Philosophical Society
Academia Sinica
Accademia Nazionale dei Lincei
Chinese Academy of Sciences
Third World Academy of Sciences
Pontifical Academy of Sciences
Selected publications
Technical reports
"Conservation Laws in Weak Interactions," Columbia University, United States Department of Energy (through predecessor agency the Atomic Energy Commission, March 1957).
"Weak Interactions," Columbia University, United States Department of Energy (through predecessor agency the Atomic Energy Commission, June 1957).
(with C.N. Yang) "Elementary Particles and Weak Interactions," Brookhaven National Laboratory, United States Department of Energy (through predecessor agency the Atomic Energy Commission, October 1957).
"History of Weak Interactions," Columbia University, United States Department of Energy (through predecessor agency the Atomic Energy Commission), July 1970).
"High Energy Electromagnetic and Weak Interaction Processes," Brookhaven National Laboratory, United States Department of Energy (through predecessor agency the Atomic Energy Commission, January 11, 1972).
Books
See also
Chinese people in New York City
References
External links
T.D. Lee's English homepage
T.D. Lee Digital Resource Center
T.D. Lee's Home Page at Columbia University
including his Nobel Lecture, December 11, 1957 Weak Interactions and Nonconservation of Parity
Brookhaven National Laboratory: Tsung-Dao Lee Appointed as Member of the Pontifical Academy of Sciences
Celebration of T.D. Lee's 80th Birthday and the 50th Anniversary of the Discovery of Parity Non-conservation
Related archival collections
Haskell A. Reich collection of student notes, circa 1945-1954, Niels Bohr Library & Archives (includes lecture notes from Tsung-Dao Lee's courses at Columbia University)
1926 births
2024 deaths
American Nobel laureates
21st-century American physicists
Brookhaven National Laboratory Nobel laureates
Brookhaven National Laboratory staff
Chinese emigrants to the United States
Columbia University faculty
Institute for Advanced Study faculty
Members of Academia Sinica
Members of the Pontifical Academy of Sciences
Members of the United States National Academy of Sciences
Foreign members of the Chinese Academy of Sciences
Nankai University alumni
Nobel laureates in Physics
Nobel laureates from the Republic of China
Particle physicists
Recipients of the Order of the Rising Sun, 2nd class
Scientists from Shanghai
Scientists from Suzhou
Theoretical physicists
University of Chicago alumni
Zhejiang University alumni
Academic staff of Zhejiang University
National Southwestern Associated University alumni
Naturalized citizens of the United States
American scientists of Asian descent
Fellows of the American Physical Society
Recipients of the Matteucci Medal | Tsung-Dao Lee | Physics | 2,343 |
663,372 | https://en.wikipedia.org/wiki/Bucket%20elevator | A bucket elevator, also called a grain leg, is a mechanism for hauling flowable bulk materials (most often grain or fertilizer) vertically.
It consists of:
Buckets to contain the material;
A belt to carry the buckets and transmit the pull;
Means to drive the belt;
Accessories for loading the buckets or picking up the material, for receiving the discharged material, for maintaining the belt tension and for enclosing and protecting the elevator.
A bucket elevator can elevate a variety of bulk materials from light to heavy and from fine to large lumps.
A centrifugal discharge elevator may be vertical or inclined. Vertical elevators depend entirely on centrifugal force to get the material into the discharge chute, and so must be run at a relatively high speed. Inclined elevators with buckets spaced apart or set close together may have the discharge chute set partly under the head pulley. Since they do not depend entirely on centrifugal force to put the material into the chute, their speed may be slower.
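The speed needed for centrifugal discharge can be estimated with elementary mechanics (a standard back-of-the-envelope bound, not taken from this article): material is thrown clear at the head pulley when the centripetal acceleration along the bucket's path of radius $r$ at least matches gravity,

$$\frac{v^2}{r} \geq g, \qquad\text{i.e.}\qquad v \geq \sqrt{gr},$$

so a head wheel of radius $r = 0.5\ \text{m}$ requires a belt speed above $\sqrt{9.81 \times 0.5} \approx 2.2\ \text{m/s}$; elevators running slower than this rely partly on gravity discharge instead.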
Nearly all centrifugal discharge elevators have spaced buckets with rounded bottoms. They pick up their load from a boot, a pit, or a pile of material at the foot pulley.
The buckets can also be triangular in cross-section and set close together on the belt, with little or no clearance between them. This is a continuous bucket elevator. Its main use is to carry difficult materials at slow speed.
Early bucket elevators used a flat chain with small, steel buckets attached every few inches. While some elevators are still manufactured with a chain and steel buckets, most current bucket elevators use a rubber belt with plastic buckets. Pulleys several feet in diameter are used at the top and bottom. The top pulley is driven by an electric motor.
The bucket elevator is the enabling technology that permitted the construction of grain elevators. A diverter at the top of the elevator allows the grain to be sent to the chosen bin.
A similar device with flat steps is occasionally used as an elevator for humans, e.g. for employees in parking garages. (This sort of elevator is generally considered too dangerous for use by the public.)
Bucket elevator styles
There are three common bucket elevator designs seen in bulk material handling facilities worldwide:
Centrifugal Discharge Elevator – This is the typical style of elevator used in many grain handling facilities. The elevator buckets discharge the grain freely, using centrifugal force. The grain is flung out of the bucket into the discharge spout at the top of the elevator. The most common style of agricultural elevator bucket is the "CC" style. This style can be recognized by the four breaks in the inside bottom of the bucket, straight sides, and the presence of high sides or "ears".
Continuous Discharge Elevator – This style of bucket elevator is typically used to discharge sluggish and non-free-flowing materials; the elevator buckets discharge on top of each other. To achieve the required centrifugal force, a speed of around 6 metres (20 feet) per second is used. Common styles of elevator buckets used are the MF, HF, LF, and HFO, due to their "V" style, among other attributes.
Positive Discharge Elevator – Buckets are used to elevate delicate products such as popcorn, candy and potato chips (British: crisps), for which gentle handling is very important. This style of elevator bucket operates on a double-strand chain; the buckets are held in place by two pins so that they can swivel freely. To discharge, a bucket is mechanically flipped; until then it is held parallel to the floor and upright. These elevators typically form an "S" or "L" shape and run throughout a plant.
References
External links
Elevators
Bulk material handling | Bucket elevator | Engineering | 780 |
15,521 | https://en.wikipedia.org/wiki/Devanagari%20numerals | The Devanagari numerals are the symbols used to write numbers in the Devanagari script, predominantly used for northern Indian languages. They are used to write decimal numbers, instead of the Western Arabic numerals.
Table
The word for zero was calqued into Arabic as ṣifr, meaning "nothing", which became the term "zero" in many European languages via Medieval Latin zephirum.
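In Unicode, Western and Devanagari digits are related by a fixed code-point offset, which makes conversion trivial. A minimal sketch (illustrative Python, not from the article):

```python
# Devanagari digits occupy U+0966 (0) through U+096F (9), in the same
# order as ASCII '0'-'9' (U+0030-U+0039), so a fixed offset maps
# one range onto the other.
DEVANAGARI_ZERO = 0x0966

def to_devanagari(number: int) -> str:
    """Render a non-negative integer using Devanagari digits."""
    return "".join(chr(DEVANAGARI_ZERO + int(d)) for d in str(number))

def from_devanagari(text: str) -> int:
    """Parse a string of Devanagari digits back into an int."""
    return int("".join(str(ord(c) - DEVANAGARI_ZERO) for c in text))

print(to_devanagari(2024))       # २०२४
print(from_devanagari("२०२४"))   # 2024
```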
Variants
Devanagari digit shapes may vary depending on geographical area or epoch. Some of the variants also appear in older Sanskrit literature.
In Nepali, the digits ५, ८, ९ (5, 8, 9) differ slightly from their modern Devanagari counterparts: Nepali retains older Devanagari forms for these numerals.
See also
Indian numbering system
Numbers in Nepali language
References
Notes
Sources
Sanskrit Siddham (Bonji) Numbers
Devanagari Numbers in Nepali language
Numerals | Devanagari numerals | Mathematics | 188 |
4,606,683 | https://en.wikipedia.org/wiki/Diagonal%20functor | In category theory, a branch of mathematics, the diagonal functor $\Delta : \mathcal{C} \to \mathcal{C} \times \mathcal{C}$ is given by $\Delta(a) = \langle a, a \rangle$, which maps objects as well as morphisms. This functor can be employed to give a succinct alternate description of the product of objects within the category $\mathcal{C}$: a product $a \times b$ is a universal arrow from $\Delta$ to $\langle a, b \rangle$. The arrow comprises the projection maps.
More generally, given a small index category $\mathcal{J}$, one may construct the functor category $\mathcal{C}^{\mathcal{J}}$, the objects of which are called diagrams. For each object $a$ in $\mathcal{C}$, there is a constant diagram $\Delta_a : \mathcal{J} \to \mathcal{C}$ that maps every object in $\mathcal{J}$ to $a$ and every morphism in $\mathcal{J}$ to $1_a$. The diagonal functor $\Delta : \mathcal{C} \to \mathcal{C}^{\mathcal{J}}$ assigns to each object $a$ of $\mathcal{C}$ the diagram $\Delta_a$, and to each morphism $f : a \to b$ in $\mathcal{C}$ the natural transformation $\Delta_f : \Delta_a \to \Delta_b$ in $\mathcal{C}^{\mathcal{J}}$ (given for every object $j$ of $\mathcal{J}$ by $(\Delta_f)_j = f$). Thus, for example, in the case that $\mathcal{J}$ is a discrete category with two objects, the diagonal functor $\mathcal{C} \to \mathcal{C} \times \mathcal{C}$ is recovered.
Diagonal functors provide a way to define limits and colimits of diagrams. Given a diagram $F : \mathcal{J} \to \mathcal{C}$, a natural transformation $\Delta_a \to F$ (for some object $a$ of $\mathcal{C}$) is called a cone for $F$. These cones and their factorizations correspond precisely to the objects and morphisms of the comma category $(\Delta \downarrow F)$, and a limit of $F$ is a terminal object in $(\Delta \downarrow F)$, i.e., a universal arrow $\Delta \to F$. Dually, a colimit of $F$ is an initial object in the comma category $(F \downarrow \Delta)$, i.e., a universal arrow $F \to \Delta$.
If every functor from $\mathcal{J}$ to $\mathcal{C}$ has a limit (which will be the case if $\mathcal{C}$ is complete), then the operation of taking limits is itself a functor from $\mathcal{C}^{\mathcal{J}}$ to $\mathcal{C}$. The limit functor is the right-adjoint of the diagonal functor. Similarly, the colimit functor (which exists if the category is cocomplete) is the left-adjoint of the diagonal functor. For example, the diagonal functor $\mathcal{C} \to \mathcal{C} \times \mathcal{C}$ described above is the left-adjoint of the binary product functor and the right-adjoint of the binary coproduct functor.
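Concretely, these adjunctions are the natural bijections of hom-sets (a standard formulation, stated here for explicitness):

$$\operatorname{Hom}_{\mathcal{C}^{\mathcal{J}}}(\Delta(c), F) \cong \operatorname{Hom}_{\mathcal{C}}(c, \lim F), \qquad \operatorname{Hom}_{\mathcal{C}}(\operatorname{colim} F, c) \cong \operatorname{Hom}_{\mathcal{C}^{\mathcal{J}}}(F, \Delta(c)),$$

the first saying that a cone from $c$ to $F$ is the same thing as a morphism from $c$ into the limit, and the second, dually, that a cocone from $F$ to $c$ is the same thing as a morphism out of the colimit.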
See also
Diagram (category theory)
Cone (category theory)
Diagonal morphism
References
Category theory | Diagonal functor | Mathematics | 415 |
4,069,270 | https://en.wikipedia.org/wiki/Difference%20polynomials | In mathematics, in the area of complex analysis, the general difference polynomials are a polynomial sequence, a certain subclass of the Sheffer polynomials, which include the Newton polynomials, Selberg's polynomials, and the Stirling interpolation polynomials as special cases.
Definition
The general difference polynomial sequence is given by

$$p_n(z) = \frac{z}{n} \binom{z - \beta n - 1}{n - 1}$$

where $\binom{z}{n}$ is the binomial coefficient. For $\beta = 0$, the generated polynomials are the Newton polynomials

$$p_n(z) = \binom{z}{n} = \frac{z(z-1)\cdots(z-n+1)}{n!}.$$

The case of $\beta = 1$ generates Selberg's polynomials, and the case of $\beta = -1/2$ generates Stirling's interpolation polynomials.
Moving differences
Given an analytic function $f(z)$, define the moving difference of $f$ as

$$\mathcal{L}_n(f) = \Delta^n f(\beta n)$$

where $\Delta$ is the forward difference operator. Then, provided that $f$ obeys certain summability conditions, it may be represented in terms of these polynomials as

$$f(z) = \sum_{n=0}^\infty p_n(z)\, \mathcal{L}_n(f).$$
The conditions for summability (that is, convergence) of this sequence are a fairly complex topic; in general, one may say that a necessary condition is that the analytic function be of less than exponential type. Summability conditions are discussed in detail in Boas & Buck.
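For the simplest case, $\beta = 0$, the moving differences reduce to ordinary forward differences at 0 and the expansion is Newton's forward-difference series. The sketch below is illustrative Python (function names are ours, not from the source) reconstructing a polynomial from its forward differences at the integers:

```python
def forward_differences(values):
    """Iterated forward differences Delta^n f(0) from samples f(0), f(1), ..."""
    diffs, row = [values[0]], list(values)
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        diffs.append(row[0])
    return diffs

def binom(z, n):
    """Binomial coefficient C(z, n) for a non-negative integer z."""
    out = 1
    for k in range(n):
        out = out * (z - k) // (k + 1)
    return out

def newton_series(z, diffs):
    """Evaluate sum_n C(z, n) * Delta^n f(0) -- the beta = 0 (Newton) case."""
    return sum(d * binom(z, n) for n, d in enumerate(diffs))

f = lambda x: x**3 - 2*x + 1                         # trivially summable
diffs = forward_differences([f(i) for i in range(5)])
print([newton_series(z, diffs) for z in range(8)])   # matches f on 0..7
print([f(z) for z in range(8)])
```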
Generating function
The generating function for the general difference polynomials is given by

$$e^{zt} = \sum_{n=0}^\infty p_n(z) \left[ \left(e^t - 1\right) e^{\beta t} \right]^n.$$

This generating function can be brought into the form of the generalized Appell representation

$$K(z, w) = A(w) \Psi(z g(w)) = \sum_{n=0}^\infty p_n(z) w^n$$

by setting $A(w) = 1$, $\Psi(x) = e^x$, $g(w) = t$ and $w = (e^t - 1) e^{\beta t}$.
See also
Carlson's theorem
Bernoulli polynomials of the second kind
References
Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected), (1964) Academic Press Inc., Publishers New York, Springer-Verlag, Berlin. Library of Congress Card Number 63-23263.
Polynomials
Finite differences
Factorial and binomial topics | Difference polynomials | Mathematics | 321 |
2,874,626 | https://en.wikipedia.org/wiki/Interleave%20sequence | In mathematics, an interleave sequence is obtained by merging two sequences via an in shuffle.
Let $S$ be a set, and let $\{x_i\}$ and $\{y_i\}$, $i = 0, 1, 2, \ldots$, be two sequences in $S$. The interleave sequence is defined to be the sequence $x_0, y_0, x_1, y_1, x_2, y_2, \ldots$. Formally, it is the sequence $\{z_i\}$, $i = 0, 1, 2, \ldots$, given by

$$z_i = \begin{cases} x_k & \text{if } i = 2k \text{ is even,} \\ y_k & \text{if } i = 2k + 1 \text{ is odd.} \end{cases}$$
Properties
The interleave sequence $\{z_i\}$ is convergent if and only if the sequences $\{x_i\}$ and $\{y_i\}$ are convergent and have the same limit.
Consider two real numbers a and b greater than zero and smaller than 1. One can interleave the sequences of digits of a and b, which will determine a third number c, also greater than zero and smaller than 1. In this way one obtains an injection from the square to the interval (0, 1). Different radixes give rise to different injections; the one for the binary numbers is called the Z-order curve or Morton code.
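The same digit-interleaving construction, applied to the binary expansions of integers, gives the Z-order (Morton) code mentioned above. A minimal sketch (illustrative Python; names are ours, not from the source):

```python
def morton_encode(x: int, y: int, bits: int = 16) -> int:
    """Interleave the low `bits` bits of x and y: x's bits land in the
    even positions of the result, y's bits in the odd positions."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # digit of x -> even slot
        z |= ((y >> i) & 1) << (2 * i + 1)   # digit of y -> odd slot
    return z

print(bin(morton_encode(0b1010, 0b0110, bits=4)))  # 0b1101100
```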
References
Real analysis
Sequences and series | Interleave sequence | Mathematics | 179 |
6,464,661 | https://en.wikipedia.org/wiki/Peptide%20library | A peptide library is a tool for studying proteins. Peptide libraries typically contain a large number of peptides that have a systematic combination of amino acids. Usually, solid phase synthesis, e.g. resin as a flat surface or beads, is used for peptide library generation. Peptide libraries are a popular tool for experiments in drug design, protein–protein interactions, and other biochemical and pharmaceutical applications.
Synthetic peptide libraries are synthesized without utilizing biological systems such as phage or in vitro translation. There are at least five subtypes of synthetic peptide libraries that differ from each other by the design of the library and/or the method used for the synthesis of the library. The subtypes include:
Overlapping peptide libraries - in which the entirety of a larger protein is used to produce a library of 8-20 amino acid peptides which overlap; these libraries can be used to identify the specific regions of a larger protein which participate in a given interaction or to provide pre-digested versions of a larger protein for binding.
Truncation peptide libraries - in which a given peptide is produced with various or all N or C terminal truncations, these smaller fragments can be used to identify the minimal required region of a peptide for a given interaction being studied.
Random libraries - randomly generated peptides of a set length, or range of lengths, can be used to identify novel binding partners of a target of interest.
Alanine scanning libraries - in which each amino acid of a given protein or peptide is replaced with an alanine sequentially, such that each peptide contains only one alanine mutation but all possible single mutations to alanine are present; this can be used to identify residues critical for binding.
Positional or scrambled peptide libraries - in which specific positions in the peptide are substituted for many or all other amino acids such that the effect of each amino acid at that position in the peptide on the binding or other activity of the peptide can be tested. Scrambled libraries are often random peptides and used as negative controls.
Solid phase peptide synthesis is limited to a peptide chain length of approximately 70 amino acids and is generally unsuitable for the study of larger proteins. Many libraries utilize peptide chains much shorter than 70 amino acids. For 20 encoded amino acids at maximally 70 positions, this results in an upper limit of 20^70, or more than 10 quindecillion (1×10^91), possible combinations, not accounting for the potential use of amino acids with post-translational modifications or amino acids not encoded in the genetic code, such as selenocysteine and pyrrolysine. Peptide libraries generally encompass only a fraction of this diversity, selected depending on the needs of the experiment, for instance by keeping some amino acids constant at certain positions.
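The bound is easy to verify directly (illustrative Python):

```python
# Upper bound on sequence diversity: 20 standard amino acids
# at each of 70 positions.
n = 20 ** 70
print(f"{n:.3e}")   # 1.181e+91, i.e. just over 1 x 10^91
```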
Large random peptide libraries are often used for the synthesis of certain peptide molecules, such as ultra-large chemical libraries for the discovery of high-affinity peptide binders. Any increase in the library size severely affects parameters, such as the synthesis scale, the number of library members, the sequence deconvolution and peptide structure elucidation. To mitigate these technical challenges, an algorithm-supported approach to peptide library design may use molecular mass and amino acid diversity to simplify the laborious permutation identification in complex mixtures when using mass spectrometry. This approach is used to avoid mass redundancy.
Biological reagent companies, such as Pepscan, ProteoGenix, Mimotopes, GenScript and many others, manufacture customized peptide libraries.
Example
A peptide chain of 10 residues in length is used in native chemical ligation with a larger recombinantly expressed protein.
Residue 1: alanine
Residue 2: one of glutamine, glycine, arginine, glutamic acid, serine, or methionine
Residue 3: any one of the 20 amino acids
Residue 4: acetyllysine
Residue 5: alanine
Residue 6: isoleucine
Residue 7: aspartic acid
Residue 8: phenylalanine
Residue 9: acetyllysine
Residue 10: arginine with the carboxy terminal thioester
With 7 possibilities at Residue 2 and 20 possibilities at Residue 3, the total would be 7 × 20, or 140 different polypeptides in the library.
This peptide library would be useful for analyzing the effect of the post-translational modification acetylation on lysine, which neutralizes its positive charge. Having a library of different peptides at residues 2 and 3 would let the investigator see whether changes in the chemical properties of the N-terminal tail of the ligated protein make the protein more useful, or useful in a different way.
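Enumerating a combinatorial library like this one is straightforward in software. The sketch below is illustrative Python; the one-letter coding, with lowercase 'k' standing in for acetyllysine, is an ad-hoc convention of this sketch, and Residue 2 uses the six options named in the list above:

```python
from itertools import product

ALL_AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

# Variable positions of the 10-mer above; lowercase 'k' is this
# sketch's ad-hoc code for acetyllysine.
res2 = "QGRESM"                   # Gln, Gly, Arg, Glu, Ser, Met
res3 = ALL_AA                     # any of the 20 amino acids

template = ("A", res2, res3, "k", "A", "I", "D", "F", "k", "R")
library = ["".join(p) for p in product(*template)]

print(len(library))               # len(res2) * len(res3) members
print(library[:2])                # ['AQAkAIDFkR', 'AQCkAIDFkR']
```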
References
Further reading
Peptide library at pbcpeptide.com
Biochemistry methods
Proteins | Peptide library | Chemistry,Biology | 963 |