id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
19,305,765 | https://en.wikipedia.org/wiki/HD%20126128 | HD 126128/9 is a triple star in the northern constellation of Boötes. Two of the components (HD 126128) form a binary star system with an orbital period of 39.5 years and an eccentricity of 0.25. The third component (HD 126129), and the brightest member of the trio, lies at an angular separation of 6.250″ from the other two.
References
External links
HR 5385
CCDM J14234+082
Image HD 126128
Boötes
126128
Triple star systems
F-type main-sequence stars
070327
5385 6
Durchmusterung objects | HD 126128 | Astronomy | 134 |
13,529,988 | https://en.wikipedia.org/wiki/Westerhout%205 | Westerhout 5 (Sharpless 2-199, LBN 667, Soul Nebula) is an emission nebula located in Cassiopeia. Several small open clusters are embedded in the nebula: CR 34, 632, and 634 (in the head) and IC 1848 (in the body). The object is more commonly called by the cluster designation IC 1848.
Small emission nebula IC 1871 is present just left of the top of the head, and small emission nebulae 670 and 669 are just below the lower back area.
The galaxies Maffei 1 and Maffei 2 are both near the nebula, although light extinction from the Milky Way makes them very hard to see.
Once thought to be part of the Local Group, they are now known to belong to their own group, the IC 342/Maffei Group.
This complex is the eastern neighbor of IC 1805 (Heart Nebula) and the two are often mentioned together as the "Heart and Soul".
Star formation
W5, a radio source within the nebula, spans an area of sky equivalent to four full moons and is about 6,500 light-years away in the constellation Cassiopeia. Like other massive star-forming regions, such as Orion and Carina, W5 contains large cavities that were carved out by radiation and winds from the region's most massive stars. According to the theory of triggered star formation, the carving out of these cavities pushes gas together, causing it to ignite into successive generations of new stars. The image in the gallery above contains some of the best evidence yet for the triggered star formation theory. Scientists analyzing the photo have been able to show that the ages of the stars become progressively and systematically younger with distance from the center of the cavities.
References
H II regions
Astronomical radio sources
Sharpless objects
Articles containing video clips
Cassiopeia (constellation)
Emission nebulae
IC objects
Star-forming regions | Westerhout 5 | Astronomy | 397 |
1,794,797 | https://en.wikipedia.org/wiki/Dynamic%20enterprise%20modeling | Dynamic enterprise modeling (DEM) is an enterprise modeling approach developed by the Baan company, and used for the Baan enterprise resource planning system which aims "to align and implement it in the organizational architecture of the end-using company".
According to Koning (2008), Baan introduced dynamic enterprise modelling in 1996 as a "means for implementing the Baan ERP product. The modelling focused on a Petri net–based technique for business process modelling to which the Baan application units were to be linked. DEM also contains a supply-chain diagram tool for the logistic network of the company and of an enterprise function modelling diagram".
Overview
To align a specific company with dynamic enterprise modeling, the organizational structure is blueprinted top-down from high-level business processes to low-level processes. This blueprint is used as a roadmap of the organization that is compatible with the structural roadmap of the software package. Having both roadmaps, the software package and the organizational structure can be aligned. The blueprint of an organizational structure in dynamic enterprise modeling is called a reference model. A reference model is the total view of visions, functions, and organizational structures and processes, which together can be defined as a representative way of doing business in a certain organizational typology.
The DEM reference model consists of a set of underlying models that depict the organizational architecture in a top-down direction. The underlying models are:
Enterprise structure diagrams: The company site structure is visualized with the dispersed geographic locations, the headquarters, manufacturing plants, warehouses, and supplier and customer locations. Physical as well as logical multi-site organizations for internal logistic or financial flow optimization can be diagrammed.
Business control model: The business control model represents the primary processes of the organization and their control, grouped in business functions. The DEM reference model consists of one main Business Control Model, which is broken down into several other Business Control Models per function area of the organization.
Business function model: The business function model is a function model that focuses on the targets of the various functions within the company.
Business process model: The business process model focuses on the execution of the functions and processes that originate from the business control model and the business function model. Process flows are depicted and processes are detailed.
Business organization model: The business organization model focuses less on the processes and more on organizational aspects such as roles and responsibilities.
Together these models are capable of depicting the total organizational structure and the aspects that are necessary during the implementation of dynamic enterprise modeling. The models can have differentiations, which are based on the typology of the organization (e.g., engineer-to-order organizations require different model structures than assemble-to-order organizations). To elaborate on the way that the reference model is used to implement software and to keep track of the scope of implementation methods, the business control model and the business process model will be explained in detail.
Dynamic enterprise modeling topics
Business control model
The business control model consists of the business functions of the organization and their internal and external links. Basic features in the model are:
Request-feedback-loop: A link from, to, or between business functions is called a request-feedback-loop, which consists of four states that complete the process and information flows between both business functions. The states are labeled: requested, committed, completed, and accepted.
Workflow case: A workflow case is the description of the execution and the target of the process that occurs between two business functions. The most important critical factors of the workflow case are quantity, quality, and time. The four states of the request-feedback loop together represent the workflow case.
Triggers: Business functions are aggregates of business processes and focus mainly on the triggers (control) between processes, thus not on the information flows.
Business functions: In an optimal situation for the modeling process, a company has only one business function. Business functions are, however, subdivided when:
The nature and characteristics of workflow cases fluctuate
The frequency of underlying processes fluctuates
Detail-level fluctuates
More than one type of request triggers a function
Next to interaction between two business functions, interaction can also exist between objects that are not in the scope of the reference model. These objects can be external business functions and agents.
External business function: This is a group of processes that is part of the organization (meaning that the organization can control the functions), but that falls outside the scope of the reference model.
Agents, on the other hand, are entities similar to business functions, with the exception that they are external to the business (e.g., customers and suppliers).
Processes within or between business functions are executed by triggers, which can be event-driven or time-driven.
Exceptions in a system are handled, according to the set handling level in the business process configuration, when the success path of the model is not met in practice.
Subroutines of processes can be modeled in the Business Control Model to take care of possible exceptions that can occur during the execution of a process (e.g., delay handling in the delivery of goods).
In addition to business functions that consist of the main processes of the organization, management functions exist.
Management business functions: These are functions that manage the business process itself and thus support the execution and triggering of the main business functions.
Having this reference, the main processes of the organization can be captured in the Business Control Model. The main functions of the organization are grouped in the business functions, which consist of the processes that are part of the specific business function. Interactions between the business functions are then depicted using the request-feedback loops.
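As an illustration only (not part of DEM or the Baan toolset), the four states of a request-feedback loop and the critical factors of a workflow case can be sketched as a small state machine; all names and values below are hypothetical.

```python
from enum import Enum

class RequestFeedbackState(Enum):
    REQUESTED = 1
    COMMITTED = 2
    COMPLETED = 3
    ACCEPTED = 4

class WorkflowCase:
    """A workflow case between two business functions, tracked through the
    four states of a request-feedback loop."""
    def __init__(self, description, quantity, quality, due_time):
        self.description = description
        self.quantity = quantity    # critical factor: how much
        self.quality = quality      # critical factor: to what standard
        self.due_time = due_time    # critical factor: by when
        self.state = RequestFeedbackState.REQUESTED

    def advance(self):
        # Move the case one step along the loop; ACCEPTED closes it.
        if self.state is not RequestFeedbackState.ACCEPTED:
            self.state = RequestFeedbackState(self.state.value + 1)

# Example: a sales function requests 100 units from production by week 20.
case = WorkflowCase("produce finished goods", quantity=100, quality="grade A", due_time="week 20")
for _ in range(3):
    case.advance()
print(case.state)  # RequestFeedbackState.ACCEPTED
```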
Constructing the business control model
A business control model is constructed according to a set path.
First, the scope of the business is defined. The scope covers what to model and includes the definition of the agents and external business functions that relate to the business.
Next, the scope is depicted as a black-box model, with all the agents and external business functions surrounding the black box.
The next step is to define the process and information flows (request-feedback flows) between the agents and external business functions to and from the black box of the business control model. Defining the request-feedback flows enables the modeler to define what processes are inside the black box.
After creating the main business functions within the business control model, the several business functions are detailed out.
In the case of a production business, it is vital to define the customer order decoupling point, referring to the split in the physical process where processes are based on the customer order instead of forecasts.
Service-based businesses, on the other hand, do not have a physical goods flow and thus do not require a physical process model. It is, however, imaginable that the same type of process flow can be utilized to construct a business control model for a service-based business, as a service can be interpreted as a product as well. In this way, a business control model can be constructed for a service-based business in the same way as for a physical goods production business, with intangible instead of tangible goods.
Next to the low-level physical production process, the high-level business functions need to be defined as well. In most cases the higher-level business functions relate to planning functions and other tactical and strategic business functions, followed by functions such as sales and purchasing.
After the high-level detail definitions, the business functions are decomposed into lower-level detail definitions to align the business control model with the lower-level models within the reference model, primarily the Business Process Model. In the Business Process Model the processes are elaborated down to the lowest level of detail. Given this level of detail, the Baan software functionality is then projected onto the processes depicted in the Business Process Model.
Business process model
In DEM, the business process model is constructed using Petri net building blocks. DEM uses four construction elements:
State: A state element represents the state of a job token and is followed by the activity that executes the job token of the state.
Processing activity: A processing activity is the activity that processes the job token of a state, transforming the state of the job token to another state.
Control activity: A control activity navigates the process activity but does not execute it.
Sub-process: A sub-process is a collection of different other processes, aggregated into a single element to manage complexity.
These four construction elements enable the modeling of DEM models. The modeling follows a set of modeling constraints, guiding the modeling process so that different modelers create similar models. Control activities exist in different structures in order to set different possible routes for process flows. The structures used for control activities are:
OR-split / XOR-split: This structure creates two new states out of one state, signaling the creation of two job tokens out of one job token. If the job token may continue along both output branches, the split is an OR; if it may continue along only one, the split is an exclusive OR (XOR).
AND-join construction: Two job tokens are both needed to enable the control activity, creating one new job token (thus one new state).
OR-join / XOR-join: Up to two job tokens are used to enable the control activity, creating one new job token.
OR means one of the two starting job tokens can be used, or both; XOR means only one of the tokens can be used to create the output job token.
An example
The example below demonstrates the modeling of the concept of marriage and divorce using Petri net building blocks.
The Petri net built model expresses the transformation from a single man and woman to a married couple through marriage and back to single individuals through divorce.
The model starts with the two states called man and woman.
Through an AND-join construction (both man and woman are needed in order to form a couple) the two states are joined within the control activity called coupling to the new state called couple.
The couple state then is transformed through the processing activity called marriage, resulting in the transformed state of married couple.
The married couple state is then transformed using the processing activity called divorce, resulting in the state called divorced couple.
The control activity called decoupling finally splits the divorced couple state into the states of man and woman.
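The example can also be expressed as a minimal token game. The sketch below is an illustrative Petri net simulation written for this article (it is not DEM tooling): places hold job tokens, and a transition fires only when every one of its input places holds a token.

```python
# Places hold job tokens; each transition lists its input and output places.
places = {"man": 1, "woman": 1, "couple": 0, "married_couple": 0, "divorced_couple": 0}

transitions = {
    "coupling":   (["man", "woman"], ["couple"]),            # AND-join: both inputs needed
    "marriage":   (["couple"], ["married_couple"]),           # processing activity
    "divorce":    (["married_couple"], ["divorced_couple"]),  # processing activity
    "decoupling": (["divorced_couple"], ["man", "woman"]),    # split back into two states
}

def fire(name):
    inputs, outputs = transitions[name]
    if not all(places[p] > 0 for p in inputs):   # enabled only if every input place has a token
        raise RuntimeError(f"transition {name!r} is not enabled")
    for p in inputs:
        places[p] -= 1
    for p in outputs:
        places[p] += 1

for t in ["coupling", "marriage", "divorce", "decoupling"]:
    fire(t)

print(places)  # the full cycle returns one token each to 'man' and 'woman'
```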
Assessments
Using an embedded method brings the advantage that the method is designed to implement the software product it comes with. This suggests a less complicated use of the method and more support possibilities.
The negative aspect of an embedded method obviously is that it can only be used for that specific product software. Engineers and consultants who work with several software products may find a general method more useful, as it gives them a single way of working.
See also
Dynamic enterprise
Dynamic enterprise architecture (DYA)
Enterprise resource planning
SAP R/3
References
Further reading
Fred Driese and Martin Hromek (1999). "Some aspects of strategic tactical and operational usage of dynamic enterprise modeling".
Van Es, R.M., Post, H.A. eds. (1996). Dynamic Enterprise Modelling: A Paradigm Shift in Software Implementation. Kluwer.
External links
Baan Dynamic Enterprise Management short intro
Dynamic Enterprise Modeling presentation 1999.
Management
Enterprise modelling | Dynamic enterprise modeling | Engineering | 2,297 |
40,011,187 | https://en.wikipedia.org/wiki/High%20energy%20X-ray%20imaging%20technology | High energy X-ray imaging technology (HEXITEC) is a family of spectroscopic, single photon counting, pixel detectors developed for high energy X-ray and gamma ray spectroscopy applications.
The HEXITEC consortium was formed in 2006, funded by the Engineering and Physical Sciences Research Council, UK. The consortium is led by the University of Manchester; other members include the Science and Technology Facilities Council, the University of Surrey, Durham University and Birkbeck, University of London. In 2010 the consortium expanded to include the Royal Surrey County Hospital and University College London. The vision of the consortium was to "develop a UK-based capability in high energy X-ray imaging technology". It is now available commercially through Quantum Detectors.
High energy X-ray imaging technology
X-ray spectroscopy is a powerful experimental technique that provides qualitative information about the elemental composition and internal stresses and strains within a specimen. High energy X-rays have the ability to penetrate deeply into materials, allowing the examination of dense objects such as welds in steel, geological core sections bearing oil or gas, or the internal observation of chemical reactions inside heavy plant or machinery. Different experimental techniques such as X-ray fluorescence imaging and X-ray diffraction imaging require X-ray detectors that are sensitive over a broad range of energies. Established semiconductor detector technology based on silicon and germanium has excellent energy resolution at X-ray energies under 30 keV, but above this, due to a reduction in the material mass attenuation coefficient, the detection efficiency is dramatically reduced. To detect high energy X-rays, detectors produced from higher density materials are required.
High density, compound semiconductors such as cadmium telluride (CdTe), cadmium zinc telluride (CdZnTe), gallium arsenide (GaAs), mercuric iodide or thallium bromide have been the subject of extensive research for use in high energy X-ray detection. The favorable charge transport properties and high electrical resistivity of CdTe and CdZnTe have made them ideally suited to applications requiring spectroscopy at higher X-ray energies. Imaging applications, such as SPECT, require detectors with a pixelated electrode that allow objects to be imaged in 2D and 3D. Each pixel of the detector requires its own chain of readout electronics and for a highly pixelated detector this requires the use of a high sensitivity application-specific integrated circuit.
The HEXITEC ASIC
The HEXITEC application-specific integrated circuit (ASIC) was developed for the consortium by the Science and Technology Facilities Council Rutherford Appleton Laboratory. The initial prototype consisted of an array of 20 × 20 pixels on a 250 μm pitch fabricated using a 0.35 μm CMOS process; the second generation of the ASIC expanded the array size to 80 × 80 pixels (4 cm²). Each ASIC pixel contains a charge amplifier, a CR-RC shaping amplifier and a peak track-and-hold circuit. The ASIC records the position and total charge deposited for each X-ray event detected.
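The per-event output described above (pixel position plus deposited charge) can be accumulated offline into one energy spectrum per pixel. The following is a minimal, hypothetical sketch of that binning step rather than HEXITEC's actual data pipeline; the array size and energy range follow the figures quoted in this article, while the bin width and event values are illustrative assumptions.

```python
import numpy as np

# Hypothetical event list: one row per detected photon as (row, col, energy_keV).
events = np.array([
    [12, 40, 59.5],
    [12, 40, 60.1],
    [70,  3, 122.0],
])

n_rows, n_cols = 80, 80                       # second-generation 80 x 80 pixel array
energy_bins = np.arange(0.0, 200.25, 0.25)    # 0.25 keV bins over 0-200 keV (assumed binning)

# One energy histogram (spectrum) per pixel.
spectra = np.zeros((n_rows, n_cols, len(energy_bins) - 1))
for r, c, e in events:
    b = np.searchsorted(energy_bins, e) - 1
    if 0 <= b < spectra.shape[2]:
        spectra[int(r), int(c), b] += 1

# Integrating each pixel's spectrum over energy gives an ordinary intensity image.
image = spectra.sum(axis=2)
print(image[12, 40], image[70, 3])  # 2.0 1.0
```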
The PIXIE ASIC
The PIXIE ASIC is a research and development ASIC developed by the Science and Technology Facilities Council Rutherford Appleton Laboratory for the consortium. The ASIC is being used to investigate charge induction and the small pixel effect in semiconductor detectors as described by the Shockley–Ramo theorem. The ASIC consists of three separate arrays of 3 × 3 pixels on a 250 μm pitch and a single array of 3 × 3 pixels on a 500 μm pitch. Each pixel contains a charge amplifier and output buffer allowing the induced charge pulses of each pixel to be recorded.
The HEXITEC-MHz ASIC
The original HEXITEC ASIC was delivered in the early 2010s and operated at a maximum frame rate of 10 kHz. At this speed the detector system was able to deliver per-pixel X-ray spectroscopy with an energy resolution of <1 keV but was limited to fluxes of 10⁴ photons s⁻¹ mm⁻². With the development of diffraction-limited storage ring synchrotrons, the intensity of X-rays produced in typical experiments increased by more than a factor of 100. In order to continue to provide a spectroscopic X-ray imaging capability at these facilities, a new generation of the HEXITEC ASIC had to be developed. The development of the HEXITEC-MHz ASIC began in 2018 with the aim of increasing the frame rate of the camera system to 1 MHz to allow spectroscopic imaging at photon fluxes in excess of 10⁶ photons s⁻¹ mm⁻² while maintaining the same spectroscopic performance. The first ASICs were delivered in 2022 and are currently undergoing testing at the Science and Technology Facilities Council Rutherford Appleton Laboratory and Diamond Light Source.
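As a hedged back-of-envelope estimate (not an official HEXITEC figure), the required frame rate scales roughly linearly with flux if each pixel is to record well under one photon per frame; assuming the 250 μm pixel pitch quoted above:

```python
# Assumed pixel pitch of 250 micrometres -> 0.0625 mm^2 per pixel.
pixel_area_mm2 = 0.25 * 0.25

# (photon flux in photons s^-1 mm^-2, frame rate in frames s^-1)
for flux, frame_rate in [(1e4, 1e4), (1e6, 1e6)]:
    photons_per_pixel_per_frame = flux * pixel_area_mm2 / frame_rate
    print(f"flux {flux:.0e}, frame rate {frame_rate:.0e}: "
          f"{photons_per_pixel_per_frame:.3f} photons per pixel per frame")

# Both cases give ~0.06 photons per pixel per frame, so a ~100x higher flux
# needs a ~100x higher frame rate to keep per-frame pile-up comparable.
```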
HEXITEC detectors
HEXITEC ASICs are flip-chip bonded to a direct-conversion semiconductor detector using a low-temperature (~100 °C) curing silver epoxy and gold stud technique in a hybrid detector arrangement. The X-ray detector layer is a semiconductor, typically cadmium telluride (CdTe) or cadmium zinc telluride (CdZnTe), between 1 and 3 mm thick. The detectors consist of a planar cathode and a pixelated anode and are operated under a negative bias voltage. X-rays and gamma rays interacting within the detector layer form charge clouds of electron-hole pairs which drift from the cathode to the anode pixels. The charge drifting across the detector induces charge on the ASIC pixels, as described by the Shockley–Ramo theorem, which forms the detected signal. The detectors are capable of measuring a photo-peak FWHM of the order of 1 keV in the energy range 3–200 keV.
Applications
HEXITEC detectors are in use in a number of different application areas, including materials science, medical imaging, illicit material detection, and X-ray astronomy.
References
X-ray instrumentation | High energy X-ray imaging technology | Technology,Engineering | 1,206 |
30,991,801 | https://en.wikipedia.org/wiki/Data%20Integrity%20Field | Data Integrity Field (DIF) is an approach to protect data integrity in computer data storage from data corruption. It was proposed in 2003 by the T10 subcommittee of the International Committee for Information Technology Standards. A similar approach for data integrity was added in 2016 to the NVMe 1.2.1 specification.
Packet-based storage transport protocols have CRC protection on command and data payloads. Interconnect buses have parity protection. Memory systems have parity detection/correction schemes. I/O protocol controllers at the transport/interconnect boundaries have internal data path protection.
Data availability in storage systems is frequently measured simply in terms of the reliability of the hardware components and the effects of redundant hardware. But the reliability of the software, its ability to detect errors, and its ability to correctly report or apply corrective actions to a failure have a significant bearing on the overall storage system availability.
The data exchange usually takes place between the host CPU and the storage disk. There may be a storage data controller in between these two; the controller could be a RAID controller or a simple storage switch.
DIF extends the disk sector from its traditional 512 bytes to 520 bytes by adding eight additional protection bytes.
This extended sector is defined for Small Computer System Interface (SCSI) devices, which is in turn used in many enterprise storage technologies, such as Fibre Channel. Oracle Corporation included support for DIF in the Linux kernel.
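The eight protection bytes are commonly described as a 2-byte guard tag (a CRC-16 of the 512 data bytes computed with the T10-DIF polynomial 0x8BB7), a 2-byte application tag, and a 4-byte reference tag. The sketch below illustrates extending a sector to 520 bytes under those assumptions; it is a simplified illustration, not an implementation of the T10 specification.

```python
import struct

def crc16_t10_dif(data: bytes) -> int:
    # Bit-by-bit CRC-16 using polynomial 0x8BB7 (assumed: initial value 0, no reflection).
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def protect_sector(data: bytes, app_tag: int, ref_tag: int) -> bytes:
    """Return a 520-byte protected sector: 512 data bytes + 8 protection bytes."""
    assert len(data) == 512
    guard = crc16_t10_dif(data)
    # Assumed layout, big-endian: guard (2 bytes) | application tag (2) | reference tag (4).
    return data + struct.pack(">HHI", guard, app_tag, ref_tag)

sector = protect_sector(bytes(512), app_tag=0, ref_tag=1234)
print(len(sector))  # 520
```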
An evolution of this technology called T10 Protection Information was introduced in 2011.
References
External links
Linux Data Integrity, August 30, 2008, Oracle Corporation, by Martin K. Petersen (archived from the original on January 9, 2015)
Linux Storage Topology and Advanced Features, November 24, 2009, by Martin K. Petersen
Data Integrity Field - T10.org (link working as of February 15, 2019).
Error detection and correction | Data Integrity Field | Engineering | 372 |
6,650,279 | https://en.wikipedia.org/wiki/Spacecraft%20electric%20propulsion | Spacecraft electric propulsion (or just electric propulsion) is a type of spacecraft propulsion technique that uses electrostatic or electromagnetic fields to accelerate mass to high speed and thus generating thrust to modify the velocity of a spacecraft in orbit. The propulsion system is controlled by power electronics.
Electric thrusters typically use much less propellant than chemical rockets because they have a higher exhaust speed (operate at a higher specific impulse) than chemical rockets. Due to limited electric power the thrust is much weaker compared to chemical rockets, but electric propulsion can provide thrust for a longer time.
Electric propulsion was first demonstrated in the 1960s and is now a mature and widely used technology on spacecraft. American and Russian satellites have used electric propulsion for decades. Over 500 spacecraft operated throughout the Solar System use electric propulsion for station keeping, orbit raising, or primary propulsion. In the future, the most advanced electric thrusters may be able to impart a delta-v large enough to take a spacecraft to the outer planets of the Solar System (with nuclear power), but insufficient for interstellar travel. An electric rocket with an external power source (for example, power transmitted by laser to its photovoltaic panels) has a theoretical possibility for interstellar flight. However, electric propulsion is not suitable for launches from the Earth's surface, as it offers too little thrust.
On a journey to Mars, an electrically powered ship might be able to carry 70% of its initial mass to the destination, while a chemical rocket could carry only a few percent.
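This comparison follows from the Tsiolkovsky rocket equation. The sketch below uses assumed, illustrative values (a one-way delta-v of 10.5 km/s, specific impulses of 3,000 s for an electric thruster and 450 s for a chemical stage) and ignores tankage and staging, so it only approximates the figures quoted above; the chemical case worsens further once structural mass is included.

```python
import math

g0 = 9.80665          # standard gravity, m/s^2
delta_v = 10_500.0    # assumed mission delta-v, m/s (illustrative only)

for label, isp in [("electric (Isp ~3000 s)", 3000.0),
                   ("chemical (Isp ~450 s)", 450.0)]:
    ve = isp * g0                                   # effective exhaust velocity
    final_mass_fraction = math.exp(-delta_v / ve)   # Tsiolkovsky rocket equation
    print(f"{label}: ~{final_mass_fraction:.0%} of initial mass arrives")

# electric (Isp ~3000 s): ~70% of initial mass arrives
# chemical (Isp ~450 s): ~9% of initial mass arrives
```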
History
The idea of electric propulsion for spacecraft was introduced in 1911 by Konstantin Tsiolkovsky. Earlier, Robert Goddard had noted such a possibility in his personal notebook.
On 15 May 1929, the Soviet research laboratory Gas Dynamics Laboratory (GDL) commenced development of electric rocket engines. Headed by Valentin Glushko, the laboratory created the world's first example of an electrothermal rocket engine in the early 1930s. This early work by GDL was steadily carried on, and electric rocket engines were used in the 1960s on board the Voskhod 1 spacecraft and the Zond 2 Mars probe.
The first test of electric propulsion was an experimental ion engine carried on board the Soviet Zond 1 spacecraft in April 1964; however, it operated erratically, possibly due to problems with the probe. The Zond 2 spacecraft also carried six pulsed plasma thrusters (PPTs) that served as actuators of the attitude control system. The PPT propulsion system was tested for 70 minutes on 14 December 1964, when the spacecraft was 4.2 million kilometers from Earth.
The first successful demonstration of an ion engine was NASA's SERT-1 (Space Electric Rocket Test) spacecraft. It launched on 20 July 1964 and operated for 31 minutes. A follow-up mission, SERT-2, launched on 3 February 1970. It carried two ion thrusters, one of which operated for more than five months and the other for almost three months.
Electrically powered propulsion with a nuclear reactor was considered by Tony Martin for the interstellar Project Daedalus in 1973, but the approach was rejected because of its thrust profile, the weight of the equipment needed to convert nuclear energy into electricity, and the resulting small acceleration, which would take a century to reach the desired speed.
By the early 2010s, many satellite manufacturers were offering electric propulsion options on their satellites—mostly for on-orbit attitude control—while some commercial communication satellite operators were beginning to use them for geosynchronous orbit insertion in place of traditional chemical rocket engines.
Types
Ion and plasma drives
These types of rocket-like reaction engines use electric energy to obtain thrust from propellant.
Electric propulsion thrusters for spacecraft may be grouped into three families based on the type of force used to accelerate the ions of the plasma:
Electrostatic
If the acceleration is caused mainly by the Coulomb force (i.e. application of a static electric field in the direction of the acceleration) the device is considered electrostatic. Types:
Gridded ion thruster
NASA Solar Technology Application Readiness (NSTAR)
HiPEP
Radiofrequency ion thruster
Hall-effect thruster, including its subtypes Stationary Plasma Thruster (SPT) and Thruster with Anode Layer (TAL)
Colloid ion thruster
Field-emission electric propulsion
Nano-particle field extraction thruster
Electrothermal
The electrothermal category groups devices that use electromagnetic fields to generate a plasma to increase the temperature of the bulk propellant. The thermal energy imparted to the propellant gas is then converted into kinetic energy by a nozzle of either solid material or magnetic fields. Low molecular weight gases (e.g. hydrogen, helium, ammonia) are preferred propellants for this kind of system.
An electrothermal engine uses a nozzle to convert heat into linear motion, so it is a true rocket even though the energy producing the heat comes from an external source.
Performance of electrothermal systems in terms of specific impulse (Isp) is 500 to ~1000 seconds, but exceeds that of cold gas thrusters, monopropellant rockets, and even most bipropellant rockets. In the USSR, electrothermal engines entered use in 1971; the Soviet "Meteor-3", "Meteor-Priroda", "Resurs-O" satellite series and the Russian "Elektro" satellite are equipped with them. Electrothermal systems by Aerojet (MR-510) are currently used on Lockheed Martin A2100 satellites using hydrazine as a propellant.
Resistojet
Arcjet
Microwave
Variable specific impulse magnetoplasma rocket (VASIMR)
Electromagnetic
Electromagnetic thrusters accelerate ions either by the Lorentz force or by the effect of electromagnetic fields where the electric field is not in the direction of the acceleration. Types:
Electrodeless plasma thruster
Magnetoplasmadynamic thruster
Pulsed inductive thruster
Pulsed plasma thruster
Helicon Double Layer Thruster
Magnetic field oscillating amplified thruster
Non-ion drives
Photonic
A photonic drive produces thrust by emitting photons rather than by expelling propellant mass; it interacts only with photons.
Electrodynamic tether
Electrodynamic tethers are long conducting wires, such as one deployed from a tether satellite, which can operate on electromagnetic principles as generators, by converting their kinetic energy to electric energy, or as motors, converting electric energy to kinetic energy. Electric potential is generated across a conductive tether by its motion through the Earth's magnetic field. The choice of the metal conductor to be used in an electrodynamic tether is determined by factors such as electrical conductivity, and density. Secondary factors, depending on the application, include cost, strength, and melting point.
Controversial
Some proposed propulsion methods apparently violate currently-understood laws of physics, including:
Quantum Vacuum Thruster
EM Drive or Cannae Drive
Steady vs. unsteady
Electric propulsion systems can be characterized as either steady (continuous firing for a prescribed duration) or unsteady (pulsed firings accumulating to a desired impulse). These classifications can be applied to all types of propulsion engines.
Dynamic properties
Electrically powered rocket engines provide lower thrust compared to chemical rockets by several orders of magnitude because of the limited electrical power available in a spacecraft. A chemical rocket imparts energy to the combustion products directly, whereas an electrical system requires several steps. However, the high velocity and lower reaction mass expended for the same thrust allows electric rockets to run on less fuel. This differs from the typical chemical-powered spacecraft, where the engines require more fuel, requiring the spacecraft to mostly follow an inertial trajectory. When near a planet, low-thrust propulsion may not offset the gravitational force. An electric rocket engine cannot provide enough thrust to lift the vehicle from a planet's surface, but a low thrust applied for a long interval can allow a spacecraft to manoeuvre near a planet.
See also
Magnetic sail, a proposed system powered by solar wind from the Sun or any star
List of spacecraft with electric propulsion, a list of past and proposed spacecraft which used electric propulsion
Rocket propulsion technologies (disambiguation)
References
External links
NASA Jet Propulsion Laboratory
The technological and commercial expansion of electric propulsion - D. Lev et al.
Electric (Ion) Propulsion, University Center for Atmospheric Research, University of Colorado at Boulder, 2000.
Distributed Power Architecture for Electric Propulsion
Choueiri, Edgar Y. (2009). New dawn of electric rocket
Robert G. Jahn and Edgar Y. Choueiri. Electric Propulsion
Colorado State University Electric Propulsion and Plasma Engineering (CEPPE) Laboratory
Stationary plasma thrusters (PDF)
electric space propulsion
Public Lessons Learned Entry: 0736
A Critical History of Electric Propulsion: The First Fifty Years (1906–1956) - AIAA-2004-3334
Aerospace America, AIAA publication, December 2005, Propulsion and Energy section, pp. 54–55, written by Mitchell Walker.
Russian inventions
Soviet inventions
Spacecraft propulsion
Electric motors | Spacecraft electric propulsion | Technology,Engineering | 1,799 |
74,026,474 | https://en.wikipedia.org/wiki/Fostering%20%28falconry%29 | Fostering, in falconry and reintroduction of birds, is a method of breeding birds for their introduction into the wild that consists of placing chicks in the nest of a couple that has others of a similar age and size. Sometimes it can also be used when the chicks have already left the nest but continue to be fed by their parents.
This method can be used in those species that do not have siblicide behaviors and that are capable of carrying out this adoption without rejecting the new chicks. In addition, the parents must have previously been assessed to find out if they are capable of feeding more chicks.
See also
Cross-fostering
Hack (falconry)
Hand-rearing
Human-guided migration
Puppet-rearing
References
Falconry
Animal reintroduction
Conservation biology | Fostering (falconry) | Biology | 156 |
1,149,356 | https://en.wikipedia.org/wiki/Ronald%20Maddison | Leading Aircraftman Ronald George Maddison (23 January 1933 – 6 May 1953) was a twenty-year-old Royal Air Force mechanic who was unlawfully killed as the result of exposure to nerve agents while acting as a voluntary test subject at Porton Down, in Wiltshire, England. After substantial controversy, his death was the subject of an inquest 51 years after the event.
Sarin test and death
Porton Down had been testing sarin on humans since October 1951, but the first adverse reaction was not recorded until February 1953. An even more severe reaction occurred on 27 April when one of six volunteers, a man named John Patrick Kelly, was exposed to 300 milligrams of sarin and fell into a coma but subsequently recovered. This prompted a reduction in the dose used in this series of experiments to 200 mg.
Along with other servicemen, Maddison was offered 15 shillings and a three-day leave pass for taking part in the experiments. He had planned to use the money to purchase an engagement ring for his girlfriend, Mary Pyle.
On the day he died, Ronald Maddison entered a gas chamber at 10:00 a.m. along with five other test subjects. They were each to have an identical experiment performed on them which was part of a series of experiments to determine the lethal dose of sarin when delivered to bare or battle dress-covered skin. The method used was to measure the change in active acetylcholinesterase in red blood cells at small dose levels and extrapolate this to work out what the effect of larger doses would be. Sarin is extremely poisonous because it attacks the nervous system by blocking the activity of cholinesterase enzymes present in it, including acetylcholinesterase. The method was practical because red blood cell membranes contain forms of acetylcholinesterase.
The participants were wearing respirators, with woollen hats and oversize overalls but no proper protective clothing. Two technicians were also present to carry out the experiment. The respirators were tested by exposing the men to tear gas in the chamber before the experiment started.
Maddison was the fourth to have the drops applied: at 10:17, twenty 10 mg drops of sarin were applied to the two layers of cloth used in uniforms, serge and flannel, which had been taped to the inside of his left forearm. After twenty minutes, Maddison began to sweat and complain that he did not feel well. One eyewitness reported at the second inquest that he slumped over the table. The contaminated cloth was removed and he left the chamber, walking (perhaps with help) about 30 metres to a bench.
An ambulance was called, and shortly afterwards Maddison complained of deafness, collapsed and began gasping for breath; the scientists injected him with atropine after they witnessed an asthma-like attack and convulsions. An ambulance took him to the site's local medical facility, where he arrived at 10:47. Attempts were made to resuscitate him using oxygen, further injections of atropine and anacardone, and finally an injection of adrenaline into his heart just after 11 am. Although he had died at 11 am, less than 45 minutes after being exposed to the poison, he was not formally pronounced dead until 1:30 pm.
Aftermath
The post mortem was carried out in Salisbury Infirmary. On 8 and 16 May 1953, an inquest was held in secret before the Wiltshire Coroner, Harold Dale, who returned a verdict of misadventure. Maddison's father was permitted to attend the inquest but warned that he would be prosecuted under the Official Secrets Act if he informed anyone, including his family, of the circumstances surrounding his son's death. An internal court of inquiry at Porton Down found that Maddison had died because of "personal idiosyncrasy", either because he was unusually sensitive to the poison or his skin absorbed it faster than in other test subjects.
The Ministry of Defence delivered Ronald Maddison's body in a steel coffin with the lid bolted down to maintain secrecy. A large number of samples of body parts including brain and spinal cord tissue, skin, muscle, stomach, lung, and gut were retained without his family's knowledge or permission and used over several years in other toxicology experiments. Maddison's father, John Maddison, was paid £40 to cover the funeral expenses, made up of £20 for black clothes, £16 for undertaker's fees and £4 for catering.
Second inquest
Operation Antler was a police investigation from 1999 to 2004 into Maddison's death, and into allegations that other British chemical-weapons test participants between 1939 and 1989 had not been properly informed and may have been misled about the experiments and their risks.
As a result of the investigation, and campaigning by Ronald Maddison's family, Lord Chief Justice Lord Woolf, sitting with Mrs Justice Hallett in the High Court quashed the original inquest verdict in November 2002. A new inquest opened on 5 May 2004 and was the longest held in England and Wales up to that time, hearing around 100 witnesses over 50 days. On 15 November 2004, the inquest jury returned the verdict that Ronald Maddison had been unlawfully killed.
The Ministry of Defence applied for a judicial review to quash the unlawful killing verdict, although the government announced that whatever the outcome they would look "favourably" upon paying compensation to Maddison's family. In February 2006 an agreement was struck within the framework of the judicial review whereby the MoD accepted the inquest verdict on the grounds that Maddison had died through "gross negligence in the planning and conduct of the experiment". The MoD did not accept that there was sufficient evidence to conclude that Maddison had not given his informed consent to take part. Ronald Maddison's relatives received a total of £100,000 in compensation from the Ministry of Defence.
The Crown Prosecution Service announced in 2003 that there was insufficient evidence to charge anyone responsible for the tests, but that they would review this decision following Maddison's second inquest. In June 2006, they confirmed that there would be no prosecutions.
References
Books
Tucker, Jonathan B. War of Nerves: Chemical Warfare from World War I to Al-Qaeda (1st edition, 2006). Pantheon Books, New York. .
1933 births
1953 deaths
Royal Air Force airmen
People from Consett
Chemical warfare
British human subject research
Medical controversies in the United Kingdom
United Kingdom chemical weapons program
Deaths from nerve agent poisoning | Ronald Maddison | Chemistry | 1,341 |
38,726,329 | https://en.wikipedia.org/wiki/Serape%20effect | The serape effect is a rotational trunk movement that increases the power output of the human body. It is trained in sports that involve rotation of the torso, such as boxing and discus throwing. The muscles involved in the serape effect are stretched and then snap-back with increased strength. It is named after a piece of clothing called the serape.
History
The term serape originates from a piece of clothing of the same name worn in Latin American countries, specifically Mexico. A serape is a brightly colored blanket which hangs around the shoulders and crosses diagonally across the anterior portion of the trunk. The general direction in which a serape is worn is similar to the direction of pull of four muscles in the same area. The serape effect is this group of four muscles working together to produce an opposition of the rib cage and pelvis in the wind-up of a motion and, finally, to generate a large summation of internal forces from the snap-back. The serape effect is prevalent in ballistic motions like throwing, kicking, and swinging.
Muscles involved
The rhomboids, serratus anterior, external obliques, and internal obliques are involved in the serape effect.
Sport significance
The serape effect is important in throwing motions and other high-velocity motions that involve rotation of the torso (Northrip, Logan, McKinney, 1974). This includes ballistic motions such as throwing a discus or javelin. The transverse rotation of the pelvic girdle prior to a ballistic throwing motion is important for creating a higher velocity in the direction of the motion. Without this pelvic girdle rotation prior to the ballistic movement, the pelvis will recoil and there will not be as great a velocity imparted to the upper body during the ballistic motion, because of a lack of stretching of the muscles and a lack of stored energy to contribute to the movement. The rotational movement of this larger body segment, the trunk, enables a summation of internal forces that can be transferred from this large area to a smaller area such as the arm and the hand for throwing an object. The serape effect can also be applied to kicking by transferring these forces from the trunk and pelvis to the lower legs.
For a throwing motion when the throwing limb is diagonally abducted and laterally rotated then the rib cage and pelvis should be at their farthest distance apart, which allows for a maximal amount of stretch in the muscles involved in the serape effect. This maximum point of stretching of the muscles lengthens the muscles so that when the throw takes place the muscles create a maximum amount of force as they shorten back to a resting length. “Muscles must be placed on their longest length in order to exert their greatest force”.
References
Earp, Jacob E., & Kraemer, William J. (2010). "Medicine ball training implications for rotational power sports". Strength and Conditioning Journal, 32(4), 20–25.
Biomechanics
Motor control | Serape effect | Physics,Biology | 651 |
59,249,253 | https://en.wikipedia.org/wiki/Paul%20H.%20Silverman | Paul Hyman Silverman (October 8, 1924 – July 15, 2004) was an American medical researcher in the fields of immunology, epidemiology, and parasitology. He was recognized for his research on stem cells and on the human genome.
Early life and education
Silverman was born on October 8, 1924, in Minneapolis, Minnesota. Growing up, he became fascinated with reading, and he won a local prize for his reading comprehension ability. He attended the University of Minnesota as a pre-medical student while also working three part-time jobs. He went on to serve in a MASH in the United States Army during World War II. He received a bachelor's degree from Roosevelt University. In 1951, Silverman received his M.S. from Northwestern University, after which he moved to Israel with his family. In Israel, he began research on malaria, which he continued to study for many years thereafter. In 1953, he and his family moved again, this time to England. There, he began studying at the Liverpool School of Tropical Medicine, from which he received his Ph.D. in parasitology and epidemiology in 1955.
Academic career
Silverman returned to the United States when he was 39 years old. He then accepted a position at the University of Illinois at Urbana–Champaign before moving to the University of New Mexico in 1972. At the University of New Mexico, he and his team developed a killed malaria vaccine based on Jonas Salk's polio vaccine. He became Vice President for Research and Graduate Studies at the University of New Mexico in 1975, and joined the State University of New York as their Provost for Research and Graduate Studies in 1978. In 1980, he became president of the University of Maine, a position he held until 1984. At the University of Maine, he was credited with expanding the scope of research activities. In 1984, he returned to research as a senior scientist at the Lawrence Berkeley National Laboratory. He also served at the University of California, Berkeley as Associate Laboratory Director for Life Sciences and Director of the Donner Lab. In 1987, he helped organize a partnership between the University of California, Berkeley and Lawrence Livermore National Laboratory to establish the first research center dedicated to the study of the human genome. He then worked at Beckman Instruments for several years before being appointed Associate Chancellor for Health Sciences at the University of California, Irvine in 1994, a position he held until his retirement in 1996. Also in 1994, he was elected to the World Academy of Art and Science. In the fall of 2003, he gave the commencement speech to the class of Roosevelt University, from which he received an honorary doctorate of human letters.
Research and views
Silverman was a member of the Human Genome Project's advisory committee. After its results showed that humans had about 30,000 genes, he noted that this suggested that genes were much less important causes of human diseases than previously thought. In an article published in The Scientist shortly before his death, he urged his fellow researchers to abandon genetic determinism, asking, "With only 30,000 genes, what is it that makes humans human?"
Personal life
Silverman met his wife, Nancy Josephs, while he was serving in the Army during World War II. The two married on May 20, 1945, and their son, Daniel, was born in 1950; they also had a daughter, Claire. Paul Silverman died on July 16, 2004, of complications resulting from a bone marrow transplant he had received to treat his myelofibrosis.
References
1924 births
2004 deaths
American medical researchers
American geneticists
20th-century American biologists
Human Genome Project scientists
Scientists from Minneapolis
United States Army personnel of World War II
Roosevelt University alumni
Northwestern University alumni
Alumni of the Liverpool School of Tropical Medicine
University of Illinois Urbana-Champaign faculty
University of New Mexico faculty
State University of New York faculty
Presidents of the University of Maine
Lawrence Berkeley National Laboratory people
University of California, Berkeley faculty
University of California, Irvine faculty
American parasitologists
Human geneticists | Paul H. Silverman | Engineering | 812 |
49,022,545 | https://en.wikipedia.org/wiki/IBM%20Journal%20of%20Research%20and%20Development | IBM Journal of Research and Development is a former, peer-reviewed bimonthly scientific journal covering research on information systems.
This Journal has ceased production in 2020.
According to the Journal Citation Reports in 2019, the journal had an impact factor of 1.27.
IBM also published the IBM Systems Journal () starting in 1962; it ceased publication in 2008 and was absorbed in part by the IBM Journal of Research and Development.
References
External links
English-language journals
IBM
Information systems journals | IBM Journal of Research and Development | Technology | 97 |
275,704 | https://en.wikipedia.org/wiki/Certification%20mark | A certification mark on a commercial product or service is a registered mark that enables its owner ("certification body") to certify that the goods or services of a particular provider (who is not the owner of the certification mark) have particular properties, e.g., regional or other origin, material, quality, accuracy, mode of manufacture, being produced by union labor, etc. The standards to which the product is held are stipulated by the owner of the certification mark.
There are essentially three general types of certification marks:
certifying that goods or services had originated in a particular geographic region (e.g., Roquefort cheese);
certifying that goods or services meet particular standards for quality, materials, or methods of manufacturing, for example, tests by Underwriters Laboratories;
certifying that the manufacturer has met certain standards or belongs to a certain organization or union (e.g., "union made" in clothing).
The term "certification mark" is very recent, so while discussing historical certification marks, terms "guild sign", "quality mark", "hallmark", and "trade mark" are used by researchers.
A certification mark indicates a property standard or regulation and a claim that the manufacturer has verified compliance with those standards or regulations. The specific specification, test methods, and frequency of testing are published by the standards organization. Certification listing does not necessarily guarantee fitness-for-use. Validation testing, proper usage, and field testing are often needed.
Certification marks distinguished from other marks
Certification marks can be owned by independent companies absolutely unrelated in ownership to the companies, offering goods or rendering services under the particular certification mark.
Certification marks and trademarks
The USPTO states that a certification mark is "a type of trademark". However, it "is a special creature, created for a purpose uniquely different from that of a trademark or service mark", since:
its owner cannot use it (it is used only by providers of certified goods or services);
the mark does not define the source of the product. Instead, it identifies properties of the good or service (regional or other origin), material, quality, accuracy, etc.
What is meant by a collective trade mark or certification mark differs from country to country. However, a common feature of these types of marks is that they may be used by more than one person, as long as the users comply with the regulations of use or standards established by the holder. Those regulations or standards may require that the mark be used only in connection with goods that have a particular geographical origin or specific characteristics. In some jurisdictions, the main difference between collective marks and certification marks is that the former may only be used by members of an association, while certification marks may be used by anyone who complies with the standards defined by the holder of the mark. The holder, which may be a private or a public entity, acts as a certifier, verifying that the mark is used according to established standards. Generally, the holder of a certification mark does not itself have the right to use the mark.
For various reasons, usually relating to technical issues, certification marks are difficult to register, especially in relation to services. One practical workaround for trademark owners is to register the mark as an ordinary trademark in relation to quality control and similar services.
Certification marks and approvals
Certification is often mistakenly referred to as an approval, which is not true. Organizations such as Underwriters Laboratories, TÜV Rheinland, NTA Inc, and CSA International will test the products according to standard procedures and "list" them as compliant to that standard. They do not approve anything except the use of the mark to show that a product has been certified for compliance with such specific standard. Thus, for instance, a product certification mark for a fire door or for a spray fireproofing product does not signify its universal acceptance for use within a building. Approvals are up to the Authority Having Jurisdiction (AHJ), such as a municipal building inspector or fire prevention officer.
Regulations
Trademark laws in countries, such as the United States, Australia, and others that provide for the filing of applications to register certificate marks also usually require the submission of regulations, which define a number of issues, including:
People authorized to use the certification mark
Characteristics that the certification mark certifies
How the certifying body or standards organization tests these characteristics and supervises use of the mark
What the dispute resolution procedures are
The main purpose of the regulations is to protect consumers against misleading practices.
Examples
International treaties and certification marks
Many jurisdictions have been required to amend their trade mark legislation to accommodate protection of certification marks under the TRIPs treaty.
Some jurisdictions recognise certification marks from other jurisdictions. This means goods manufactured in one country may not need to go through certification in another. One example is the European Union's recognition of Australian and New Zealand marks based on an international treaty.
Cases
Cases involving certification marks include:
Re Legal Aid Board's Trade Mark Application (unreported 3 October 2000, UK CA)
The Sea Island Cotton case [1989] RPC 87
See also
References
Sources
External links
List of Standard Certification Marks – description of the most common standard certification marks
Standards
Trademark law | Certification mark | Mathematics | 1,049 |
50,971,965 | https://en.wikipedia.org/wiki/Album-equivalent%20unit | The album-equivalent unit, or album equivalent, is a measurement unit in music industry to define the consumption of music that equals the purchase of one album copy. This consumption includes streaming and song downloads in addition to traditional album sales. The album-equivalent unit was introduced in the mid-2010s as an answer to the drop of album sales in the 21st century. Album sales more than halved from 1999 to 2009, declining from a $14.6 to $6.3 billion industry, partly due to cheap digitally downloaded singles. For instance, the only albums that went platinum in the United States in 2014 were the Frozen soundtrack and Taylor Swift's 1989, whereas several artists' works had in 2013.
The use of album-equivalent units transformed the charts from a "best-selling albums" ranking into a "most popular albums" ranking. The International Federation of the Phonographic Industry (IFPI) has used album-equivalent units to determine its Global Recording Artist of the Year since 2013.
Terminology
The term album-equivalent unit had been used by the International Federation of the Phonographic Industry (IFPI) long before the streaming era began. Between 1994 and 2005, the IFPI counted three physical singles as the equivalent of one album unit in its annual Recording Industry in Numbers (RIN) report. The term was reintroduced by the IFPI in 2013 to measure its Global Recording Artist of the Year. By this point, album-equivalent units already included music downloads and streams. An alternative term for the album-equivalent unit is the sales plus streaming (SPS) unit, which was introduced by Hits magazine.
Use on record charts and certifications
United States
Beginning with the December 13, 2014, issue, the Billboard 200 albums chart revised its ranking methodology to use album-equivalent units instead of pure album sales. With this overhaul, the Billboard 200 includes on-demand streaming and digital track sales (as measured by Nielsen SoundScan) by way of a new algorithm, using data from all of the major on-demand audio subscription services including Spotify, Apple Music, Google Play, YouTube and formerly Xbox Music. Under the measures known as TEA (track-equivalent albums) and SEA (streaming-equivalent albums) when originally implemented, 10 song sales or 1,500 song streams from an album were treated as equivalent to one purchase of the album. Billboard continues to publish a pure album sales chart, called Top Album Sales, that maintains the traditional Billboard 200 methodology, based exclusively on Nielsen SoundScan's sales data. Taylor Swift's 1989 was the first album to top the chart with this methodology, generating 339,000 album-equivalent units (281,000 units came from pure album sales). In Billboard's February 8, 2015, issue, Now That's What I Call Music! 53 became the first album in history to miss the top position of the Billboard 200 despite being the best-selling album of the week.
Similarly the Recording Industry Association of America, which had previously certified albums based on units sold to retail stores, began factoring streaming for their certifications in February 2016.
RIAA summary by format, in million copies per year.
In July 2018, Billboard and Nielsen revised the ratios used for streaming equivalent album units to account for the relative value of streams on paid music services like Apple Music or Amazon Music Unlimited versus ad-supported music and video platforms such as Spotify's free tier and YouTube. Under the updated album equivalent ratios, 1,250 premium audio streams, 3,750 ad-supported streams, or 3,750 video streams are equal to one album unit.
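As a rough illustration of this arithmetic, the sketch below applies the post-2018 ratios described above; it is not Billboard's or Nielsen's actual code, and the function name and sample figures are hypothetical.

```python
def album_equivalent_units(album_sales, track_sales, paid_streams,
                           ad_supported_streams, video_streams):
    """Album-equivalent units under the post-2018 Billboard/Nielsen ratios."""
    tea = track_sales / 10                  # 10 track sales = 1 unit (TEA)
    sea = (paid_streams / 1_250             # 1,250 premium audio streams = 1 unit
           + ad_supported_streams / 3_750   # 3,750 ad-supported streams = 1 unit
           + video_streams / 3_750)         # 3,750 video streams = 1 unit
    return album_sales + tea + sea          # pure album sales count one-for-one

# Hypothetical chart week:
print(round(album_equivalent_units(50_000, 200_000, 25_000_000,
                                   7_500_000, 3_750_000)))  # -> 93000
```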
United Kingdom
In the United Kingdom, the Official Charts Company has included streaming in the UK Albums Chart since March 2015. The change followed the massive growth of streaming: the number of tracks streamed in the UK in a year doubled from 7.5 billion in 2013 to just under 15 billion in 2014. Under the new methodology, the Official Charts Company takes the 12 most-streamed tracks from an album, with the top two songs given lesser weight so that the figure reflects the popularity of the album as a whole rather than of one or two successful singles. The adjusted total is divided by 1,000 and added to the album sales figure. Sam Smith's In the Lonely Hour was the first album to top the chart under this rule: out of its 41,000 album-equivalent units, 2,900 came from streaming and the rest were pure sales. By 2017, streaming accounted for more than half of album-equivalent units in the UK, according to the British Phonographic Industry (BPI).
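A minimal sketch of this rule follows. The precise down-weighting the Official Charts Company applies to the top two tracks is not spelled out here, so capping them at the average of the remaining tracks is an assumption, as are the function name and sample figures.

```python
def uk_album_units(album_sales, weekly_track_streams):
    """Approximate UK Albums Chart units: sales plus weighted streams / 1,000."""
    top12 = sorted(weekly_track_streams, reverse=True)[:12]
    rest = top12[2:]                                         # tracks 3-12
    rest_avg = sum(rest) / len(rest) if rest else 0
    adjusted = [min(s, rest_avg) for s in top12[:2]] + rest  # damp runaway singles
    return album_sales + sum(adjusted) / 1_000               # 1,000 streams = 1 unit

streams = [1_200_000, 300_000] + [150_000] * 10              # one runaway single
print(round(uk_album_units(38_100, streams)))                # -> 39900
```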
Germany
In Germany, streaming has been included in the albums chart since February 2016. The German Albums Chart, however, ranks albums by weekly revenue rather than by units. Hence, only paid streams are counted, and each must be played for at least 30 seconds. At least 6 tracks of an album have to be streamed for the streams to count toward the album, with 12 tracks being the maximum counted. Similar to the UK chart rule, the actual streams of the top two songs are not counted; instead, the average of the following tracks is used.
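These eligibility and weighting rules can be sketched as follows. Because the German chart is revenue-based and no per-stream revenue rates are given here, the sketch stops at a weighted stream count; the function name is hypothetical.

```python
def german_chart_weighted_streams(paid_track_streams):
    """paid_track_streams: per-track counts of paid streams played >= 30 seconds."""
    streamed = [s for s in paid_track_streams if s > 0]
    if len(streamed) < 6:                        # fewer than 6 streamed tracks: no credit
        return 0
    counted = sorted(streamed, reverse=True)[:12]         # at most 12 tracks count
    follow_avg = sum(counted[2:]) / len(counted[2:])      # average of the following tracks
    return 2 * follow_avg + sum(counted[2:])              # top two replaced by that average
```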
Australia
The Australian Recording Industry Association, which issues the ARIA Charts, began incorporating streaming into its singles chart on November 24, 2014, and into its albums chart on May 13, 2017. ARIA changes the conversion rate regularly; one sale is equivalent to 170 streams on a paid subscription service, or 420 streams on an ad-supported service.
Responses and criticism
According to Silvio Pietroluongo, vice president of charts and data development at Billboard, the album-equivalent-units methodology "reflects album popularity in today's world, where music is accessible on so many platforms [and] has become the accepted measure of album success." Physical albums have largely turned into collectors' items, as suggested by a 2016 poll by ICM Research, which found that nearly half of the people surveyed did not listen to the record they bought.
In Forbes, Hugh McIntyre noted that the usage of album equivalent units has resulted in artists releasing albums with excessive track lists. Brian Josephs from Spin said: "If you're a thirsty (eager for fame or notoriety) pop artist of note, you can theoretically game the system by packing as many as 20 tracks into an album, in the process rolling up more album-equivalent units—and thus album "sales"—as listeners check the album out." He also criticized Chris Brown's album Heartbreak on a Full Moon, which contains over 40 songs.
Rolling Stone columnist Tim Ingham examined the figures for Drake's Scorpion and found that 63% of the album's streams on Spotify came from just three songs off the 25-track album. Additionally, only six songs accounted for 82% of the album's total streams, meaning that only a quarter of the songs determined the overall success of the album in terms of album-equivalent units. Cherie Hu from NPR felt that album-equivalent units often do not reflect the actual album because they put further weight on an album's biggest single(s) rather than on all of the project's tracks as a whole.
See also
Album sales
References
Albums
Music industry
Equivalent units
Units of amount of substance | Album-equivalent unit | Mathematics | 1,449 |
8,137,458 | https://en.wikipedia.org/wiki/John%20Threat | John Lee, a.k.a. John Threat, is an American computer hacker and entrepreneur. He used the name "Corrupt" as a member of Masters of Deception (MOD), a New York-based hacker group in the early 1990s.
As a result of his participation in the Great Hacker War, between MOD and rival hacker group Legion of Doom, he was indicted on federal wiretapping charges in 1992. He pled guilty and was sentenced to one year at a federal detention center. His participation in the Great Hacker War landed him on the cover of Wired Magazine in 1994.
Lee was born on July 6, 1973, in Brooklyn, New York. He grew up in Brownsville, where he was a member of the Decepticons, a Brooklyn-based street gang formed in the early '80s, named after the villains in the Saturday morning cartoon, Transformers. Lee attended Stuyvesant High School and went on to New York University. During his freshman year at NYU, Lee was sentenced to prison for his role in the Great Hacker War.
Lee also has editing, producing, and directing credits in film and television. In 2004, he founded Mediathreat, LLC, a film production company. In 2005, he directed the original documentary "Dead Prez: Bigger than Hip Hop." In 2011, he co-directed the music video for MAKE OUT's single "You Can't Be Friends With Everyone" with Diane Martel.
Lee also gained notoriety in 2001 when he revealed himself as the anonymous editor of UrbanExpose.com, a controversial entertainment gossip website.
References
External links
Wired Magazine December 1994: Gang Wars in Cyberspace
Hackateer, directed by John Threat
1973 births
Living people
Masters of Deception
20th-century African-American people
21st-century African-American people
Hackers
People from Brooklyn | John Threat | Technology | 370 |
46,539,048 | https://en.wikipedia.org/wiki/Penicillium%20ludwigii | Penicillium ludwigii is an anamorph species in the genus Penicillium.
References
Further reading
ludwigii
Fungi described in 1969
Fungus species | Penicillium ludwigii | Biology | 35 |
48,800,466 | https://en.wikipedia.org/wiki/Port%20operations%20service | Port operations service (short: POS; also: port operations radiocommunication service) is – according to Article 1.30 of the International Telecommunication Union's (ITU) Radio Regulations (RR) – defined as «A maritime mobile service in or near a port, between coast stations and ship stations, or between ship stations, in which messages are restricted to those relating to the operational handling, the movement and the safety of ships and, in emergency, to the safety of persons. Messages which are of a public correspondence nature shall be excluded from this service.»
Classification
The ITU Radio Regulations classifies this radiocommunication service as follows:
Mobile-satellite service (article 1.25)
Maritime mobile service (article 1.28)
Maritime mobile-satellite service (article 1.29)
Port operations service (article 1.30)
Ship movement service (article 1.31)
References / sources
International Telecommunication Union (ITU)
Mobile services ITU
Maritime communication | Port operations service | Technology | 197 |
1,478,662 | https://en.wikipedia.org/wiki/Hyperspace | In science fiction, hyperspace (also known as nulspace, subspace, overspace, jumpspace and similar terms) is a concept relating to higher dimensions as well as parallel universes and a faster-than-light (FTL) method of interstellar travel. In its original meaning, the term hyperspace was simply a synonym for higher-dimensional space. This usage was most common in 19th-century textbooks and is still occasionally found in academic and popular science texts, for example, Hyperspace (1994). Its science fiction usage originated in the magazine Amazing Stories Quarterly in 1931 and within several decades it became one of the most popular tropes of science fiction, popularized by its use in the works of authors such as Isaac Asimov and E. C. Tubb, and media franchises such as Star Wars.
One of the main reasons for the concept's popularity in science fiction is the impossibility of faster-than-light travel in ordinary space, which hyperspace allows writers to bypass. In most works, hyperspace is described as a higher dimension through which the shape of our three-dimensional space can be distorted to bring distant points close to each other, similar to the concept of a wormhole; or a shortcut-enabling parallel universe that can be travelled through. Usually it can be traversed – the process often known as "jumping" – through a gadget known as a "hyperdrive"; rubber science is sometimes used to explain it. Many works rely on hyperspace as a convenient background tool enabling FTL travel necessary for the plot, with a small minority making it a central element in their storytelling. While most often used in the context of interstellar travel, a minority of works focus on other plot points, such as the inhabitants of hyperspace, hyperspace as an energy source, or even hyperspace as the afterlife.
Concept
The basic premise of hyperspace is that vast distances through space can be traversed quickly by taking a kind of shortcut. There are two common models used to explain this shortcut: folding and mapping. In the folding model, hyperspace is a place of higher dimension through which the shape of our three-dimensional space can be distorted to bring distant points close to each other; a common analogy popularized by Robert A. Heinlein's Starman Jones (1953) is that of crumpling two-dimensional paper or cloth in the third dimension, thus bringing points on its surface into contact. In the mapping model, hyperspace is a parallel universe much smaller than ours (but not necessarily the same shape), which can be entered at a point corresponding to one location in ordinary space and exited at a different point corresponding to another location after travelling a much shorter distance than would be necessary in ordinary space. The Science in Science Fiction compares it to being able to step onto a world map at one's current location, walking across the map to a different continent, and then stepping off the map to find oneself at the new location—noting that the hyperspace "map" could have a significantly more complicated shape, as in Bob Shaw's Night Walk (1967).
Hyperspace is generally seen as a fictional concept that is not compatible with present-day scientific theories, particularly the theory of relativity. Some science fiction writers have attempted quasi-scientific rubber-science explanations of the concept. For others, however, it is just a convenient MacGuffin enabling the faster-than-light travel necessary for their story without violating the prohibitions against FTL travel in ordinary space imposed by the known laws of physics.
Terminology
The means of accessing hyperspace is often called a "hyperdrive", and navigating hyperspace is typically referred to as "jumping" (as in "the ship will now jump through hyperspace").
A number of related terms (such as imaginary space, Jarnell intersplit, jumpspace, megaflow, N-Space, nulspace, slipstream, overspace, Q-space, subspace, and tau-space) have been used by various writers, although none have gained recognition to rival that of hyperspace. Some works use multiple synonyms; for example, in the Star Trek franchise, the term hyperspace itself is only used briefly in a single 1988 episode ("Coming of Age") of Star Trek: The Next Generation, while a related set of terms – such as subspace, transwarp, and proto-warp – are employed much more often, and most of the travel takes place through the use of a warp drive. Hyperspace travel has also been discussed in the context of wormholes and teleportation, which some writers consider to be similar whereas others view them as separate concepts.
History
Emerging in the early 20th century, hyperspace became within several decades a common element of interstellar space travel stories in science fiction. Kirk Meadowcroft's "The Invisible Bubble" (1928) and John Campbell's Islands of Space (1931) feature the earliest known references to hyperspace, with Campbell, whose story was published in the science fiction magazine Amazing Stories Quarterly, likely being the first writer to use the term in the context of space travel. According to the Historical Dictionary of Science Fiction, the earliest known use of the word "hyper-drive" comes from a preview of Murray Leinster's story "The Manless Worlds" in Thrilling Wonder Stories in 1946.
Another early work featuring hyperspace was Nelson Bond's The Scientific Pioneer Returns (1940). Isaac Asimov's Foundation series, first published in Astounding starting in 1942, featured a Galactic Empire traversing hyperspace by means of a "hyperatomic drive". In Foundation (1951), hyperspace is described as an "...unimaginable region that was neither space nor time, matter nor energy, something nor nothing, one could traverse the length of the Galaxy in the interval between two neighboring instants of time." E. C. Tubb has been credited with playing an important role in the development of hyperspace lore, writing a number of space operas in the early 1950s in which space travel occurs through that medium. He was also one of the first writers to treat hyperspace as a central part of the plot rather than a convenient background gadget that merely enables faster-than-light space travel.
In 1963, Philip Harbottle called the concept of hyperspace "a fixture" of the science fiction genre, and in 1977 Brian Ash wrote in The Visual Encyclopedia of Science Fiction that it had become the most popular of all faster-than-light methods of travel. The concept would subsequently be further popularized through its use in the Star Wars franchise.
In the 1974 film Dark Star, special effects designer Dan O'Bannon created a visual effect to depict going into hyperspace wherein the stars in space appear to move rapidly toward the camera. This is considered to be the first depiction in cinema history of a ship making the jump into hyperspace. The same effect was later employed in Star Wars (1977) and the "star streaks" are considered one of the visual "staples" of the Star Wars franchise.
Characteristics
Hyperspace is typically described as chaotic and confusing to human senses; often at least unpleasant – transitions to or from hyperspace can cause symptoms such as nausea, for example – and in some cases even hypnotic or dangerous to one's sanity. Visually, hyperspace is often left to the reader's imagination, or depicted as "a swirling gray mist". In some works, it is dark. Exceptions exist; for example, John Russell Fearn's Waters of Eternity (1953) features a hyperspace that allows observation of regular space from within.
Many stories feature hyperspace as a dangerous, treacherous place where straying from a preset course can be disastrous. In Frederick Pohl's The Mapmakers (1955), navigational errors and the perils of hyperspace are one of the main plot-driving elements, and in K. Houston Brunner's Fiery Pillar (1955), a ship re-emerges within Earth, causing a catastrophic explosion. In some works, travelling or navigating hyperspace requires not only specialized equipment, but physical or psychological modifications of passengers or at least navigators, as seen in Frank Herbert's Dune (1965), Michael Moorcock's The Sundered Worlds (1966), Vonda McIntyre's Aztecs (1977), and David Brin's The Warm Space (1985).
While generally associated with science fiction, hyperspace-like concepts exist in some works of fantasy, particularly ones which involve movement between different worlds or dimensions. Such travel, usually done through portals rather than vehicles, is usually explained through the existence of magic.
Use
While mainly designed as means of fast space travel, occasionally, some writers have used the hyperspace concept in more imaginative ways, or as a central element of the story. In Arthur C. Clarke's "Technical Error" (1950), a man is laterally reversed by a brief accidental encounter with "hyperspace". In Robert A. Heinlein's Glory Road (1963) and Robert Silverberg's "Nightwings" (1968), it is used for storage. In George R.R. Martin's FTA (1974) hyperspace travel takes longer than in regular space, and in John E. Stith's Redshift Rendezvous (1990), the twist is that the relativistic effects within it appear at lower velocities. Hyperspace is generally unpopulated, save for the space-faring travellers. Early exceptions include Tubb's Dynasty of Doom (1953), Fearn's Waters of Eternity (1953) and Christopher Grimm's Someone to Watch Over Me (1959), which feature denizens of hyperspace. In The Mystery of Element 117 (1949) by Milton Smith, a window is opened into a new "hyperplane of hyperspace" containing those who have already died on Earth, and similarly, in Bob Shaw's The Palace of Eternity (1969), hyperspace is a form of afterlife, where human minds and memories reside after death. In some works, hyperspace is a source of extremely dangerous energy, threatening to destroy the entire world if mishandled (for instance Eando Binder's The Time Contractor from 1937 or Alfred Bester's "The Push of a Finger" from 1942). The concept of hyperspace travel, or space folding, can be used outside space travel as well, for example in Stephen King's short story "Mrs. Todd's Shortcut" it is a means for an elderly lady to take a shortcut while travelling between two cities.
In many stories, a starship cannot enter or leave hyperspace too close to a large concentration of mass, such as a planet or star; this means that hyperspace can only be used after a starship gets to the outside edge of a solar system, so that it must use other means of propulsion to get to and from planets. Other stories require a very large expenditure of energy in order to open a link (sometimes called a jump point) between hyperspace and regular space; this effectively limits access to hyperspace to very large starships, or to large stationary jump gates that can open jump points for smaller vessels. Examples include the "jump" technology in Babylon 5 and the star gate in Arthur C. Clarke's 2001: A Space Odyssey (1968). Just like with the very concept of hyperspace, the reasons given for such restrictions are usually technobabble, but their existence can be an important plot device. Science fiction author Larry Niven published his opinions to that effect in N-Space. According to him, an unrestricted FTL technology would give no limits to what heroes and villains could do. Limiting the places a ship can appear in, or making them more predictable, means that they will meet each other most often around contested planets or space stations, allowing for narratively satisfying battles or other encounters. On the other hand, a less restricted hyperdrive may also allow for dramatic escapes as the pilot "jumps" to hyperspace in the midst of battle to avoid destruction. In 1999 science fiction author James P. Hogan wrote that hyperspace is often treated as a plot-enabling gadget rather than as a fascinating, world-changing item, and that there are next to no works that discuss how hyperspace has been discovered and how such discovery subsequently changed the world.
See also
Minkowski space
Teleportation in fiction
Wormholes in fiction
Warp (video games)
Notes
References
Further reading
External links
Hyperspace by Curtis Saxton at Star Wars Technical Commentaries
Who Invented Hyperspace? Hyperspace in Science Fiction by Sten Odenwald at Astronomy Cafe
Historical Dictionary of Science Fiction entry for hyperspace
Fiction about faster-than-light travel
Fictional dimensions
Science fiction themes
Space
Fiction about teleportation | Hyperspace | Physics,Mathematics | 2,623 |
46,211,031 | https://en.wikipedia.org/wiki/U%20Microscopii | U Microscopii is a Mira variable star in the constellation Microscopium. It ranges from magnitude 7 to 14.4 over a period of 334 days. The Astronomical Society of Southern Africa in 2003 reported that observations of U Microscopii were very urgently needed as data on its light curve was incomplete.
References
Mira variables
Microscopium
Microscopii, U
M-type giants
Durchmusterung objects
194814
101063
Emission-line stars | U Microscopii | Astronomy | 99 |
1,585,092 | https://en.wikipedia.org/wiki/Institute%20of%20Physical%20Chemistry%20of%20the%20Polish%20Academy%20of%20Sciences | The Institute of Physical Chemistry of the Polish Academy of Sciences (Polish: Instytut Chemii Fizycznej Polskiej Akademii Nauk, IChF PAN) is one of numerous institutes belonging to the Polish Academy of Sciences. As its name suggests, the institute's primary research interests are in the field of physical chemistry.
History
The Institute was established by a resolution of the Presidium of the Government of the Polish People's Republic on 19 March 1955. It was the first chemical institute of the Polish Academy of Sciences. Its tasks were defined at that time: "The Institute of Physical Chemistry covers research on current issues of physical chemistry important from the point of view of the development of chemical sciences and the needs of the national economy".
At the beginning of its activity, the main task was to prepare scientific staff able to conduct research in the field of physical chemistry. The development of scientific staff was facilitated by the fact that the Institute's scientific workers did not carry the teaching loads required at higher education institutions.
The first Director of the Institute and, at the same time, the Chairman of the Scientific Council of the Institute was prof. Wojciech Świętosławski. The subsequent directors of the Institute were prof. Michał Śmiałowski (1960–1973), prof. Wojciech Zielenkiewicz (1973–1990), prof. Jan Popielawski (1990–1992), prof. Janusz Lipkowski (1992–2003), prof. Aleksander Jabłoński (2003–2011), prof. Robert Hołyst (2011–2015), prof. Marcin Opałło (2015-2023), dr hab. Adam Kubas (since 2023).
Over the following years, the structure of the IChF changed, the number of employees increased, and new research topics emerged, which is reflected in the current structure of the Institute.
Structure
The Institute is divided into research departments, within which research teams operate:
Department of Physical Chemistry of Biological Systems (Head: prof. Maciej Wojtkowski).
Team leaders: prof. M. Wojtkowski, dr. Jan Guzowski and dr. hab. Jan Paczesny
Department of Physical Chemistry of Soft Matter (Head: prof. Robert Hołyst)
Team leaders: dr. hab. Jacek Gregorowicz, dr. hab. Volodymyr Sashuk, prof. Robert Hołyst, prof. Piotr Garstecki, dr. hab. Marco Costantini
Department of Catalysis on Metals (Head: dr. hab. Zbigniew Kaszkur)
Team leaders: dr. hab. Zbigniew Kaszkur, prof. Rafał Szmigielski and dr. hab. Juan Carlos Colmenares Quintero
Department of Electrode Processes (Head: prof. Marcin Opałło)
Team leaders: prof. Joanna Niedziółka-Jönsson, dr hab. Martin Jönsson-Niedziółka, dr Wojciech Nogala, prof. Marcin Opałło and dr inż Emilia Witkowska-Nery
Department of Complex Systems and Chemical Information Processing (Head: prof. Jerzy Górecki)
Team leaders: dr hab. Wojciech Góźdź and prof. Jerzy Górecki
Department of Photochemistry and Spectroscopy (Head: prof. Jacek Waluk)
Team leaders: dr hab. Agnieszka Michota-Kamińska, dr hab. Gonzalo Angulo Nunez, prof. Robert Kołos, dr hab. Yuriy Stepanenko and prof. Jacek Waluk
Independent teams
Leaders: prof. Janusz Lewiński, dr Bartłomiej Wacław, dr Piyush Sindhu Sharma, dr hab. Adam Kubas, prof. Robert Nowakowski, dr hab. Daniel Prochowicz and dr Tomasz Ratajczyk
International Centre for Translational Eye Research (ICTER), headed by Professor Maciej Wojtkowski. ICTER's strategic foreign partner is the Institute of Ophthalmology, University College London, in the United Kingdom. The Centre's international scientific partner is the University of California, Irvine, United States. The primary scientific priority of ICTER is to thoroughly investigate the dynamics and plasticity of the human eye to develop new therapies and diagnostic tools. Cutting-edge ICTER research is conducted at various levels of resolution, from single molecules to the entire architecture and function of the eye. ICTER has five research groups:
Physical Optics and Biophotonics Group (POB)
Integrated Structural Biology Group (ISB)
Ophthalmic Imaging and Technologies Group (IDoc)
Ocular Biology Group (OBi)
Computational Genomics Group (CGG)
Commercialization
The work conducted by the Institute has given rise to five companies, operating mainly in the field of medical diagnostics:
Scope Fluidics was founded in 2010 as the first spin-off of the Institute of Physical Chemistry of the Polish Academy of Sciences. The company aimed to commercialize microfluidic technologies developed at the Institute. Since its inception, the company has specialized in creating innovative solutions for medical diagnostics.
SERSitive - manufacturer of SERS substrates for a wide range of analytical sciences, such as pharmacy, forensic laboratories, border guard laboratories and medicine.
Siliquan - manufacturer of fluorescent silica nanomaterials.
Cell-IN offers a reagent enabling the introduction of various types of macromolecules (from polymers and proteins to DNA molecules) into cells.
InCellVu is developing a clinical form of a prototype device for in vivo imaging of the human retina using the new STOC-T method developed by scientists from the International Center for Eye Research.
External links
Institute of Physical Chemistry website
Institutes of the Polish Academy of Sciences
Chemistry organizations | Institute of Physical Chemistry of the Polish Academy of Sciences | Chemistry | 1,249 |
62,186,695 | https://en.wikipedia.org/wiki/Entwisleia | Entwisleia is a monotypic genus in the red algae family Entwisleiaceae. The only species in the genus (and its type species), Entwisleia bella, is found in south-eastern Tasmania and represents both a new family and a new order (Entwisleiales) in the Nemaliophycidae.
It is a marine species found in the Derwent River estuary. It grows at depths between 5.0 and 9.0 m and is found scattered on mudstone reef flats dusted or shallowly covered by sand. The site at which it was found is subject to episodic high-rainfall events throughout the year and heavy swells in winter. It is a feathery dioecious seaweed, very like the freshwater red alga Batrachospermum, but DNA sequencing shows it to be quite unrelated. Scott et al.'s (2013) study places it as a sister clade of the Colaconematales.
The genus, named to honour Tim Entwisle, was circumscribed by Fiona Jean Scott and Gerald Thompson Kraft in Eur. J. Phycol. Vol. 48 (Issue 4) on page 402 in 2013.
References
Red algae genera
Seaweeds
Florideophyceae
Monotypic algae genera | Entwisleia | Biology | 275 |
23,535,218 | https://en.wikipedia.org/wiki/Industrial%20engineering | Industrial engineering is an engineering profession that is concerned with the optimization of complex processes, systems, or organizations by developing, improving and implementing integrated systems of people, money, knowledge, information and equipment. Industrial engineering is central to manufacturing operations.
Industrial engineers use specialized knowledge and skills in the mathematical, physical, and social sciences, together with engineering analysis and design principles and methods, to specify, predict, and evaluate the results obtained from systems and processes. Several industrial engineering principles are followed in the manufacturing industry to ensure the effective flow of systems, processes, and operations. These include:
Lean Manufacturing
Six Sigma
Information Systems
Process Capability
Define, Measure, Analyze, Improve and Control (DMAIC).
These principles allow the creation of new systems, processes or situations for the useful coordination of labor, materials and machines, and also improve the quality and productivity of systems, physical or social. Depending on the subspecialties involved, industrial engineering may also overlap with operations research, systems engineering, manufacturing engineering, production engineering, supply chain engineering, management science, engineering management, financial engineering, ergonomics or human factors engineering, safety engineering, logistics engineering, quality engineering, or other related capabilities or fields.
History
Origins
Industrial engineering
There is a general consensus among historians that the roots of the industrial engineering profession date back to the Industrial Revolution. The technologies that helped mechanize traditional manual operations in the textile industry, including the flying shuttle, the spinning jenny, and perhaps most importantly the steam engine, generated economies of scale that made mass production in centralized locations attractive for the first time. The concept of the production system had its genesis in the factories created by these innovations. It has also been suggested that perhaps Leonardo da Vinci was the first industrial engineer, because there is evidence that he applied science to the analysis of human work by examining the rate at which a man could shovel dirt around the year 1500. Others state that the industrial engineering profession grew from Charles Babbage's study of factory operations and specifically his work on the manufacture of straight pins in 1832. However, it has been generally argued that these early efforts, while valuable, were merely observational and did not attempt to engineer the jobs studied or increase overall output.
Specialization of labour
Adam Smith's concepts of Division of Labour and the "Invisible Hand" of capitalism introduced in his treatise The Wealth of Nations motivated many of the technological innovators of the Industrial Revolution to establish and implement factory systems. The efforts of James Watt and Matthew Boulton led to the first integrated machine manufacturing facility in the world, including the application of concepts such as cost control systems to reduce waste and increase productivity and the institution of skills training for craftsmen.
Charles Babbage became associated with industrial engineering because of the concepts he introduced in his book On the Economy of Machinery and Manufactures, which he wrote as a result of his visits to factories in England and the United States in the early 1800s. The book covers subjects such as the time required to perform a given task, the effects of subdividing tasks into smaller and less detailed elements, and the advantages to be gained from repetitive tasks.
Interchangeable parts
Eli Whitney and Simeon North proved the feasibility of the notion of interchangeable parts in the manufacture of muskets and pistols for the US Government. Under this system, individual parts were mass-produced to tolerances enabling their use in any finished product. The result was a significant reduction in the need for skilled, specialized workers, which eventually produced the industrial environment that would be studied later.
Pioneers
Frederick Taylor (1856–1915) is generally credited as the father of the industrial engineering discipline. He earned a degree in mechanical engineering from Stevens Institute of Technology and held several patents for his inventions. His books, Shop Management and The Principles of Scientific Management, published in the early 1900s, were the beginning of industrial engineering. Improvement in work efficiency under his methods was based on improving work methods, developing work standards, and reducing the time required to carry out work. With an abiding faith in the scientific method, Taylor conducted many experiments in machine shop work, on machines as well as men. Taylor developed "time study" to measure the time taken for the various elements of a task, and then used the study's observations to reduce the time further. The time study was then repeated for the improved method to provide accurate time standards for planning manual tasks and for providing incentives.
The husband-and-wife team of Frank Gilbreth (1868–1924) and Lillian Gilbreth (1878–1972) was the other cornerstone of the industrial engineering movement whose work is housed at Purdue University School of Industrial Engineering. They categorized the elements of human motion into 18 basic elements called therbligs. This development permitted analysts to design jobs without knowledge of the time required to do a job. These developments were the beginning of a much broader field known as human factors or ergonomics.
In 1908, the first course on industrial engineering was offered as an elective at Pennsylvania State University, which became a separate program in 1909 through the efforts of Hugo Diemer. The first doctoral degree in industrial engineering was awarded in 1933 by Cornell University.
In 1912, Henry Laurence Gantt developed the Gantt chart, which outlines the actions of an organization along with their relationships. The chart was later developed by Wallace Clark into the form familiar to us today.
With the development of assembly lines, Henry Ford's factory (1913) accounted for a significant leap forward in the field. Ford reduced the assembly time of a car from more than 700 hours to 1.5 hours. In addition, he was a pioneer of welfare capitalism and a champion of providing financial incentives for employees to increase productivity.
In 1927, the then Technische Hochschule Berlin was the first German university to introduce the degree. The course of studies developed by Willi Prion was then still called Business and Technology and was intended to provide descendants of industrialists with an adequate education.
Total quality management (TQM), a comprehensive quality management system developed in the 1940s, gained momentum after World War II and was part of the recovery of Japan after the war.
The American Institute of Industrial Engineers was formed in 1948. The early work by F. W. Taylor and the Gilbreths was documented in papers presented to the American Society of Mechanical Engineers as interest grew from merely improving machine performance to the performance of the overall manufacturing process, most notably starting with the presentation by Henry R. Towne (1844–1924) of his paper The Engineer as An Economist (1886).
Modern practice
From 1960 to 1975, with the development of decision support systems for supply, such as material requirements planning (MRP), the emphasis shifted to the timing issues (inventory, production, compounding, transportation, etc.) of industrial organization. The Israeli scientist Dr. Jacob Rubinovitz installed the CMMS program, developed at IAI and Control-Data (Israel), in South Africa and worldwide beginning in 1976.
In the 1970s, with the penetration of Japanese management theories such as Kaizen and Kanban, Japan realized very high levels of quality and productivity. These theories improved issues of quality, delivery time, and flexibility. Companies in the west realized the great impact of Kaizen and started implementing their own continuous improvement programs. W. Edwards Deming made significant contributions in the minimization of variance starting in the 1950s and continuing to the end of his life.
In the 1990s, as industry globalized, the emphasis shifted to supply chain management and customer-oriented business process design. The theory of constraints, developed by Israeli scientist Eliyahu M. Goldratt (1985), is also a significant milestone in the field.
Comparison to other engineering disciplines
Engineering is traditionally decompositional. To understand the whole of something, it is first broken down into its parts. One masters the parts, then puts them back together to create a better understanding of how to master the whole. The approach of industrial and systems engineering (ISE) is opposite; any one part cannot be understood without the context of the whole system. Changes in one part of the system affect the entire system, and the role of a single part is to better serve the whole system.
Also, industrial engineering considers the human factor and its relation to the technical aspect of the situation and all of the other factors that influence the entire situation, while other engineering disciplines focus on the design of inanimate objects.
"Industrial Engineers integrate combinations of people, information, materials, and equipment that produce innovative and efficient organizations. In addition to manufacturing, Industrial Engineers work and consult in every industry, including hospitals, communications, e-commerce, entertainment, government, finance, food, pharmaceuticals, semiconductors, sports, insurance, sales, accounting, banking, travel, and transportation."
"Industrial Engineering is the branch of Engineering most closely related to human resources in that we apply social skills to work with all types of employees, from engineers to salespeople to top management. One of the main focuses of an Industrial Engineer is to improve the working environments of people – not to change the worker, but to change the workplace."
"All engineers, including Industrial Engineers, take mathematics through calculus and differential equations. Industrial Engineering is different in that it is based on discrete variable math, whereas all other engineering is based on continuous variable math. We emphasize the use of linear algebra and difference equations, as opposed to the use of differential equations which are so prevalent in other engineering disciplines. This emphasis becomes evident in optimization of production systems in which we are sequencing orders, scheduling batches, determining the number of materials handling units, arranging factory layouts, finding sequences of motions, etc. As, Industrial Engineers, we deal almost exclusively with systems of discrete components."
Etymology
While originally applied to manufacturing, the use of industrial in industrial engineering can be somewhat misleading, since it has grown to encompass any methodical or quantitative approach to optimizing how a process, system, or organization operates. In fact, the industrial in industrial engineering means the industry in its broadest sense. People have changed the term industrial to broader terms such as industrial and manufacturing engineering, industrial and systems engineering, industrial engineering and operations research, industrial engineering and management.
Sub-disciplines
Industrial engineering has many sub-disciplines, the most common of which are listed below. Although there are industrial engineers who focus exclusively on one of these sub-disciplines, many deal with a combination of them, such as supply chain and logistics, and facilities and energy management.
Methods engineering
Facilities engineering & energy management
Financial engineering
Energy engineering
Human factors & safety engineering
Information systems engineering & management
Manufacturing engineering
Operations engineering & management
Operations research & optimization
Policy planning
Production engineering
Quality & reliability engineering
Supply chain management & logistics
Systems engineering & analysis
Systems simulation
Related disciplines
Organization development & change management
Behavioral economics
Education
Industrial engineers study the interaction of human beings with machines, materials, information, procedures and environments when developing and designing technological systems.
Industrial engineering degrees accredited within any member country of the Washington Accord enjoy equal accreditation within all other signatory countries, thus allowing engineers from one country to practice engineering professionally in any other.
Universities offer degrees at the bachelor, masters, and doctoral level.
Undergraduate curriculum
In the United States, the undergraduate degree earned is either a bachelor of science (B.S.) or a bachelor of science and engineering (B.S.E.) in industrial engineering (IE). In South Africa, the undergraduate degree is a bachelor of engineering (BEng). Variations of the title include Industrial & Operations Engineering (IOE), and Industrial & Systems Engineering (ISE or ISyE).
The typical curriculum includes a broad math and science foundation spanning chemistry, physics, mechanics (i.e., statics, kinematics, and dynamics), materials science, computer science, electronics/circuits, engineering design, and the standard range of engineering mathematics (i.e., calculus, linear algebra, differential equations, statistics). For any engineering undergraduate program to be accredited, regardless of concentration, it must cover a largely similar span of such foundational work – which also overlaps heavily with the content tested on one or more engineering licensure exams in most jurisdictions.
The coursework specific to IE entails specialized courses in areas such as optimization, applied probability, stochastic modeling, design of experiments, statistical process control, simulation, manufacturing engineering, ergonomics/safety engineering, and engineering economics. Industrial engineering elective courses typically cover more specialized topics in areas such as manufacturing, supply chains and logistics, analytics and machine learning, production systems, human factors and industrial design, and service systems.
Certain business schools may offer programs with some overlapping relevance to IE, but the engineering programs are distinguished by a much more intensely quantitative focus, required engineering science electives, and the core math and science courses required of all engineering programs.
Graduate curriculum
The usual graduate degree earned is the master of science (MS), master of science and engineering (MSE) or master of engineering (MEng) in industrial engineering or various alternative related concentration titles.
Typical MS curricula may cover:
Manufacturing Engineering
Analytics and machine learning
Computer-aided manufacturing
Engineering economics
Financial engineering
Human factors engineering and ergonomics (safety engineering)
Lean Six Sigma
Management sciences
Materials management
Operations management
Operations research and optimization techniques
Predetermined motion time system and computer use for IE
Product development
Production planning and control
Productivity improvement
Project management
Reliability engineering and life testing
Robotics
Statistical process control or quality control
Supply chain management and logistics
System dynamics and policy planning
Systems simulation and stochastic processes
Time and motion study
Facilities design and work-space design
Quality engineering
System analysis and techniques
Differences in teaching
While industrial engineering as a formal degree has been around for years, consensus on what topics should be taught and studied differs across countries. For example, Turkey focuses on a very technical degree while Denmark, Finland and the United Kingdom have a management focus degree, thus making it less technical. The United States, meanwhile, focuses on case studies, group problem solving and maintains a balance between the technical and non-technical side.
Practicing engineers
Traditionally, a major aspect of industrial engineering was planning the layouts of factories and designing assembly lines and other manufacturing paradigms. And now, in lean manufacturing systems, industrial engineers work to eliminate wastes of time, money, materials, energy, and other resources.
Examples of where industrial engineering might be used include flow process charting, process mapping, designing an assembly workstation, strategizing for various operational logistics, consulting as an efficiency expert, developing a new financial algorithm or loan system for a bank, streamlining operation and emergency room location or usage in a hospital, planning complex distribution schemes for materials or products (referred to as supply-chain management), and shortening lines (or queues) at a bank, hospital, or a theme park.
Modern industrial engineers typically use predetermined motion time systems, computer simulation (especially discrete event simulation), along with extensive mathematical tools for modeling, such as mathematical optimization and queueing theory, and computational methods for system analysis, evaluation, and optimization. Industrial engineers also use the tools of data science and machine learning in their work owing to the strong relatedness of these disciplines with the field and the similar technical background required of industrial engineers (including a strong foundation in probability theory, linear algebra, and statistics, as well as having coding skills).
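As one concrete illustration of the queueing-theory toolkit mentioned above, the sketch below computes steady-state metrics for a single-server M/M/1 queue, the textbook model for a line at, say, a bank teller; the arrival and service rates are hypothetical.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state utilization, mean queue length, and mean wait for an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate       # server utilization
    lq = rho ** 2 / (1 - rho)               # mean number waiting in line
    wq = lq / arrival_rate                  # mean wait in line, by Little's law
    return rho, lq, wq

rho, lq, wq = mm1_metrics(arrival_rate=18, service_rate=20)  # customers per hour
print(f"utilization {rho:.0%}, queue length {lq:.1f}, wait {60 * wq:.0f} min")
```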
See also
International Conference on Mechanical Industrial & Energy Engineering
Related topics
Associations
Washington Accord
Notes
Further reading
Badiru, A. (Ed.) (2005). Handbook of industrial and systems engineering. CRC Press.
B. S. Blanchard and Fabrycky, W. (2005). Systems Engineering and Analysis (4th Edition). Prentice-Hall.
Salvendy, G. (Ed.) (2001). Handbook of industrial engineering: Technology and operations management. Wiley-Interscience.
Turner, W. et al. (1992). Introduction to industrial and systems engineering (Third edition). Prentice Hall.
Eliyahu M. Goldratt, Jeff Cox (1984). The Goal. North River Press; 2nd Rev edition (1992); 20th Anniversary edition (2004).
Miller, Doug, Towards Sustainable Labour Costing in UK Fashion Retail (February 5, 2013).
Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons.
Systems Engineering Body of Knowledge (SEBoK)
Traditional Engineering
Master of Engineering Administration (MEA)
Kambhampati, Venkata Satya Surya Narayana Rao (2017). "Principles of Industrial Engineering" IIE Annual Conference. Proceedings; Norcross (2017): 890-895.
External links | Industrial engineering | Engineering | 3,370 |
43,996,093 | https://en.wikipedia.org/wiki/International%20Berthing%20and%20Docking%20Mechanism | The International Berthing and Docking Mechanism (IBDM) is the European androgynous low impact docking mechanism that is capable of docking and berthing large and small spacecraft. The development of the IBDM is under ESA contract with QinetiQ Space as prime contractor.
History
The IBDM development was initiated as a joint development programme with NASA JSC. The first application of the IBDM was intended to be the ISS Crew Return Vehicle (CRV). In the original Agency to Agency agreement, it was decided to develop an Engineering Development Unit (EDU) to demonstrate the feasibility of the system and the associated technologies. NASA JSC were responsible for the system and avionics designs and ESA for the mechanical design. However, since the cancellation of the CRV program, the two Agencies have independently progressed with the docking system development.
The IBDM is designed to be compatible with the International Docking System Standard (IDSS) and is hence compatible with the ISS International Docking Adapters (IDA) on the US side of the ISS.
The European Space Agency started a cooperation with SNC to provide the IBDM for attaching this new vehicle to the ISS in the future. After SNC was selected as a commercial contractor to resupply the International Space Station in January 2016, ESA decided to spend 33 million euros ($36 million) to complete the design of the IBDM and build a flight model for Dream Chaser’s first mission.
Design
The IBDM provides both docking and berthing capability. The mechanism comprises a Soft Capture System (SCS) and a structural mating system called the Hard Capture System (HCS), explained in more detail below. The IBDM avionics operate in hot redundancy.
Soft Capture System
The SCS utilizes active control using six servo-actuated legs from RUAG Space (Switzerland), which are coordinated to control the SCS ring in its six degrees of freedom. The leg forces are measured in order to modify the compliance of the SCS ring and facilitate alignment of the active platform during capture. A large range of vehicle mass properties can be handled. Soft capture is achieved by mechanical latches.
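The actual IBDM control law is not public; the following is only a generic sketch of the force-feedback compliance ("admittance") control the paragraph above alludes to, in which the ring pose yields in proportion to the sensed contact wrench. All names, gains, and numbers are hypothetical, and a real system would map the six individual leg forces to ring motion through the platform kinematics.

```python
import numpy as np

def admittance_step(ring_pose, wrench, dt, compliance=0.002, damping=0.8):
    """One control step: yield the 6-DOF ring pose to the sensed contact wrench
    (3 forces + 3 torques resolved from the six measured leg loads)."""
    commanded_velocity = compliance * wrench   # softer gain => more compliant ring
    return ring_pose + damping * commanded_velocity * dt

pose = np.zeros(6)                                         # x, y, z, roll, pitch, yaw
wrench = np.array([120.0, -40.0, 300.0, 2.0, -1.5, 0.5])   # N and N*m, hypothetical
pose = admittance_step(pose, wrench, dt=0.01)
```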
Hard Capture System
The HCS uses structural hook mechanisms to close the sealed mated interface. QinetiQ Space has developed several generations of latches and hooks to come to the final hook design.
SENER (Spain) will be responsible for the further development and qualification of the HCS subsystem.
Features
The key feature of IBDM is that it is a fully computer controlled mechanism, and it is able to take part in smooth low impact docking and berthing (which reduces contact forces and resultant structural loads), autonomous operations in case of failures, flexibility in vehicle mass making it suitable for applications ranging from explorations to resupply missions. A backup safe mode is also available in case of failure.
Application
The American company Sierra Nevada Corporation (SNC) is developing the Dream Chaser, which is a small reusable spacecraft that is selected to transport cargo and/or crew to the ISS. The European Space Agency has started a cooperation with SNC to potentially provide the IBDM for attaching this new vehicle to the ISS in the future. The IBDM will be mounted to the unpressurised cargo module, which will be ejected before reentry.
Status
The IBDM development has successfully passed the Critical Design Review (December 2015).
An engineering model of the mechanism and its hot-redundant avionics has been developed and successfully tested (March 2016). The performance of the system has been verified at the certified SDTS docking test facility at NASA JSC.
The consortium has since started the manufacturing of the full IBDM qualification model (SCS + HCS).
References
Astrodynamics
Orbital maneuvers
Spacecraft docking systems | International Berthing and Docking Mechanism | Engineering | 765 |
48,934,192 | https://en.wikipedia.org/wiki/Angle%20of%20incidence%20%28optics%29 | The angle of incidence, in geometric optics, is the angle between a ray incident on a surface and the line perpendicular (at a 90° angle) to the surface at the point of incidence, called the normal. The ray can be formed by any kind of wave: optical, acoustic, microwave, X-ray and so on. In the figure below, the line representing a ray makes an angle θ with the normal (dotted line). The angle of incidence at which light is first totally internally reflected is known as the critical angle. The angle of reflection and angle of refraction are other angles related to beams.
In computer graphics and geography, the angle of incidence is also known as the illumination angle of a surface with a light source, such as the Earth's surface and the Sun. It can also be equivalently described as the angle between the tangent plane of the surface and another plane at right angles to the light rays. This means that the illumination angle of a certain point on Earth's surface is 0° if the Sun is precisely overhead and that it is 90° at sunset or sunrise.
Determining the angle of reflection with respect to a planar surface is trivial, but the computation for almost any other surface is significantly more difficult.
Grazing angle or glancing angle
When dealing with a beam that is nearly parallel to a surface, it is sometimes more useful to refer to the angle between the beam and the surface tangent, rather than that between the beam and the surface normal. The 90-degree complement to the angle of incidence is called the grazing angle or glancing angle. Incidence at small grazing angles is called "grazing incidence."
Grazing incidence diffraction is used in X-ray spectroscopy and atom optics, where significant reflection can be achieved only at small values of the grazing angle. Ridged mirrors are designed to reflect atoms coming at a small grazing angle. This angle is usually measured in milliradians. In optics, there is Lloyd's mirror.
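Tying these angles together: Snell's law (n1 sin θ1 = n2 sin θ2) relates the angle of incidence to the angle of refraction, the critical angle follows from it, and the grazing angle is the 90° complement of the incidence angle. The sketch below is illustrative only, and the refractive indices are hypothetical examples.

```python
import math

def refraction_angle(theta_incidence_deg, n1, n2):
    """Refraction angle in degrees, or None past the critical angle
    (i.e., total internal reflection)."""
    s = n1 * math.sin(math.radians(theta_incidence_deg)) / n2
    return None if s > 1 else math.degrees(math.asin(s))

n1, n2 = 1.50, 1.00                      # e.g. glass into air
print(math.degrees(math.asin(n2 / n1)))  # critical angle, ~41.8 deg
print(refraction_angle(30, n1, n2))      # ~48.6 deg
print(90 - 75)                           # grazing angle of a 75-deg ray: 15 deg
```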
See also
Effect of Sun angle on climate
Illumination angle
Phase angle (astronomy)
Plane of incidence
Reflection (physics)
Refraction
Scattering vector
Total internal reflection
References
External links
geometry : rebound on the strip billiards Flash animation
Geometrical optics
Angle | Angle of incidence (optics) | Physics | 442 |
42,400 | https://en.wikipedia.org/wiki/Socialization | In sociology, socialization (also socialisation – see spelling differences) is the process of internalizing the norms and ideologies of society. Socialization encompasses both learning and teaching and is thus "the means by which social and cultural continuity are attained".
Socialization is strongly connected to developmental psychology and behaviourism. Humans need social experiences to learn their culture and to survive.
Socialization essentially represents the whole process of learning throughout the life course and is a central influence on the behavior, beliefs, and actions of adults as well as of children.
Socialization may lead to desirable outcomes—sometimes labeled "moral"—as regards the society where it occurs. Individual views are influenced by the society's consensus and usually tend toward what that society finds acceptable or "normal". Socialization provides only a partial explanation for human beliefs and behaviors, maintaining that agents are not blank slates predetermined by their environment; scientific research provides evidence that people are shaped by both social influences and genes.
Genetic studies have shown that a person's environment interacts with their genotype to influence behavioral outcomes.
Socialization is the process by which individuals learn the culture of their own society.
History
Notions of society and the state of nature have existed for centuries. In its earliest usages, socialization was simply the act of socializing or another word for socialism. Socialization as a concept originated concurrently with sociology, as sociology was defined as the treatment of "the specifically social, the process and forms of socialization, as such, in contrast to the interests and contents which find expression in socialization". In particular, socialization consisted of the formation and development of social groups, and also the development of a social state of mind in the individuals who associate. Socialization is thus both a cause and an effect of association. The term was relatively uncommon before 1940, but became popular after World War II, appearing in dictionaries and scholarly works such as the theory of Talcott Parsons.
Stages of moral development
Lawrence Kohlberg studied moral reasoning and developed a theory of how individuals reason situations as right from wrong. The first stage is the pre-conventional stage, where a person (typically children) experience the world in terms of pain and pleasure, with their moral decisions solely reflecting this experience. Second, the conventional stage (typical for adolescents and adults) is characterized by an acceptance of society's conventions concerning right and wrong, even when there are no consequences for obedience or disobedience. Finally, the post-conventional stage (more rarely achieved) occurs if a person moves beyond society's norms to consider abstract ethical principles when making moral decisions.
Stages of psychosocial development
Erik H. Erikson (1902–1994) explained the challenges throughout the life course. The first stage in the life course is infancy, where babies learn trust and mistrust. The second stage is toddlerhood, where children around the age of two struggle with the challenge of autonomy versus doubt. In stage three, preschool, children struggle to understand the difference between initiative and guilt. In stage four, pre-adolescence, children learn about industriousness and inferiority. In the fifth stage, called adolescence, teenagers experience the challenge of gaining identity versus confusion. The sixth stage, young adulthood, is when young people gain insight into life when dealing with the challenge of intimacy and isolation. In stage seven, or middle adulthood, people experience the challenge of trying to make a difference (versus self-absorption). In the final stage, stage eight or old age, people are still learning about the challenge of integrity and despair. This concept has been further developed by Klaus Hurrelmann and Gudrun Quenzel using the dynamic model of "developmental tasks".
Behaviorism
George Herbert Mead (1863–1931) developed a theory of social behaviorism to explain how social experience develops an individual's self-concept. Mead's central concept is the self: it is composed of self-awareness and self-image. Mead claimed that the self is not there at birth; rather, it is developed with social experience. Since social experience is the exchange of symbols, people tend to find meaning in every action. Seeking meaning leads us to imagine the intention of others. Understanding intention requires imagining the situation from the other's point of view. In effect, others are a mirror in which we can see ourselves. Charles Horton Cooley (1864–1929) coined the term looking-glass self, which means self-image based on how we think others see us. According to Mead, the key to developing the self is learning to take the role of the other. With limited social experience, infants can only develop a sense of identity through imitation. Gradually children learn to take the roles of several others. The final stage is the generalized other, which refers to widespread cultural norms and values we use as a reference for evaluating others.
Contradictory evidence to behaviorism
Behaviorism claims that when infants are born, they lack social experience or a self. The social pre-wiring hypothesis, by contrast, offers scientific evidence that social behavior is partly inherited and can be observed in infants and even in foetuses. "Wired to be social" means that infants are not taught that they are social beings; they are born as prepared social beings.
The social pre-wiring hypothesis refers to the ontogeny of social interaction and is informally known as being "wired to be social". The theory asks whether a propensity for socially oriented action is already present before birth. Research on the theory concludes that newborns are born into the world with a unique genetic wiring to be social.
Circumstantial evidence supporting the social pre-wiring hypothesis can be found in newborns' behavior. Newborns, not even hours after birth, have been found to display a preparedness for social interaction, expressed in ways such as their imitation of facial gestures. This observed behavior cannot be attributed to any current form of socialization or social construction. Rather, newborns most likely inherit social behavior and identity, to some extent, through genetics.
Principal evidence for this theory comes from examining twin pregnancies. The main argument is that if there are social behaviors that are inherited and developed before birth, then one should expect twin foetuses to engage in some form of social interaction before they are born. Thus, ten foetuses were analyzed over a period of time using ultrasound techniques. Using kinematic analysis, the experiment found that the twin foetuses interacted with each other for longer periods and more often as the pregnancies went on. The researchers concluded that the movements between the co-twins were not accidental but specifically aimed.
The study supported the social pre-wiring hypothesis: "The central advance of this study is the demonstration that 'social actions' are already performed in the second trimester of gestation. Starting from the 14th week of gestation twin foetuses plan and execute movements specifically aimed at the co-twin. These findings force us to predate the emergence of social behavior: when the context enables it, as in the case of twin foetuses, other-directed actions are not only possible but predominant over self-directed actions."
Types
Primary socialization
Primary socialization occurs when a child learns the attitudes, values, and actions appropriate to individuals as members of a particular culture. Primary socialization for a child is very important because it sets the groundwork for all future socialization. It is mainly influenced by immediate family and friends. For example, if a child's mother expresses a discriminatory opinion about a minority or majority group, then that child may think this behavior is acceptable and could continue to have this opinion about that minority or majority group.
Secondary socialization
Secondary socialization refers to the process of learning what is the appropriate behavior as a member of a smaller group within the larger society. It involves the behavioral patterns reinforced by socializing agents of society, and it takes place outside the home. It is where children and adults learn how to act in a way that is appropriate for the situations they are in. Schools require very different behavior from the home, and children must act according to new rules. New teachers, likewise, have to act in a way that is different from how pupils act and learn the new rules from the people around them. Secondary socialization is usually associated with teenagers and adults and involves smaller changes than those occurring in primary socialization. Examples of secondary socialization may include entering a new profession or relocating to a new environment or society.
Anticipatory socialization
Anticipatory socialization refers to the processes of socialization in which a person "rehearses" for future positions, occupations, and social relationships. For example, a couple might move in together before getting married in order to try out, or anticipate, what living together will be like. Research by Kenneth J. Levine and Cynthia A. Hoffner identifies parents as the main source of anticipatory socialization in regard to jobs and careers.
Resocialization
Resocialization refers to the process of discarding former behavior-patterns and reflexes while accepting new ones as part of a life transition. This can occur throughout the human life-span. Resocialization can be an intense experience, with individuals experiencing a sharp break with their past, as well as a need to learn and be exposed to radically different norms and values. One common example involves resocialization through a total institution, or "a setting in which people are isolated from the rest of society and manipulated by an administrative staff". Resocialization via total institutions involves a two-step process: 1) the staff work to root out a new inmate's individual identity; and 2) the staff attempt to create for the inmate a new identity.
Other examples include the experiences of a young person leaving home to join the military, or of a religious convert internalizing the beliefs and rituals of a new faith. Another example would be the process by which a transsexual person learns to function socially in a dramatically altered gender-role.
Organizational socialization
Organizational socialization is the process whereby an employee learns the knowledge and skills necessary to assume his or her role in an organization. As newcomers become socialized, they learn about the organization and its history, values, jargon, culture, and procedures. Acquired knowledge about new employees' future work-environment affects the way they are able to apply their skills and abilities to their jobs. How actively engaged the employees are in pursuing knowledge affects their socialization process. New employees also learn about their work group, the specific people they will work with on a daily basis, their own role in the organization, the skills needed to do their job, and both formal procedures and informal norms. Socialization functions as a control system in that newcomers learn to internalize and obey organizational values and practices.
Group socialization
Group socialization is the theory that an individual's peer groups, rather than parental figures, become the primary influence on personality and behavior in adulthood. Parental behavior and the home environment have either no effect on the social development of children, or the effect varies significantly between children. Adolescents spend more time with peers than with parents. Therefore, peer groups have stronger correlations with personality development than parental figures do. For example, twin brothers with an identical genetic heritage will differ in personality because they have different groups of friends, not necessarily because their parents raised them differently. Behavioral genetics suggests that up to fifty percent of the variance in adult personality is due to genetic differences. The environment in which a child is raised accounts for only approximately ten percent of the variance of an adult's personality. As much as twenty percent of the variance is due to measurement error. This suggests that only a very small part of an adult's personality is influenced by factors which parents control (i.e. the home environment). Harris grants that while siblings do not have identical experiences in the home environment (making it difficult to associate a definite figure to the variance of personality due to home environments), the variance found by current methods is so low that researchers should look elsewhere to try to account for the remaining variance. Harris also states that developing long-term personality characteristics away from the home environment would be evolutionarily beneficial because future success is more likely to depend on interactions with peers than on interactions with parents and siblings. Also, because of already existing genetic similarities with parents, developing personalities outside of childhood home environments would further diversify individuals, increasing their evolutionary success.
Stages
Individuals and groups change their evaluations of and commitments to each other over time. There is a predictable sequence of stages that occur as an individual transitions through a group: investigation, socialization, maintenance, resocialization, and remembrance. During each stage, the individual and the group evaluate each other, which leads to an increase or decrease in commitment to socialization. This socialization pushes the individual from prospective member to new, full, marginal, and finally ex-member.
Stage 1: Investigation
This stage is marked by a cautious search for information. The individual compares groups in order to determine which one will fulfill their needs (reconnaissance), while the group estimates the value of the potential member (recruitment). The end of this stage is marked by entry to the group, whereby the group asks the individual to join and they accept the offer.
Stage 2: Socialization
Now that the individual has moved from a prospective member to a new member, the recruit must accept the group's culture. At this stage, the individual accepts the group's norms, values, and perspectives (assimilation), and the group may adapt to fit the new member's needs (accommodation). The acceptance transition-point is then reached and the individual becomes a full member. However, this transition can be delayed if the individual or the group reacts negatively. For example, the individual may react cautiously or misinterpret other members' reactions in the belief that they will be treated differently as a newcomer.
Stage 3: Maintenance
During this stage, the individual and the group negotiate what contribution is expected of members (role negotiation). While many members remain in this stage until the end of their membership, some individuals may become dissatisfied with their role in the group or fail to meet the group's expectations (divergence).
Stage 4: Resocialization
If the divergence point is reached, the former full member takes on the role of a marginal member and must be resocialized. There are two possible outcomes of resocialization: the parties resolve their differences and the individual becomes a full member again (convergence), or the group and the individual part ways via expulsion or voluntary exit.
Stage 5: Remembrance
In this stage, former members reminisce about their memories of the group and make sense of their recent departure. If the group reaches a consensus on their reasons for departure, conclusions about the overall experience of the group become part of the group's tradition.
Gender socialization
Henslin contends that "an important part of socialization is the learning of culturally defined gender roles". Gender socialization refers to the learning of behavior and attitudes considered appropriate for a given sex: boys learn to be boys and girls learn to be girls. This "learning" happens by way of many different agents of socialization. The behavior that is seen to be appropriate for each gender is largely determined by societal, cultural, and economic values in a given society. Gender socialization can therefore vary considerably among societies with different values. The family is certainly important in reinforcing gender roles, but so are groups - including friends, peers, school, work, and the mass media. Social groups reinforce gender roles through "countless subtle and not so subtle ways". In peer-group activities, stereotypic gender-roles may also be rejected, renegotiated, or artfully exploited for a variety of purposes.
Carol Gilligan compared the moral development of girls and boys in her theory of gender and moral development. She claimed that boys have a justice perspective - meaning that they rely on formal rules to define right and wrong. Girls, on the other hand, have a care-and-responsibility perspective, where personal relationships are considered when judging a situation. Gilligan also studied the effect of gender on self-esteem. She claimed that society's socialization of females is the reason why girls' self-esteem diminishes as they grow older. Girls struggle to regain their personal strength when moving through adolescence as they have fewer female teachers and most authority figures are men.
As parents are present in a child's development from the beginning, their influence in a child's early socialization is very important, especially in regard to gender roles. Sociologists have identified four ways in which parents socialize gender roles in their children: Shaping gender related attributes through toys and activities, differing their interaction with children based on the sex of the child, serving as primary gender models, and communicating gender ideals and expectations.
Sociologist of gender R.W. Connell contends that socialization theory is "inadequate" for explaining gender, because it presumes a largely consensual process except for a few "deviants", when really most children revolt against pressures to be conventionally gendered; because it cannot explain contradictory "scripts" that come from different socialization agents in the same society, and because it does not account for conflict between the different levels of an individual's gender (and general) identity.
Racial socialization
Racial socialization, or racial-ethnic socialization, has been defined as "the developmental processes by which children acquire the behaviors, perceptions, values, and attitudes of an ethnic group, and come to see themselves and others as members of the group". The existing literature conceptualizes racial socialization as having multiple dimensions. Researchers have identified five dimensions that commonly appear in the racial socialization literature: cultural socialization, preparation for bias, promotion of mistrust, egalitarianism, and other. Cultural socialization, sometimes referred to as "pride development", refers to parenting practices that teach children about their racial history or heritage.
Preparation for bias refers to parenting practices focused on preparing children to be aware of, and cope with, discrimination. Promotion of mistrust refers to the parenting practices of socializing children to be wary of people from other races. Egalitarianism refers to socializing children with the belief that all people are equal and should be treated with common humanity. In the United States, white people are socialized to perceive race as a zero-sum game and a black-white binary.
Oppression socialization
Oppression socialization refers to the process by which "individuals develop understandings of power and political structure, particularly as these inform perceptions of identity, power, and opportunity relative to gender, racialized group membership, and sexuality". This action is a form of political socialization in its relation to power and the persistent compliance of the disadvantaged with their oppression using limited "overt coercion".
Language socialization
Based on comparative research in different societies, and focusing on the role of language in child development, linguistic anthropologists Elinor Ochs and Bambi Schieffelin have developed the theory of language socialization.
They discovered that the processes of enculturation and socialization do not occur apart from the process of language acquisition, but that children acquire language and culture together in what amounts to an integrated process. Members of all societies socialize children both to and through the use of language; acquiring competence in a language, the novice is by the same token socialized into the categories and norms of the culture, while the culture, in turn, provides the norms of the use of language.
Planned socialization
Planned socialization occurs when other people take actions designed to teach or train others. This type of socialization can take on many forms and can occur at any point from infancy onward.
Natural socialization
Natural socialization occurs when infants and youngsters explore, play and discover the social world around them. Natural socialization is easily seen when looking at the young of almost any mammalian species (and some birds).
On the other hand, planned socialization is mostly a human phenomenon; all through history, people have made plans for teaching or training others. Both natural and planned socialization can have good and bad qualities: it is useful to learn the best features of both natural and planned socialization in order to incorporate them into life in a meaningful way.
Political socialization
Socialization influences the economic, social, and political development of any particular country, and the balance struck between nature and nurture in that process also shapes whether its effects on society are beneficial or harmful. Political socialization is described as "the long developmental process by which an infant (even an adult) citizen learns, imbibes and ultimately internalizes the political culture (core political values, beliefs, norms and ideology) of his political system in order to make him a more informed and effective political participant."
A society's political culture is inculcated in its citizens and passed down from one generation to the next as part of the political socialization process. Agents of socialization are thus people, organizations, or institutions that have an impact on how people perceive themselves, behave, or have other orientations. In contemporary democratic government, political parties are the main forces behind political socialization.
Socialization also facilitates business, trade, and foreign investment globally, and the connections it creates through shared services and media ease the development and transfer of technology. To lead a country to a higher level of development and to construct a decent, democratic society for nation-building, citizens must cultivate sound morals, ethics, and values, preserve human rights, and exercise sound judgment. Through such exchanges, developing nations can, for example, acquire agricultural technology and machinery like tractors, harvesters, and agrochemicals to enhance the agricultural sector of the economy.
Positive socialization
Positive socialization is the type of social learning that is based on pleasurable and exciting experiences. Individual humans tend to like the people who fill their social learning processes with positive motivation, loving care, and rewarding opportunities. Positive socialization occurs when desired behaviors are reinforced with a reward, encouraging the individual to continue exhibiting similar behaviors in the future.
Negative socialization
Negative socialization occurs when socialization agents use punishment, harsh criticism, or anger to try to "teach us a lesson"; and often we come to dislike both negative socialization and the people who impose it on us. There are all types of mixes of positive and negative socialization, and the more positive social learning experiences we have, the happier we tend to be—especially if we are able to learn useful information that helps us cope well with the challenges of life. A high ratio of negative to positive socialization can make a person unhappy, leading to defeated or pessimistic feelings about life.
Bullying can exemplify negative socialization.
Institutions
In the social sciences, institutions are the structures and mechanisms of social order and cooperation governing the behavior of individuals within a given human collectivity. Institutions are identified with a social purpose and permanence, transcending individual human lives and intentions, and with the making and enforcing of rules governing cooperative human behavior.
Productive processing of reality
From the late 1980s, sociological and psychological theories have been connected with the term socialization. One example of this connection is the theory of Klaus Hurrelmann. In his book Social Structure and Personality Development, he develops the model of productive processing of reality. The core idea is that socialization refers to an individual's personality development. It is the result of the productive processing of interior and exterior realities. Bodily and mental qualities and traits constitute a person's inner reality; the circumstances of the social and physical environment embody the external reality. Reality processing is productive because human beings actively grapple with their lives and attempt to cope with the attendant developmental tasks. The success of such a process depends on the personal and social resources available. Incorporated within all developmental tasks is the necessity to reconcile personal individuation and social integration and so secure the "I-dentity". The process of productive processing of reality is an enduring process throughout the life course.
Oversocialization
The problem of order, or Hobbesian problem, questions the existence of social orders and asks if it is possible to oppose them. Émile Durkheim viewed society as an external force controlling individuals through the imposition of sanctions and codes of law. However, constraints and sanctions also arise internally as feelings of guilt or anxiety.
See also
References
Further reading
Bayley, Robert; Schecter, Sandra R. (2003). Multilingual Matters.
Duff, Patricia A.; Hornberger, Nancy H. (2010). Language Socialization: Encyclopedia of Language and Education, Volume 8. Springer.
Kramsch, Claire (2003). Language Acquisition and Language Socialization: Ecological Perspectives – Advances in Applied Linguistics. Continuum International Publishing Group.
McQuail, Dennis (2005). McQuail's Mass Communication Theory (5th ed.). London: Sage.
Mehan, Hugh (1991). Sociological Foundations Supporting the Study of Cultural Diversity. National Center for Research on Cultural Diversity and Second Language Learning.
White, Graham (1977). Socialisation. London: Longman.
Conformity
Deviance (sociology)
Sociological terminology
Majority–minority relations | Socialization | Biology | 5,146 |
52,942,486 | https://en.wikipedia.org/wiki/B.%20D.%20Kulkarni | Bhaskar Dattatraya Kulkarni (5 May 1949 – 14 January 2019), popularly known as B. D. among his friends and colleagues, was an Indian chemical reaction engineer and a Distinguished Scientist of Chemical Engineering and Process Development at the National Chemical Laboratory, Pune. An INSA Senior Scientist and a J. C. Bose fellow, he was known for his work on fluidized bed reactors and chemical reactors. He is an elected fellow of the Indian Academy of Sciences, Indian National Science Academy, The World Academy of Sciences and the Indian National Academy of Engineering. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards for his contributions to Engineering Sciences in 1988.
Biography
B. D. Kulkarni, born on 5 May 1949 into a Deshastha Brahmin family in Nagpur in the western Indian state of Maharashtra, did his schooling at New English High School and, after passing the matriculation with distinction in 1964, did his pre-university course at Hislop College before joining Laxminarayan Institute of Technology of Nagpur University, from where he graduated in chemical engineering in 1970. He continued there to complete his master's degree in chemical engineering in 1972 and enrolled at National Chemical Laboratory, Pune (NCL) in 1973 for his doctoral degree under the guidance of L. K. Doraiswamy, a noted chemical engineer and Padma Bhushan recipient. He worked under Doraiswamy, who is credited with developing Organic Synthesis Engineering as a definitive scientific stream, and secured a PhD in 1978, during which time he was invited by Man Mohan Sharma, a Padma Vibhushan laureate, to join the Institute of Chemical Technology, Mumbai, but, on advice from Doraiswamy, he remained at NCL, where he would spend the rest of his career. He served the institution in various capacities as Scientist C (1979–84), Scientist EI (1984–88), Scientist EII (1988), Scientist F (1988–93) and superannuated as Scientist G in 2010. On the administration front, he served as the Deputy Director and Head of the Chemical Engineering Division. Post-retirement, he served NCL as a Distinguished Scientist and continued his research until his death in January 2019.
Career
Kulkarni's research was mainly in the fields of chemical reaction engineering, applied mathematics, and transport phenomena, and he is known for his work on fluidized bed reactors and chemical reactors. He is credited with introducing an integer-solution approach and novel ideas on noise-induced transitions, and his work on artificial intelligence-based evolutionary formalisms is reported to have assisted in a better understanding of reacting and reactor systems. His work spanned from conventional chemical reaction engineering in gas-liquid and gas-solid catalytic reactions to reactor stability to stochastic analysis of chemically reacting systems, as well as inter-disciplinary fields. A model reaction system termed Encillator, an analytical approach for solving model equations based on arithmetics, use of initial value formalism for modelling fluidized-bed reactors, and the introduction of normal form theory, evolutionary algorithms and stochastic approximation in analysing reactor behavior and performance are some of the contributions made by him. He held US and Indian patents for several processes he developed, which include Method and an Apparatus for the Identification and/or Separation of Complex Composite Signals into its Deterministic and Noisy Components, Process for preparation of pure alkyl esters from alkali metal salt of carboxylic acid, and Enantioselective resolution process for arylpropionic acid drugs from the racemic mixture.
Kulkarni's research has been documented in several peer-reviewed articles; the online article repository of the Indian Academy of Sciences lists 250 of them. He also contributed chapters to books edited by others and published seven edited or authored texts, including Recent Trends in Chemical Reaction Engineering, Advances in Transport Processes, The Analysis of Chemically Reacting Systems: A Stochastic Approach and Transport Processes in Fluidized Bed Reactors. He guided several master's and doctoral scholars in their studies and conducted training for students on mathematical modelling. He also served as one of the directors of Hitech Bio Sciences India Limited, a probiotics and nutraceuticals manufacturer based in Pune, and was a member of the advisory committee of the International Conference on Sustainable Development for Energy and Environment (ICSDEE 2017).
Awards and honors
The Indian National Science Academy awarded Kulkarni the Young Scientist Medal in 1981, making him the first chemical engineer to receive the honor. He received another award in 1981, the Amar Dye Chem Award of the Indian Institute of Chemical Engineers; IIChE would honor him again in 1988 with the Herdillia Award for Excellence in Basic Research in Chemical Engineering. The Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, the same year. The National Chemical Laboratory selected him as the Best Scientist of the Year in 1992, and the year 2000 brought him two awards: the ChemTech-CEW Award of the ChemTech Foundation and the FICCI Award of the Federation of Indian Chambers of Commerce & Industry.
Kulkarni, a CSIR Distinguished Scientist at NCL, was elected as a fellow by Maharashtra Academy of Sciences in 1988, the same year as he became a fellow of the Indian Academy of Sciences. He received the elected fellowship of the Indian National Academy of Engineering and the Golden Jubilee Fellowship of the Institute of Chemical Technology (then known as the University Department of Chemical Technology-UDCT) in 1989. The Indian National Science Academy elected him as a fellow in 1990 and The World Academy of Sciences chose him as an elected fellow in 2002. When the Science and Engineering Research Board of the Department of Science and Technology selected scientists for the J. C. Bose National Fellowship in 2009, he was also included in the list of recipients. Industrial & Engineering Chemistry Research, the official journal of the American Chemical Society, issued a festschrift on him in 2009 titled Kulkarni Issue, with the guest editorial written by his mentor, L. K. Doraiswamy, and the issue featured his biosketch jointly written by Ganapati D. Yadav, V. K. Jayaraman and V. Ravikumar, all known chemical engineers.
Selected works
Books
Chapters
Patents
See also
List of chemical engineers
History of chemical engineering
Notes
References
External links
Recipients of the Shanti Swarup Bhatnagar Award in Engineering Science
1949 births
Fellows of the Indian Academy of Sciences
Fellows of the Indian National Science Academy
People from Nagpur district
TWAS fellows
Indian chemical engineers
Chemical reaction engineering
20th-century Indian inventors
2019 deaths
Fellows of the Indian National Academy of Engineering
Indian technology writers
20th-century Indian engineers
Engineers from Maharashtra | B. D. Kulkarni | Chemistry,Engineering | 1,403 |
54,802,226 | https://en.wikipedia.org/wiki/Phyllis%20Zee | Phyllis C. Zee is the Benjamin and Virginia T. Boshes Professor in Neurology, the director of the Center for Circadian and Sleep Medicine (CCSM) and the chief of the Division of Sleep Medicine (neurology) at the Feinberg School of Medicine, Northwestern University, Chicago. She is also the medical director of Sleep Disorders Center at Northwestern Memorial Hospital.
Career
As director of CCSM, Zee oversees an interdisciplinary program in basic and translational sleep and circadian rhythm research, and findings from her team have paved the way for innovative approaches to improve sleep and circadian health. Zee is the founder of the first circadian medicine clinic in the US, where innovative treatments are available for patients with circadian rhythm disorders.
A central theme of her research program is to understand the role of circadian-sleep interactions in the expression and development of cardiometabolic and neurologic disorders. Zee's research has focused on the effects of age and neurodegeneration on sleep and circadian rhythms and pathophysiology of circadian sleep-wake disorders. In addition, her laboratory is studying the effects of circadian-sleep based interventions, such as exercise, bright light and feed-fast schedules on cognitive, cardiovascular and metabolic functions and their potential to delay cardiometabolic aging and neurodegeneration. Recently her research team has also been interested in the use of acoustic and electrical neurostimulation to enhance slow wave sleep and memory in older adults.
Zee has also authored more than 300 peer-reviewed original articles, reviews and chapters on the topics of sleep, circadian rhythms, and sleep/wake disorders. She has also trained over 50 pre-doctoral and post-doctoral students and has mentored numerous faculty members. Zee is a fellow of the American Academy of Sleep Medicine, fellow of the American Academy of Neurology and member of the American Neurological Association. She has served on numerous national and international committees, NIH scientific review panels, and international advisory boards. She is past president of the Sleep Research Society, past president of the Sleep Research Society Foundation and past chair of the NIH Sleep Disorders Research Advisory Board. Zee is a member of the NIH National Heart, Lung, and Blood Advisory Council. She is the recipient of the 2011 American Academy of Neurology Sleep Science Award and the 2014 American Academy of Sleep Medicine academic honor, the William C. Dement Academic Achievement Award.
References
External links
http://www.sleepupdates.org/Faculty/PhyllisCZee,MD,PhD.aspx
Living people
Northwestern University faculty
American neurologists
Women neurologists
American women neuroscientists
American neuroscientists
Year of birth missing (living people)
Sleep researchers
American women academics
21st-century American women
Fellows of the American Academy of Neurology | Phyllis Zee | Biology | 574 |
48,708,998 | https://en.wikipedia.org/wiki/Human%20resource%20management%20system | A human resources management system (HRMS), also human resources information system (HRIS) or human capital management (HCM) system, is a form of human resources (HR) software that combines a number of systems and processes to ensure the easy management of human resources, business processes and data. Human resources software is used by businesses to combine a number of necessary HR functions, such as storing employee data, managing payroll, recruitment, benefits administration (total rewards), time and attendance, employee performance management, and tracking competency and training records.
A human resources management system ensures everyday human resources processes are manageable and easy to access. The field merges human resources as a discipline and, in particular, its basic HR activities and processes with the information technology field. This software category is analogous to how data processing systems evolved into the standardized routines and packages of enterprise resource planning (ERP) software. On the whole, these ERP systems have their origin from software that integrates information from different applications into one universal database. The linkage of financial and human resource modules through one database creates the distinction that separates an HRMS, HRIS, or HCM system from a generic ERP solution.
History
Structured management of human resource information began with payroll systems in the late 1950s and continued into the 1960s, when employee data was first automated.
The first enterprise resource planning (ERP) system that integrated human resources functions was SAP R/2 (later to be replaced by R/3 and S/4HANA), introduced in 1979. This system allowed users to combine corporate data in real time and regulate processes from a single mainframe environment. Many of today's popular HR systems still offer considerable ERP and payroll functionality.
The first completely HR-centered client-server system for the enterprise market was PeopleSoft, released in 1987 and later bought by Oracle in 2005. Hosted and updated by clients, PeopleSoft overtook the mainframe environment concept in popularity. Oracle has also developed multiple similar BPM systems to automate corporate operations, including Oracle Cloud HCM.
Beginning in the late 1990s, HR vendors started offering cloud-hosted HR services to make this technology more accessible to small and remote teams. Instead of client–server installations, companies began using online accounts on web-based portals to manage their employees' records and performance. Mobile applications have also become more common.
HRIS and HRMS technologies have allowed HR professionals to move away from their traditional administrative work and have positioned them as strategic assets to the company. For example, these strategic roles include employee development, as well as analyzing the workforce to target talent-rich areas.
Functions
The function of human resources departments is administrative and common to all organizations. Organizations may have formalized selection, evaluation, and payroll processes. Management of "human capital" has progressed to an imperative and complex process. The HR function consists of tracking existing employee data, which traditionally includes personal histories, skills, capabilities, accomplishments, and salary. To reduce the manual workload of these administrative activities, organizations began to electronically automate many of these processes by introducing specialized human resource management systems.
HR executives rely on internal or external IT professionals to develop and maintain an integrated HRMS. Before client–server architectures evolved in the late 1980s, many HR automation processes were relegated to mainframe computers that could handle large amounts of data transactions. Because of the high capital investment necessary to buy or program proprietary software, these internally developed HRMS were limited to organizations that possessed a large amount of capital. The advent of client–server, application service provider, and software as a service (SaaS) architectures enabled higher administrative control of such systems. Currently, human resource management systems tend to encompass:
Retaining staff
Hiring
Onboarding and offboarding
Administration
Managing payroll
Tracking and managing employee benefits
HR planning
Recruiting
Learning management
Performance management and appraisals
Employee self-service
Scheduling and rota management
Absence management
Leave management
Reporting and analytics
Employee reassignment
Grievance handling by following precedents
The payroll module automates the pay process by gathering data on employee time and attendance, calculating various deductions and taxes, and generating periodic pay cheques and employee tax reports. Data is generally fed from human resources and timekeeping modules to calculate automatic deposit and manual cheque writing capabilities. This module can encompass all employee-related transactions as well as integrate with existing financial management systems.
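As a rough sketch of the calculation flow such a module automates, the following Python fragment computes net pay from a time record. All field names, rates, and the flat-tax assumption are invented for illustration and do not reflect any particular vendor's system.

from dataclasses import dataclass

@dataclass
class TimeRecord:
    employee_id: str
    regular_hours: float
    overtime_hours: float

def gross_pay(rec: TimeRecord, hourly_rate: float, overtime_multiplier: float = 1.5) -> float:
    # Hours typically arrive from the time and attendance module
    return (rec.regular_hours + rec.overtime_hours * overtime_multiplier) * hourly_rate

def net_pay(gross: float, tax_rate: float = 0.20, benefit_deduction: float = 50.0) -> float:
    # Deduction inputs would normally be fed from the tax and benefits modules
    return gross - gross * tax_rate - benefit_deduction

record = TimeRecord("E1001", regular_hours=160, overtime_hours=8)
print(net_pay(gross_pay(record, hourly_rate=25.0)))  # 3390.0

A production payroll engine layers jurisdiction-specific tax tables, pay periods, and audit trails on top of this basic flow.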
The time and attendance module gathers standardized time and work related efforts. The most advanced modules provide broad flexibility in data collection methods, labor distribution capabilities and data analysis features. Cost analysis and efficiency metrics are the primary functions.
The benefits administration module provides a system for organizations to administer and track employee participation in benefits programs. These typically encompass insurance, compensation, profit sharing, and retirement.
The HR management module is a component covering many other HR aspects from application to retirement. The system records basic demographic and address data, selection, training and development, capabilities and skills management, compensation planning records and other related activities. Leading edge systems provide the ability to "read" applications and enter relevant data to applicable database fields, notify employers and provide position management and position control. Human resource management function involves the recruitment, placement, evaluation, compensation, and development of the employees of an organization. Initially, businesses used computer-based information systems to:
produce paychecks and payroll reports;
maintain personnel records;
pursue talent management.
Online recruiting has become one of the primary methods employed by HR departments to garner potential candidates for available positions within an organization. Talent management systems, or recruitment modules, offer an integrated hiring solution for HRMS which typically encompass:
analyzing personnel usage within an organization;
identifying potential applicants;
recruiting through company-facing listings;
recruiting through online recruiting sites or publications that market to both recruiters and applicants;
analytics within the hiring process (time to hire, source of hire, turnover);
compliance management to ensure job ads and candidate onboarding follows government regulations.
The significant cost incurred in maintaining an organized recruitment effort, cross-posting within and across general or industry-specific job boards and maintaining a competitive exposure of availabilities has given rise to the development of a dedicated applicant tracking system (ATS) module.
The training module provides a system for organizations to administer and track employee training and development efforts. The system, normally called a "learning management system" (LMS) if a standalone product, allows HR to track education, qualifications, and skills of the employees, as well as outlining what training courses, books, CDs, web-based learning or materials are available to develop which skills. Courses can then be offered in date specific sessions, with delegates and training resources being mapped and managed within the same system. Sophisticated LMSs allow managers to approve training, budgets, and calendars alongside performance management and appraisal metrics.
The employee self-service module allows employees to query HR-related data and perform some HR transactions over the system. Employees may query their attendance records from the system without requesting the information from HR personnel. The module also lets supervisors approve overtime requests from their subordinates through the system without burdening the HR department with the task.
Many organizations have gone beyond the traditional functions and developed human resource management information systems, which support recruitment, selection, hiring, job placement, performance appraisals, employee benefit analysis, health, safety, and security, while others integrate an outsourced applicant tracking system that encompasses a subset of the above.
The analytics module enables organizations to extend the value of an HRMS implementation by extracting HR related data for use with other business intelligence platforms. For example, organizations combine HR metrics with other business data to identify trends and anomalies in headcount in order to better predict the impact of employee turnover on future output.
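As a small, hedged example of such a metric, the following snippet computes an annualized turnover rate from hypothetical monthly separation counts; all figures are invented for illustration.

separations = [3, 2, 4, 1, 2, 3, 2, 2, 1, 3, 2, 2]   # hypothetical monthly separations
average_headcount = 120                               # hypothetical average headcount
annual_turnover = sum(separations) / average_headcount
print(f"Annual turnover rate: {annual_turnover:.1%}")  # Annual turnover rate: 22.5%

Feeding such figures into a business intelligence platform alongside revenue or output data is what lets organizations estimate the impact of turnover described above.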
There are now many types of HRMS or HRIS, some of which are typically local-machine-based software packages; the other main type is an online cloud-based system that can be accessed via a web browser.
The staff training module enables organizations to enter, track, and manage employee and staff training. Each type of activity can be recorded together with additional data. The performance of each employee or staff member is then stored and can be accessed via the analytics module.
The employee reassignment module is a more recent addition to HRMS functionality. It covers transfer, promotion, pay revision, re-designation, deputation, confirmation, pay-mode change, and letter forms.
Employee self-service
Employee self-service (ESS) provides employees access to their personal records and details. ESS features include allowing employees to change their contact details, banking information, and benefits. ESS also allows for administrative tasks such as applying for leave, seeing absence history, reviewing timesheets and tasks, inquiring about available loan programs, requesting overtime payment, viewing compensation history, and submitting reimbursement slips. With the emergence of ESS, employees are able to transact with their Human Resources office remotely.
With ESS features, employees can take more responsibility for their present job, skill development, and career planning. As part of HRIS, feedback is given for skill profiles, training and learning, objective setting, appraisals and reporting/analytics. These systems are especially useful for businesses with remote workers, where employees are highly mobile, have flexible working, or not collocated with their manager.
See also
References
Further reading
Business computing
Business terms
Human resource management
Information systems | Human resource management system | Technology | 1,918 |
5,859,204 | https://en.wikipedia.org/wiki/Vanishing%20cycle | In mathematics, vanishing cycles are studied in singularity theory and other parts of algebraic geometry. They are those homology cycles of a smooth fiber in a family which vanish in the singular fiber.
For example, in a map from a connected complex surface to the complex projective line, a generic fiber is a smooth Riemann surface of some fixed genus g and, generically, there will be isolated points in the target whose preimages are nodal curves. If one considers an isolated critical value and a small loop around it, in each fiber, one can find a smooth loop such that the singular fiber can be obtained by pinching that loop to a point. The loop in the smooth fibers gives an element of the first homology group of a surface, and the monodromy of the critical value is defined to be the monodromy of the first homology of the fibers as the loop is traversed, i.e. an invertible map of the first homology of a (real) surface of genus g.
A classical result is the Picard–Lefschetz formula, detailing how the monodromy round the singular fiber acts on the vanishing cycles, by a shear mapping.
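Concretely, in the surface case described above, if δ denotes the class of the vanishing cycle in the first homology of a smooth fiber and ⟨·, ·⟩ the intersection pairing, the monodromy T around the critical value acts on a class γ by

T(γ) = γ + ⟨γ, δ⟩ δ

(the sign of the twist depends on the orientation conventions chosen). In particular, T fixes δ itself, since ⟨δ, δ⟩ = 0 for the skew-symmetric intersection form on a surface.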
The classical, geometric theory of Solomon Lefschetz was recast in purely algebraic terms, in SGA7. This was for the requirements of its application in the context of l-adic cohomology; and eventual application to the Weil conjectures. There the definition uses derived categories, and looks very different. It involves a functor, the nearby cycle functor, with a definition by means of the higher direct image and pullbacks. The vanishing cycle functor then sits in a distinguished triangle with the nearby cycle functor and a more elementary functor. This formulation has been of continuing influence, in particular in D-module theory.
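In that formulation, for a morphism f from the total space to a disc (or a trait) and a complex of sheaves K on the total space, the nearby cycle complex ψ_f K and the vanishing cycle complex φ_f K fit into a distinguished triangle

i*K → ψ_f K → φ_f K → i*K[1]

where i is the inclusion of the special fiber; conventions for shifts and the direction of the comparison map vary between sources, so this should be read as one common normalization rather than the only one.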
See also
Thom–Sebastiani Theorem
References
Dimca, Alexandru. Singularities and Topology of Hypersurfaces.
Section 3 of Peters, C.A.M. and J.H.M. Steenbrink: Infinitesimal variations of Hodge structure and the generic Torelli problem for projective hypersurfaces, in: Classification of Algebraic Manifolds, K. Ueno ed., Progress in Math. 39, Birkhäuser 1983.
For the étale cohomology version, see the chapter on monodromy in SGA7; see especially Pierre Deligne, Le formalisme des cycles évanescents, SGA7 XIII and XIV.
External links
Vanishing Cycle in the Encyclopedia of Mathematics
Algebraic topology
Topological methods of algebraic geometry | Vanishing cycle | Mathematics | 530 |
69,565,822 | https://en.wikipedia.org/wiki/Peacocking | In sociology, peacocking is a social behavior in which a male uses ostentatious clothing and behavior to attract a female and to stand out from other competing males, with the intention to become more memorable and interesting. Peacocking is very common among men, and it can happen either consciously or subconsciously. Peacocking happens subconsciously especially when a desirable female suddenly comes into sight. Prevalence of peacocking strongly correlates with woman's level of attractiveness.
According to some feminist scholars, men may tend to peacock because of the patriarchal ideas created by society. This hierarchy created between men and women and this idea of men competing for women's attention leads to peacocking.
References
External links
A Gentleman's Guide to Peacocking
Seduction | Peacocking | Biology | 152 |
192,842 | https://en.wikipedia.org/wiki/Rate%E2%80%93distortion%20theory | Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel, so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding an expected distortion D.
Introduction
Rate–distortion theory gives an analytical expression for how much compression can be achieved using lossy compression methods. Many of the existing audio, speech, image, and video compression techniques have transforms, quantization, and bit-rate allocation procedures that capitalize on the general shape of rate–distortion functions.
Rate–distortion theory was created by Claude Shannon in his foundational work on information theory.
In rate–distortion theory, the rate is usually understood as the number of bits per data sample to be stored or transmitted. The notion of distortion is a subject of on-going discussion. In the most simple case (which is actually used in most cases), the distortion is defined as the expected value of the square of the difference between input and output signal (i.e., the mean squared error). However, since we know that most lossy compression techniques operate on data that will be perceived by human consumers (listening to music, watching pictures and video) the distortion measure should preferably be modeled on human perception and perhaps aesthetics: much like the use of probability in lossless compression, distortion measures can ultimately be identified with loss functions as used in Bayesian estimation and decision theory. In audio compression, perceptual models (and therefore perceptual distortion measures) are relatively well developed and routinely used in compression techniques such as MP3 or Vorbis, but are often not easy to include in rate–distortion theory. In image and video compression, the human perception models are less well developed and inclusion is mostly limited to the JPEG and MPEG weighting (quantization, normalization) matrix.
Distortion functions
Distortion functions measure the cost of representing a symbol x by an approximated symbol y. Typical distortion functions are the Hamming distortion and the squared-error distortion.
Hamming distortion

d(x, y) = 0 if x = y, and d(x, y) = 1 if x ≠ y

Squared-error distortion

d(x, y) = (x − y)²
Rate–distortion functions
The functions that relate the rate and distortion are found as the solution of the following minimization problem:

R(D) = inf over { Q(y|x) : E[d(X, Y)] ≤ D } of I(X; Y)

Here Q(y|x), sometimes called a test channel, is the conditional probability density function (PDF) of the communication channel output (compressed signal) Y for a given input (original signal) X, and I(X; Y) is the mutual information between Y and X, defined as

I(X; Y) = H(Y) − H(Y|X)

where H(Y) and H(Y|X) are the entropy of the output signal Y and the conditional entropy of the output signal given the input signal, respectively:

H(Y) = −∫ Q(y) log2 Q(y) dy,  with Q(y) = ∫ Q(y|x) P(x) dx
H(Y|X) = −∫∫ Q(y|x) P(x) log2 Q(y|x) dx dy
The problem can also be formulated as a distortion–rate function, where we find the infimum over achievable distortions for a given rate constraint. The relevant expression is:

D(R) = inf over { Q(y|x) : I(X; Y) ≤ R } of E[d(X, Y)]

The two formulations lead to functions which are inverses of each other.
The mutual information can be understood as a measure of the 'prior' uncertainty the receiver has about the sender's signal (H(Y)), diminished by the uncertainty that is left after receiving information about the sender's signal (H(Y|X)). Of course the decrease in uncertainty is due to the communicated amount of information, which is I(X; Y).
As an example, in case there is no communication at all, then H(Y|X) = H(Y) and I(X; Y) = 0. Alternatively, if the communication channel is perfect and the received signal Y is identical to the signal X at the sender, then H(Y|X) = 0 and I(X; Y) = H(Y).
In the definition of the rate–distortion function, E[d(X, Y)] and D are the distortion between X and Y for a given test channel Q(y|x) and the prescribed maximum distortion, respectively. When we use the mean squared error as distortion measure, we have (for amplitude-continuous signals):

E[d(X, Y)] = ∫∫ (x − y)² Q(y|x) P(x) dx dy

As the above equations show, calculating a rate–distortion function requires the stochastic description of the input X in terms of the PDF P(x), and then aims at finding the conditional PDF Q(y|x) that minimizes the rate for a given distortion D. These definitions can be formulated measure-theoretically to account for discrete and mixed random variables as well.
An analytical solution to this minimization problem is often difficult to obtain except in some instances for which we next offer two of the best known examples. The rate–distortion function of any source is known to obey several fundamental properties, the most important ones being that it is a continuous, monotonically decreasing convex (U) function and thus the shape for the function in the examples is typical (even measured rate–distortion functions in real life tend to have very similar forms).
Although analytical solutions to this problem are scarce, there are upper and lower bounds to these functions, including the famous Shannon lower bound (SLB), which in the case of squared error and memoryless sources states that, for arbitrary sources with finite differential entropy h(X),

R(D) ≥ h(X) − h(D)

where h(D) = (1/2) log2(2πeD) is the differential entropy of a Gaussian random variable with variance D. This lower bound is extensible to sources with memory and other distortion measures. One important feature of the SLB is that it is asymptotically tight in the low-distortion regime for a wide class of sources, and in some occasions it actually coincides with the rate–distortion function. Shannon lower bounds can generally be found if the distortion between any two numbers can be expressed as a function of the difference between the value of these two numbers.
The Blahut–Arimoto algorithm, co-invented by Richard Blahut, is an elegant iterative technique for numerically obtaining rate–distortion functions of arbitrary finite input/output alphabet sources and much work has been done to extend it to more general problem instances.
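The alternating structure of the algorithm is simple enough to sketch. The following minimal Python implementation is illustrative only, not reference code; the parameter beta is the Lagrange multiplier that selects one point on the rate–distortion curve, and all names are chosen for the example.

import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    # p_x: source distribution, shape (nx,); d: distortion matrix, shape (nx, ny)
    # Larger beta emphasizes low distortion; smaller beta emphasizes low rate.
    # (For simplicity, no guard against zero-probability output symbols.)
    nx, ny = d.shape
    q_y = np.full(ny, 1.0 / ny)               # output marginal, initialized uniform
    for _ in range(n_iter):
        w = q_y[None, :] * np.exp(-beta * d)  # unnormalized optimal test channel
        q_y_given_x = w / w.sum(axis=1, keepdims=True)
        q_y = p_x @ q_y_given_x               # marginal induced by the source
    distortion = float(np.sum(p_x[:, None] * q_y_given_x * d))
    rate = float(np.sum(p_x[:, None] * q_y_given_x
                        * np.log2(q_y_given_x / q_y[None, :])))
    return rate, distortion

# Bernoulli(1/2) source with Hamming distortion; sweeping beta traces out R(D)
p = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blahut_arimoto(p, d, beta=4.0))

For this symmetric source the returned points converge onto the closed-form curve R(D) = 1 − H_b(D) given below.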
The computation of the rate-distortion function requires knowledge of the underlying distribution, which is often unavailable in contemporary applications in data-science and machine learning. However, this challenge can be addressed using deep learning-based estimators of the rate-distortion function. These estimators are typically referred to as 'neural estimators', involving the optimization of a parametrized variational form of the rate distortion objective.
When working with stationary sources with memory, it is necessary to modify the definition of the rate–distortion function, and it must be understood in the sense of a limit taken over sequences of increasing lengths:

R(D) = lim as n → ∞ of R_n(D)

where

R_n(D) = (1/n) inf over Q of I(X^n; Y^n | X_0, Y_0)

and the infimum is over all conditional distributions Q(Y^n | X^n, X_0, Y_0) satisfying E[d(X^n, Y^n)] ≤ D, where superscripts denote a complete sequence up to that time and the subscript 0 indicates the initial state.
Memoryless (independent) Gaussian source with squared-error distortion
If we assume that X is a Gaussian random variable with variance σ², and if we assume that successive samples of the signal X are stochastically independent (or equivalently, the source is memoryless, or the signal is uncorrelated), we find the following analytical expression for the rate–distortion function:

R(D) = (1/2) log2(σ²/D) for 0 ≤ D ≤ σ², and R(D) = 0 for D > σ²
Rate–distortion theory tells us that no compression system exists that performs better than this bound: every achievable (rate, distortion) pair lies on or above the rate–distortion curve. The closer a practical compression system operates to the bound, the better it performs. As a general rule, the bound can only be attained by increasing the coding block length parameter. Nevertheless, even at unit blocklengths one can often find good (scalar) quantizers that operate at distances from the rate–distortion function that are practically relevant.
This rate–distortion function holds only for Gaussian memoryless sources. It is known that the Gaussian source is the most "difficult" source to encode: for a given mean square error, it requires the greatest number of bits. The performance of a practical compression system working on, say, images, may well fall short of this lower bound.
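As a quick worked example of the formula above: for a unit-variance source (σ² = 1) and a target distortion D = 1/4,

R(1/4) = (1/2) log2(1 / (1/4)) = 1 bit per sample.

More generally, since R(D/4) = R(D) + 1, each additional bit of rate per sample shrinks the achievable mean squared error by a factor of 4 (about 6 dB), the familiar "6 dB per bit" rule of thumb for quantization.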
Memoryless (independent) Bernoulli source with Hamming distortion
The rate-distortion function of a Bernoulli random variable with success probability p, under Hamming distortion, is given by:

R(D) = H_b(p) − H_b(D) for 0 ≤ D ≤ min(p, 1 − p), and R(D) = 0 for D > min(p, 1 − p)

where H_b denotes the binary entropy function.
Connecting rate-distortion theory to channel capacity
Suppose we want to transmit information about a source to the user with a distortion not exceeding D. Rate–distortion theory tells us that at least R(D) bits/symbol of information from the source must reach the user. We also know from Shannon's channel coding theorem that if the source entropy is H bits/symbol, and the channel capacity is C (where C < H), then H − C bits/symbol will be lost when transmitting this information over the given channel. For the user to have any hope of reconstructing the source with a maximum distortion D, we must impose the requirement that the information lost in transmission does not exceed the maximum tolerable loss of H − R(D) bits/symbol. This means that the channel capacity must be at least as large as R(D).
See also
References
External links
VcDemo Image and Video Compression Learning Tool
Data compression
Information theory | Rate–distortion theory | Mathematics,Technology,Engineering | 1,744 |
61,171,217 | https://en.wikipedia.org/wiki/C65H82N2O18S2 | The molecular formula C65H82N2O18S2 (molar mass: 1243.49 g/mol) may refer to:
Atracurium besilate
Cisatracurium besilate
Molecular formulas | C65H82N2O18S2 | Physics,Chemistry | 70 |
22,287,094 | https://en.wikipedia.org/wiki/AC-262%2C536 | AC-262536 is a drug developed by Acadia Pharmaceuticals which acts as a selective androgen receptor modulator (SARM). Chemically it possesses endo–exo isomerism, with the endo form being the active form. It acts as a partial agonist of the androgen receptor, with a Ki of 5 nM and no significant affinity for any other receptors tested. In animal studies it produced a maximal effect of around 66% of the levator ani muscle weight increase of testosterone, but only around 27% of its maximal effect on prostate gland weight. It is an aniline SARM related to ACP-105 and vosilasarm (RAD140).
References
Abandoned drugs
Naphthalenes
Nitriles
Secondary alcohols
Selective androgen receptor modulators
Tropanes | AC-262,536 | Chemistry | 170 |
1,226,674 | https://en.wikipedia.org/wiki/Adipic%20acid | Adipic acid or hexanedioic acid is the organic compound with the formula (CH2)4(COOH)2. From an industrial perspective, it is the most important dicarboxylic acid: about 2.5 billion kilograms of this white crystalline powder are produced annually, mainly as a precursor for the production of nylon. Adipic acid otherwise rarely occurs in nature, but it is known as manufactured E number food additive E355. Salts and esters of adipic acid are known as adipates.
Preparation and reactivity
Adipic acid is produced by oxidation of a mixture of cyclohexanone and cyclohexanol, which is called KA oil, an abbreviation of ketone-alcohol oil. Nitric acid is the oxidant. The pathway is multistep. Early in the reaction, the cyclohexanol is converted to the ketone, releasing nitrous acid:

    C6H11OH + HNO3 → C6H10O + HNO2 + H2O

The cyclohexanone is then nitrosated, setting the stage for the scission of the C–C bond.
Side products of the method include glutaric and succinic acids. Nitrous oxide is produced in about one to one mole ratio to the adipic acid, as well, via the intermediacy of a nitrolic acid.
Related processes start from cyclohexanol, which is obtained from the hydrogenation of phenol.
Alternative methods of production
Several methods have been developed by carbonylation of butadiene. For example, the hydrocarboxylation proceeds as follows:
CH2=CH−CH=CH2 + 2 CO + 2 H2O → HO2C(CH2)4CO2H
Another method is oxidative cleavage of cyclohexene using hydrogen peroxide. The waste product is water.
Auguste Laurent discovered adipic acid in 1837 by oxidation of various fats with nitric acid via sebacic acid and gave it the current name because of that (ultimately from Latin adeps, adipis – "animal fat"; cf. adipose tissue).
Reactions
Adipic acid is a dibasic acid (it has two acidic groups). The pKa values for its successive deprotonations are 4.41 and 5.41.
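To make the two-step dissociation concrete, the following Python sketch (illustrative, not from any cited source) computes the equilibrium fractions of the three protonation states of a diprotic acid H2A from these pKa values:

    # Speciation of a diprotic acid (H2A, HA-, A2-) from its two pKa values.
    pKa1, pKa2 = 4.41, 5.41
    K1, K2 = 10.0 ** -pKa1, 10.0 ** -pKa2

    def fractions(pH):
        h = 10.0 ** -pH                      # hydronium concentration
        denom = h * h + h * K1 + K1 * K2
        return h * h / denom, h * K1 / denom, K1 * K2 / denom

    # At pH 5, roughly 16% H2A, 61% HA-, 24% A2-:
    print(["%.2f" % f for f in fractions(5.0)])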
With the carboxylate groups separated by four methylene groups, adipic acid is suited for intramolecular condensation reactions. Upon treatment with barium hydroxide at elevated temperatures, it undergoes ketonization to give cyclopentanone.
Uses
About 60% of the 2.5 billion kg of adipic acid produced annually is used as monomer for the production of nylon by a polycondensation reaction with hexamethylene diamine forming nylon 66. Other major applications also involve polymers; it is a monomer for production of polyurethane and its esters are plasticizers, especially in PVC.
In medicine
Adipic acid has been incorporated into controlled-release formulation matrix tablets to obtain pH-independent release for both weakly basic and weakly acidic drugs. It has also been incorporated into the polymeric coating of hydrophilic monolithic systems to modulate the intragel pH, resulting in zero-order release of a hydrophilic drug. The disintegration at intestinal pH of the enteric polymer shellac has been reported to improve when adipic acid was used as a pore-forming agent without affecting release in the acidic media. Other controlled-release formulations have included adipic acid with the intention of obtaining a late-burst release profile.
In foods
Small but significant amounts of adipic acid are used as a food ingredient as a flavorant and gelling aid. It is used in some calcium carbonate antacids to make them tart. As an acidulant in baking powders, it avoids the undesirable hygroscopic properties of tartaric acid. Adipic acid, rare in nature, does occur naturally in beets, but this is not an economical source for commerce compared to industrial synthesis.
Safety
Adipic acid, like most carboxylic acids, is a mild skin irritant. It is mildly toxic, with a median lethal dose of 3600 mg/kg for oral ingestion by rats.
Environmental
The production of adipic acid is linked to emissions of N2O (nitrous oxide), a potent greenhouse gas and cause of stratospheric ozone depletion. At adipic acid producers DuPont and Rhodia (now Invista and Solvay, respectively), processes have been implemented to catalytically convert the nitrous oxide to innocuous products:
2 N2O → 2 N2 + O2
Adipate salts and esters
The anionic (HO2C(CH2)4CO2−) and dianionic (−O2C(CH2)4CO2−) forms of adipic acid are referred to as adipates. An adipate compound is a carboxylate salt or ester of the acid.
Some adipate salts are used as acidity regulators, including:
Sodium adipate (E number E356)
Potassium adipate (E357)
Some adipate esters are used as plasticizers, including:
Bis(2-ethylhexyl) adipate
Dioctyl adipate
Dimethyl adipate
References
Appendix
U.S. FDA citations – GRAS (21 CFR 184.1009), Indirect additive (21 CFR 175.300, 21 CFR 175.320, 21 CFR 176.170, 21 CFR 176.180, 21 CFR 177.1200, 21 CFR 177.1390, 21 CFR 177.1500, 21 CFR 177.1630, 21 CFR 177.1680, 21 CFR 177.2420, 21 CFR 177.2600)
European Union Citations – Decision 1999/217/EC – Flavoring Substance; Directive 95/2/EC, Annex IV – Permitted Food Additive; 2002/72/EC, Annex A – Authorized monomer for Food Contact Plastics
External links
adipic acid on chemicalland
Commodity chemicals
Dicarboxylic acids
Food acidity regulators
Carboxylic acid-based monomers
E-number additives | Adipic acid | Chemistry | 1,290 |
4,708,838 | https://en.wikipedia.org/wiki/Inventory%20turnover | In accounting, the inventory turnover is a measure of the number of times inventory is sold or used in a time period such as a year. It is calculated to see if a business has an excessive inventory in comparison to its sales level. The equation for inventory turnover equals the cost of goods sold divided by the average inventory. Inventory turnover is also known as inventory turns, merchandise turnover, stockturn, stock turns, turns, and stock turnover.
Formulas
The formula for inventory turnover:

    Inventory turnover = Cost of goods sold / Average inventory

or, using sales as the numerator as some compilers of industry data do (see below):

    Inventory turnover = Net sales / Average inventory
The most basic formula for average inventory:

    Average inventory = (Beginning inventory + Ending inventory) / 2

or just:

    Average inventory = Ending inventory
Multiple data points, for example, the average of the monthly averages, will provide a much more representative turn figure.
The average days to sell the inventory is calculated as follows:

    Average days to sell inventory = 365 / Inventory turnover
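A short Python sketch ties these formulas together; all the figures are hypothetical:

    # Hypothetical annual figures for one firm.
    cogs = 600_000.0                 # cost of goods sold
    beginning_inventory = 90_000.0
    ending_inventory = 110_000.0

    average_inventory = (beginning_inventory + ending_inventory) / 2   # 100,000
    inventory_turnover = cogs / average_inventory                      # 6.0 turns/year
    days_to_sell = 365 / inventory_turnover                            # about 61 days

    print(f"turns/year: {inventory_turnover:.1f}, days to sell: {days_to_sell:.1f}")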
Application in Business
A low turnover rate may point to overstocking, obsolescence, or deficiencies in the product line or marketing effort. However, in some instances a low rate may be appropriate, such as where higher inventory levels occur in anticipation of rapidly rising prices or expected market shortages. Another insight provided by the inventory turnover ratio is that if inventory is turning over slowly, then the warehousing cost attributable to each unit will be higher.
Conversely a high turnover rate may indicate inadequate inventory levels, which may lead to a loss in business as the inventory is too low. This often can result in stock shortages.
Some compilers of industry data (e.g., Dun & Bradstreet) use sales as the numerator instead of cost of sales. Cost of sales yields a more realistic turnover ratio, but it is often necessary to use sales for purposes of comparative analysis. Cost of sales is considered to be more realistic because of the difference in which sales and the cost of sales are recorded. Sales are generally recorded at market value, i.e. the value at which the marketplace paid for the good or service provided by the firm. In the event that the firm had an exceptional year and the market paid a premium for the firm's goods and services then the numerator may be an inaccurate measure. However, cost of sales is recorded by the firm at what the firm actually paid for the materials available for sale. Additionally, firms may reduce prices to generate sales in an effort to cycle inventory. In this article, the terms "cost of sales" and "cost of goods sold" are synonymous.
An item whose inventory is sold (turns over) once a year has higher holding cost than one that turns over twice, or three times, or more in that time. Stock turnover also indicates the briskness of the business. The purpose of increasing inventory turns is to reduce inventory for three reasons.
Increasing inventory turns reduces holding cost. The organization spends less money on rent, utilities, insurance, theft and other costs of maintaining a stock of good to be sold.
Reducing holding cost increases net income and profitability as long as the revenue from selling the item remains constant.
Items that turn over more quickly increase responsiveness to changes in customer requirements while allowing the replacement of obsolete items. This is a major concern in fashion industries.
When making comparisons between firms, it is important to take note of the industry, or the comparison will be distorted. Comparing a supermarket with a car dealer is not appropriate: a supermarket sells fast-moving goods such as sweets, chocolates and soft drinks, so its stock turnover will be higher, while a car dealer will have a low turnover because cars are slow-moving items. As such, only intra-industry comparisons are appropriate.
Even within industry, inventory turns can vary across firms for various reasons, such as the amount of product variety, the extent of price discounts offered, and the structure of the supply chain.
Note
Some computer programs measure the stock turns of an item using the actual number sold.
The important issue is that any organization should be consistent in the formula that it uses.
See also
Cost accounting
Inventory
Inventory management software
Throughput accounting
Stock rotation
References
Further reading
Business Mathematics, 10th Edition, Chapter 7, § 4,
Management accounting
Financial ratios
Working capital management
Inventory | Inventory turnover | Mathematics | 821 |
67,352,985 | https://en.wikipedia.org/wiki/Feminist%20design | Feminist design refers to connections between feminist perspectives and design. Feminist design can include feminist perspectives applied to design disciplines like industrial design, graphic design and fashion design, and parallels work like feminist urbanism, feminist HCI and feminist technoscience. Feminist perspectives can touch any aspect of the design project including processes, artifacts and practitioners.
History
There is a long history of feminist activity in design. Early examples include movements for dress reform (mid–19th century) and concepts for utopian feminist cities (late 19th century to the early 20th century). Over time this work has explored topics like beauty, DIY, feminine approaches to architecture, community-based and grassroots projects, among many examples. Some iconic writing includes Cheryl Buckley's essays on design and patriarchy and Judith Rothschild's Design and feminism: Re-visioning spaces, places, and everyday things.
Scope
Some scholars suggest that all designers should be feminists in the sense drawn by Chimamanda Ngozi Adichie, approaching feminism not only through gender but through power.
“Not surprisingly, feminist approaches to design have generally been concerned with the relationship between women and design: how they are affected by it and how their contributions to it are regarded. These two inquiries have been thoroughly investigated in existing literature. Historically, they tended to be based on universal accounts of women, which assumed a cisgender, white, heterosexual, able-bodied woman. Only recently has the work expanded to advance our understanding of the ways in which the impacts of design are felt at the intersections of gender and race, class, and other identities. Most feminist discourse in design seems to imply that the problems raised would not be problems if more designers were women and if their perspectives were valued.”
Feminist insights for design
Isabel Prochner's research explored how feminist perspectives can support positive change in industrial design. She stressed the diversity of feminist perspectives, but also argued that they can help identify systemic social problems and inequities in design and guide socially sustainable and grassroots design solutions. She wrote that feminist perspectives in industrial design often support:
"Emphasizing human life and flourishing over output and growth
Following best practices in labor/ international production /trade
Choosing an empowering workspace
Engaging in non-hierarchical/ interdisciplinary/ collaborative work
Addressing user needs at multiple levels, including support for pleasure/ fun/ happiness
Creating thoughtful products for female users
Creating good jobs through production/ execution/ sale of the design solution"
Related pages
Design justice
FeministDesign.co
Dolores Hayden
Intersectionality
Data feminism
Nina Paim
Futuress
Cyberfeminism
Bibliography
References
Feminism and society
Design | Feminist design | Engineering | 519 |
142,338 | https://en.wikipedia.org/wiki/Network%20mapping | Network mapping is the study of the physical connectivity of networks e.g. the Internet. Network mapping discovers the devices on the network and their connectivity. It is not to be confused with network discovery or network enumeration which discovers devices on the network and their characteristics such as operating system, open ports, listening network services, etc. The field of automated network mapping has taken on greater importance as networks become more dynamic and complex in nature.
Large-scale mapping project
Images of some of the first attempts at a large scale map of the internet were produced by the Internet Mapping Project and appeared in Wired magazine. The maps produced by this project were based on the layer 3 or IP level connectivity of the Internet (see OSI model), but there are different aspects of internet structure that have also been mapped.
More recent efforts to map the internet have been improved by more sophisticated methods, allowing them to make faster and more sensible maps. An example of such an effort is the OPTE project, which is attempting to develop a system capable of mapping the internet in a single day.
The "Map of the Internet Project" maps over 4 billion internet locations as cubes in 3D cyberspace. Users can add URLs as cubes and re-arrange objects on the map.
In early 2011 Canadian based ISP PEER 1 Hosting created their own Map of the Internet that depicts a graph of 19,869 autonomous system nodes connected by 44,344 connections. The sizing and layout of the autonomous systems was calculated based on their eigenvector centrality, which is a measure of how central to the network each autonomous system is.
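Eigenvector centrality of such a graph can be computed with a few lines of power iteration; the adjacency matrix below is a toy five-node topology, not PEER 1's data:

    import numpy as np

    # Toy undirected AS-level graph (hypothetical adjacency matrix).
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0],
                  [1, 1, 0, 1, 1],
                  [0, 1, 1, 0, 1],
                  [0, 0, 1, 1, 0]], dtype=float)

    # Power iteration converges to the dominant eigenvector of A,
    # whose entries are the eigenvector centralities.
    v = np.ones(A.shape[0])
    for _ in range(200):
        v = A @ v
        v /= np.linalg.norm(v)

    print(np.round(v, 3))   # the best-connected node (index 2) scores highest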
Graph theory can be used to better understand maps of the internet and to help choose between the many ways to visualize internet maps. Some projects have attempted to incorporate geographical data into their internet maps (for example, to draw locations of routers and nodes on a map of the world), but others are only concerned with representing the more abstract structures of the internet, such as the allocation, structure, and purpose of IP space.
Enterprise network mapping
Many organizations create network maps of their network system. These maps can be made manually using simple tools such as Microsoft Visio, or the mapping process can be simplified by using tools that integrate auto network discovery with Network mapping, one such example being the Fabric platform. Many of the vendors from the Notable network mappers list enable you to customize the maps and include your own labels, add un-discoverable items and background images. Sophisticated mapping is used to help visualize the network and understand relationships between end devices and the transport layers that provide service. Mostly, network scanners detect the network with all its components and deliver a list which is used for creating charts and maps using network mapping software. Items such as bottlenecks and root cause analysis can be easier to spot using these tools.
There are three main techniques used for network mapping: SNMP based approaches, active probing and route analytics.
The SNMP based approach retrieves data from Router and Switch MIBs in order to build the network map. The active probing approach relies on a series of traceroute-like probe packets in order to build the network map. The route analytics approach relies on information from the routing protocols to build the network map. Each of the three approaches have advantages and disadvantages in the methods that they use.
Internet mapping techniques
There are two prominent techniques used today to create Internet maps. The first works on the data plane of the Internet and is called active probing. It is used to infer Internet topology based on router adjacencies. The second works on the control plane and infers autonomous system connectivity based on BGP data. A BGP speaker sends 19-byte keep-alive messages every 60 seconds to maintain the connection.
Active probing
This technique relies on traceroute-like probing on the IP address space. These probes report back IP forwarding paths to the destination address. By combining these paths one can infer router level topology for a given POP. Active probing is advantageous in that the paths returned by probes constitute the actual forwarding path that data takes through networks. It is also more likely to find peering links between ISPs. However, active probing requires massive amounts of probes to map the entire Internet. It is more likely to infer false topologies due to load balancing routers and routers with multiple IP address aliases. Decreased global support for enhanced probing mechanisms such as source-route probing, ICMP Echo Broadcasting, and IP Address Resolution techniques leaves this type of probing in the realm of network diagnosis.
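A minimal active-probing sketch using the Scapy packet library shows the idea (it requires root privileges; port 33434 is the traditional traceroute base port, and error handling is omitted):

    from scapy.all import IP, UDP, ICMP, sr1

    def probe_path(dst, max_ttl=20):
        """Collect one traceroute-style IP forwarding path toward dst."""
        hops = []
        for ttl in range(1, max_ttl + 1):
            reply = sr1(IP(dst=dst, ttl=ttl) / UDP(dport=33434),
                        timeout=2, verbose=0)
            if reply is None:
                hops.append(None)          # this hop did not answer
                continue
            hops.append(reply.src)
            # ICMP type 3 (destination/port unreachable) means we arrived.
            if reply.haslayer(ICMP) and reply[ICMP].type == 3:
                break
        return hops

    # Merging many per-destination paths yields a router-level topology map.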
AS PATH inference
This technique relies on various BGP collectors who collect routing updates and tables and provide this information publicly. Each BGP entry contains a Path Vector attribute called the AS Path. This path represents an autonomous system forwarding path from a given origin for a given set of prefixes. These paths can be used to infer AS-level connectivity and in turn be used to build AS topology graphs. However, these paths do not necessarily reflect how data is actually forwarded and adjacencies between AS nodes only represent a policy relationship between them. A single AS link can in reality be several router links. It is also much harder to infer peerings between two AS nodes as these peering relationships are only propagated to an ISP's customer networks. Nevertheless, support for this type of mapping is increasing as more and more ISP's offer to peer with public route collectors such as Route-Views and RIPE. New toolsets are emerging such as Cyclops and NetViews that take advantage of a new experimental BGP collector BGPMon. NetViews can not only build topology maps in seconds but visualize topology changes moments after occurring at the actual router. Hence, routing dynamics can be visualized in real time.
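Turning collected AS_PATH attributes into an AS-level adjacency list is straightforward; the sketch below uses made-up paths (64512 is from the private-ASN range):

    # Each AS_PATH lists the autonomous systems a route announcement traversed.
    as_paths = ["3356 1299 2914 64512",
                "3356 2914 64512",
                "1299 3356 13335"]

    edges = set()
    for path in as_paths:
        asns = path.split()
        # Consecutive ASNs in a path are inferred to be adjacent;
        # repeated ASNs (path prepending) are skipped.
        edges.update((a, b) for a, b in zip(asns, asns[1:]) if a != b)

    print(sorted(edges))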
In comparison to the tools built on BGPMon, another tool, netTransformer, is able to discover and generate BGP peering maps either through SNMP polling or by converting MRT dumps to the GraphML file format. netTransformer also allows users to perform network diffs between any two dumps and thus to reason about how the BGP peering has evolved through the years. WhatsUp Gold, an IT monitoring tool, tracks networks, servers, applications, storage devices and virtual devices, and incorporates infrastructure management and application performance management.
See also
Comparison of network diagram software
DIMES
Idea networking
Network topology
Opte Project
Webometrics
Notes
External links
Cheleby Internet Topology Mapping System
Center for Applied Internet Data Analysis
NetViews: Multi-level Realtime Internet Mapping
Cyclops: An AS level Observatory
DIMES Research Project
Internet Mapping Research Project
The Opte Project
Internet architecture
Network mappers
671,956 | https://en.wikipedia.org/wiki/Dirac%20string | In physics, a Dirac string is a one-dimensional curve in space, conceived of by the physicist Paul Dirac, stretching between two hypothetical Dirac monopoles with opposite magnetic charges, or from one magnetic monopole out to infinity. The gauge potential cannot be defined on the Dirac string, but it is defined everywhere else. The Dirac string acts as the solenoid in the Aharonov–Bohm effect, and the requirement that the position of the Dirac string should not be observable implies the Dirac quantization rule: the product of a magnetic charge and an electric charge must always be an integer multiple of 2πħ. Also, a change of position of a Dirac string corresponds to a gauge transformation. This shows that Dirac strings are not gauge invariant, which is consistent with the fact that they are not observable.
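The quantization rule can be made explicit with the Aharonov–Bohm phase argument; the LaTeX lines below sketch it, assuming conventions in which the string's interior carries the monopole's full return flux g (normalizations vary between texts):

    % Phase acquired by a charge q encircling the Dirac string, whose
    % interior carries the monopole's return flux \Phi = g:
    \Delta\varphi = \frac{q}{\hbar}\oint \mathbf{A}\cdot d\boldsymbol{\ell}
                  = \frac{q\,\Phi}{\hbar} = \frac{q\,g}{\hbar}
    % The string is unobservable only if e^{i\Delta\varphi} = 1, i.e.
    \frac{q\,g}{\hbar} = 2\pi n, \qquad n \in \mathbb{Z},
    \quad\text{so}\quad q\,g = 2\pi\hbar\, n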
The Dirac string is the only way to incorporate magnetic monopoles into Maxwell's equations, since the magnetic flux running along the interior of the string maintains their validity. If Maxwell equations are modified to allow magnetic charges at the fundamental level then the magnetic monopoles are no longer Dirac monopoles, and do not require attached Dirac strings.
Details
The quantization forced by the Dirac string can be understood in terms of the cohomology of the fibre bundle representing the gauge fields over the base manifold of space-time. The magnetic charges of a gauge field theory can be understood to be the group generators of the cohomology group for the fiber bundle M. The cohomology arises from the idea of classifying all possible gauge field strengths F = dA, which are manifestly exact forms, modulo all possible gauge transformations, given that the field strength F must be a closed form: dF = 0. Here, A is the vector potential and d represents the gauge-covariant derivative, and F the field strength or curvature form on the fiber bundle. Informally, one might say that the Dirac string carries away the "excess curvature" that would otherwise prevent F from being a closed form, as one has that dF = 0 everywhere except at the location of the monopole.
References
Gauge theories
Magnetic monopoles | Dirac string | Physics,Astronomy | 439 |
55,094,808 | https://en.wikipedia.org/wiki/Eprinomectin | Eprinomectin is an avermectin used as a veterinary topical endectocide. It is a mixture of two chemical compounds, eprinomectin B1a and B1b.
References
Antiparasitic agents
Macrocycles
Veterinary medicine
Spiro compounds
Methoxy compounds
Acetamides | Eprinomectin | Chemistry,Biology | 64 |
7,067,238 | https://en.wikipedia.org/wiki/Danishefsky%27s%20diene | Danishefsky's diene (Kitahara diene) is an organosilicon compound and a diene with the formal name trans-1-methoxy-3-trimethylsilyloxy-buta-1,3-diene named after Samuel J. Danishefsky. Because the diene is very electron-rich it is a very reactive reagent in Diels-Alder reactions. This diene reacts rapidly with electrophilic alkenes, such as maleic anhydride. The methoxy group promotes highly regioselective additions. The diene is known to react with amines, aldehydes, alkenes and alkynes. Reactions with imines and nitro-olefins have been reported.
It was first synthesized by the reaction of trimethylsilyl chloride with 4-methoxy-3-buten-2-one and zinc chloride.
The diene has two features of interest: the substituents promote regiospecific addition to unsymmetrical dienophiles, and the resulting adduct is amenable to further functional group manipulations after the addition reaction. High regioselectivity is obtained with unsymmetrical alkenes, with a preference for a 1,2-relation of the ether group with the electron-deficient alkene carbon. All this is exemplified in the aza Diels–Alder reaction discussed below.
In the cycloaddition product, the silyl ether is a synthon for a carbonyl group through the enol. The methoxy group is susceptible to an elimination reaction enabling the formation of a new alkene group.
Applications in asymmetric synthesis have been reported. Derivatives have been reported.
References
Conjugated dienes
Trimethylsilyl compounds
Reagents for organic chemistry | Danishefsky's diene | Chemistry | 392 |
427,748 | https://en.wikipedia.org/wiki/Bar%20%28diacritic%29 | A bar or stroke is a modification consisting of a line drawn through a grapheme. It may be used as a diacritic to derive new letters from old ones, or simply as an addition to make a grapheme more distinct from others. It can take the form of a vertical bar, slash, or crossbar.
A stroke is sometimes drawn through the numerals 7 (horizontal overbar) and 0 (overstruck foreslash) to make them more distinguishable from the number 1 and the letter O, respectively. (In some typefaces, one or the other, or both, of these characters are designed in these styles; they are not produced by overstrike or by a combining diacritic. The normal way in most of Europe to write the number seven is with a bar.)
In medieval English scribal abbreviations, a stroke or bar was used to indicate abbreviation. For example, £, the pound sign, is a stylised form of the letter L (the letter with a cross bar).
For the specific usages of various letters with bars and strokes, see their individual articles.
Letters with bar
Currency signs with bar
Currency symbols and letters with double bar
See also
Strikethrough
X-bar theory (formal linguistics)
Parallel (operator)
Notes
References
External links
Diacritics Project: All you need to design a font with correct accents
Orthographic diacritics
Diacritics
Diakrytyka
Latin-script diacritics | Bar (diacritic) | Mathematics | 303 |
4,271,629 | https://en.wikipedia.org/wiki/Ky%C5%8Diku%20mama | Kyōiku mama (教育ママ) is a Japanese pejorative term which translates literally as "education mother". The kyōiku mama is a stereotyped figure in modern Japanese society, portrayed as a mother who relentlessly drives her child to study, to the detriment of the child's social and physical development, and emotional well-being.
The kyōiku mama is one of the best-known and least-liked pop-culture figures in contemporary Japan. The kyōiku mama is analogous to American stereotypes such as the stage mother who forces her child to achieve show-business success in Hollywood, the stereotypical Chinese tiger mother who takes an enormous amount of effort to direct much of her maternal influence towards developing their children's educational and intellectual achievement, and the stereotypical Jewish mother's drive for her children to succeed academically and professionally, resulting in a push for perfection and a continual dissatisfaction with anything less or the critical, self-sacrificing mother who coerces her child into medical school or law school.
The stereotype is that a kyōiku mama is feared by her children, blamed by the press for school phobias and youth suicides, and envied and resented by the mothers of children who study less and fare less well on exams.
Factors influencing development of kyōiku mama
In the early 1960s, part-time women's labor began at a few major corporations in Japan and was adopted by other companies within a decade. It became popular among married women in the 1970s and even more so by 1985.
Women's return to the workplace is often explained two-fold: by financial demands to complement the family budget, and by psychological demands to relate themselves to society.
Child-rearing women in the 1960s inspired the media to produce the idiom kyōiku mama, which referred to "the domestic counterpart of sararii-man" (salaryman). This encompassed a major responsibility to "rear children, especially the males, to successfully pass the competitive tests needed to enter high school and college". No such idiom emerged that deemed men "education papas"; it was "mamas" who became a social phenomenon.
The education system
The education system and larger political economy it serves influence why mothers become obsessed with children's education. Social prejudices influence media stereotypes of kyōiku mamas that blame women rather than political conditions. Getting a good, steady job in the future very much depends on getting into a good university, which depends on attaining high scores on the national university exams in a student's last year of high school. Ordinary people, including mothers, feel powerless to change this system.
As a result, there is a clear map pointing students to the right nursery school that leads to the right kindergarten, the best elementary school, junior high school, and high school, all of which may be associated with prestigious universities. To ensure these results, some parents have been known to commit unethical or illegal acts to promote their child's success.
In one case, a restaurant owner paid a $95,000 bribe in an attempt to get his child enrolled in Aoyama Gakuin, a prestigious kindergarten for children who are three or four years old. Because of the kindergarten's affiliation with an elite university, parents are willing to go to extreme lengths to get their children enrolled. Aoyama Gakuin has room for 40 new students a year. Every year, it receives more than 2000 hopeful applicants. The tests the potential students take are known to be extremely difficult.
The issue is compounded by the notion that most important job positions in business and government are held by graduates of the University of Tokyo. In addition, which university a student attends is believed to affect one's choices for a future spouse. Because a child's life appears to be determined by what schools he or she attends, many mothers take extraordinary measures to get children into good schools.
Changing family structures
The older generation of Japanese grew up in larger households than those normally found in Japan today. Back then, ikuji (, "child-raising") included a larger surrounding environment, made up of more relatives and extended family, and more children: siblings and cousins. Children who grew up in that time learned responsibilities through the care of younger siblings. These children relied on themselves in the outside world through much of their childhood lives. In those days, child-raising was more of a private matter, handled only by the child's surrounding family.
In the 1970s, men's wages decreased and women left home earlier to find jobs. These women "considered themselves free" after the child's junior high education. The previous generation did not feel this until after the child had finished high school.
In contemporary Japan, couples are having fewer children and teaching the children self-reliance. This involves consulting child-raising professionals. This new need in professional advice is commonly termed "child-raising neurosis" by professionals. Reliance on professionals has largely created a new generation of young mothers with low self-confidence in their child-raising abilities. Indeed, most Japanese mothers today grew up in smaller families with only one or two children. Their mothers provided them with everything they needed and gave them little to no responsibilities involving their siblings. Thus, that generation of children has grown up to become mothers who have no idea how to raise their children.
In addition, in contemporary Japan there are mothers who completely devote themselves to child-raising. Another subtype, described by Nishioka Rice, is the kosodate mama (), who adds psychosociological elements into child-raising. In addition to providing for her a good education, she develops an emotional and psychological relationship with her children. One way to do this is through "skinship"—being in constant close physical contact with her children. This could, for example, involve carrying her child on her back wherever she goes or bathing with her children every night. Through skinship, ittaikan () is achieved, a "one-ness and balanced, positively valenced dependency" between mother and child.
Societal views
In Japan, a mother who works is commonly seen as selfish in a society where child-raising is linked directly with the physical closeness between mother and child. This emphasis can be a cause of the development of a kyōiku mama who always worries about her children's education success. This produces children that society views as lacking self-reliance, antisocial, and selfish.
When compared to American mothers, Japanese mothers have a stronger belief in effort as opposed to innate ability. Japanese children see their efforts as necessary to fulfill a social obligation to family, peers, and community. Children are forced to focus on their effort, seeing it as the cause of success. According to society, if a child does not succeed, they were not trying hard enough. This is unrelated to the child's grades; children always need to put forth more effort. Mothers pressure children because they are held strongly accountable for their children's actions.
It is very hard to find daycare in some parts of Japan, and it is socially looked down upon if a mother sends her child to one. The mother is seen as insufficient, not having the skills to raise a child on her own, or selfish, giving her child over to a caretaker while she pursues her own separate goals.
The term kyōiku mama became used in other similar contexts. For example, the former Ministry of International Trade and Industry was dubbed kyōiku mama for its approach and initiatives in guiding industrial growth, in a manner similar to the definition of a nanny state.
Media
Housewives are surrounded by popular media that encourages their actions. Daytime television, magazines, products, and services for mothers are largely focused on improving the home and raising the children. Thus, the job of motherhood is taken very seriously by mothers in Japan. A common description of a mother's free time is “three meals and a nap.”
Class distinctions
Kyōiku mamas, preparatory preschools, and heavily academic curricula exist in Japan, yet they are relatively rare and concentrated in urban, wealthy areas. Kyōiku mamas are prominent in the middle classes. Middle-class women train the children, the next generation of the middle class. In a speech at the 1909 Mitsukoshi children's exhibition, First Higher School principal Nitobe Inazō asserted, "The education of a citizenry begins not with the infant but with the education of a country's mothers."
In the post-World War II era in Japan, the mother was the creator of a new child-centered world stamped with middle-class values. The mother was linked with the success of the child's education. A woman was expected to be a "good wife, wise mother" and became the single most important figure in raising the child to become a successful future adult. Mothers needed to put their efforts into raising and teaching their children. Through self-cultivation and rearing of the children, the woman was crucial to a family's ability to claim a place in the so-called middle stratum.
As education credentials became the recognized prerequisite to social advancement in the early 20th century, kyōiku mama actively looked to the education system, especially admission into middle school for boys and higher school for girls, to help improve the family's social position. The competition to pass the entrance examination to middle school and girls' higher school became intense, creating the social phenomenon known as shiken jigoku (): examination hell. While risshin shusse (), or rising in the world, was the clarion call of the mass of the middle class, there was no risshin shusse without a kyōiku mama. For the education mother, making the child into a superior student was a concern that began with the child's entrance into elementary school at age six and extended to all aspects of the child's education.
Working-class mothers are not as intensely active in their children's education as middle-class mothers. An ethnographic study by Shimizu Tokuda (1991) portrayed one middle school that faced persistent academic problems in a working-class neighborhood of Osaka. The study illustrated efforts by teachers to improve the student's academic performance: providing tests, promoting monthly teacher discussions, painting walls to enhance the study environment, and restricting hours spent in extracurricular activities. While students' enrollment in high school slightly improved, academic achievement level remained lower than the national average. This study revealed that students' academic problems were deeply related to their home environments. Most students had parents who were uneducated and not involved in their children's education.
American view
In contrast to Japan's mostly negative images of kyōiku mamas, American leaders who put forth the image of "superhuman Japan" to boost American education performance extolled Japan's education-minded mothers. Both of Ronald Reagan's education secretaries focused attention on Japanese mothers as mirrors to improve American families and schools. Reagan's first Secretary of Education, Terrel Bell (credited for the wording of A Nation at Risk) wrote an enthusiastic foreword to Guy Odom's Mothers, Leadership and Success—a book whose basic point was that only vigorous, aggressive and intelligent Super Moms exemplified by Japanese mothers could reinvigorate America. William J. Bennett, head of the Department of Education in Reagan's second term, praised Japan's "one parent on the scene" who "stays in touch with the teachers, supervises the homework, arranges extra instructional help if needed, and buttresses the child's motivation to do well in school and beyond".
Contemporary kyōiku mamas
Many Japanese mothers dedicate much time to get their children from one entrance exam to another. At the national university entrance exams, held in Tokyo, most mothers travel with their children to the examination hall. They arrive and stay at a nearby hotel, grilling their children on last-minute statistics and making sure that they are not late to the exam.
Some mothers are beginning their children's education at even younger ages. A 30-year-old mother in Japan says, "This is my first baby, and I didn't know how to play with her or help her develop". She sends her 6-month-old daughter to a pre-pre-school in Tokyo. A headmaster at another pre-pre-school claims that the school, for children one year or older, helps to nurture and develop the children's curiosity through "tangerine-peeling or collecting and coloring snow".
Mothers are essentially in heavy competition with other mothers who want their children to get into the elite universities. In some cases, to make it seem like her own child is not studying as much, mothers will let their child use the parents' bedroom to study while the mothers watch television in the living room. Other mothers who pass by the house will see the child's bedroom light off, assuming that the child has shirked his or her studies to watch television. The next morning, the mother will report what happened on the shows to her child, who will go to school and talk about it to his or her classmates, who will also assume that their friend is a slacker, lowering their expectations of their friend and for themselves. However, when examination time rolls around, the "slacker" will be admitted into an elite school while his or her friends will drop behind.
Kyōiku mamas often give their children a big first appearance in the neighborhood through a kōen debyū (), where the mothers "parade their offspring around the neighborhood parks for approval".
Mothers send their children to cram schools (juku), where children may stay until 10 or 11pm. Japan has over 35,000 cram schools for college examinations. In addition to cram schools, children are sent to calligraphy, keyboard, abacus, or kendo classes. As revealed by Marie Thorsten, moral panics about juku and education mamas occurred at the same time, in the 1970s. "As 'second schools', the juku, as consumer services, appealed to mothers’ anxieties about their children, shaping the image of the 'normal' mother as one who sends her children to juku and stays up to date with commercialized trends in examination preparation."
Effects on children
In the 1950s, full-time mothers devoted themselves to a smaller number of children. Parental stress resulted in the commonality of new childhood problems; these include bronchial asthma, stammering, poor appetite, proneness to bone fractures, and school phobia. Children were aware they were their mother's purpose in life. Mothers played the role of their children's school teachers while they were at home.
Sometimes, a child who grows up with a kyōiku mama turns into a tenuki okusan (, "hands-off housewife"). This stereotype describes women who typically have jobs and are not around the children as much, essentially becoming the female version of the stereotypical absent Japanese father, a "leisure-time parent" or "Sunday friend". These mothers are said to not do a lot of homemaking, commonly making large, freezable meals that are easy to reheat in case they are not home or too busy to do the cooking. They do not attempt to represent their families in the community through participation in their children's school PTA and other community functions.
Compared to modern American children, Japanese youths have less drug use, depression, violence, and teenage pregnancy, although these may be caused due to harsher laws and intrinsic social values in the Japanese culture.
Government regulations
The Ministry of Education, Culture, Sports, Science and Technology has admitted that the education system and parental pressure are taking their toll on children. Education reforms that the Ministry of Education has enacted beginning in the 1970s have challenged Japan's egalitarian school system. To decrease academic pressure among students from examination competition, the Ministry of Education cut school hours and increased non-academic activities such as recess and clubs in elementary and junior high schools.
In 2002, the central government reduced school hours again, decreased content, and introduced a new curriculum at all public elementary schools to encourage individual students' learning interests and motivation. The Japanese Ministry of Education published a white paper stating that children do not have opportunities such as "coming into contact with nature, feeling awe and respect for life, and experiencing the importance of hard work learning from difficulties".
Japanese education and related stress
Post-war Japan in the 1950s made it a "national mission" to accelerate its education program. Children of this era had to distinguish themselves from peers at an early age if they hoped to get into a top university. Entrance exams for these children began in kindergarten.
By the mid-1970s, pressure to achieve in children created the need for specialty schools. Seventy percent of students continued their long school day at juku or "cram schools".
In the 1980s, a series of suicides linked to school pressures began. Elementary and middle school students took their lives after failing entrance exams.
During the 1990s, the economic collapse in Japan (after its global economic dominance in the previous decade) led to a loss of motivation by students. The once highly touted academic ratings of Japan in math and science fell behind those of American levels. The stress began to lead to classroom disruption.
In 2001, the National Education Research Institute found that 33 percent of teachers and principals polled said that they had witnessed a complete breakdown of class "over a continuous period" due to defiant children "engaging in arbitrary activity". In 2002, the Japanese Education Ministry — pressured by the need to reform — eliminated 30 percent of its core curriculum. This freed up time for students to learn in groups according to the students' chosen path.
The use of the term mukatsuku, meaning "irritating and troublesome", has been rising in use among students as a description of the feelings they experience of being fed up with teachers, parents, and life.
See also
Education in Japan
Helicopter parent
Hong Kong children
Tiger parenting, a similar parenting style in Mainland China and other parts of East Asia, South Asia and Southeast Asia
Soccer mom
References
1960s neologisms
Academic pressure in East Asian culture
Behavior modification
Education in Japan
Japanese family structure
Japanese values
Maternity in Japan
Pejorative terms for women
Social issues in Japan
Stereotypes of middle class women
Suicide in Japan | Kyōiku mama | Biology | 3,747 |
34,034,552 | https://en.wikipedia.org/wiki/Salad%20spinner | A salad spinner, also known as a salad tosser, is a kitchen tool used to wash and remove excess water from salad greens. It uses centrifugal force to separate the water from the leaves, enabling salad dressing to stick to the leaves without dilution.
Salad spinners are usually made from plastic and include an outer bowl with an inner removable colander or strainer basket. A cover, which fits around the outside bowl, contains a spinning mechanism that when initiated causes the inside strainer to rotate rapidly. The water is driven through the slits in the basket into the outer bowl. There are a number of different mechanisms used to operate the device, including crank handles, push buttons and pull-cords. The salad spinner is generally easy to use, though its large and rigid shape has been criticized by food editor Leanne Kitchen and Herald-Journal reporter Mary Hunt. A salad spinner is often considered bulky and difficult to store.
Although devices used to wash, dry and spin salad have long been in existence, including one from the 19th century, the modern mechanism-operated spinner originated in the early 1970s. In 1974, the Mouli Manufacturing Co. introduced a crank-operated salad spinner to the American market; other companies were not far behind with their own patented variations. The product sold favorably and demand was high, with stores struggling to keep it in stock. Despite the product's popularity, however, it was not entirely without criticism; some were skeptical about the necessity of "another gourmet gadget".
History
Although the invention of the salad spinner is considered to be modern, earlier devices, including one from the 19th century, did exist and performed similar functions. When the salad spinner was introduced to the mass market in the 1970s, a number of other techniques and products were already available and employed for the drying of vegetables and salad. One such device was a wire basket dryer, in essence a collapsible colander, which could be shaken or spun to expel the excess water. This method has been criticized by some for its impracticalities and according to one writer, the process was "akin to standing near a dog that’s shaking himself dry." Another product was a wire lettuce dryer designed for use in the sink. A basket was fixed with suction cups to the bottom of the sink, and pushing a pump caused the basket to spin around a center post, often spraying expelled water on the operator. Paper or fabric towels were also commonly used for drying salad and vegetables after washing, however the method was perceived as time-consuming and costly.
Early patents
In 1971, Jean Mantelet filed a patent for a "Salad Dryer," a hand-operated, centrifugally-driven device along with another called the "Household Drying Machine" which could also be used for salads. Mantelet was a prominent designer of domestic appliances and the founder of the French company Moulinex. He patented another salad dryer device in 1974. Mantelet was particularly proud of his salad dryer design. Of the new product, one user commented that, "it saves shaking my salad basket out of the kitchen window."
Gilberte Fouineteau, another French inventor, has been credited as the creator of the modern salad spinner. He filed a patent for a device in 1973. It too used centrifugal force to dry and drain vegetables and salads. The patent describes its difference being the removable basket and a lack of a central post.
American market
The Mouli Manufacturing Co., a kitchen supply brand, introduced the salad dryer to the American market in 1974. Other companies soon followed with their own, similar versions of the device. The salad spinner proved to be an immediate success; however, it was not without some skepticism. The product received criticism for being "another gourmet gadget" and the latest piece of "kitchen junk" to add to the growing list of new appliances, which included hot dog cookers, cookie shooters and electric potato peelers.
Despite reservations, the salad dryer was selling well in the American market. Over the five-year period since the product's introduction, sales of the dryer had quadrupled, with an estimated 500,000 units said to have been sold in 1978 alone. Cookware stores were unable to keep up with demand and reported being out of stock of the device. The product even appealed to skeptics and was thought to be a time saver compared to the "tedious hand method of patting dry each leaf" that was known for leaving "soggy linen" and towels. A report at the time estimated that the money spent on a month's supply of paper towel reserved for drying lettuce could easily be enough to pay for a new salad dryer.
Design
Salad spinners are usually made from plastic. The design is an outer bowl with an inner, removable colander or strainer basket. A cover, which fits around the outside bowl, contains a spinning mechanism which causes the inside strainer to rotate rapidly. There are a number of different mechanisms used to operate the device, including the use of a crank handle, push button or pull-cord.
Crank or handle This is the original design, and still very popular. It works through a system of gears. The handle or crank is turned in a circular motion, which rotates a gear. This is connected to another gear that turns the basket. Critics describe it as "cumbersome", saying the motion is awkward. The crank style is thought to require more physical effort than the other spinning mechanisms.
Pull cord Operated by pulling a cord. This type is thought to be gentler on the greens than the push button method.
Push or pump button This spinner is operated by pushing down on a pump which initiates the basket spinning. There is a button which, when pressed, stops the centrifuge motion. There is also a lock to secure the pump down for easier storage.
Electrically powered Scaled-up, electrically driven spinners are a more powerful version of the device. They are used in commercial or industrial operations and can spin many pounds of salad at one time.
Usage
Raw leaves in a salad need to be clean as well as dry. Salad which has not been properly washed can be potentially harmful, although generally the risk of illness is low. Listeria, E. coli, and other causes of food poisoning have been known to be present in unwashed salads. Costas Katsigris suggests that an increased rise in foodborne illness outbreaks has encouraged the use of salad washers and spinners.
Drying the leaves for the salad is important as salad dressings and oils do not stick well to wet lettuce or salad leaves. Salad greens left in water for a long period of time will go limp, and fragile salad leaves can be easily damaged and bruised if handled harshly during the washing and drying process.
The greens are placed in the colander section of the spinner and the container is filled with water. The floating salad is spun and left to sit before the water is poured out. This process can be repeated until no visible traces of dirt or sand remain. Once drained, the greens are spun rapidly. Because the perforated basket cannot supply the inward (centripetal) force needed to keep the water on the leaves moving in a circle, the excess moisture escapes through the perforations in the central strainer and collects in the outer bowl.
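For a sense of scale, a back-of-the-envelope Python calculation (all numbers hypothetical) gives the acceleration at the basket wall:

    import math

    rpm, radius = 300, 0.12                  # hand-cranked basket, 12 cm radius
    omega = rpm * 2 * math.pi / 60           # angular speed in rad/s
    a = omega ** 2 * radius                  # required centripetal acceleration
    print(f"{a:.0f} m/s^2, about {a / 9.81:.0f} g")   # roughly 118 m/s^2 (~12 g)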
Salad spinners are considered to be an easy product to use and operate, but have been criticized for being bulky, space-consuming and difficult-to-store.
References
Centrifuges
Food preparation appliances | Salad spinner | Chemistry,Engineering | 1,544 |
22,280,100 | https://en.wikipedia.org/wiki/List%20of%20DVD%20authoring%20software | The following applications can be used to create playable DVDs.
Free software
Free software implementations often lack features such as encryption and region coding due to licensing restrictions, and depending on the demands of the DVD producer, may not be considered suitable for mass-market use.
DeVeDe (Linux)
DVD Flick (Windows only)
DVDStyler (Windows, Mac OS X, and Linux using wxWidgets. Recent versions are bundled with Potentially Unwanted Programs that may accidentally be installed unless care is taken during installation.)
Professional studio software
MAGIX Vegas DVD Architect (previously known as Sony Creative Software's DVD Architect Pro) (discontinued)
Apple DVD Studio Pro (Mac) (discontinued)
Sonic DVDit Pro (formerly DVD Producer) (discontinued)
Adobe Encore (EOL / discontinued)
Sonic DVD Creator (discontinued)
Professional corporate software
MAGIX Vegas DVD Architect (previously known as Sony Creative Software's DVD Architect Pro) (discontinued)
Adobe Encore (Last version is CS6, bundled with Adobe Premiere Pro CS6 / EOL) (discontinued)
Sonic Scenarist SD/BD/UHD
MediaChance DVD-lab (discontinued)
Home
Apple iDVD (Mac) (discontinued)
CyberLink Media Suite
Nero Vision
Pinnacle Studio
Roxio Easy Media Creator
Roxio Toast (for Mac OS)
Sonic MyDVD
TMPGEnc DVD Author
Ulead DVD MovieFactory
Windows DVD Maker (discontinued)
WinDVD Creator
See also
DVD-Video
DVD authoring
DVD ripper
References
List
DVD
DVD
DVD | List of DVD authoring software | Technology | 304 |
36,775,486 | https://en.wikipedia.org/wiki/Constraint%20%28mechanics%29 | In classical mechanics, a constraint on a system is a parameter that the system must obey. For example, a box sliding down a slope must remain on the slope. There are two different types of constraints: holonomic and non-holonomic.
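Two textbook examples make the distinction concrete; the LaTeX sketch below uses the usual coordinates (rod length l for a pendulum; radius r, heading angle φ and roll angle θ for a rolling disk):

    % Holonomic (and scleronomic): a pendulum bob on a rigid rod of
    % length l, a condition on the coordinates alone:
    x^2 + y^2 - l^2 = 0
    % Non-holonomic, in Pfaffian form: a disk of radius r rolling
    % without slipping; the velocity-level conditions
    \dot{x} - r\,\dot{\theta}\cos\phi = 0, \qquad
    \dot{y} - r\,\dot{\theta}\sin\phi = 0
    % cannot be integrated into relations among x, y, \theta, \phi alone.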
Types of constraint
First class constraints and second class constraints
Primary constraints, secondary constraints, tertiary constraints, quaternary constraints
Holonomic constraints, also called integrable constraints, (depending on time and the coordinates but not on the momenta) and Nonholonomic system
Pfaffian constraints
Scleronomic constraints (not depending on time) and rheonomic constraints (depending on time)
Ideal constraints: those for which the work done by the constraint forces under a virtual displacement vanishes.
References
Classical mechanics
2,526,860 | https://en.wikipedia.org/wiki/Isotopes%20of%20protactinium | Protactinium (91Pa) has no stable isotopes. The four naturally occurring isotopes allow a standard atomic weight to be given.
Twenty-nine radioisotopes of protactinium have been characterized, ranging from 211Pa to 239Pa. The most stable isotopes are 231Pa with a half-life of 32,760 years, 233Pa with a half-life of 26.967 days, and 230Pa with a half-life of 17.4 days. All of the remaining radioactive isotopes have half-lives of less than 1.6 days, and the majority of these have half-lives of less than 1.8 seconds. This element also has five meta states: 217mPa (t1/2 = 1.15 milliseconds), 220m1Pa (t1/2 = 308 nanoseconds), 220m2Pa (t1/2 = 69 nanoseconds), 229mPa (t1/2 = 420 nanoseconds), and 234mPa (t1/2 = 1.17 minutes).
The only naturally occurring isotopes are 231Pa, 234Pa and 234mPa. The former occurs as an intermediate decay product of 235U, while the latter two occur as intermediate decay products of 238U. 231Pa makes up nearly all natural protactinium.
The primary decay mode for isotopes of Pa lighter than (and including) the most stable isotope 231Pa is alpha decay, except for 228Pa to 230Pa, which primarily decay by electron capture to isotopes of thorium. The primary mode for the heavier isotopes is beta minus (β−) decay. The primary decay products of 231Pa and isotopes of protactinium lighter than and including 227Pa are isotopes of actinium and the primary decay products for the heavier isotopes of protactinium are isotopes of uranium.
List of isotopes
In the table rows below the columns give, in order: nuclide, historic name (if any), atomic number (Z), neutron number (N), isotopic mass (u), half-life, decay mode, daughter isotope, spin and parity, and natural abundance.
|-id=Protactinium-211
| 211Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 120
|
| 3.8(+4.6−1.4) ms
| α
| 207Ac
| 9/2−#
|
|-id=Protactinium-212
| 212Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 121
| 212.02320(8)
| 8(5) ms[5.1(+61−19) ms]
| α
| 208Ac
| 7+#
|
|-id=Protactinium-213
| 213Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 122
| 213.02111(8)
| 7(3) ms[5.3(+40−16) ms]
| α
| 209Ac
| 9/2−#
|
|-id=Protactinium-214
| 214Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 123
| 214.02092(8)
| 17(3) ms
| α
| 210Ac
|
|
|-id=Protactinium-215
| 215Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 124
| 215.01919(9)
| 14(2) ms
| α
| 211Ac
| 9/2−#
|
|-id=Protactinium-216
| rowspan=2|216Pa
| rowspan=2|
| rowspan=2 style="text-align:right" | 91
| rowspan=2 style="text-align:right" | 125
| rowspan=2|216.01911(8)
| rowspan=2|105(12) ms
| α (80%)
| 212Ac
| rowspan=2|
| rowspan=2|
|-
| β+ (20%)
| 216Th
|-id=Protactinium-217
| 217Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 126
| 217.01832(6)
| 3.48(9) ms
| α
| 213Ac
| 9/2−#
|
|-id=Protactinium-217m
| rowspan=2 style="text-indent:1em" | 217mPa
| rowspan=2|
| rowspan=2 colspan="3" style="text-indent:2em" | 1860(7) keV
| rowspan=2|1.08(3) ms
| α
| 213Ac
| rowspan=2|29/2+#
| rowspan=2|
|-
| IT (rare)
| 217Pa
|-id=Protactinium-218
| 218Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 127
| 218.020042(26)
| 0.113(1) ms
| α
| 214Ac
|
|
|-id=Protactinium-219
| 219Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 128
| 219.01988(6)
| 53(10) ns
| α
| 215Ac
| 9/2−
|
|-id=Protactinium-220
| 220Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 129
| 220.02188(6)
| 780(160) ns
| α
| 216Ac
| 1−#
|
|-id=Protactinium-220m1
| style="text-indent:1em" | 220m1Pa
|
| colspan="3" style="text-indent:2em" | 34(26) keV
| 308(+250-99) ns
| α
| 216Ac
|
|
|-id=Protactinium-220m2
| style="text-indent:1em" | 220m2Pa
|
| colspan="3" style="text-indent:2em" | 297(65) keV
| 69(+330-30) ns
| α
| 216Ac
|
|
|-id=Protactinium-221
| 221Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 130
| 221.02188(6)
| 4.9(8) μs
| α
| 217Ac
| 9/2−
|
|-id=Protactinium-222
| 222Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 131
| 222.02374(8)#
| 3.2(3) ms
| α
| 218Ac
|
|
|-id=Protactinium-223
| rowspan=2|223Pa
| rowspan=2|
| rowspan=2 style="text-align:right" | 91
| rowspan=2 style="text-align:right" | 132
| rowspan=2|223.02396(8)
| rowspan=2|5.1(6) ms
| α
| 219Ac
| rowspan=2|
| rowspan=2|
|-
| β+ (.001%)
| 223Th
|-id=Protactinium-224
| rowspan=2|224Pa
| rowspan=2|
| rowspan=2 style="text-align:right" | 91
| rowspan=2 style="text-align:right" | 133
| rowspan=2|224.025626(17)
| rowspan=2|844(19) ms
| α (99.9%)
| 220Ac
| rowspan=2|5−#
| rowspan=2|
|-
| β+ (.1%)
| 224Th
|-id=Protactinium-225
| 225Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 134
| 225.02613(8)
| 1.7(2) s
| α
| 221Ac
| 5/2−#
|
|-id=Protactinium-226
| rowspan=2|226Pa
| rowspan=2|
| rowspan=2 style="text-align:right" | 91
| rowspan=2 style="text-align:right" | 135
| rowspan=2|226.027948(12)
| rowspan=2|1.8(2) min
| α (74%)
| 222Ac
| rowspan=2|
| rowspan=2|
|-
| β+ (26%)
| 226Th
|-id=Protactinium-227
| rowspan=2|227Pa
| rowspan=2|
| rowspan=2 style="text-align:right" | 91
| rowspan=2 style="text-align:right" | 136
| rowspan=2|227.028805(8)
| rowspan=2|38.3(3) min
| α (85%)
| 223Ac
| rowspan=2|(5/2−)
| rowspan=2|
|-
| EC (15%)
| 227Th
|-id=Protactinium-228
| rowspan=2|228Pa
| rowspan=2|
| rowspan=2 style="text-align:right" | 91
| rowspan=2 style="text-align:right" | 137
| rowspan=2|228.031051(5)
| rowspan=2|22(1) h
| β+ (98.15%)
| 228Th
| rowspan=2|3+
| rowspan=2|
|-
| α (1.85%)
| 224Ac
|-id=Protactinium-229
| rowspan=2|229Pa
| rowspan=2|
| rowspan=2 style="text-align:right" | 91
| rowspan=2 style="text-align:right" | 138
| rowspan=2|229.0320968(30)
| rowspan=2|1.50(5) d
| EC (99.52%)
| 229Th
| rowspan=2|(5/2+)
| rowspan=2|
|-
| α (.48%)
| 225Ac
|-id=Protactinium-229m
| style="text-indent:1em" | 229mPa
|
| colspan="3" style="text-indent:2em" | 11.6(3) keV
| 420(30) ns
|
|
| 3/2−
|
|-
| rowspan=3|230Pa
| rowspan=3|
| rowspan=3 style="text-align:right" | 91
| rowspan=3 style="text-align:right" | 139
| rowspan=3|230.034541(4)
| rowspan=3|17.4(5) d
| β+ (91.6%)
| 230Th
| rowspan=3|(2−)
| rowspan=3|
|-
| β− (8.4%)
| 230U
|-
| α (.00319%)
| 226Ac
|-
| rowspan=4|231Pa
| rowspan=4|Protoactinium
| rowspan=4 style="text-align:right" | 91
| rowspan=4 style="text-align:right" | 140
| rowspan=4|231.0358840(24)
| rowspan=4|3.276(11)×10⁴ y
| α
| 227Ac
| rowspan=4|3/2−
| rowspan=4|1.0000
|-
| CD (1.34×10−9%)
| 207Tl24Ne
|-
| SF (3×10−10%)
| (various)
|-
| CD (10−12%)
| 208Pb23F
|-id=Protactinium-232
| rowspan=2|232Pa
| rowspan=2|
| rowspan=2 style="text-align:right" | 91
| rowspan=2 style="text-align:right" | 141
| rowspan=2|232.038592(8)
| rowspan=2|1.31(2) d
| β−
| 232U
| rowspan=2|(2−)
| rowspan=2|
|-
| EC (.003%)
| 232Th
|-
| 233Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 142
| 233.0402473(23)
| 26.975(13) d
| β−
| 233U
| 3/2−
| Trace
|-
| rowspan=2|234Pa
| rowspan=2|Uranium Z
| rowspan=2 style="text-align:right" | 91
| rowspan=2 style="text-align:right" | 143
| rowspan=2|234.043308(5)
| rowspan=2|6.70(5) h
| β−
| 234U
| rowspan=2|4+
| rowspan=2|Trace
|-
| SF (3×10−10%)
| (various)
|-
| rowspan=3 style="text-indent:1em" | 234mPa
| rowspan=3|Uranium X2Brevium
| rowspan=3 colspan="3" style="text-indent:2em" | 78(3) keV
| rowspan=3|1.17(3) min
| β− (99.83%)
| 234U
| rowspan=3|(0−)
| rowspan=3|Trace
|-
| IT (.16%)
| 234Pa
|-
| SF (10−10%)
| (various)
|-id=Protactinium-235
| 235Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 144
| 235.04544(5)
| 24.44(11) min
| β−
| 235U
| (3/2−)
|
|-id=Protactinium-236
| rowspan=2|236Pa
| rowspan=2|
| rowspan=2 style="text-align:right" | 91
| rowspan=2 style="text-align:right" | 145
| rowspan=2|236.04868(21)
| rowspan=2|9.1(1) min
| β−
| 236U
| rowspan=2|1(−)
| rowspan=2|
|-
| β−, SF (6×10−8%)
| (various)
|-id=Protactinium-237
| 237Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 146
| 237.05115(11)
| 8.7(2) min
| β−
| 237U
| (1/2+)
|
|-id=Protactinium-238
| rowspan=2|238Pa
| rowspan=2|
| rowspan=2 style="text-align:right" | 91
| rowspan=2 style="text-align:right" | 147
| rowspan=2|238.05450(6)
| rowspan=2|2.27(9) min
| β−
| 238U
| rowspan=2|(3−)#
| rowspan=2|
|-
| β−, SF (2.6×10−6%)
| (various)
|-id=Protactinium-239
| 239Pa
|
| style="text-align:right" | 91
| style="text-align:right" | 148
| 239.05726(21)#
| 1.8(5) h
| β−
| 239U
| (3/2)(−#)
|
Protactinium-230
Protactinium-230 has 139 neutrons and a half-life of 17.4 days. Most of the time (92%), it undergoes beta plus decay to 230Th, with a minor (8%) beta-minus decay branch leading to 230U. It also has a very rare (.003%) alpha decay mode leading to 226Ac. It is not found in nature because its half-life is short and it is not found in the decay chains of 235U, 238U, or 232Th. It has a mass of 230.034541 u.
Protactinium-230 is of interest as a progenitor of uranium-230, an isotope that has been considered for use in targeted alpha-particle therapy (TAT). It can be produced through proton or deuteron irradiation of natural thorium.
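Written as nuclear reactions, the production routes just mentioned correspond to channels of the following form; the specific channels shown here are an assumption based on mass and charge balance rather than a statement from the cited work:

```latex
% Assumed production channels for 230Pa from natural thorium (232Th):
^{232}\mathrm{Th}(p,3n)\,^{230}\mathrm{Pa}
\qquad\text{and}\qquad
^{232}\mathrm{Th}(d,4n)\,^{230}\mathrm{Pa}
```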
Protactinium-231
Protactinium-231 is the longest-lived isotope of protactinium, with a half-life of 32,760 years. In nature, it is found in trace amounts as part of the actinium series, which starts with the primordial isotope uranium-235; the equilibrium concentration in uranium ore is 46.55 231Pa per million 235U. In nuclear reactors, it is one of the few long-lived radioactive actinides produced as a byproduct of the projected thorium fuel cycle, as a result of (n,2n) reactions where a fast neutron removes a neutron from 232Th or 232U, and can also be destroyed by neutron capture, though the cross section for this reaction is also low.
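The equilibrium figure of about 46.55 atoms of 231Pa per million 235U can be cross-checked with a simple secular-equilibrium estimate; the sketch below assumes equal activities and a 235U half-life of 7.04×10^8 years:

```latex
% Secular equilibrium: equal activities, \lambda_{U} N_{U} = \lambda_{Pa} N_{Pa}
\frac{N_{^{231}\mathrm{Pa}}}{N_{^{235}\mathrm{U}}}
  = \frac{t_{1/2}(^{231}\mathrm{Pa})}{t_{1/2}(^{235}\mathrm{U})}
  \approx \frac{3.276\times10^{4}\ \mathrm{y}}{7.04\times10^{8}\ \mathrm{y}}
  \approx 4.65\times10^{-5} \approx 46.5\ \text{atoms per million}
```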
binding energy: 1759860 keV
beta decay energy: −382 keV
spin: 3/2−
mode of decay: alpha to 227Ac, also others
possible parent nuclides: beta from 231Th, EC from 231U, alpha from 235Np.
Protactinium-233
Protactinium-233 is also part of the thorium fuel cycle. It is an intermediate beta decay product between thorium-233 (produced from natural thorium-232 by neutron capture) and uranium-233 (the fissile fuel of the thorium cycle). Some thorium-cycle reactor designs try to protect Pa-233 from further neutron capture producing Pa-234 and U-234, which are not useful as fuel.
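A minimal sketch of the breeding chain described here, using standard textbook half-lives rather than values quoted in this article:

```latex
% Breeding chain of the thorium fuel cycle through 233Pa:
^{232}\mathrm{Th} + n \rightarrow {}^{233}\mathrm{Th}
  \xrightarrow{\beta^{-},\ \approx 22\ \mathrm{min}} {}^{233}\mathrm{Pa}
  \xrightarrow{\beta^{-},\ \approx 27\ \mathrm{d}} {}^{233}\mathrm{U}
% Parasitic neutron capture that reactor designs try to avoid:
^{233}\mathrm{Pa} + n \rightarrow {}^{234}\mathrm{Pa} \xrightarrow{\beta^{-}} {}^{234}\mathrm{U}
```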
Protactinium-234
Protactinium-234 is a member of the uranium series with a half-life of 6.70 hours. It was discovered by Otto Hahn in 1921.
Protactinium-234m
Protactinium-234m is a member of the uranium series with a half-life of 1.17 minutes. It was discovered in 1913 by Kazimierz Fajans and Oswald Helmuth Göhring, who named it brevium for its short half-life. About 99.8% of decays of 234Th produce this isomer instead of the ground state (t1/2 = 6.70 hours).
References
Isotope masses from:
Isotopic compositions and standard atomic masses from:
Half-life, spin, and isomer data selected from the following sources.
Protactinium
Protactinium | Isotopes of protactinium | Chemistry | 4,205 |
49,174,103 | https://en.wikipedia.org/wiki/Sarcodon%20rimosus | Sarcodon rimosus, commonly known as the cracked hydnum, is a species of tooth fungus in the family Bankeraceae. Found in the Pacific Northwest region of North America, it was described as new to science in 1964 by mycologist Kenneth A. Harrison, who initially called it Hydnum rimosum. He transferred it to the genus Sarcodon in 1984. Fruit bodies of S. rimosus have convex to somewhat depressed caps that are in diameter. The surface becomes scaly in age, often developing conspicuous cracks and fissures. It is brown with violet tints. The flesh lacks any significant taste or odor. Underneath the cap cuticle, the flesh turns a bluish-green color when tested with a solution of potassium hydroxide. The brownish-pink spines on the cap underside are typically 2.5–7 mm long, extending decurrently on the stipe. Spores are roughly spherical with fine warts on the surface, and measure 5–6.5 by 4.5–5 μm. The hyphae do not have clamp connections.
Sarcodon rimosus is common in the states of Idaho, Oregon, and Washington, where it fruits in groups under pines, or in coniferous forest. Fruiting occurs in late summer and autumn.
References
External links
Fungi described in 1964
Fungi of the United States
rimosus
Fungi without expected TNC conservation status
Fungus species | Sarcodon rimosus | Biology | 293 |
20,143,385 | https://en.wikipedia.org/wiki/J-113%2C397 | J-113,397 is an opioid drug which was the first compound found to be a highly selective antagonist for the nociceptin receptor, also known as the ORL-1 receptor. It is several hundred times selective for the ORL-1 receptor over other opioid receptors, and its effects in animals include preventing the development of tolerance to morphine, the prevention of hyperalgesia induced by intracerebroventricular administration of nociceptin (orphanin FQ), as well as the stimulation of dopamine release in the striatum, which increases the rewarding effects of cocaine, but may have clinical application in the treatment of Parkinson's disease.
Synthesis
Patents for treating arrhythmia:
Condensation between 1-Benzyl-3-methoxycarbonyl-4-piperidone [57611-47-9] (1) and o-phenylenediamine (2) gives CID:16726310 (3). Reaction with Boc anhydride followed by treatment with trifluoroacetic acid gives CID:16726358 (4). Reaction with iodoethane in the presence of base alkylates the urea nitrogen, giving CID:16726359 (5). Reduction of the enamine by treatment with magnesium metal in methanol solvent occurs to give predominantly the trans isomer, CID:16726360 (6). Catalytic removal of the benzyl group gives CID:16726362 (7). Reductive amination with cyclooctanecarbaldehyde [6688-11-5] (8) gives CID:16726364 (9). Lastly, reduction of the ester with lithium aluminium hydride completes the synthesis of J-113397 (10).
See also
JTC-801
LY-2940094
SB-612,111
Trap-101 (unsaturated olefin not reduced).
References
Synthetic opioids
Benzimidazoles
Piperidines
Ureas
Primary alcohols
Nociceptin receptor antagonists | J-113,397 | Chemistry | 449 |
5,765,010 | https://en.wikipedia.org/wiki/Advanced%20silicon%20etching | Advanced Silicon Etching (ASE) is a deep reactive-ion etching (DRIE) technique to etch deep and high aspect ratio structures in silicon. ASE was created by Surface Technology Systems Plc (STS) in 1994 in the UK. STS has continued to develop this process with faster etch rates. STS developed and first implemented the switched process, originally invented by Dr. Larmer at Bosch in Stuttgart. ASE combines the faster etch rates achieved in an isotropic Si etch (usually making use of an SF6 plasma) with a deposition or passivation process (usually utilising a C4F8 plasma condensation process) by alternating the two process steps. This approach achieves the fastest etch rates while maintaining the ability to etch anisotropically, typically vertically, in microelectromechanical systems (MEMS) applications.
The ASE HRM claims to be an improvement on previous generations of ICP design, now incorporating a decoupled plasma source (patent pending). The decoupled source generates high-density plasma which is allowed to diffuse into a separate process chamber. Using a specialized chamber design, the excess ions (which negatively affect process control) are reduced, leaving a uniform distribution of fluorine free-radicals at a higher density than that available from the conventional ICP sources. The higher fluorine free-radical density facilitates increased etch rates, typically over three times the etch rates achieved with the original process.
Notes
References
Further reading
Surface Technology Systems
Semiconductor device fabrication
Microtechnology
Etching (microfabrication) | Advanced silicon etching | Materials_science,Engineering | 342 |
44,059,712 | https://en.wikipedia.org/wiki/Goverlan%20Systems%20Management | Goverlan Reach Systems Management is an on-premises client management software used primarily for tasks such as remote control, active directory management, configuration change management, and reporting within a Windows IT Infrastructure.
History
Goverlan Reach, initially developed in 1996 for internal use at a New York City investment bank, later became a commercial product with the incorporation of Goverlan, Inc. in 1998.
Features
Goverlan Reach provides various functionalities including remote support, IT process automation, software installation, inventory management, and remote control. The software includes tools for displaying system information, mapping printers, and Wake-on-LAN settings.
Remote Control
Goverlan Reach Remote Control (RC) is a remote desktop support software option for IT specialists. Goverlan allows for remote control and desktop sharing. With Goverlan, administrators can remote shadow multiple client sessions in a single pane and multiple administrators can participate in a single remote control session. In addition, an administrator can capture screenshots or video recordings during a remote session.
Goverlan Remote Control supports other features such as remote assistance with the ability to connect to computers over the internet, file transfer, viewing multiple sessions in one screen, and controlling the bandwidth used during a remote session. Goverlan supports Citrix XenApp and Microsoft Terminal Services shadowing.
Behind-the-scenes systems management
The Goverlan Administration & Diagnostics tool integrates into an existing Active Directory (AD) organization unit (OU) structure for Windows Systems management. Goverlan can perform remote administration on a single machine, group of machines, or entire domain. Goverlan is compatible with VDI, RDP, and Citrix deployments.
Global IT Process Automation module
The Goverlan IT Process Automation module allows IT administrators to manage various tasks such as software updates, report generation, adding or removing registry keys, or any other action that can be applied to a single computer or a network. Scope Actions allow IT administrators to execute configuration management tasks on client machines, query machines, collect information about machines users are logged into, hardware, software, or processes, and remotely monitor workstations in real time, as opposed to retrieving information from a database. IT administrators may also use Goverlan for patch management to push patches to servers or workstations.
WMIX
WMIX is Goverlan's free WMI explorer, which generates WMI queries using a WQL wizard and exports custom queries to other Windows systems. The WMIX tool makes use of pre-existing Windows Management Instrumentation scripts within an interface. A technician can generate a VBScript by defining parameters and clicking the generate-script button.
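For readers unfamiliar with WQL, the snippet below shows the kind of query such a wizard produces. It is an illustrative sketch only: it uses the third-party Python wmi package rather than Goverlan's tooling, and the class and property names are standard WMI members, not anything Goverlan-specific.

```python
# Illustrative only: running a WQL query of the sort a WMI explorer generates.
# Requires Windows and the third-party "wmi" package (pip install wmi).
import wmi

conn = wmi.WMI()  # connect to the local machine's WMI service

# WQL is an SQL-like language over WMI classes such as Win32_OperatingSystem.
query = "SELECT Caption, Version, LastBootUpTime FROM Win32_OperatingSystem"
for os_info in conn.query(query):
    print(os_info.Caption, os_info.Version, os_info.LastBootUpTime)
```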
Technologies
LDAP – The Lightweight Directory Access Protocol is used by Goverlan for Active Directory integration.
WMI – The Windows Management Instrumentation technology is used by Goverlan to expose agent-free systems management services to Windows systems.
Intel vPro AMT – The Intel Active Management Technology allows the out-of-band management of Intel vPro ready systems regardless of the system's power state.
Security
Goverlan Systems Management Software provides the following security features:
AES 256 bit Encryption (Windows Vista and later) or RSA 128 bit Encryption (Windows XP and earlier).
Microsoft Security Support Provider Interface technology (SSPI) securely authenticates the identity of the person initiating a connection. SSPI is also used to impersonate the identity of this person on the client machine. Using the identity and privileges of the person who initiated the remote control session, the remote control session is either authorized or rejected.
Central or machine level auditing of executed remote control sessions.
Agents communicate through a single, encrypted TCP port.
Limitations
Goverlan's desktop software can only be installed on Windows-based computers (Windows XP and above). Goverlan client agents can only be installed on Windows-based computers (Windows 2000 and above). Goverlan requires the installation of client agents; however, client agents can be installed via a network rather than independently.
See also
Remote support
List of systems management systems
Comparison of remote desktop software
Remote desktop software
Desktop sharing
References
External links
Goverlan, Inc. Official Site
Remote desktop
Mobile device management software
Help desk software
System administration
Computer access control
Windows remote administration software
Systems management | Goverlan Systems Management | Technology,Engineering | 867 |
57,482,552 | https://en.wikipedia.org/wiki/Fuchsian%20theory | The Fuchsian theory of linear differential equations, which is named after Lazarus Immanuel Fuchs, provides a characterization of various types of singularities and the relations among them.
At any ordinary point of a homogeneous linear differential equation of order n there exists a fundamental system of n linearly independent power series solutions. A non-ordinary point is called a singularity. At a singularity, the maximal number of linearly independent power series solutions may be less than the order of the differential equation.
Generalized series solutions
The generalized series at is defined by
which is known as Frobenius series, due to the connection with the Frobenius series method. Frobenius series solutions are formal solutions of differential equations. The formal derivative of , with , is defined such that . Let denote a Frobenius series relative to , then
where denotes the falling factorial notation.
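For reference, a Frobenius series about a point ξ and its formal derivatives take the following standard form; the symbols used here (exponent α, coefficients c_k) are assumed notation and not necessarily those of the original text:

```latex
% Frobenius (generalized power) series about x = \xi, with c_0 \neq 0:
f(x) = \sum_{k=0}^{\infty} c_k\,(x-\xi)^{k+\alpha}
% Formal n-th derivative, written with the falling factorial (k+\alpha)^{\underline{n}}:
f^{(n)}(x) = \sum_{k=0}^{\infty} (k+\alpha)^{\underline{n}}\, c_k\,(x-\xi)^{k+\alpha-n}
```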
Indicial equation
Let be a Frobenius series relative to . Let be a linear differential operator of order with one valued coefficient functions . Let all coefficients be expandable as Laurent series with finite principle part at . Then there exists a smallest such that is a power series for all . Hence, is a Frobenius series of the form , with a certain power series in . The indicial polynomial is defined by which is a polynomial in , i.e., equals the coefficient of with lowest degree in . For each formal Frobenius series solution of , must be a root of the indicial polynomial at , i. e., needs to solve the indicial equation .
If is an ordinary point, the resulting indicial equation is given by . If is a regular singularity, then and if is an irregular singularity, holds. This is illustrated by the later examples. The indicial equation relative to is defined by the indicial equation of , where denotes the differential operator transformed by which is a linear differential operator in , at .
Example: Regular singularity
The differential operator of order , , has a regular singularity at . Consider a Frobenius series solution relative to , with .
This implies that the degree of the indicial polynomial relative to is equal to the order of the differential equation, .
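A concrete operator showing this behaviour is the Cauchy–Euler operator; the choice of operator and the constants a and b are illustrative assumptions:

```latex
% Order-2 Cauchy-Euler operator with a regular singularity at x = 0:
L y = x^{2} y'' + a\,x\,y' + b\,y
% Substituting y = \sum_{k \ge 0} c_k x^{k+\alpha} and collecting the lowest power
% x^{\alpha} yields the indicial equation
\alpha(\alpha - 1) + a\,\alpha + b = 0,
% a polynomial of degree 2, equal to the order of the equation.
```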
Example: Irregular singularity
The differential operator of order , , has an irregular singularity at . Let be a Frobenius series solution relative to .
Certainly, at least one coefficient of the lower derivatives pushes the exponent of down. Inevitably, the coefficient of a lower derivative is of smallest exponent. The degree of the indicial polynomial relative to is less than the order of the differential equation, .
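An assumed concrete example showing the drop in degree at an irregular singularity:

```latex
% Order-2 operator with an irregular singularity at x = 0:
L y = x^{3} y'' + y
% With y = \sum_{k \ge 0} c_k x^{k+\alpha}:
x^{3} y'' = \sum_{k \ge 0} (k+\alpha)(k+\alpha-1)\, c_k\, x^{k+\alpha+1},
\qquad y = \sum_{k \ge 0} c_k\, x^{k+\alpha}
% The lowest power x^{\alpha} comes only from the undifferentiated term, so the
% indicial polynomial is a nonzero constant: degree 0, less than the order 2.
```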
Formal fundamental systems
We have given a homogeneous linear differential equation of order with coefficients that are expandable as Laurent series with finite principle part. The goal is to obtain a fundamental set of formal Frobenius series solutions relative to any point . This can be done by the Frobenius series method, which says: The starting exponents are given by the solutions of the indicial equation and the coefficients describe a polynomial recursion. W.l.o.g., assume .
Fundamental system at ordinary point
If is an ordinary point, a fundamental system is formed by the linearly independent formal Frobenius series solutions , where denotes a formal power series in with , for . Due to the reason that the starting exponents are integers, the Frobenius series are power series.
Fundamental system at regular singularity
If is a regular singularity, one has to pay attention to roots of the indicial polynomial that differ by integers. In this case the recursive calculation of the Frobenius series' coefficients stops for some roots and the Frobenius series method does not give an -dimensional solution space. The following can be shown independent of the distance between roots of the indicial polynomial: Let be a -fold root of the indicial polynomial relative to . Then the part of the fundamental system corresponding to is given by the linearly independent formal solutions
where denotes a formal power series in with , for . One obtains a fundamental set of linearly independent formal solutions, because the indicial polynomial relative to a regular singularity is of degree .
General result
One can show that a linear differential equation of order always has linearly independent solutions of the form
where and , and the formal power series .
is an irregular singularity if and only if there is a solution with . Hence, a differential equation is of Fuchsian type if and only if for all there exists a fundamental system of Frobenius series solutions with at .
References
Differential equations | Fuchsian theory | Mathematics | 923 |
13,343,317 | https://en.wikipedia.org/wiki/Photoinitiator | In chemistry, a photoinitiator is a molecule that creates reactive species (free radicals, cations or anions) when exposed to radiation (UV or visible). Synthetic photoinitiators are key components in photopolymers (for example, photo-curable coatings, adhesives and dental restoratives).
Some small molecules in the atmosphere can also act as photoinitiators by decomposing to give free radicals (in photochemical smog). For instance, nitrogen dioxide (NO2) is produced in large quantities by gasoline-burning internal combustion engines. NO2 in the troposphere gives smog its brown coloration and catalyzes production of toxic ground-level ozone (O3). Molecular oxygen (O2) also serves as a photoinitiator in the stratosphere, breaking down into atomic oxygen and combining with O2 in order to form the ozone in the ozone layer.
Reactions
Photoinitiators can create reactive species by different pathways including photodissociation and electron transfer. As an example of dissociation, hydrogen peroxide can undergo homolytic cleavage, with the O–O bond cleaving to form two hydroxyl radicals.
Certain azo compounds (such as azobisisobutyronitrile), can also photolytically cleave, forming two alkyl radicals and nitrogen gas:
These free radicals can now promote other reactions.
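Written out as simplified reaction equations (AIBN is abbreviated by its two cyanopropyl halves; the notation is illustrative):

```latex
% Homolytic photocleavage of hydrogen peroxide:
\mathrm{H_2O_2} \;\xrightarrow{h\nu}\; 2\,{}^{\bullet}\mathrm{OH}
% Photolysis of azobisisobutyronitrile (AIBN), releasing nitrogen gas:
(\mathrm{CH_3})_2(\mathrm{NC})\mathrm{C}{-}\mathrm{N{=}N}{-}\mathrm{C}(\mathrm{CN})(\mathrm{CH_3})_2
  \;\xrightarrow{h\nu}\; 2\,(\mathrm{CH_3})_2\dot{\mathrm{C}}{-}\mathrm{CN} \;+\; \mathrm{N_2}
```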
Atmospheric photoinitiators
Peroxides
Since molecular oxygen can abstract H atoms from certain radicals, the HOO· radical is easily created. This particular radical can further abstract H atoms, creating H2O2, or hydrogen peroxide; peroxides can further cleave photolytically into two hydroxyl radicals. More commonly, HOO can react with free oxygen atoms to yield a hydroxyl radical (·OH) and oxygen gas. In both cases, the ·OH radicals formed can serve to oxidize organic compounds in the atmosphere.
Nitrogen dioxide
Nitrogen dioxide can also be photolytically cleaved by photons of wavelength less than 400 nm producing atomic oxygen and nitric oxide.
Atomic oxygen is a highly reactive species, and can abstract a H atom from anything, including water.
Nitrogen dioxide can be regenerated through a reaction between certain peroxy-containing radicals and NO.
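The steps described in this subsection can be summarized by the following simplified equations, where M is any third body and RO2• stands for an organic peroxy radical (a schematic, not a complete smog mechanism):

```latex
% Photolysis of nitrogen dioxide (\lambda < 400 nm):
\mathrm{NO_2} \;\xrightarrow{h\nu}\; \mathrm{NO} + \mathrm{O}
% The oxygen atom then forms ground-level ozone:
\mathrm{O} + \mathrm{O_2} + \mathrm{M} \;\rightarrow\; \mathrm{O_3} + \mathrm{M}
% Peroxy radicals regenerate NO2 from NO, closing the cycle:
\mathrm{RO_2^{\bullet}} + \mathrm{NO} \;\rightarrow\; \mathrm{RO^{\bullet}} + \mathrm{NO_2}
```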
Molecular oxygen
In the stratosphere, molecular oxygen is an important photoinitiator that begins the ozone-production process in the ozone layer. Oxygen can be photolyzed into atomic oxygen by light with wavelength less than 240 nm.
Atomic oxygen can then combine with more molecular oxygen to form ozone.
However, ozone can also be photolyzed back into O and O2.
Furthermore, atomic oxygen and ozone can recombine into two molecules of O2.
This set of reactions governs the production of ozone and can be combined to calculate its equilibrium concentration.
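A sketch of that equilibrium calculation, using the simplified Chapman mechanism with photolysis rates j1, j3 and rate constants k2, k4 (the symbols are assumed, and the result is an order-of-magnitude estimate rather than a figure from this article):

```latex
% Chapman cycle: O2 + h\nu -> 2 O (j_1);  O + O2 + M -> O3 + M (k_2);
%                O3 + h\nu -> O2 + O (j_3);  O + O3 -> 2 O2 (k_4)
% Fast O/O3 interconversion:  j_3[\mathrm{O_3}] \approx k_2[\mathrm{O}][\mathrm{O_2}][\mathrm{M}]
% Odd-oxygen balance:         j_1[\mathrm{O_2}] \approx k_4[\mathrm{O}][\mathrm{O_3}]
% Eliminating [O] gives the steady-state ozone ratio:
\frac{[\mathrm{O_3}]}{[\mathrm{O_2}]} \approx \sqrt{\frac{j_1\,k_2\,[\mathrm{M}]}{j_3\,k_4}}
```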
Commercial photoinitiators and uses
AIBN
Azobisisobutyronitrile is a white powder often used as a photoinitiator for vinyl-based polymers such as polyvinyl chloride, also known as PVC. Because this particular photoinitiator produces nitrogen gas upon decomposition, it is often used as a blowing agent to change the shape and/or texture of plastics.
Benzoyl peroxide
Benzoyl peroxide, much like azobisisobutyronitrile, is a white powder used as a photoinitiator in various commercial and industrial processes, including plastics production. Unlike AIBN, however, benzoyl peroxide produces oxygen gas upon decomposing, giving this compound a host of medical uses as well.
Upon contact with the skin, benzoyl peroxide breaks down, producing oxygen gas, among other things. The oxygen gas is absorbed into the pores of the skin, where it kills off the acne-causing bacterium Cutibacterium acnes.
In addition, the free radicals produced can break down dead skin cells. Clearing out these dead cells prevents pore blockage and, by extension, acne breakouts.
2,2-Dimethoxy-2-phenylacetophenone
Camphorquinone
Camphorquinone (CQ) is a photosensitiser used with an amine system that generates primary radicals upon light irradiation. These free radicals then attack the double bonds of resin monomers, resulting in polymerization. The physical properties of the cured resins are affected by the generation of primary radicals during the initial stage of polymerization.
Irgacure 819
Irgacure 819 (BAPO Bis(2,4,6-trimethylbenzoyl)-phenylphosphineoxide) is a Norrish type photoinitiator used in polymerization processes like two-photon Polymerization. When exposed to light it forms four radicals (2, 3, 5) per decomposed molecule (1), making it highly efficient in initiating polymerization. The second set of radicals forms through abstraction or chain transfer, further driving the reaction.
See also
Radical initiator
References
Bibliography
Air pollution
Atmospheric chemistry | Photoinitiator | Chemistry | 1,024 |
2,342,292 | https://en.wikipedia.org/wiki/Pintle | A pintle is a pin or bolt, usually inserted into a gudgeon, which is used as part of a pivot or hinge. Other applications include pintle and lunette ring for towing, and pintle pins securing casters in furniture.
Use
Pintle/gudgeon sets have many applications, for example in sailing, to hold the rudder onto the boat; in transportation, in which a pincer-type device clamps through a lunette ring on the tongue of a trailer; and in controllable solid rocket motors, in which a plug moves into and out of the motor throat to control thrust.
In electrical cubicle manufacture, a pintle hinge is a hinge with fixed and moving parts. The hinge has a pin - the pintle - which can be both external and internal. The most common type consists of three parts, one part on the body of the cubicle, one part on the door, and the third being the pintle.
In transportation, a pintle hitch is a type of tow hitch that uses a tow ring configuration to secure to a hook or a ball combination for the purpose of towing an unpowered vehicle.
As a weapon mount, a pintle mount is used with machine guns as the mounting hardware that mates the machine gun to a vehicle or tripod. Essentially, the pintle is a bracket with a cylindrical bottom and a cradle for the gun on top; the cylindrical bottom fits into a hole in the tripod while the cradle holds the gun.
In furniture, a pintle is usually fitted to a caster; the pintle is then inserted into a base, fixing the caster to that base.
In rocketry, a pintle injector uses a single-feed fuel injector rather than the hundreds of smaller holes used in a typical rocket engine. This simplifies the engine, reducing cost and improving reliability, while surrendering some performance. The design was pioneered by TRW; notable modern uses include SpaceX's Merlin engines.
Pintle is also a common term used in the design of aircraft landing gears. It describes the attachment point between the landing gear structure and the aircraft structure. The pintle is the bolt around which the landing gear rotates when it is extended/retracted into/out of the aircraft. The pintle is a highly stressed component during landing manoeuvres and is often made from exotic metal alloys. For World War II aircraft with sideways-retracting main gear units, carefully set-up "pintle angles" for such axes of rotation during retraction and extension allowed the maingear struts to be raked forward while fully extended for touchdown and better ground handling, while permitting retraction into rearwards-angled landing gear wells in their wings to usually clear the forward wing spar for stowing while in flight.
Gallery
See also
Hinge
Pintle and gudgeon
Pintle hook and lunette ring
Spindle (automobile)
References
Bearings (mechanical)
Fasteners
Hardware (mechanical) | Pintle | Physics,Technology,Engineering | 606 |
6,001,461 | https://en.wikipedia.org/wiki/Causes%20of%20gender%20incongruence | Gender incongruence is the state of having a gender identity that does not correspond to one's sex assigned at birth. This is experienced by people who identify as transgender or transsexual, and often results in gender dysphoria. The causes of gender incongruence have been studied for decades.
Transgender brain studies, especially those on trans women attracted to women (gynephilic), and those on trans men attracted to men (androphilic), are limited, as they include only a small number of tested individuals. Twin studies indicate that genes play a role in gender incongruence, although the precise genes involved are not known or well understood.
Environmental factors, such as prenatal hormone exposure, have also been investigated but are difficult to test.
Genetics
Gender identity is genetically heritable, but no convincing candidate genes are known. Gender incongruence has been associated with certain alleles relevant to steroidogenesis.
In 2013, a twin study combined a survey of pairs of twins where one or both had undergone, or had plans and medical approval to undergo, gender transition, with a literature review of published reports of transgender twins. The study found that one third of identical twin pairs in the sample were both transgender: 13 of 39 (33%) monozygotic or identical pairs of assigned males and 8 of 35 (22.8%) pairs of assigned females. Among dizygotic or genetically non-identical twin pairs, there was only 1 of 38 (2.6%) pairs where both twins were trans. The significant percentage of identical twin pairs in which both twins are trans and the virtual absence of dizygotic twins (raised in the same family at the same time) in which both were trans would provide evidence that transgender identity is significantly influenced by genetics if both sets were raised in different families.
In 2018 a review of family and twin studies found that there was "significant and consistent evidence" for gender identity being genetically heritable.
Prenatal hormonal environment
Sex hormones in the prenatal environment differentiate the male and female brain. One hypothesis proposes that transgender individuals may have been exposed to atypical levels of sex hormones during later stages of fetal development, leading to brain structures atypical of their sex assigned at birth.
In people with XX chromosomes, congenital adrenal hyperplasia (CAH) results in heightened exposure to prenatal androgens, resulting in masculinization of the genitalia. Individuals with CAH are typically subjected to medical interventions including prenatal hormone treatment and postnatal genital reconstructive surgeries. Such treatments are sometimes criticized by intersex rights organizations as non-consensual, invasive, and unnecessary interventions. Individuals with CAH are usually assigned female and tend to develop similar cognitive abilities to the typical females, including spatial ability, verbal ability, language lateralization, handedness and aggression. Research has shown that people with CAH and XX chromosomes will be more likely to experience same-sex attraction, and at least 5.2% of these individuals develop serious gender dysphoria.
In males with 5-alpha-reductase deficiency, conversion of testosterone to dihydrotestosterone is disrupted, decreasing the masculinization of genitalia. Individuals with this condition are typically assigned female and raised as girls due to their feminine appearance at a young age. However, more than half of males with this condition raised as females come to identify as male later in life. Scientists speculate that the definition of masculine characteristics during puberty and the increased social status afforded to men are two possible motivations for a female-to-male transition.
Brain structure
Transgender brain studies, especially those on trans women attracted to women (gynephilic), and those on trans men attracted to men (androphilic), are limited, as they include only a small number of tested individuals.
Several studies have found a correlation between gender identity and brain structure. A first-of-its-kind study by Zhou et al. (1995) found that in the bed nucleus of the stria terminalis (BSTc), a region of the brain known for sex and anxiety responses (and which is affected by prenatal androgens), cadavers of six trans women had female-normal BSTc size, similar to the study's cadavers of cisgender women. While the trans women had undergone hormone therapy, and all but one had undergone sex reassignment surgery, this was accounted for by including cadavers of cisgender men and cisgender women as controls who, for a variety of medical reasons, had experienced hormone reversal. The controls still had sizes typical for their sex, and thus no relationship to post-natal hormone levels (nor to sexual orientation) was found. Other post-mortem studies also found brain differences between cisgender and transgender individuals.
In 2002, a follow-up study by Chung et al. found that significant sexual dimorphism in BSTc did not establish until adulthood. Chung et al. theorized that changes in fetal hormone levels produce changes in BSTc synaptic density, neuronal activity, or neurochemical content which later lead to size and neuron count changes in BSTc, or alternatively, that the size of BSTc is affected by the generation of a gender identity inconsistent with one's assigned sex.
In the textbook Adult Psychopathology and Diagnosis, 7th edition, Lawrence and Zucker suggested that the BSTc may not be a valid biomarker for gender incongruence, as differences in size could be caused by gender-affirming hormone therapy or paraphilias, and might not occur in homosexual transsexuals.
In a review of the evidence in 2006, Gooren considered the earlier research as supporting the concept of gender incongruence as a "sexual differentiation disorder" of the sexually dimorphic brain. Dick Swaab (2004) concurred.
In 2008, Garcia-Falgueras & Swaab discovered that the interstitial nucleus of the anterior hypothalamus (INAH-3), part of the hypothalamic uncinate nucleus, had properties similar to the BSTc with respect to sexual dimorphism and gender incongruence, likewise in line with the trans individuals’ declared genders and likewise regardless of if hormonal transition had occurred or not.
A 2009 MRI study by Luders et al. found that among 24 trans women not treated with hormone therapy, regional gray matter concentrations were more similar to those of cisgender men than of cisgender women, but there was a significantly greater volume of gray matter in the right putamen compared to cisgender men. Like earlier studies, researchers concluded that transgender identity was associated with a distinct cerebral pattern. MRI scanning allows easier study of larger brain structures, but independent nuclei are not visible due to lack of contrast between different neurological tissue types, hence other studies on e.g. BSTc were done by dissecting brains post-mortem.
Rametti et al. (2011) studied 18 trans men who had not undergone hormone therapy using diffusion tensor imaging (DTI), an MRI technique which allows visualizing white matter, the structure of which is sexually dimorphic. Rametti et al. discovered that the trans men's white matter, compared to 19 cisgender gynephilic females, showed higher fractional anisotropy values in the posterior part of the right superior longitudinal fasciculus (SLF), the forceps minor and the corticospinal tract. Compared to 24 cisgender males, they showed only lower FA values in the corticospinal tract. The white matter patterns in trans men were found to be shifted in the direction of cis men.
A 2011 review published in Frontiers in Neuroendocrinology found that "Female INAH3 and BSTc have been found in MtF transsexual persons. The only female-to-male (FtM) transsexual person available to us for study so far had a BSTc and INAH3 with clear male characteristics. (...) These sex reversals were found not to be influenced by circulating hormone levels in adulthood, and seem thus to have arisen during development" and that "All observations that support the neurobiological theory about the origin of transsexuality, i.e. that it is the sizes, the neuron numbers, and the functions and connectivity of brain structures, not the sex of their sexual organs, birth certificates or passports, that match their gender identities".
In 2012 and 2016 studies by Taziaux et al. reported that MtF subjects had infundibular nuclei similar to those of cis women.
A 2015 review reported that two studies found a pattern of white matter microstructure differences away from a transgender person's birth sex, and toward their desired sex. In one of these studies, sexual orientation had no effect on the diffusivity measured.
A 2016 review reported that, for androphilic trans women and gynephilic trans men, hormone treatment may have large effects on the brain, and that cortical thickness, which is generally thicker in cisgender women's brains than in cisgender men's brains, may also be thicker in trans women's brains, but is present in a different location to cisgender women's brains. It also stated that for both trans women and trans men, "cross-sex hormone treatment affects the gross morphology as well as the white matter microstructure of the brain. Changes are to be expected when hormones reach the brain in pharmacological doses. Consequently, one cannot take hormone-treated transsexual brain patterns as evidence of the transsexual brain phenotype because the treatment alters brain morphology and obscures the pre-treatment brain pattern."
A 2019 review in Neuropsychopharmacology found that among transgender individuals meeting diagnostic criteria for gender dysphoria, "cortical thickness, gray matter volume, white matter microstructure, structural connectivity, and corpus callosum shape have been found to be more similar to cisgender control subjects of the same preferred gender compared with those of the same natal sex."
A 2021 review of brain studies published in the Archives of Sexual Behavior found that "although the majority of neuroanatomical, neurophysiological, and neurometabolic features" in transgender people "resemble those of their natal sex rather than those of their experienced gender", for trans women they found feminine and demasculinized traits, and vice versa for trans men. They stated that due to limitations and conflicting results in the studies that had been done, they could not draw general conclusions or identify-specific features that consistently differed between cisgender and transgender people. The review also found differences when comparing cisgender homosexual and heterosexual people, with the same limitations applying.
Androphilic vs. gynephilic trans women
A 2016 review reported that early-onset androphilic transgender women have a brain structure similar to cisgender women's and unlike cisgender men's, but that they have their own brain phenotype. It also reported that gynephilic trans women differ from both cisgender female and male controls in non-dimorphic brain areas.
The available research indicates that the brain structure of androphilic trans women with early-onset gender dysphoria is closer to that of cisgender women than that of cisgender men. It also reports that gynephilic trans women differ from both cisgender female and male controls in non-dimorphic brain areas. Cortical thickness, which is generally thicker in cisgender women's brains than in cisgender men's brains, may also be thicker in trans women's brains, but is present in a different location to cisgender women's brains. For trans men, research indicates that those with early-onset gender dysphoria and who are gynephilic have brains that generally correspond to their assigned sex, but that they have their own phenotype with respect to cortical thickness, subcortical structures, and white matter microstructure, especially in the right hemisphere. Hormone therapy can also affect transgender people's brain structure; estrogen can cause transgender women's brains to become closer to those of cisgender women, and morphological changes observed in the brains of trans men might be due to the anabolic effects of testosterone.
MRI taken on gynephilic trans women have likewise shown differences in the brain from non-trans people, though in ways not directly related to sexual dimorphism.
Gynephilic trans men
Fewer brain structure studies have been performed on transgender men than on transgender women. A 2016 review reported that the brain structure of early-onset gynephilic trans men generally corresponds to their assigned sex, but that they have their own phenotype with respect to cortical thickness, subcortical structures, and white matter microstructure, especially in the right hemisphere. Morphological increments observed in the brains of trans men might be due to the anabolic effects of testosterone.
Onset
According to the DSM-5, gender dysphoria in those assigned male at birth tends to follow one of two broad trajectories: early-onset or late-onset. Early-onset gender dysphoria is behaviorally visible in childhood. Sometimes, gender dysphoria may stop for a while in this group, and they may identify as gay or homosexual for a period of time, followed by recurrence of gender dysphoria. This group is usually androphilic in adulthood. Late-onset gender dysphoria does not include visible signs in early childhood, but some report having had wishes to be the opposite sex in childhood that they did not report to others. Trans women who experience late-onset gender dysphoria are more likely be attracted to women and may identify as lesbians or bisexual. It is common for people assigned male at birth who have late-onset gender dysphoria to experience sexual excitement from cross-dressing. In those assigned female at birth, early-onset gender dysphoria is the most common course. This group is usually sexually attracted to women. Trans men who experience late-onset gender dysphoria will usually be sexually attracted to men and may identify as gay.
Blanchard's typology
In the 1980s and 1990s, sexologist Ray Blanchard developed a taxonomy of male-to-female transsexualism built upon the work of his colleague Kurt Freund, which argues that trans women have one of two primary causes of gender dysphoria. Blanchard theorized that "homosexual transsexuals" (a taxonomic category referring to trans women attracted to men) are attracted to men and develop gender dysphoria typically during childhood, and characterizes them as displaying overt and obvious femininity since childhood; he characterizes "non-homosexual transsexuals" (trans women who are sexually attracted to women) as developing gender dysphoria primarily due to autogynephilia (sexual arousal by the thought or image of themselves as a woman), and as attracted to women, attracted to both women and men (Blanchard calls this "pseudo-bisexuality", believing attraction to males to be not genuine, but part of the performance of an autogynephilic sexual fantasy), or asexual.
Blanchard's theory has received support from J. Michael Bailey, Anne Lawrence, and James Cantor. Blanchard argued that there are significant differences between the two groups, including sexuality, age of transition, ethnicity, IQ, fetishism, and quality of adjustment.
Blanchard's typology has been criticized in papers from Veale, Nuttbrock, Moser, and others who argue that it is poorly representative of trans women and non-instructive, and that the experiments behind it are poorly controlled and/or contradicted by other data. Charles Moser conducted a survey of 29 cisgender women in the healthcare field based on Blanchard's methods for identifying autogynephilia and found that 93% of respondents qualified as autogynephiles based on their own responses. Anne Lawrence criticized the methodology of Moser's survey.
Blanchard proposed that "homosexual transsexuals", but not "autogynephilic transsexuals", would have feminized brain structure, stating: "if there is any neuroanatomic intersexuality, it is in the homosexual group". James Cantor has argued that MRI studies of transgender women offer support for Blanchard's prediction. A 2016 review of transgender brain structure states: "Cantor seems to be right. Nonhomosexual MtFs present differences with heterosexual males in structures that are not sexually dimorphic (Savic & Arver, 2011), while homosexual MtFs (as well as homosexual FtMs) show differences with respect to male and female controls in a series of brain fascicles". The review notes that only one study has compared gynephilic and androphilic transgender women, and that "more independent studies on nonhomosexual MtFs are needed".
See also
References
Transgender studies
Psychological theories
Neuroscience
Behavioral neuroscience
Transsexualism
Behavioural genetics
Gender identity
Transgender health care | Causes of gender incongruence | Biology | 3,608 |
44,804,203 | https://en.wikipedia.org/wiki/Dutch%20cask | Dutch cask is a UK unit of weight for butter and cheese.
Definition
The Dutch cask is defined as 112 pounds avoirdupois (i.e., equivalent to one long hundredweight or eight stone).
Conversion
1 Dutch cask ≡ 32/21 Tub
1 Dutch cask ≡ 112 pounds (avdp.)
1 Dutch cask ≡ 50.80234544 kg
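The kilogram figure follows from the exact definition of the avoirdupois pound (0.45359237 kg):

```latex
1\ \text{Dutch cask} = 112\ \text{lb} \times 0.45359237\ \tfrac{\text{kg}}{\text{lb}}
                     = 50.80234544\ \text{kg} \approx 50.8\ \text{kg}
```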
References
Cooking weights and measures
Units of mass | Dutch cask | Physics,Mathematics | 78 |
43,058,880 | https://en.wikipedia.org/wiki/CWTS%20Leiden%20Ranking | The CWTS Leiden Ranking is an annual global university ranking based exclusively on bibliometric indicators. The rankings are compiled by the Centre for Science and Technology Studies (Dutch: Centrum voor Wetenschap en Technologische Studies, CWTS) at Leiden University in the Netherlands. The Clarivate Analytics bibliographic database Web of Science is used as the source of the publication and citation data.
The Leiden Ranking ranks universities worldwide by number of academic publications according to the volume and citation impact of the publications at those institutions. The rankings take into account differences in language, discipline and institutional size. Multiple ranking lists are released according to various bibliometric normalization and impact indicators, including the number of publications, citations per publication, and field-normalized impact per publication. In addition to citation impact, the Leiden Ranking also ranks universities by scientific collaboration, including collaboration with other institutions and collaboration with an industry partner.
The first edition of the Leiden Ranking was produced in 2007. The 2014 rankings include 750 universities worldwide, which were selected based on the number of articles and reviews published by authors affiliated with those institutions in 2009–2012 in so-called "core" journals, a set of English-language journals with international scope and a "sufficiently large" number of references in the Web of Science database.
According to the Netherlands Centre for Science and Technology Studies, the crown indicator is Indicator 4 (PP top 10%), and is the only one presented in university rankings by the Swiss State Secretariat for Education, Research and Innovation website (UniversityRankings.ch).
Results
As of 2023, Chinese universities dominate the rankings, with 16 out of the top 25 universities ranked being in China, while in previous years the top was heavily dominated by American universities. In the 2014 rankings, Rockefeller University was first by citation impact, as measured by both mean citation score and mean normalized citation score, as well as by the proportion of papers belonging to the top 10% in their field. Notably, the University of Oxford, the University of Cambridge, and other British universities score much lower than in other university rankings, such as the Times Higher Education World University Rankings and QS World University Rankings, which are based in part on reputational surveys among academics.
When measuring by collaboration with other universities (the proportion of number of publications co-authored with other institutions), the top three spots were occupied by National Yang-Ming University and two other institutions from Taiwan in 2014, followed by universities from France, the United Kingdom and a number of other European countries. King Abdulaziz University and King Saud University in Saudi Arabia led the list in 2014 when measured by international collaboration.
Indicators
The Leiden Ranking ranks universities by the following indicators:
Citation impact
MCS – mean citation score. The average number of citations of the publications of a university.
MNCS – mean normalized citation score. The average number of citations of the publications of a university, normalized for field differences and publication year. For example, an MNCS value of 2 means that the publications of a university have been cited at twice the world average (see the formula sketch below).
PP(top 10%) – proportion of top 10% publications. The proportion of the publications of a university that belong to the top 10% most frequently cited, compared with other publications in the same field and in the same year.
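The MNCS indicator referred to above is usually written as an average of field- and year-normalized citation counts; the formula below follows the common CWTS formulation and is given as an illustration rather than quoted from the ranking's documentation:

```latex
% For a university with publications i = 1..n, where c_i is the number of citations
% of publication i and e_i the average citations of all publications in the same
% field and publication year:
\mathrm{MNCS} = \frac{1}{n}\sum_{i=1}^{n}\frac{c_i}{e_i}
% An MNCS of 2 therefore means the publications are cited at twice the world average.
```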
Scientific collaboration
PP(collab) – proportion of interinstitutionally collaborative publications. The proportion of the publications of a university that have been co-authored with one or more other organizations.
PP(int collab) – proportion of internationally collaborative publications. The proportion of the publications of a university that have been co-authored by two or more countries.
PP(UI collab) – proportion of collaborative publications with industry. The proportion of the publications of a university that have been co-authored with one or more industrial partners.
PP(<100 km) – proportion of short-distance collaborative publications. The proportion of the publications of a university with a geographical collaboration distance of less than 100 km.
PP(>1000 km) – proportion of long-distance collaborative publications. The proportion of the publications of a university with a geographical collaboration distance of more than 1000 km.
Criticism
In a 2010 article, Loet Leydesdorff criticized the method used by the Leiden Ranking to normalize citation impact by subject field. The mean normalized citation score (MNCS) indicator is based on the ISI subject category classification used in Web of Science, which was "not designed for the scientometric evaluation, but for the purpose of information retrieval". Also, normalizing at a higher aggregation level, rather than at the level of individual publications, gives more weight to older publications, particularly reviews, and to publications in fields where citation levels are traditionally higher.
References
External links
Website of the CWTS Leiden Ranking
University and college rankings
Bibliometrics
Leiden University | CWTS Leiden Ranking | Mathematics,Technology | 991 |
46,591,506 | https://en.wikipedia.org/wiki/Helium%20trimer | The helium trimer (or trihelium) is a weakly bound molecule consisting of three helium atoms. Van der Waals forces link the atoms together. The combination of three atoms is much more stable than the two-atom helium dimer. The three-atom combination of helium-4 atoms is an Efimov state. Helium-3 is predicted to form a trimer, although ground state dimers containing helium-3 are completely unstable.
Helium trimer molecules have been produced by expanding cold helium gas from a nozzle into a vacuum chamber. Such a setup also produces the helium dimer and other helium atom clusters. The existence of the molecule was proven by matter wave diffraction through a diffraction grating. Properties of the molecules can be discovered by Coulomb explosion imaging. In this process, a laser ionizes all three atoms simultaneously, which then fly away from each other due to electrostatic repulsion and are detected.
The helium trimer is large, being more than 100 Å, which is even larger than the helium dimer. The atoms are not arranged in an equilateral triangle, but instead form random shaped triangles.
Interatomic Coulombic decay can occur when one atom is ionised and excited. It can transfer energy to another atom in the trimer, even though they are separated. However this is much more likely to occur when the atoms are close together, and so the interatomic distances measured by this vary with half full height from 3.3 to 12 Å. The predicted mean distance for Interatomic Coulombic decay in 4He3 is 10.4 Å. For 3He4He2 this distance is even larger at 20.5 Å.
References
Extra reading
Homonuclear triatomic molecules
Helium compounds
Van der Waals molecules
Allotropes | Helium trimer | Physics,Chemistry | 375 |
1,144,211 | https://en.wikipedia.org/wiki/Perrier | Perrier is a French brand of natural bottled mineral water obtained at its source in Vergèze, located in the Gard département. Perrier was part of the Perrier Vittel Group SA, which became Nestlé Waters France after the acquisition of the company by Nestlé in 1992. Perrier is known for its carbonation and its distinctive green bottle.
Overview
The spring from which Perrier water is sourced is naturally carbonated, but the water and natural carbon dioxide gas are obtained independently. The water is then purified, and during bottling, the carbon dioxide gas is re-added so that the level of carbonation in bottled Perrier matches that of the Vergèze spring.
In 1990, Perrier removed the "naturally sparkling" claim from its bottles under pressure from the United States Food and Drug Administration (FDA).
Since at least 2019, Perrier water has no longer been "reinforced with gas from the source" but is instead carbonated "with the addition of carbon dioxide". According to the company, this change considerably reduces its total water consumption and its ecological impact.
History
The spring in Southern France from which Perrier is drawn was originally known as Les Bouillens (The Bubbles). It had been used as a spa since Roman times. During 218 BC, Hannibal and his army, having passed through Spain en route to his intended conquest of Rome, decided to rest for a while at Les Bouillens, from which the men took water for refreshment.
Perrier was first introduced to Britain in 1863. Local doctor Louis Perrier bought the spring in 1898 and operated a commercial spa there; he also bottled the water for sale. He later sold the spring to St John Harmsworth, a wealthy British visitor. Harmsworth was the younger brother of the newspaper magnates Lord Northcliffe and Lord Rothermere. He had come to France to learn the language. Dr. Perrier showed him the spring, and he decided to buy it, selling his share of the family newspapers to raise the money. Harmsworth closed the spa, as spas were becoming unfashionable. He renamed the spring Source Perrier and started bottling the water in distinctive green bottles, shaped like the Indian clubs which Harmsworth used for exercise.
Harmsworth marketed the product in Britain at a time when Frenchness was seen as chic and aspirational to the middle classes. It was advertised as the Champagne of mineral water. Advertising in newspapers like the Daily Mail established the brand. For a time, 95% of sales were in Britain and the US.
Perrier's reputation for purity suffered a blow in 1990 when a laboratory in North Carolina in the United States found benzene, a carcinogen, in several bottles. Perrier stated that it was an isolated incident of a worker having made a mistake in filtering and that the spring itself was unpolluted. The incident ultimately led to the worldwide withdrawal of the product, some 160 million bottles of Perrier.
Two years later in 1992, Perrier was bought by Nestlé, one of the world's leading food and drink companies. Nestlé had to contend with competition from the Agnelli family for ownership of the business.
In 2004, a crisis erupted when Nestlé announced a restructuring plan for Perrier. The following year, Perrier was ordered to halt restructuring due to a failure to consult adequately with staff.
In April 2024, following reports that products had been contaminated with germs of possible faecal origin, an estimated 2.9 million bottles of Perrier water were destroyed before reaching the market. In June that year, the company announced that one-litre bottles of Perrier Vert would be pulled from the French market after a majority of the wells used to capture water at the Vergèze manufacturing site had their use terminated, suspended, or diverted to other product lines. The decision followed a product safety inspection of the site conducted by government agencies on 30 May.
Bottling
Perrier is available in 750 ml, 330 ml, and 200 ml glass bottles in Europe, as well as in 330 ml cans. In other markets, the 250 ml can is also available. Perrier bottles all have a distinctive 'teardrop' shape and are a signature green colour. In August 2001, the company introduced a new bottling format using polyethylene terephthalate to offer Perrier in plastic, a change that was researched for 11 years to determine which material would best help retain both the water's flavour and its purported "50 million bubbles."
In 2013, Perrier celebrated its 150th anniversary by launching a limited edition series of bottles inspired by Andy Warhol.
In 2019, Perrier released Perrier ARTXTRA limited edition packaging featuring artwork of artist duo Dabsmyla to help support the contemporary artist community.
Varieties
Perrier comes in several flavours: Natural, Lemon, and Lime have been on the market for many years, and in 2007, Citron Lemon-Lime and Pamplemousse Rose (Pink Grapefruit) flavours debuted in the United States. In 2015, a Green Apple flavour was launched in France as well as the US. In 2016, a Mint flavour (Saveur Menthe) was introduced in France.
Since 2002, new varieties of Perrier have been introduced in France, for example, Eau de Perrier is less carbonated than the original, and comes in a blue bottle. Perrier Fluo comes in flavours such as ginger-cherry, peppermint, orange-lychee, raspberry, and ginger-lemon.
In 2017, Perrier introduced two new flavours, Perrier Strawberry and Perrier Watermelon, to their existing Lime, L’Orange, Pink Grapefruit, and Green Apple flavours.
Distribution
As of January 2013, Perrier was available in 140 countries, and almost 1 billion bottles were sold every year.
The Perrier Awards
From 1981 to 2005, the company sponsored an annual comedy award in the United Kingdom, the Perrier Comedy Award, also known as "The Perriers". It was described as a means of supporting young comedic talent at the Edinburgh Festival Fringe, an arts festival touted as "the world's largest". Initially given for comedy revues, the award included a stand-up comedian category by 1987. Sponsorship was taken over by various other advertisers starting in 2006, with commensurate renaming, and the prize eventually came to be called the Edinburgh Comedy Awards.
The Perrier Young Jazz Awards were set up by Perrier in 1998, though they never attained the success and recognition of their longer-running comedy equivalent. The awards ran for four years, releasing an album showcasing the winners each year, before being discontinued; the last awards ceremony was held in 2001.
See also
Apollinaris (water)
Badoit
Evian
Farris
Gerolsteiner Brunnen
Acqua Panna
Ramlösa
Spa
Clearly Canadian
Notes
References
Further reading
External links
Nestlé brands
Awards established in 1998
Mineral water
Bottled water brands
Carbonated water
French drinks
French brands
Soft drinks
Jazz awards
Awards disestablished in 2001
1898 establishments in France
Youth music competitions | Perrier | Chemistry | 1,461 |
5,179,474 | https://en.wikipedia.org/wiki/Double-exchange%20mechanism | The double-exchange mechanism is a type of magnetic exchange that may arise between ions in different oxidation states. First proposed by Clarence Zener, this theory predicts the relative ease with which an electron may be exchanged between two species and has important implications for whether materials are ferromagnetic, antiferromagnetic, or exhibit spiral magnetism. For example, consider the 180-degree Mn–O–Mn interaction, in which the Mn e_g orbitals interact directly with the O 2p orbitals, and one of the Mn ions has more electrons than the other. In the ground state, the electrons on each Mn ion are aligned according to Hund's rule.
If O gives up its spin-up electron to Mn4+, its vacant orbital can then be filled by an electron from Mn3+. At the end of the process, an electron has moved between the neighboring metal ions, retaining its spin. The double-exchange predicts that this electron movement from one species to another will be facilitated more easily if the electrons do not have to change spin direction in order to conform with Hund's rules when on the accepting species. The ability to hop (to delocalize) reduces the kinetic energy. Hence the overall energy saving can lead to ferromagnetic alignment of neighboring ions.
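The spin dependence of the hopping is captured by the standard Anderson–Hasegawa result (a simplification that treats the core spins classically): for neighboring core spins canted by an angle θ, the effective transfer integral of the itinerant electron is

t_eff = t · cos(θ/2),

which is maximal for parallel spins (θ = 0) and vanishes for antiparallel spins (θ = π). Since the kinetic-energy gain from delocalization grows with t_eff, parallel (ferromagnetic) alignment of the core spins is favored.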
This model is superficially similar to superexchange. However, in superexchange, a ferromagnetic or antiferromagnetic alignment occurs between two atoms with the same valence (number of electrons), while in double exchange the interaction occurs only when one atom has an extra electron compared to the other.
References
External links
Exchange Mechanisms in E. Pavarini, E. Koch, F. Anders, and M. Jarrell: Correlated Electrons: From Models to Materials, Jülich 2012,
Quantum chemistry
Magnetic exchange interactions
61,351,273 | https://en.wikipedia.org/wiki/Monofora | A monofora is a type of single-light window, usually narrow, crowned by an arch, and decorated with small columns or pilasters.
Overview
The term usually refers to a type of window designed during the Romanesque, Gothic, and Renaissance periods, and again during nineteenth-century architectural Eclecticism. More generally, the term may denote any arched window with a single opening.
Gallery
See also
Lancet window
Bifora
Trifora
Quadrifora
Polifora
References
Architectural elements
Windows | Monofora | Technology,Engineering | 104 |
74,641,090 | https://en.wikipedia.org/wiki/Hisham%20Khatib | Dr. Hisham Khatib (Arabic: "هشام الخطيب") (5 January 1936 – 31 May 2022) was a Jordanian politician and civil servant. He modernized and expanded Jordanian and Palestinian electrical power capabilities during his service in the sector. He was the first Minister of Energy and Mineral Resources in Jordan and chairman of the Energy and Minerals Commission. During his later years, he served as a member of the Jordanian Senate.
Being of Palestinian origin and having spent much of his youth in Jerusalem, he developed a passion for the art and history of Palestine and the Holy Land, particularly of the 18th and 19th centuries. He spent considerable time collecting art, books, and artifacts relating to that era, and wrote many books on the topic, several of them documenting his own collection.
Early life
Dr. Khatib was born in January 1936 in Acre, at the house of his maternal grandfather, Sheikh Musa Al Tabari. His father, Sheikh Mohammad Hashem Khatib, was a judge who received appointments all over the country, and the family moved with him: to Hebron for a year in 1941, to Tulkarem in 1943, and in 1945 to Nablus, where his father served as Qadi (judge) and where they lived until 1949. In 1949 the family moved to Jerusalem, where his father was appointed to the Moslem Sharia Appeal Court of Jerusalem. In Jerusalem, Dr. Khatib attended El Rashadyia School, headed by Tawfiq Abu Saud, and then completed his final year of schooling in Egypt in 1953 at El Nahrareh School.
Education
After graduating from school, Dr. Khatib enrolled at the Engineering School of Ain Shams University, Egypt, in 1954 where he studied Electrical Engineering, finally receiving his BSc in 1959, the first of many degrees. In 1960 he won a scholarship to spend a two-year post-graduate apprenticeship in the UK. The apprenticeship did not live up to his ambitions, and he seized the opportunity to leave it and join a twelve-month M.Sc. course on electrical machines at the University of Birmingham. He completed his MSc in 1962 and returned to Jerusalem. After some time in the industry, he enrolled with Queen Mary University of London, UK to attain his two final degrees, a B.Sc. in Economics, and a Ph.D. in Electrical Engineering, for which he had to take a sabbatical leave from work. He finally received his Ph.D. in 1974.
Early career
Upon returning to Jerusalem in 1959, after receiving his first B.Sc., he accepted an engineering position at the Jordan Jerusalem District Electricity Company (JJDEC). After completing his M.Sc. course, and continuing in his previous position for 4 years, he was appointed as Chief Engineer at JJDEC in 1966.
Later career
In 1974, after receiving his Ph.D., he moved to Jordan, where he worked as Deputy Director-General of the Jordan Electricity Authority. In 1976 he joined the Arab Fund for Economic and Social Development, based in Kuwait, as an energy expert. He returned to Jordan in 1980 as Director-General of the Jordan Electricity Authority, and in 1982 was also appointed chairman of the board of the Jordan South Cement Company.
In 1984, Dr. Khatib was appointed as the first Minister of Energy and Mineral Resources in Jordan where he served until 1989. He then served as the Minister of Water and Irrigation from 1993 until 1994. Then, he served as Minister of Planning and International Cooperation from 1994 to 1995. In 2005 he was appointed as the first Chairman of the Energy and Minerals Commission where he served until 2009.
He was appointed to the 27th Jordanian Senate, where he served from September 2016 to September 2020. He chaired the Energy and Mineral Resources Committee and was a member of the Finance and Economics Committee and of the Palestine Committee.
He was appointed as chairman of the board of trustees for the Al-Balqaʼ Applied University in 2019 until he finally retired in 2021.
In between careers, he had a private consulting practice where he was contracted by numerous international agencies such as the United Nations Development Programme, the World Bank, the Arab Fund for Economic and Social Development, and many others.
International Memberships and Affiliations
Dr. Khatib was very active internationally and known for his diversified expertise in engineering, economics, and art. He was a member of the following committees and organizations:
Fellow of the Institute of Engineering and Technology (IET) in the UK
Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in the USA
International Association for Energy Economics (IAEE) in the USA
Palestine Exploration Fund (PEF)
Association for the Study of Travellers to Egypt and the Near East (ASTENE)
International Council on Large Electric Systems (CIGRE)
Vice Chairman and then Honorary Vice Chairman of the World Energy Council (WEC)
World Federation of Scientific Workers
Darat Al Funun Honorary Board
Family
Hisham was the eldest of his siblings; his sisters were Aida, Maha, and Ghada Khatib. In 1970 he married Maha Daher Al-Khatib.
He and Maha had three children: Mohammad (born 1972), Lynn (born 1975), and Isam (born 1979).
Mohammad married Ruwaida Share in 2004; they have three sons, Hisham, Zaid, and Kareem.
Isam married Dima Bilbaisi in 2006; they have three children, Jeeda, Kinan, and Naya.
Art collection
Dr. Khatib was a patron of the arts and heritage, supporting many local and international organizations, including Darat al Funun, the Tiraz Centre, and the Palestine Exploration Fund. Over 60 years, he assembled an extensive collection of art from across the Middle East and North Africa, and especially the Holy Land (Jerusalem), comprising books, manuscripts, maps, photographs, paintings, Qurans, and other artifacts. Through this collection he helped preserve Arab and Palestinian cultural heritage and knowledge for coming generations, publishing seven books on it, including:
“The Holy Land: Palestine and Egypt Under the Ottomans” I.B. Tauris, UK, 2003
“Palestine and Jordan 1500 – 1900” Darat al Funun, The Khalid Shoman Foundation, Jordan, 2005
“Jerusalem, Palestine, and Jordan” Gilgamesh Publishing, UK, 2013
“Panoramic Jerusalem” Pro-Jerusalem Society, 2015
“Wild Flowers of Palestine and Jordan” Private Edition, 2016
“Valuable Printed Books and Manuscripts in the Khatib Collection” Private Edition, 2019
“A Voyage to Jerusalem” Reprint of a manuscript from 1901, 2019
Other publications
Beyond art, Dr. Khatib was also a dedicated engineer and economist, publishing with the Institution of Engineering and Technology (IET) on economic evaluation in the electricity supply industry. These publications include:
“Financial and economic evaluation of projects in the electricity supply industry” IET, 1998
“Economic Evaluation of Projects in the Electricity Supply” IET, 2003
“Economic Evaluation of Projects in the Electricity Supply Industry (3rd Edition)” IET, 2014
These titles do not include his many articles on electrical engineering, economics, art, and other topics published across many outlets.
His final publication was his memoirs, titled “86 and still going… Slowly”, published in 2020, two years before his death on 31 May 2022. In these memoirs, reserved for his friends, family, and loved ones, Dr. Khatib recounts his life from beginning to end, teaching his grandchildren about their heritage and highlighting the importance of family.
References
Jordanian engineers
Jordanian historians
Rashidiya School alumni
Ain Shams University alumni
Alumni of the University of Birmingham
Alumni of Queen Mary University of London
Palestinian arts
Energy ministers of Jordan
Public works ministers of Jordan
Water ministers of Jordan
Planning ministers of Jordan
Members of the Senate of Jordan
Fellows of the IEEE
Fellows of the Institute of Engineering and Technology
Publishers
Electrical engineers | Hisham Khatib | Engineering | 1,741 |
14,631,918 | https://en.wikipedia.org/wiki/Cytochrome%20b-245 | The cytochrome b-245 protein complex is composed of a cytochrome b alpha chain (CYBA) and a beta chain (CYBB).
References
Cytochromes | Cytochrome b-245 | Chemistry | 38 |
2,626,472 | https://en.wikipedia.org/wiki/Michel%20Callon | Michel Callon (born 1945) is a professor of sociology at the École des mines de Paris and member of the Centre de sociologie de l'innovation. He is an author in the field of Science and Technology Studies and one of the leading proponents of actor–network theory (ANT) with Bruno Latour.
Recent career
Since the late 1990s, Michel Callon has led efforts to apply ANT approaches to study economic life, notably economic markets. This body of work interrogates the interrelation between the economy and economics, highlighting the ways in which economics and economics-inspired disciplines such as marketing shape the economy (Callon 1998 and 2005).
Bibliography
Books
Callon, Michel (ed.) (1998). The Laws of the Markets. London: Blackwell Publishers.
Callon, Michel (2005). "Why virtualism paves the way to political impotence", Economic Sociology - the European electronic newsletter. Read as PDF
Callon, M., Lascoumes, P., & Barthe, Y. (2009). Acting in an uncertain world: an essay on technical democracy. The MIT Press.
Chapters in books
Callon, Michel (1980). "Struggles and Negotiations to Define What is Problematic and What is Not: The Socio-logic of Translation." pp. 197–221 in The Social Process of Scientific Investigation, edited by Karin D. Knorr. Dordrecht: Reidel Publishing.
Callon, Michel (1986). "Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay." pp. 196–233 in Power, Action and Belief: A New Sociology of Knowledge, edited by John Law. London: Routledge & Kegan Paul.
See also
Obligatory passage point
External links
Michel Callon's University Home Page
Innovation economists
French sociologists
Academic staff of Mines Paris - PSL
French engineers
Living people
Sociologists of science
1945 births
Actor-network theory
French male writers
Philosophers of technology | Michel Callon | Technology | 409 |
10,089,291 | https://en.wikipedia.org/wiki/Comparison%20of%20DVR%20software%20packages | This is a comparison of digital video recorder (DVR), also known as personal video recorder (PVR), software packages. Note: this may be considered a comparison of DVB software, as not all listed packages have recording capabilities.
General information
Basic general information for popular DVR software packages - not all actually record.
Features
Information about what common and prominent DVR features are implemented natively (without third-party add-ons unless stated otherwise):
Video format support
Information about what video codecs are implemented natively (without third-party add-ons) in the PVRs.
Network support
Each feature is described in the context of computer-to-computer interaction.
All features must be available after the default install; otherwise the feature needs a footnote.
1 Yes with registry change
2 Yes with retail third-party plugin
3 Yes with free supported third-party plugin
4 Yes with free unsupported third-party plugin
5 Yes with free third-party software Web Guide 4
6 Yes with add-on software called DVBLink Server
7 Yes with using symlinks, or just adding folders in settings
TV tuner hardware
TV gateway network tuner TV servers
DVRs require TV tuner cards to receive signals. Many DVRs, as seen above, can use multiple tuners.
HDHomeRun offers CableCARD models (HDHomeRun Prime) and over-the-air models (HDHomeRun Connect) that are networked TV tuners.
See also
List of free television software
Comparison of video player software
Home cinema
Home theater PC (HTPC)
Digital video recorder
Hard disk recorder
DVD recorder
Quiet PC
Media server
Notes
External links
FLOSS Media Centers Comparison Chart
PVR software packages
Television technology
Television time shifting technology | Comparison of DVR software packages | Technology | 356 |
23,437,187 | https://en.wikipedia.org/wiki/Softwarepark%20Hagenberg | The Softwarepark Hagenberg is the Austrian technology park for software, located in Hagenberg im Mühlkreis, Austria, founded in 1989 by Professor Bruno Buchberger. The Softwarepark Hagenberg is a unique collaboration where research, business and education are intensively cooperating. Because of its success, the park has been called the "Wunder von Hagenberg" (miracle from Hagenberg).
History
Buchberger devised the concept for the park in 1989. It was created to balance economic needs with the desire for research; the concept focused one third on economic production, one third on research, and one third on academic education in the field of IT and related sciences. These three components drive a synergetic "spiral of innovation". Softwarepark has four main partners: the Province of Upper Austria, the Johannes Kepler University Linz, the Community of Hagenberg, and Raiffeisen Landesbank Oberösterreich, a major local bank. The first institute to be established in the park was Buchberger's Research Institute for Symbolic Computation (RISC), which then had 25 students and employees.
Today Softwarepark is increasing its international activities, e.g. with ISI - International Studies in Informatics Hagenberg or the recently launched International Incubator Hagenberg.
Within the first twenty years of its existence, €100 million was invested in developing the park.
Composition
The park currently houses 50 companies with more than 1000 employees as well as more than 1400 students of a number of universities and colleges, amongst them the Research Institute for Symbolic Computation, and the Software Competence Center Hagenberg SCCH funded by the Austrian COMET program.
Additionally, the park houses a social project for people with disabilities to provide computer support in cooperation with the Diakonisches Werk Gallneukirchen and the Fachhochschule Hagenberg.
Expansion
The current expansion schedule calls for €20 million of investment through 2012, covering the expansion of the Research Institute for Symbolic Computation as well as a hotel, sports arenas, and an international student union building. An additional €50 million is planned through 2014 for the expansion of the companies and research institutes in the park.
Books
References
External links
Softwarepark Hagenberg website
1989 establishments in Austria
Science parks in Austria
Information technology places
Buildings and structures in Upper Austria
Economy of Upper Austria
20th-century architecture in Austria | Softwarepark Hagenberg | Technology | 481 |
7,002,202 | https://en.wikipedia.org/wiki/Claudia%20Mitchell | Claudia Mitchell (born 1980) is a former United States Marine whose left arm was amputated near the shoulder following a motorcycle crash in 2004. She became the first woman to be outfitted with a bionic arm. The arm is controlled through muscles in her chest and side, which in turn are controlled by the nerves that had previously controlled her real arm. The nerves were rerouted to these muscles in a process of targeted reinnervation.
Her prosthesis, a prototype developed by the Rehabilitation Institute of Chicago, was one of the most advanced prosthetic arms developed to date.
References
External links
New Yorker article about Mitchell and the prosthetic procedure
Video of Mitchell demonstrating the prosthetic on YouTube, from New Scientist magazine
1980 births
American amputees
Cyborgs
Living people
United States Marines
Female United States Marine Corps personnel
Place of birth missing (living people)
21st-century American women | Claudia Mitchell | Biology | 181 |
458,673 | https://en.wikipedia.org/wiki/Chlorine%20dioxide | Chlorine dioxide is a chemical compound with the formula ClO2 that exists as a yellowish-green gas above 11 °C, a reddish-brown liquid between 11 °C and −59 °C, and bright orange crystals below −59 °C. It is usually handled as an aqueous solution. It is commonly used as a bleach. More recent developments have extended its applications in food processing and as a disinfectant.
Structure and bonding
The molecule ClO2 has an odd number of valence electrons, and therefore, it is a paramagnetic radical. It is an unusual "example of an odd-electron molecule stable toward dimerization" (nitric oxide being another example).
ClO2 crystallizes in the orthorhombic Pbca space group.
History
In 1933, Lawrence O. Brockway, a graduate student of Linus Pauling, proposed a structure that involved a three-electron bond and two single bonds. However, Pauling in his General Chemistry shows a double bond to one oxygen and a single bond plus a three-electron bond to the other. The valence bond structure would be represented as the resonance hybrid depicted by Pauling. The three-electron bond represents a bond that is weaker than the double bond. In molecular orbital theory this idea is commonplace if the third electron is placed in an anti-bonding orbital. Later work has confirmed that the highest occupied molecular orbital is indeed an incompletely-filled antibonding orbital.
Preparation
Chlorine dioxide was first prepared in 1811 by Sir Humphry Davy.
The reaction of chlorine with oxygen under conditions of flash photolysis in the presence of ultraviolet light results in trace amounts of chlorine dioxide formation.
Cl2 + 2 O2 → 2 ClO2• (under UV light)
Chlorine dioxide can decompose violently when separated from diluting substances. As a result, preparation methods that involve producing solutions of it without going through a gas-phase stage are often preferred.
Oxidation of chlorite
In the laboratory, ClO2 can be prepared by oxidation of sodium chlorite with chlorine:
2 NaClO2 + Cl2 → 2 ClO2 + 2 NaCl
Traditionally, chlorine dioxide for disinfection applications has been made from sodium chlorite, for example by the sodium chlorite–hypochlorite method:
2 NaClO2 + 2 HCl + NaOCl → 2 ClO2 + 3 NaCl + H2O
or the sodium chlorite–hydrochloric acid method:
5 NaClO2 + 4 HCl → 4 ClO2 + 5 NaCl + 2 H2O
or the chlorite–sulfuric acid method:
10 NaClO2 + 5 H2SO4 → 8 ClO2 + 5 Na2SO4 + 2 HCl + 4 H2O
All three methods can produce chlorine dioxide with high chlorite conversion yield. Unlike the other processes, the chlorite–sulfuric acid method is completely chlorine-free, although it requires about 25% more chlorite to produce an equivalent amount of chlorine dioxide: the acid-driven disproportionation yields only four ClO2 molecules per five chlorite ions, whereas chlorine oxidation converts each chlorite ion to ClO2. Alternatively, hydrogen peroxide may be used efficiently in small-scale applications.
Addition of sulfuric acid or any strong acid to chlorate salts produces chlorine dioxide.
Reduction of chlorate
In the laboratory, chlorine dioxide can also be prepared by reaction of potassium chlorate with oxalic acid:
2 KClO3 + 2 H2C2O4 → K2C2O4 + 2 ClO2 + 2 CO2 + 2 H2O
or with oxalic and sulfuric acid:
2 KClO3 + H2C2O4 + H2SO4 → 2 ClO2 + 2 CO2 + K2SO4 + 2 H2O
Over 95% of the chlorine dioxide produced in the world today is made by reduction of sodium chlorate, for use in pulp bleaching. It is produced with high efficiency in a strong acid solution with a suitable reducing agent such as methanol, hydrogen peroxide, hydrochloric acid or sulfur dioxide. Modern technologies are based on methanol or hydrogen peroxide, as these chemistries allow the best economy and do not co-produce elemental chlorine. The overall reaction can be written as:
chlorate + reducing agent + strong acid → chlorine dioxide + by-products
As a typical example, the reaction of sodium chlorate with hydrochloric acid in a single reactor is believed to proceed through the following pathway:
HClO3 + HCl → HClO2 + HOCl
HClO3 + HClO2 → 2 ClO2 + H2O
HOCl + HCl → Cl2 + H2O
which gives the overall reaction
2 NaClO3 + 4 HCl → 2 ClO2 + Cl2 + 2 NaCl + 2 H2O
The commercially more important production route uses methanol as the reducing agent and sulfuric acid for the acidity. Two advantages of not using the chloride-based processes are that there is no formation of elemental chlorine, and that sodium sulfate, a valuable chemical for the pulp mill, is a side-product. These methanol-based processes provide high efficiency and can be made very safe.
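A commonly quoted overall stoichiometry for the methanol route is given below (illustrative; the exact by-product spectrum, such as whether the sulfate leaves as Na2SO4 or as the sesquisulfate Na3H(SO4)2, depends on process acidity):

6 NaClO3 + CH3OH + 4 H2SO4 → 6 ClO2 + CO2 + 5 H2O + 2 Na3H(SO4)2

Each chlorate ion takes up one electron on reduction to ClO2, while methanol oxidized fully to CO2 supplies six, which is why six ClO2 are produced per methanol molecule.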
The variant process using sodium chlorate, hydrogen peroxide and sulfuric acid has been increasingly used since 1999 for water treatment and other small-scale disinfection applications, since it produces a chlorine-free product at high efficiency (over 95%).
Other processes
Very pure chlorine dioxide can also be produced by electrolysis of a chlorite solution:
2 NaClO2 + 2 H2O → 2 ClO2 + 2 NaOH + H2
High-purity chlorine dioxide gas (7.7% in air or nitrogen) can be produced by the gas–solid method, which reacts dilute chlorine gas with solid sodium chlorite:
2 NaClO2 + Cl2 → 2 ClO2 + 2 NaCl
Handling properties
Chlorine dioxide is very different from elemental chlorine. One of the most important qualities of chlorine dioxide is its high water solubility, especially in cold water. Chlorine dioxide does not react with water; it remains a dissolved gas in solution. Chlorine dioxide is approximately 10 times more soluble in water than elemental chlorine but its solubility is very temperature-dependent.
At partial pressures above a threshold value (corresponding to gas-phase concentrations greater than 10% by volume in air at STP), ClO2 may explosively decompose into chlorine and oxygen. The decomposition can be initiated by light, hot spots, chemical reaction, or pressure shock. Thus, chlorine dioxide is never handled as a pure gas, but is almost always handled in an aqueous solution in concentrations between 0.5 and 10 grams per liter. Its solubility increases at lower temperatures, so it is common to use chilled water (5 °C, 41 °F) when storing at concentrations above 3 grams per liter. In many countries, such as the United States, chlorine dioxide may not be transported at any concentration and is instead almost always produced on-site. In some countries, chlorine dioxide solutions below 3 grams per liter in concentration may be transported by land, but they are relatively unstable and deteriorate quickly.
Uses
Chlorine dioxide is used for bleaching of wood pulp and for the disinfection (called chlorination) of municipal drinking water, treatment of water in oil and gas applications, disinfection in the food industry, microbiological control in cooling towers, and textile bleaching. As a disinfectant, it is effective even at low concentrations because of its unique qualities.
Bleaching
Chlorine dioxide is sometimes used for bleaching of wood pulp in combination with chlorine, but it is used alone in ECF (elemental chlorine-free) bleaching sequences. It is used at moderately acidic pH (3.5 to 6). The use of chlorine dioxide minimizes the amount of organochlorine compounds produced. Chlorine dioxide (ECF technology) currently is the most important bleaching method worldwide. About 95% of all bleached kraft pulp is made using chlorine dioxide in ECF bleaching sequences.
Chlorine dioxide has been used to bleach flour.
Water treatment
The water treatment plant at Niagara Falls, New York first used chlorine dioxide for drinking water treatment in 1944 for destroying "taste and odor producing phenolic compounds." Chlorine dioxide was introduced as a drinking water disinfectant on a large scale in 1956, when Brussels, Belgium, changed from chlorine to chlorine dioxide. Its most common use in water treatment is as a pre-oxidant prior to chlorination of drinking water to destroy natural water impurities that would otherwise produce trihalomethanes upon exposure to free chlorine. Trihalomethanes are suspected carcinogenic disinfection by-products associated with chlorination of naturally occurring organics in raw water. Chlorine dioxide also produces 70% fewer halomethanes in the presence of natural organic matter compared to when elemental chlorine or bleach is used.
Chlorine dioxide is also superior to chlorine when operating above pH 7, in the presence of ammonia and amines, and for the control of biofilms in water distribution systems. Chlorine dioxide is used in many industrial water treatment applications as a biocide, including cooling towers, process water, and food processing.
Chlorine dioxide is less corrosive than chlorine and superior for the control of Legionella bacteria.
Chlorine dioxide is superior to some other secondary water disinfection methods in that it is not negatively impacted by pH, does not lose efficacy over time (bacteria do not grow resistant to it), and is not negatively impacted by silica and phosphates, which are commonly used potable water corrosion inhibitors. In the United States, it is an EPA-registered biocide.
It is more effective as a disinfectant than chlorine in most circumstances against waterborne pathogenic agents such as viruses, bacteria, and protozoa – including the cysts of Giardia and the oocysts of Cryptosporidium.
The use of chlorine dioxide in water treatment leads to the formation of the by-product chlorite, which is currently limited to a maximum of 1 part per million in drinking water in the USA. This EPA standard restricts the use of chlorine dioxide in the US to relatively high-quality water (which minimizes the chlorite formed) or to water that will be treated with iron-based coagulants, because iron can reduce chlorite to chloride. The World Health Organization similarly advises a maximum dosage of 1 ppm.
Use in public crises
Chlorine dioxide has many applications as an oxidizer or disinfectant. Chlorine dioxide can be used for air disinfection and was the principal agent used in the decontamination of buildings in the United States after the 2001 anthrax attacks. After the disaster of Hurricane Katrina in New Orleans, Louisiana, and the surrounding Gulf Coast, chlorine dioxide was used to eradicate dangerous mold from houses inundated by the flood water.
In addressing the COVID-19 pandemic, the U.S. Environmental Protection Agency has posted a list of many disinfectants that meet its criteria for use in environmental measures against the causative coronavirus. Some are based on sodium chlorite that is activated into chlorine dioxide, though differing formulations are used in each product. Many other products on the EPA list contain sodium hypochlorite, which is similar in name but should not be confused with sodium chlorite because they have very different modes of chemical action.
Other disinfection uses
Chlorine dioxide may be used as a fumigant treatment to "sanitize" fruits such as blueberries, raspberries, and strawberries that develop molds and yeast.
Chlorine dioxide may be used to disinfect poultry by spraying or immersing it after slaughtering.
Chlorine dioxide may be used for the disinfection of endoscopes, such as under the trade name Tristel. It is also available in a trio consisting of a preceding pre-clean with surfactant and a succeeding rinse with deionized water and a low-level antioxidant.
Chlorine dioxide may be used for control of zebra and quagga mussels in water intakes.
Chlorine dioxide was shown to be effective in bedbug eradication.
For water purification during camping, disinfecting tablets containing chlorine dioxide are more effective against pathogens than those using household bleach, but typically cost more.
Other uses
Chlorine dioxide is used as an oxidant for destroying phenols in wastewater streams and for odor control in the air scrubbers of animal byproduct (rendering) plants. It is also available for use as a deodorant for cars and boats, in chlorine dioxide-generating packages that are activated by water and left in the boat or car overnight.
In dilute concentrations, chlorine dioxide is an ingredient that acts as an antiseptic agent in some mouthwashes.
Safety issues in water and supplements
Potential hazards with chlorine dioxide include poisoning and the risk of spontaneous ignition or explosion on contact with flammable materials.
Chlorine dioxide is toxic, and limits on human exposure are required to ensure its safe use. The United States Environmental Protection Agency has set a maximum level of 0.8 mg/L for chlorine dioxide in drinking water. The Occupational Safety and Health Administration (OSHA), an agency of the United States Department of Labor, has set an 8-hour permissible exposure limit of 0.1 ppm in air (0.3 mg/m3) for people working with chlorine dioxide.
Chlorine dioxide has been fraudulently and illegally marketed as an ingestible cure for a wide range of diseases, including childhood autism and coronavirus. Children who have been given enemas of chlorine dioxide as a supposed cure for childhood autism have suffered life-threatening ailments. The U.S. Food and Drug Administration (FDA) has stated that ingestion or other internal use of chlorine dioxide, outside of supervised oral rinsing using dilute concentrations, has no health benefits of any kind, and it should not be used internally for any reason.
Pseudomedicine
On 30 July and 1 October 2010, the United States Food and Drug Administration warned against the use of the product "Miracle Mineral Supplement", or "MMS", which when prepared according to the instructions produces chlorine dioxide. MMS has been marketed as a treatment for a variety of conditions, including HIV, cancer, autism, acne, and, more recently, COVID-19. Many have complained to the FDA, reporting life-threatening reactions, and even death. The FDA has warned consumers that MMS can cause serious harm to health, and stated that it has received numerous reports of nausea, diarrhea, severe vomiting, and life-threatening low blood pressure caused by dehydration. This warning was repeated for a third time on 12 August 2019, and a fourth on 8 April 2020, stating that ingesting MMS is just as hazardous as ingesting bleach, and urging consumers not to use them or give these products to their children for any reason, as there is no scientific evidence showing that chlorine dioxide has any beneficial medical properties.
References
External links
Chlorine oxides
Bleaches
Disinfectants
Free radicals
Gases with color
Explosive gases
Explosive chemicals | Chlorine dioxide | Chemistry,Biology | 3,025 |
15,875,142 | https://en.wikipedia.org/wiki/Fit-PC | The fit-PC is a small, light, fan-less nettop computer manufactured by the Israeli company CompuLab.
Many fit-PC models have been released: fit-PC 1.0 in July 2007, fit-PC Slim in September 2008, fit-PC2 in May 2009, fit-PC3 in early 2012, and fit-PC4 in spring 2014. The devices are power-efficient (fit-PC 1 drew about 5 W) and are therefore considered green computing products, capable of running open-source software and creating minimal electronic waste.
Current models
fit-PC2
On February 19, 2009, Compulab announced the fit-PC2, which is "a major upgrade to the fit-PC product line".
Detailed specifications for the fit-PC2 include an Intel Atom Z5xx "Silverthorne" processor (1.1, 1.6, or 2.0 GHz options), up to 2 GB of RAM, a 160 GB SATA hard drive, Gigabit LAN, and more. The fit-PC2 is also capable of HD video playback. Its declared power consumption is only 6 W; according to the manufacturer, it saves 96% of the power used by a standard desktop. The fit-PC2 is the most power-efficient PC on the Energy Star list.
The fit-PC2 is based on the Intel GMA 500 (Graphics Media Accelerator). The open-source driver included in Linux kernel 2.6.39, however, supports neither VA-API video acceleration nor OpenGL/3D acceleration.
The fit-PC2 is being phased out in favour of its successor, the fitlet.
fit-PC2i
On December 2, 2009, Compulab announced the fit-PC2i, a fit-PC2 variation targeting networking and industrial applications.
fit-PC2i adds a second Gbit Ethernet port, Wake-on-LAN, S/PDIF output and RS-232 port, has two fewer USB ports, and no IR.
fit-PC3
The fit-PC3 has been released early 2012.
See the fit-PC3 article.
fit-PC4
The fit-PC4 has been released spring 2014.
fitlet
The fitlet has been announced January 14, 2015.
It has 3 CPU/SoC variations, and 5 feature variations, though only 7 models have been announced so far.
Obsolete models
fit-PC Slim
On September 16, 2008, Compulab announced the Fit-PC Slim, which at 11 x 10 x 3 cm is smaller than fit-PC 1.0.
Hardware
fit-PC Slim uses a 500 MHz AMD Geode LX800 processor and has 512 MB of soldered-on RAM. The computer includes a VGA output, a serial port with a custom connector, Ethernet, 802.11b/g WLAN, and 3 USB ports (2 on the front panel). The system has an upgradeable 2.5" 60 GB ATA hard drive.
Software
fit-PC Slim has a General Software BIOS supporting PXE and booting from a USB CD-ROM or USB thumb drive. It comes pre-installed with either Windows Vista or with Ubuntu 8.10 and Gentoo Linux 2008.0. Windows Embedded can also be used, or pre-installed on a FlowDrive.
Availability
The fit-PC Slim end-of-life was announced on 19 June 2009 with the general availability of fit-PC2.
fit-PC 1.0
fit-PC 1.0 is an earlier model that has the following differences
Limited to 256 MB RAM
No Wi-Fi
Dual 100BaseT Ethernet
Larger form factor - 12 x 11.6 x 4 cm
Only 2 USB ports
Hard disk is upgradeable
No power button and indicator LEDs
5 V power supply
See also
Trim-Slice, an ARM mini-computer also made by CompuLab
Industrial PC
Media center (disambiguation)
Media PC
Nettop
References
External links
fit-PC website
Compulab website
fit-PC Australia website
fit-PC2 Users forum
fit-PC US Website
Computers and the environment
Israeli brands
Linux-based devices
Mini PC
Products introduced in 2007 | Fit-PC | Technology | 867 |
40,073,915 | https://en.wikipedia.org/wiki/Dyakonov%20surface%20wave | Dyakonov surface waves (DSWs) are surface electromagnetic waves that travel along the interface between an isotropic medium and a uniaxially birefringent medium. They were theoretically predicted in 1988 by the Russian physicist Mikhail Dyakonov. Unlike other types of acoustic and electromagnetic surface waves, the DSW's existence is due to the difference in symmetry of the materials forming the interface. Dyakonov considered the interface between an isotropic transmitting medium and an anisotropic uniaxial crystal, and showed that under certain conditions waves localized at the interface should exist. Later, similar waves were predicted to exist at the interface between two identical uniaxial crystals with different orientations.
The previously known electromagnetic surface waves, surface plasmons and surface plasmon polaritons, exist under the condition that the permittivity of one of the materials forming the interface is negative, while the other one is positive (for example, this is the case for the air/metal interface below the plasma frequency). In contrast, the DSW can propagate when both materials are transparent; hence they are virtually lossless, which is their most fascinating property.
In recent years, the significance and potential of the DSW have attracted the attention of many researchers: a change of the constitutive properties of one or both of the two partnering materials – due to, say, infiltration by any chemical or biological agent – could measurably change the characteristics of the wave. Consequently, numerous potential applications are envisaged, including devices for integrated optics, chemical and biological surface sensing, etc.
However, it is not easy to satisfy the necessary conditions for the DSW, and because of this the first proof-of-principle experimental observation of DSW was reported only 20 years after the original prediction.
A large body of theoretical work has appeared dealing with various aspects of this phenomenon; see the detailed review. In particular, DSW propagation at magnetic interfaces, in left-handed materials, and in electro-optical and chiral materials has been studied. Resonant transmission due to DSW in structures using prisms was predicted, and the combination and interaction between DSW and surface plasmons (Dyakonov plasmons) has been studied and observed.
Physical properties
The simplest configuration considered in Ref. 1 consists of an interface between an isotropic material with permittivity ε and a uniaxial crystal with permittivities ε_o and ε_e for the ordinary and the extraordinary waves respectively. The crystal C axis is parallel to the interface. For this configuration, the DSW can propagate along the interface within certain angular intervals with respect to the C axis, provided that the condition ε_o < ε < ε_e is satisfied. Thus DSW are supported by interfaces with positive birefringent crystals only (ε_e > ε_o). The width of the angular interval is governed by a small dimensionless anisotropy parameter, denoted here η, which vanishes as the birefringence ε_e − ε_o goes to zero.

The angular intervals for the DSW phase and group velocities are different. The phase velocity interval is proportional to η² and is very narrow even for the most strongly birefringent natural crystals, such as rutile and calomel. The physically more important group velocity interval, however, is substantially larger (proportional to η), and calculations give markedly wider intervals for rutile and, especially, for calomel.
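As an illustration, the existence condition above can be checked numerically for a candidate material pair (a minimal sketch; the indices are rough textbook values for rutile in the visible, and the helper function is ours, not from the literature):

def supports_dsw(eps_iso: float, n_o: float, n_e: float) -> bool:
    """True if an isotropic medium of permittivity eps_iso on a uniaxial
    crystal with ordinary/extraordinary indices n_o, n_e satisfies the
    Dyakonov condition: positive birefringence and eps_o < eps_iso < eps_e."""
    eps_o, eps_e = n_o ** 2, n_e ** 2
    return n_e > n_o and eps_o < eps_iso < eps_e

# Rutile (approx. n_o = 2.61, n_e = 2.90): an isotropic medium with
# refractive index ~2.75 (eps ~ 7.6) falls inside the window...
print(supports_dsw(2.75 ** 2, 2.61, 2.90))  # True
# ...while ordinary glass (n ~ 1.5, eps ~ 2.25) does not.
print(supports_dsw(1.5 ** 2, 2.61, 2.90))   # False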
Perspectives
A widespread experimental investigation of DSW material systems and the development of related practical devices have been largely limited by the stringent anisotropy conditions necessary for successful DSW propagation, particularly the high degree of birefringence required of at least one of the constituent materials and the limited number of naturally available materials fulfilling this requirement. However, this is changing in light of novel artificially engineered metamaterials and new material synthesis techniques.
The extreme sensitivity of DSW to anisotropy, and thereby to stress, along with their low-loss (long-range) character render them particularly attractive for enabling high sensitivity tactile and ultrasonic sensing for next-generation high-speed transduction and read-out technologies. Moreover, the unique directionality of DSW can be used for the steering of optical signals.
See also
Dyakonov–Voigt wave
Surface wave
Leaky mode
References
Condensed matter physics
Surface science
Surface waves | Dyakonov surface wave | Physics,Chemistry,Materials_science,Engineering | 854 |
9,096,236 | https://en.wikipedia.org/wiki/New%20England%20Biolabs | New England Biolabs (NEB) is an American life sciences company which produces and supplies recombinant and native enzyme reagents for life science research. It also provides products and services supporting genome editing, synthetic biology and next-generation sequencing. NEB also provides free access to research tools such as REBASE, InBASE, and Polbase.
About
The company was founded in 1974 by Donald "Don" Comb, a Harvard Medical School professor, as a cooperative laboratory of experienced scientists and initially produced restriction enzymes on a commercial scale. Comb held the CEO title until 2005 when, at 78 years old, he moved from management back into research at the firm.
NEB received approximately $1.7 million in Small Business Innovation Research (SBIR) grants between 2009 and 2013 in support of its research.
NEB produces 230 recombinant and 30 native restriction enzymes for genomic research, as well as nicking enzymes and DNA methylases. It pursues research in areas related to proteomics, DNA sequencing, and drug discovery. NEB scientists also conduct basic research in molecular biology and parasitology.
The company has subsidiaries in Singapore, Canada, China, France, Germany, Japan, the U.K., and Australia, and distributors in South America, Australia, and other countries in Europe and Asia. Its headquarters are in Ipswich, MA. Development of the current headquarters began in 2000 and was completed in 2005. Donald Comb served as the company's chairman and CEO from its founding in 1974 until 2005, when he was replaced as chief executive by James Ellard, though Comb continued to serve as chairman of the board of directors. Comb died in October 2020 at the age of 93. NEB employs over 450 people at its headquarters. As company policy, all scientists and some executives must work at least one day per month on the customer support telephone line, answering technical questions about the company's products. In 2022, Jim Ellard stepped down as CEO but remained chairman of the board of directors; he was succeeded by Salvatore (Sal) Russello, previously NEB's director of OEM & customized solutions.
Sir Richard John Roberts is the company's Chief Scientific Officer. He shared the 1993 Nobel Prize in Physiology or Medicine with Phillip Allen Sharp for the discovery of introns in eukaryotic DNA and the mechanism of gene-splicing.
In 2015, NEB committed to establishing a GMP manufacturing facility near its headquarters in Ipswich, Massachusetts, and the 40,000-sq-ft facility was completed in 2018. The multi-product Rowley Cleanroom Manufacturing Facility makes GMP-grade products and has a 10,000-sq-ft mechanical mezzanine.
Applications and Tools
Luna kits
In January 2017, NEB released Luna universal quantitative real-time polymerase chain reaction (qPCR) and reverse-transcription quantitative polymerase chain reaction (RT-qPCR) kits. The Luna kits are used for DNA or RNA quantitation.
NEBNext products
In December 2017, the company released the NEBNext Ultra II FS DNA library prep kit for next-generation sequencing (NGS). In October 2019, NEB released a new RNA depletion product, the NEBNext Globin & rRNA Depletion Kit (Human/Mouse/Rat) and NEBNext rRNA Depletion Kit (Bacteria). The kits offer specific depletion of the RNA species that interfere with the analysis of coding and non-coding RNAs. That same month, the company announced its NEBNext Direct Genotyping Solution. The product delivers a one-day, automatable genotyping workflow for a variety of applications in agricultural biotechnology.
In January 2020, NEB signed an agreement with ERS Genomics Limited that gave NEB rights to sell CRISPR/Cas9 tools and reagents, used for gene editing.
Cloning and synthetic biology
The NEBuilder HiFi DNA Assembly Cloning Kit and Master Mix enable one-step cloning and multiple DNA fragment assembly. The proprietary DNA polymerase in the NEBuilder HiFi enzyme mix can assemble DNA fragments ranging from 100 bp to 19 kb. NEB also offers the Gibson Assembly Master Mix.
Monarch nucleic acid purification
NEB provides purification kits for both DNA and RNA. In May 2019, NEB released the Monarch Genomic DNA Purification Kit which is designed to minimize RNA contamination and allow high-yield purification of large DNA fragments. NEB’s nucleic acid purification products have been used in various studies, including:
Purification of genomic DNA used to discover naturally-occurring DNA modifications in bacteriophages.
Purification of RNA from wound biopsies to study the relationship between genetics, wound microbiome diversity, and wound healing.
Purification of genomic DNA from squid embryos used in the first gene knockout in a cephalopod.
Purification of RNA from mouse samples in a study identifying a pathway that selectively regulates cancer stem cells, which may be responsible for treatment resistance, tumor metastasis, and disease recurrence.
Purification of total RNA from Arabidopsis seedlings in a study demonstrating the first known response by a biological receptor to radio frequency exposure.
Response to COVID-19
New England Biolabs developed a colorimetric loop-mediated isothermal amplification (LAMP) assay for research use. This assay can be used to test for the presence of virus through nucleic acid detection, returning results in only 30 minutes. In 2020, the LAMP method was one of several molecular tests used to detect RNA from SARS-CoV-2, a strain of coronavirus that causes COVID-19.
RNA isolation kits were also used to develop assays to detect SARS-CoV-2. NEB’s Monarch Total RNA Miniprep Kit was not designed specifically for viral RNA extraction, but it was successfully used by different companies to extract viral RNA from biological samples. NEB also released a supplementary protocol for processing saliva, buccal swabs, and nasopharyngeal samples.
Three next-generation sequencing kits to support SARS-CoV-2 monitoring were launched in February, 2021. These kits, based on ARTIC Network protocols, provide virus transmission and evolution insights.
In April, 2021, the Color SARS-CoV-2 RT-LAMP Diagnostic Assay, utilizing New England Biolabs reagents, was approved for emergency use at Color Health Inc in Burlingame, California.
Databases
The company runs free scientific databases. REBASE, the restriction enzyme database, contains the details of commercial and research endonucleases. In 2011 the company founded Polbase, an online database which provides information specifically about polymerases. Another free NEB database is InBase, an intein database, which includes the Intein Registry and information about each intein.
Partnerships
In 2001, NEB co-founded the marine DNA library Ocean Genome Legacy (OGL), which according to the Boston Globe, “catalogues samples of organisms from all over the world, to be made available to scientists for research”. Though originally located on the NEB campus, OGL relocated to the Nahant campus of Northeastern University in 2014. To enable point-of-use sales of its reagents, NEB created a digital interface for enzyme-housing freezers at customer storage sites, through a partnership with Ionia Corp. and Salesforce.com. The data is used by the company both for sales logistics and as part of future enzyme research development. It has also partnered with Harvard University on recycling and reclamation initiatives for when its products and packaging come to the end of their use or lifecycle. NEB has also had a distribution agreement with VWR.
In June 2019, NEB, Waters, and Genos announced they would work together on The Human Glycome Project, a global initiative to map the structure and function of human glycans. NEB will supply a version of its Rapid PNGase F technology to accelerate sample preparation and improve process throughput.
That same month, NEB entered a partnership with Bioz, Inc., an artificial intelligence technology company, to provide its customers with access to examples of real-world applications of its products.
References
Notes
Research support companies
Biotechnology companies established in 1974
Life sciences industry
Life science companies based in Massachusetts
Companies based in Essex County, Massachusetts
1974 establishments in Massachusetts | New England Biolabs | Biology | 1,740 |
46,689,760 | https://en.wikipedia.org/wiki/Nurse%20Hitomi%27s%20Monster%20Infirmary | Nurse Hitomi's Monster Infirmary is a Japanese manga series written and illustrated by Shake-O. It follows the daily life and adventures of Hitomi Manaka, a cyclops who works as a school nurse, and her co-workers and students dealing with their human (and not-so-human) problems.
Nurse Hitomi's Monster Infirmary is published in Japan by Tokuma Shoten in their Monthly Comic Ryū magazine, and by Seven Seas Entertainment in North America.
Plot
In a world where certain individuals undergo unique and abnormal changes during puberty, Damoto Junior High's school nurse Hitomi Manaka does her best to help her patients through everything from insecurities to incidents like limbs that just will not stay attached and even shrinking spurts.
Characters
A cyclops and the main heroine of the series, who works as the school nurse. The eldest sibling in her family, she lacks depth perception, but her single eye allows her to observe any abnormalities in students and better help them.
A plant-based being of unknown gender who serves as Hitomi's assistant, keeping a record of all students with abnormalities who visit the nurse's office.
A 2nd-year student from Class A whose tongue became elastic and grew to 320.5 cm in length, the first of Hitomi's patients.
Class B's teacher at the school, a kind-hearted gentleman despite his initial imposing appearance due to fur covering his entire body.
Class A's teacher, he is a childhood friend of Hitomi's and their parents' houses are next door to each other. He has two extra arms growing from his torso.
A winged girl and delinquent 2nd year student from Class B. She refuses to obey authority much of the time and is anti-social. She can fly, although it takes a lot out of her.
Conjoined twin 2nd year students from Class D, Naruki being popular with the girls prior to Kaori appearing on his body. Kaori, identifying herself as female and initially a growth before becoming Naruki's right head, shares many of her brother's traits with admirers of her own.
Publication
Shake-O began publishing the series in Tokuma Shoten's Monthly Comic Ryū magazine on 19 September 2013. The series moved to online-only serialization when Comic Ryū changed formats on 19 June 2018. Seven Seas Entertainment licensed the series for publication in North America.
The series has been collected into eighteen volumes, of which thirteen have been published in English.
Reception
Lynzee Loveridge ranked the series at number five on her list of "7 Manga for Monster Girl Lovers" on Anime News Network.
References
External links
at Monthly Comic Ryū
at Seven Seas Entertainment
Anime and manga set in schools
Comedy anime and manga
Comics about monsters
Cyclopes
Fantasy anime and manga
Fiction about size change
Seinen manga
Seven Seas Entertainment titles
Slice of life anime and manga
Tokuma Shoten manga | Nurse Hitomi's Monster Infirmary | Physics,Mathematics | 599 |
76,755,445 | https://en.wikipedia.org/wiki/1973%20Concorde%20eclipse%20flight | On 30 June 1973, the supersonic jet Concorde 001 intercepted the path of a total solar eclipse and followed the path of totality as it crossed Africa. This feat allowed the passengers to experience a total solar eclipse for 74 minutes, the longest-ever total eclipse observation. Five experiments were carried out during the flight, but they have had limited scientific impact.
Sequence of events
Preparation and lead up
In May 1972, Pierre Léna, an astronomer with the Paris Observatory, met with French Concorde test pilot André Turcat over lunch at a restaurant at Toulouse Airport to propose his idea to view the 1973 eclipse from an aircraft. Léna describes this meeting in his book about the project, Concorde 001 et l’ombre de la Lune (2015), while Turcat describes it in Un mythe éclipsé in Bulletin de l’Académie des sciences, agriculture, arts et belles lettres d’Aix-en-Provence (2013). British astrophysicist John Beckman had previously tried to obtain permission to use the 002 Concorde prototype to conduct a similar experiment, but was turned down.
In autumn 1972, Léna was told that he, Turcat and their teams could begin work, but that no firm decision would be made about the flight before February 1973. On 2 February, it was announced that the flight would proceed. The scientists were able to carry out a test flight with their equipment on 17 May 1973, in their maiden supersonic flight. The final 2-hour-and-36-minute rehearsal flight took place on 28 June.
30 June 1973
At 10:08 GMT on 30 June 1973, Concorde 001 departed Las Palmas, Gran Canaria, piloted by André Turcat and Jean Dabo. Aboard the flight were Turcat and Dabo; flight mechanic Michel Rétif; radio navigator Hubert Guyonnet; Henri Perrier; and astronomers Léna, Beckman, Donald Hall, Donald Liebenberg, Alain Soufflot, Paul Wraight, and Serge Koutchmy.
The plane intercepted the path of totality over Mauritania within one second of the planned rendezvous and flew at an altitude of 58,000 feet at Mach 2. Mauritania closed its airspace to commercial air traffic to ensure the success of the Concorde's flight. The aircraft flew in the lunar shadow over the Sahara including Mali, Nigeria and Niger, before landing in Fort-Lamy (present-day N'Djamena), in Chad.
On the ground on Earth, the longest possible viewing of totality of this eclipse from a fixed location was 7 minutes and 4 seconds. The Concorde experienced 74 minutes of totality with an extended second contact of 7 minutes and extended third contact of 12 minutes.
Aircraft
The original Concorde prototype 001 made its first test flight in 1969 from Toulouse Airport. The specific modified version of the aircraft used for this experiment was the Concorde 001 registered as F-WTSS. The aircraft has four twin-spool Olympus 593 engines and two onboard inertial guidance systems. Four specially-made portholes were installed in the roof of the aircraft's fuselage to facilitate viewing of the Sun. Infrared and optical cameras were installed in portholes in the plane's roof to capture the Sun's corona with less atmospheric interference than there would be from the ground.
F-WTSS is now on display as an exhibit at the Musée de l’air et de l’espace in France along with Air France Concorde 213, registered as F-BTSD.
Scientific observations
Five experiments were carried out during the 1973 Concorde 001 flight. Léna and his team (Université Paris) focused their efforts on studying the F-corona (the outer part of the Sun's corona, made up of dust particles). Wraight (University of Aberdeen) measured the effects of the eclipse on oxygen atoms in the Earth's atmosphere through a side-porthole. Liebenberg (University of California, Los Alamos Scientific Laboratories) measured pulsations in light intensity, while Beckman (Queen Mary College) observed the far infrared emissions from the chromosphere.
Legacy
Though this event garnered wide and lasting media attention, solar researchers generally agree that the Concorde's flight has had limited scientific impact. Kevin Reardon of the National Solar Observatory said of the flight, "Strangely no significant results were ever published from the effort. [...] The overall science output was not as notable as the flight itself." Léna himself has admitted, "The five experiments all succeeded, but none of them revolutionized our understanding of the corona" and that "[the experiments] all played their role in the normal progression of scientific knowledge, but there were no extraordinary results."
On 11 August 1999, three Concorde aircraft, one from France and two from the United Kingdom, carried out a similar feat carrying tourists instead of scientists. Passengers paid $2,400, but experienced only four or five minutes of totality, which was difficult to see because of the aircraft's small windows and the location of the Sun. A similar flight was planned for the 21 June 2001 solar eclipse, but was cancelled after the 2000 plane crash of Air France Flight 4590. Airborne eclipse chasing has been successfully attempted on other non-supersonic aircraft, including a LATAM Airlines Boeing 787-9 Dreamliner (E-Flight 2019-MAX) and a 2024 Gulfstream V jet.
The Concorde's 74 minutes of totality remains the longest-ever total eclipse observation.
Notes
References
Aviation occurrences
1973 in science
20th-century astronomical events
Solar eclipses
June 1973 events in Africa | 1973 Concorde eclipse flight | Astronomy | 1,148 |
5,173,024 | https://en.wikipedia.org/wiki/HD%20119921 | HD 119921 is a single, white-hued star in the southern constellation of Centaurus. it has the Bayer designation z Centauri. This is faintly visible to the naked eye, having an apparent visual magnitude of 5.15. It forms a wide double star with a faint, magnitude 12.50 visual companion, which is located at an angular separation of as of 2010. HD 119921 is moving closer to us with a heliocentric radial velocity of around −10 km/s, and is currently located some from the Sun. At that distance, the visual magnitude of this star is diminished by 0.15 from extinction due to interstellar dust.
This is an A-type main-sequence star with a stellar classification of A0 V, per Houk (1979). However, Gray & Garrison (1987) have it classed as B9.5 III-n, suggesting it is a more evolved giant star. HD 119921 is spinning rapidly with a projected rotational velocity of 220 km/s. The star is radiating around 125 times the Sun's luminosity from its photosphere at an effective temperature of 8,801 K.
In 1983, Molaro et al. reported the presence of super-ionized elements (triple-ionized carbon and silicon) in the far ultraviolet spectrum of HD 119921. These anomalous features are not normally detected from a star in this temperature range. Instead, these blue-shifted absorption features may originate in the local interstellar medium.
References
A-type main-sequence stars
B-type giants
Centaurus
Centauri, z
Durchmusterung objects
119921
067244
5174 | HD 119921 | Astronomy | 348 |
8,429,860 | https://en.wikipedia.org/wiki/Craig%20plot | The Craig plot, named after Paul N. Craig, is a plot of two substituent parameters (e.g. Hansch-Fujita π constant and sigma constant) used in rational drug design.
The two most commonly used forms of a Craig plot are the following (a minimal plotting sketch follows the list):
plotting the sigma constants of the Hammett equation versus hydrophobicity
plotting the steric terms of the Taft equation against hydrophobicity
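Below is a minimal, illustrative sketch of the first form in Python. The substituent choice and the Hammett σ (para) and Hansch π values are approximate literature constants included only to show the four-quadrant layout; treat them as assumptions, not authoritative data.

```python
# Sketch of a Craig plot: Hammett sigma (para) vs. Hansch pi for a few
# common substituents. Values are approximate and for illustration only.
import matplotlib.pyplot as plt

substituents = {          # name: (sigma_para, pi) -- approximate values
    "CH3":  (-0.17,  0.56),
    "Cl":   ( 0.23,  0.71),
    "OH":   (-0.37, -0.67),
    "NO2":  ( 0.78, -0.28),
    "OCH3": (-0.27, -0.02),
}

fig, ax = plt.subplots()
for name, (sigma, pi) in substituents.items():
    ax.scatter(sigma, pi)
    ax.annotate(name, (sigma, pi))
ax.axhline(0, color="grey")   # quadrant boundaries through the origin
ax.axvline(0, color="grey")
ax.set_xlabel("Hammett sigma (para)")
ax.set_ylabel("Hansch pi (hydrophobicity)")
plt.show()
```

Substituents falling in the same quadrant combine electronic and hydrophobic effects in the same direction, which is what makes the plot useful for picking analogs in drug design.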
See also
Quantitative structure-activity relationship
pKa
References
Further reading
Medicinal chemistry | Craig plot | Chemistry,Biology | 96 |
59,908,228 | https://en.wikipedia.org/wiki/Abnormal%20urine%20color | Normally, human urine color is straw-yellow. Urine color other than straw-yellow sometimes reflects an abnormality—an underlying pathological condition—in human beings.
Signs and symptoms
The signs and symptoms of abnormal urine color are shown as follows:
Unexplained urine color other than straw-yellow that has persisted for a long time.
Any observation of blood in the urine.
Clear, dark-brown urine.
Risk factors of clinical abnormal urine color include elderly age, strenuous exercise, and family history of related diagnosis.
Cause
Infection, disease, medicines, or food can all affect urine color temporarily. For instance, cloudy or milky urine usually accompanied by bad smell possibly indicates urinary tract infection, excessive discharge of crystals, fat, white blood cells, red blood cells, or mucus.
Dark urine that looks brown but clear might be a warning sign of a serious liver disease like hepatitis or cirrhosis, in which an excess of bilirubin is discharged through the urine.
Urine that looks pink, red, or light brown is generally caused by beets, blackberries, certain food colorings, hemolytic anemia, renal impairment, urinary tract infection, medication, porphyria, intra-abdominal bleeding, vaginal bleeding, or a neoplasm located in the bladder or kidney pathways.
If urine looks dark yellow or orange, the causative factors might be recent use of a riboflavin-containing dietary supplement, carotene, phenazopyridine, rifampin, warfarin, or a laxative.
Causative or contributing factors for urine turning green or blue include artificial colors in foods and drugs, the presence of bilirubin, medicines such as methylene blue, and urinary tract infections.
Diagnosis
A doctor may prescribe tests to help get the full picture of the situation, such as blood tests, liver function tests, ultrasound of the kidneys and bladder, urinalysis, urine culture for infection, and cystoscopy.
The doctor may also ask for a medical history to collect information before making a diagnosis.
See also
Urine § color
References
Urine | Abnormal urine color | Biology | 444 |
35,347,675 | https://en.wikipedia.org/wiki/Caesium%20bicarbonate | Caesium bicarbonate or cesium bicarbonate is a chemical compound with the chemical formula CsHCO3. It can be produced through the following reaction:
Cs2CO3 + CO2 + H2O → 2 CsHCO3
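As a brief worked check of the reaction's mass balance (standard atomic masses, rounded to one decimal; the arithmetic is mine, not from the source):

```latex
\underbrace{325.8\,\text{g}}_{1\ \text{mol Cs}_2\text{CO}_3}
+ \underbrace{44.0\,\text{g}}_{1\ \text{mol CO}_2}
+ \underbrace{18.0\,\text{g}}_{1\ \text{mol H}_2\text{O}}
\;\rightarrow\;
\underbrace{387.8\,\text{g}}_{2\ \text{mol CsHCO}_3\ (2 \times 193.9\,\text{g/mol})}
```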
The compound can be used for synthesizing caesium salts, but it is less commonly used than caesium carbonate.
References
Caesium compounds
Bicarbonates | Caesium bicarbonate | Chemistry | 85 |
60,719,867 | https://en.wikipedia.org/wiki/Lactarius%20abbotanus | Lactarius abbotanus is a member of the large milk-cap genus Lactarius in the order Russulales. It is found in India, and was first described by mycologists J. R. Sharma and Kanad Das in 2003.
Description
The cap is convex with a depressed center, measuring between 64 and 83 mm in diameter. The lamellae are yellowish white and distant, with about 4 being observed per 10 mm. The stipe measures from 38 to 45 mm in height and from 14 to 18 mm in diameter, being cylindrical or having a slightly wider base. It is hollow, with the base being hairy. L. abbotanus exudes white latex, which turns brilliant yellow immediately after exposure to air. This species is closely related to L. citriolens, L. delicatus and L. aquizonatus.
Distribution and ecology
The species was observed as solitary, forming ectomycorrhizae with specimens of Quercus leucotrichophora in temperate deciduous forests of the Kumaon mountains. The type specimens were collected in Abbot Mount, Champawat, Uttaranchal. The species is named after the type locality; where they were observed at an altitude of 2200 m.
See also
List of Lactarius species
References
External links
abbotanus
Fungi described in 2003
Fungi of Asia
Fungus species | Lactarius abbotanus | Biology | 281 |
38,097,307 | https://en.wikipedia.org/wiki/Rho%20Orionis | Rho Orionis, Latinised from ρ Orionis, is the Bayer designation for an orange-hued binary star system in the equatorial constellation of Orion. It is visible to the naked eye with an apparent visual magnitude of +4.44. The star shows an annual parallax shift of 9.32 mas due to the orbital motion of the Earth, which provides a distance estimate of roughly 350 light-years from the Sun. It is moving away from the Sun with a radial velocity of +40.5 km/s. About 2.6 million years ago, Rho Orionis made its perihelion passage at a distance of around .
This is a single-lined spectroscopic binary system with an orbital period of 2.8 years and an eccentricity of 0.1. The visible component is an evolved giant star of type K with a stellar classification of K0 III. Its measured angular diameter is , which, at its estimated distance, yields a physical size of about 25 times the radius of the Sun. It has 2.67 times the mass of the Sun and is about 650 million years old. The star is radiating 251 times the Sun's luminosity from its enlarged photosphere at an effective temperature of .
Notes
References
K-type giants
Spectroscopic binaries
Orion (constellation)
Orionis, Rho
Durchmusterung objects
Orionis, 17
033856
024331
1698 | Rho Orionis | Astronomy | 292 |
8,152,998 | https://en.wikipedia.org/wiki/Work%20output | In physics, work output is the work done by a simple machine, compound machine, or any type of engine model. In common terms, it is the energy output, which for simple machines is always less than the energy input, even though the forces may be drastically different.
In thermodynamics, work output can refer to the thermodynamic work done by a heat engine, in which case the amount of work output must be less than the input as energy is lost to heat, as determined by the engine's efficiency.
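For the heat-engine case, a one-line worked example using the standard efficiency definition (the numbers are assumed for illustration):

```latex
W_{\text{out}} = \eta\,Q_{\text{in}},\qquad
\eta = 0.30,\; Q_{\text{in}} = 1000\,\text{J}
\;\Rightarrow\;
W_{\text{out}} = 300\,\text{J},\quad
Q_{\text{rejected}} = 700\,\text{J}.
```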
References
Thermodynamics | Work output | Physics,Chemistry,Mathematics | 120 |
624,231 | https://en.wikipedia.org/wiki/Voltage%20regulator | A voltage regulator is a system designed to automatically maintain a constant voltage. It may use a simple feed-forward design or may include negative feedback. It may use an electromechanical mechanism, or electronic components. Depending on the design, it may be used to regulate one or more AC or DC voltages.
Electronic voltage regulators are found in devices such as computer power supplies where they stabilize the DC voltages used by the processor and other elements. In automobile alternators and central power station generator plants, voltage regulators control the output of the plant. In an electric power distribution system, voltage regulators may be installed at a substation or along distribution lines so that all customers receive steady voltage independent of how much power is drawn from the line.
Electronic voltage regulators
A simple voltage/current regulator can be made from a resistor in series with a diode (or series of diodes). Due to the logarithmic shape of diode V-I curves, the voltage across the diode changes only slightly due to changes in current drawn or changes in the input. When precise voltage control and efficiency are not important, this design may be fine. Since the forward voltage of a diode is small, this kind of voltage regulator is only suitable for low voltage regulated output. When higher voltage output is needed, a zener diode or series of zener diodes may be employed. Zener diode regulators make use of the zener diode's fixed reverse voltage, which can be quite large.
Feedback voltage regulators operate by comparing the actual output voltage to some fixed reference voltage. Any difference is amplified and used to control the regulation element in such a way as to reduce the voltage error. This forms a negative feedback control loop; increasing the open-loop gain tends to increase regulation accuracy but reduce stability. (Stability is the avoidance of oscillation, or ringing, during step changes.) There will also be a trade-off between stability and the speed of the response to changes. If the output voltage is too low (perhaps due to input voltage reducing or load current increasing), the regulation element is commanded, up to a point, to produce a higher output voltage–by dropping less of the input voltage (for linear series regulators and buck switching regulators), or to draw input current for longer periods (boost-type switching regulators); if the output voltage is too high, the regulation element will normally be commanded to produce a lower voltage. However, many regulators have over-current protection, so that they will entirely stop sourcing current (or limit the current in some way) if the output current is too high, and some regulators may also shut down if the input voltage is outside a given range (see also: crowbar circuits).
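The following toy calculation illustrates the gain-versus-accuracy behaviour described above. All component values are arbitrary illustrative assumptions, and the model ignores dynamics and stability entirely, so it shows only the steady-state trade-off:

```python
# Toy model of a feedback regulator: an error amplifier with open-loop
# gain compares a divided-down sample of the output with a fixed
# reference and drives the pass element. Values are assumptions.

def closed_loop_vout(vin, gain=10_000, vref=1.25, r1=30e3, r2=10e3):
    beta = r2 / (r1 + r2)                    # feedback divider fraction
    vout = gain * vref / (1 + gain * beta)   # standard closed-loop result
    return min(vout, vin)                    # output cannot exceed input

for gain in (10, 100, 10_000):
    print(gain, round(closed_loop_vout(12.0, gain=gain), 3))
# Prints about 3.571, 4.808, 4.998: higher open-loop gain pulls the
# output toward the ideal set point vref * (r1 + r2) / r2 = 5 V,
# mirroring the accuracy/stability trade-off described in the text.
```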
Electromechanical regulators
In electromechanical regulators, voltage regulation is easily accomplished by coiling the sensing wire to make an electromagnet. The magnetic field produced by the current attracts a moving ferrous core held back under spring tension or gravitational pull. As voltage increases, so does the current, strengthening the magnetic field produced by the coil and pulling the core towards the field. The magnet is physically connected to a mechanical power switch, which opens as the magnet moves into the field. As voltage decreases, so does the current, releasing spring tension or the weight of the core and causing it to retract. This closes the switch and allows the power to flow once more.
If the mechanical regulator design is sensitive to small voltage fluctuations, the motion of the solenoid core can be used to move a selector switch across a range of resistances or transformer windings to gradually step the output voltage up or down, or to rotate the position of a moving-coil AC regulator.
Early automobile generators and alternators had a mechanical voltage regulator using one, two, or three relays and various resistors to stabilize the generator's output at slightly more than 6.7 or 13.4 V to maintain the battery as independently of the engine's rpm or the varying load on the vehicle's electrical system as possible. The relay(s) modulated the width of a current pulse to regulate the voltage output of the generator by controlling the average field current in the rotating machine which determines strength of the magnetic field produced which determines the unloaded output voltage per rpm. Capacitors are not used to smooth the pulsed voltage as described earlier. The large inductance of the field coil stores the energy delivered to the magnetic field in an iron core so the pulsed field current does not result in as strongly pulsed a field. Both types of rotating machine produce a rotating magnetic field that induces an alternating current in the coils in the stator. A generator uses a mechanical commutator, graphite brushes running on copper segments, to convert the AC produced into DC by switching the external connections at the shaft angle when the voltage would reverse. An alternator accomplishes the same goal using rectifiers that do not wear down and require replacement.
Modern designs now use solid state technology (transistors) to perform the same function that the relays perform in electromechanical regulators.
Electromechanical regulators are used for mains voltage stabilisation—see AC voltage stabilizers below.
Automatic voltage regulator
Generators, as used in power stations, ship electrical power production, or standby power systems, will have automatic voltage regulators (AVR) to stabilize their voltages as the load on the generators changes. The first AVRs for generators were electromechanical systems, but a modern AVR uses solid-state devices. An AVR is a feedback control system that measures the output voltage of the generator, compares that output to a set point, and generates an error signal that is used to adjust the excitation of the generator. As the excitation current in the field winding of the generator increases, its terminal voltage will increase. The AVR will control current by using power electronic devices; generally a small part of the generator's output is used to provide current for the field winding. Where a generator is connected in parallel with other sources such as an electrical transmission grid, changing the excitation has more of an effect on the reactive power produced by the generator than on its terminal voltage, which is mostly set by the connected power system. Where multiple generators are connected in parallel, the AVR system will have circuits to ensure all generators operate at the same power factor. AVRs on grid-connected power station generators may have additional control features to help stabilize the electrical grid against upsets due to sudden load loss or faults.
AC voltage stabilizers
Coil-rotation AC voltage regulator
This is an older type of regulator used in the 1920s that uses the principle of a fixed-position field coil and a second field coil that can be rotated on an axis in parallel with the fixed coil, similar to a variocoupler.
When the movable coil is positioned perpendicular to the fixed coil, the magnetic forces acting on the movable coil balance each other out and voltage output is unchanged. Rotating the coil in one direction or the other away from the center position will increase or decrease voltage in the secondary movable coil.
This type of regulator can be automated via a servo control mechanism to advance the movable coil position in order to provide voltage increase or decrease. A braking mechanism or high-ratio gearing is used to hold the rotating coil in place against the powerful magnetic forces acting on the moving coil.
Electromechanical
Electromechanical regulators, called voltage stabilizers or tap-changers, have also been used to regulate the voltage on AC power distribution lines. These regulators operate by using a servomechanism to select the appropriate tap on an autotransformer with multiple taps, or by moving the wiper on a continuously variable autotransformer. If the output voltage is not in the acceptable range, the servomechanism switches the tap, changing the turns ratio of the transformer, to move the secondary voltage into the acceptable region. The controls provide a dead band wherein the controller will not act, preventing the controller from constantly adjusting the voltage ("hunting") as it varies by an acceptably small amount.
Constant-voltage transformer
The ferroresonant transformer, ferroresonant regulator or constant-voltage transformer is a type of saturating transformer used as a voltage regulator. These transformers use a tank circuit composed of a high-voltage resonant winding and a capacitor to produce a nearly constant average output voltage with a varying input current or varying load. The circuit has a primary on one side of a magnet shunt and the tuned circuit coil and secondary on the other side. The regulation is due to magnetic saturation in the section around the secondary.
The ferroresonant approach is attractive due to its lack of active components, relying on the square loop saturation characteristics of the tank circuit to absorb variations in average input voltage. Saturating transformers provide a simple rugged method to stabilize an AC power supply.
Older designs of ferroresonant transformers had an output with high harmonic content, leading to a distorted output waveform. Modern devices are designed to produce a nearly ideal sine wave. The ferroresonant action is a flux limiter rather than a voltage regulator, but with a fixed supply frequency it can maintain an almost constant average output voltage even as the input voltage varies widely.
The ferroresonant transformers, which are also known as constant-voltage transformers (CVTs) or "ferros", are also good surge suppressors, as they provide high isolation and inherent short-circuit protection.
A ferroresonant transformer can operate with an input voltage range ±40% or more of the nominal voltage.
Output power factor remains in the range of 0.96 or higher from half to full load.
Because it regenerates an output voltage waveform, output distortion, which is typically less than 4%, is independent of any input voltage distortion, including notching.
Efficiency at full load is typically in the range of 89% to 93%. However, at low loads, efficiency can drop below 60%. The current-limiting capability also becomes a handicap when a CVT is used in an application with moderate to high inrush current, like motors, transformers or magnets. In this case, the CVT has to be sized to accommodate the peak current, thus forcing it to run at low loads and poor efficiency.
Minimum maintenance is required, as transformers and capacitors can be very reliable. Some units have included redundant capacitors to allow several capacitors to fail between inspections without any noticeable effect on the device's performance.
Output voltage varies about 1.2% for every 1% change in supply frequency. For example, a 2 Hz change in generator frequency, which is very large, results in an output voltage change of only 4%, which has little effect for most loads.
It accepts 100% single-phase switch-mode power-supply loading without any requirement for derating, including all neutral components.
Input current distortion remains less than 8% THD even when supplying nonlinear loads with more than 100% current THD.
Drawbacks of CVTs are their larger size, audible humming sound, and the high heat generation caused by saturation.
Power distribution
Voltage regulators or stabilizers are used to compensate for voltage fluctuations in mains power. Large regulators may be permanently installed on distribution lines. Small portable regulators may be plugged in between sensitive equipment and a wall outlet. Automatic voltage regulators on generator sets to maintain a constant voltage for changes in load. The voltage regulator compensates for the change in load. Power distribution voltage regulators normally operate on a range of voltages, for example 150–240 V or 90–280 V.
DC voltage stabilizers
Many simple DC power supplies regulate the voltage using either series or shunt regulators, but most apply a voltage reference using a shunt regulator such as a Zener diode, avalanche breakdown diode, or voltage regulator tube. Each of these devices begins conducting at a specified voltage and will conduct as much current as required to hold its terminal voltage to that specified voltage by diverting excess current from a non-ideal power source to ground, often through a relatively low-value resistor to dissipate the excess energy. The power supply is designed to only supply a maximum amount of current that is within the safe operating capability of the shunt regulating device.
If the stabilizer must provide more power, the shunt regulator output is only used to provide the standard voltage reference for an electronic device known as the voltage stabilizer, which is able to deliver much larger currents on demand.
Active regulators
Active regulators employ at least one active (amplifying) component such as a transistor or operational amplifier. Shunt regulators are often (but not always) passive and simple, but always inefficient because they (essentially) dump the excess current which is not available to the load. When more power must be supplied, more sophisticated circuits are used. In general, these active regulators can be divided into several classes:
Linear series regulators
Switching regulators
SCR regulators
Linear regulators
Linear regulators are based on devices that operate in their linear region (in contrast, a switching regulator is based on a device forced to act as an on/off switch). Linear regulators are also classified in two types:
series regulators
shunt regulators
In the past, one or more vacuum tubes were commonly used as the variable resistance. Modern designs use one or more transistors instead, perhaps within an integrated circuit. Linear designs have the advantage of very "clean" output with little noise introduced into their DC output, but are most often much less efficient and unable to step-up or invert the input voltage like switched supplies. All linear regulators require an input voltage higher than the output voltage. If the input voltage approaches the desired output voltage, the regulator will "drop out". The input to output voltage differential at which this occurs is known as the regulator's drop-out voltage. Low-dropout regulators (LDOs) allow an input voltage that can be much closer to the output voltage (i.e., they waste less energy than conventional linear regulators).
Entire linear regulators are available as integrated circuits. These chips come in either fixed or adjustable voltage types. Examples of some integrated circuits are the 723 general purpose regulator and the 78xx/79xx series.
Switching regulators
Switching regulators rapidly switch a series device on and off. The duty cycle of the switch sets how much charge is transferred to the load. This is controlled by a similar feedback mechanism as in a linear regulator. Because the series element is either fully conducting, or switched off, it dissipates almost no power; this is what gives the switching design its efficiency. Switching regulators are also able to generate output voltages which are higher than the input, or of opposite polarity—something not possible with a linear design. In switched regulators, the pass transistor is used as a "controlled switch" and is operated at either cutoff or saturated state. Hence the power transmitted across the pass device is in discrete pulses rather than a steady current flow. Greater efficiency is achieved since the pass device is operated as a low-impedance switch. When the pass device is at cutoff, there is no current and it dissipates no power. Again when the pass device is in saturation, a negligible voltage drop appears across it and thus dissipates only a small amount of average power, providing maximum current to the load. In either case, the power wasted in the pass device is very little and almost all the power is transmitted to the load. Thus the efficiency of a switched-mode power supply is remarkably high, in the range of 70–90%.
Switched mode regulators rely on pulse-width modulation to control the average value of the output voltage. The average value of a repetitive-pulse waveform depends on the area under the waveform. When the duty cycle is varied, the average voltage changes proportionally.
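A minimal worked example of this proportionality, assuming an ideal buck-type switching stage in continuous conduction (values chosen for illustration):

```latex
V_{\text{avg}} = D \cdot V_{\text{in}},\qquad
D = 0.25,\; V_{\text{in}} = 12\,\text{V}
\;\Rightarrow\; V_{\text{avg}} = 3\,\text{V}.
```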
Like linear regulators, nearly complete switching regulators are also available as integrated circuits. Unlike linear regulators, these usually require an inductor that acts as the energy storage element. The IC regulators combine the reference voltage source, error op-amp, and pass transistor with short-circuit current limiting and thermal-overload protection.
Switching regulators are more prone to output noise and instability than linear regulators. However, they provide much better power efficiency than linear regulators.
SCR regulators
Regulators powered from AC power circuits can use silicon controlled rectifiers (SCRs) as the series device. Whenever the output voltage is below the desired value, the SCR is triggered, allowing electricity to flow into the load until the AC mains voltage passes through zero (ending the half cycle). SCR regulators have the advantages of being both very efficient and very simple, but because they can not terminate an ongoing half cycle of conduction, they are not capable of very accurate voltage regulation in response to rapidly changing loads. An alternative is the SCR shunt regulator which uses the regulator output as a trigger. Both series and shunt designs are noisy, but powerful, as the device has a low on resistance.
Combination or hybrid regulators
Many power supplies use more than one regulating method in series. For example, the output from a switching regulator can be further regulated by a linear regulator. The switching regulator accepts a wide range of input voltages and efficiently generates a (somewhat noisy) voltage slightly above the ultimately desired output. That is followed by a linear regulator that generates exactly the desired voltage and eliminates nearly all the noise generated by the switching regulator. Other designs may use an SCR regulator as the "pre-regulator", followed by another type of regulator. An efficient way of creating a variable-voltage, accurate output power supply is to combine a multi-tapped transformer with an adjustable linear post-regulator.
Example of linear regulators
Transistor regulator
In the simplest case, a common collector amplifier (emitter follower) is used, with the base of the regulating transistor connected directly to the voltage reference.
A simple transistor regulator will provide a relatively constant output voltage Uout for changes in the voltage Uin of the power source and for changes in load RL, provided that Uin exceeds Uout by a sufficient margin and that the power handling capacity of the transistor is not exceeded.
The output voltage of the stabilizer is equal to the Zener diode voltage minus the base–emitter voltage of the transistor, UZ − UBE, where UBE is usually about 0.7 V for a silicon transistor, depending on the load current. If the output voltage drops for any external reason, such as an increase in the current drawn by the load (causing an increase in the collector–emitter voltage to observe KVL), the transistor's base–emitter voltage (UBE) increases, turning the transistor on further and delivering more current to increase the load voltage again.
Rv provides a bias current for both the Zener diode and the transistor. The current in the diode is minimal when the load current is maximal. The circuit designer must choose a minimum voltage that can be tolerated across Rv, bearing in mind that the higher this voltage requirement is, the higher the required input voltage Uin, and hence the lower the efficiency of the regulator. On the other hand, lower values of Rv lead to higher power dissipation in the diode and to inferior regulator characteristics.
Rv is given by (a worked numeric example follows the definitions below)

R_v = (min V_R) / (min I_D + max I_L / h_FE)
where
min VR is the minimum voltage to be maintained across Rv,
min ID is the minimum current to be maintained through the Zener diode,
max IL is the maximum design load current,
hFE is the forward current gain of the transistor (IC/IB).
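A worked numeric evaluation of the formula above, using arbitrary illustrative values rather than figures from any datasheet:

```python
# Worked example of sizing the series resistor Rv for the Zener-based
# transistor regulator. All values below are illustrative assumptions.

min_VR = 2.0      # minimum voltage to maintain across Rv, volts (assumed)
min_ID = 5e-3     # minimum Zener diode current, amps (assumed)
max_IL = 0.5      # maximum design load current, amps (assumed)
hFE    = 100      # transistor forward current gain (assumed)

Rv = min_VR / (min_ID + max_IL / hFE)
print(f"Rv = {Rv:.0f} ohms")   # 2.0 / (0.005 + 0.005) = 200 ohms
```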
Regulator with a differential amplifier
The stability of the output voltage can be significantly increased by using a differential amplifier, possibly implemented as an operational amplifier:
In this case, the operational amplifier drives the transistor with more current if the voltage at its inverting input drops below the output of the voltage reference at the non-inverting input. Using the voltage divider (R1, R2 and R3) allows choice of the arbitrary output voltage between Uz and Uin.
Regulator specification
The output voltage can only be held constant within specified limits. The regulation is specified by two measurements:
Load regulation is the change in output voltage for a given change in load current (for example, "typically 15 mV, maximum 100 mV for load currents between 5 mA and 1.4 A, at some specified temperature and input voltage").
Line regulation or input regulation is the degree to which output voltage changes with input (supply) voltage changes—as a ratio of output to input change (for example, "typically 13 mV/V"), or the output voltage change over the entire specified input voltage range (for example, "plus or minus 2% for input voltages between 90 V and 260 V, 50–60 Hz").
Other important parameters are:
Temperature coefficient of the output voltage is the change with temperature (perhaps averaged over a given temperature range).
Initial accuracy of a voltage regulator (or simply "the voltage accuracy") reflects the error in output voltage for a fixed regulator without taking into account temperature or aging effects on output accuracy.
Dropout voltage is the minimum difference between input voltage and output voltage for which the regulator can still supply the specified current; below this input-output differential the voltage regulator will no longer maintain regulation. Further reduction in input voltage will result in reduced output voltage. This value is dependent on load current and junction temperature.
Inrush current or input surge current or switch-on surge is the maximum, instantaneous input current drawn by an electrical device when first turned on. Inrush current usually lasts for half a second, or a few milliseconds, but it is often very high, which makes it dangerous because it can degrade and burn components gradually (over months or years), especially if there is no inrush current protection. Alternating current transformers or electric motors in automatic voltage regulators may draw and output several times their normal full-load current for a few cycles of the input waveform when first energized or switched on. Power converters also often have inrush currents much higher than their steady state currents, due to the charging current of the input capacitance.
Absolute maximum ratings are defined for regulator components, specifying the continuous and peak output currents that may be used (sometimes internally limited), the maximum input voltage, maximum power dissipation at a given temperature, etc.
Output noise (thermal white noise) and output dynamic impedance may be specified as graphs versus frequency, while output ripple noise (mains "hum" or switch-mode "hash" noise) may be given as peak-to-peak or RMS voltages, or in terms of their spectra.
Quiescent current in a regulator circuit is the current drawn internally, not available to the load, normally measured as the input current while no load is connected and hence a source of inefficiency (some linear regulators are, surprisingly, more efficient at very low current loads than switch-mode designs because of this).
Transient response is the reaction of a regulator when a (sudden) change of the load current (called the load transient) or input voltage (called the line transient) occurs. Some regulators will tend to oscillate or have a slow response time which in some cases might lead to undesired results. This value is different from the regulation parameters, as that is the stable situation definition. The transient response shows the behaviour of the regulator on a change. This data is usually provided in the technical documentation of a regulator and is also dependent on output capacitance.
Mirror-image insertion protection means that a regulator is designed for use when a voltage, usually not higher than the maximum input voltage of the regulator, is applied to its output pin while its input terminal is at a low voltage, volt-free or grounded. Some regulators can continuously withstand this situation. Others might only manage it for a limited time such as 60 seconds (usually specified in the data sheet). For instance, this situation can occur when a three terminal regulator is incorrectly mounted on a PCB, with the output terminal connected to the unregulated DC input and the input connected to the load. Mirror-image insertion protection is also important when a regulator circuit is used in battery charging circuits, when external power fails or is not turned on and the output terminal remains at battery voltage.
See also
Charge controller
Constant current regulator
DC-to-DC converter
List of LM-series integrated circuits
Third-brush dynamo
Voltage comparator
Voltage regulator module
References
Further reading
Linear & Switching Voltage Regulator Handbook; ON Semiconductor; 118 pages; 2002; HB206/D.(Free PDF download)
Analog circuits
Regulator | Voltage regulator | Physics,Engineering | 5,092 |
20,829,311 | https://en.wikipedia.org/wiki/C6H11NO2 |
The molecular formula C6H11NO2 may refer to:
Cyclohexyl nitrite, an organic nitrite
Cycloleucine, an unnatural amino acid
Isonipecotic acid, a GABAA receptor partial agonist
Nipecotic acid, a GABA uptake inhibitor
Nitrocyclohexane, a nitro compound
Pipecolic acid, a small organic molecule which accumulates in pipecolic acidemia
Vigabatrin, an antiepileptic drug | C6H11NO2 | Chemistry | 121 |
54,400,853 | https://en.wikipedia.org/wiki/Gregory%20L.%20Verdine | Gregory L. Verdine (born June 10, 1959) is an American chemical biologist, biotech entrepreneur, venture capitalist and university professor. He is a founder of the field of chemical biology, which deals with the application of chemical techniques to biological systems. His work has focused on mechanisms of DNA repair and cell penetrability.
Verdine is the co-inventor with Christian Schafmeister of stapled peptides, a new class of drugs that combines the versatile binding properties of monoclonal antibodies with the cell-penetrating ability of small molecules. Verdine coined the term "drugging the undruggable" to describe the unique capabilities of stapled peptides. A close analog of a stapled peptide drug invented in the Verdine Lab, sulanemadlin (ALRN-6924), is a first-in-class dual MDM2/MDMX inhibitor currently in Phase II clinical development by Aileron Therapeutics, which he co-founded in 2005. FogPharma, founded in 2016, aims to further develop stapled peptide technology for therapeutic use.
He has founded numerous other drug discovery companies, including six that are listed on the NASDAQ. His companies have succeeded in developing two FDA-approved drugs, romidepsin and paritaprevir, which are, respectively, an anticancer agent used in cutaneous T-cell lymphoma (CTCL) and other peripheral T-cell lymphomas (PTCLs), and an acylsulfonamide inhibitor that is used to treat chronic hepatitis C.
Education and training
Verdine received a Bachelor of Science in Chemistry from Saint Joseph's University and a PhD in Chemistry from Columbia University, working under Koji Nakanishi and Maria Tomasz. He held an NIH postdoctoral fellowship in molecular biology at MIT and Harvard Medical School, and joined the faculty of Harvard University in 1988.
Academic career
Over the course of his academic career at Harvard University and the Harvard Medical School, Verdine has elucidated the molecular mechanism of epigenetic DNA methylation and pathways by which certain genotoxic forms of DNA damage are surveilled in and eradicated from the genome. As a professor, Verdine introduced biological principles into organic chemistry courses and helped found two fields of science that meld basic research and new medicines discovery: chemical biology, which enlists chemistry to answer biological questions; and new modalities, which works to discover and develop novel structural classes of therapeutics.
He has served as the Erving Professor of Chemistry in the Departments of Stem Cell and Regenerative Biology and Chemistry and Chemical Biology at Harvard University since 1988. In 2013, he stepped down from his tenured professorship at Harvard, taking a leave of absence in order to focus full-time on steering Warp Drive Bio as CEO while continuing to run his eponymous Verdine Laboratory at the Harvard University Department of Stem Cell & Regenerative Biology. The laboratory focused on research based in chemical biology, including synthetic biologics and genomic research. He has since transitioned to a 'professor of the practice' position at Harvard.
Research
In his academic research, Verdine made fundamental discoveries about how organisms manage their genomes: how they tag specific cell types and conduct search-and-destroy operations for cancer-causing abnormalities. Verdine has published more than 190 academic articles. In 2005, Verdine and Anirban Banerjee published research in crystallography showing how enzymes could be used to fix flawed DNA. In 2013, Verdine received a research grant to study cell-penetrating miniproteins in order to target cancer cells. His work has led to the FDA approval of the drugs romidepsin and paritaprevir.
Verdine is also the inventor of stapled peptide technology, which stabilizes peptides intended for therapeutic use by introducing an all-hydrocarbon “staple” into the peptide’s linear backbone. These “stapled” peptides have a higher affinity for their targets, enter cells more easily and are less readily degraded.
Biotechnology
Companies
To translate his discoveries into therapeutics, Verdine has founded or co-founded numerous public biotech companies including Variagenics, Enanta, Eleven Bio, Tokai, Wave Life Sciences, and Aileron. He also founded the private company Gloucester Pharmaceuticals, which was acquired by Celgene in 2009. His companies share the mission of developing molecules intended to target “hard-to-drug” endogenous targets that have remained out of reach of modern cell-penetration technologies.
FogPharma
In 2016, Verdine co-founded FogPharma with Sir David Lane to develop next-generation stapled peptides, Cell-Penetrating Miniproteins (CPMPs), a broad new class of medicines that aim to combine the cell-penetrating abilities of small molecules with the strong target engagement of biologics.
LifeMine
Founded alongside FogPharma in 2016, LifeMine seeks to discover, characterize, and translate into medicine bioactive compounds in fungal genomes.
Gloucester Marine Genomics Institute
In 2013, Verdine founded the nonprofit Gloucester Marine Genomics Institute to study marine genomes for potential therapeutic compounds and to advance fisheries science. He is also the founder and director of the Gloucester Biotechnology Academy, which is providing technical training in the life science industry to high school graduates in Gloucester, MA, USA.
Warp Drive Bio
In 2012, Verdine founded Warp Drive Bio with cofounders George Church and James Wells. The company maps the genomes of soil-dwelling microbes in the search for potential treatments for drug-resistant ailments. In 2013, Verdine became full-time CEO of Warp Drive Bio, then handed the CEO position to Lawrence Reid in 2016 in order to found two new startups, FogPharma and LifeMine.
Wave Life Sciences
Verdine is the Chairman of the Board of Wave Life Sciences, which uses synthetic chemistry to develop nucleic acid therapeutic candidates.
Venture capital
Verdine has worked in the venture capital industry as a Venture Partner with Apple Tree Partners, Third Rock Ventures, and WuXi Healthcare Ventures, and as a Special Advisor to Texas Pacific Group.
Scientific consultation
Verdine is a member of the Board of Scientific Consultants of the Memorial Sloan-Kettering Cancer Center, the Board of Scientific Advisors of the National Cancer Institute, the Advisory Board of the Spinal Muscular Atrophy Foundation, and the Board of Reviewers at the Bill & Melinda Gates Foundation.
Recent recognition
2019 - Honorary Doctor of Science Degree, Clarkson University
2019 - Herman S. Bloch Award for Scientific Excellence in Industry, University of Chicago
2011 - American Association for Cancer Research Award for Excellence in Chemistry in Cancer Research
2007 - Nobel Laureate Signature Award for Graduate Education in Chemistry, with Anirban Banerjee
2005 - Royal Society of Chemistry Nucleic Acid Award Lecture, Responses to DNA Damage conference
References
1959 births
Living people
21st-century American chemists
Harvard Medical School faculty
People from Somers Point, New Jersey
Saint Joseph's University alumni
Columbia Graduate School of Arts and Sciences alumni
Chemical biology | Gregory L. Verdine | Chemistry,Biology | 1,426 |
1,653,015 | https://en.wikipedia.org/wiki/Viterbi%20decoder | A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been
encoded using a convolutional code or trellis code.
There are other algorithms for decoding a convolutionally encoded stream (for example, the Fano algorithm). The Viterbi algorithm is the most resource-consuming, but it performs maximum-likelihood decoding. It is most often used for decoding convolutional codes with constraint lengths k≤3, but values up to k=15 are used in practice.
Viterbi decoding was developed by Andrew J. Viterbi and published in the 1967 paper "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm" (IEEE Transactions on Information Theory).
There are both hardware (in modems) and software implementations of a Viterbi decoder.
Viterbi decoding is used in the iterative Viterbi decoding algorithm.
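To make the procedure concrete, here is a minimal, illustrative hard-decision Viterbi decoder in Python for the classic rate-1/2, K = 3 convolutional code with generator polynomials 7 and 5 (octal). This is a sketch of the general algorithm under those assumptions, not a model of any particular hardware or library implementation:

```python
# Minimal hard-decision Viterbi decoder for the rate-1/2, K=3
# convolutional code with generators 7 and 5 (octal). Illustrative
# sketch only; real decoders add traceback windows, soft metrics,
# and puncturing support.

G = (0b111, 0b101)   # generator polynomials (octal 7 and 5)
K = 3                # constraint length -> 2^(K-1) = 4 trellis states

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    """Encode a bit list; two output bits per input bit."""
    state = 0
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state          # shift-register contents
        out.extend(parity(reg & g) for g in G)
        state = reg >> 1                      # drop the oldest bit
    return out

def viterbi_decode(received):
    n_states = 1 << (K - 1)
    INF = float("inf")
    metrics = [0] + [INF] * (n_states - 1)    # start in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metrics[state] == INF:
                continue                       # unreachable state
            for b in (0, 1):                   # hypothesize input bit
                reg = (b << (K - 1)) | state
                expected = [parity(reg & g) for g in G]
                # Hamming distance = hard-decision branch metric
                bm = sum(e != x for e, x in zip(expected, r))
                nxt = reg >> 1
                m = metrics[state] + bm        # "add"
                if m < new_metrics[nxt]:       # "compare-select"
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(n_states), key=lambda s: metrics[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0]          # includes two flush (tail) zeros
coded = encode(msg)
coded[3] ^= 1                      # inject a single channel bit error
print(viterbi_decode(coded))       # expected to recover [1, 0, 1, 1, 0, 0]
```

The inner loop is exactly the Add-Compare-Select operation described in the hardware section below; hardware implementations unroll it into one ACS unit per trellis state.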
Hardware implementation
A hardware Viterbi decoder for basic (not punctured) code usually consists of the following major blocks:
Branch metric unit (BMU)
Path metric unit (PMU)
Traceback unit (TBU)
Branch metric unit (BMU)
A branch metric unit's function is to calculate branch metrics, which are normed distances between every possible symbol in the code alphabet, and the received symbol.
There are hard decision and soft decision Viterbi decoders. A hard decision Viterbi decoder receives a simple bitstream on its input, and a Hamming distance is used as a metric. A soft decision Viterbi decoder receives a bitstream containing information about the reliability of each received symbol. For instance, in a 3-bit encoding, the received values can range from "strongest 0" through "weakest 0" and "weakest 1" up to "strongest 1", with the extra bits expressing the confidence of the decision.
This is, of course, not the only way to encode reliability data.
The squared Euclidean distance is used as a metric for soft decision decoders.
Path metric unit (PMU)
A path metric unit summarizes branch metrics to get metrics for 2^(K−1) paths, where K is the constraint length of the code, one of which can eventually be chosen as optimal. Every clock cycle it makes decisions, discarding the knowingly nonoptimal paths. The results of these decisions are written to the memory of a traceback unit.
The core elements of a PMU are ACS (Add-Compare-Select) units. The way in which they are connected between themselves is defined by a specific code's trellis diagram.
Since branch metrics are always non-negative, there must be an additional circuit preventing metric counters from overflow. An alternate method that eliminates the need to monitor the path metric growth is to allow the path metrics to "roll over"; to use this method it is necessary to make sure the path metric accumulators contain enough bits to prevent the "best" and "worst" values from coming within 2^(n−1) of each other. The compare circuit is essentially unchanged.
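A minimal sketch of the roll-over comparison trick, assuming 8-bit accumulators and that all live metrics stay within half the accumulator range (parameters are illustrative):

```python
# Wrap-safe path-metric comparison: with n-bit accumulators and live
# metrics guaranteed to span less than 2^(n-1), "a < b" is simply the
# sign of the unsigned difference interpreted as a signed number.
N_BITS = 8
MASK = (1 << N_BITS) - 1

def metric_less_than(a, b):
    """True if metric a < metric b, even if one has wrapped past 2^N."""
    diff = (a - b) & MASK
    return diff >= (1 << (N_BITS - 1))   # i.e. signed(diff) < 0

assert metric_less_than(5, 10)           # ordinary case
assert not metric_less_than(2, 250)      # 2 here means 258, wrapped
```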
It is possible to monitor the noise level on the incoming bit stream by monitoring the rate of growth of the "best" path metric. A simpler way to do this is to monitor a single location or "state" and watch it pass "upward" through say four discrete levels within the range of the accumulator. As it passes upward through each of these thresholds, a counter is incremented that reflects the "noise" present on the incoming signal.
Traceback unit (TBU)
The traceback unit restores an (almost) maximum-likelihood path from the decisions made by the PMU. Since it does this in the reverse direction, a Viterbi decoder comprises a FILO (first-in, last-out) buffer to reconstruct the correct order.
Note that a straightforward traceback implementation requires operating at double frequency. There are some tricks that eliminate this requirement.
Implementation issues
Quantization for soft decision decoding
In order to fully exploit benefits of soft decision decoding, one needs to quantize the input signal properly. The optimal quantization zone width is defined by the following formula:
where N0 is the noise power spectral density, and k is the number of bits used for the soft decision.
Euclidean metric computation
The squared norm (2-norm) distance between the received and the actual symbols in the code alphabet may be further simplified into a linear sum/difference form, which makes it less computationally intensive.
Consider a rate-1/2 convolutional code, which generates 2 bits (00, 01, 10 or 11) for every input bit (1 or 0). These Return-to-Zero signals are translated into a Non-Return-to-Zero form, with each bit mapped to ±1.
Each received symbol may be represented in vector form as vr = {r0, r1}, where r0 and r1 are soft decision values, whose magnitudes signify the joint reliability of the received vector, vr.
Every symbol in the code alphabet may, likewise, be represented by the vector vi = {±1, ±1}.
The actual computation of the Euclidean distance metric is:

||vr − vi||^2 = ||vr||^2 − 2 (vr · vi) + ||vi||^2
Each square term is a normed distance, depicting the energy of the symbol. For example, the energy of the symbol vi = {±1, ±1} may be computed as

||vi||^2 = (±1)^2 + (±1)^2 = 2
Thus, the energy term of all symbols in the code alphabet is constant (at (normalized) value 2).
The Add-Compare-Select (ACS) operation compares the metric distance between the received symbol vr and any 2 symbols in the code alphabet whose paths merge at a node in the corresponding trellis, vi(0) and vi(1). This is equivalent to comparing

||vr||^2 − 2 (vr · vi(0)) + ||vi(0)||^2

and

||vr||^2 − 2 (vr · vi(1)) + ||vi(1)||^2
But, from above we know that the energy of vi is constant (equal to the (normalized) value of 2), and the energy of vr is the same in both cases. This reduces the comparison to a minima function between the two (middle) dot product terms,

min(−2 (vr · vi(0)), −2 (vr · vi(1)))

since a min operation on these negative quantities may be interpreted as an equivalent max operation on the positive dot products, max(vr · vi(0), vr · vi(1)).
Each dot product term may be expanded as

vr · vi = (±r0) + (±r1)

where the signs of each term depend on the symbols, vi(0) and vi(1), being compared. Thus, the squared Euclidean metric distance calculation to compute the branch metric may be performed with a simple add/subtract operation.
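A tiny illustrative check of this simplification (the received soft values below are arbitrary assumptions):

```python
# For antipodal symbols vi in {-1, +1}^2, comparing squared Euclidean
# distances to a received soft pair (r0, r1) reduces to comparing dot
# products, i.e. simple adds/subtracts.
import itertools

r0, r1 = 0.9, -0.4                       # received soft values (assumed)
symbols = list(itertools.product((-1, 1), repeat=2))

euclid = {s: (r0 - s[0])**2 + (r1 - s[1])**2 for s in symbols}
dot = {s: r0 * s[0] + r1 * s[1] for s in symbols}

# The symbol with minimum Euclidean distance is exactly the one with
# maximum dot product, since euclid(s) = const - 2 * dot(s).
assert min(euclid, key=euclid.get) == max(dot, key=dot.get)
```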
Traceback
The general approach to traceback is to accumulate path metrics for up to five times the constraint length (5 (K - 1)), find the node with the largest accumulated cost, and begin traceback from this node.
The commonly used rule of thumb of a truncation depth of five times the memory (constraint length K-1) of a convolutional code is accurate only for rate 1/2 codes. For an arbitrary rate, an accurate rule of thumb is 2.5(K - 1)/(1−r) where r is the code rate.
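For example, for the widely used K = 7, rate-1/2 code, the general rule reduces to the familiar one (a worked check, not from the source):

```latex
\text{depth} \approx \frac{2.5\,(K-1)}{1-r}
  = \frac{2.5 \times 6}{1 - 0.5}
  = 30
  = 5\,(K-1).
```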
However, computing the node which has accumulated the largest cost (either the largest or smallest integral path metric) involves finding the maxima or minima of several (usually 2^(K−1)) numbers, which may be time consuming when implemented on embedded hardware systems.
Most communication systems employ Viterbi decoding involving data packets of fixed sizes, with a fixed bit/byte pattern either at the beginning or/and at the end of the data packet. By using the known bit/byte pattern as reference, the start node may be set to a fixed value, thereby obtaining a perfect Maximum Likelihood Path during traceback.
Limitations
A physical implementation of a Viterbi decoder will not yield an exact maximum-likelihood stream due to quantization of the input signal, branch and path metrics, and finite traceback length. Practical implementations do approach within 1 dB of the ideal.
The output of a Viterbi decoder, when decoding a message damaged by an additive Gaussian channel, has errors grouped in error bursts.
Single-error-correcting codes alone can't correct such bursts, so either the convolutional code and the Viterbi decoder must be designed powerful enough to drive down errors to an acceptable rate, or burst error-correcting codes must be used.
Punctured codes
A hardware Viterbi decoder for punctured codes is commonly implemented in the following way:
A depuncturer, which transforms the input stream into a stream that looks like the original (non-punctured) stream, with ERASE marks at the places where bits were erased.
A basic Viterbi decoder understanding these ERASE marks (that is, not using them for branch metric calculation).
Software implementation
One of the most time-consuming operations is the ACS butterfly, which is usually implemented using assembly language and appropriate instruction set extensions (such as SSE2) to speed up the decoding time.
Applications
The Viterbi decoding algorithm is widely used in the following areas:
Radio communication: digital TV (ATSC, QAM, DVB-T, etc.), radio relay, satellite communications, PSK31 digital mode for amateur radio.
Decoding trellis-coded modulation (TCM), the technique used in telephone-line modems to squeeze high spectral efficiency out of 3 kHz-bandwidth analog telephone lines.
Computer storage devices such as hard disk drives.
Automatic speech recognition
References
External links
Details on Viterbi decoding, as well as a bibliography.
Viterbi algorithm explanation with the focus on hardware implementation issues.
r=1/6 k=15 coding for the Cassini mission to Saturn.
Online Generator of optimized software Viterbi decoders (GPL).
GPL Viterbi decoder software for four standard codes.
Description of a k=24 Viterbi decoder, believed to be the largest ever in practical use.
Generic Viterbi decoder hardware (GPL).
Data transmission
Error detection and correction | Viterbi decoder | Engineering | 2,010 |
3,378,926 | https://en.wikipedia.org/wiki/CRAC-II | CRAC-II is both a computer code (titled Calculation of Reactor Accident Consequences) and the 1982 report of the simulation results performed by Sandia National Laboratories for the Nuclear Regulatory Commission. The report is sometimes referred to as the CRAC-II report because it is the computer program used in the calculations, but the report is also known as the 1982 Sandia Siting Study or as NUREG/CR-2239. The computer program MACCS2 has since replaced CRAC-II for the consequences of radioactive release.
CRAC-II has been declared to be obsolete and will be replaced by the State-of-the-Art Reactor Consequence Analyses study.
The CRAC-II simulations calculated the possible consequences of a worst-case accident under worst-case conditions (a so-called "class-9 accident") for several different U.S. nuclear power plants. In the Sandia Siting Study, the Indian Point Energy Center was calculated to have the largest possible consequences for an SST1 (spectrum of source terms) release, with estimated maximum possible casualty numbers of around 50,000 deaths, 150,000 injuries, and property damage of $274 billion to $314 billion (based on figures at the time of the report in 1982). The Sandia Siting Study, however, is commonly misused as a risk analysis, which it is not. It is a sensitivity analysis of different amounts of radioactive releases, and an SST1 release is now generally considered not a credible accident (see below).
Another significant report is the 1991 NUREG-1150 calculations, which is a more-rigorous risk assessment of five U.S. Nuclear Power Plants.
Followup study
As the NRC was preparing NUREG-1437, Supplement 56, "Generic Environmental Impact Statement for License Renewal of Nuclear Plants Supplement 56 Regarding Fermi Nuclear Power Plant", it solicited comments on the proposed report. In response to comments specifically mentioning the CRAC-II study, the NRC wrote:
"The U.S. Nuclear Regulatory Commission has devoted considerable research resources, both in the past and currently, to evaluating accidents and the possible public consequences of severe reactor accidents. The NRC's most recent studies have confirmed that early research into the topic led to extremely conservative consequence analyses that generate
invalid results for attempting to quantify the possible effects of very unlikely severe accidents. In particular, these previous studies did not reflect current plant design, operation, accident management strategies or security enhancements. They often used unnecessarily conservative estimates or assumptions concerning possible damage to the reactor core, the possible radioactive contamination that could be released, and possible failures of the reactor vessel and containment buildings. These previous studies also failed to realistically model the effect of emergency preparedness. The NRC performed a state-of-the-art assessment of possible
severe accidents as part of its ongoing effort to evaluate the consequences of such accidents."
This study was published as "NUREG–1935, State-of-the-Art Reactor Consequence Analyses Report" in 2012.
See also
Nuclear accidents in the United States
Nuclear safety in the U.S.
Nuclear power
RELAP5-3D
WASH-740 (1957)
WASH-1400 (1975)
NUREG-1150 (1991)
References
Nuclear safety and security
Nuclear Regulatory Commission | CRAC-II | Physics | 671 |
41,528,945 | https://en.wikipedia.org/wiki/Metachirality | Metachirality is a stronger form of chirality.
It applies to objects or systems that are chiral (not identical to their mirror image) and where, in addition, their mirror image has a symmetry group that differs from the symmetry group of the original object or system.
Many familiar chiral objects, like the capital letter 'Z' embedded in the plane, are not metachiral.
The symmetry group of the capital letter 'Z' embedded in the plane consists of the identity transformation and a rotation over 180˚ (a half turn).
In this case, the mirror image has the same symmetry group.
In particular, asymmetric objects (that only have the identity transformation as symmetry, like a human hand) are not metachiral,
since the mirror image is also asymmetric.
In general, two-dimensional objects and bounded three-dimensional objects are not metachiral.
An example of a metachiral object is an infinite helical staircase. A helix in 3D has a handedness (either left or right, like screw thread), whereby it differs from its mirror image. An infinite helical staircase, however, does have symmetries: screw operations, that is, combinations of a translation and a rotation. The symmetry group of the mirror image of an infinite helical staircase also contains screw operations, but they are of the opposite handedness and, hence, the symmetry groups differ. Note, however, that these symmetry groups are isomorphic.
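A small numerical sketch can make the handedness reversal concrete (the matrices and conventions below are illustrative assumptions, not from the text): conjugating a screw operation by a mirror reflection yields a screw operation with the same translation but the opposite rotation sense.

```python
import numpy as np

def screw(theta, pitch):
    """4x4 homogeneous matrix: rotate by theta about the z-axis and
    translate by pitch along it (a screw operation)."""
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    m[2, 3] = pitch
    return m

mirror = np.diag([1.0, -1.0, 1.0, 1.0])   # reflection across the xz-plane

s = screw(np.pi / 6, 1.0)                 # a symmetry of the staircase
s_mirrored = mirror @ s @ mirror          # the corresponding symmetry of
                                          # the mirror-image staircase

# Same translation, opposite rotation sense: the mirrored staircase's
# symmetry group contains only screws of the other handedness.
print(np.allclose(s_mirrored, screw(-np.pi / 6, 1.0)))   # True
```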
Of the 219 space groups, 11 are metachiral. A nice example of a metachiral spatial structure is the K4 crystal, also known as Triamond, which is featured in the Bamboozle mathematical artwork.
See also
Orientation (mathematics)
Stereochemistry
Right-hand rule
Handedness
Asymmetry
References
Chirality | Metachirality | Physics,Chemistry,Biology | 370 |
30,294,670 | https://en.wikipedia.org/wiki/Parent%20structure | In chemistry, a parent structure is the structure of an unadorned ion or molecule from which derivatives can be visualized. Parent structures underpin systematic nomenclature and facilitate classification. Fundamental parent structures have one or no functional groups and often have various types of symmetry. Benzene () is a chemical itself consisting of a hexagonal ring of carbon atoms with a hydrogen atom attached to each, and is the parent of many derivatives that have substituent atoms or groups replacing one or more of the hydrogens. Some parents are rare or nonexistent themselves, as in the case of porphine, though many simple and complex derivatives are known.
IUPAC definitions
According to the International Union of Pure and Applied Chemistry, the concept of parent structure is closely related to or identical to parent compound, parent name, or simply parent.
Organic parents
These species consist of an unbranched chain of skeletal atoms, or of an unsubstituted monocyclic or polycyclic ring system. Parent structures bearing one or more functional groups that are not specifically denoted by a suffix are called functional parents. Names of parent structures are used in IUPAC nomenclature as the basis for systematic names.
Hydride parents
A parent hydride is a parent structure with one or more hydrogen atoms. Parent hydrides have a defined standard population of hydrogen atoms attached to a skeletal structure. Parent hydrides are used extensively in organic nomenclature, but are also used in inorganic chemistry.
See also
Hydride
Preferred IUPAC name
References
Chemical nomenclature | Parent structure | Chemistry | 315 |
2,093,254 | https://en.wikipedia.org/wiki/Melon%20%28chemistry%29 | In chemistry, melon is a compound of carbon, nitrogen, and hydrogen of still somewhat uncertain composition, consisting mostly of heptazine units linked and closed by amine groups and bridges (, , , etc.). It is a pale yellow solid, insoluble in most solvents.
A careful 2001 study indicates a structure consisting of ten imino-heptazine units connected into a linear chain by amino bridges. However, other researchers are still proposing different structures.
Melon is the oldest known compound with the heptazine core, having been described in the early 19th century. It has been little studied until recently, when it has been recognized as a notable photocatalyst and as a possible precursor to carbon nitride.
History
In 1834 Liebig described the compounds that he named melamine, melam, and melon.
The compound received little attention for a long time, due to its insolubility. In 1937 Linus Pauling showed by x-ray crystallography that the structure of melon and related compounds contained fused triazine rings.
In 1939, C. E. Redemann and others proposed a structure consisting of 2-amino-heptazine units connected by amine bridges through carbons 5 and 8. The structure was revised in 2001 by T. Komatsu, who proposed a tautomeric structure.
Preparation
The compound can be extracted from the solid residue of the thermal decomposition of ammonium thiocyanate at 400 °C. (The thermal decomposition of solid melem, on the other hand, yields a graphite-like C-N material.)
Structure and properties
According to Komatsu, a characterized form of melon consists of oligomers that can be described as condensations of ten units of the melem tautomer with loss of ammonia (NH3). In this structure, 2-imino-heptazine units are connected by amino bridges, from carbon 8 of one unit to nitrogen 4 of the next unit. X-ray diffraction data and other evidence indicate that the oligomer is planar, and that the triangular heptazine cores have alternating orientations.
The crystal structure of melon is orthorhombic, with estimated lattice constants a = 739.6 pm, b = 2092.4 pm and c= 1295.4 pm.
Polymerization and decomposition
Heated to 700 °C, melon converts to a polymer of high molecular weight, consisting of longer chains with the same motif.
Chlorination
Melon can be converted to 2,5,8-trichloroheptazine, a useful reagent for the synthesis of heptazine derivatives.
Applications
Photocatalysis
In 2009, Xinchen Wang and others observed that melon acts as a catalyst for the splitting of water into hydrogen and oxygen, or for converting carbon dioxide back into fuel, using energy from sunlight. It was the first metal-free photocatalyst, and it was seen to enjoy a number of advantages over previous compounds, including low cost of material, simple synthesis, negligible toxicity, and exceptional chemical and thermal stability. The downside is its modest efficiency, which however seems amenable to improvement by doping or nanostructuring.
Carbon nitride precursor
Another wave of interest in melon happened in the 1990s, when theoretical computations suggested that β-C3N4, a hypothetical carbon nitride compound structurally analogous to β-Si3N4, might be harder than diamond. Melon seemed to be a good precursor for another form of the material, "graphitic" carbon nitride or g-C3N4.
See also
Melem
Melam
References
Nitrogen heterocycles
Polymers | Melon (chemistry) | Chemistry,Materials_science | 749 |
37,815,827 | https://en.wikipedia.org/wiki/Astrostatistics | Astrostatistics is a discipline which spans astrophysics, statistical analysis and data mining. It is used to process the vast amount of data produced by automated scanning of the cosmos, to characterize complex datasets, and to link astronomical data to astrophysical theory. Many branches of statistics are involved in astronomical analysis including nonparametrics, multivariate regression and multivariate classification, time series analysis, and especially Bayesian inference. The field is closely related to astroinformatics.
References
Astrophysics
Applied statistics
Data mining
Machine learning | Astrostatistics | Physics,Astronomy,Mathematics,Engineering | 111 |
663,047 | https://en.wikipedia.org/wiki/Cook%E2%80%93Levin%20theorem | In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.
The theorem is named after Stephen Cook and Leonid Levin. The proof is due to Richard Karp, based on an earlier proof (using a different notion of reducibility) by Cook.
An important consequence of this theorem is that if there exists a deterministic polynomial-time algorithm for solving Boolean satisfiability, then every NP problem can be solved by a deterministic polynomial-time algorithm. The question of whether such an algorithm for Boolean satisfiability exists is thus equivalent to the P versus NP problem, which is still widely considered the most important unsolved problem in theoretical computer science.
Contributions
The concept of NP-completeness was developed in the late 1960s and early 1970s in parallel by researchers in North America and the Soviet Union.
In 1971, Stephen Cook published his paper "The complexity of theorem proving procedures" in conference proceedings of the newly founded ACM Symposium on Theory of Computing. Richard Karp's subsequent paper, "Reducibility among combinatorial problems", generated renewed interest in Cook's paper by providing a list of 21 NP-complete problems. Karp also introduced the notion of completeness used in the current definition of NP-completeness (i.e., by polynomial-time many-one reduction). Cook and Karp each received a Turing Award for this work.
The theoretical interest in NP-completeness was also enhanced by the work of Theodore P. Baker, John Gill, and Robert Solovay, who showed in 1975 that solving NP problems in certain oracle machine models requires exponential time. That is, there exists an oracle A such that, for all subexponential deterministic-time complexity classes T, the relativized complexity class NP^A is not a subset of T^A. In particular, for this oracle, P^A ≠ NP^A.
In the USSR, a result equivalent to Baker, Gill, and Solovay's was published in 1969 by M. Dekhtiar. Later Leonid Levin's paper, "Universal search problems", was published in 1973, although it was mentioned in talks and submitted for publication a few years earlier.
Levin's approach was slightly different from Cook's and Karp's in that he considered search problems, which require finding solutions rather than simply determining existence. He provided six such NP-complete search problems, or universal problems.
Additionally he found for each of these problems an algorithm that solves it in optimal time (in particular, these algorithms run in polynomial time if and only if P = NP).
Definitions
A decision problem is in NP if it can be decided by a non-deterministic Turing machine in polynomial time.
An instance of the Boolean satisfiability problem is a Boolean expression that combines Boolean variables using Boolean operators.
Such an expression is satisfiable if there is some assignment of truth values to the variables that makes the entire expression true.
Idea
Given any decision problem in NP, construct a non-deterministic machine that solves it in polynomial time. Then, for each input to that machine, build a Boolean expression that is true exactly when that specific input is passed to the machine, the machine runs correctly, and the machine halts and answers "yes". The expression can then be satisfied if and only if there is a way for the machine to run correctly and answer "yes", so the satisfiability of the constructed expression is equivalent to asking whether or not the machine will answer "yes".
Proof
This proof is based on the one given by Garey & Johnson (1979).
There are two parts to proving that the Boolean satisfiability problem (SAT) is NP-complete. One is to show that SAT is an NP problem. The other is to show that every NP problem can be reduced to an instance of a SAT problem by a polynomial-time many-one reduction.
SAT is in NP because any assignment of Boolean values to Boolean variables that is claimed to satisfy the given expression can be verified in polynomial time by a deterministic Turing machine. (The statements verifiable in polynomial time by a deterministic Turing machine and solvable in polynomial time by a non-deterministic Turing machine are equivalent, and the proof can be found in many textbooks, for example Sipser's Introduction to the Theory of Computation, section 7.3., as well as in the Wikipedia article on NP).
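A minimal sketch of that verification step follows (the DIMACS-style encoding is an assumed convention, not part of the theorem): a single linear scan over the clauses confirms a claimed satisfying assignment, which is exactly the polynomial-time check that places SAT in NP.

```python
def verify_cnf(clauses, assignment):
    """clauses: iterable of clauses, each a list of nonzero ints, where
    literal +v means variable v and -v means its negation.
    assignment: dict mapping each variable to a bool."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
print(verify_cnf(clauses, {1: True, 2: False, 3: True}))   # True
```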
Now suppose that a given problem in NP can be solved by the nondeterministic Turing machine M = (Q, Σ, s, F, δ), where Q is the set of states, Σ is the alphabet of tape symbols, s ∈ Q is the initial state, F ⊆ Q is the set of accepting states, and δ ⊆ ((Q \ F) × Σ) × (Q × Σ × {−1, +1}) is the transition relation. Suppose further that M accepts or rejects an instance of the problem after at most p(n) computation steps, where n is the size of the instance and p is a polynomial function.
For each input, I, specify a Boolean expression B that is satisfiable if and only if the machine M accepts I.
The Boolean expression B uses the variables set out below, where q ∈ Q is a machine state, −p(n) ≤ i ≤ p(n) is a tape position, j ∈ Σ is a tape symbol, and 0 ≤ k ≤ p(n) is the number of a computation step:

T_{i,j,k}: true if tape cell i contains symbol j at step k of the computation (O(p(n)²) variables).
H_{i,k}: true if M's read/write head is at tape cell i at step k of the computation (O(p(n)²) variables).
Q_{q,k}: true if M is in state q at step k of the computation (O(p(n)) variables).
Define the Boolean expression B to be the conjunction of the following sub-expressions, for all −p(n) ≤ i ≤ p(n) and 0 ≤ k ≤ p(n):

T_{i,j,0} for every cell i initially containing symbol j (initial contents of the tape; cells outside the input hold the blank symbol).
Q_{s,0} (initial state of M).
H_{0,0} (initial position of the read/write head).
¬T_{i,j,k} ∨ ¬T_{i,j′,k} for j ≠ j′ (at most one symbol per tape cell).
⋁_{j∈Σ} T_{i,j,k} (at least one symbol per tape cell).
T_{i,j,k} ∧ T_{i,j′,k+1} → H_{i,k} for j ≠ j′ (the tape remains unchanged unless written by the head).
¬Q_{q,k} ∨ ¬Q_{q′,k} for q ≠ q′ (only one state at a time).
¬H_{i,k} ∨ ¬H_{i′,k} for i ≠ i′ (only one head position at a time).
(H_{i,k} ∧ Q_{q,k} ∧ T_{i,σ,k}) → ⋁_{((q,σ),(q′,σ′,d)) ∈ δ} (H_{i+d,k+1} ∧ Q_{q′,k+1} ∧ T_{i,σ′,k+1}) for k < p(n) (possible transitions at step k when the head is at position i).
⋁_{0≤k≤p(n)} ⋁_{f∈F} Q_{f,k} (the machine must finish in an accepting state).
If there is an accepting computation for M on input I, then B is satisfiable by assigning T_{i,j,k}, H_{i,k} and Q_{q,k} their intended interpretations. On the other hand, if B is satisfiable, then there is an accepting computation for M on input I that follows the steps indicated by the assignments to the variables.
There are O(p(n)²) Boolean variables, each encodable in space O(log p(n)). The number of clauses is O(p(n)³), so the size of B is O(log(p(n)) p(n)³). Thus the transformation is certainly a polynomial-time many-one reduction, as required.
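To make the counting concrete, here is a minimal sketch (hypothetical variable numbering, not the article's notation) of generating one clause family from the list above, "at most one symbol per tape cell"; with a fixed alphabet this family alone contributes O(p(n)²) of the O(p(n)³) total clauses.

```python
from itertools import combinations

def at_most_one_symbol(cells, symbols, steps, var):
    """Yield 2-clauses saying no tape cell holds two symbols at once.
    var(i, j, k) returns the positive integer index of variable T_{i,j,k};
    literal -v denotes the negation of variable v."""
    for i in cells:
        for k in steps:
            for j1, j2 in combinations(symbols, 2):
                yield [-var(i, j1, k), -var(i, j2, k)]

# Toy instance: 2 cells, alphabet {0, 1}, 2 steps, densely packed indices.
var = lambda i, j, k: 1 + i * 4 + j * 2 + k
print(list(at_most_one_symbol(range(2), range(2), range(2), var)))
```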
Only the first clause family above (T_{i,j,0}) actually depends on the input string I. The remaining lines depend only on the input length n and on the machine M; they formalize a generic computation of M for up to p(n) steps.
The transformation makes extensive use of the polynomial p(n). As a consequence, the above proof is not constructive: even if M is known, witnessing the membership of the given problem in NP, the transformation cannot be effectively computed, unless an upper bound of M's time complexity is also known.
Complexity
While the above method encodes a non-deterministic Turing machine in complexity O(log(p(n)) p(n)³), the literature describes more sophisticated approaches in complexity O(p(n) log p(n)). The quasilinear result first appeared seven years after Cook's original publication.
The use of SAT to prove the existence of an NP-complete problem can be extended to other computational problems in logic, and to completeness for other complexity classes.
The quantified Boolean formula problem (QBF) involves Boolean formulas extended to include nested universal quantifiers and existential quantifiers for its variables. The QBF problem can be used to encode computation with a Turing machine limited to polynomial space complexity, proving that there exists a problem (the recognition of true quantified Boolean formulas) that is PSPACE-complete. Analogously, dependency quantified boolean formulas encode computation with a Turing machine limited to logarithmic space complexity, proving that there exists a problem that is NL-complete.
Consequences
The proof shows that every problem in NP can be reduced in polynomial time (in fact, logarithmic space suffices) to an instance of the Boolean satisfiability problem. This means that if the Boolean satisfiability problem could be solved in polynomial time by a deterministic Turing machine, then all problems in NP could be solved in polynomial time, and so the complexity class NP would be equal to the complexity class P.
The significance of NP-completeness was made clear by the publication in 1972 of Richard Karp's landmark paper, "Reducibility among combinatorial problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its intractability, are NP-complete.
Karp showed each of his problems to be NP-complete by reducing another problem (already shown to be NP-complete) to that problem. For example, he showed the problem 3SAT (the Boolean satisfiability problem for expressions in conjunctive normal form (CNF) with exactly three variables or negations of variables per clause) to be NP-complete by showing how to reduce (in polynomial time) any instance of SAT to an equivalent instance of 3SAT.
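The clause-splitting step of that reduction is mechanical enough to sketch (this is the standard textbook construction, not Karp's original presentation): a clause with more than three literals is chained through fresh auxiliary variables and is satisfiable exactly when the original clause is.

```python
def to_3sat(clause, fresh):
    """clause: list of integer literals; fresh: first unused variable index.
    (l1 v l2 v ... v lk) becomes (l1 v l2 v y1) ^ (-y1 v l3 v y2) ^ ...
    ^ (-y_{k-3} v l_{k-1} v lk). Returns the 3-literal clauses and the
    next unused variable index."""
    if len(clause) <= 3:
        return [clause], fresh
    out = [[clause[0], clause[1], fresh]]
    for lit in clause[2:-2]:
        out.append([-fresh, lit, fresh + 1])
        fresh += 1
    out.append([-fresh, clause[-2], clause[-1]])
    return out, fresh + 1

print(to_3sat([1, 2, 3, 4, 5], 6))
# ([[1, 2, 6], [-6, 3, 7], [-7, 4, 5]], 8)
```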
Garey and Johnson presented more than 300 NP-complete problems in their book Computers and Intractability: A Guide to the Theory of NP-Completeness, and new problems are still being discovered to be within that complexity class.
Although many practical instances of SAT can be solved by heuristic methods, the question of whether there is a deterministic polynomial-time algorithm for SAT (and consequently all other NP-complete problems) is still a famous unsolved problem, despite decades of intense effort by complexity theorists, mathematical logicians, and others. For more details, see the article P versus NP problem.
References
Theorems in computational complexity theory
Articles containing proofs | Cook–Levin theorem | Mathematics | 1,924 |
44,484,698 | https://en.wikipedia.org/wiki/Tylopilus%20subcellulosus | Tylopilus subcellulosus is a bolete fungus in the family Boletaceae found in Tamaulipas, Mexico, where it grows under oak. It was described as new to science in 1991.
See also
List of North American boletes
References
External links
subcellulosus
Fungi described in 1991
Fungi of Mexico
Fungi without expected TNC conservation status
Fungus species | Tylopilus subcellulosus | Biology | 80 |
73,227,568 | https://en.wikipedia.org/wiki/Narave%20pig | The Narave or Naravé pig is a type of domestic pig native to northern Vanuatu. Narave pigs are pseudohermaphrodite (intersex) male individuals that are kept for ceremonial purposes.
Etymology
The term narave is from Bislama.
Clark (2009) reconstructs Proto-North-Central Vanuatu *raβʷe ‘hermaphrodite pig, intersex pig’. Reflexes documented in Clark (2009) include Mota rawe ‘an hermaphrodite pig, female’; Raga ravwe ‘hermaphrodite (usually of pig)’; Nokuku rawe ‘boar’ (MacDonald 1889), rav ‘intersex pig’ (Clark 2005–2007 field notes); Vara Kiai rave; Tamambo ravue; Sakao e-re ‘intersex pig’; Suñwadaga na-raghwe; Araki dave; Vao na-rav ‘intersex pigs’, bò-rav ‘sow’. François (2021) documents Araki rave [ɾaβe] ‘hermaphrodite pig, of great customary value’.
Endocrinology
In these pigs, deficiency of the cytochrome P450 enzyme 17α-hydroxylase (CYP17A1) causes very low 17α-hydroxypregnenolone levels, leading to pseudohermaphroditism.
Genetics
An analysis of Narave pig mitochondrial DNA by Lum et al. (2006) found that they are descended from Southeast Asian pigs and were brought to the island by Lapita seafarers about 3,000 years ago.
Distribution
Although formerly widespread across northern Vanuatu, intersex pigs are most common in Malo Island during the 21st century. Intersex pigs are kept for use in Nimangki grade-taking ceremonies at Patani village on the northwestern coast of Espiritu Santo. Some intersex pigs are kept on Gaua and northeastern Ambae islands, although they are not as prevalent compared to the early 20th century. Old carvings of intersex pigs on Vao Island can also still be found. An intersex pig was also found on Aore Island by McIntyre (1997).
John R. Baker (1928) reported large numbers of intersex pigs in Espiritu Santo, where he reported to have seen "no fewer than 125 intersexes in one single day at Hog Harbour" during an event when younger and older pigs were brought together from different islands for trading. He also received reports that such pigs were present on the islands of Gaua, Vanua Lava, Mota, Ambae, Ambrym (not in large numbers), and Tongoa. He reports that intersex-producing females were also brought from the other islands to breed intersex pigs on Merelava and Merig islands. Baker's dissections showed that all the animals he looked at had only male internal sexual organs. That is, despite the appearance of their external genitalia, internally there was no question that they were male pigs.
Baker (1928) reported the following names for intersex pigs in various parts of Vanuatu.
rauoē in Mota
rau or rolas in Gaua
ndrē or nerē in the northeast peninsula of Espiritu Santo
ra or ravē in southeast Espiritu Santo
teret in Ambrym
pulpul in Tongoa
In the 21st century, Narave pigs can be found in Avunatari village, Malo Island. There is also another population in Nasulnun village, Espiritu Santo Island, whose residents have recently relocated there from Malo Island, which has an exclusively Tamambo-speaking native population.
References
Pig breeds
Sanma Province
Intersex topics
Bislama words and phrases | Narave pig | Biology | 799 |
5,152,901 | https://en.wikipedia.org/wiki/UN-Water | United Nations Water (UN-Water) is an interagency mechanism that coordinates the efforts of United Nations entities and international organizations working on water and sanitation issues.
"Over 30 UN organizations carry out water and sanitation programmes, reflecting the fact that water issues run through all of the UN's main focus areas. UN-Water's role is to coordinate so that the UN family 'delivers as one' in response to water-related challenges."
The majority of the offices are located in Geneva, Switzerland.
Issues
Water is at the core of sustainable development and is critical for socio-economic development, healthy ecosystems and for human survival itself. Ecosystems across the world, particularly wetlands, are in decline in terms of the services they provide. Ecosystem services worth between US$4.3 trillion and US$20.2 trillion per year were lost between 1997 and 2011 due to land use change.
Water is vital for reducing the global burden of disease and improving the health, welfare and productivity of populations. Today, 2.1 billion people lack access to safely managed drinking water services and 4.5 billion people lack safely managed sanitation services.
Water is also at the heart of adaptation to climate change, serving as the crucial link between the climate system, human society and the environment. Without proper water governance, there is likely to be increased competition for water between sectors and an escalation of water crises of various kinds, triggering emergencies in a range of water-dependent sectors. By 2025, 1.8 billion people are expected to be living in conditions with absolute water scarcity, and two-thirds of the world population could be under water stress conditions.
The physical world of water is closely bound up with the socio-political world, with water often a key factor in managing risks such as famine, migration, epidemics, inequalities and political instability. Since 1900, more than 11 million people have died as a consequence of drought and more than 2 billion have been affected by drought, more than any other physical hazard.
Activities
UN-Water members and partners inform about water and sanitation policies, monitor and report on progress, and coordinate two annual global campaigns on World Water Day and World Toilet Day.
Key policy processes
UN-Water members and partners have helped embed water and sanitation in several agreements, such as the 2030 Agenda for Sustainable Development (which led to the Sustainable Development Goals (SDGs)), the 2015-2030 Sendai Framework for Disaster Risk Reduction, the 2015 Addis Ababa Action Agenda on Financing for Development, and the 2015 Paris Agreement within the UN Convention Framework on Climate Change.
Monitoring and reporting
To meet the needs of the 2030 Agenda, UN-Water launched the Integrated Monitoring Initiative for SDG 6, building on and expanding the experience and lessons learned during the MDG period.
All the custodian agencies of the SDG 6 global indicators have come together under the initiative, which includes the work of WHO/UNICEF Joint Monitoring Programme for Water Supply and Sanitation (JMP), the inter-agency initiative GEMI and UN-Water Global Analysis and Assessment of Sanitation and Drinking-Water (GLAAS).
Campaigns
Every year, UN-Water coordinates the United Nations international observances on freshwater and sanitation: World Water Day and World Toilet Day. Depending on the official UN theme of the campaign, they are led by one or more UN-Water Members and Partners with a related mandate. On World Water Day, UN-Water releases the World Water Development Report focusing on the same topic as the campaign.
Governance
Members and Partners
UN agencies, programmes and funds with a water-related mandate are Members of UN-Water. Partners are international organizations, professional unions, associations or other civil-society groups that are actively involved in water and that have the capacity and willingness to contribute tangibly to the work of UN-Water.
Senior Programme Managers
The UN-Water Senior Programme Managers are the representatives of the UN-Water Members at UN-Water. They provide the overall governance and strategic direction. Collectively, they constitute the highest operational decision-making body of UN-Water.
Chair, Vice-Chair, Secretary
The Chair of UN-Water is nominated among the UN Executive Heads, after consultations in the UN System Chief Executives Board for Coordination. The vice-chair of UN-Water is elected among the UN-Water Senior Programme Managers. The UN DESA Senior Programme Manager serves as Secretary of UN-Water ex-officio.
List of UN-Water Chairs
History
1977: The UN's Intersecretariat Group for Water Resources coordinates UN activities on water and has a three-person secretariat in the UN Department of Economic and Social Affairs' (UN-DESA) predecessor in New York.
1992: The Group is subsumed into the UN Administrative Coordination Committee's (ACC) Subcommittee on Water Resources, which functions for several years before being disbanded. Members continued to meet informally to continue collaborating on water issues.
1993: The UN General Assembly designates 22 March as World Water Day.
2003: UN-Water is established, endorsed by the successor to the ACC: the UN System Chief Executives Board for Coordination.
2005-2015: UN-Water coordinates the 'Water for Life' International Decade for Action, culminating in the Sanitation Drive to 2015, a campaign to meet the 2000-2015 Millennium Development Goals' sanitation target and end open defecation.
2012: The Key Water Indicator Portal is launched, backed by a federated database containing data from several UN agencies.
2013: The UN General Assembly designates 19 November as World Toilet Day.
2014: UN-Water launches its 2014-2020 Strategy in support of the 2030 Agenda.
2015: The 2030 Agenda's Sustainable Development Goals are launched and a dedicated goal on water and sanitation is adopted by the UN General Assembly with input from UN-Water's Technical Advice Unit.
2016: The Integrated Water Monitoring initiative is launched with the aim of reporting on progress on water and sanitation in a coherent and coordinated way.
2017: "Why Waste Water" was the 2017 World Water Day theme emphasizing both the importance of not wasting water, as well as new policy initiatives around waste water.
References
External links
Monitoring SDG 6 on Water and Sanitation website
United Nations webpage on water
United Nations microsite on the Sustainable Development Goals
Water and politics
Water
United Nations Development Programme
Food and Agriculture Organization
United Nations Environment Programme
World Health Organization
Water security | UN-Water | Environmental_science | 1,292 |
18,996,406 | https://en.wikipedia.org/wiki/Neuregulin%204 | Neuregulin 4 also known as NRG4 is a member of the neuregulin protein family which in humans is encoded by the NRG4 gene.
Function
The neuregulins, including NRG4, activate erb-b2 receptor tyrosine kinase 4 (ERBB4), thereby initiating cell signaling through cytosolic tyrosine phosphorylation.
Clinical significance
Loss of expression of NRG4 is frequently seen in advanced bladder cancer while increased NRG4 expression correlates to better survival.
References
Further reading
Neurotrophic factors | Neuregulin 4 | Chemistry | 119 |
1,880,346 | https://en.wikipedia.org/wiki/Eureptilia | Eureptilia ("true reptiles") is one of the two major subgroups of the clade Sauropsida, the other one being Parareptilia. Eureptilia includes Diapsida (the clade containing all modern reptiles and birds), as well as a number of primitive Permo-Carboniferous forms previously classified under Anapsida, in the old (no longer recognised) order "Cotylosauria".
Eureptilia is characterized by the skull having greatly reduced supraoccipital, tabular, and supratemporal bones that are no longer in contact with the postorbital. Aside from Diapsida, the group notably contains Captorhinidae, a diverse and long-lived (Late Carboniferous to Late Permian) clade of initially small carnivores that later evolved into large herbivores. Other primitive eureptiles such as the "protorothyrids" were all small, superficially lizard-like forms that were probably insectivorous. One primitive eureptile, the Late Carboniferous "protorothyrid" Anthracodromeus, is the oldest known climbing tetrapod. Diapsids were the only eureptilian clade to continue beyond the end of the Permian.
Classification
Eureptilia was defined as a stem-based clade, specifically, the most inclusive clade containing Captorhinus aguti and Petrolacosaurus kansensis but not Procolophon trigoniceps, by Tsuji and Müller (2009). The cladogram here was modified after Muller and Reisz (2006):
References
External links
Eureptilia examples of some Permian species
Eureptilia
Reptile taxonomy
Tetrapod unranked clades
Extant Pennsylvanian first appearances
Taxa named by Everett C. Olson
Polyphyletic groups | Eureptilia | Biology | 386 |
44,522,280 | https://en.wikipedia.org/wiki/Plasmonics | Plasmonics or nanoplasmonics refers to the generation, detection, and manipulation of signals at optical frequencies along metal-dielectric interfaces in the nanometer scale. Inspired by photonics, plasmonics follows the trend of miniaturizing optical devices (see also nanophotonics), and finds applications in sensing, microscopy, optical communications, and bio-photonics.
Principles
Plasmonics typically utilizes surface plasmon polaritons (SPPs), which are coherent electron oscillations travelling together with an electromagnetic wave along the interface between a dielectric (e.g. glass, air) and a metal (e.g. silver, gold). The SPP modes are strongly confined to their supporting interface, giving rise to strong light-matter interactions. In particular, the electron gas in the metal oscillates with the electromagnetic wave. Because the moving electrons are scattered, ohmic losses in plasmonic signals are generally large, which limits the signal transfer distances to the sub-centimeter range, unless hybrid optoplasmonic light-guiding networks or plasmon gain amplification are used. Besides SPPs, localized surface plasmon modes supported by metal nanoparticles are also referred to as plasmonic modes. Both kinds of modes are characterized by large momentum values, which enable strong resonant enhancement of the local density of photon states, and can be utilized to enhance weak optical effects of opto-electronic devices.
Motivation and current challenges
An effort is currently being made to integrate plasmonics with electric circuits, or in an electric circuit analog, to combine the size efficiency of electronics with the data capacity of photonic integrated circuits (PIC). While gate lengths of CMOS nodes used for electrical circuits are ever decreasing, the size of conventional PICs is limited by diffraction, thus constituting a barrier for further integration. Plasmonics could bridge this size mismatch between electronic and photonic components. At the same time, photonics and plasmonics can complement each other, since, under the right conditions, optical signals can be converted to SPPs and vice versa.
One of the biggest issues in making plasmonic circuits a feasible reality is the short propagation length of surface plasmons. Typically, surface plasmons travel distances only on the scale of millimeters before damping diminishes the signal. This is largely due to ohmic losses, which become increasingly important the deeper the electric field penetrates into the metal. Researchers are attempting to reduce losses in surface plasmon propagation by examining a variety of materials and geometries, the operating frequency, and their respective properties. New promising low-loss plasmonic materials include metal oxides and nitrides as well as graphene. Key to more design freedom are improved fabrication techniques that can further contribute to reduced losses by reduced surface roughness.
Another foreseeable barrier plasmonic circuits will have to overcome is heat; heat in a plasmonic circuit may or may not exceed the heat generated by complex electronic circuits. It has recently been proposed to reduce heating in plasmonic networks by designing them to support trapped optical vortices, which circulate light power flow through the inter-particle gaps, thus reducing absorption and ohmic heating. In addition to heat, it is also difficult to change the direction of a plasmonic signal in a circuit without significantly reducing its amplitude and propagation length. One clever solution to the issue of bending the direction of propagation is the use of Bragg mirrors to angle the signal in a particular direction, or even to function as splitters of the signal. Finally, emerging applications of plasmonics for thermal emission manipulation and heat-assisted magnetic recording leverage ohmic losses in metals to obtain devices with new enhanced functionalities.
Waveguiding
Optimal plasmonic waveguide designs strive to maximize both the confinement and propagation length of surface plasmons within a plasmonic circuit. Surface plasmon polaritons are characterized by a complex wave vector, with components parallel and perpendicular to the metal-dielectric interface. The imaginary part of the wave vector component is inversely proportional to the SPP propagation length, while its real part defines the SPP confinement. The SPP dispersion characteristics depend on the dielectric constants of the materials comprising the waveguide. The propagation length and confinement of the surface plasmon polariton wave are inversely related. Therefore, stronger confinement of the mode typically results in shorter propagation lengths. The construction of a practical and usable surface plasmon circuit is heavily dependent on a compromise between propagation and confinement. Maximizing both confinement and propagation length helps mitigate the drawbacks of choosing propagation length over confinement and vice versa. Multiple types of waveguides have been created in pursuit of a plasmonic circuit with strong confinement and sufficient propagation length. Some of the most common types include insulator-metal-insulator (IMI), metal-insulator-metal (MIM), dielectric loaded surface plasmon polariton (DLSPP), gap plasmon polariton (GPP), channel plasmon polariton (CPP), wedge surface plasmon polariton (wedge), and hybrid opto-plasmonic waveguides and networks. Dissipation losses accompanying SPP propagation in metals can be mitigated by gain amplification or by combining them into hybrid networks with photonic elements such as fibers and coupled-resonator waveguides. This design can result in the previously mentioned hybrid plasmonic waveguide, which exhibits subwavelength mode on a scale of one-tenth of the diffraction limit of light, along with an acceptable propagation length.
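A minimal numerical sketch of this trade-off follows (the permittivity values are rough illustrative numbers, not measured data): for a single metal-dielectric interface the SPP wave vector is k_spp = k0 · sqrt(εm εd / (εm + εd)); its real part sets the confinement (effective mode index) and its imaginary part sets the propagation length L = 1 / (2 Im k_spp).

```python
import numpy as np

wavelength = 633e-9                  # HeNe wavelength, metres
k0 = 2 * np.pi / wavelength
eps_m = -18.0 + 0.5j                 # assumed rough permittivity of silver at 633 nm
eps_d = 1.0                          # air

k_spp = k0 * np.sqrt(eps_m * eps_d / (eps_m + eps_d))
n_eff = k_spp.real / k0              # > sqrt(eps_d): the mode is bound/confined
L_prop = 1.0 / (2.0 * k_spp.imag)    # 1/e intensity propagation length

print(f"effective index {n_eff:.3f}, propagation length {L_prop * 1e6:.0f} um")
# A lossier metal (larger Im eps_m) shortens L_prop; stronger confinement
# (larger n_eff) likewise comes at the cost of propagation length.
```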
Coupling
The input and output ports of a plasmonic circuit will receive and send optical signals, respectively. To do this, coupling and decoupling of the optical signal to the surface plasmon is necessary. The dispersion relation for the surface plasmon lies entirely below the dispersion relation for light, which means that for coupling to occur additional momentum should be provided by the input coupler to achieve the momentum conservation between incoming light and surface plasmon polariton waves launched in the plasmonic circuit. There are several solutions to this, including using dielectric prisms, gratings, or localized scattering elements on the surface of the metal to help induce coupling by matching the momenta of the incident light and the surface plasmons. After a surface plasmon has been created and sent to a destination, it can then be converted into an electrical signal. This can be achieved by using a photodetector in the metal plane, or decoupling the surface plasmon into freely propagating light that can then be converted into an electrical signal.
Alternatively, the signal can be out-coupled into a propagating mode of an optical fiber or waveguide.
Active devices
The progress made in surface plasmons over the last 50 years has led to the development in various types of devices, both active and passive. A few of the most prominent areas of active devices are optical, thermo-optical, and electro-optical. All-optical devices have shown the capacity to become a viable source for information processing, communication, and data storage when used as a modulator. In one instance, the interaction of two light beams of different wavelengths was demonstrated by converting them into co-propagating surface plasmons via cadmium selenide quantum dots. Electro-optical devices have combined aspects of both optical and electrical devices in the form of a modulator as well. Specifically, electro-optic modulators have been designed using evanescently coupled resonant metal gratings and nanowires that rely on long-range surface plasmons (LRSP). Likewise, thermo-optic devices, which contain a dielectric material whose refractive index changes with variation in temperature, have also been used as interferometric modulators of SPP signals in addition to directional-coupler switches. Some thermo-optic devices have been shown to utilize LRSP waveguiding along gold stripes that are embedded in a polymer and heated by electrical signals as a means for modulation and directional-coupler switches. Another potential field lies in the use of spasers in areas such as nanoscale lithography, probing, and microscopy.
Passive devices
Although active components play an important role in the use of plasmonic circuitry, passive circuits are just as integral and, surprisingly, not trivial to make. Many passive elements such as prisms, lenses, and beam splitters can be implemented in a plasmonic circuit; however, fabrication at the nanoscale has proven difficult and has adverse effects. Significant losses can occur due to decoupling in situations where a refractive element with a different refractive index is used. However, some steps have been taken to minimize losses and maximize compactness of the photonic components. One such step relies on the use of Bragg reflectors, or mirrors composed of a succession of planes, to steer a surface plasmon beam. When optimized, Bragg reflectors can reflect nearly 100% of the incoming power. Another method used to create compact photonic components relies on CPP waveguides, as they have displayed strong confinement with acceptable losses of less than 3 dB within telecommunication wavelengths.
Minimizing loss and maximizing compactness with regard to the use of passive devices, as well as active devices, creates more potential for the use of plasmonic circuits.
See also
Nanophotonics
Metamaterials
Spoof surface plasmon
References
Photonics
Nanoelectronics
Metamaterials
Nanotechnology | Plasmonics | Physics,Chemistry,Materials_science,Engineering | 2,018 |
3,604,289 | https://en.wikipedia.org/wiki/Intelligent%20electronic%20device | In the electric power industry, an intelligent electronic device (IED) is an integrated microprocessor-based controller of power system equipment, such as circuit breakers, transformers and capacitor banks.
Description
IEDs receive data from sensors and power equipment and can issue control commands, such as tripping circuit breakers if they sense voltage, current, or frequency anomalies, or raising/lowering tap positions in order to maintain the desired voltage level. Common types of IEDs include protective relaying devices, tap changer controllers, circuit breaker controllers, capacitor bank switches, recloser controllers, voltage regulators, etc. An IED's behavior is generally controlled by a setting file. The testing of setting files is typically one of the most time-consuming roles of a protection tester.
Digital protective relays are primarily IEDs, using a microprocessor to perform several protective, control and similar functions. A typical IED can contain around 5–12 protection functions, 5–8 control functions controlling separate devices, an autoreclose function, self monitoring function, communication functions etc.
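As a concrete illustration of one such protection function, the sketch below evaluates the IEC standard-inverse time-overcurrent characteristic, t = TMS × 0.14 / ((I/Is)^0.02 − 1); the pickup current and time-multiplier setting are hypothetical stand-ins for values a relay's setting file would supply.

```python
def trip_time(i_measured, i_pickup=400.0, tms=0.2):
    """Seconds until trip for a sustained current (IEC standard-inverse
    curve), or None if the current is below pickup. i_pickup and tms are
    example settings, the kind of values read from a setting file."""
    m = i_measured / i_pickup
    if m <= 1.0:
        return None                          # below pickup: no operation
    return tms * 0.14 / (m ** 0.02 - 1.0)

for amps in (350.0, 800.0, 4000.0):
    print(amps, trip_time(amps))
# Higher fault current trips faster; below 400 A the element never operates.
```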
Some recent IEDs are designed to support the IEC61850 standard for substation automation, which provides interoperability and advanced communications capabilities.
IEDs are used as a more modern alternative to, or a complement of, a setup with traditional remote terminal units (RTUs). Unlike the RTUs, IEDs are integrated with the devices they control and offer a standardized set of measuring and control points that are easier to configure and require less wiring. Most IEDs have a communication port and built-in support for standard communication protocols (DNP3, IEC104 or IEC61850), so they can communicate directly with the SCADA system or a substation programmable logic controller. Alternatively, they can be connected to a substation RTU that acts as a gateway towards the SCADA server.
See also
Power system automation
References
Electric power | Intelligent electronic device | Physics,Engineering | 394 |
43,844,480 | https://en.wikipedia.org/wiki/TAPI-1 | TAPI-1 ( TNF-alpha protease inhibitor I) is a structural analog of TAPI-0 with similar but more stable validness in vitro for the matrix metalloproteinases (MMPs) and TNF- alpha converting enzyme which blocks shedding of several cell surface proteins such as IL-6 and p60 TNF receptor.
References
TNF inhibitors
Hydroxamic acids
2-Naphthyl compounds | TAPI-1 | Chemistry | 90 |
19,202,816 | https://en.wikipedia.org/wiki/Pyridine-N-oxide | Pyridine-N-oxide is the heterocyclic compound with the formula C5H5NO. This colourless, hygroscopic solid is the product of the oxidation of pyridine. It was originally prepared using peroxyacids as the oxidising agent. The compound is used infrequently as an oxidizing reagent in organic synthesis.
Structure
The structure of pyridine-N-oxide is very similar to that of pyridine with respect to the parameters for the ring. The molecule is planar. The N-O distance is 1.34 Å. The C-N-C angle is 124°, 7° wider than in pyridine.
Synthesis
The oxidation of pyridine can be achieved with a number of peracids, including peracetic acid and perbenzoic acid. Oxidation can also be effected by a modified Dakin reaction using a urea-hydrogen peroxide complex and sodium perborate or, using methylrhenium trioxide (CH3ReO3) as catalyst, with sodium percarbonate.
Reactions
Pyridine N-oxide is five orders of magnitude less basic than pyridine: the pKa of protonated pyridine-N-oxide is 0.8. Protonated derivatives are isolable, e.g., [C5H5NOH]Cl. Further demonstrating its (feeble) basicity, pyridine-N-oxide also serves as a ligand in coordination chemistry. A host of transition metal complexes of pyridine-N-oxides are known.
Treatment of the pyridine-N-oxide with phosphorus oxychloride gives 4- and 2-chloropyridines.
Related pyridine-N-oxides
The N-oxides of various pyridines are precursors to useful drugs:
Nicotinic acid N-oxide, derived from nicotinic acid, is a precursor to niflumic acid and pranoprofen.
2,3,5-trimethylpyridine N-oxide is a precursor to the drug omeprazole
2-chloropyridine N-oxide is a precursor to the fungicide zinc pyrithione
Safety
The compound is a skin irritant.
Further reading
discovery of pyridine-N-oxide:
Synthesis of N-oxides from substituted pyridines:
References
Amine oxides
Pyridinium compounds
Oxidizing agents | Pyridine-N-oxide | Chemistry | 528 |
74,451,504 | https://en.wikipedia.org/wiki/Conformal%20linear%20transformation | A conformal linear transformation, also called a homogeneous similarity transformation or homogeneous similitude, is a similarity transformation of a Euclidean or pseudo-Euclidean vector space which fixes the origin. It can be written as the composition of an orthogonal transformation (an origin-preserving rigid transformation) with a uniform scaling (dilation). All similarity transformations (which globally preserve the shape but not necessarily the size of geometric figures) are also conformal (locally preserve shape). Similarity transformations which fix the origin also preserve scalar–vector multiplication and vector addition, making them linear transformations.
Every origin-fixing reflection or dilation is a conformal linear transformation, as is any composition of these basic transformations, including rotations and improper rotations and most generally similarity transformations. However, shear transformations and non-uniform scaling are not. Conformal linear transformations come in two types, proper transformations preserve the orientation of the space whereas improper transformations reverse it.
As linear transformations, conformal linear transformations are representable by matrices once the vector space has been given a basis, composing with each other and transforming vectors by matrix multiplication. The Lie group of these transformations has been called the conformal orthogonal group, the conformal linear transformation group, or the homogeneous similitude group.
Alternatively any conformal linear transformation can be represented as a versor (geometric product of vectors); every versor and its negative represent the same transformation, so the versor group (also called the Lipschitz group) is a double cover of the conformal orthogonal group.
Conformal linear transformations are a special type of Möbius transformations (conformal transformations mapping circles to circles); the conformal orthogonal group is a subgroup of the conformal group.
General properties
Across all dimensions, a conformal linear transformation has the following properties:
Distance ratios are preserved by the transformation.
Given an orthonormal basis, a matrix representing the transformation must have each column of the same magnitude and each pair of columns orthogonal (see the sketch after this list).
The transformation is conformal (angle preserving); in particular orthogonal vectors remain orthogonal after applying the transformation.
The transformation maps concentric k-spheres to concentric k-spheres for every k (circles to circles, spheres to spheres, etc.). In particular, k-spheres centered at the origin are mapped to k-spheres centered at the origin.
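The column test stated above translates directly into a check of the Gram matrix; the sketch below is illustrative only, with the helper name and tolerance chosen for the example.

```python
import numpy as np

def is_conformal(M, tol=1e-12):
    """True iff the columns of M all share one nonzero magnitude and are
    pairwise orthogonal, i.e. M^T M = s * I for some scalar s > 0."""
    M = np.asarray(M, dtype=float)
    G = M.T @ M                              # Gram matrix of the columns
    s = G[0, 0]
    return s > tol and np.allclose(G, s * np.eye(M.shape[1]), atol=tol)

rotation_dilation = [[3.0, -4.0], [4.0, 3.0]]    # proper, scale factor 5
shear = [[1.0, 1.0], [0.0, 1.0]]
print(is_conformal(rotation_dilation), is_conformal(shear))   # True False
```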
By the Cartan–Dieudonné theorem, every orthogonal transformation in an n-dimensional space can be expressed as some composition of up to n reflections. Therefore, every conformal linear transformation can be expressed as the composition of up to n reflections and a dilation. Because every reflection across a hyperplane reverses the orientation of a pseudo-Euclidean space, the composition of any even number of reflections and a dilation by a positive real number is a proper conformal linear transformation, and the composition of any odd number of reflections and a dilation is an improper conformal linear transformation.
Two dimensions
In the Euclidean vector plane, an improper conformal linear transformation is a reflection across a line through the origin composed with a positive dilation. Given an orthonormal basis, it can be represented by a matrix of the form

$$\begin{pmatrix} a & b \\ b & -a \end{pmatrix}, \qquad (a, b) \neq (0, 0)$$

A proper conformal linear transformation is a rotation about the origin composed with a positive dilation. It can be represented by a matrix of the form

$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix}, \qquad (a, b) \neq (0, 0)$$

Alternately, a proper conformal linear transformation can be represented by a complex number of the form

$$z = a + bi, \qquad z \neq 0$$
Practical applications
When composing multiple linear transformations, it is possible to create a shear/skew by composing a parent transform with a non-uniform scale, and a child transform with a rotation. Therefore, in situations where shear/skew is not allowed, transformation matrices must also have uniform scale in order to prevent a shear/skew from appearing as the result of composition. This implies conformal linear transformations are required to prevent shear/skew when composing multiple transformations.
In physics simulations, a sphere (or circle, hypersphere, etc.) is often defined by a point and a radius. Checking if a point overlaps the sphere can therefore be performed by using a distance check to the center. With a rotation or flip/reflection, the sphere is symmetric and invariant, therefore the same check works. With a uniform scale, only the radius needs to be changed. However, with a non-uniform scale or shear/skew, the sphere becomes "distorted" into an ellipsoid, therefore the distance check algorithm does not work correctly anymore.
References
Abstract algebra
Functions and mappings
Transformation (function)
Conformal mappings | Conformal linear transformation | Mathematics | 907 |
11,485,159 | https://en.wikipedia.org/wiki/Fibroblast%20growth%20factor%20receptor%201 | Fibroblast growth factor receptor 1 (FGFR-1), also known as basic fibroblast growth factor receptor 1, fms-related tyrosine kinase-2 / Pfeiffer syndrome, and CD331, is a receptor tyrosine kinase whose ligands are specific members of the fibroblast growth factor family. FGFR-1 has been shown to be associated with Pfeiffer syndrome, and clonal eosinophilias.
Gene
The FGFR1 gene is located on human chromosome 8 at position p11.23 (i.e. 8p11.23), has 24 exons, and codes for a precursor mRNA that is alternatively spliced at exons 8A or 8B, thereby generating two mRNAs coding for two FGFR1 isoforms, FGFR1-IIIb (also termed FGFR1b) and FGFR1-IIIc (also termed FGFR1c), respectively. Although these two isoforms have different tissue distributions and FGF-binding affinities, FGFR1-IIIc appears responsible for most of the functions of the FGFR1 gene, while FGFR1-IIIb appears to have only a minor, somewhat redundant functional role. There are four other members of the FGFR1 gene family: FGFR2, FGFR3, FGFR4, and fibroblast growth factor receptor-like 1 (FGFRL1). The FGFR1 gene, similar to the FGFR2-4 genes, is commonly activated in human cancers as a result of gene duplication, fusion with other genes, and point mutation; these genes are therefore classified as proto-oncogenes.
Protein
Receptor
FGFR1 is a member of the fibroblast growth factor receptor (FGFR) family, which in addition to FGFR1 includes FGFR2, FGFR3, FGFR4, and FGFRL1. FGFR1-4 are cell surface membrane receptors that possess tyrosine kinase activity. A full-length representative of these four receptors consists of an extracellular region composed of three immunoglobulin-like domains which bind their proper ligands, the fibroblast growth factors (FGFs); a single hydrophobic stretch which passes through the cell's surface membrane; and a cytoplasmic tyrosine kinase domain. When bound to FGFs, these receptors form dimers with any one of the four other FGFRs and then cross-phosphorylate key tyrosine residues on their dimer partners. These newly phosphorylated sites bind cytosolic docking proteins such as FRS2, PRKCG and GRB2, which proceed to activate cell signaling pathways that lead to cellular differentiation, growth, proliferation, prolonged survival, migration, and other functions. FGFRL1 lacks a prominent intracellular domain and tyrosine kinase activity; it may serve as a decoy receptor by binding with and thereby diluting the action of FGFs. There are 18 known FGFs that bind to and activate one or more of the FGFRs: FGF1 to FGF10 and FGF16 to FGF23. Fourteen of these (FGF1 to FGF6, FGF8, FGF10, FGF17, and FGF19 to FGF23) bind and activate FGFR1. FGF binding to FGFR1 is promoted by their interaction with cell surface heparan sulfate proteoglycans and, with respect to FGF19, FGF20, and FGF23, the transmembrane protein Klotho.
Cell activation
FGFR1, when bound to a proper FGF, elicits cellular responses by activating signaling pathways that include: a) phospholipase C/PI3K/AKT, b) the Ras subfamily/ERK, c) protein kinase C, d) IP3-induced raising of cytosolic Ca2+, and e) Ca2+/calmodulin-activated elements and pathways. The exact pathways and elements activated depend on the cell type being stimulated plus other factors, such as the stimulated cell's microenvironment and its previous as well as concurrent history of stimulation.
Activation of the gamma isoforms of phospholipase C (PLCγ; see PLCG1 and PLCG2) illustrates one mechanism by which FGFR1 activates cell-stimulating pathways. Following its binding of a proper FGF and subsequent pairing with another FGFR, FGFR1 becomes phosphorylated by its partner FGFR on a highly conserved tyrosine residue (Y766) at its C-terminus. This creates a binding or "docking" site that recruits PLCγ via its tandem nSH2 and cSH2 domains; FGFR1 then phosphorylates PLCγ. Once phosphorylated, PLCγ is relieved of its auto-inhibited structure and becomes active in metabolizing nearby phosphatidylinositol 4,5-bisphosphate (PIP2) to two second messengers, inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG). These second messengers proceed to mobilize other cell-signaling and cell-activating agents: IP3 elevates cytosolic Ca2+ and thereby activates various Ca2+-sensitive elements, while DAG activates various protein kinase C isoforms.
A recent publication on the 2.5 Å crystal structure of PLCγ in complex with the FGFR1 kinase domain (PDB: 3GQI) provides new insights into the molecular mechanism by which FGFR1 recruits PLCγ via its SH2 domains. Figure 1 shows the PLCγ–FGFR1 kinase complex with the cSH2 domain colored red, the nSH2 domain colored blue, and the interdomain linker colored yellow. The structure shows a typical SH2 fold, with two α-helices and three antiparallel β-strands in each SH2 domain. In this complex, the phosphorylated tyrosine (pY766) on the C-terminal tail of the FGFR1 kinase binds preferentially to the nSH2 domain of PLCγ. Phosphorylated tyrosine 766 of the FGFR1 kinase forms hydrogen bonds with the nSH2 domain, and additional hydrogen bonds in the binding pocket help to stabilize the PLCγ–FGFR1 kinase complex. A water molecule mediates the interaction of asparagine 647 (N647) and aspartate 768 (D768), further increasing the binding affinity of the nSH2–FGFR1 kinase complex (Figure 2). Phosphorylation of tyrosines 653 and 654 in the active kinase conformation causes a large conformational change in the activation segment of the FGFR1 kinase: threonine 658 moves by 24 Å between the inactive form (Figure 3) and the activated form (Figure 4). This movement opens the closed conformation of the inactive form to enable substrate binding, and it allows the open conformation to coordinate Mg2+ with AMP-PCP (an ATP analog). In addition, pY653 and pY654 in the active form help to maintain the open conformation of the SH2–FGFR1 kinase complex. However, the mechanism by which phosphorylation at Y653 and Y654 helps to recruit the SH2 domain to the C-terminal tail upon phosphorylation of Y766 remains elusive. Figure 5 shows an overlay of the active and inactive forms of the FGFR1 kinase. Figure 6 shows contact dots on phosphorylated tyrosine residues 653 and 654: green dots mark highly favorable contacts between pY653 and pY654 and surrounding residues, while red spikes mark unfavorable contacts in the activation segment. The figure was generated with the MolProbity extension in PyMOL.
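A figure of this kind can be scripted with PyMOL's Python API. The sketch below is a minimal, hypothetical example only: the chain identifier and residue ranges used for the nSH2, cSH2, and linker selections are placeholders, not the actual 3GQI chain annotations, and would need to be checked against the deposited structure before use.

```python
# Minimal PyMOL scripting sketch (run inside PyMOL or with the pymol module).
# Assumption: chain IDs and residue ranges below are placeholders, NOT verified
# against the 3GQI entry.
from pymol import cmd

cmd.fetch("3gqi", type="pdb")   # download the PLCgamma-FGFR1 kinase complex
cmd.hide("everything")
cmd.show("cartoon")
cmd.color("gray80")             # neutral base color for the whole complex

# Hypothetical selections; replace with the real chain IDs / residue ranges.
cmd.select("nSH2",   "chain B and resi 545-658")
cmd.select("cSH2",   "chain B and resi 668-756")
cmd.select("linker", "chain B and resi 659-667")

cmd.color("blue",   "nSH2")
cmd.color("red",    "cSH2")
cmd.color("yellow", "linker")

# Highlight the phosphotyrosine (PTR) docking residue on the C-terminal tail.
cmd.select("pY766", "resn PTR and resi 766")
cmd.show("sticks", "pY766")
cmd.color("orange", "pY766")

cmd.orient()
cmd.png("plcg_fgfr1_complex.png", width=1200, height=900, dpi=300, ray=1)
```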
The tyrosine kinase region of FGFR1 binds to the nSH2 domain of PLCγ primarily through charged amino acids. An arginine residue (R609) on the nSH2 domain forms a salt bridge to aspartate 755 (D755) on the FGFR1 kinase domain. The acid–base pairs located in the middle of the interface are nearly parallel to each other, indicating a highly favorable interaction. The nSH2 domain makes an additional polar contact through a water-mediated interaction between the nSH2 domain and the FGFR1 kinase region. Arginine 609 (R609) on the FGFR1 kinase also forms a salt bridge to an aspartate residue (D594) on the nSH2 domain; this electrostatic acid–base pairing stabilizes the complex (Figure 7). Previous studies probed the binding affinity of the nSH2–FGFR1 kinase complex by mutating interface residues such as phenylalanine and valine. Results from isothermal titration calorimetry indicated that the binding affinity of the complex decreased 3- to 6-fold, without affecting phosphorylation of the tyrosine residues.
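To put that fold-change in perspective, the loss in binding free energy implied by a 3- to 6-fold weaker dissociation constant can be estimated from the standard relationship ΔΔG = RT·ln(Kd,mut/Kd,wt). The snippet below is a simple illustration of that textbook formula; the fold-change values come from the text, and the assumed temperature (25 °C) is generic rather than taken from the cited experiments.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 298.15    # assumed temperature in kelvin (about 25 degrees C)

def ddg_from_fold_change(fold_weaker: float) -> float:
    """Free-energy penalty (kcal/mol) for a fold_weaker-fold increase in Kd."""
    return R * T * math.log(fold_weaker)

for fold in (3, 6):
    print(f"{fold}-fold weaker binding -> ddG ~ {ddg_from_fold_change(fold):.2f} kcal/mol")
# Prints roughly 0.65 and 1.06 kcal/mol: a modest interface contribution.
```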
Cell inhibition
FGF-induced activation of FGFR1 also stimulates the activation of sprouty proteins SPRY1, SPRY2, SPRY3, and/or SPRY4 which in turn interact with GRB2, SOS1, and/or c-Raf to reduce or inhibit further cell stimulation by activated FGFR1 as well as other tyrosine kinase receptors such as the Epidermal growth factor receptor. These interactions serve as negative feedback loops to limit the extent of cellular activation.
Function
Mice genetically engineered to lack a functional Fgfr1 gene (ortholog of the human FGFR1 gene) die in utero before 10.5 days of gestation. Embryos exhibit extensive deficiencies in the development and organization of mesoderm-derived tissues and the musculoskeletal system. The Fgfr1 gene appears critical for the truncation of embryonic structures and formation of muscle and bone tissues and thereby the normal formation of limbs, skull, outer, middle, and inner ear, neural tube, tail, and lower spine as well as normal hearing.
Clinical significance
Congenital diseases
Hereditary mutations in the FGFR1 gene are associated with various congenital malformations of the musculoskeletal system. Interstitial deletions at human chromosome 8p12–p11, an arginine-to-stop nonsense mutation at FGFR1 amino acid 622 (annotated R622X), and numerous other autosomal dominant inactivating mutations in FGFR1 are responsible for ~10% of cases of Kallmann syndrome. This syndrome is a form of hypogonadotropic hypogonadism associated, in a varying percentage of cases, with anosmia or hyposmia; cleft palate and other craniofacial defects; and scoliosis and other musculoskeletal malformations. An activating mutation in FGFR1, P232R (a proline-to-arginine substitution at the protein's 232nd amino acid), is responsible for the type 1 or classic form of Pfeiffer syndrome, a disease characterized by craniosynostosis and mid-face deformities. A tyrosine-to-cysteine substitution at the 372nd amino acid of FGFR1 (Y372C) is responsible for some cases of osteoglophonic dysplasia. This mutation results in craniosynostosis, mandibular prognathism, hypertelorism, brachydactyly, and inter-phalangeal joint fusion. Other inherited defects associated with FGFR1 mutations likewise involve musculoskeletal malformations: these include Jackson–Weiss syndrome (a proline-to-arginine substitution at amino acid 252, P252R), Antley–Bixler syndrome (an isoleucine-to-threonine substitution at amino acid 300, I300T), and trigonocephaly (the same I300T mutation as in Antley–Bixler syndrome).
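The one-letter annotations used above (R622X, P232R, Y372C, I300T) follow a common shorthand: wild-type residue, position, then the substituted residue, with X denoting a stop codon. The sketch below is an illustrative parser for this shorthand; the notation convention is standard, but the function itself is generic and not taken from any particular library.

```python
import re
from typing import NamedTuple

class ProteinSubstitution(NamedTuple):
    wild_type: str   # one-letter code of the original residue
    position: int    # 1-based residue position in the protein
    mutant: str      # substituted residue, or "X" for a stop codon

# Twenty one-letter amino-acid codes, plus X for a nonsense (stop) mutation.
_PATTERN = re.compile(r"^([ACDEFGHIKLMNPQRSTVWY])(\d+)([ACDEFGHIKLMNPQRSTVWYX])$")

def parse_substitution(annotation: str) -> ProteinSubstitution:
    """Parse shorthand like 'P232R' or 'R622X' into its components."""
    match = _PATTERN.match(annotation.strip().upper())
    if match is None:
        raise ValueError(f"not a recognized substitution annotation: {annotation!r}")
    wt, pos, mut = match.groups()
    return ProteinSubstitution(wt, int(pos), mut)

for ann in ("R622X", "P232R", "Y372C", "I300T"):
    sub = parse_substitution(ann)
    kind = "nonsense (stop)" if sub.mutant == "X" else "missense"
    print(f"{ann}: {sub.wild_type}->{sub.mutant} at residue {sub.position} ({kind})")
```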
Cancers
Somatic mutations and epigenetic changes in the expression of the FGFR1 gene occur in and are thought to contribute to various types of lung, breast, hematological, and other types of cancers.
Lung cancers
Amplification of the FGFR1 gene (four or more gene copies) is present in 9–22% of patients with non-small-cell lung carcinoma (NSCLC). FGFR1 amplification was highly correlated with a history of tobacco smoking and proved to be the single largest prognostic factor in a cohort of patients with this disease. About 1% of patients with other types of lung cancer show amplification of FGFR1.
Breast cancers
Amplification of FGFR1 also occurs in ~10% of estrogen receptor-positive breast cancers, particularly the luminal B subtype. The presence of FGFR1 amplification has been correlated with resistance to hormone-blocking therapy and is a poor prognostic factor in the disease.
Hematological cancers
In certain rare hematological cancers, fusion of FGFR1 with various other genes as a result of chromosomal translocations or interstitial deletions creates genes that encode chimeric FGFR1 fusion proteins. These proteins have continuously active FGFR1-derived tyrosine kinase activity and thereby continuously stimulate cell growth and proliferation. These mutations occur in the early stages of myeloid and/or lymphoid cell lineages and cause or contribute to the development and progression of certain hematological malignancies characterized by increased numbers of circulating blood eosinophils, increased numbers of bone marrow eosinophils, and/or infiltration of eosinophils into tissues. These neoplasms were initially regarded as eosinophilias, hypereosinophilias, myeloid leukemias, myeloproliferative neoplasms, myeloid sarcomas, lymphoid leukemias, or non-Hodgkin lymphomas. Based on their association with eosinophils, their unique genetic mutations, and their known or potential sensitivity to tyrosine kinase inhibitor therapy, they are now classified together as clonal eosinophilias. These mutations are described by connecting the chromosomal site of the FGFR1 gene, 8p11 (i.e. human chromosome 8's short arm [p] at position 11), with that of its partner gene, for example MYO18A at 17q11 (i.e. human chromosome 17's long arm [q] at position 11), to yield the fusion gene annotated as t(8;17)(p11;q11). These FGFR1 mutations, the chromosomal locations of FGFR1's partner genes, and the annotations of the fused genes are given in the following table.
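The t(...)(...) shorthand pairs the two chromosomes involved with their corresponding breakpoint bands, listed in matching order. The sketch below assembles that annotation from its parts; it is an illustrative helper, not code from any cytogenetics package, and it assumes numeric chromosomes for simplicity.

```python
from typing import NamedTuple

class Breakpoint(NamedTuple):
    chromosome: str  # e.g. "8" or "17"
    band: str        # cytogenetic band, e.g. "p11" or "q11"

def translocation(a: Breakpoint, b: Breakpoint) -> str:
    """Format a reciprocal translocation in standard cytogenetic shorthand.

    Convention: the lower-numbered chromosome is listed first, and the
    breakpoint bands appear in the same order as the chromosomes.
    (This sketch assumes numeric chromosomes; X/Y would need special-casing.)
    """
    first, second = sorted((a, b), key=lambda bp: int(bp.chromosome))
    return f"t({first.chromosome};{second.chromosome})({first.band};{second.band})"

# FGFR1 at 8p11 fused with MYO18A at 17q11, as described in the text:
print(translocation(Breakpoint("8", "p11"), Breakpoint("17", "q11")))
# -> t(8;17)(p11;q11)
```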
These cancers are sometimes termed 8p11 myeloproliferative syndromes, based on the chromosomal location of the FGFR1 gene. Translocations involving ZMYM2, CNTRL, and FGFR1OP2 are the most common forms of these 8p11 syndromes. In general, patients with any of these diseases have an average age of 44 and present with fatigue, night sweats, weight loss, fever, lymphadenopathy, and an enlarged liver and/or spleen. They typically show hematological features of the myeloproliferative syndrome, with moderately to greatly elevated levels of blood and bone marrow eosinophils. However, patients bearing: a) ZMYM2-FGFR1 fusion genes often present with T-cell lymphomas that spread to non-lymphoid tissue; b) FGFR1-BCR fusion genes usually present as chronic myelogenous leukemias; c) CEP110 fusion genes may present as chronic myelomonocytic leukemia with tonsillar involvement; and d) FGFR1-BCR or FGFR1-MYST3 fusion genes often present with little or no eosinophilia. Diagnosis requires conventional cytogenetics using fluorescence in situ hybridization (FISH) probes and analysis for FGFR1.
Unlike many other myeloid neoplasms with eosinophilia, such as those caused by platelet-derived growth factor receptor A or platelet-derived growth factor receptor B fusion genes, the myelodysplastic syndromes caused by FGFR1 fusion genes generally do not respond to tyrosine kinase inhibitors, are aggressive and rapidly progressive, and require treatment with chemotherapy agents followed by bone marrow transplantation to improve survival. The tyrosine kinase inhibitor ponatinib has been used as monotherapy and subsequently in combination with intensive chemotherapy to treat the myelodysplasia caused by the FGFR1-BCR fusion gene.
Phosphaturic mesenchymal tumor
Phosphaturic mesenchymal tumor is characterized by a hypervascular proliferation of apparently non-malignant spindled cells associated with a variable amount of 'smudgy' calcified matrix, although a small subset of these tumors exhibits malignant histological features and may behave in a clinically malignant fashion. In a series of 15 patients with this disease, 9 were found to have tumors bearing fusions between the FGFR1 gene and the FN1 gene, located on human chromosome 2 at position q35. The FGFR1-FN1 fusion gene was again identified in 16 of 39 (41%) patients with phosphaturic mesenchymal tumors. The role of the t(2;8)(q35;p11) FGFR1-FN1 fusion gene in this disease is not known.
Rhabdomyosarcoma
Elevated expression of FGFR1 protein was detected in 10 of 10 human rhabdomyosarcoma tumors and 4 of 4 human cell lines derived from rhabdomyosarcoma. The tumor cases included 6 cases of alveolar rhabdomyosarcoma, 2 cases of embryonal rhabdomyosarcoma, and 2 cases of pleomorphic rhabdomyosarcoma. Rhabdomyosarcoma is a highly malignant form of cancer that develops from immature skeletal muscle cell precursors, viz. myoblasts, that have failed to fully differentiate. FGFR1 activation causes myoblasts to proliferate while inhibiting their differentiation, dual effects that may lead these cells to assume a malignant phenotype. The 10 human rhabdomyosarcoma tumors exhibited decreased levels of methylation of CpG islands upstream of the first FGFR1 exon. Methylation of CpG islands commonly silences the expression of adjacent genes, and hypomethylation of the CpG islands upstream of FGFR1 is therefore hypothesized to be at least partly responsible for the over-expression of FGFR1 by, and the malignant behavior of, these rhabdomyosarcoma tumors. In addition, a single rhabdomyosarcoma tumor was found to express co-amplified FOXO1 (at 13q14) and FGFR1 (at 8p11) genes, i.e. t(8;13)(p11;q14), suggesting the formation, amplification, and malignant activity of a chimeric FOXO1-FGFR1 fusion gene in this tumor.
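For context, a CpG island is conventionally defined over a sequence window by the Gardiner-Garden and Frommer criteria: length of at least 200 bp, GC content above 50%, and an observed-to-expected CpG ratio above 0.6, where the ratio is (count of CG dinucleotides times window length) divided by (count of C times count of G). The snippet below is a straightforward illustration of that heuristic, not tied to any particular annotation pipeline.

```python
def cpg_stats(seq: str) -> tuple[float, float]:
    """Return (GC fraction, observed/expected CpG ratio) for a DNA window."""
    seq = seq.upper()
    n = len(seq)
    c = seq.count("C")
    g = seq.count("G")
    cpg = seq.count("CG")  # CG dinucleotide occurrences
    gc_fraction = (c + g) / n if n else 0.0
    obs_exp = (cpg * n) / (c * g) if c and g else 0.0
    return gc_fraction, obs_exp

def looks_like_cpg_island(seq: str) -> bool:
    """Gardiner-Garden & Frommer heuristic: >=200 bp, GC > 50%, Obs/Exp > 0.6."""
    gc, ratio = cpg_stats(seq)
    return len(seq) >= 200 and gc > 0.5 and ratio > 0.6

# Toy usage with a synthetic CG-rich window:
window = "CG" * 150  # 300 bp, maximally CpG-dense
print(looks_like_cpg_island(window))  # True
```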
Other types of cancers
Acquired abnormalities of the FGFR1 gene are found in: ~14% of urinary bladder transitional cell carcinomas (almost all amplifications); ~10% of squamous cell head and neck cancers (~80% amplifications, 20% other mutations); ~7% of endometrial cancers (half amplifications, half other types of mutations); ~6% of prostate cancers (half amplifications, half other mutations); ~5% of ovarian papillary serous cystadenocarcinomas (almost all amplifications); ~5% of colorectal cancers (~60% amplifications, 40% other mutations); ~4% of sarcomas (mostly amplifications); <3% of glioblastomas (fusion of the FGFR1 and TACC1 (8p11) genes); <3% of salivary gland cancers (all amplifications); and <2% of certain other cancers.
FGFR inhibitors
FGFR-targeted drugs exert direct as well as indirect anticancer effects, because FGFRs on cancer cells and on endothelial cells are involved in tumorigenesis and vasculogenesis, respectively. FGFR therapeutics are active because FGF affects numerous features of cancers, such as invasiveness, stemness, and cellular survival. Primary among such drugs are antagonists: small molecules that fit into the ATP-binding pockets of the receptors' tyrosine kinase domains. For FGFR1, numerous such small molecules targeting the tyrosine kinase ATP pocket have been developed; these include dovitinib and brivanib. The table below provides the IC50 values (nanomolar) of small-molecule compounds targeting FGFRs.
FGFR1 mutations in breast and lung cancer resulting from gene over-amplification have been effectively targeted using dovitinib and ponatinib, respectively. Drug resistance is a highly relevant topic in the field of drug development for FGFR targets. FGFR inhibitors can increase tumor sensitivity to cytotoxic anticancer drugs such as paclitaxel and etoposide in human cancer cells, thereby decreasing the anti-apoptotic potential that depends on aberrant FGFR activation. Because FGF signaling inhibition dramatically reduces revascularization, it interferes with angiogenesis, one of the hallmarks of cancer. It also reduces tumor burden in human tumors that depend on autocrine FGF signaling, based on FGF2 upregulation following the common VEGFR-2 therapy for breast cancer. Thus, FGFR1 inhibition can act synergistically with other therapies to cut off clonal resurgence of the cancer by eliminating potential pathways of future relapse.
FGFR inhibitors have been predicted to be effective on relapsed tumors because of the clonal evolution of an FGFR-activated minor subpopulation after therapy targeted to EGFRs or VEGFRs. Because there are multiple mechanisms of action for FGFR inhibitors to overcome drug resistance in human cancer, FGFR-targeted therapy might be a promising strategy for the treatment of refractory cancer.
AZD4547 has undergone a phase II clinical trial in gastric cancer, with some results reported.
Lucitanib is an inhibitor of FGFR1 and FGFR2 and has undergone clinical trials for advanced solid tumors.
Dovitinib (TKI258), an inhibitor of FGFR1, FGFR2, and FGFR3, has had a clinical trial on FGFR-amplified breast cancers.
Interactions
Fibroblast growth factor receptor 1 has been shown to interact with:
FGF1,
FRS2,
Klotho,
GRB14, and
SHB.
See also
Cluster of differentiation
References
Further reading
External links
GeneReviews/NIH/NCBI/UW entry on FGFR-Related Craniosynostosis Syndromes
GeneReviews/NCBI/NIH/UW entry on Kallmann syndrome
Fibroblast growth factor receptor 1 on the Atlas of Genetics and Oncology
Clusters of differentiation
Tyrosine kinase receptors | Fibroblast growth factor receptor 1 | Chemistry | 5,203 |
6,022,246 | https://en.wikipedia.org/wiki/Shotgun%20lipidomics | In lipidomics, the process of shotgun lipidomics (named by analogy with shotgun sequencing) uses analytical chemistry to investigate the biological function, significance, and sequelae of alterations in lipids and protein constituents mediating lipid metabolism, trafficking, or biological function in cells.
Lipidomics has been greatly facilitated by recent advances in, and novel applications of, electrospray ionization mass spectrometry (ESI/MS).
Lipidomics is a research field that studies the pathways and networks of cellular lipids in biological systems (i.e., lipidomes) on a large scale. It involves the identification and quantification of the thousands of cellular lipid molecular species and their interactions with other lipids, proteins, and other moieties in vivo. Investigators in lipidomics examine the structures, functions, interactions, and dynamics of cellular lipids and the dynamic changes that occur during pathophysiologic perturbations. Lipidomic studies play an essential role in defining the biochemical mechanisms of lipid-related disease processes through identifying alterations in cellular lipid metabolism, trafficking and homeostasis. The two major platforms currently used for lipidomic analyses are HPLC-MS and shotgun lipidomics.
History
Shotgun lipidomics was developed by Richard W. Gross and Xianlin Han, by employing ESI intrasource separation techniques. Individual molecular species of most major and many minor lipid classes can be fingerprinted and quantitated directly from biological lipid extracts without the need for chromatographic purification.
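Conceptually, identification in shotgun lipidomics amounts to matching measured mass-to-charge ratios from the ESI mass spectrum directly against the known masses of candidate lipid species within a small tolerance. The sketch below illustrates only that matching step; the lipid names and reference masses are illustrative placeholder values rather than a curated database, and real workflows additionally rely on class-specific fragmentation and internal standards for quantitation.

```python
# Illustrative m/z matching for direct-infusion (shotgun) lipid identification.
# Assumption: the reference masses below are placeholders, not curated values.
REFERENCE_MZ = {
    "PC 34:1 [M+H]+": 760.585,
    "PE 34:1 [M+H]+": 718.538,
    "TG 52:2 [M+NH4]+": 876.802,
}

def match_peaks(peaks_mz: list[float], tol_ppm: float = 10.0) -> dict[float, list[str]]:
    """Assign each measured m/z to any reference species within tol_ppm."""
    assignments: dict[float, list[str]] = {}
    for mz in peaks_mz:
        hits = [name for name, ref in REFERENCE_MZ.items()
                if abs(mz - ref) / ref * 1e6 <= tol_ppm]
        assignments[mz] = hits
    return assignments

print(match_peaks([760.584, 718.540, 500.123]))
# The first two peaks match their references; 500.123 returns no hits.
```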
Advantages
Shotgun lipidomics is fast and highly sensitive, and it can identify hundreds of lipids missed by other methods, all from a much smaller tissue sample, so that specific cells or minute biopsy samples can be examined.
References
Further reading
Gunning for fats
Biochemistry methods | Shotgun lipidomics | Chemistry,Biology | 376 |