id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
58,938,242 | https://en.wikipedia.org/wiki/Giant%20Radio%20Array%20for%20Neutrino%20Detection | The Giant Radio Array for Neutrino Detection (GRAND) is a proposed large-scale detector designed to collect ultra-high-energy cosmic particles, such as cosmic rays, neutrinos and photons with energies exceeding 10¹⁷ eV. The project aims to solve the mystery of their origin and to probe the early stages of the universe itself. The proposal, formulated by an international group of researchers, calls for an array of 200,000 receivers to be placed on mountain ranges around the world.
Overview
The GRAND detector would search for neutrinos, exotic particles emitted by sources such as the black holes at the centers of galaxies. These neutrinos could help astronomers find the source of other energetic particles called ultra-high-energy cosmic rays. When neutrinos reach Earth, they occasionally collide with particles either in the air or in the ground, creating showers of secondary particles. These secondary particles can be picked up by the radio antennas, letting researchers calculate the trajectory of the initial neutrinos and trace them back to their source. The concept was first published in 2017.
The giant radio detector array would comprise 200,000 low-cost antennas, in groups of 10,000, spread out at different locations around the world. This would make it the largest detector in the world. Constructing, installing and networking the 200,000 antennas would cost approximately million, excluding the price of renting the land and manpower.
Principle
The strategy of GRAND is to detect the radio emission coming from particle showers that develop in the terrestrial atmosphere as a result of the interaction of ultra-high-energy (UHE) cosmic rays, gamma rays, and neutrinos. Astrophysical tau neutrinos (ντ) can be detected through extensive air showers (EAS) induced by tau (τ) decays in the atmosphere. The decay of the short-lived tau in the atmosphere generates an EAS that emits measurable electromagnetic emissions up to frequencies of hundreds of MHz. The antennas are foreseen to operate in the 60–200 MHz band to avoid the short-wave background noise at lower frequencies.
Each individual antenna is a simple bow-tie design, featuring three perpendicular bows with an additional vertical arm to sample all three polarization directions. Each antenna is mounted on a single 5-meter-tall pole, and the antennas are spaced 1 km apart in a square grid. If the full array of 200,000 antennas is built, GRAND would reach an all-flavor sensitivity of 4×10⁻¹⁰ GeV cm⁻² s⁻¹ sr⁻¹ above 5×10¹⁷ eV. Because of its sub-degree angular resolution, GRAND will also search for point sources of UHE neutrinos, steady and transient, potentially starting UHE neutrino astronomy and allowing for the discovery and follow-up of large numbers of radio transients, fast radio bursts and giant radio pulses, and for precise studies of the epoch of reionization.
The researchers estimate that GRAND could allow not just the detection of neutrinos, but could also allow a differentiation of the source types, such as galaxy clusters with central sources, fast-spinning newborn pulsars, active galactic nuclei, and afterglows of gamma-ray bursts.
Status
Simulation and experimental work on technological development and background-rejection strategies is ongoing. Phase one, called GRANDProto35, includes 35 antennas and 24 scintillators deployed in the Tian Shan mountains in China. If a pulse is observed simultaneously in the signals from three or more scintillators, the signals are recorded. As of October 2018, GRANDProto35 was in the commissioning phase. So far, the system achieves 100% detection efficiency for trigger rates up to .
The following step, planned for 2020, is a dedicated setup called GRANDProto300, covering an area of . The baseline layout is a square grid with a 1 km inter-antenna spacing, just as for the later stages. Because GRANDProto300 will not be large enough to detect cosmogenic neutrinos, its viability will instead be tested using extensive air showers initiated by very inclined cosmic rays, providing an opportunity to do cosmic-ray science. Candidate sites are in the Chinese provinces of Xinjiang, Inner Mongolia, Yunnan, and Gansu. If funded, the later phases would build GRAND10k in 2025 and finally GRAND200k (200,000 receivers) in the 2030s.
See also
High-energy astronomy
List of neutrino experiments
Multi-messenger astronomy
Neutrino astronomy
References
Astronomical instruments
Observational astronomy
Neutrino astronomy
Neutrino observatories
Neutrinos
Particle detectors | Giant Radio Array for Neutrino Detection | [
"Astronomy",
"Technology",
"Engineering"
] | 942 | [
"Neutrino astronomy",
"Observational astronomy",
"Measuring instruments",
"Particle detectors",
"Astronomical instruments",
"Astronomical sub-disciplines"
] |
58,944,573 | https://en.wikipedia.org/wiki/Centro%20Nacional%20de%20Aceleradores | The Centro Nacional de Aceleradores (CNA) is the centre for particle accelerators in Spain and is based in Seville. It was created in 1997.
It combines the efforts of the University of Seville, the Regional Government of Andalusia and the Spanish Higher Council for Scientific Research. It is located in the Cartuja 93 Science and Technology Park.
It has three different types of ion accelerators (a 3 MV Van de Graaff tandem; a cyclotron providing 18 MeV protons and 9 MeV deuterons; and a 1 MV Cockcroft–Walton tandem used as a mass spectrometer) for studies in various fields. In addition, it features a PET/CT scanner for humans, a new carbon-14 dating system (the MICADAS) and a 60Co irradiator.
References
Particle accelerators
Seville
University of Seville
Andalusia | Centro Nacional de Aceleradores | [
"Physics"
] | 181 | [
"Particle physics stubs",
"Particle physics"
] |
58,946,268 | https://en.wikipedia.org/wiki/Zimmer%27s%20conjecture | Zimmer's conjecture is a statement in mathematics "which has to do with the circumstances under which geometric spaces exhibit certain kinds of symmetries." It is named after the mathematician Robert Zimmer. The conjecture states that certain kinds of symmetries (specifically, those of higher-rank lattices) that can exist in higher dimensions cannot exist in lower dimensions.
In 2017, the conjecture was proven by Aaron Brown and Sebastián Hurtado-Salazar of the University of Chicago and David Fisher of Indiana University.
References
Symmetry
Conjectures that have been proved | Zimmer's conjecture | [
"Physics",
"Mathematics"
] | 110 | [
"Geometry",
"Conjectures that have been proved",
"Mathematical problems",
"Mathematical theorems",
"Symmetry"
] |
51,199,133 | https://en.wikipedia.org/wiki/Claudio%20Luchinat | Claudio Luchinat (born February 15, 1952, in Florence) is an Italian chemist. He is the author of about 550 publications in bioinorganic chemistry, NMR and structural biology, and of four books. According to Google Scholar, his h-index is 90 and his papers have been cited more than 33,000 times.
He earned a PhD in Chemistry from the University of Florence. He was full professor of Chemistry at the University of Bologna (1986–96).
He is currently full professor of Chemistry at the University of Florence (1996–, CERM and Department of Chemistry). He is a member of the Italian Chemical Society, the New York Academy of Sciences, and the American Association for the Advancement of Science.
References
21st-century Italian chemists
Bioinorganic chemistry
University of Florence alumni
1952 births
Living people | Claudio Luchinat | [
"Chemistry",
"Biology"
] | 176 | [
"Biochemistry",
"Bioinorganic chemistry"
] |
38,328,632 | https://en.wikipedia.org/wiki/Incoherent%20broad-band%20cavity-enhanced%20absorption%20spectroscopy | Incoherent broad-band cavity-enhanced absorption spectroscopy (IBBCEAS), sometimes called broadband cavity-enhanced extinction spectroscopy (IBBCEES), measures the transmission of light intensity through a stable optical cavity consisting of high-reflectance mirrors (typically R > 99.9%). The technique is realized using incoherent sources of radiation, e.g. xenon arc lamps, LEDs or supercontinuum (SC) lasers, hence the name.
Typically in IBBCEAS, the wavelength selection of the transmitted light takes place after the cavity by either dispersive or interferometric means. The light is either directly focused onto the entrance slit of a monochromator and imaged onto a charge-coupled device (CCD) array via a dispersive optical element (e.g. a diffraction grating), or imaged onto the entrance aperture of a conventional interferometer. The spectrum is reconstructed by taking the Fourier transform of the recorded interferogram.
Similar to other cavity-enhanced spectroscopic techniques, in IBBCEAS the transmission signal strength is measured with and without the absorber of interest present inside the cavity (I(λ) and I0(λ) respectively). From the ratio of the wavelength-dependent transmitted intensities, the effective reflectivity of the mirrors Reff(λ) and the sample path length per pass d inside the cavity, the sample's extinction coefficient α(λ) is calculated as:

α(λ) = (1/d) (I0(λ)/I(λ) − 1) (1 − Reff(λ))
The sensitivity (smallest achievable α for a given sample) increases for large mirror reflectivities and large path lengths in the cavity; it is maximal if d equals the cavity length. (1 − Reff) includes all unspecified losses per pass (e.g. scattering or diffraction losses) other than the losses due to the limited reflectivity of the cavity mirrors. Note that although the technique is often used for studying absorption, total light extinction α is retrieved, so the technique measures the sum of absorption and scattering.
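As an illustration of this working equation, here is a minimal Python sketch applied to synthetic spectra. All values (wavelength grid, Reff = 0.9995, d = 90 cm, detector counts, the Gaussian toy band) are assumed for demonstration and do not come from a real instrument:

```python
import numpy as np

# Assumed instrument parameters (illustrative only)
wl = np.linspace(620.0, 680.0, 500)        # wavelength grid, nm
Reff = np.full_like(wl, 0.9995)            # assumed effective mirror reflectivity
d = 90.0                                   # sample path length per pass, cm

# Synthetic spectra: I0 = empty cavity, I = cavity containing a toy absorber
I0 = 1.0e6 * np.ones_like(wl)                              # detector counts
alpha_true = 1.0e-8 * np.exp(-((wl - 650.0) / 5.0) ** 2)   # extinction, cm^-1
I = I0 / (1.0 + alpha_true * d / (1.0 - Reff))             # forward cavity model

# IBBCEAS working equation: recover alpha(lambda) from the measured ratio
alpha = (I0 / I - 1.0) * (1.0 - Reff) / d                  # equals alpha_true
```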
The advantages of IBBCEAS include:
High sensitivity, experimental simplicity
High temporal resolution
Simultaneous detection of multiple species due to the wide spectral coverage
No mode matching involved as in some Cavity Ring Down Spectroscopy applications (CRDS)
Applicable to solids, liquids, gases and plasmas.
Cost effective
The disadvantages include:
Unlike CRDS, the sensitivity is dependent on the light source stability and the measurement accuracy of the transmitted intensity.
It requires a reliable calibration procedure to determine baseline optical losses of the system (often performed by calibration of reflectivity as a function of wavelength using known concentrations of sample in the cavity).
Lower spectral resolution compared to laser based methods.
Measurement Principle of IBBCEAS - Detailed Description
When the optical cavity is illuminated by an incoherent broadband light source, such as the white light of a lamp or LED, the mode structure of the cavity intensity can be neglected. Consider a cavity of length d formed by two identical high-reflectivity mirrors (R1 = R2 = R > 99.9%) with losses L per pass, which is continuously excited with incoherent light of intensity Iin. For an empty resonator with L = 0, the time-integrated transmitted intensity I0 is given by

I0 = Iin T² / (1 − R²),

where T is the transmittance of each mirror (T = 1 − R for lossless mirrors).
The intensity of light transmitted by the cavity, I (= I0 + I1 + I2 + ⋯), can be described by the superposition of the contributions transmitted after each odd number of passes, leading to a geometric series:

I = Iin T² (1 − L) [1 + R² (1 − L)² + R⁴ (1 − L)⁴ + ⋯]
Since R < 1 and L < 1, the series converges to:

I = Iin T² (1 − L) / (1 − R² (1 − L)²)
Assuming the losses per pass to be solely due to Lambert–Beer attenuation, i.e. (1 − L) = exp(−αd), the extinction coefficient α can be written as

α = −(1/d) ln[ (−I0 (1 − R²) + √(I0² (1 − R²)² + 4 I² R²)) / (2 I R²) ]
In the case of small losses per pass, L → 0, and high reflectivities of the mirrors, R → 1, α can then be approximated as

α ≈ (1/d) (I0/I − 1) (1 − R)
Approximating ΔI/I0 ≈ (I0 − I)/I0 ≈ (I0 − I)/I, the minimum absorption coefficient, αmin, can be expressed by the following equation:

αmin = (ΔImin/I) (1 − R)/d,
where ΔImin is the minimum detectable change in intensity, smaller than I0. The maximum sensitivity (for given R and d) is limited by the intensity of the lamp, the dispersion of the monochromator, and the noise of the detector. The above equation demonstrates that the effective path length is 1/(1 − R) times longer than that of a conventional single-pass experiment. Fiedler et al. have studied in detail the influence of cavity parameters such as the cavity length, mirror curvature and reflectivity, and different light-injection geometries on the IBBCEAS signal.
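A quick numerical illustration of these scalings, with assumed values (R = 0.9999, a 1 m cavity, and a 0.1% minimum detectable intensity change):

```python
R = 0.9999                 # assumed mirror reflectivity
d_cm = 100.0               # assumed cavity length, cm
dI_over_I = 1.0e-3         # assumed minimum detectable relative intensity change

alpha_min = dI_over_I * (1.0 - R) / d_cm   # 1e-9 cm^-1
L_eff_km = d_cm / (1.0 - R) / 1.0e5        # effective path: 10 km from a 1 m cavity
```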
Experimental Setup
Free space IBBCEAS
A basic IBBCEAS setup consists of an incoherent light source, collimation optics, the absorber of interest and a detector. The incoherent source of radiation is spectrally filtered to match the bandwidth of the high-reflectivity cavity mirrors. The filtered light is passively coupled into a stable optical cavity formed by two mirrors. Due to the high reflectivity of the mirrors, effective absorption path lengths can reach a few kilometres. Light transmitted through the cavity is detected using a suitable detector, for example a monochromator/charge-coupled device (CCD) combination interfaced with a computer. To obtain quantitative results, the reflectivity of the mirrors must be accurately determined. This is usually accomplished by measuring the reflectivity as a function of wavelength using known concentrations of a calibration sample inside the cavity. Knowing the number density n (molecules/cm3) and the wavelength-dependent absorption cross-section of the calibration sample, the effective reflectivity Reff(λ) can be determined by:

Reff(λ) = 1 − σ(λ) n d · I(λ)/(I0(λ) − I(λ)),
where σ(λ) is the known absorption cross-section of the gas and d is the length of the cavity.
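The calibration step can be sketched the same way; here sigma and n would come from a reference gas of known cross-section and concentration, and the function below simply restates the equation above:

```python
def effective_reflectivity(I0, I, sigma, n, d):
    """Reff(lambda) from a calibration measurement.

    I0, I  : transmitted spectra without/with the calibration gas (arrays)
    sigma  : known absorption cross-section of the gas, cm^2/molecule (array)
    n      : number density of the gas, molecules/cm^3
    d      : cavity length, cm
    """
    return 1.0 - sigma * n * d * I / (I0 - I)
```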
Fiber ring IBBCEAS
Incoherent broadband cavity-enhanced spectroscopy can also be constructed using fiber ring resonators to attain an alignment-free setup. The dual-coupler resonator consists of two directional couplers, while the single-coupler resonator consists of a single directional coupler. In both configurations, the ring contains a gas cell and a gain medium (to compensate for the losses and enhance the effective path length). For the dual-coupler configuration, an external broadband source is used, in this case the amplified spontaneous emission (ASE) of another gain medium; in the single-coupler configuration, the ASE gain medium placed inside the cavity itself serves as the broadband incoherent source. In both configurations, the output of the resonator is fed to an optical spectrum analyzer (OSA).
The response of the optical cavity should be characterized by other means, as mentioned above for free-space IBBCEAS. The analysis for the dual-coupler configuration is the same as that of the free-space Fabry–Perot cavity given in the previous section, while the analysis for the single-coupler configuration differs.
Fourier Transform Incoherent Broadband Cavity Enhanced Spectroscopy (FT-IBBCEAS)
Fourier Transform Incoherent Broadband Cavity Enhanced Spectroscopy (FT-IBBCEAS) is a variant of IBBCEAS which uses a Fourier transform spectrometer/photodiode instead of the conventional monochromator/CCD in order to establish a spectrum. In this case, the absorption is determined from the Fourier transform of the intensity of light escaping the cavity. The combination with a Fourier transform spectrometer allows high spectral resolution to be achieved, but at the expense of temporal resolution, making the technique less suitable for kinetic studies. On the other hand, the approach provides an improvement over conventional Fourier transform spectroscopy for gas applications where small sample volumes are required (e.g. for discharges, combustion plasmas, flames or chemical flow reactors).
As an example, the spin-forbidden O2 b-band at ~14,500 cm⁻¹ (688 nm) has been measured in ambient air at atmospheric pressure using a xenon arc lamp and compared against a calculated HITRAN spectrum. The cavity was formed by two dielectric high-reflectivity mirrors (R > 0.996 at 662 nm) separated by 89 cm. The characteristic doublets of the O2 b-band and the bandhead of the R branch are visible in the experimental spectrum. To fully exploit the selectivity of Fourier transform spectroscopy, the near-infrared region is of interest because many overtone spectra of atmospherically relevant gases are located in this part of the spectrum. Some of these studies include the detection of overtone bands of CO2, OCS, CH3CN and HD18O in the near IR.
Applications of IBBCEAS
Pollution monitoring
Combustion Diagnostics
Atmospheric trace gas detection
Aerosol science
Breath Analysis
Fundamental Science and Research
Chemical Reaction Kinetics
Selected Literature
Since its development in 2003, IBBCEAS has been used with a wide variety of incoherent light sources, including arc lamps, LEDs, SLEDs and supercontinuum sources.
Arc lamp
IBBCEAS was first demonstrated on the spin- and symmetry-forbidden γ-band of molecular oxygen using a short-arc Xe lamp. The application of IBBCEAS to isolated jet-cooled gas-phase species was demonstrated in continuous supersonic jets almost two decades ago and, more recently, in pulsed jets. Arc lamps have been used with cavities as small as 80 mm to study optical absorption of liquids, and with very long cavities of 20 m length for sensitive in situ measurements of NO3 and NO2 concentrations in an atmospheric simulation chamber. Recent studies have demonstrated their application in evanescent-wave IBBCEAS using a mirror–prism–mirror cavity configuration to measure absorption spectra of metallo-porphyrins in thin solution layers. Other applications of Xe lamp based IBBCEAS include its combination with discharge flow tubes for absorption measurements of marine boundary layer species such as I2, IO and OIO, and for measuring weak near-UV and visible gas phase absorption spectra.
LEDs
IBBCEAS has been used in conjunction with LEDs and superluminescent LEDs in a number of gas-phase and liquid analyte studies. Simultaneous concentration measurements of NO2 and NO3 have been achieved using LED-based IBBCEAS with detection limits at the ppbv level. The advantages of using LEDs as light sources are their compactness, long life, power efficiency and price. Also, due to their small emission area, the emitted power per unit area at the peak wavelength can approach that of Xe arc lamps. However, the LED output is often temperature dependent, so LEDs require temperature stabilization for IBBCEAS applications. More recently, LED-IBBCEAS has been applied to simultaneous open-path measurements of HONO and NO2 in the UV region, with detection limits of 430 pptv and 1 ppbv respectively and acquisition times on the order of a few seconds.
Supercontinuum radiation sources
SC sources are attractive for spectroscopic applications owing to their broad wavelength coverage, which enables spectral signatures of multiple species to be detected simultaneously. In comparison to lamps and LEDs, these sources provide higher spectral brightness, permitting more rapid measurements to be performed. Detection sensitivities at picomolar concentration levels in solution have been reported for BBCEAS measurements with SC sources, with signal acquisition times in the lower millisecond range. Though initial studies on FT-IBBCEAS reported lower sensitivities in comparison with CRDS experiments, more recent breath analysis applications with supercontinuum sources have reported sensitivities on the order of 10⁻⁹ cm⁻¹ within a 4-minute acquisition time.
References
External links
http://www.cfa.harvard.edu/hitran/
Absorption spectroscopy | Incoherent broad-band cavity-enhanced absorption spectroscopy | [
"Physics",
"Chemistry"
] | 2,469 | [
"Spectroscopy",
"Spectrum (physical sciences)",
"Absorption spectroscopy"
] |
38,331,380 | https://en.wikipedia.org/wiki/Bulletin%20of%20Earthquake%20Engineering | The Bulletin of Earthquake Engineering is a bimonthly peer-reviewed scientific journal published by Springer Science+Business Media on behalf of the European Association for Earthquake Engineering. It covers all aspects of earthquake engineering. It was established in 2003 and the editor-in-chief is Atilla Ansal (Ozyegin University).
Abstracting and indexing
The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.827.
References
External links
Earthquake engineering
Springer Science+Business Media academic journals
Quarterly journals
English-language journals
Engineering journals
Academic journals established in 2003 | Bulletin of Earthquake Engineering | [
"Engineering"
] | 125 | [
"Structural engineering",
"Earthquake engineering",
"Civil engineering"
] |
38,334,384 | https://en.wikipedia.org/wiki/Hydrogenase%20%28NAD%2B%2C%20ferredoxin%29 | Hydrogenase (NAD+, ferredoxin) (EC 1.12.1.4, bifurcating [FeFe] hydrogenase) is an enzyme with systematic name hydrogen:NAD+, ferredoxin oxidoreductase. This enzyme catalyses the following chemical reaction:
2 H2 + NAD+ + 2 oxidized ferredoxin = 5 H+ + NADH + 2 reduced ferredoxin
The enzyme from Thermotoga maritima contains a [Fe-Fe] cluster (H-cluster) and iron-sulfur clusters.
References
External links
EC 1.12.1 | Hydrogenase (NAD+, ferredoxin) | [
"Chemistry"
] | 122 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
38,336,059 | https://en.wikipedia.org/wiki/Auralization | Auralization is a procedure designed to model and simulate the experience of acoustic phenomena rendered as a soundfield in a virtualized space. This is useful in configuring the soundscape of architectural structures, concert venues, and public spaces, as well as in making coherent sound environments within virtual immersion systems.
History
The English term auralization was first used by Kleiner et al. in a 1991 article in the Journal of the Audio Engineering Society (AES).
Increases in computational power allowed the development of the first acoustic simulation software towards the end of the 1960s.
Principles
Auralizations are experienced through systems that render virtual acoustic models, made by convolving acoustic events recorded 'dry' (in an anechoic chamber) with a virtual model of an acoustic space whose characteristics are determined by sampling its impulse response (IR).
Once this has been determined, the simulation of the resulting soundfield in the target environment is obtained by convolution:

s(t) = (e ∗ IR)(t) = ∫ e(τ) IR(t − τ) dτ,

where e(t) is the dry signal and IR(t) the impulse response of the space.
The resulting sound is heard as it would be if it were emitted in that acoustic space.
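A minimal Python sketch of this convolution step, assuming a dry (anechoic) recording and an impulse response sampled at the same rate; here both are synthesized placeholders:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000                                       # sampling rate, Hz
rng = np.random.default_rng(0)

dry = rng.standard_normal(fs)                     # placeholder for a 1 s dry recording
t = np.arange(fs) / fs
ir = rng.standard_normal(fs) * np.exp(-t / 0.4)   # toy IR: ~0.4 s exponential decay
ir /= np.abs(ir).max()

auralized = fftconvolve(dry, ir, mode="full")     # dry signal convolved with room IR
auralized /= np.abs(auralized).max()              # normalize to avoid clipping
```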
Binaurality
For auralizations to be perceived as realistic, it is critical to emulate the human hearing in terms of position and orientation of the listener's head with respect to the sources of sound.
For IR data to be convolved convincingly, the acoustic events are captured using a dummy head, with a microphone positioned at each ear to record an emulation of sound arriving at the locations of human ears, or using an ambisonic microphone array whose output is mixed down for binaural reproduction.
Head-related transfer function (HRTF) datasets can be used to simplify the process insofar as a monaural IR can be measured or simulated and the audio content convolved with it for the target acoustic space. In rendering the experience, the transfer function corresponding to the orientation of the head is applied to simulate the corresponding spatial emanation of sound.
See also
Convolution reverb
Reverberation
Notes and references
Application software
Acoustics | Auralization | [
"Physics"
] | 417 | [
"Classical mechanics",
"Acoustics"
] |
38,336,626 | https://en.wikipedia.org/wiki/Link%20layer%20security | The link layer is the lowest layer in the TCP/IP model. It is also referred to as the network interface layer and mostly equivalent to the data link layer plus physical layer in OSI. This particular layer has several unique security vulnerabilities that can be exploited by a determined adversary.
Network interface layer
The link layer is the interface between the host system and the network hardware. It defines how data packets are to be formatted for transmission and routing. Some common link-layer protocols include IEEE 802.2 and X.25. The data link layer and its associated protocols govern the physical interface between the host computer and the network hardware. The goal of this layer is to provide reliable communications between hosts connected on a network. Services provided by this layer of the network stack include:
Data framing: Breaking up the data stream into individual frames or packets.
Checksums: Sending checksum data for each frame to enable the receiving node to determine whether or not the frame was received error-free.
Acknowledgment: Sending either a positive (data was received) or negative (data was expected but not received) acknowledgement from receiver to sender to ensure reliable data transmission.
Flow control: Buffering data transmissions to ensure that a fast sender does not overwhelm a slower receiver.
Vulnerabilities and mitigation strategies
Wired networks
Content-addressable memory (CAM) table exhaustion attack
The data link layer addresses data packets based on the destination hardware's physical Media Access Control (MAC) address. Switches within the network maintain content-addressable memory (CAM) tables that map MAC addresses to the switch's ports. These tables allow the switch to deliver each packet securely to its intended physical address only. Using the switch to connect only the systems that are communicating provides much greater security than a network hub, which broadcasts all traffic over all ports, allowing an eavesdropper to intercept and monitor all network traffic.
A CAM Table Exhaustion Attack basically turns a switch into a hub. The attacker floods the CAM table with new MAC-to-port mappings until the table's fixed memory allotment is full. At this point the switch no longer knows how to deliver traffic based on a MAC-to-port mapping, and defaults to broadcasting traffic over all ports. An adversary is then able to intercept and monitor all network traffic traversing the switch to include passwords, emails, instant messages, etc.
The CAM table-overflow attack can be mitigated by configuring port security on the switch. This option provides for either the specification of the MAC addresses on a particular switch port or the specification of the number of MAC addresses that can be learned by a switch port. When an invalid MAC address is detected on the port, the switch can either block the offending MAC address or shut down the port.
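The following toy Python model illustrates both behaviors described above: a full CAM table falling back to hub-like flooding, and a port-security limit blocking further MAC learning. It is a conceptual sketch, not real switch firmware, and all parameter values are assumed:

```python
class ToySwitch:
    def __init__(self, cam_capacity=8192, port_security_limit=2):
        self.cam = {}                      # MAC address -> port
        self.capacity = cam_capacity
        self.limit = port_security_limit   # max MACs learned per port
        self.learned = {}                  # port -> count of learned MACs

    def learn(self, mac, port):
        if self.learned.get(port, 0) >= self.limit:
            return "violation"             # port security: block MAC or shut down port
        if mac not in self.cam and len(self.cam) >= self.capacity:
            return "flood"                 # CAM full: traffic broadcast on all ports
        if mac not in self.cam:
            self.cam[mac] = port
            self.learned[port] = self.learned.get(port, 0) + 1
        return "learned"
```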
Address Resolution Protocol (ARP) spoofing
At the data link layer a logical IP address assigned by the network layer is translated into a physical MAC address. In order to ensure reliable data communications all switches in the network must maintain up-to-date tables for mapping logical (IP) to physical (MAC) addresses. If a client or switch is unsure of the IP-to-MAC mapping of a data packet it receives it will send an Address Resolution Protocol (ARP) message to the nearest switch asking for the MAC address associated with the particular IP address. Once this is accomplished the client or switch will update its table to reflect the new mapping. In an ARP spoofing attack the adversary broadcasts the IP address of the machine to be attacked along with its own MAC address. All neighboring switches will then update their mapping tables and begin transmitting data destined to the attacked system's IP address to the attacker's MAC address. Such an attack is commonly referred to as a "man in the middle" attack.
Defenses against ARP spoofing generally rely on some form of certification or cross-checking of ARP responses. Uncertified ARP responses are blocked. These techniques may be integrated with the Dynamic Host Configuration Protocol (DHCP) server so that both dynamic and static IP addresses are certified. This capability may also be implemented in individual hosts or may be integrated into Ethernet switches or other network equipment.
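A minimal sketch of the cross-checking idea, in the spirit of tools such as arpwatch: keep a table of observed IP-to-MAC bindings and flag any rebinding. Production defenses validate against DHCP leases or static entries; this toy version only detects changes:

```python
def check_arp_reply(bindings, ip, mac):
    """bindings: dict mapping IP -> MAC learned so far."""
    known = bindings.get(ip)
    if known is None:
        bindings[ip] = mac                 # first sighting: record the binding
        return "new binding"
    if known != mac:
        return f"ALERT: {ip} changed {known} -> {mac} (possible ARP spoofing)"
    return "ok"
```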
Dynamic Host Configuration Protocol (DHCP) starvation
When a client system without an IP address enters a network it will request an IP address from the resident DHCP server. The DHCP server will reserve an IP address (so anyone else asking for one is not granted this one) and it will send that IP address to the device along with a lease identifying how long the address will be valid. Normally, from this point, the device will respond by confirming the IP address with the DHCP server and the DHCP server finally responds with an acknowledgement.
In a DHCP starvation attack, once the adversary receives the IP address and the lease period from the DHCP server, the adversary does not respond with the confirmation. Instead, the adversary floods the DHCP server with IP address requests until all addresses within the server's address space have been reserved (exhausted). At this point, any hosts wishing to join the network will be denied access, resulting in a denial of service. The adversary can then set up a rogue DHCP server so that clients receive incorrect network settings and as a result transmit data to an attacker's machine.
One method for mitigating this type of attack is to use the IP source guard feature available on many Ethernet switches. The IP guard initially blocks all traffic except DHCP packets. When a client receives a valid IP address from the DHCP server the IP address and switch port relationship are bound in an Access Control List (ACL). The ACL then restricts traffic only to those IP addresses configured in the binding.
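A toy model of pool exhaustion together with a per-port lease limit, a simplified stand-in for the DHCP snooping/IP source guard mitigations above (the pool size and limit are assumed values):

```python
class ToyDhcpServer:
    def __init__(self, pool_size=254, max_leases_per_port=3):
        self.free = pool_size
        self.max_per_port = max_leases_per_port
        self.leases = {}                   # port -> number of active leases

    def request(self, port):
        if self.leases.get(port, 0) >= self.max_per_port:
            return None                    # rate limit: suspected starvation attack
        if self.free == 0:
            return None                    # pool exhausted: denial of service
        self.free -= 1
        self.leases[port] = self.leases.get(port, 0) + 1
        return "lease granted"
```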
Wireless networks
Hidden node attack
In a wireless network, many hosts or nodes share a common medium. If nodes A and B are both wireless laptop computers communicating in an office environment, their physical separation may require that they communicate through a wireless access point. But only one device can transmit at a time in order to avoid packet collisions. Prior to transmitting, node A sends out a Request to Send (RTS) signal. If it is not receiving any other traffic, the access point will broadcast a Clear to Send (CTS) signal over the network. Node A will then begin transmitting, while node B knows to hold off transmitting its data for the time being. Even though it cannot directly communicate with node A, i.e. node A is hidden, it knows to wait based on its communication with the access point. An attacker can exploit this functionality by flooding the network with CTS messages. Every node then assumes there is a hidden node trying to transmit and holds its own transmissions, resulting in a denial of service.
Preventing hidden node attacks requires a network tool such as NetEqualizer. Such a tool monitors access point traffic and develops a baseline level of traffic. Any spikes in CTS/RTS signals are assumed to be the result of a hidden node attack and are subsequently blocked.
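The baseline-and-spike heuristic can be sketched in a few lines; the window length and spike factor below are assumed, illustrative values:

```python
def cts_flood_suspected(cts_rates, window=60, factor=5.0):
    """cts_rates: per-second counts of CTS frames seen by the monitor."""
    baseline = sum(cts_rates[:window]) / window   # learned during quiet operation
    return cts_rates[-1] > factor * baseline      # spike => suspected hidden-node attack
```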
De-auth (de-authentication) attack
Any client entering a wireless network must first authenticate with an access point (AP) and is thereafter associated with that access point. When the client leaves, it sends a deauthentication (deauth) message to disassociate itself from the access point. An attacker can send spoofed deauth messages to an access point on behalf of client addresses, thereby knocking the users off-line and forcing them to repeatedly re-authenticate, giving the attacker valuable insight into the reauthentication handshaking that occurs.
To mitigate this attack, the access point can be set up to delay the effects of deauthentication or disassociation requests (e.g., by queuing such requests for 5–10 seconds) thereby giving the access point an opportunity to observe subsequent packets from the client. If a data packet arrives after a deauthentication or disassociation request is queued, that request is discarded since a legitimate client would never generate packets in that order.
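The delayed-honoring logic can be sketched as follows; the 7-second delay is an assumed value within the 5–10 s window mentioned above, and deauthenticate() is a placeholder for the AP's real disassociation routine:

```python
import collections, time

pending = collections.deque()              # queued (client_mac, deadline) pairs

def deauthenticate(mac):                   # placeholder for the AP's real routine
    print("deauthenticating", mac)

def on_deauth_request(mac, delay=7.0):
    pending.append((mac, time.time() + delay))

def on_data_frame(mac):
    # A legitimate client never sends data after requesting deauthentication,
    # so any queued request for this MAC must be spoofed: discard it.
    for i, (m, _) in enumerate(pending):
        if m == mac:
            del pending[i]
            break

def honor_expired():
    now = time.time()
    while pending and pending[0][1] <= now:
        mac, _ = pending.popleft()
        deauthenticate(mac)                # honored only if no data frame arrived
```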
References
Computer network security
Link protocols | Link layer security | [
"Engineering"
] | 1,645 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
50,318,886 | https://en.wikipedia.org/wiki/Silver%28III%29%20fluoride | Silver(III) fluoride, AgF3, is an unstable, bright-red, diamagnetic compound containing silver in the unusual +3 oxidation state. Its crystal structure is very similar to that of gold(III) fluoride: it is a polymer consisting of rectangular AgF4 units linked into chains by fluoro bridges.
Preparation
AgF3 can be prepared by treating a solution containing tetrafluoroargentate(III) ions in anhydrous hydrogen fluoride with boron trifluoride; the potassium tetrafluoroargentate(III) was prepared by heating a stoichiometric mix of potassium and silver nitrate in a sealed container filled with pressurised fluorine gas at 400 °C for 24 hours, twice. When dissolved in anhydrous HF, it decomposes spontaneously to Ag3F8 overnight at room temperature. The high-valence silver compounds described are notable for their variety of colours: KAgF4 is bright orange, AgF3 bright red, AgFAsF6 deep blue, Ag3F8 deep red-brown, and Pd(AgF4)2 lime-green.
Earlier preparations used krypton difluoride as fluorinating agent, and tended to produce the mixed-valence Ag3F8 which may be thought of as silver(II) tetrafluoroargentate(III); Ag2F5, which is (AgF)+AgF4−, is formed by reacting AgF3 with AgFAsF6.
References
Fluorides
Silver compounds
Metal halides | Silver(III) fluoride | [
"Chemistry"
] | 341 | [
"Inorganic compounds",
"Fluorides",
"Metal halides",
"Salts"
] |
50,320,485 | https://en.wikipedia.org/wiki/Space%20geodesy | Space geodesy is geodesy by means of sources external to Earth, mainly artificial satellites (in satellite geodesy) but also quasars (in very-long-baseline interferometry, VLBI), visible stars (in stellar triangulation), and the retroreflectors on the Moon (in lunar laser ranging, LLR).
See also
Astronomical geodesy
Geodesy
Geodesy | Space geodesy | [
"Astronomy",
"Mathematics"
] | 91 | [
"Outer space",
"Applied mathematics",
"Astronomy stubs",
"Geodesy",
"Outer space stubs"
] |
50,330,491 | https://en.wikipedia.org/wiki/MICROSCOPE | The Micro-Satellite à traînée Compensée pour l'Observation du Principe d'Equivalence (Micro-Satellite with Compensated Drag for Observing the Principle of Equivalence, MICROSCOPE) is a minisatellite operated by CNES to test the universality of free fall (the equivalence principle) with a precision of the order of one part in 10¹⁵, 100 times more precise than can be achieved on Earth. It was launched on 25 April 2016 alongside Sentinel-1B and other small satellites, and was decommissioned around 18 October 2018 after completing its science objectives. The final report was published in 2022.
Experiment
To test the equivalence principle (i.e. the similarity of free fall for two bodies of different composition in an identical gravity field), two differential accelerometers are used successively. If the equivalence principle is verified, the two sets of masses will be subjected to the same acceleration. If different accelerations have to be applied, the principle will be violated.
The principal experiment is the Twin-Space Accelerometer for Gravity Experiment (T-SAGE), built by ONERA and composed of two identical accelerometers and their associated, concentric cylindrical masses. One accelerometer serves as a reference and contains two platinum-rhodium alloy masses, while the other is the test instrument and contains two masses with different neutron–proton ratios: one mass of platinum-rhodium alloy and another mass of titanium-aluminium-vanadium alloy (TA6V). The masses are maintained within their test areas by electrostatic repulsion, designed to render them motionless with respect to the satellite.
It is necessary to create a thermally benign environment for the accelerometers. To that end, a Sun-synchronous orbit provides constant illumination; the experiments are mounted on the end of the satellite bus away from the Sun; and to maintain thermal isolation from the satellite itself, the modes of thermal connection were modelled and wire connections were minimised.
Satellite control
The satellite employs a Drag-Free Attitude Control System (DFACS), also called the Acceleration and Attitude Control System (AACS), that uses a double-redundant primary and backup set of four microthrusters (sixteen total) to "fly" the satellite around the test masses. This system takes into account the dynamic forces acting on the spacecraft, including aerodynamic forces due to residual atmosphere, solar pressure forces due to photon impacts, electromagnetic forces within the Earth's magnetosphere, and gravitational forces in the Sun-Earth-Moon system.
Launch
MICROSCOPE was successfully launched on 25 April 2016 at 21:02:13 UTC from the Guiana Space Centre outside Kourou, French Guiana. It was carried by a Soyuz ST-A booster with a Fregat-M upper stage. Other payloads on this flight were the European Space Agency's Sentinel-1B Earth observation satellite and three CubeSats: OUFTI-1 from the University of Liège, e-st@r-II from the Polytechnic University of Turin, and AAUSAT-4 from Aalborg University.
Results
On 4 December 2017, the first results were published. The equivalence principle was measured to hold true within a precision of 2×10⁻¹⁴, improving on prior measurements by an order of magnitude.
End of mission
After completing its mission goals and exhausting its supply of nitrogen fuel, the decommissioning of MICROSCOPE was announced on 18 October 2018. The spacecraft was first passivated, then two IDEAS (Innovative DEorbiting Aerobrake System) inflatable booms were deployed to passively de-orbit the spacecraft by creating a higher drag profile. By this method, MICROSCOPE is expected to re-enter Earth's atmosphere within 25 years instead of 73 years.
See also
Tests of general relativity
Timeline of gravitational physics and relativity
References
External links
MICROSCOPE website at CNES.fr
General relativity
European Space Agency satellites
Satellites orbiting Earth
Spacecraft launched by Soyuz-2 rockets
Spacecraft launched in 2016
Secondary payloads | MICROSCOPE | [
"Physics"
] | 805 | [
"General relativity",
"Theory of relativity"
] |
50,333,728 | https://en.wikipedia.org/wiki/Diffuse%20field%20acoustic%20testing | Diffuse field acoustic testing is the testing of the mechanical resistance of a spacecraft to the acoustic pressures during launch.
In the aerospace industry, acoustic chambers are the main facilities for such tests. A chamber is a reverberant room that creates a diffuse sound field and is composed of an empty volume (from 1 m3 to 2900 m3) and a multifrequency sound generation system.
Diffuse field principle
Theoretically, a diffuse field is defined as a sound pressure field in which there is no privileged direction of energy propagation; in other words, the sound pressure is the same everywhere in the room. This is obtained in large rooms with no absorbent materials on the walls, ceiling or floor. Diffusion is enhanced in asymmetric rooms. To obtain such conditions, the room must be reverberant: the source's direct field must be negligible compared to the reverberant field, avoiding privileged propagation.
Reverberation time
Reverberation is due to multiple reflections on the walls that come back to the receiver with various delays. Summing these contributions creates a reverberant pressure field. The more reverberation, the more diffuse the field.
Two oft-used measures of reverberation time quantify this parameter: T30 and T60, the time intervals over which the sound pressure level decays by 30 or 60 dB, respectively. They can be obtained by measuring the sound pressure decrease after a sound impulse or by using approximate formulas such as Sabine's or Eyring's. In the case of a diffuse field (low absorption on the walls and large volumes), Sabine's formula is used:
T60 = 0.161 V / A (SI units), where A = Σ αi Si is the equivalent absorption area, involving the surface Si of each wall and its absorption coefficient αi, and V is the volume of the room.
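A small Python helper implementing Sabine's formula; the example chamber dimensions and absorption coefficient are assumed for illustration:

```python
def sabine_t60(volume_m3, surfaces):
    """surfaces: iterable of (area_m2, absorption_coefficient) pairs."""
    A = sum(s * a for s, a in surfaces)         # equivalent absorption area, m^2
    return 0.161 * volume_m3 / A                # reverberation time, s (SI units)

# Example: an assumed 10 m x 8 m x 5 m chamber with hard walls (alpha = 0.02)
walls = 2 * (10 * 8 + 10 * 5 + 8 * 5)           # total surface, m^2
t60 = sabine_t60(10 * 8 * 5, [(walls, 0.02)])   # ~9.5 s: strongly reverberant
```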
Geometrical approach
Many theoretical approaches are used to model sound propagation. One of these is the geometrical approach, which represents sound waves as rays of propagating energy. When it meets an obstacle, a ray has two possible behaviors: it can be reflected about the normal of the plane (specular reflection), or it can be separated into many rays following a mathematical law, for example Lambert's law (diffuse reflection).
Quantities involving diffuse pressure field
Frequency is a main factor in obtaining a good diffuse field. Some frequency-dependent phenomena lead to poor homogeneity of the pressure field. The frequency response of a room is the amplification or reduction of some frequencies; it represents the distribution of pressure with respect to frequency. At low frequencies this can lead to the appearance of modes. These modes are due to standing waves that create pressure maxima and minima according to the geometry of the room. To determine the frequency above which the pressure field can be considered diffuse, Schroeder's frequency is commonly used. It is obtained by considering the frequency above which the modal overlap exceeds 3. Below this frequency, the field is not diffuse and standing waves create pressure modes:
fS = 2000 √(T60/V) (SI units), where T60 is the reverberation time of the room and V its volume.
For example, in the case of a rectangular room, the low-frequency modes are determined relative to the room dimensions as

f(nx, ny, nz) = (c/2) √((nx/Lx)² + (ny/Ly)² + (nz/Lz)²),

where nx, ny and nz are respectively the mode indices along the length Lx, width Ly and height Lz of the room, and c is the celerity of sound in the working fluid.
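Both quantities are straightforward to compute; a short Python sketch with assumed example values (speed of sound c = 343 m/s in air):

```python
import itertools, math

def schroeder_frequency(t60_s, volume_m3):
    return 2000.0 * math.sqrt(t60_s / volume_m3)      # Hz (SI units)

def rectangular_room_modes(lx, ly, lz, c=343.0, nmax=2):
    """Low-order mode frequencies of a rectangular room (dimensions in m)."""
    modes = []
    for nx, ny, nz in itertools.product(range(nmax + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue
        f = (c / 2.0) * math.sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        modes.append(((nx, ny, nz), f))
    return sorted(modes, key=lambda m: m[1])

fs = schroeder_frequency(9.5, 400.0)       # ~308 Hz for the example chamber above
```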
Aerospace application
Acoustic tests are mainly used for environmental testing of aircraft and spacecraft structures. Satellites are expensive products with highly engineered components. To improve the resistance of a spacecraft during launch and during its orbital life, analysis focuses on tests in three categories: thermal, radio-frequency and vibration. The last category covers the mechanical stresses that the specimen will meet during its life, especially during launch.
Acoustics creates mechanical stresses during the first five seconds of launch, with sound pressure levels of up to 150 dBSPL. Acoustic tests are used to verify the mechanical resistance of the satellite and its elements to the acoustic pressures generated.
Accelerometer measurement
Once the sound generation system is working, acceleration measurement is performed by accelerometers placed on the specimen.
Acoustic chamber
Sound generation
To test a satellite, a sound generation system generates a broadband spectrum (25 Hz–10,000 Hz) simulating the maximum envelope of all launchers on which the satellite may fly. For qualification, three tests are performed with varying global gain relative to the launcher spectrum:
Low-Level : - 8 dB SPL
Intermediate : - 4 dB SPL
PFM (Proto Flight Model) : 0 dB SPL
Before testing the satellite, an empty room test is performed to check the chamber's signature.
This pressure field is generated by multifrequency sirens powered by nitrogen or compressed-air modulators. This system can generate sound pressure levels up to 160 dBSPL. Each acoustic chamber has its own configuration, but each siren is centered on a frequency band where its sound pressure levels are highest. In some cases these sirens are complemented with electroacoustic systems to generate and control the midrange and high frequencies. Sirens generate the low frequencies, but at high sound pressure levels distortion appears, leading to higher harmonics; loudspeakers are used in some chambers to control these frequencies.
To produce exact levels, piloting microphones check sound pressure levels and apply a realtime gain correction to adjust the level.
Advantages
Homogeneity: spatial homogeneity guaranteed to within ±1.5 dBSPL
Low-frequency generation: very efficient low-frequency generation (below 50 Hz)
Security: control via piloting microphones that adjust the level or abort the test if needed
Representativeness: faithful to the real stresses experienced during launch
Well-known process: used by many aerospace industries
Disadvantages
Gas generation: may require large amounts of nitrogen
High-frequency control: if no high-frequency sirens or electroacoustic devices are included, only harmonics generated by distortion produce the mid and high frequencies.
Examples
Thales Alenia Space (Cannes) : 1000 m3
IABG (Ottobrunn) : 1378 m3
NASA : 2860 m3
References
External links
Thales Alenia Space Official Website
IABG Space Official Website
NASA's Space Power Facility (SPF) Wikipedia page
Intespace's acoustic test facility
Astronautics
Acoustics | Diffuse field acoustic testing | [
"Physics"
] | 1,248 | [
"Classical mechanics",
"Acoustics"
] |
48,851,801 | https://en.wikipedia.org/wiki/Microplane%20model%20for%20constitutive%20laws%20of%20materials | The microplane model, conceived in 1984, is a material constitutive model for progressive softening damage. Its advantage over the classical tensorial constitutive models is that it can capture the oriented nature of damage such as tensile cracking, slip, friction, and compression splitting, as well as the orientation of fiber reinforcement. Another advantage is that the anisotropy of materials such as gas shale or fiber composites can be effectively represented. To prevent unstable strain localization (and spurious mesh sensitivity in finite element computations), this model must be used in combination with some nonlocal continuum formulation (e.g., the crack band model). Prior to 2000, these advantages were outweighed by greater computational demands of the material subroutine, but thanks to huge increase of computer power, the microplane model is now routinely used in computer programs, even with tens of millions of finite elements.
Method and motivation
The basic idea of the microplane model is to express the constitutive law not in terms of tensors, but in terms of the vectors of stress and strain acting on planes of various orientations called the microplanes. The use of vectors was inspired by G. I. Taylor's idea in 1938 which led to Taylor models for plasticity of polycrystalline metals. But the microplane models differ conceptually in two ways.
Firstly, to prevent model instability in post-peak softening damage, the kinematic constraint must be used instead of the static one. Thus, the strain (rather than stress) vector on each microplane is the projection of the macroscopic strain tensor, i.e.,

εN = Nij εij, εM = Mij εij, εL = Lij εij,

where εN is the normal strain and εM, εL the two shear strain components on each microplane, with Nij = ni nj, Mij = (mi nj + mj ni)/2 and Lij = (li nj + lj ni)/2, where n, m and l are three mutually orthogonal vectors, one normal and two tangential, characterizing each particular microplane (subscripts refer to Cartesian coordinates).
Secondly, a variational principle (or the principle of virtual work) relates the stress vector components on the microplanes (σN, σM and σL) to the macro-continuum stress tensor σij, to ensure equilibrium. This yields for the stress tensor the expression:

σij = (3/2π) ∫Ω sij dΩ ≈ 6 Σμ wμ sij(μ)

with

sij = σN Nij + σM Mij + σL Lij.

Here Ω is the surface of a unit hemisphere, and the sum over the microplanes μ is an approximation of the integral. The weights wμ are based on an optimal Gaussian integration formula for a spherical surface. At least 21 microplanes are needed for acceptable accuracy but 37 are distinctly more accurate.
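To make the kinematic constraint and the stress assembly concrete, here is a small numerical sketch in Python. It is illustrative only: it replaces the optimal Gaussian quadrature with naive Monte-Carlo sampling of microplane normals, and it uses a purely elastic microplane law with assumed moduli E_N and E_T (no stress-strain boundaries), so it does not reproduce any particular published model such as M7:

```python
import numpy as np

rng = np.random.default_rng(1)

def microplane_stress(eps, n_planes=20000, E_N=30e9, E_T=15e9):
    """Assemble the macroscopic stress from elastic microplane laws.

    eps: 3x3 macroscopic strain tensor. Monte-Carlo estimate of
    sigma_ij = (3/2pi) * integral of s_ij over the unit hemisphere.
    """
    sigma = np.zeros((3, 3))
    for _ in range(n_planes):
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)                         # microplane normal
        m = np.cross(n, [1.0, 0.0, 0.0])
        if np.linalg.norm(m) < 1e-8:                   # n parallel to the x-axis
            m = np.cross(n, [0.0, 1.0, 0.0])
        m /= np.linalg.norm(m)
        l = np.cross(n, m)                             # n, m, l orthonormal
        N = np.outer(n, n)                             # N_ij = n_i n_j
        M = 0.5 * (np.outer(m, n) + np.outer(n, m))
        L = 0.5 * (np.outer(l, n) + np.outer(n, l))
        # Kinematic constraint: project the strain tensor onto the microplane
        eps_N, eps_M, eps_L = (N * eps).sum(), (M * eps).sum(), (L * eps).sum()
        # Elastic microplane law (stress-strain boundaries omitted in this sketch)
        sigma += E_N * eps_N * N + E_T * (eps_M * M + eps_L * L)
    return 3.0 * sigma / n_planes                      # MC estimate of the integral

sigma = microplane_stress(np.diag([1.0e-4, 0.0, 0.0]))  # uniaxial strain example
```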
The inelastic or damage behavior is characterized by subjecting the microplane stresses σN, σM and σL to strain-dependent strength limits, called stress-strain boundaries, imposed on each microplane. They are of four types, viz.:
The tensile normal boundary – to capture progressive tensile fracturing;
The compressive volumetric boundary – to capture phenomenon such as pore collapse under extreme pressures;
The shear boundary – to capture friction; and
The compressive deviatoric boundary – to capture softening in compression, using the volumetric stress and deviatoric stress on the microplanes.
Each step of explicit analysis begins with an elastic predictor and, if the boundary has been exceeded, the stress vector component on the microplane is then dropped at constant strain to the boundary.
Applications
The microplane constitutive model for damage in concrete evolved since 1984 through a series of progressively improved models labeled M0, M1, M2, ..., M7. It was also extended to fiber composites (woven or braided laminates), rock, jointed rock mass, clay, sand, foam and metal. The microplane model has been shown to allow close fits of the concrete test data for uniaxial, biaxial and triaxial loadings with post-peak softening, compression-tension load cycles, opening and mixed mode fractures, tension-shear and compression-shear failures, axial compression followed by torsion (i.e., the vertex effect) and fatigue. The loading rate effect and long-term aging creep of concrete have also been incorporated. Models M4 and M7 have been generalized to finite strain. The microplane model has been introduced into various commercial programs (ATENA, OOFEM, DIANA, SBETA,...) and large proprietary wavecodes (EPIC, PRONTO, MARS,...). Alternatively, it is often being used as the user's subroutine such as UMAT or VUMAT in ABAQUS.
References
Equations of physics
Materials science | Microplane model for constitutive laws of materials | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 904 | [
"Applied and interdisciplinary physics",
"Equations of physics",
"Mathematical objects",
"Materials science",
"Equations",
"nan"
] |
52,722,234 | https://en.wikipedia.org/wiki/Borde%E2%80%93Guth%E2%80%93Vilenkin%20theorem | The Borde–Guth–Vilenkin (BGV) theorem is a theorem in physical cosmology which deduces that any universe that has, on average, been expanding throughout its history cannot be infinite in the past but must have a past spacetime boundary. It is named after the authors Arvind Borde, Alan Guth and Alexander Vilenkin, who developed its mathematical formulation in 2003. The BGV theorem is also popular outside physics, especially in religious and philosophical debates.
Definition
In general relativity, the geodesics represent the paths that free-falling particles or objects follow in curved spacetime. These paths are the equivalent of the shortest path (straight lines) between two points in Euclidean space. In cosmology, a spacetime is said to be geodesically complete if all its geodesics can be extended indefinitely without encountering any singularities or boundaries. On the contrary, a spacetime that is geodesically past-incomplete features geodesics that reach a boundary or a singularity within a finite amount of proper time into the past.
In this context, we can define the average expansion rate along a geodesic as

Hav = (1/(τf − τi)) ∫τi→τf H dτ,

where ti is an initial time (τi is the proper initial time), tf a final time (τf is the proper final time), and H is the expansion parameter, also called the Hubble parameter.
The BGV theorem states that for any spacetime where

Hav > 0,
then the spacetime is geodesically past-incomplete.
The theorem only applies to classical spacetime, but it does not assume any specific mass content of the universe and it does not require gravity to be described by Einstein field equations.
Derivation
For FLRW metric
Here is an example of a derivation of the BGV theorem for an expanding, homogeneous, isotropic, flat universe (in units where the speed of light c = 1), consistent with the ΛCDM model, the current standard model of cosmology. The derivation can, however, be generalized to an arbitrary spacetime with no appeal to homogeneity or isotropy.
The Friedmann–Lemaître–Robertson–Walker metric is given by

ds² = dt² − a²(t)(dx₁² + dx₂² + dx₃²),
where t is time, xi (i = 1, 2, 3) are the spatial coordinates and a(t) is the scale factor. Along a timelike geodesic, we can consider the universe to be filled with comoving particles. An observer with proper time τ following the world line has a 4-momentum P = (E, p), where E = √(m² + p²) is the energy, m is the mass and p = |p| is the magnitude of the 3-momentum.
From the geodesic equation of motion, it follows that the momentum decays with the expansion as p(t) = pf a(tf)/a(t), where pf is the final momentum at time tf. Thus

∫τi→τf H dτ = F(γf) − F(γi) ≤ F(γf),
where H = ȧ/a is the Hubble parameter, and

F(γ) = (1/2) ln[(γ + 1)/(γ − 1)],
γ being the Lorentz factor. For any non-comoving observer γ>1 and F(γ)>0.
Assuming Hav > 0, it follows that

τf − τi = (1/Hav) ∫τi→τf H dτ ≤ F(γf)/Hav < ∞.
Thereby any non-comoving, past-directed timelike geodesic satisfying the condition Hav > 0 must have a finite proper length, and so must be past-incomplete.
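The relation above can be checked numerically. The following Python sketch is a toy verification with assumed inputs (a(t) = t on 1 ≤ t ≤ 10, m = 1, c = 1, and the momentum normalized so that p(tf) = 1); it integrates H dτ along the geodesic and compares it with F(γf) − F(γi):

```python
import numpy as np

def F(gamma):
    return 0.5 * np.log((gamma + 1.0) / (gamma - 1.0))

t = np.linspace(1.0, 10.0, 200_000)
a = t                                  # toy scale factor a(t) = t
H = 1.0 / t                            # Hubble parameter: adot/a
p = a[-1] / a                          # momentum redshift p ~ 1/a, with p(t_f) = 1
gamma = np.sqrt(1.0 + p ** 2)          # Lorentz factor, with m = 1
dtau = np.gradient(t) / gamma          # proper time element dtau = dt/gamma

lhs = np.sum(H * dtau)                 # integral of H dtau along the geodesic
rhs = F(gamma[-1]) - F(gamma[0])       # matches lhs to numerical precision
```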
Implications
Current astronomical observations show that the universe is expanding, so the BGV theorem implies that there must be a boundary or singularity in the history of the universe. This singularity has often been associated with the Big Bang, but the theorem does not say whether it corresponds to the Big Bang or to some other event in the past. Nor does the theorem say when the singularity took place, or whether it is a gravitational singularity or some other kind of boundary condition.
Some physical theories do not discard the possibility of a non-accelerated expansion before a certain moment in time. For example, the condition Hav > 0 could fail to hold prior to the period of inflation.
Limitations and criticism
Alternative models, where the average expansion of the universe throughout its history does not hold, have been proposed under the notions of emergent spacetime, eternal inflation, and cyclic models. Vilenkin and Audrey Mithani have argued that none of these models escape the implications of the theorem. In 2017, Vilenkin stated that he does not think there are any viable cosmological models that escape the scenario.
Sean M. Carroll argues that the theorem only applies to classical spacetime and may not hold under a complete theory of quantum gravity. He adds that Alan Guth, one of the co-authors of the theorem, disagrees with Vilenkin and believes that the universe had no beginning. Vilenkin argues that the Carroll–Chen model, constructed by Carroll and Jennie Chen and supported by Guth to elude the BGV theorem's conclusions, still indicates a singularity in the history of the universe, as it has a reversal of the arrow of time in the past.
Joseph E. Lesnefsky, Damien A. Easson and Paul Davies constructed an uncountably infinite class of classical solutions that are geodesically complete, and claim that the geodesic incompleteness of inflationary spacetime is still an open issue. Furthermore, there are examples of infinite cyclic models that solve the problem of unbounded entropy growth while being geodesically complete. In both of these studies, the authors argue that previous investigations often did not use mathematically precise formulations of the BGV theorem and thus reached incomplete conclusions.
Use in theology
Vilenkin has also written about the religious significance of the BGV theorem. In October 2015, Vilenkin responded to arguments made by theist William Lane Craig and the New Atheism movement regarding the existence of God. Vilenkin stated "What causes the universe to pop out of nothing? No cause is needed." Regarding the BGV theorem itself, Vilenkin told Craig: "I think you represented what I wrote about the BGV theorem in my papers and to you personally very accurately."
See also
Kalam cosmological argument
Gibbons–Hawking–York boundary term
Gibbons–Hawking effect
Penrose–Hawking singularity theorems
References
Further reading
Eponymous theorems of physics
Physical cosmology | Borde–Guth–Vilenkin theorem | [
"Physics",
"Astronomy"
] | 1,234 | [
"Astronomical sub-disciplines",
"Equations of physics",
"Theoretical physics",
"Astrophysics",
"Eponymous theorems of physics",
"Physical cosmology",
"Physics theorems"
] |
52,724,058 | https://en.wikipedia.org/wiki/Methyltestosterone%203-hexyl%20ether | Methyltestosterone 3-hexyl ether (brand names Androgénol, Enoltestovis, Enoltestovister), or 17α-methyltestosterone 3-hexyl enol ether, also known as 17α-methylandrost-3,5-dien-17β-ol-3-one 3-hexyl ether, is a synthetic anabolic-androgenic steroid and an androgen ether – specifically, the 3-hexyl ether of methyltestosterone.
See also
Penmesterol (methyltestosterone 3-cyclopentyl enol ether)
References
Abandoned drugs
Androgen ethers
Anabolic–androgenic steroids
Androstanes
Hepatotoxins
Androgen esters
Tertiary alcohols | Methyltestosterone 3-hexyl ether | [
"Chemistry"
] | 171 | [
"Drug safety",
"Abandoned drugs"
] |
52,728,353 | https://en.wikipedia.org/wiki/ENAC%20Foundation | The ENAC Foundation (French: Fondation de l'École nationale de l'aviation civile) was founded in 2012. The goal of the Foundation, as was put forward by École nationale de l'aviation civile, is to promote scientific and public interest activities in aviation, aerospace and aeronautics.
History
In 2009, the school and its alumni association organized the first edition of the aeronautical book fair. In December 2010, ENAC became one of the ICAO's aviation security training centres.
After the creation of the Foundation in 2012, the first scholarships were awarded in 2014 to promote international mobility. In 2015, the first research chair in UAVs was created, in partnership with Ineo, Cofely and Sagem.
The foundation has been supported by GIFAS since 2017.
See also
Science and technology in France
Grandes écoles
References
External links
Official website
Scientific organizations based in France
Foundations based in France
École nationale de l'aviation civile
2012 establishments in France | ENAC Foundation | [
"Engineering"
] | 200 | [
"École nationale de l'aviation civile",
"Aerospace engineering organizations"
] |
52,729,448 | https://en.wikipedia.org/wiki/Faint%20Images%20of%20the%20Radio%20Sky%20at%20Twenty-Centimeters | Faint Images of the Radio Sky at Twenty-Centimeters, or FIRST, was an astronomical survey of the Northern Hemisphere carried out by the Very Large Array. It was led by Robert H. Becker, Richard L. White, and David J. Helfand, who came up with the idea for the survey after they had completed the VLA Galactic Plane survey in 1990, as well as Michael D. Gregg and Sally A. Laurent-Muehleisen. The survey was started 50 years after the first systematic survey of the radio sky was completed by Grote Reber in April 1943.
Survey
The survey covers 10,575 square degrees, around 25% of the sky, with regions centred on the North and South Galactic poles. The regions were chosen so that they would also be covered by the Sloan Digital Sky Survey (SDSS) in 5 optical bands, and the survey was comparable to the Palomar Observatory Sky Survey in terms of resolution and sensitivity.
The observations were made in 'B' configuration at a wavelength of 20 cm (in the L band), with an angular resolution of 5 arcseconds. It was proposed at the same time as the NRAO VLA Sky Survey, and trial observations for both surveys were taken in 1992. Survey observations of the North Galactic pole started in 1993, with 144 hours of observing time in April and May 1993 for test observations and the initial survey strip of 300 square degrees, producing an initial catalogue of 28,000 sources. Survey observations continued until 2004. Observations of the South Galactic pole were made in 2009 and 2011; the 2011 observations used the EVLA. The target flux density limit was 1 millijansky (mJy), with an r.m.s. noise limit of <0.15 mJy.
The survey data was analysed using an automated pipeline through the Astronomical Image Processing System. Images and catalogues from the survey were made available after quality checks, without a proprietary period. Several versions of the survey catalogue have been generated, with the first published in 1997 and the latest published in December 2014. The catalogue includes over 70,000 cross-identifications with SDSS and the Two Micron All Sky Survey (2MASS). The expectation was that radio sources would be observed and 65,000 images produced by the survey; the 2014 catalogue included 946,432 sources. Sources in the catalogue follow a naming convention comprising the survey name and source coordinate with the format "FIRST Jhhmmss.s+ddmmss"; the convention is registered with the International Astronomical Union.
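The coordinate-based naming scheme is mechanical enough to script. Below is a small illustrative Python helper; the function name, the decimal-degree inputs, and the truncate-rather-than-round choice are assumptions for illustration, not part of the registered convention text quoted above.

```python
def first_source_name(ra_deg: float, dec_deg: float) -> str:
    """Format a J2000 position as 'FIRST Jhhmmss.s+ddmmss' (fields truncated)."""
    # Right ascension: degrees -> hours, minutes, seconds (one decimal place).
    ra_hours = ra_deg / 15.0
    h = int(ra_hours)
    m = int((ra_hours - h) * 60)
    s = (ra_hours - h - m / 60.0) * 3600.0
    s = int(s * 10) / 10.0                     # truncate to 0.1 s

    # Declination: sign, degrees, arcminutes, whole arcseconds.
    sign = '+' if dec_deg >= 0 else '-'
    dec = abs(dec_deg)
    d = int(dec)
    am = int((dec - d) * 60)
    asec = int((dec - d - am / 60.0) * 3600.0)

    return f"FIRST J{h:02d}{m:02d}{s:04.1f}{sign}{d:02d}{am:02d}{asec:02d}"

# Example with an arbitrary illustrative position (RA 187.70583 deg, Dec +12.39111 deg):
print(first_source_name(187.70583, 12.39111))   # FIRST J123049.3+122328
```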
Science
The resolution of the survey was chosen so that optical counterparts to the radio sources could be identified; complex radio sources with multiple components could be resolved (to avoid optical misidentifications); and radio morphology (e.g., Fanaroff-Riley classification) could be identified. The survey aimed to contribute to science on quasars and active galaxies; galaxy evolution; galactic astronomy; the large-scale structure of the Universe; and dark matter. The survey produced a series of papers. The survey paper has been referenced by over 1,600 other scientific publications.
The survey sources were cross-matched with the Palomar Sky Survey to create the FIRST Bright Quasar Survey (FBQS), which comprised quasar candidates that were then followed up with optical spectroscopy. The initial survey found 69 quasars, with 51 being newly identified. A number of broad absorption line quasars were discovered by FIRST. Other, high-redshift quasars were identified in the survey by cross-matching with SDSS.
Variability was detected in over 1600 sources during the course of the survey, including stars, pulsars, galaxies, quasars, and unidentified radio sources. On large scales, the two-point correlation function between radio galaxies was observed.
References
Astronomical surveys
Observational astronomy | Faint Images of the Radio Sky at Twenty-Centimeters | [
"Astronomy"
] | 783 | [
"Astronomical surveys",
"Observational astronomy",
"Works about astronomy",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
56,980,406 | https://en.wikipedia.org/wiki/Sea%20ice%20microbial%20communities | Sea Ice Microbial Communities (SIMCO) refer to groups of microorganisms living within and at the interfaces of sea ice at the poles. The ice matrix they inhabit has strong vertical gradients of salinity, light, temperature and nutrients. Sea ice chemistry is most influenced by the salinity of the brine which affects the pH and the concentration of dissolved nutrients and gases. The brine formed during the melting sea ice creates pores and channels in the sea ice in which these microbes can live. As a result of these gradients and dynamic conditions, a higher abundance of microbes are found in the lower layer of the ice, although some are found in the middle and upper layers. Despite this extreme variability in environmental conditions, the taxonomical community composition tends to remain consistent throughout the year, until the ice melts.
Much of what is known about the community diversity of sea ice comes from genetic analyses and next-generation sequencing. In both the Arctic and Antarctic, Alphaproteobacteria, Gammaproteobacteria and Flavobacteriia are the common bacterial classes found. Most sea ice Archaea belong to the phylum Nitrososphaerota while most of the protists belong to one of 3 supergroups: Alveolata, Stramenopile and Rhizaria. The abundance of living cells within and on sea ice ranges from 10⁴ to 10⁸ cells/mL. These microbial communities play a significant role in the microbial loop as well as in global biogeochemical cycles. Sea ice communities are important because they provide an energy source for higher trophic levels, they contribute to primary production and they provide a net influx of carbon into the oceans at the poles.
Habitat
Sea ice matrix: chemical and physical properties
Sea ice formation and physical properties
The autumnal decrease in atmospheric temperatures in the Arctic and Antarctic leads to the formation of a surface layer of ice crystals called frazil ice. A mixture of salts, nutrients and dissolved organic matter (DOM) known as brine is expelled when frazil ice solidifies to form sea ice. Brine permeates through the ice cover and creates a network of channels and pores. This process forms an initial semisolid matrix of approximately 1 meter in thickness with strong temperature, salinity, light and nutrient gradients.
Since thickening of the sea ice during winter months results in more salts being expelled from the frazil ice, atmospheric temperatures are strongly and negatively correlated to brine salinity. The sea ice-seawater interface temperature is maintained at the freezing point of seawater (~−1.8 °C) while the sea ice-air interface reflects more the current atmospheric temperature. Brine salinity can increase to as much as 100 PSU when sea ice temperature reaches ~3 °C below the freezing point of seawater. Brine temperature typically ranges from −1.9 to −6.7 °C in the winter. Sea ice temperatures fluctuate in response to irradiance and atmospheric temperatures, but also change in response to the volume of snowfall. Accumulating snow on the ice cover combined with harsh atmospheric conditions can lead to the formation of a snowpack layer that absorbs UV radiation and provides insulation to the bottom ice layer. The fraction of irradiance reaching the sea ice matrix is thus also controlled by the amount of snowfall and varies from <0.01% to 5% depending on the thickness and density of the snowpack.
The surface of sea ice also allows the formation of frost flowers, which have their own unique microbial communities.
Carbon species, nutrients and gases
The fluctuation of brine salinity, which is controlled by atmospheric temperatures, is the single most influential factor on the chemistry of the sea ice matrix. The solubility of carbon dioxide and oxygen, two biologically essential gases, decreases in higher salinity solutions. This can result in hypoxia within regions of high heterotrophic activity in the sea ice matrix. Regions of high photosynthetic activity often exhibit internal depletion of inorganic carbon compounds and hyperoxia. These conditions have the potential to elevate brine pH and to further contribute to the creation of an extreme environment. In these conditions, high concentrations of DOM and ammonia and low concentrations of nutrients often characterize the ice matrix.
High brine salinity combined with an elevated pH reduces the rate at which gases and inorganic nutrients diffuse into the ice matrix. The concentration of nutrients such as nitrate, phosphate and silicate inside the sea ice matrix relies largely on the diffusive influx from the sea ice-water interface and to some extent on the atmospheric deposits on the sea ice-air interface. Iron concentrations in the Southern Ocean ice cover are thought to be regulated by the amount of new iron supply at the time of ice formation and were shown to be reduced during late winter.
The chemical properties of the sea ice matrix are highly complex and depend on the interaction between the internal sea ice biological assemblage as well as external physical factors. Winters are typically characterized by moderate oxygen levels that are accompanied by nutrient and inorganic carbon concentrations that are not growth limiting to phytoplankton. Summers are typically characterized by high oxygen levels that are accompanied by a depletion of nutrients and inorganic carbon. Because of its diffusive interaction with seawater, the lower part of the sea ice matrix is typically characterized by higher nutrient concentrations.
Colonization
Microorganisms present in the surface seawater during fall are integrated into the brine solution during ice formation. A small proportion of the initial microbial population colonizes the ice matrix while the rest is expelled with brine. Studies have shown that sea ice microbial retention can be enhanced by the presence of extracellular polymeric substances/polysaccharides (EPS) on the walls of the brine channels. EPS are polymeric substances found on the cell walls of microorganisms such as algae. They improve cell adherence to surfaces and, when found in sufficient concentration, are thought to play a role in recruiting other organisms such as microbes.
Airborne microorganisms make up a significant proportion of the microbial input to the ice matrix. Microorganisms located in the sea or in the ice matrix brine can be incorporated in falling snow or in aerosols, and subsequently transported by strong winds such as the West Wind Drift that causes the Antarctic Circumpolar Current. These airborne microorganisms can originate from terrestrial environment and marine environment, thus contributing to the diversity of the SIMCO.
Distribution
Spatial Distribution
Microbes colonizing the Antarctic sea ice are eventually incorporated in the pore spaces and brine channels of the ice matrix, but can also inhabit the ice-seawater interface. Pore spaces in the matrix lose their ability to exchange nutrients, DOM and microorganisms with brine at approximately -5 °C. This suggests that the Antarctic microbial community is fluid along the ice matrix during fall and spring and that it is restricted during winter.
In the Arctic, brine channels are also inhabited by bacteria. Channels as small as ≤200 μm offer a spatial refuge with microbial community concentrations of 1-2 orders of magnitude higher than in the remaining channel network.
Both the Antarctic and Arctic sea ice environments present strong vertical gradients of salinity, temperature, light, nutrients and DOM. These gradients were shown to induce strong vertical stratification in bacterial communities throughout the ice layer. Microbial abundance declines significantly with depth in the upper and middle ice, but not in the lowest, suggesting that much of the prokaryotic bacterial community is resistant to extreme environmental conditions. Heterotrophic bacteria were also shown to be more abundant at the bottom of the ice layer in zones of greater algae concentration, which are characterized by higher DOM and nutrient concentrations.
Temporal Distribution
The temporal distribution of microbial community composition in the Antarctic and Arctic sea ice does not present significant seasonal variability, despite extremes in environmental conditions. Previous studies of sea ice habitats have shown that the composition of SIMCO in early fall is identical to the source seawater community. The microbial community composition does not seem to change significantly in fall and winter, despite the extreme variability in irradiance, temperature, salinity and nutrient concentrations. In contrast, the abundance within the SIMCO is reduced throughout the winter as resources become limiting. Studies have shown that sea ice microalgae provide a platform and organic nutrient source for bacterial growth, therefore increasing community diversity and abundance. It has also been proven that microbes produce extracellular polymeric substances (EPS) to help retain nutrients and survive under high salinity and low temperature conditions.
The increase in irradiance levels in late spring promotes ice algal photosynthesis which in turn affects the microbial community abundance and composition. While most of the sea ice cover melts in late spring in the Antarctic and Arctic, multiyear sea ice occasionally persists when late spring and summer temperatures are lower than average. This suggests that certain microbial lineages may have adapted more efficiently to the extreme conditions of sea ice environments. Temporal abundance can also be affected by the thickness of the annual ice cover and seasonal temperature variations. The ice cover thickness was shown to regulate microbial production and the temperature of the ice matrix through layer insulation.
Community composition
A majority of the information on sea ice microbial community composition comes from 16S ribosomal RNA taxonomic marker genes and metagenomic analyses. Next-generation sequencing has allowed researchers to identify and quantify microbial communities, and to gain a more complete understanding of their structure.
Bacteria
Arctic
Metagenomic studies of Arctic sea ice show that the classes Alphaproteobacteria, Gammaproteobacteria and Flavobacteriia dominate. Within the Flavobacteriia class the genera Polaribacter, Psychrobacter, Psychroflexus, and Flavobacterium are the most common. Within Gammaproteobacteria the genera Glaciecola and Colwellia are the most common. Also found in Arctic sea ice samples were bacteria of the following classes and phyla: "Opitutae", Bacilli, "Cyanobacteria", Betaproteobacteria, Sphingobacteria, and Aquificota.
Antarctic
Metagenomic studies of the Ross Sea illustrate the high abundance of aerobic anoxygenic phototrophic bacteria in sea ice environments. These specialists were shown to mostly belong to the Alphaproteobacteria class. Genera of the Alphaproteobacteria class were shown to include Loktanella, Octadecabacter, Roseobacter, Sulfitobacter and Methylobacterium and to agree with previous phylogenetic analyses of sea ice around the Antarctic. A study of the SIMCO 16S ribosomal RNA at Cape Hallett in the Antarctic has shown that aerobic oxygenic phototrophic bacteria may be equally abundant.
Members of the Gammaproteobacteria and Flavobacteriia classes were also shown to be abundant within the ice matrix, and thus to be adapted to sea ice conditions. Genera of the Gammaproteobacteria class found in the Ross Sea and around Antarctic waters include Colwellia, Marinomonas, Pseudoalteromonas and Psychrobacter. The orders Chlamydiales and Verrucomicrobiales were also found in the sea ice microbial assemblage of these locations. The predominance of Gammaproteobacteria in sea ice around the globe has been reported by many studies. A large proportion of the identified SIMCO in these studies were shown to belong to phylotypes associated with heterotrophic taxa.
While this gives researchers an insight into the microbial community composition of the Antarctic sea ice, there are clear shifts between locations in the Southern Ocean. These shifts are attributed to biological and physical forcing factors. These factors include the composition of the microbial communities in place at the moment of sea ice formation, and the regional weather and wind patterns affecting the transport of snow and aerosols.
Archaea
Studies of the 16S ribosomal RNA subunits found in the sea ice cover of Terra Nova Bay have shown that archaea make up ≤6.6% of the total prokaryotic community in this environment. 90.8% of this archaeal community belonged to the phylum Nitrososphaerota (formerly Thaumarchaeota), a close relative of marine ammonia-oxidizing bacteria, while Euryarchaeota made up the rest of the community. Other molecular phylogenetic analyses of the SIMCO have detected no trace of the archaeal domain.
Protists
Metagenomic studies of Arctic sea ice have used 454 sequencing of 18S rDNA and 18S rRNA. These studies showed the dominance of three supergroups: Alveolata, Stramenopile, and Rhizaria. Within the Alveolata, the most common were Ciliates and Dinoflagellates. Within the Stramenopile group, most organisms were classified as Bacillariophyceae. Finally, most of the Rhizaria were classified as belonging to Thecofilosea.
Adaptation
Studies have shown that high concentrations of microbial cryoprotective exopolymers (EPS) are found in sea ice brine. These EPS were shown to correlate with a stable microbial community composition throughout the winter season. They are thought to play an important role in sea ice environments, where they act as a buffer and cryoprotectant against high salinity and ice-crystal damage. These exopolymers are believed to constitute a microbial adaptation to low temperatures in extreme environments.
Sea ice microbes have also developed antifreeze proteins, which prevent the formation of ice crystals that could damage bacterial membranes. It is common for these proteins to be rich in beta-sheets, as these prevent the formation of ice crystals.
Metabolic diversity
Role in the Microbial Loop
Bacteria in all environments contribute to the microbial loop, but the roles of sea ice microbial communities in the microbial loop differ due to the rapidly changing environmental conditions found in the Arctic and Antarctic. Sea ice algae contribute 10%–28% of the total primary production in ice-covered regions of the Antarctic. Microalgae provide a vital source of nutrition for juvenile zooplankton such as the Antarctic krill Euphausia superba in the winter. DOM derived from phototrophic microalgae is crucial to the microbial loop, by serving as a growth substrate for heterotrophic bacteria.
The microbial loop functions differently in sea ice, as compared to oligotrophic or temperate waters. Animals found in the extreme polar environments depend on the high bacterial production as a food source, despite the slow turnover of DOM. The microbial production of ammonium in nitrate-rich Antarctic waters may provide the necessary reductants for nitrogen fixation, increasing primary productivity of light-limited phytoplankton. Phytoflagellates and diatoms found in the Antarctic pelagic environment are directly digestible by metazoan herbivores.
See also
Marine microorganism
References
Sea ice
Microbial population biology
Marine ecoregions | Sea ice microbial communities | [
"Physics"
] | 3,133 | [
"Physical phenomena",
"Earth phenomena",
"Sea ice"
] |
56,981,277 | https://en.wikipedia.org/wiki/Bundwerk | Bundwerk is a method of building with timber that was used especially in the 19th century in Austria, South Tyrol and Bavaria. After log construction and timber framing, bundwerk is one of the most widespread forms of timber building techniques. It involved using wooden beams that were arranged partly in a lattice or diagonally over a cross. It often decorated the front and gable sides of agricultural buildings, frequently the grain barn or Stadel of quadrangular farms (Vierseithöfen).
In northeastern Upper Bavaria bundwerk is especially varied and colourful. By contrast, in the Werdenfelser Land and the region around Innsbruck only a few places exhibit this type of timber building throughout.
Bundwerk had its heyday between 1830 and 1860 when artists and woodcarvers, as well as carpenters, decorated the bundwerk with paintings and carvings, often with mythical creatures or Christian symbols.
Literature
Jüngling, Armin: Das Bundwerk am Bauernhaus des Chiemgaus. 1978
Stoermer, Hans W.: Zimmererkunst am Bauernhaus: Bayrisch-Alpines Bundwerk. 1981
Werner, Paul:
Das Bundwerk: eine alte Zimmermannstechnik: Konstruktion, Gestaltung, Ornamentik. 1985; 1988
Bundwerk in Bayern. 1988
Das Bundwerk in Bayern. 2000
Fratzen und Schlangen am Stadel Bundwerk; das schönste Zeugnis bäuerlicher Baukultur. In: Unser Bayern. 2000
Enno Burmeister: Bundwerkstadel im Rupertiwinkel. 1989
Günther Knesch:
Der Bundwerkstadel Architektur und Volkskunst im östlichen Oberbayern. 1989
Wie alt ist das niederbayerische Bundwerk. In: Bayerisches Jahrbuch für Volkskunde. 1989
Der Bundwerkstadel von Nodern aus der Zeit um 1600. In: Ars Bavarica. 1989
Zansham, ein reichgestalteter Bundwerkstadel des 19. Jahrhunderts. In: Forschungen zur historischen Volkskultur. 1989
Der „Stiefel“ am niederbayerischen Bundwerk. In: Ars Bavarica. 1991
Bundwerkstadel aus Ostoberbayern 28 Bauaufnahmen. 1991
Holzbau in Niederbayern: Ständerbau, Bundwerk, Blockbau am Stadel. In: Niederbayern. 1995
Der Bundwerkstadel von Feldkirchen bei Trostberg Bauwerk und Bedeutung. In: Ars Bavarica. 1996
Bundwerkstadel in Niederbayern eine Dokumentation. 1997
Bundwerkstadel bäuerliche Baukunst in Niederbayern. In: Ostbairische Grenzmarken. 1998
Der Bundwerkstadel beim „Wagenhofer“. In: Das Salzfass. 1998
Löwen am Bundwerk. In: Schönere Heimat. 2001
Ein Bundwerkstadelbilderbogen. In: Das Salzfass. 2001
Die Bundwerkbau in einigen Landschaften des Alpenvorlandes und des Alpenlandes. In: Historischer Holzbau in Europa. 1999
External links
Timber framing
Building
Structural system
Vernacular architecture
Woodworking | Bundwerk | [
"Technology",
"Engineering"
] | 736 | [
"Structural engineering",
"Timber framing",
"Building",
"Building engineering",
"Structural system",
"Construction"
] |
56,982,034 | https://en.wikipedia.org/wiki/Liguri%20Mosulishvili | Liguri Mosulishvili (; ; 10 June 1933 – 5 April 2010) was a Georgian physicist and Head of the Biophysics Department at the Andronikashvili Institute of Physics of Tbilisi State University. He held a PhD in physics.
Biography
Liguri Mosulishvili was born in 1933 in the village of Arashenda, Gurjaani Municipality, Georgia. He trained as a physicist at Tbilisi State University from 1953 to 1958. In 1958, at the invitation of Elephter Andronikashvili, he began working as a junior researcher at the Andronikashvili Institute of Physics.
In 1968, Mosulishvili earned his Candidate of Sciences degree in Physical and Mathematical Sciences (Experimental Physics) at Tbilisi State University, Georgia. He received his Ph.D. in Biophysics (Nuclear Physics and Biophysics) from the Andronikashvili Institute of Physics of Tbilisi State University in 1985. His major research interests included Life Sciences, Ecology, Biophysics, Molecular biology, Neutron activation analysis, and Experimental physics.
A small book of memoirs, Episodes from the Life of Physicists, by Liguri Mosulishvili (edited by Miho Mosulishvili, who is the son of Liguri Mosulishvili's brother), was published in 2010.
Working experience
2004–2006 — Principal Investigator, Andronikashvili Institute of Physics
1990–2004 — Head of Department, Andronikashvili Institute of Physics
1961–1990 — Head of Laboratory, Andronikashvili Institute of Physics
1958–1961 — Junior Researcher, Andronikashvili Institute of Physics
The Scientific Projects
2000–2002 — ISTC project G-348: "Molecular Mechanisms of Heavy Metal Transformation on Microbial-Mineral Surfaces, their Roles in Detoxifying High-Oxidation State Cr and Other Heavy Metal Ions." Collaborators: Lawrence Berkeley National Laboratory, Berkeley, CA, USA.
2001–2003 — ISTC project G-349: "In Vitro Study of Mechanisms of Intracellular Responses to Low-Dose and Low-Dose Rate Exposure to Cr(VI) Compounds." Collaborators: Lawrence Berkeley National Laboratory, Berkeley, CA, USA.
2001–2004 — ISTC project G-408: "Neutron-Activation Analysis of Blue-Green Alga Spirulina Platensis: Heavy and Toxic Elements Accumulation from Nutrient Medium in the Process of Cell Growth" (Project Manager). Project collaborator: David C. Glasgow.
2002–2003 — The International Atomic Energy Agency (IAEA) Coordinated Research Programme (CRP) Contract N11528/RBF: "Selenium Containing Blue-Green Algae Spirulina Platensis for Preventive Health Care Investigated by Nuclear Techniques."
Georgian grants
2004–2005 — Project 2.32.04: "The Study of Toxic Metals Absorption and Accumulation by Algae Spirulina Platensis" (Project Manager)
2000–2001 — Project 2.25: "The Study of Physical-Chemical Properties of Prokaryotic Systems Under Loading with Toxic Metals" (Project Manager)
1997–1999 — Project 2.22: "Study of Cd(II) Interaction with Biomacromolecules in Vivo and In Vitro Experiments" (Project Manager)
The scientific contacts
1999–2006 — Joint Institute for Nuclear Research, Dubna, Russia
2004–2006 — Sapienza University of Rome, Italy
2000–2005 — Oak Ridge National Laboratory, USA
1975–1985 — Dresden Generating Station, Germany
1970–1975 — Saulce-sur-Rhône (Loriol Le Pouzin Dam), France
1964–1987 — Reactor of Tashkent, Uzbekistan
Awards
2005 — Incentive prize from the Joint Institute for Nuclear Research for the research "Using neutron activation analysis for the development of new medical preparations and sorbents based on the blue-green alga Spirulina platensis," Dubna (Russia), N 3108.
2002 — Second prize in the Contest of Scientific, Methodological, and Applied Works at the Laboratory of Neutron Physics (LNF) for "Using neutron activation analysis for the development of new medical preparations"
Publications
Episodes from the Life of Physicists (A Notebook of Memoirs), published in 2010 by Saari in Tbilisi, Georgia.
Liguri Mosulishvili, Nelly Tsibakhashvili, Elene Kirkesali, Linetta Tsertsvadze, Marina Frontasyeva, Sergei Pavlov - Biotechnology in Georgia for Various Applications (Note: This link is dead), BULLETIN OF THE GEORGIAN NATIONAL ACADEMY OF SCIENCES, vol. 2, no. 3, 2008.
L.M. Mosulishvili, E.I. Kirkesali, A.I. Belokobylsky, A.I. Khizanishvili, M.V. Frontasyeva, S.S. Pavlov, S.S. Gundorina (2002), J. Pharm. Biomed. Anal., 30(1): 87.
L.M. Mosulishvili, E.I. Kirkesali, A.I. Belokobylsky, A.I. Khizanishvili, M.V. Frontasyeva, S.F. Gundorina, C.D. Oprea (2002), J. Radioanal. Nucl. Chem. Articles, 252(1): 15–20.
L.M. Mosulishvili, M.V. Frontasyeva, S.S. Pavlov, A.I. Belokobylsky, E.I. Kirkesali, A.I. Khizanishvili (2004), J. Radioanal. Nucl. Chem., 259(1): 41–45.
A.I. Belokobylsky, E.N. Ginturi, N.E. Kuchava, E.I. Kirkesali, L.M. Mosulishvili, M.V. Frontasyeva, S.S. Pavlov, N.G. Aksenova (2004), J. Radioanal. Nucl. Chem., 259(1): 65–68.
L.M. Mosulishvili, A.I. Belokobylsky, E.I. Kirkesali, M.V. Frontasyeva, S.S. Pavlov, N.G. Aksenova (2007), J. Neutron Res., 15(1): 49.
M.V. Frontasyeva, E.I. Kirkesali, N.G. Aksenova, L.M. Mosulishvili, A.I. Belokobylsky, A.I. Khizanishvili (2006), J. Neutron Res., 14(2): 1–7.
L.M. Mosulishvili, A.I. Belokobylsky, E.I. Kirkesali, M.V. Frontasyeva, S.S. Pavlov (2001), Patent of RF No. 2209077, priority of March 15.
L.M. Mosulishvili, A.I. Belokobylsky, E.I. Kirkesali, M.V. Frontasyeva, S.S. Pavlov (2002), Patent of RF No. 2230560, priority of June 11.
N.Ya. Tsibakhashvili, M.V. Frontasyeva, E.I. Kirkesali, N.G. Aksenova, T.L. Kalabegishvili, I.G. Murusidze, L.M. Mosulishvili, H.-Y.N. Holman (2006), Anal. Chem. 78: 6285–6290.
N.Ya. Tsibakhashvili, L.M. Mosulishvili, E.I. Kirkesali, T.L. Kalabegishvili, M.V. Frontasyeva, E.V. Pomyakushina, S.S. Pavlov (2004). J. Radioanal. Nucl. Chem., 259(3): 527–531.
N. Tsibakhashvili, T. Kalabegishvili, L. Mosulishvili, E. Kirkesali, S. Kerkenjia, I. Murusidze, H.-Y. Holman, M.V. Frontasyeva, S.F. Gundorina (2008), J. Radioanal. Nucl. Chem. (accepted).
N. Tsibakhashvili, N. Asatiani, M. Abuladze, B. Birkaya, N. Sapozhnikova, L. Mosulishvili, H.-Y. Holman (2002), Biomed. Chrom., 16: 327.
References
External links
Episodes from life of physicists (A notebook of memoirs) by Liguri Mosulishvili, Saari Publishing House, 2010 (in Georgian)
L. M. Mosulishvili's scientific contributions
ACCUMULATION OF TRACE ELEMENTS BY BIOLOGICAL MATRICE OF Spirulina platensis
Determination of the pI of Human Rhinovirus Serotype 2 by Capillary Isoelectric Focusing
The binding strength of Cd(II) to C-phycocyanin from Spirulina platensis
Capillary electrophoresis of Cr(VI) reducer Arthrobacter oxydans
Accumulation of trace elements by biological matrice of Spirulina platensis
Epithermal neutron activation analysis of Cr(VI)-reducer basalt-inhabiting bacteria
Patents of the author Mosulishvili Liguri Mikhaylovich (in Russian)
Physicists from Georgia (country)
Experimental physicists
20th-century physicists
1933 births
2010 deaths
Scientists from Tbilisi
Tbilisi State University alumni
Biophysicists | Liguri Mosulishvili | [
"Physics"
] | 2,088 | [
"Experimental physics",
"Experimental physicists"
] |
41,137,026 | https://en.wikipedia.org/wiki/Paradox%20Engineering | Paradox Engineering SA is a Swiss technology company that designs and markets solutions and services enabling smart cities and Industry 4.0 applications. The company's mission is to offer technologies to unlock the value of data. Its solutions are ready for the Internet of things, and enable cities and companies to collect, transport, store and deliver any kind of data lying in industrial plants or urban objects, transforming information into actionable intelligence to feed business decisions.
The technologies provided by the company are based on IPv6 / 6LoWPAN open standard protocol, and fully interoperable with other systems or applications.
It was established in 2005, with headquarters in Novazzano, Switzerland. In July 2015 the Japanese Group Minebea Co. Ltd., the world's leading comprehensive manufacturer of high-precision components, acquired full capital and assets of Paradox Engineering SA. The acquisition was aimed at accelerating the success of the Group in the Internet of Things and smart markets.
History
Paradox Engineering was founded in 2005 in Novazzano, Ticino Canton, Switzerland. The company was born as a telecommunication company, serving the niche market of industrial data transportation. At first it developed a one-stop-shop business model, providing virtual networks to connect customers' industrial operation sites and enable remote and condition monitoring programs.
In 2010 it began to design and engineer pioneer technologies to implement interoperable and highly scalable IPv6/6LoWPAN network infrastructures for industrial or urban applications.
In 2011 it entered the Smart Metering, Smart Grid and Smart City markets with the introduction of a modular solution for urban architectures (PE.AMI). Thanks to this product, the company received the Living Labs Global Award 2012 for presenting a wireless sensor network solution to meet the needs of the San Francisco Public Utilities Commission and to start a pilot project supporting the management of streetlights, EV charging stations, electric meters and traffic signals in the city.
In May 2013 the company launched PE.STONE, an OEM solution for developers and companies willing to build their own Internet of Things and smart applications.
In November 2013, the company announced the launch of a new vertical solution for parking management supporting utilities and municipalities to reduce traffic congestion and offering improved mobility service to citizens.
During 2013 the company unveiled two successful Smart City projects in Chiasso and Bellinzona, Switzerland.
On December 2, 2013 Minebea Co. Ltd joined the company as a shareholder to strengthen its presence in the Smart City/Smart Grid, smart building and industrial sensor network markets.
The company continued to grow in the Smart City market. In May 2015 Tinynode SA entered Paradox Engineering's ecosystem. As Tinynode was specialized in wireless vehicle detection systems, the two companies aimed at positioning as unique enabler of any kind of smart environment through compelling solutions for the Internet of Things.
Since July 2015, Paradox Engineering has been part of the MinebeaMitsumi Group. Leveraging Paradox Engineering as its IoT Excellence Center, the Group is accelerating the development of solutions for a fully networked IoT society, including applications for the automotive, medical, consumer, industrial and Smart City markets.
In 2020 the company launched a Smart Waste application. Latest developments relate to blockchain and cybersecurity services for Smart Cities.
References
Wireless networking
Swiss companies established in 2005
Information technology companies of Switzerland
Smart grid
Technology companies established in 2005 | Paradox Engineering | [
"Technology",
"Engineering"
] | 669 | [
"Wireless networking",
"Computer networks engineering"
] |
41,138,699 | https://en.wikipedia.org/wiki/Thermal%20manikin | The thermal manikin is a human model designed for scientific testing of thermal environments without the risk or inaccuracies inherent in human subject testing. Thermal manikins are primarily used in automotive, indoor environment, outdoor environment, military and clothing research. The first thermal manikins in the 1940s were developed by the US Army and consisted of one whole-body sampling zone. Modern-day manikins can have over 30 individually controlled zones. Each zone (right hand, pelvis, etc.) contains a heating element and temperature sensors within the “skin” of the manikin. This allows the control software to heat the manikin to a normal human body temperature, while logging the amount of power necessary to do so in each zone and the temperature of that zone.
History
Clothing insulation is the thermal insulation provided by clothing and it is measured in clo. The measuring unit was developed in 1941. Shortly afterward, thermal manikins were developed by the US Army for the purposes of carrying out insulation measurements on the gear they were developing. The first thermal manikins were standing, made of copper, and were one segment, measuring whole-body heat loss. Over the years these were improved upon by various companies and individuals employing new technologies and techniques as understanding of thermal comfort increased. In the mid-1960s, seated and multi-segmented thermal manikins were developed, and digital regulation was employed, allowing for much more accurate power application and measurement. Over time breathing, sneezing, moving (such as continuous walking or biking motions) and sweating were all employed in the manikins, in addition to male, female, and child sizes depending on the application. Nowadays most manikins used for research purposes will have a minimum of 15 zones, and as many as 34 with options (often as a purchasable add-on to the base manikin) for sweating, breathing, and movement systems although simpler manikins are also in use in the clothing industry. Additionally, in the early 2000s several different computer models of manikins were developed in Hong Kong, the UK, and Sweden.
Design
Modern thermal manikins consist of three main elements, with optional additional add-ons. The exterior skin of the manikin may be made of fiberglass, polyester, carbon fiber, or other heat conducting materials, within which are temperature sensors in each measurement zone. Underneath the skin is the heating element. Each zone of a thermal manikin is designed to be heated as evenly as possible. To achieve this, wiring is coiled throughout the interior of the manikin with as few gaps as possible. Electricity is run through the wire to heat it, with the power use of each zone being separate controlled and recorded by the manikin control software. Finally, the manikins are designed to simulate humans as accurately as possible, and so any necessary additional mass is added to the interior of the manikin and distributed as needed. Additionally, manikins may be fitted with supplemental devices that mimic human actions such as breathing, walking, or sweating.
The heating element of thermal manikins may be set up in one of three locations within the manikin: at the outer surface, within the skin of the manikin, or in the interior of the manikin. The further inside the manikin the heating element is, the more stable the heat output at the skin surface will be, however the time constant of the manikin’s ability to respond to changes in the external environment will also rise as it will take longer for heat to penetrate through the system.
Control
The amount of heat supplied to thermal manikins may be controlled in three ways. In “comfort mode” the PMV model equation found in ISO 7730 is applied to the manikin, and the controller software calculates the heat loss an average person would be comfortable undergoing within a given environment. This requires that the system know several basic facts about the manikin (surface area, hypothesized metabolic rate) while experimental factors must be input by the user (clothing insulation, Wet Bulb Globe Temperature). The second control method is constant heat flux from the manikin. That is, the manikin supplies a constant level of power, set by the user, and the skin temperature of the different segments is measured. The third method is that the skin temperature of the manikin is maintained constant at a user-specified value, while the power increases or decreases depending on the environmental conditions. This may arguably be considered a fourth method as well, as one can set the entire manikin to maintain the same temperature in all zones, or choose specific temperatures for each zone. Of these methods, the comfort mode is considered to be the most accurate representation of the actual heat distribution across the human body, while the heat flux mode is primarily used in high temperature settings (when room temperatures are likely to be above 34 °C).
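Of the three control methods, the constant-skin-temperature mode maps most directly onto a feedback loop. The sketch below is a minimal proportional controller for that mode, not any manufacturer's control software; the gain value and the `read_temperature`/`apply_power` callbacks are hypothetical placeholders.

```python
def control_step(set_points, powers, read_temperature, apply_power,
                 gain=5.0, dt=1.0):
    """One iteration of constant-skin-temperature control over all zones.

    set_points / powers: dicts keyed by zone name (e.g. 'right hand', 'pelvis').
    read_temperature / apply_power: callbacks to the (hypothetical) hardware.
    """
    for zone, target in set_points.items():
        error = target - read_temperature(zone)        # deg C; positive = zone too cold
        powers[zone] = max(0.0, powers[zone] + gain * error * dt)  # W; heating only
        apply_power(zone, powers[zone])                # logged power ~ dry heat loss
    return powers
```

At steady state the logged power per zone is the quantity of interest: it equals the dry heat loss from that zone to the environment.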
Calibration
Temperature sensors
To obtain the most accurate results possible it is necessary to calibrate the internal temperature sensors of the thermal manikin. A good calibration will use at least two temperature set points, a minimum of 10 °C apart from one another. The manikin is set up in a thermally controlled environmental chamber so that the temperature of all its segments will be nearly identical to the operative temperature of the chamber. This means that the manikin must be unclothed and with minimal insulation between any body part and the air. A good system to achieve this is to have the manikin seated in an open chair (allowing air movement to pass through), with its feet propped up off the ground. Fans should be used to increase air movement in the chamber, ensuring constant mixing. This is acceptable for maintaining a constant temperature as there is no evaporative cooling without sweating or condensation (humidity should be low to ensure no condensation occurs). At each temperature set point the manikin will need to remain in the room for 3 to 6 hours in order to come to steady-state conditions. Once equilibrium has been obtained, a calibration point may be obtained for each body segment (this should be included in the control software).
Equivalent temperature
The most accurate method of evaluating how the environment is affecting the thermal manikin is by calculating the equivalent temperature of the environment, accounting for the effects of radiant heat, air temperature, and air movement. It is necessary to calibrate the manikin based on this before each experiment, as the factor to convert power output and manikin skin temperature to equivalent temperature (the heat transfer coefficient) changes slightly for each zone of the manikin and based on clothing the manikin is wearing. Calibration should be carried out in a thermally controlled chamber, where radiant and air temperatures are nearly identical, and minimal temperature variation occurs throughout the space. It is necessary that the manikin be wearing the same clothing as it will during experimental tests. Multiple calibration points must be taken, minimally spanning the range of temperatures that will be tested in the experiment. During calibration air movement should be kept as low as possible, and as much of the manikin’s surface should be exposed to air and radiant heat as possible, by placing it on supports that keep it in a seated position but do not block the back or legs as a traditional seat would. Manikin data should be recorded for each calibration point when the air, surface, and manikin temperatures have all reached steady state. Temperature of the “seat” should also be recorded, and data collection should not be stopped before the seat has reached a steady state temperature. To calculate the heat transfer coefficient (hcali) the following equation is used:
$h_{cali} = \dfrac{Q_{si}}{t_{ski} - t_{eq}}$

where:
$Q_{si}$ = the dry heat loss, or power, recorded by the manikin
$t_{ski}$ = the skin temperature of the manikin
$t_{eq}$ = the equivalent temperature of the room (the calibration temperature)
This factor may then be used to calculate the equivalent temperature during further experiments in which radiant temperature and air velocity are not controlled, using the equation:

$t_{eq} = t_{ski} - \dfrac{Q_{si}}{h_{cali}}$
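As a minimal numeric illustration of the two-step procedure (the values for a single zone are made up):

```python
def calibration_coefficient(q_si, t_ski, t_eq):
    """Heat transfer coefficient h_cali from one calibration point, W/(m^2*K)."""
    return q_si / (t_ski - t_eq)

def equivalent_temperature(q_si, t_ski, h_cali):
    """Equivalent temperature from measured power and skin temperature."""
    return t_ski - q_si / h_cali

# Calibration in a 20.0 deg C chamber: the zone dissipates 70 W/m^2 at 34.0 deg C skin.
h = calibration_coefficient(70.0, 34.0, 20.0)     # 5.0 W/(m^2*K)
# Later experiment: the same zone needs 55 W/m^2 to hold 34.0 deg C skin.
print(equivalent_temperature(55.0, 34.0, h))      # 23.0 deg C
```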
Setup
Posture, positioning, and clothing affect the thermal manikin measurements. With regard to posture, the most accurate method would be to have the manikin in precisely the same posture as it was calibrated in. Clothing affects heat transfer to the manikin and may add a layer of air insulation. Clothing reduces the effects of air velocity and changes the strength of the free convection flow around the body and face. Fitted clothing should be used if possible to decrease uncertainty of measurements as loose clothing is likely to change shape any time the manikin is moved.
References
Heating, ventilation, and air conditioning
Temperature
Heat transfer | Thermal manikin | [
"Physics",
"Chemistry"
] | 1,739 | [
"Transport phenomena",
"Scalar physical quantities",
"Temperature",
"Physical phenomena",
"Thermodynamic properties",
"Physical quantities",
"Heat transfer",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
41,139,312 | https://en.wikipedia.org/wiki/GIT%20quotient | In algebraic geometry, an affine GIT quotient, or affine geometric invariant theory quotient, of an affine scheme with an action by a group scheme G is the affine scheme , the prime spectrum of the ring of invariants of A, and is denoted by . A GIT quotient is a categorical quotient: any invariant morphism uniquely factors through it.
Taking Proj (of a graded ring) instead of $\operatorname{Spec}$, one obtains a projective GIT quotient (which is a quotient of the set of semistable points).
A GIT quotient is a categorical quotient of the locus of semistable points; i.e., "the" quotient of the semistable locus. Since the categorical quotient is unique, if there is a geometric quotient, then the two notions coincide: for example, one has $G/H = G /\!\!/ H = \operatorname{Spec}(k[G]^H)$ for an algebraic group G over a field k and closed subgroup H.
If X is a complex smooth projective variety and if G is a reductive complex Lie group, then the GIT quotient of X by G is homeomorphic to the symplectic quotient of X by a maximal compact subgroup of G (Kempf–Ness theorem).
Construction of a GIT quotient
Let G be a reductive group acting on a quasi-projective scheme X over a field and L a linearized ample line bundle on X. Let

$R = \bigoplus_{n \geq 0} \Gamma(X, L^{\otimes n})$

be the section ring. By definition, the semistable locus $X^{ss}$ is the complement of the common zero set of the invariant sections in X; in other words, it is the union of all open subsets $X_s = \{ s \neq 0 \}$ for G-invariant global sections s of $L^{\otimes n}$, n large. By ampleness, each $X_s$ is affine; say $X_s = \operatorname{Spec} A_s$, and so we can form the affine GIT quotient

$X_s \to X_s /\!\!/ G = \operatorname{Spec}(A_s^G).$
Note that $A_s^G$ is of finite type by Hilbert's theorem on the ring of invariants. By the universal property of categorical quotients, these affine quotients glue and result in

$X^{ss} \to X^{ss} /\!\!/_L G,$

which is the GIT quotient of X with respect to L. Note that if X is projective, i.e., it is the Proj of R, then the quotient $X^{ss} /\!\!/ G$ is given simply as the Proj of the ring of invariants $R^G$.
The most interesting case is when the stable locus $X^s$ is nonempty; $X^s$ is the open set of semistable points that have finite stabilizers and orbits that are closed in $X^{ss}$. In such a case, the GIT quotient restricts to

$X^s \to X^s/G,$

which has the property that every fiber is an orbit. That is to say, it is a genuine quotient (i.e., geometric quotient) and one writes $X^s/G$. Because of this, when $X^s$ is nonempty, the GIT quotient is often referred to as a "compactification" of a geometric quotient of an open subset of X.
A difficult and seemingly open question is: which geometric quotient arises in the above GIT fashion? The question is of great interest since the GIT approach produces an explicit quotient, as opposed to an abstract quotient, which is hard to compute. One known partial answer to this question is the following: let $X$ be a locally factorial algebraic variety (for example, a smooth variety) with an action of $G$. Suppose there are an open subset $U$ as well as a geometric quotient $\pi: U \to U/G$ such that (1) $\pi$ is an affine morphism and (2) $U/G$ is quasi-projective. Then $U \subseteq X^s(L)$ for some linearized line bundle L on X. (An analogous question is to determine which subring is the ring of invariants in some manner.)
Examples
Finite group action by ℤ/2
A simple example of a GIT quotient is given by the $\mathbb{Z}/2$-action on $\mathbb{C}^2 = \operatorname{Spec}(\mathbb{C}[x,y])$ sending

$(x, y) \mapsto (-x, -y).$
Notice that the monomials $x^2, xy, y^2$ generate the ring $\mathbb{C}[x,y]^{\mathbb{Z}/2}$. Hence we can write the ring of invariants as

$\mathbb{C}[x,y]^{\mathbb{Z}/2} = \mathbb{C}[x^2, xy, y^2] \cong \mathbb{C}[a,b,c]/(b^2 - ac).$
Scheme theoretically, we get the morphism

$\mathbb{C}^2 \to \mathbb{C}^2 /\!\!/ (\mathbb{Z}/2) = \operatorname{Spec}\big(\mathbb{C}[a,b,c]/(b^2 - ac)\big),$

whose image is a singular subvariety of $\mathbb{C}^3$ with an isolated singularity at the origin. This can be checked using the differentials, which are

$df = (-c,\ 2b,\ -a) \quad \text{for } f = b^2 - ac,$

hence the only point where the differential and the polynomial both vanish is at the origin.
hence the only point where the differential and the polynomial both vanish is at the origin. The quotient obtained is a conical surface with an ordinary double point at the origin.
Torus action on plane
Consider the torus action of $\mathbb{C}^*$ on $\mathbb{C}^2$ by $t \cdot (x, y) = (tx, t^{-1}y)$. Note this action has a few orbits: the origin $(0,0)$, the punctured axes $\{(x, 0) : x \neq 0\}$ and $\{(0, y) : y \neq 0\}$, and the affine conics given by $xy = a$ for some $a \in \mathbb{C}^*$. Then, the GIT quotient has structure sheaf $\mathbb{C}[x,y]^{\mathbb{C}^*}$, which is the subring of polynomials $\mathbb{C}[xy]$, hence it is isomorphic to $\mathbb{A}^1$. This gives the GIT quotient $\mathbb{C}^2 /\!\!/ \mathbb{C}^* \cong \mathbb{A}^1$. Notice the inverse image of the point $0$ is given by the orbits $(0,0)$, $\{(x, 0) : x \neq 0\}$ and $\{(0, y) : y \neq 0\}$, showing the GIT quotient isn't necessarily an orbit space. If it were, there would be three origins, a non-separated space.
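A quick symbolic sanity check of the invariant ring in this example: a monomial $x^i y^j$ is fixed by $t \cdot (x, y) = (tx, t^{-1}y)$ exactly when $i = j$, so the invariants are generated by $xy$. The sympy loop below confirms this on small exponents.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
for i in range(3):
    for j in range(3):
        m = x**i * y**j
        # invariant iff m(t*x, y/t) - m(x, y) simplifies to zero
        invariant = sp.simplify(m.subs({x: t*x, y: y/t}) - m) == 0
        print(i, j, invariant)   # True exactly on the diagonal i == j
```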
See also
quotient stack
character variety
Chow quotient
Notes
References
Pedagogical
References
Algebraic geometry | GIT quotient | [
"Mathematics"
] | 1,002 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
41,143,378 | https://en.wikipedia.org/wiki/Proof%20of%20stake | Proof-of-stake (PoS) protocols are a class of consensus mechanisms for blockchains that work by selecting validators in proportion to their quantity of holdings in the associated cryptocurrency. This is done to avoid the computational cost of proof-of-work (POW) schemes. The first functioning use of PoS for cryptocurrency was Peercoin in 2012, although its scheme, on the surface, still resembled a POW.
Description
For a blockchain transaction to be recognized, it must be appended to the blockchain. In proof of stake blockchains, the appending entities are named minters or validators (in proof of work blockchains this task is carried out by the miners); in most protocols, the validators receive a reward for doing so. For the blockchain to remain secure, it must have a mechanism to prevent a malicious user or group from taking over a majority of validation. PoS accomplishes this by requiring that validators have some quantity of blockchain tokens, requiring potential attackers to acquire a large fraction of the tokens on the blockchain to mount an attack.
Proof of work (PoW), another commonly used consensus mechanism, uses a validation of computational prowess to verify transactions, requiring a potential attacker to acquire a large fraction of the computational power of the validator network. This incentivizes consuming huge quantities of energy. PoS is more energy-efficient.
Early PoS implementations were plagued by a number of new attacks that exploited the unique vulnerabilities of the PoS protocols. Eventually two dominant designs emerged: so-called Byzantine fault tolerance-based and chain-based approaches. Bashir identifies three more types of PoS:
committee-based PoS (a.k.a. nominated PoS, NPoS);
delegated proof of stake (DPoS);
liquid proof of stake (LPoS).
Attacks
The additional vulnerabilities of PoS schemes are directly related to their advantage: a relatively low amount of calculations required when constructing a blockchain.
Long-range attacks
The low amount of computing power involved allows a class of attacks that replace a non-negligible portion of the main blockchain with a hijacked version. These attacks are referred to in the literature by different names (Long-Range, Alternative History, Alternate History, History Revision) and are unfeasible in PoW schemes due to the sheer volume of calculations required. The early stages of a blockchain are much more malleable for rewriting, as they likely have a much smaller group of stakeholders involved, simplifying collusion. If per-block and per-transaction rewards are offered, a malicious group can, for example, redo the entire history and collect these rewards.
The classic "Short-Range" attack (bribery attack) that rewrites just a small tail portion of the chain is also possible.
Nothing at stake
Since validators do not need to spend a considerable amount of computing power (and thus money) on the process, they are prone to the Nothing-at-Stake attack: the participation in a successful validation increases the validator's earnings, so there is a built-in incentive for the validators to accept all chain forks submitted to them, thus increasing the chances of earning the validation fee. The PoS schemes enable low-cost creation of blockchain alternatives starting at any point in history (costless simulation), submitting these forks to eager validators endangers the stability of the system. If this situation persists, it can allow double-spending, where a digital token can be spent more than once. This can be mitigated through penalizing validators who validate conflicting chains ("economic finality") or by structuring the rewards so that there is no economic incentive to create conflicts. Byzantine Fault Tolerance based PoS are generally considered robust against this threat (see below).
Bribery attack
Bribery attack, where the attackers financially induce some validators to approve their fork of blockchain, is enhanced in PoS, as rewriting a large portion of history might enable the collusion of once-rich stakeholders that no longer hold significant amounts at stake to claim a necessary majority at some point back in time, and grow the alternative blockchain from there, an operation made possible by the low computing cost of adding blocks in the PoS scheme.
Variants
Chain-based PoS
This is essentially a modification of the PoW scheme, where the competition is based not on applying brute force to solving the identical puzzle in the smallest amount of time, but instead on varying the difficulty of the puzzle depending on the stake of the participant; the puzzle is solved if, on a tick of the clock $t$ ($\|$ is concatenation):

$\mathrm{hash}(h_{prev} \,\|\, A \,\|\, t) < b(A) \cdot m,$

where $h_{prev}$ is the hash of the previous block, $A$ is the participant's address, $b(A)$ is its stake, and $m$ is a difficulty-tuning constant.
The smaller amount of calculations required for solving the puzzle for high-value stakeholders helps to avoid excessive hardware.
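A minimal sketch of this puzzle check, assuming SHA-256 as the hash function and an arbitrary global constant m; in a real protocol both are fixed protocol parameters, and the names here are illustrative.

```python
import hashlib, time

M = 2**224  # assumed difficulty-tuning constant

def puzzle_solved(prev_hash: bytes, address: bytes, tick: int, stake: int) -> bool:
    """One attempt at the chain-based PoS puzzle for a given clock tick."""
    digest = hashlib.sha256(prev_hash + address + tick.to_bytes(8, 'big')).digest()
    return int.from_bytes(digest, 'big') < stake * M   # larger stake => easier target

# One attempt per tick of the clock, as in the scheme described above.
print(puzzle_solved(b'\x00' * 32, b'validator-1', int(time.time()), stake=1_000))
```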
Nominated PoS (NPoS)
Also known as "committee-based", this scheme involves an election of a committee of validators using a verifiable random function with probabilities of being elected higher with higher stake. Validators then randomly take turns producing blocks. NPoS is utilized by Ouroboros Praos and BABE.
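The stake weighting in the election step can be illustrated with a toy sampler. A real protocol such as Ouroboros Praos derives its randomness from a verifiable random function; this sketch replaces that with a seeded `random.Random` for brevity, and all names and numbers are made up.

```python
import random

def elect_committee(stakes: dict, size: int, seed: int) -> list:
    """Sample a validator committee with probability proportional to stake."""
    rng = random.Random(seed)          # stand-in for a verifiable random function
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=size)  # with replacement

print(elect_committee({'alice': 600, 'bob': 300, 'carol': 100}, size=5, seed=42))
```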
BFT-based PoS
The outline of the BFT PoS "epoch" (adding a block to the chain) is as follows:
A "proposer" with a "proposed block" is randomly selected by adding it to the temporary pool used to select just one consensual block;
The other participants, validators, obtain the pool, validate, and vote for one;
The BFT consensus is used to finalize the most-voted block.
The scheme works as long as no more than a third of validators are dishonest. BFT schemes are used in Tendermint and Casper FFG.
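A toy run of the three steps above, omitting proposer selection details and any penalty logic; only the vote tally and the greater-than-two-thirds finality check are shown.

```python
def finalize(validators, votes):
    """votes: mapping validator -> block id voted for; returns finalized block or None."""
    tally = {}
    for v in validators:
        tally[votes[v]] = tally.get(votes[v], 0) + 1
    block, count = max(tally.items(), key=lambda kv: kv[1])
    # BFT finality: strictly more than two-thirds of validators must agree.
    return block if count * 3 > 2 * len(validators) else None

print(finalize(['v1', 'v2', 'v3', 'v4'],
               {'v1': 'B', 'v2': 'B', 'v3': 'B', 'v4': 'A'}))   # 'B' (3 of 4 > 2/3)
```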
Delegated proof of stake (DPoS)
Proof of stake delegated systems use a two-stage process: first,
the stakeholders elect a validation committee, a.k.a. witnesses, by voting proportionally to their stakes, then the witnesses take turns in a round-robin fashion to propose new blocks that are then voted upon by the witnesses, usually in the BFT-like fashion. Since there are fewer validators in the DPoS than in many other PoS schemes, the consensus can be established faster. The scheme is used in many chains, including EOS, Lisk, Tron.
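A toy sketch of the two stages, with made-up stakes and names: stake-weighted approval voting elects the witnesses, which then produce blocks in round-robin order.

```python
def elect_witnesses(approvals, stakes, committee_size):
    """approvals: mapping voter -> list of candidates the voter approves of."""
    score = {}
    for voter, candidates in approvals.items():
        for c in candidates:
            score[c] = score.get(c, 0) + stakes[voter]   # votes weighted by stake
    return sorted(score, key=score.get, reverse=True)[:committee_size]

witnesses = elect_witnesses(
    approvals={'w1': ['A', 'B'], 'w2': ['B', 'C'], 'w3': ['B']},
    stakes={'w1': 50, 'w2': 30, 'w3': 20},
    committee_size=2,
)                                        # ['B', 'A']
for slot in range(4):                    # round-robin block production schedule
    print(slot, witnesses[slot % len(witnesses)])
```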
Liquid proof of stake (LPoS)
In the liquid PoS anyone with a stake can declare themselves a validator, but for the small holders it makes sense to delegate their voting rights instead to larger players in exchange for some benefits (like periodic payouts). A market is established where the validators compete on the fees, reputation, and other factors. Token holders are free to switch their support to another validator at any time. LPoS is used in Tezos.
'Stake' definition
The exact definition of "stake" varies from implementation to implementation. For instance, some cryptocurrencies use the concept of "coin age", the product of the number of tokens with the amount of time that a single user has held them, rather than merely the number of tokens, to define a validator's stake.
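For illustration, coin age can be computed as a simple sum over a user's holdings; the units here (token-days) and the exact formula vary by implementation.

```python
def coin_age(holdings):
    """holdings: iterable of (tokens, days_held) pairs; returns token-days."""
    return sum(tokens * days for tokens, days in holdings)

print(coin_age([(100, 30), (50, 90)]))   # 3000 + 4500 = 7500 token-days
```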
Implementations
The first functioning implementation of a proof-of-stake cryptocurrency was Peercoin, introduced in 2012. Other cryptocurrencies, such as Blackcoin, Nxt, Cardano, and Algorand followed. However, PoS cryptocurrencies were still not as widely used as proof-of-work cryptocurrencies.
In September 2022, Ethereum, the second-largest cryptocurrency, switched from PoW to a PoS consensus mechanism, after several proposals and some delays.
Concerns
Security
Critics have argued that the proof of stake model is less secure compared to the proof of work model.
Centralization
Critics have argued that the proof of stake will likely lead cryptocurrency blockchains being more centralized in comparison to proof of work as the system favors users who have a large amount of cryptocurrency, which in turn could lead to users who have a large amount of cryptocurrency having major influence on the management and direction for a crypto blockchain.
Legal status in US
US regulators have argued over the legal status of the proof-of-stake model, with the Securities and Exchange Commission claiming that staking rewards are the equivalent of interest, so coins such as ether and ada are financial securities. However, in 2024, the SEC sidestepped the question by recognising Ethereum market funds on condition that they did not stake their coins. The level of staking of ether at 27% of total supply was low compared with Cardano (66%) and Solana (63%). However, not staking their tokens meant that the funds were losing about 3% of potential returns a year.
Energy consumption
In 2021, a study by the University of London found that in general the energy consumption of the proof-of-work based Bitcoin was about a thousand times higher than that of the highest consuming proof-of-stake system that was studied even under the most favorable conditions and that most proof of stake systems cause less energy consumption in most configurations.
In January 2022, Erik Thedéen, the vice-chair of the European Securities and Markets Authority, called on the EU to ban the PoW model in favor of PoS because of the latter's lower energy consumption.
Ethereum's switch to proof-of-stake was estimated to have cut its energy use by 99%.
References
Sources
Cryptography
Cryptocurrencies | Proof of stake | [
"Mathematics",
"Engineering"
] | 1,998 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
41,144,410 | https://en.wikipedia.org/wiki/Equivariant%20topology | In mathematics, equivariant topology is the study of topological spaces that possess certain symmetries. In studying topological spaces, one often considers continuous maps $f : X \to Y$, and while equivariant topology also considers such maps, there is the additional constraint that each map "respects symmetry" in both its domain and target space.
The notion of symmetry is usually captured by considering a group action of a group $G$ on $X$ and $Y$ and requiring that $f$ is equivariant under this action, so that $f(g \cdot x) = g \cdot f(x)$ for all $x \in X$ and $g \in G$, a property usually denoted $f : X \to_G Y$. Heuristically speaking, standard topology views two spaces as equivalent "up to deformation," while equivariant topology considers spaces equivalent up to deformation so long as it pays attention to any symmetry possessed by both spaces. A famous theorem of equivariant topology is the Borsuk–Ulam theorem, which asserts that every $\mathbb{Z}/2$-equivariant map $f : S^n \to \mathbb{R}^n$ necessarily vanishes.
Induced G-bundles
An important construction used in equivariant cohomology and other applications includes a naturally occurring group bundle (see principal bundle for details).
Let us first consider the case where acts freely on . Then, given a -equivariant map , we obtain sections given by , where gets the diagonal action , and the bundle is , with fiber and projection given by . Often, the total space is written .
More generally, the assignment actually does not map to generally. Since is equivariant, if (the isotropy subgroup), then by equivariance, we have that , so in fact will map to the collection of . In this case, one can replace the bundle by a homotopy quotient where acts freely and is bundle homotopic to the induced bundle on by .
Applications to discrete geometry
In the same way that one can deduce the ham sandwich theorem from the Borsuk-Ulam Theorem, one can find many applications of equivariant topology to problems of discrete geometry. This is accomplished by using the configuration-space test-map paradigm:
Given a geometric problem , we define the configuration space, , which parametrizes all associated solutions to the problem (such as points, lines, or arcs.) Additionally, we consider a test space and a map where is a solution to a problem if and only if . Finally, it is usual to consider natural symmetries in a discrete problem by some group that acts on and so that is equivariant under these actions. The problem is solved if we can show the nonexistence of an equivariant map .
Obstructions to the existence of such maps are often formulated algebraically from the topological data of and . An archetypal example of such an obstruction can be derived having a vector space and . In this case, a nonvanishing map would also induce a nonvanishing section from the discussion above, so , the top Stiefel–Whitney class would need to vanish.
Examples
The identity map will always be equivariant.
If we let act antipodally on the unit circle, then is equivariant, since it is an odd function.
Any map is equivariant when acts trivially on the quotient, since for all .
See also
Equivariant cohomology
Equivariant stable homotopy theory
G-spectrum
References
Group actions (mathematics)
Topological spaces
Topology | Equivariant topology | [
"Physics",
"Mathematics"
] | 704 | [
"Mathematical structures",
"Group actions",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Symmetry"
] |
41,145,205 | https://en.wikipedia.org/wiki/Propagation%20of%20grapevines | The propagation of grapevines is an important consideration in commercial viticulture and winemaking. Grapevines, most of which belong to the Vitis vinifera family, produce one crop of fruit each growing season with a limited life span for individual vines. While some centenarian old vine examples of grape varieties exist, most grapevines are between the ages of 10 and 30 years. As vineyard owners seek to replant their vines, a number of techniques are available which may include planting a new cutting that has been selected by either clonal or mass (massal) selection. Vines can also be propagated by grafting a new plant vine upon existing rootstock or by layering one of the canes of an existing vine into the ground next to the vine and severing the connection when the new vine develops its own root system.
In commercial viticulture, grapevines are rarely propagated from seedlings as each seed contains unique genetic information from its two parent varieties (the flowering parent and the parent that provided the pollen that fertilized the flower) and would, theoretically, be a different variety than either parent. This would be true even if two hermaphroditic vine varieties, such as Chardonnay, cross pollinated each other. While the grape clusters that would arise from the pollination would be considered Chardonnay any vines that sprang from one of the seeds of the grape berries would be considered a distinct variety other than Chardonnay. It is for this reason that grapevines are usually propagated from cuttings while grape breeders will utilize seedlings to come up with new grape varieties including crossings that include parents of two varieties within the same species (such as Cabernet Sauvignon which is a crossing of the Vitis vinifera varieties Cabernet Franc and Sauvignon blanc) or hybrid grape varieties which include parents from two different Vitis species such as the Armagnac grape Baco blanc, which was propagated from the vinifera grape Folle blanche and the Vitis labrusca variety Noah.
Terminology
A color mutation is a grape variety that while genetically similar to the original variety is considered unique enough to merit being considered its own variety. Both Pinot gris and Pinot blanc are color mutations of Pinot noir.
In viticulture, a clone is a single vine that has been selected from a "mother vine" to which it is identical. This clone may have been selected deliberately from a grapevine that has demonstrated desirable traits (good yields, grape disease resistance, small berry size, etc.) and propagated as cuttings from that mother vine. Varieties such as Sangiovese and Pinot noir are well known to have a variety of clones. While there may be slight mutations to differentiate the various clones, all clones are considered genetically part of the same variety (i.e. Sangiovese or Pinot noir).
A selection massale is the opposite of cloning, where growers select cuttings from the mass of the vineyard, or a field blend.
A crossing is a new grape variety that was created by the cross pollination of two different varieties of the same species. Syrah is a crossing of two French Vitis vinifera varieties, Dureza from the Ardèche and Mondeuse blanche from Savoie. Theoretically, every seedling (also known as a selfling), even if it is pollinated by a member of the same grape variety (such as two Merlot vines), is a crossing, as any vine that results from the seed being planted will be a different grape variety distinct from either parent.
A hybrid is a new grape variety that was produced from a cross pollination of two different grape species. In the early history of American winemaking, grape growers would cross the European Vitis vinifera vines with American vine varieties such as Vitis labrusca to create French-American hybrids that were more resistant to American grape diseases such as downy and powdery mildew as well as phylloxera. When the phylloxera epidemic of the mid to late 19th century hit Europe, some growers in European wine regions experimented with using hybrids until a solution involving grafting American rootstocks to vinifera varieties was found. Eventually, the use of hybrids in wine production declined with their use formally outlawed by European wine laws in the 1950s.
Propagation methods
As commercial winemakers usually want to work with a desired grape variety that dependably produces a particular crop, most grapevines are propagated by clonal or massal selection of plant material. This can be accomplished in one of three ways.
Cuttings
This involves a shoot taken from a mother vine and then planted where the shoot will eventually sprout a root system and regenerate itself into a full-fledged vine with trunk and canopy. Often new cuttings will be first planted in a nursery where it is allowed to develop for a couple of years before being planted in the vineyard.
Grafting
Grafting is a process in which a new grape vine is produced by making a cut in the rootstock and then adding scionwood that is cut to fit inside the incision made in the rootstock. This involves removing the canopy and most of the trunk of an existing vine and replacing it with a cutting of a new vine that is sealed by a graft union.
There are two main types of grafting in the relation to the propagation of a grapevine.
Bench Grafting
This process is typically performed in a greenhouse at the beginning of a new year, during the late winter to early spring months. It is used on younger and smaller vines before the vines are planted in a vineyard. The type of cut made on the grape vine determines the classification of the Bench graft. The two techniques used to perform a Bench Graft are the Omega Graft and the Whip Graft.
The Omega Graft is performed by the rootstock and scion being grafted together by the two pieces being cut into shapes that align together.
The Whip Graft is performed by making an identical small cut at an angle into the rootstock and the scion, so they can be adjoined.
Field Grafting
Field grafting is performed after the vine has been planted in a vineyard and has aged a few years. The objective of using this method is to avoid replanting and to obtain a final product of a grapevine with two diversifications. The procedure of field grafting is performed with the vines still planted, by making two incisions in the rootstock of a certain type of grapevine and placing two scions of the same type, differing from the rootstock, into the rootstock. The most common ways to perform field grafting are the Chip Bud method, the T Bud method, the Cleft Graft and the Bark Graft.
The Chip Bud Method is performed shortly after the grape vine is planted, giving the rootstock enough time to become active but the bud of the grape vine is still inactive. It is performed by cutting two small slopes in both sides of the rootstock and cutting a small scion into a small bud and placing the scion bud into the cuts made on the rootstock.
The T Bud Method is performed by cutting a T shape at the bottom of the grapevine, just above the soil. Once the T is cut, the bark surrounding the cut is pulled back and the scion is placed between the two sides that were pulled back.
The Cleft Graft is performed on the branches of a grape vine when the rootstock is dormant. The method is performed by making a wedge in the rootstock and placing two scions into the wedge. After the graft starts growing, one of the scions is removed, leaving only one to grow.
The Bark Graft is performed by making three incisions on the edge of the grape vine's rootstock and removing the majority of the bark around each of the cuttings, leaving a small amount of bark at the end of the cut, then inserting three of the same scions into the incisions, using the remaining piece of the cut bark to cover the end of the scions.
Layering
In established vineyards where only a few vines need to be replaced within a row (such as vine lost to machine damage or disease), a new vine can be propagated by bending a cane from a neighboring vine into the ground and covering it with dirt. This segment of vine will soon begin sprouting its own independent root system while still being nourished by the connecting vine. Eventually, the connection between the two vines is severed, allowing each vine to grow independently.
Clonal versus massal selection
Each cutting, taken from a mother vine, is a clone of that vine. The way that a vine grower selects these cuttings can be described as either clonal or massal selection. In clonal selection, an ideal plant within a vineyard or nursery that has exhibited the most desirable traits is selected with all cuttings taken from that single plant. In massal (or "mass") selection, cuttings are taken from several vines of the same variety that have collectively demonstrated desirable traits.
Historically, massal selection was the primary means of vineyard propagation, particularly in traditional vineyards where vines are only sporadically replaced, often by layering a cane from a neighboring vine. In the 1950s, the isolation and identification of desirable clones in nurseries and breeding stations led to an increase in clonal selection, with new vineyard plantings seeking out clones from well-established vineyards and wine regions. This trend towards clonal selection has seen some criticism from wine writers and viticulturalists who complain about "mono-clonal" viticulture, which risks producing wines that are overly similar and dull.
Other criticisms of clonal selection involve the increased risk in vineyards lacking genetic diversity among its vines as well as the changing priorities in wine production. While many clones in the mid to late 20th century were isolated, some of the desirable traits exhibited by those clones (such as early ripening or high yield potential) may no longer be as desirable today where other traits (such as low yields and drought resistance) may be more prized.
References
Viticulture
Plant reproduction | Propagation of grapevines | [
"Biology"
] | 2,104 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction"
] |
41,145,444 | https://en.wikipedia.org/wiki/Bis%28triphenylphosphine%29platinum%20chloride | Bis(triphenylphosphine)platinum chloride is a metal phosphine complex with the formula PtCl2[P(C6H5)3]2. Cis- and trans isomers are known. The cis isomer is a white crystalline powder, while the trans isomer is yellow. Both isomers are square planar about the central platinum atom. The cis isomer is used primarily as a reagent for the synthesis of other platinum compounds.
Preparation
The cis isomer is prepared by heating solutions of platinum(II) chlorides with triphenylphosphine. For example, starting from potassium tetrachloroplatinate:
K2PtCl4 + 2 PPh3 → cis-Pt(PPh3)2Cl2 + 2 KCl
The trans isomer is prepared by treating potassium trichloro(ethylene)platinate(II) (Zeise's salt) with triphenylphosphine:
KPt(C2H4)Cl3 + 2 PPh3 → trans-Pt(PPh3)2Cl2 + KCl + C2H4
With heating or in the presence of excess PPh3, the trans isomer converts to the cis complex. The latter complex is the thermodynamic product due to triphenylphosphine being a strong trans effect ligand.
In cis-bis(triphenylphosphine)platinum chloride, the average Pt–P bond distance is 2.261 Å and the average Pt–Cl bond distance is 2.346 Å. In trans-bis(triphenylphosphine)platinum chloride, the Pt–P distance is 2.316 Å and the Pt–Cl distance is 2.300 Å.
The complex also undergoes photoisomerization.
See also
Bis(triphenylphosphine)palladium(II) chloride
Bis(triphenylphosphine)nickel(II) chloride
References
Platinum(II) compounds
Homogeneous catalysis
Triphenylphosphine complexes
Chloro complexes | Bis(triphenylphosphine)platinum chloride | [
"Chemistry"
] | 431 | [
"Catalysis",
"Homogeneous catalysis"
] |
54,061,668 | https://en.wikipedia.org/wiki/Liouville%E2%80%93Bratu%E2%80%93Gelfand%20equation | For Liouville's equation in differential geometry, see Liouville's equation.
In mathematics, the Liouville–Bratu–Gelfand equation or Liouville's equation is a non-linear Poisson equation, named after the mathematicians Joseph Liouville, Gheorghe Bratu and Israel Gelfand. The equation reads
$\nabla^2 \psi + \lambda e^\psi = 0.$
The equation appears in thermal runaway as Frank-Kamenetskii theory, and in astrophysics, for example in the Emden–Chandrasekhar equation. The equation also describes the space charge of electricity around a glowing wire and planetary nebulae.
Liouville's solution
In two dimensions with Cartesian coordinates $(x, y)$, Joseph Liouville proposed a solution in 1853 as
$\lambda e^\psi = 8\,\frac{|f'(z)|^2}{\left(1 + |f(z)|^2\right)^2},$
where $f(z)$ is an arbitrary analytic function of $z = x + iy$ with $f'(z) \neq 0$. In 1915, G.W. Walker found a solution by assuming a form for $f(z)$. If , then Walker's solution is
where is some finite radius. This solution decays at infinity for any , but becomes infinite at the origin for , becomes finite at the origin for and becomes zero at the origin for . Walker also proposed two more solutions in his 1915 paper.
Radially symmetric forms
If the system to be studied is radially symmetric, then the equation in dimension $n$ becomes
$\frac{\mathrm{d}^2\psi}{\mathrm{d}r^2} + \frac{n-1}{r}\,\frac{\mathrm{d}\psi}{\mathrm{d}r} + \lambda e^\psi = 0,$
where $r$ is the distance from the origin. With the boundary conditions
$\psi'(0) = 0$ and $\psi(1) = 0$
for the unit ball, a real solution exists only for $\lambda \leq \lambda_c$, where $\lambda_c$ is the critical parameter known as the Frank-Kamenetskii parameter. The critical parameter is $\lambda_c = 0.8785$ for $n = 1$, $\lambda_c = 2$ for $n = 2$ and $\lambda_c = 3.32$ for $n = 3$. For $1 \leq n \leq 2$, two solutions exist for $\lambda < \lambda_c$, and for $3 \leq n \leq 9$ infinitely many solutions exist, oscillating about the point $\lambda = 2(n-2)$. For $n \geq 10$, the solution is unique and in these cases the critical parameter is given by $\lambda_c = 2(n-2)$. Multiplicity of solutions for $n = 3$ was discovered by Israel Gelfand in 1963 and later, in 1973, generalized for all $n$ by Daniel D. Joseph and Thomas S. Lundgren.
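The critical values quoted above can be recovered numerically. The sketch below is a minimal shooting computation for the radial problem as reconstructed above: for each central value $\psi(0) = m$ it integrates the unscaled equation outward until $\psi$ crosses zero at some radius $r_*$, and the rescaling $r \to r/r_*$ then shows that the corresponding eigenvalue is $\lambda = r_*^2$; the grid of central values and the tolerances are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lam_of_center(m, n):
    """Return the lambda whose unit-ball solution has central value psi(0) = m."""
    def rhs(r, y):
        psi, dpsi = y
        # the (n-1)/r term is regular at the origin because dpsi -> 0 there
        geo = 0.0 if r == 0 else (n - 1) * dpsi / r
        return [dpsi, -geo - np.exp(psi)]
    hit = lambda r, y: y[0]                  # event: psi crosses zero
    hit.terminal, hit.direction = True, -1
    sol = solve_ivp(rhs, [0.0, 50.0], [m, 0.0], events=hit, rtol=1e-9, atol=1e-12)
    r_star = sol.t_events[0][0]              # radius where psi = 0
    return r_star**2                         # rescaling r -> r/r_star gives lambda

n = 3
centers = np.linspace(0.1, 12.0, 400)
lams = [lam_of_center(m, n) for m in centers]
print("lambda_c ~", max(lams))               # ~3.32 for n = 3, ~2 for n = 2
```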
The solution for that is valid in the range is given by
where is related to as
The solution for that is valid in the range is given by
where is related to as
References
Differential equations
Eponymous equations of physics | Liouville–Bratu–Gelfand equation | [
"Physics",
"Mathematics"
] | 407 | [
"Equations of physics",
"Mathematical objects",
"Differential equations",
"Eponymous equations of physics",
"Equations"
] |
54,061,735 | https://en.wikipedia.org/wiki/Head%20%28hydrology%29 | In hydrology, the head is the point on a watercourse up to which it has been artificially broadened and/or raised by an impoundment. Above the head of the reservoir natural conditions prevail; below it the water level above the riverbed has been raised by the impoundment and its flow rate reduced, unless and until banks, barrages, weir sluices or dams are overcome (overtopped), whereby a less frictional than natural course will exist (mid-level and surface rather than bed and bank currents) resulting in flash flooding below.
In principle, a distinction must be drawn between the head of a reservoir impounded by a dam, and the head of a works resulting from a barrage or canal locks.
Head of a reservoir
A head's location varies with the height of the water level against the dam. Since there is only an extremely low flow within the reservoir, and hence no water level gradient, the head can be clearly seen: it is where the farthest watercourse discharges into the reservoir.
Upstream of the actual reservoir there is likely to be a pre-dam, which typically has a constant water level, so the head there is reinforced.
The term does not apply to embankment (storage/settling) reservoirs, to which water is pumped from below.
Head of a works
On large rivers in all but arid climates, the head of a works is rarely fixed rigidly, as a significant flow rate and water gradient are sometimes seen within the impounded reach. The head can only be found by calculation or defined by observations with and without impoundment. Depending on the flow rate and the control of the barrage, locks or weir, its position will vary greatly and will not necessarily be where the so-called headworks are.
Many rivers (such as the Moselle) are barraged many times to make them navigable and/or to avoid uncontrolled flooding. In such a case only the higher stretches of river are uninfluenced by impoundment. On the other stretches the river has long "level" pounds but few or no natural heads, instead having artificial structures up to the top head. Ideal management of the higher heads will allow headroom to keep back some flood meadow water, so as not to compound heavy precipitation and the resultant run-off downstream; corollary channels with spare capacity are a further mitigation where land is at a premium (such as the Jubilee River). Ideal management of the lowest head will allow daily timed openings, at least in flood events, to coincide with an outgoing (ebb) tide rather than a flood tide.
See also
Hydraulic head
Staff (head) gauge
Hydraulic engineering
Hydrology | Head (hydrology) | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 540 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering"
] |
54,063,615 | https://en.wikipedia.org/wiki/Zeldovich%20number | The Zeldovich number is a dimensionless number which provides a quantitative measure for the activation energy of a chemical reaction which appears in the Arrhenius exponent, named after the Russian scientist Yakov Borisovich Zeldovich, who along with David A. Frank-Kamenetskii, first introduced in their paper in 1938. In 1983 ICDERS meeting at Poitiers, it was decided that the non-dimensional number will be named after Zeldovich.
It is defined as
$\beta = \frac{E_a}{R T_b} \cdot \frac{T_b - T_u}{T_b},$
where
$E_a$ is the activation energy of the reaction,
$R$ is the universal gas constant,
$T_b$ is the burnt gas temperature,
$T_u$ is the unburnt mixture temperature.
In terms of the heat release parameter $\alpha = (T_b - T_u)/T_b$, it is given by $\beta = \alpha E_a / (R T_b)$.
For typical combustion phenomena, the value of the Zel'dovich number lies in the range $8 \leq \beta \leq 20$. Activation energy asymptotics uses this number as the large parameter of expansion.
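To make the definition concrete, the snippet below evaluates $\beta$ for representative (assumed) values of the activation energy and the temperatures, and checks that the heat-release-parameter form gives the same number.

```python
R = 8.314                  # J/(mol K), universal gas constant
E_a = 167e3                # J/mol, representative activation energy (assumed)
T_u, T_b = 300.0, 2000.0   # unburnt and burnt gas temperatures (assumed)

beta = E_a * (T_b - T_u) / (R * T_b**2)
alpha = (T_b - T_u) / T_b                   # heat release parameter
print(round(beta, 2), round(alpha * E_a / (R * T_b), 2))  # both ~8.5
```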
References
Chemical kinetics
Combustion
Dimensionless numbers of fluid mechanics
Fluid dynamics
Dimensionless numbers of chemistry | Zeldovich number | [
"Chemistry",
"Engineering"
] | 192 | [
"Fluid dynamics stubs",
"Chemical reaction engineering",
"Chemical engineering",
"Combustion",
"Piping",
"Chemical reaction stubs",
"Chemical kinetics",
"Dimensionless numbers of chemistry",
"Chemical process stubs",
"Fluid dynamics"
] |
54,064,251 | https://en.wikipedia.org/wiki/Burke%E2%80%93Schumann%20flame | In combustion, a Burke–Schumann flame is a type of diffusion flame, established at the mouth of the two concentric ducts, by issuing fuel and oxidizer from the two region respectively. It is named after S.P. Burke and T.E.W. Schumann, who were able to predict the flame height and flame shape using their simple analysis of infinitely fast chemistry (which is now called as Burke–Schumann limit) in 1928 at the First symposium on combustion.
Mathematical description
Consider a cylindrical duct with axis along direction with radius through which fuel is fed from the bottom and the tube mouth is located at . Oxidizer is fed along the same axis, but in the concentric tube of radius outside the fuel tube. Let the mass fraction in the fuel tube be and the mass fraction of the oxygen in the outside duct be . Fuel and oxygen mixing occurs in the region . The following assumptions were made in the analysis:
The average velocity is parallel to axis ( direction) of the ducts,
The mass flux in the axial direction is constant,
Axial diffusion is negligible compared to the transverse/radial diffusion
The flame occurs infinitely fast (Burke–Schumann limit), therefore the flame appears as a reaction sheet across which the properties of the flow change
Effects of gravity have been neglected
Consider a one-step irreversible Arrhenius law, , where is the mass of oxygen required to burn unit mass of fuel and is the amount of heat released per unit mass of fuel burned. If is the mass of fuel burned per unit volume per unit time, then introducing the non-dimensional fuel and oxidizer mass fractions and the stoichiometry parameter,
the governing equations for fuel and oxidizer mass fraction reduce to
where the Lewis number of both species is assumed to be unity and $\rho D_T$ is assumed to be constant, where $D_T$ is the thermal diffusivity. The boundary conditions for the problem are
The equation can be linearly combined to eliminate the non-linear reaction term and solve for the new variable
,
where is known as the mixture fraction. The mixture fraction takes the value of unity in the fuel stream and zero in the oxidizer stream and it is a scalar field which is not affected by the reaction. The equation satisfied by is
(If the Lewis numbers of fuel and oxidizer are not equal to unity, then the equation satisfied by is nonlinear as follows from Shvab–Zeldovich–Liñán formulation). Introducing the following coordinate transformation
reduces the equation to
The corresponding boundary conditions become
The equation can be solved by separation of variables
where and are Bessel functions of the first kind and is the nth root of . Solutions can also be obtained for planar ducts instead of the axisymmetric ducts discussed here.
Flame shape and height
In the Burke-Schumann limit, the flame is considered as a thin reaction sheet outside which both fuel and oxygen cannot exist together, i.e., . The reaction sheet itself is located by the stoichiometric surface where , in other words, where
where is the stoichiometric mixture fraction. The reaction sheet separates fuel and oxidizer region. The inner structure of the reaction sheet is described by Liñán's equation. On the fuel side of the reaction sheet ()
and on the oxidizer side ()
For given values of (or, ) and , the flame shape is given by the condition , i.e.,
When (), the flame extends from the mouth of the inner tube and attaches itself to the outer tube at a certain height (under-ventilated case) and when (), the flame starts from the mouth of the inner tube and joins at the axis at some height away from the mouth (over-ventilated case). In general, the flame height is obtained by solving for in the above equation after setting for the under-ventilated case and for the over-ventilated case.
Since flame heights are generally large enough for the exponential terms in the series to be negligible, as a first approximation the flame height can be estimated by keeping only the first term of the series. This approximation predicts flame heights for both cases as follows
where
References
Fire
Combustion
Fluid dynamics | Burke–Schumann flame | [
"Chemistry",
"Engineering"
] | 850 | [
"Chemical engineering",
"Combustion",
"Piping",
"Fire",
"Fluid dynamics"
] |
60,437,375 | https://en.wikipedia.org/wiki/Mori-Zwanzig%20formalism | The Mori–Zwanzig formalism, named after the physicists and Robert Zwanzig, is a method of statistical physics. It allows the splitting of the dynamics of a system into a relevant and an irrelevant part using projection operators, which helps to find closed equations of motion for the relevant part. It is used e.g. in fluid mechanics or condensed matter physics.
Idea
Macroscopic systems with a large number of microscopic degrees of freedom are often well described by a small number of relevant variables, for example the magnetization in a system of spins. The Mori–Zwanzig formalism allows the finding of macroscopic equations that only depend on the relevant variables based on microscopic equations of motion of a system, which are usually determined by the Hamiltonian. The irrelevant part appears in the equations as noise. The formalism does not determine what the relevant variables are, these can typically be obtained from the properties of the system.
The observables describing the system form a Hilbert space. The projection operator then projects the dynamics onto the subspace spanned by the relevant variables. The irrelevant part of the dynamics then depends on the observables that are orthogonal to the relevant variables. A correlation function is used as a scalar product, which is why the formalism can also be used for analyzing the dynamics of correlation functions.
Derivation
A not explicitly time-dependent observable $A$ obeys the Heisenberg equation of motion
$\frac{\mathrm{d}A}{\mathrm{d}t} = iLA,$
where the Liouville operator $L$ is defined using the commutator, $iLA = \frac{i}{\hbar}[H, A]$, in the quantum case and using the Poisson bracket, $iLA = \{A, H\}$, in the classical case. We assume here that the Hamiltonian does not have explicit time-dependence. The derivation can also be generalized towards time-dependent Hamiltonians. This equation is formally solved by
$A(t) = e^{iLt} A(0).$
The projection operator $\mathcal{P}$ acting on an observable $X$ is defined as
$\mathcal{P}X = \frac{(X, A)}{(A, A)}\, A,$
where $A$ is the relevant variable (which can also be a vector of various observables), and $(\cdot, \cdot)$ is some scalar product of operators. The Mori product, a generalization of the usual correlation function, is typically used for this scalar product. For observables $X$ and $Y$, it is defined as
$(X, Y) = \frac{1}{\beta}\int_0^\beta \mathrm{Tr}\!\left(\bar{\rho}\, e^{\lambda H} X e^{-\lambda H} Y\right)\mathrm{d}\lambda,$
where $\beta$ is the inverse temperature, Tr is the trace (corresponding to an integral over phase space in the classical case) and $H$ is the Hamiltonian. $\bar{\rho}$ is the relevant probability operator (or density operator for quantum systems). It is chosen in such a way that it can be written as a function of the relevant variables only, but is a good approximation for the actual density, in particular such that it gives the correct mean values.
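As a small numerical aside, the projection just defined can be illustrated with a classical stand-in for the Mori product — a weighted average over discrete microstates — which is an assumption made purely for this sketch; the properties checked (idempotency and orthogonality of the residual) hold for any valid scalar product.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = rng.random(1000); rho /= rho.sum()     # equilibrium weights (illustrative)
A = rng.standard_normal(1000)                # relevant variable on microstates
X = rng.standard_normal(1000)                # some other observable

inner = lambda X, Y: np.sum(rho * X * Y)     # stand-in for the Mori product

def project(X, A):
    """P X = (X, A) / (A, A) * A: projection onto the relevant variable."""
    return inner(X, A) / inner(A, A) * A

PX = project(X, A)
print(np.allclose(project(PX, A), PX))       # idempotent: P^2 = P
print(np.isclose(inner(X - PX, A), 0.0))     # residual is orthogonal to A
```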
Now, we apply the operator identity
to
Using the projection operator introduced above and the definitions
$\Omega = (iLA, A)(A, A)^{-1}$ (frequency matrix),
$F(t) = e^{(1-\mathcal{P})iLt}(1 - \mathcal{P})\, iL\, A$ (random force) and
$K(t) = (F(0), F(t))(A, A)^{-1}$ (memory function), the result can be written as
$\frac{\mathrm{d}}{\mathrm{d}t}A(t) = \Omega\, A(t) - \int_0^t K(s)\, A(t-s)\,\mathrm{d}s + F(t).$
This is an equation of motion for the observable $A(t)$, which depends on its value at the current time $t$, the values at previous times (memory term) and the random force (noise, which depends on the part of the dynamics that is orthogonal to $A$).
Markovian approximation
The equation derived above is typically difficult to solve due to the convolution term. Since we are typically interested in slow macroscopic variables changing on timescales much larger than those of the microscopic noise, this has the effect of integrating over an infinite time limit while disregarding the lag in the convolution. We see this by expanding the equation to second order in , to obtain
,
where
.
Generalizations
For larger deviations from thermodynamic equilibrium, the more general form of the Mori–Zwanzig formalism is used, from which the previous results can be obtained through a linearization. In this case, the Hamiltonian has explicit time-dependence. In this case, the transport equation for a variable
,
where is the mean value and is the fluctuation, can be written as (using index notation with summation over repeated indices)
,
where
,
,
and
.
We have used the time-ordered exponential
and the time-dependent projection operator
These equations can also be re-written using a generalization of the Mori product. Further generalizations can be used to apply the formalism to time-dependent Hamiltonians, general relativity, and arbitrary dynamical systems.
See also
Nakajima–Zwanzig equation
Zwanzig projection operator
Notes
References
Hermann Grabert, Projection Operator Techniques in Nonequilibrium Statistical Mechanics, Springer Tracts in Modern Physics, Vol. 95, 1982
Robert Zwanzig, Nonequilibrium Statistical Mechanics, 3rd ed., Oxford University Press, New York, 2001
Statistical mechanics | Mori-Zwanzig formalism | [
"Physics"
] | 910 | [
"Statistical mechanics"
] |
60,439,553 | https://en.wikipedia.org/wiki/Keene%27s%20cement%20plaster | Keene's cement plaster or Keene's cement is a hard plaster formulation, primarily used for ornamental work. Alternative names are Martin's cement and Parian cement. It is a calcined formulation of regular calcium sulfate plaster with an alum admixture. The compound gives a hard finish that can be polished. The product was developed by Richard Wynn Keene and patented in 1838.
References
Plastering | Keene's cement plaster | [
"Physics",
"Chemistry",
"Engineering"
] | 87 | [
"Building engineering",
"Materials stubs",
"Coatings",
"Materials",
"Plastering",
"Matter"
] |
44,004,087 | https://en.wikipedia.org/wiki/Apparent%20longitude | Apparent longitude is celestial longitude corrected for aberration and nutation as opposed to mean longitude.
Apparent longitude is used in the definition of equinox and solstice. At an equinox, the apparent geocentric celestial longitude of the Sun is 0° or 180°; at a solstice, it is 90° or 270°. These points do not coincide exactly with zero or extreme declination, because the celestial latitude of the Sun is not exactly zero (it is less than 1.2 arcseconds).
Sources
Astronomical coordinate systems | Apparent longitude | [
"Astronomy",
"Mathematics"
] | 117 | [
"Astronomical coordinate systems",
"Astronomy stubs",
"Coordinate systems"
] |
44,005,352 | https://en.wikipedia.org/wiki/ECS%20Journal%20of%20Solid%20State%20Science%20and%20Technology | The ECS Journal of Solid State Science and Technology is a monthly peer-reviewed scientific journal covering solid state science and technology. The editor-in-chief is Krishnan Rajeshwar (University of Texas at Arlington). The Technical Editors are Francis D'Souza (University of North Texas), Aniruddh Jagdish Khanna (Applied Materials Inc.), Ajit Khosla (Yamagata University), Peter Mascher (McMaster University), Kailash C. Mishra (Osram Sylvania), and Fan Ren (University of Florida). The Associate Editors are Michael Adachi (Simon Fraser University), Netz Arroyo (Johns Hopkins University School of Medicine), Paul Maggard (North Carolina State University), Meng Tao (Arizona State University), and Thomas Thundat (University at Buffalo). The journal was established in 2012 and is published by The Electrochemical Society.
Abstracting and indexing
The journal is abstracted and indexed in the Science Citation Index Expanded, Current Contents/Physical, Chemical & Earth Sciences, Current Contents/Engineering, Computing & Technology, Chemical Abstracts Service, and Scopus. According to the Journal Citation Reports, the journal has a 2023 impact factor of 1.8.
References
External links
Materials science journals
Academic journals published by learned and professional societies
Academic journals established in 2012
Monthly journals
English-language journals
Electrochemical Society academic journals | ECS Journal of Solid State Science and Technology | [
"Physics",
"Materials_science",
"Engineering"
] | 288 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Materials science",
"Condensed matter physics",
"Condensed matter stubs"
] |
55,613,190 | https://en.wikipedia.org/wiki/Scotlandite | Scotlandite is a sulfite mineral first discovered in a mine at Leadhills in South Lanarkshire, Scotland, an area known to mineralogists and geologists for its wide range of different mineral species found in the veins that lie deep in the mine shafts. This specific mineral is found in the Susanna vein of Leadhills, where the crystals are formed as chisel-shaped or bladed. Scotlandite was actually the first naturally occurring sulfite, which has the ideal chemical formula of PbSO3. The mineral has been approved by the Commission on New Minerals and Mineral Names, IMA, to be named scotlandite for Scotland.
Occurrence
Scotlandite is found in association with pyromorphite, anglesite, lanarkite, leadhillite, susannite, and barite. It occurs in cavities in massive barite and anglesite, and is closely associated with lanarkite and susannite. Scotlandite represents the latest phase in the crystallization sequence of the associated lead secondary minerals. It can often be found in the vuggy anglesite as yellowish single crystals up to 1 millimeter in length that sometimes arrange in fan-shaped aggregates. Anglesite can usually be recognized as a very thin coating on scotlandite, which protects the sulfite from further oxidation. A second variety of scotlandite can also occur in discontinuously distributed cavities between the anglesite mass containing the first variety and the barite matrix. This variety is characterized by tiny, whitish to water-clear crystals and crystal clusters less than one millimeter in size, which encrust large portions of the interior of the cavities. Scotlandite is a uniquely rare mineral, occurring in small amounts at only a few locations around the world.
Physical properties
Scotlandite is a pale yellow, greyish-white, or colorless, transparent mineral with an adamantine or pearly luster. It exhibits a hardness of 2 on the Mohs hardness scale. Scotlandite occurs as chisel-shaped or bladed crystals elongated along the c-axis, with a tendency to form radiating clusters. Its crystals are characterized by the {100}, {010}, {011}, {021}, {031}, and {032} faces. Scotlandite shows perfect cleavage along the {100} plane and a less good one along the {010} plane. The measured density is 6.37 g/cm3.
Optical properties
Scotlandite is optically biaxial positive, meaning it refracts light along two axes, with 2Vmeas = 35°24′ (Na). The refractive indices are α ≈ 2.035, β ≈ 2.040, and γ ≈ 2.085 (Na). Dispersion is strong, v >> r. The extinction is β//b, with α∧[001] = 20° (γ∧[100] = 4°) in the obtuse angle β. The infrared spectrum of scotlandite shows conclusively that it is an anhydrous sulfite, with no OH groups or other polyatomic anions present. Electron microprobe analysis and infrared spectroscopy also prove that scotlandite must be a polymorph of lead sulfite.
Chemical properties
Scotlandite is a sulfite. Compared with chemically related compounds, its measured density (6.37 g cm−3, calculated Dx = 6.40 g cm−3) is very close to the value of anglesite (6.38 g cm−3), but distinctly different from that of lanarkite (6.92 g cm−3). Orthorhombic lead sulfite is of higher density (Dmeas = 6.54, calculated Dx = 6.56 g cm−3) but has the same chemical properties. The empirical chemical formula for scotlandite, calculated on the basis of Pb+S = 2, is Pb1.06S0.94O2.94 or, more ideally, PbSO3.
Chemical composition
X-ray crystallography
A small crystal of scotlandite, showing some cleavage faces, was examined using Weissenberg and precession techniques. Scotlandite is in the monoclinic crystal system. The only systematic extinctions observed in the single-crystal patterns were 0k0 with k odd. Thus the possible space group is either P21 or P21/m. The unit cell parameters obtained from the single-crystal study were used to index the X-ray powder pattern and were then refined with the indexed powder data. A subsequent study determined the space group to be P21/m (no. 11) with unit cell dimensions: a = 4.505 Å, b = 5.333 Å, c = 6.405 Å; β = 106.24°; Z = 2. If the present a and c axes are interchanged, the unit cell of scotlandite is very similar (isotypic) to that of molybdomenite, PbSeO3. Lead is coordinated to nine oxygen atoms with Pb−Oav = 2.75 Å, and possibly further to one sulfur atom with Pb−S = 3.46 Å. The average S−O distance in the pyramidal SO3 group is 1.52 Å.
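As a consistency check on the figures above, the sketch below recomputes the X-ray density from the cited monoclinic cell parameters, using the standard relations $V = abc\sin\beta$ and $D_x = ZM/(N_A V)$; the molar mass is the usual value for PbSO3, and the small difference from the reported 6.40 g/cm3 likely reflects rounding of the cell constants.

```python
import math

a, b, c = 4.505e-8, 5.333e-8, 6.405e-8  # cell edges in cm (1 angstrom = 1e-8 cm)
beta = math.radians(106.24)
Z = 2                                   # formula units per cell
M = 287.3                               # g/mol for PbSO3
N_A = 6.02214e23                        # Avogadro's number

V = a * b * c * math.sin(beta)          # monoclinic cell volume, ~1.48e-22 cm^3
D_x = Z * M / (N_A * V)
print(round(D_x, 2))                    # ~6.46 g/cm^3, near the reported 6.40
```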
See also
List of Minerals
References
Natural materials
Lead minerals
Monoclinic minerals
Minerals in space group 11 | Scotlandite | [
"Physics"
] | 1,130 | [
"Natural materials",
"Materials",
"Matter"
] |
55,614,105 | https://en.wikipedia.org/wiki/BioCompute%20Object | The BioCompute Object (BCO) project is a community-driven initiative to build a framework for standardizing and sharing computations and analyses generated from High-throughput sequencing (HTS—also referred to as next-generation sequencing or massively parallel sequencing). The project has since been standardized as IEEE 2791-2020, and the project files are maintained in an open source repository. The July 22nd, 2020 edition of the Federal Register announced that the FDA now supports the use of BioCompute (officially known as IEEE 2791-2020) in regulatory submissions, and the inclusion of the standard in the Data Standards Catalog for the submission of HTS data in NDAs, ANDAs, BLAs, and INDs to CBER, CDER, and CFSAN.
Originally started as a collaborative contract between the George Washington University and the Food and Drug Administration, the project has grown to include over 20 universities, biotechnology companies, public-private partnerships and pharmaceutical companies including Seven Bridges and Harvard Medical School. The BCO aims to ease the exchange of HTS workflows between various organizations, such as the FDA, pharmaceutical companies, contract research organizations, bioinformatic platform providers, and academic researchers. Due to the sensitive nature of regulatory filings, few direct references to material can be published. However, the project is currently funded to train FDA Reviewers and administrators to read and interpret BCOs, and currently has 4 publications either submitted or nearly submitted.
Background
One of the biggest challenges in bioinformatics is documenting and sharing scientific workflows in such a way that a computation and its results can be peer-reviewed or reliably reproduced. Bioinformatic pipelines typically use multiple pieces of software, each of which typically has multiple versions available, multiple input parameters, multiple outputs, and possibly platform-specific configurations. As with experimental parameters in a laboratory protocol, small changes in computational parameters may have a large impact on the scientific validity of the results. The BioCompute Framework provides an object-oriented design from which a BCO containing the details of a pipeline and how it was used can be constructed, digitally signed, and shared. The BioCompute concept was originally developed to satisfy FDA regulatory research and review needs for evaluation, validation, and verification of genomics data. However, the BioCompute Framework follows FAIR Data Principles and can be used broadly to provide communication and interoperability between different platforms, industries, scientists and regulators.
Utility
As a standardization for genomic data, BioCompute Objects are mostly useful to three groups of users: 1) academic researchers carrying out new genetic experiments, 2) pharma/biotech companies that wish to submit work to the FDA for regulatory review, and 3) clinical settings (hospitals and labs) that offer genetic tests and personalized medicine. The utility to academic researchers is the ability to reproduce experimental data more accurately and with less uncertainty. The utility to entities wishing to submit work to the FDA is a streamlined approach, again with less uncertainty and with the ability to more accurately reproduce work. For clinical settings, it is critical that HTS data and clinical metadata be transmitted in an accurate way, ideally in a standardized way that is readable by any stakeholder, including regulatory partners.
Format
The BioCompute Object is in json format and, at a minimum, contains all the software versions and parameters necessary to evaluate or verify a computational pipeline. It may also contain input data as files or links, reference genomes, or executable Docker components. A BioCompute Object can be integrated with HL7 FHIR as a Provenance Resource. Multiple joint implementations are also under development that leverage BCO's report-centric format, including CWL (one of which is part of an active government funded public contract with a cofounder of CWL to pilot and generate documentation for a joint BCO-CWL, as well as examples) and RO.
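To make the layout concrete, here is a minimal, hypothetical BCO skeleton serialized from a Python dict; the domain names follow the IEEE 2791-2020 structure, but every value is a placeholder, and a real object carries many more required fields (identifiers, checksums, an error_domain, and so on).

```python
import json

bco = {
    "object_id": "https://example.org/BCO_000001",   # hypothetical identifier
    "spec_version": "IEEE 2791-2020",
    "provenance_domain": {"name": "example pipeline", "version": "1.0"},
    "usability_domain": ["Illustrates the report-centric JSON layout"],
    "description_domain": {
        "pipeline_steps": [
            {"step_number": 1, "name": "aligner", "version": "0.7.17"},
        ]
    },
    "execution_domain": {"script": ["run.sh"], "software_prerequisites": []},
    "parametric_domain": [{"param": "seed_length", "value": "19", "step": "1"}],
    "io_domain": {"input_subdomain": [], "output_subdomain": []},
}
print(json.dumps(bco, indent=2))
```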
BCO Consortium
The BioCompute Object working group provided a means for different stakeholders to give input on current practices around the BCO. This working group was formed during preparation for the 2017 HTS Computational Standards for Regulatory Sciences Workshop, and was initially made up of the workshop participants. The growth and work of the BCO working group, as a direct result of the interaction between a variety of stakeholders from all interested communities, culminated in the official standard, IEEE 2791-2020, which was approved in January 2020. A public–private partnership was formed between GWU and CBER and has become an easy point of entry for new individuals or institutions joining the BCO project to participate in the discussion of best practices for the objects.
Implementations
The simple R package biocompute can create, validate, and export BioCompute Objects. The Genomics Compliance Suite is a Shiny app that offers similar features to regular expressions found in all modern text editors. There are several internally developed open source software packages and web applications that implement the BioCompute specification, three of which have been deployed in a publicly accessible AWS EC2 cloud. These include an instance of the High-performance Integrated Virtual Environment, the BioCompute Portal (a form-based web application that can create and edit BioCompute Objects based on the IEEE-2791-2020 standard, and a BioCompute compliant instance of Galaxy.
Some bioinformatics platforms have built-in support for Biocompute, which let a user automatically create a BCO from a workflow and edit the descriptive content.
DNAnexus and PrecisionFDA facilitate the generation of BCOs by importing workflows, allowing users to edit descriptive content. The platform supports metadata import and export of WDL and CWL scripts, and offers the BCOnexus tool, which is a high-level, platform-free tool with a graphical user interface that lets a user merge BCOs.
Velsera's Seven Bridges Genomics and Cancer Genomics Cloud also have support for BioCompute by enabling direct pre-population of BCO fields from workflows.
BioCompute has also been integrated into HIVE and the main Galaxy instance, both of which similarly enable users to automatically generate BCOs and edit content within these platforms.
BioCompute has also been implemented in the Common Fund Data Elements Playbook Partnership project. This implementation lets a user save a workflow once they are satisfied with the results, which aids traceability through the network of independently versioned resources and allows users to save queries and annotate them for future use, sharing, or repeatability.
Integration into platforms is meant to improve data handling and collaboration and provide effective ways for users to execute a workflow, and graphical representations of BCOs are often more intuitive ways of browsing or reading BCOs.
References
External links
Official Website
IEEE 2791-2020 open source project
Bioinformatics software
Interoperability
JSON
DNA sequencing | BioCompute Object | [
"Chemistry",
"Engineering",
"Biology"
] | 1,429 | [
"Telecommunications engineering",
"Bioinformatics software",
"Bioinformatics",
"Molecular biology techniques",
"DNA sequencing",
"Interoperability"
] |
55,615,276 | https://en.wikipedia.org/wiki/Archaerhodopsin | Archaerhodopsin proteins are a family of retinal-containing photoreceptors found in the archaea genera Halobacterium and Halorubrum. Like the homologous bacteriorhodopsin (bR) protein, archaerhodopsins harvest energy from sunlight to pump H+ ions out of the cell, establishing a proton motive force that is used for ATP synthesis. They have some structural similarities to the mammalian G protein-coupled receptor protein rhodopsin, but are not true homologs.
Archaerhodopsins differ from bR in that the claret membrane, in which they are expressed, includes bacterioruberin, a second chromophore thought to protect against photobleaching. Also, bR lacks the omega loop structure observed at the N-terminus of the structures of several archaerhodopsins.
Mutants of Archaerhodopsin-3 (AR3) are used as tools in optogenetics for neuroscience research.
Etymology
The term archaerhodopsin is a portmanteau of archaea (the domain in which the proteins are found) and rhodopsin (a photoreceptor responsible for vision in the mammalian eye).
archaea from Ancient Greek ἀρχαῖα (arkhaîa, "ancient"), the plural and neuter form of ἀρχαῖος (arkhaîos, "ancient").
rhodopsin from Ancient Greek ῥόδον (rhódon, "rose"), because of its pinkish color, and ὄψις (ópsis, "sight").
History
In the 1960s, a light-driven proton pump was discovered in Halobacterium salinarum and called bacteriorhodopsin. Over the following years, there were various studies of the membrane of H. salinarum to determine the mechanism of these light-driven proton pumps.
In 1988, the group of Manabu Yoshida at Osaka University reported a novel light-sensitive proton pump from a strain of Halobacterium, which they termed archaerhodopsin. A year later, the same group reported isolating the gene that encodes archaerhodopsin.
Family members
Seven members of the archaerhodopsin family have been identified to date.
AR1 and AR2
Archaerhodopsin 1 and 2 (AR1 and AR2) were the first archaerhodopsins to be identified and are expressed by Halobacterium sp. Aus-1 and Aus-2 respectively. Both species were first isolated in Western Australia in the late 1980s. The crystal structures of both proteins were solved by Kunio Ihara, Tsutomo Kouyama and co-workers at Nagoya University, together with collaborators at the Spring-8 synchrotron.
AR3/Arch
AR3 is expressed by Halorubrum sodomense. The organism was first identified in the Dead Sea in 1980 and requires a higher concentration of Mg2+ ions for growth than related halophiles. The aop3 gene was cloned by Ihara and colleagues at Nagoya University in 1999 and the protein was found to share 59% sequence identity with bacteriorhodopsin. The crystal structure of AR3 was solved by Anthony Watts at Oxford University and Isabel Moraes at the National Physical Laboratory, together with collaborators at Diamond Light Source.
Mutants of Archaerhodopsin-3 (AR3) are widely used as tools in optogenetics for neuroscience research.
AR3 has recently been introduced as a fluorescent voltage sensor.
AR4
AR4 is expressed in Halobacterium species xz 515. The organism was first identified in a salt lake in Tibet. The gene encoding it was identified by H. Wang and colleagues in 2000. In most bacteriorhodopsin homologs, H+ release to the extracellular medium takes place before a replacement ion is taken up from the cytosolic side of the membrane. Under the acidic conditions found in the organism's native habitat, the order of these stages in the AR4 photocycle is reversed.
AR-BD1
AR-BD1 (also known as HxAR) is expressed by Halorubrum xinjiangense. The organism was first isolated from Xiao-Er-Kule Lake in Xinjiang, China.
HeAr
HeAR is expressed by Halorubrum ejinorense. The organism was first isolated from Lake Ejinor in Inner Mongolia, China.
AR-TP009/ArchT
AR-TP009 is expressed by Halorubrum sp. TP009. Its ability to act as a neural silencer has been investigated in mouse cortical pyramidal neurons.
General features
Occurrence
Like other members of the microbial rhodopsin family, archaerhodopsins are expressed in specialised, protein-rich domains of the cell surface membrane, commonly called the claret membrane. In addition to ether lipids, the claret membrane contains bacterioruberin, (a 50-carbon carotenoid pigment) which is thought to protect against photobleaching. Atomic force microscope images of the claret membranes of several archaerhodopsins, show that the proteins are trimeric and are arranged in a hexagonal lattice. Bacterioruberin has also been implicated in oligomerisation and may facilitate protein-protein interactions in the native membrane.
Function
Archaerhodopsins are active transporters, using the energy from sunlight to pump H+ ions out of the cell to generate a proton motive force that is used for ATP synthesis. Removal of the retinal cofactor (e.g. by treatment with hydroxylamine) abolishes the transporter function and dramatically alters the absorption spectra of the proteins. The proton pumping ability of AR3 has been demonstrated in recombinant E. coli cells and of AR4 in liposomes.
In the resting or ground state of archaerhodopsin, the bound retinal is in the all-trans form, but on absorption of a photon of light, it isomerizes to 13-cis. The protein surrounding the chromophore reacts to the change of shape and undergoes an ordered sequence of conformational changes, which are collectively known as the photocycle. These changes alter the polarity of the local environment surrounding titratable amino acid side chains inside the protein, enabling H+ to be pumped from the cytoplasm to the extracellular side of the membrane. The intermediate states of the photocycle may be identified by their absorption maxima.
Structures
Crystal structures of the resting or ground states of AR1 (3.4 Å resolution), AR2 (1.8 Å resolution) and AR3 (1.07 and 1.3 Å) have been deposited in the Protein Data Bank. Proteins possess seven transmembrane α-helices and a two-stranded extracellular-facing β-sheet. Retinal is covalently bonded via Schiff base to a lysine residue on helix G. The conserved DLLxDGR sequence, close to the extracellular-facing N-terminus of both proteins, forms a tightly curved omega loop that has been implicated in bacterioruberin binding. The cleavage of the first 6 amino acids and the conversion of Gln7 to a pyroglutamate (PCA) residue was also observed in AR3, as previously reported for bacteriorhodopsin.
Use in research
Archaerhodopsins drive the hyperpolarization of the cell membrane by pumping protons out of the cell in the presence of light, thereby inhibiting action potential firing of neurons. This process is associated with an increase in extracellular H+ (i.e., decreased pH) linked to the activity of these proteins. These characteristics make archaerhodopsins commonly used tools for optogenetic studies, as they act as transmission inhibition factors in the presence of light. When expressed within intracellular membranes, the proton pump activity increases the cytosolic pH; this functionality can be used for optogenetic acidification of lysosomes and synaptic vesicles when the proteins are targeted to these organelles.
Notes
References
Archaea proteins
Integral membrane proteins
Photosynthesis | Archaerhodopsin | [
"Chemistry",
"Biology"
] | 1,712 | [
"Biochemistry",
"Archaea proteins",
"Photosynthesis",
"Archaea"
] |
55,617,041 | https://en.wikipedia.org/wiki/Marine%20biogenic%20calcification | Marine biogenic calcification is the production of calcium carbonate by organisms in the global ocean.
Marine biogenic calcification is the biologically mediated process by which marine organisms produce and deposit calcium carbonate minerals to form skeletal structures or hard tissues. This process is a fundamental aspect of the life cycle of some marine organisms, including corals, mollusks, foraminifera, certain types of plankton, and other calcifying marine invertebrates. The resulting structures, such as shells, skeletons, and coral reefs, function as protection, support, and shelter and create some of the most biodiverse habitats in the world. Marine biogenic calcifiers also play a key role in the biological carbon pump and the biogeochemical cycling of nutrients, alkalinity, and organic matter.
Processes of Marine Biogenic Calcification
Biochemical mechanisms
Cellular and molecular processes of biogenic calcification
Calcium carbonate plays a fundamental role in the skeletal formation of marine calcifiers. The skeletal structures of these organisms are predominantly composed of calcium carbonate minerals, specifically aragonite and calcite. These structures provide support, protection, and housing for marine calcifiers and are formed through the biochemical processes of biomineralization to precipitate the crystal structures that form the hard tissues of these organisms.
The biogenic formation of calcium carbonate structures is the result of a combination of biological and physical processes such as genetics, cellular activity, crystal competition, growth in confined spaces, and self-organization processes. The composition of these structures, and the mechanisms involved in building them, are highly diverse. For example, some corals can incorporate both calcite and aragonite polymorphs into their skeletons. Some species, like corals and bryozoans, can incorporate other minerals to form complex protein matrices that perform specific functions.
The key steps involved in marine biogenic calcification include the uptake of dissolved calcium ions (Ca2+) and carbonate ions (CO32-) from seawater, the precipitation of calcium carbonate crystals, and the controlled formation of skeletal structures through biomineralization processes. These organisms often regulate the calcification process through the secretion of organic molecules and proteins that influence the nucleation and growth of crystalline structures.
A range of biochemical calcification (biocalcification) mechanisms exist, indicated by the fact that marine calcifiers use different forms of calcium carbonate minerals. Within this range of mechanisms, there are two broad categories of biogenic calcification in marine organisms: extracellular mineralization and intracellular mineralization. In particular, mollusks and corals use the extracellular strategy in which ion exchange pumps actively pump ions out of a cell into the extracellular space, where environmental conditions, such as pH, can be tightly controlled. In contrast, during intracellular mineralization the calcium carbonate is formed within the organism and can either be kept within the organism as an internal structure or is later moved to the outside while retaining the cell membrane covering. Broadly, the intracellular mechanism pumps ions into a vesicle within the cell. This vesicle can then be secreted to the outside of the organism. Often, cells will fuse their membranes and combine these vesicles in order to build very large calcium carbonate structures that would not be possible within a single cell.
Forms of calcium carbonate
The three most common calcium carbonate minerals are aragonite, calcite, and vaterite. Although these minerals have the same chemical formula (CaCO3), they are considered polymorphs because the atoms that make up the molecule are stacked in different arrangements. For example, aragonite minerals have an orthorhombic crystal lattice structure, while calcite crystals have a trigonal structure. Some of the calcite polymorphs are further subdivided by relative magnesium content (Mg/Ca ratio), with calcite solubility increasing with increasing Mg.
The solubility of various forms of CaCO3 differs in seawater; specifically, aragonite exhibits greater solubility compared to pure calcite.
Chemical processes and saturation state
The surface ocean engages in air-sea interactions and absorbs carbon dioxide (CO2) from the atmosphere, making the ocean the Earth's largest sink for atmospheric CO2. Carbon dioxide dissolves in and reacts with seawater to form carbonic acid. Subsequent reactions then produce carbonate (CO32−), bicarbonate (HCO3−), and hydrogen (H+) ions. Carbonate and bicarbonate are also deposited into the global ocean by rivers through the weathering of rock formations. The three species of carbon in seawater, carbon dioxide, bicarbonate, and carbonate, make up the total concentration of dissolved inorganic carbon (DIC) in the ocean. Approximately 90% of DIC is bicarbonate ions, 10% is carbonate ions, and <1% is dissolved carbon dioxide, with some spatial variation. The equilibria reactions between these species result in the buffering of seawater in terms of the concentrations of hydrogen ions present.
The following chemical reactions exhibit the dissolution of carbon dioxide in seawater and its subsequent reaction with water:
CO2(g) + H2O(l) ⥨ H2CO3(aq)
H2CO3(aq) ⥨ HCO3−(aq) + H+(aq)
HCO3−(aq) ⥨ CO32−(aq) + H+(aq)
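The relative proportions of these carbon species at a given pH follow directly from the two dissociation reactions above. Below is a minimal Python sketch; the pK values are illustrative seawater constants (roughly 25 °C, salinity 35), and real applications would use constants matched to temperature, salinity, and pressure:

```python
def dic_fractions(pH, pK1=5.86, pK2=8.92):
    """Equilibrium fractions of CO2*, HCO3-, and CO3 2- within DIC at a
    given pH, derived from the two dissociation reactions above.
    pK1 and pK2 are assumed, illustrative seawater values."""
    h = 10.0 ** (-pH)
    K1, K2 = 10.0 ** (-pK1), 10.0 ** (-pK2)
    denom = h * h + K1 * h + K1 * K2
    return {
        "CO2*": h * h / denom,        # dissolved CO2 plus H2CO3
        "HCO3-": K1 * h / denom,
        "CO3^2-": K1 * K2 / denom,
    }

# At a typical surface-ocean pH of 8.1 this gives roughly 0.5% CO2*,
# 86% bicarbonate, and 13% carbonate, close to the split noted above.
print(dic_fractions(8.1))
```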
This series of reactions governs the pH levels in the ocean and also dictates the saturation state of seawater, indicating how saturated or unsaturated the seawater is with carbonate ions. Consequently, the saturation state significantly influences the balance between the dissolution and calcification processes in marine biogenic calcifiers. When seawater is oversaturated with calcium carbonate, the concentration of calcium ions and carbonate ions exceeds the saturation point for a particular mineral, such as aragonite or calcite, which make up the skeletons of many marine organisms. Such conditions are favorable to marine calcifiers for the formation of calcium carbonate skeletons or shells. When seawater is undersaturated, meaning the concentration of calcium and carbonate ions is below the saturation point, it becomes challenging for marine calcifiers to build and maintain their skeletal structures, as the equilibrium conditions favor dissolution of calcium carbonate. As a general rule, seawater that is undersaturated (Ω < 1) can dissolve the structures of calcifying organisms. However, many organisms see negative effects on growth at saturation states above Ω = 1. For example, a saturation state of Ω = 3 is optimal for coral growth, so a saturation state Ω < 3 can potentially have negative effects on coral growth and survival.
Calcium carbonate saturation can be determined using the following equation:
Ω = ([Ca2+][CO32−])/Ksp
where the numerator ([Ca2+][CO32−]) denotes the concentration of calcium and carbonate ions and the denominator (Ksp) refers to the mineral (solid) phase stoichiometric solubility product of calcium carbonate.
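As a worked example, the saturation state can be computed directly from this equation. In the sketch below, the concentrations and the stoichiometric Ksp for aragonite are assumed, order-of-magnitude surface-seawater values, not reference data:

```python
def saturation_state(ca, co3, ksp):
    """Omega = [Ca2+][CO3 2-] / Ksp, with concentrations in mol/kg and
    Ksp the stoichiometric solubility product of the mineral phase."""
    return (ca * co3) / ksp

# Assumed, illustrative values: [Ca2+] ~ 10.3 mmol/kg, [CO3 2-] ~ 200 umol/kg,
# and an aragonite Ksp of ~6.5e-7 mol^2/kg^2 (order of magnitude only).
omega = saturation_state(ca=1.03e-2, co3=2.0e-4, ksp=6.5e-7)
print(f"Omega ~ {omega:.1f}")  # ~3: oversaturated, favorable for calcification
```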
When saturation is high, organisms can extract calcium and carbonate ions from seawater, forming solid crystals of calcium carbonate:
Ca2+(aq) + 2HCO3−(aq) → CaCO3(s) + CO2 + H2O
For marine calcifiers to build and maintain calcium carbonate structures, CaCO3 production must be greater than CaCO3 loss through physical, chemical, and biological processes. This net production can be thought of as follows:
CaCO3 accretion = CaCO3 production – CaCO3 dissolution – physical loss of CaCO3
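The budget is straightforward bookkeeping, as the short sketch below makes explicit (all input values are arbitrary illustrations):

```python
def caco3_accretion(production, dissolution, physical_loss):
    """Net CaCO3 accretion per the budget above; positive values mean
    the structure grows, negative values mean net loss."""
    return production - dissolution - physical_loss

print(caco3_accretion(production=4.0, dissolution=1.5, physical_loss=1.0))  # 1.5
```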
The decreasing saturation of seawater with respect to calcium carbonate, associated with ocean acidification (a result of increased carbon dioxide (CO2) absorption by the oceans), poses a significant threat to marine calcifiers. As CO2 concentrations in seawater rise, a decrease in pH and a reduction in carbonate ion concentrations in seawater follows. And, since calcification is a source of CO2 to the surrounding water, decreased rates of global calcification would in turn alter the rate of atmospheric CO2 absorption, feeding back on these effects. This can make it difficult for marine organisms to precipitate and maintain their calcium carbonate structures, affecting growth, development, and overall health.
The widespread use of calcification by marine organisms has relied on the ability of calcium carbonate to readily form in seawater, where the saturation states (Ω) of aragonite and calcite minerals have consistently surpassed Ω = 1 (indicating oversaturation) in surface waters for hundreds of millions of years. The impacts of reduced calcium carbonate saturation on marine calcifiers have broader ecological implications, as these organisms play vital roles in marine ecosystems. For example, coral reefs, which are built by coral polyps secreting calcium carbonate skeletons, are particularly vulnerable to changes in calcium carbonate saturation.
There is much debate in the scientific community on whether calcification rates correlate more with carbonate ions and saturation state or with pH. Some researchers state that a correlation exists between calcification and the Ω of carbonate ions in seawater. Others argue that, from a physiological standpoint, calcification in many marine organisms is controlled more by the concentrations of seawater bicarbonate (HCO3−) and protons (H+) than by Ω. Further research is essential to gain a comprehensive understanding of the intricate connections between Ω, ocean acidification, and their impacts on the calcification rates of marine biogenic calcifiers, elucidating the distinct roles played by each.
Marine calcifying organisms
Corals
Coral reefs, physical structures formed from calcium carbonate, are biologically and ecologically important to the regions in which they occur. Their robust calcification abilities have resulted in extensive calcium carbonate deposits, some housing significant hydrocarbon reserves. However, this group only accounts for about 10% of the global production of calcium carbonate.
Corals undergo extracellular calcification and first develop an organic matrix and skeleton on top of which they form their aragonite structures. It is proposed that calcification via pH upregulation of the coral's extracellular calcifying fluid occurs at least in part via Ca2+-ATPase, an enzyme in the calicoblastic epithelium that pumps Ca2+ ions into the calcifying region and ejects protons (H+). This process circumvents the kinetic barriers to CaCO3 precipitation that exist naturally in seawater.
Molluscs
Mollusks are a diverse group including slugs, oysters, limpets, snails, scallops, mussels, clams, cephalopods and others. Mollusks employ a strategic approach to protect their soft tissues and deter predation by developing an external calcified shell. This process involves specialized cells following genetic instructions to synthesize minerals under non-equilibrium conditions. The resulting minerals exhibit complex shapes and sizes and are formed within a confined space. These organisms also pump hydrogen ions out of the calcifying area so that they do not bind to carbonate ions, which would otherwise prevent crystallization of calcium carbonate.
Echinoderms
Echinoderms, of the phylum Echinodermata, include organisms such as sea stars, sea urchins, sand dollars, crinoids, sea cucumbers and brittle stars. These organisms form extensive endoskeletons consisting of magnesium-rich calcite. Magnesium-rich calcite retains the chemical composition of CaCO3 but features substitutions of Mg for Ca within the calcite lattice (calcite and aragonite being the mineral forms, or polymorphs, of CaCO3). Adult echinoderm skeletons consist of teeth, spines, tests, tube feet, and in some cases, spicules. Echinoderms serve as excellent blueprints for biomineralization. Adult sea urchins are particularly popular subjects for studying the molecular and cellular processes that the calcification and biomineralization of their skeletal structures require. Unlike many other marine calcifiers, echinoderm tests are not formed purely from calcite; instead, their structures also heavily incorporate organic matrices that increase the toughness and strength of their endoskeletons.
Crustaceans
Crustaceans have a hard outer shell formed from calcium carbonate. These organisms form a network of chitin-protein fibers and then precipitate calcium carbonate within this matrix. The chitin-protein fibers are first hardened by sclerotization, or crosslinking of protein and polysaccharides, followed by the crosslinking of proteins with other proteins. The presence of a hard, calcified exoskeleton means that the crustacean has to molt and shed the exoskeleton as its body size increases. This links molting cycles to calcification processes, making access to a regular source of calcium and carbonate ions crucial for the growth and survival of crustaceans. Various body parts of the crustacean will have a different mineral content, varying the hardness at these locations with the harder areas being generally stronger. This calcite shell provides protection for the crustaceans, meaning between molting cycles the crustacean must avoid predators while it waits for its calcite shell to form and harden.
Foraminifera
Foraminifera, or forams, are single-celled protists that form chambered shells (tests) from calcium carbonate. Forams first appeared approximately 170 million years ago, and populate oceans globally. Forams are microscopic organisms, typically no larger than 1 mm in length. The calcification and dissolution of their shells cause changes both in the surface seawater carbonate chemistry and in deep-water chemistry. These organisms are excellent paleo-proxies as they record ambient water chemistry during shell formation and are well-preserved in the sedimentary fossil record. Planktonic foraminifera, found in large numbers in the ocean, contribute significantly to oceanic carbonate production. A greater proportion of planktonic species host algal symbionts than their benthic counterparts.
Coccolithophores
Phytoplankton, especially haptophytes such as coccolithophores, are also well known for their calcium carbonate production. It is estimated that these phytoplankton may contribute up to 70% of the global calcium carbonate precipitation, with coccolithophores the largest phytoplankton contributors, alongside diatoms and dinoflagellates. Roughly 200 species of coccolithophores live in the ocean, contributing between 1 and 10% of total ocean primary productivity, and under the right conditions they can form large blooms. These large bloom formations are a driving force for the export of calcium carbonate from the surface to the deep ocean in what is sometimes called "coccolith rain". As the coccolithophores sink to the seafloor they contribute to the vertical carbon dioxide gradient in the water column.
Coccolithophores produce calcite plates termed coccoliths which together cover the entire cell surface, forming the coccosphere. The coccoliths are formed using the intracellular strategy, in which the plates are formed in a coccolith vesicle, but the product forming within the vesicle varies between the haploid and diploid phases. A coccolithophore in the haploid phase will produce what is called a holococcolith, while one in the diploid phase will produce heterococcoliths. Holococcoliths are small calcite crystals held together in an organic matrix, while heterococcoliths are arrays of larger, more complex calcite crystals. These are often formed over a pre-existing template, giving each plate its particular structure and forming complex designs. Each coccolithophore is a cell surrounded by the exoskeleton coccosphere, but there exists a wide range of sizes, shapes and architectures between different cells. Advantages of these plates may include protection against infection by viruses and bacteria, as well as protection from grazing zooplankton. The calcium carbonate exoskeleton also enhances the amount of light the coccolithophore can take up, increasing the level of photosynthesis. Finally, the coccoliths protect the phytoplankton from photodamage by UV light from the sun.
The coccolithophores are also important in the geological history of Earth. The oldest coccolithophore fossil records are more than 209 million years old, placing their earliest presence in the Late Triassic period. Their calcium carbonate formation may have been the first deposition of carbonate on the seafloor.
Corallinales (red algae)
Calcifying rhodophytes stock their filamentous cell walls with calcium carbonate and magnesium. Corallinales is an order of red algae whose distribution ranges across the world's oceans. Examples include Corallina, Neogoniolithon, and Harveylithon. The magnesium-rich calcium carbonate of the Corallinales cell wall provides shelter from predators and structural integrity in the intertidal zone. CaCO3 production in coralline algae also plays a role in habitat formation and provides resources for benthic invertebrates.
Calcifying bacteria
Evidence shows that some calcifying cyanobacteria strains have existed for millions of years and contributed to large land formations. About 70 strains of cyanobacteria can precipitate calcium carbonate, including some strains of Synechococcus, Bacillus sphaericus, Bacillus subtilis, and Sporosarcina psychrophila.
Morphological variations in calcium carbonate skeletons
Structural adaptations in different marine organisms
Diverse algae exhibit distinct mechanisms of CaCO3 formation, with calcification occurring internally or externally. Calcification may play a role in producing CO2 or supporting processes that need H+, based on the observed partial reaction. Phytoplankton species relying on CO2 diffusion for photosynthesis may face limitations due to CO2 concentration and diffusion to the chloroplast's Rubisco site. Calcifying macroalgae like Halimeda and Corallina also produce CaCO3 in alkaline extracellular spaces.
Coccolithophorid phytoplankton form CaCO3 in crystalline structures known as coccoliths, with holococcoliths formed externally and heterococcoliths produced intracellularly. Various coccolithophores produce two coccolith types: Heterococcoliths, from diploid cells, are complex, while holococcoliths, from haploid stages, are less studied. Factors influencing life cycle phase transitions and the role of specific proteins like GPA in coccolith morphology are explored. Polysaccharides, particularly coccolith-associated polysaccharides (CAPs), emerge as key regulators of calcite growth and morphology. CAPs' diverse roles, including nucleation promotion and inhibition, vary between species. External polysaccharides also influence coccolith adhesion and organization. Recent findings link cellular transport processes, carbonate saturation conditions, and regulatory processes determining calcite precipitation rate and morphology. Unexpectedly, silicon's role in coccolith morphology regulation is species-dependent, highlighting physiological distinctions among coccolithophore groups. These revelations raise questions about ecological implications, evolutionary adaptations, and the impact of changing ocean silicate levels on coccolithogenesis.
Calcification rates in coccolithophores often correlate with photosynthesis, implying a potential metabolic role. Heterococcoliths develop inside intracellular vesicles, with coccolith formation showing a unity ratio with photosynthetic carbon fixation under high calcification rates. The variability in isotope fractionation and calcification mechanisms underscores these organisms' adaptability and complexity in responding to environmental factors.
For corals, DIC from the seawater is absorbed and transferred to the coral skeleton. An anion exchanger is then used to secrete DIC at the site of calcification. This DIC pool is also used by algal symbionts (dinoflagellates) that live in the coral tissue. These algae photosynthesize and produce nutrients, some of which are passed to the coral. The coral in turn emits ammonium waste products, which the algae take up as nutrients. A tenfold increase in calcium carbonate formation has been observed in corals containing algal symbionts compared with corals that lack this symbiotic relationship. The coral algal symbionts, Symbiodinium, show decreased populations with increased temperatures, often leaving the coral without pigments and unable to photosynthesize (a condition known as coral bleaching).
Evolution of biogenic calcification
The evolution of biogenic calcification and carbonate structures within the eukaryotic domain is complex, highlighted by the distribution of mineralized skeletons across major clades. Five out of the eight major clades feature species with mineralized skeletons, and all five clades involve organisms that precipitate calcite or aragonite. Skeletal evolution occurred independently in foraminiferans and echinoderms, suggesting two separate origins of CaCO3 skeletons. The common ancestry for echinoderm and ascidian skeletons is less clear, but a conservative estimate indicates that carbonate skeletons evolved at least twenty-eight times within Eukarya.
Phylogenetic insights highlight repeated innovations in carbonate skeleton evolution, raising questions about homology in underlying molecular processes. Skeleton formation involves controlled mineral precipitation in specific biological environments, requiring directed calcium and carbonate transport, molecular templates, and growth inhibitors. Biochemical similarities, including the synthesis of acidic proteins and glycoproteins guiding mineralization, suggest an ancient capacity for carbonate formation in eukaryotes. While skeletons may not share structural homology, underlying physiological pathways are common, reflecting multiple cooptations of molecular and physiological processes across eukaryotic organisms.
The Cambrian Period marks a significant watershed in skeletal evolution, with the appearance of mineralized skeletons in various groups. Skeletal diversity increased during this period, driven by predation pressure favoring protective armor evolution. The Cambrian radiation of mineralized skeletons was likely part of a broader animal diversity expansion.
The evolution of mineralized skeletons during the Cambrian did not occur instantly, with a gradual increase in abundance and diversity over 25 million years. Environmental changes and predation pressure played key roles in shaping skeletal evolution. The diversity of minerals and skeletal architectures during this period challenges explanations solely based on changing ocean chemistry. The interplay between genetic possibility and environmental opportunity, influenced by factors like increased oxygen tensions, likely contributed to Cambrian diversification. Later Cambrian oceans witnessed a decline in mineralized skeletons, potentially influenced by high temperatures and pCO2 associated with a super greenhouse. Skeletal physiological responses to environmental conditions remain an area of study. Large-scale variations in carbonate chemistry suggest a connection between ocean chemistry and the mineralogy of carbonate precipitation. Skeletal organisms that precipitate massive skeletons under limited physiological control show stratigraphic patterns corresponding to shifts in seawater chemistry. This interplay between physiology, evolution, and environment underscores the complexity of mineralized skeleton evolution across geological time.
Calcium carbonate cycling and the biological carbon pump
The calcium carbonate cycle in the global ocean is of great significance to the biological, chemical, and physical state of the ocean. Mineral calcium carbonate most commonly presents as calcite in the ocean, and the majority of calcite is produced biologically in the upper layer of the ocean. CaCO3 material is exported from the upper ocean to sediments on the ocean floor where it either dissolves or is buried. Alternatively, CaCO3 can dissolve or be remineralized within the water column prior to reaching the seafloor.
Upon reaching the seafloor, CaCO3 undergoes a diagenetic process that ends in either dissolution or burial. The distribution of sediments consisting of calcium carbonate is fairly even across the global oceans, but specific locations are determined by the solubility and saturation level of calcium carbonate.
The “biological carbon pump” is a colloquial term coined by scientists to summarize the global carbon cycle in the ocean and its relationship to the biological processes that occur throughout the ocean. The calcium carbonate cycle is inherently linked to the biological pump. The formation of biogenic calcium carbonate by marine calcifiers is one way to add ballast to sinking particles and enhance transport of carbon to the deep ocean and seafloor. The calcium carbonate counter pump refers to the biological process of precipitation of carbonate and the sinking of particulate inorganic carbon. This process releases CO2 into the surface ocean and atmosphere across timescales spanning 100 to 1,000 years. Its crucial role in regulating atmospheric pCO2 significantly influences global changes in atmospheric CO2 concentration.
Inorganic sources of calcium carbonate
Of all the metals important to biogeochemical cycles in the ocean, calcium is one of the most significant in both its mobility and the role it plays in regulating climate over millions of years through its presence in calcium carbonate. Calcium has the ability to migrate relatively easily between the hydrosphere, the biosphere, and the crust of the Earth.
Calcium and bicarbonate ions are largely deposited into the ocean from the weathering of rock formations and are transported via riverine input. This process occurs on very long timescales. Weathering accounts for approximately 60-90% of solute calcium within the global calcium cycle. Limestone rock, which consists mostly of calcite, is a prime example of a rich source of calcium to the ocean. The source of the majority of inorganic calcium present in the ocean is due to riverine deposition, though volcanic activity interacting with seawater does provide some calcium as well. The distribution of calcium sources described above is the case for both the present day oceanic calcium budget, and the historical budget over the last 25 million years. The formation of biogenic calcium carbonate is the primary mechanism of removal of calcium in the ocean water column.
Impact of environmental factors on calcification
Rising temperature and light exposure
Marine biogenic calcifiers, such as corals, are facing challenges due to increasing ocean temperatures, leading to prolonged warming events. When sea surface temperatures exceed the local summer maximum monthly mean, coral bleaching and mortality occur as a result of the breakdown in symbiosis with Symbiodiniaceae. Predicted increases in summer-time temperatures, coupled with ocean warming, are expected to impact coral health and overall rates of calcification, particularly in tropical regions where many corals already live close to their upper thermal limits.
Corals are highly adapted to their local seasonal temperature and light conditions, influencing their physiology and calcification rates. While increased temperature or light levels typically stimulate calcification up to a certain optimum, beyond which rates decline, the effects of temperature and light on the calcifying fluid chemistry are less clear. Coral calcification is a biologically mediated process influenced by the regulation of internal calcifying fluid chemistry, including pH and dissolved inorganic carbon. The impacts of temperature and light on these factors remain a knowledge gap, with laboratory studies yielding contrasting results. Decoupling the effects of temperature and light on calcification processes is challenging due to their seasonal co-variation, highlighting the need for further research to address this gap and enhance our understanding of how marine biogenic calcifiers respond to future climate change.
Ocean acidification
Calcifying organisms are particularly at risk due to changes in the chemical composition of ocean water associated with ocean acidification. As pH decreases due to ocean acidification, the availability of carbonate ions (CO32-) in seawater also decreases. Therefore, calcifying organisms experience difficulty building and maintaining their skeletons or shells in an acidic environment. There has been considerable debate in the literature regarding whether organisms are responding to reduced pH or reduced mineral saturation state, as both variables decline with ocean acidification. Recent studies that have isolated the effects of saturation state independent of pH changes point toward saturation state as the most important factor impacting shell formation and development. Nevertheless, the carbonate chemistry must be more fully constrained to better interpret ecological responses to ocean acidification.
Marine calcifiers respond to reduced carbonate ion availability in different ways. For example, coral reefs experience inhibited growth at decreased pH, and existing calcium carbonate structures are weakened. Other organisms are particularly vulnerable in the early stages of their life cycle. Bivalves, for instance, are particularly susceptible during early larval stages, when initial shell formation carries a high energetic cost to the individual's development. In contrast, adult bivalves are considerably more resilient to reduced pH.
Human interactions and applications
Economic importance
Shellfish industry
Ocean acidification (OA) presents a formidable threat to global shellfish production, particularly exerting its impact on calcification processes. Projections indicate that by the end of the century, mussel and oyster calcification could witness substantial reductions of 25% and 10%, respectively, as outlined in the IPCC IS92a scenario, which has an emissions trajectory that results in atmospheric CO2 reaching approximately 740 ppm in 2100. These species, integral to coastal ecosystems and representing a significant portion of global aquaculture, play crucial roles as ecosystem engineers. The anticipated decline in calcification due to OA not only jeopardizes coastal biodiversity and ecosystem functioning but also carries the potential for considerable economic losses. For example, global aquaculture production for shellfish contributed US$29.2 billion to the world economy.
Damaged shell surfaces, primarily resulting from reduced calcification rates, contribute to a significant decrease in sale prices, marking a critical economic concern. Economic assessments reveal that such damages, particularly impacting culture quasi-profits or applied cultural value, can lead to reductions ranging from 35% to 70%. Furthermore, when accounting for assumed pH-driven changes occurring concurrently, quasi-profits diminish even more substantially, reaching levels of 49% to 84% across diverse OA scenarios. Consequently, the economic fallout is substantial, with the UK facing potential direct losses of £3 to £6 billion in GDP by 2100, and globally, costs exceeding US$100 billion. These findings emphasize the urgent need for proactive measures to mitigate OA's impact on bivalve farming and underscore the importance of comprehensive climate policies to address these multifaceted challenges.
Coral reef tourism
For organisms relying on calcified structures, such as reef-associated organisms, OA can potentially disrupt entire ecosystems. As calcifiers play crucial roles in maintaining marine biodiversity, the repercussions of coral reef decline extend beyond economic considerations, emphasizing the urgency of comprehensive conservation efforts. Extensive degradation is occurring in the Caribbean and Western Atlantic region's coral reefs, stemming from issues like disease, overfishing, and a range of human activities. Adding to the challenges, rapid climate-induced ocean warming and acidification exacerbate the threats to these vital ecosystems. Tourism is integral to the Caribbean region, with the sector contributing over 15 percent of GDP and sustaining 13 percent of jobs in the region as a whole. In the face of these challenges, the worldwide combined economic value of coral reefs is an estimated average of US$490 per hectare annually. Specific regions showcase the economic significance of coral reefs, with Hawai'i's reefs contributing US$360 million annually to its economy, and the Philippine economy receiving at least US$1.06 billion each year from coral reefs. In the St. Martin region, coral reefs contribute significantly, emphasizing the need for prioritized conservation and protection efforts. Proposed solutions encompass ecological measures such as water quality management, sustainable fishing practices, ecological engineering, and marine spatial planning. Additionally, socio-economic strategies involve establishing a regional reef secretariat, integrating reef health into blue economy plans, and initiating a reef labeling program to foster corporate partnerships.
See also
Ocean acidification - a threat for marine biogenic calcification
Protist shell
Seashell
Shell growth in estuaries
References
Biomineralization | Marine biogenic calcification | [
"Chemistry"
] | 6,689 | [
"Bioinorganic chemistry",
"Biomineralization"
] |
47,136,464 | https://en.wikipedia.org/wiki/Cloverleaf%20model%20of%20tRNA | The cloverleaf model of tRNA is a model that depicts the molecular structure of tRNA. The model revealed that the chain of tRNA consists of two ends—sometimes called "business ends"—and three arms. Two of the arms have a loop: the D-loop (dihydrouridine loop) and the TψC-loop, which carries a ribosome recognition site. The third arm, known as the "variable arm", has a stem with an optional loop. One end of the chain (with a double-stranded structure in which the 5' and 3' ends are adjacent to each other), the amino acid acceptor stem, usually attaches to an amino acid; such reactions are catalyzed by specific enzymes, the aminoacyl-tRNA synthetases. For example, if the amino acid that attaches to this end is phenylalanine, the reaction is catalyzed by phenylalanyl-tRNA synthetase to produce tRNAPhe.
The other end, the bottom arm known as the anticodon arm, consists of a three-base sequence (the anticodon) that pairs with a complementary base sequence in an mRNA.
See also
Plaque hybridization
References
RNA
Protein biosynthesis
Non-coding RNA
Articles containing video clips | Cloverleaf model of tRNA | [
"Chemistry"
] | 254 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
47,141,114 | https://en.wikipedia.org/wiki/Smithsonian%20Transcription%20Center | The Smithsonian Transcription Center is a crowdsourcing transcription project that aims to assist with the preservation and digitization of handwritten material in the Smithsonian Institution. The Transcription Center cites five reasons why transcription matters: discovery, humanities research, scientific research, education, and readability. Collections available for transcription include such documents as scientist field notebooks, artist diaries, astronomy logbooks, botany and bumblebee specimens and certified currency proofs.
The Smithsonian Transcription Center began in June 2013 and spent approximately a year in a beta test phase. On 12 August 2014 the Transcription Center website was launched to the public. As well as transcribing, volunteers review the submitted work before it is sent for approval. The final transcription is then checked by Smithsonian staff and once accepted, both the original images of the work and the transcription are kept on line.
The Transcription Center has an open call for anyone wanting to join in transcribing documents for its many projects. Researchers, educators, history buffs, amateur social scientists, and citizens are welcome to volunteer to transcribe for any of the many projects. The Transcription Center hopes that it will engage the public by making the Smithsonian Institution collections accessible.
References
External links
Presentation by Dr. Meghan Ferriter, Smithsonian Transcription Center Project Manager on Experiences from the Smithsonian Transcription Center
Smithsonian Collections Blog posts on the Transcription Center
Transcription Center blog posts on Smithsonian Libraries' Unbound blog
American science websites
Human-based computation
Citizen science
Crowdsourcing
Smithsonian Institution
Astronomy projects
Internet properties established in 2014
2014 establishments in the United States | Smithsonian Transcription Center | [
"Astronomy",
"Technology"
] | 312 | [
"Astronomy projects",
"Information systems",
"Human-based computation"
] |
47,143,238 | https://en.wikipedia.org/wiki/Graph-based%20access%20control | Graph-based access control (GBAC) is a declarative way to define access rights, task assignments, recipients and content in information systems. Access rights are granted to objects like files or documents, but also business objects such as an account. GBAC can also be used for the assignment of agents to tasks in workflow environments. Organizations are modeled as a specific kind of semantic graph comprising the organizational units, the roles and functions as well as the human and automatic agents (i.a. persons, machines). The main difference with other approaches such as role-based access control or attribute-based access control is that in GBAC access rights are defined using an organizational query language instead of total enumeration.
History
The foundations of GBAC go back to a research project named CoCoSOrg (Configurable Cooperation System) at Bamberg University. In CoCoSOrg an organization is represented as a semantic graph and a formal language is used to specify agents and their access rights in a workflow environment. Within the C-Org project at Hof University's Institute for Information Systems (iisys), the approach was extended by features like separation of duty, access control in virtual organizations and subject-oriented access control.
Definition
Graph-based access control consists of two building blocks:
A semantic graph modeling an organization
A query language.
Organizational graph
The organizational graph is divided into a type and an instance level. On the instance level there are node types for organizational units, functional units and agents. The basic structure of an organization is defined using so-called "structural relations". They define the "is part of" relations between functional units and organizational units as well as the mapping of agents to functional units. Additionally there are specific relationship types like "deputyship" or "informed_by". These types can be extended by the modeler. All relationships can be context sensitive through the usage of predicates.
On the type level, organizational structures are described in a more general manner. It consists of organizational unit types, functional unit types and the same relationship types as on the instance level. Type definitions can be used to create new instances or to reuse organizational knowledge in case of exceptions.
Query language
In GBAC a query language is used to define agents having certain characteristics or abilities. Its use in the context of an access control matrix can be illustrated with two example queries.
The first query means that all managers working for the company for more than six months can read the financial report, as well as the managers who are classified by the flag "ReadFinancialReport".
The daily financial report can only be written by the manager of the controlling department or clerks of the department that are enabled to do that (WriteFinancialReport==TRUE).
Implementation
GBAC was first implemented in the CoCoS Environment within the organizational server CoCoSOrg.
In the C-Org-Project it was extended with more sophisticated features like separation of duty or access control in distributed environments.
There is also a cloud-based implementation on IBM's Bluemix platform.
In all implementations the server takes a query from a client system and resolves it to a set of agents. This set is sent back to the calling client as response. Clients can be file systems, database management systems, workflow management systems, physical security systems or even telephone servers.
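The following Python sketch illustrates this resolve-to-a-set-of-agents pattern using the example queries above. The agent names, attributes, and the predicate-based query form are invented for illustration; they are not the actual CoCoSOrg or C-Org query syntax:

```python
# Hypothetical agents with attributes; not real CoCoSOrg data structures.
agents = {
    "alice": {"function": "manager", "unit": "controlling", "months_employed": 14},
    "bob":   {"function": "clerk", "unit": "controlling",
              "WriteFinancialReport": True, "months_employed": 3},
    "carol": {"function": "manager", "unit": "sales", "months_employed": 2,
              "ReadFinancialReport": True},
}

def resolve(predicate):
    """Resolve a query (modeled here as a predicate over agent
    attributes) to the set of agents it denotes."""
    return {name for name, attrs in agents.items() if predicate(attrs)}

# Managers employed more than six months, plus anyone flagged ReadFinancialReport:
readers = resolve(lambda a: (a["function"] == "manager" and a["months_employed"] > 6)
                  or a.get("ReadFinancialReport", False))
print(readers)  # {'alice', 'carol'}
```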
See also
References
Access control
Computer access control | Graph-based access control | [
"Engineering"
] | 705 | [
"Cybersecurity engineering",
"Computer access control"
] |
47,145,502 | https://en.wikipedia.org/wiki/Lattice%20model%20%28biophysics%29 | Lattice models in biophysics represent a class of statistical-mechanical models which consider a biological macromolecule (such as DNA, protein, actin, etc.) as a lattice of units, each unit being in different states or conformations.
For example, DNA in chromatin can be represented as a one-dimensional lattice, whose elementary units are the nucleotide, base pair or nucleosome. Different states of the unit can be realized either by chemical modifications (e.g. DNA methylation or modifications of DNA-bound histones), or due to quantized internal degrees of freedom (e.g. different angles of the bond joining two neighboring units), or due to binding events involving a given unit (e.g. reversible binding of small ligands or proteins to DNA, or binding/unbinding of two complementary nucleotides in the DNA base pair).
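To illustrate how such one-dimensional lattice models are evaluated, the sketch below computes the partition function of a two-state lattice (e.g., each unit helical/coiled or ligand-bound/unbound) with the transfer-matrix method, using Zimm–Bragg-style weights. The parameter values and the boundary convention are illustrative assumptions:

```python
import numpy as np

def partition_function(n_units, s, sigma):
    """Partition function of a 1D two-state lattice via the transfer
    matrix. s is the statistical weight of an 'ordered' unit and sigma
    the cooperativity (nucleation) parameter, Zimm-Bragg style."""
    T = np.array([[1.0, sigma * s],   # row: previous unit disordered
                  [1.0, s]])          # row: previous unit ordered
    start = np.array([1.0, 0.0])      # boundary: chain begins disordered
    return start @ np.linalg.matrix_power(T, n_units) @ np.ones(2)

def ordered_fraction(n_units, s, sigma, ds=1e-6):
    """Fraction of ordered units from a numerical derivative of
    ln Z with respect to ln s."""
    lnZ1 = np.log(partition_function(n_units, s * (1 + ds), sigma))
    lnZ0 = np.log(partition_function(n_units, s, sigma))
    return (lnZ1 - lnZ0) / (n_units * np.log(1 + ds))

print(ordered_fraction(100, s=1.2, sigma=1e-3))  # cooperative, partly ordered chain
```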
DNA-ligand binding models
DNA double helix melting models
DNA coil-globule / fractal models
References
Further reading
Evans J. W. (1993). Random and cooperative sequential adsorption. Rev. Mod. Phys., 65, 1281-1329.
Poland D., Scheraga H.A. (1970). Theory of Helix-Coil Transitions in Biopolymers: Statistical Mechanical Theory of Order-disorder Transitions in Biological Macromolecules. Academic Press, 797 pages.
Khokhlov A.R., Grosberg A.Yu. 1997. Statistical Physics of Macromolecules.
Statistical mechanics
Biophysics
Molecular biology
Lattice models | Lattice model (biophysics) | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 335 | [
"Statistical mechanics stubs",
"Applied and interdisciplinary physics",
"Theoretical physics",
"Lattice models",
"Computational physics",
"Biophysics",
"Condensed matter physics",
"Theoretical physics stubs",
"Biochemistry",
"Statistical mechanics",
"Computational physics stubs",
"Molecular bi... |
32,742,753 | https://en.wikipedia.org/wiki/Ordinary%20differential%20equation | In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) dependent on only a single independent variable. As with any other DE, its unknown(s) consists of one (or more) function(s) and involves the derivatives of those functions. The term "ordinary" is used in contrast with partial differential equations (PDEs) which may be with respect to one independent variable, and, less commonly, in contrast with stochastic differential equations (SDEs) where the progression is random.
Differential equations
A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form
a_0(x)y + a_1(x)y' + a_2(x)y'' + ... + a_n(x)y^(n) + b(x) = 0,
where a_0(x), ..., a_n(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y', ..., y^(n) are the successive derivatives of the unknown function y of the variable x.
Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with non-linear equations, they are generally approximated by linear differential equations for an easier solution. The few non-linear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example Riccati equation).
Some ODEs can be solved explicitly in terms of known functions and integrals. When that is not possible, the equation for computing the Taylor series of the solutions may be useful. For applied problems, numerical methods for ordinary differential equations can supply an approximation of the solution.
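As a brief illustration of the numerical route, the sketch below integrates the logistic equation y' = y(1 − y) with SciPy and compares the result against its known closed-form solution; the tolerances and test points are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Logistic growth y' = y(1 - y), y(0) = 0.1, which has the closed form
# y(t) = y0*exp(t) / (1 - y0 + y0*exp(t)), so the error can be checked.
sol = solve_ivp(lambda t, y: y * (1 - y), t_span=(0, 10), y0=[0.1],
                dense_output=True)

t = np.linspace(0, 10, 50)
exact = 0.1 * np.exp(t) / (0.9 + 0.1 * np.exp(t))
print(np.max(np.abs(sol.sol(t)[0] - exact)))  # small approximation error
```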
Background
Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, such that a differential equation is a result that describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations.
Specific mathematical fields include geometry and analytical mechanics. Scientific fields include much of physics and astronomy (celestial mechanics), meteorology (weather modeling), chemistry (reaction rates), biology (infectious diseases, genetic variation), ecology and population modeling (population competition), economics (stock trends, interest rates and the market equilibrium price changes).
Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler.
A simple example is Newton's second law of motion—the relationship between the displacement x and the time t of an object under the force F is given by the differential equation
m d²x(t)/dt² = F(x(t)),
which constrains the motion of a particle of constant mass m. In general, F is a function of the position x(t) of the particle at time t. The unknown function x(t) appears on both sides of the differential equation, and is indicated in the notation F(x(t)).
Definitions
In what follows, y is a dependent variable representing an unknown function y = f(x) of the independent variable x. The notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. In this context, the Leibniz notation dy/dx, d²y/dx², ..., dⁿy/dxⁿ
is more useful for differentiation and integration, whereas Lagrange's notation y', y'', ..., y^(n)
is more useful for representing higher-order derivatives compactly, and Newton's notation (ẏ, ÿ) is often used in physics for representing derivatives of low order with respect to time.
General definition
Given F, a function of x, y, and derivatives of y, an equation of the form
y^(n) = F(x, y, y', ..., y^(n−1))
is called an explicit ordinary differential equation of order n.
More generally, an implicit ordinary differential equation of order n takes the form:
F(x, y, y', y'', ..., y^(n)) = 0
There are further classifications:
System of ODEs
A number of coupled differential equations form a system of equations. If y is a vector whose elements are functions, y(x) = (y_1(x), y_2(x), ..., y_m(x)), and F is a vector-valued function of y and its derivatives, then
y^(n) = F(x, y, y', y'', ..., y^(n−1))
is an explicit system of ordinary differential equations of order n and dimension m. These are not necessarily linear. The implicit analogue is:
F(x, y, y', y'', ..., y^(n)) = 0,
where 0 = (0, 0, ..., 0) is the zero vector.
For a system of the form , some sources also require that the Jacobian matrix be non-singular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian non-singularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (nonsingular) ODE systems. Presumably for additional derivatives, the Hessian matrix and so forth are also assumed non-singular according to this scheme, although note that any ODE of order greater than one can be (and usually is) rewritten as system of ODEs of first order, which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders.
The behavior of a system of ODEs can be visualized through the use of a phase portrait.
Solutions
Given a differential equation
F(x, y, y', ..., y^(n)) = 0,
a function u: I ⊂ R → R, where I is an interval, is called a solution or integral curve for F, if u is n-times differentiable on I and
F(x, u, u', ..., u^(n)) = 0 for all x in I.
Given two solutions u: J ⊂ R → R and v: I ⊂ R → R, u is called an extension of v if I ⊂ J and u(x) = v(x) for all x in I.
A solution that has no extension is called a maximal solution. A solution defined on all of R is called a global solution.
A general solution of an nth-order equation is a solution containing n arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill set initial conditions or boundary conditions. A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution.
In the context of linear ODE, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.
Solutions of finite duration
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that from its own dynamics, the system will reach the value zero at an ending time and stays there in zero forever after. These finite-duration solutions can't be analytical functions on the whole real line, and because they will be non-Lipschitz functions at their ending time, they are not included in the uniqueness theorem of solutions of Lipschitz differential equations.
As an example, the equation
y' = −sgn(y)·√|y|
admits the finite-duration solution
y(x) = (1/4)·(1 − x/2 + |1 − x/2|)²,
which equals (1 − x/2)² for x ≤ 2 and is identically zero for x ≥ 2.
Theories
Singular solutions
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
Reduction to quadratures
The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the th degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that complex differential equations require complex numbers. Hence, analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and, if so, what are the characteristic properties.
Fuchsian theory
Two memoirs by Fuchs inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces under rational one-to-one transformations.
Lie's theory
From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of transformations of contact.
Lie's group theory of differential equations has been certified, namely: (1) that it unifies the many ad hoc methods known for solving differential equations, and (2) that it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.
A general solution approach uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and non-linear (partial) differential equations for generating integrable equations, to find its Lax pairs, recursion operators, Bäcklund transform, and finally finding exact analytic solutions to DE.
Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.
Sturm–Liouville theory
Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equation. Their solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville problems (SLP) and are named after J. C. F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering. SLPs are also useful in the analysis of certain partial differential equations.
Existence and uniqueness of solutions
There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are
Peano existence theorem: F continuous; guarantees local existence only.
Picard–Lindelöf theorem: F Lipschitz continuous; guarantees local existence and uniqueness.
In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met.
Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (non-linear) algebraic part alone.
Local existence and uniqueness theorem simplified
The theorem can be stated simply as follows. For the equation and initial value problem
y' = F(x, y), y(x_0) = y_0,
if F and ∂F/∂y are continuous in a closed rectangle
R = [x_0 − a, x_0 + a] × [y_0 − b, y_0 + b]
in the x–y plane, where a and b are real (symbolically: a, b ∈ R) and × denotes the Cartesian product, square brackets denote closed intervals, then there is an interval
I = [x_0 − h, x_0 + h] ⊂ [x_0 − a, x_0 + a]
for some h ∈ R where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on F to be linear, this applies to non-linear equations that take the form y' = F(x, y), and it can also be applied to systems of equations.
Global uniqueness and maximum domain of solution
When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely:
For each initial condition (x_0, y_0) there exists a unique maximum (possibly infinite) open interval
I_max = (x_−, x_+), with x_− < x_0 < x_+ and x_± possibly infinite,
such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain I_max.
In the case that x_+ is finite (and similarly for x_−), there are exactly two possibilities
explosion in finite time: |y(x)| → ∞ as x → x_+
leaves domain of definition: (x, y(x)) approaches the boundary ∂Ω as x → x_+
where Ω is the open set in which F is defined, and ∂Ω is its boundary.
Note that the maximum domain of the solution
is always an interval (to have uniqueness)
may be smaller than R
may depend on the specific choice of (x_0, y_0).
Example:
y' = y², y(x_0) = y_0.
This means that F(x, y) = y², which is C¹ and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem.
Even in such a simple setting, the maximum domain of solution cannot be all of R, since for y_0 ≠ 0 the solution is
y(x) = y_0 / (1 − y_0(x − x_0)),
which has maximum domain R for y_0 = 0, (−∞, x_0 + 1/y_0) for y_0 > 0, and (x_0 + 1/y_0, +∞) for y_0 < 0.
This shows clearly that the maximum interval may depend on the initial conditions. The domain of y could be taken as being all x ≠ x_0 + 1/y_0, but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.
The maximum domain is not all of R because
|y(x)| → ∞ as x → x_0 + 1/y_0,
which is one of the two possible cases according to the above theorem.
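A numerical integrator makes this blow-up visible: asked to continue past x_0 + 1/y_0, an adaptive solver keeps shrinking its steps near the singularity and gives up. A small sketch using SciPy (the requested interval is an arbitrary choice):

```python
from scipy.integrate import solve_ivp

# y' = y^2 with y(0) = 1 blows up at x = 0 + 1/1 = 1. Asking the solver
# to reach x = 2 fails: it stalls just below the singularity at x = 1.
sol = solve_ivp(lambda x, y: y**2, t_span=(0, 2), y0=[1.0])
print(sol.success, sol.t[-1])  # False, with the final x just below 1
```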
Reduction of order
Differential equations are usually easier to solve if the order of the equation can be reduced.
Reduction to a first-order system
Any explicit differential equation of order n,
y^(n) = F(x, y, y', y'', ..., y^(n−1)),
can be written as a system of n first-order differential equations by defining a new family of unknown functions
y_i = y^(i−1), for i = 1, 2, ..., n.
The n-dimensional system of first-order coupled differential equations is then
y_1' = y_2, y_2' = y_3, ..., y_(n−1)' = y_n, y_n' = F(x, y_1, ..., y_n);
more compactly in vector notation:
y' = F(x, y),
where y = (y_1, ..., y_n) and F(x, y_1, ..., y_n) = (y_2, ..., y_n, F(x, y_1, ..., y_n)).
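For example, the second-order equation y'' = −y becomes the first-order system y_1' = y_2, y_2' = −y_1 under this substitution, which a standard first-order solver can then integrate; the solver and tolerances below are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' = -y with y(0) = 0, y'(0) = 1, rewritten via y1 = y, y2 = y'.
def system(t, y):
    y1, y2 = y
    return [y2, -y1]

sol = solve_ivp(system, t_span=(0, 2 * np.pi), y0=[0.0, 1.0], rtol=1e-8)
print(sol.y[0][-1])  # ~0: the solution y = sin(t) returns to 0 at t = 2*pi
```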
Summary of exact solutions
Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here.
In the table below, , , , , and , are any integrable functions of , ; and are real given constants; are arbitrary constants (complex in general). The differential equations are in their equivalent and alternative forms that lead to the solution through integration.
In the integral solutions, and are dummy variables of integration (the continuum analogues of indices in summation), and the notation just means to integrate with respect to , then after the integration substitute , without adding constants (explicitly stated).
Separable equations
General first-order equations
General second-order equations
Linear to the th order equations
The guessing method
When all other methods for solving an ODE fail, or in the cases where we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and validating it is correct. To use this method, we simply guess a solution to the differential equation, and then plug the solution into the differential equation to validate if it satisfies the equation. If it does then we have a particular solution to the DE, otherwise we start over again and try another guess. For instance, we could guess that the solution to a DE has a form such as y = A cos(ωt + φ), since this is a very common solution that physically behaves in a sinusoidal way.
In the case of a first order ODE that is non-homogeneous, we need to first find a solution to the homogeneous portion of the DE, otherwise known as the associated homogeneous equation, and then find a solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the general solution to the ODE, that is:
y = y_h + y_p.
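The guess-and-validate loop is easy to mechanize with a computer algebra system; below is a sketch using SymPy's checkodesol, where the ODE and the sinusoidal guess are chosen purely for illustration:

```python
import sympy as sp

t, A, B = sp.symbols('t A B')
y = sp.Function('y')

# Guess-and-check for y'' + y = 0: propose a sinusoidal form and
# substitute it back into the equation to validate it.
ode = sp.Eq(y(t).diff(t, 2) + y(t), 0)
guess = sp.Eq(y(t), A * sp.sin(t) + B * sp.cos(t))
print(sp.checkodesol(ode, guess))  # (True, 0): the guess satisfies the ODE
```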
Software for ODE solving
Maxima, an open-source computer algebra system.
COPASI, a free (Artistic License 2.0) software package for the integration and analysis of ODEs.
MATLAB, a technical computing application (MATrix LABoratory).
GNU Octave, a high-level language primarily intended for numerical computations.
Scilab, an open-source application for numerical computation.
Maple, a proprietary application for symbolic calculations.
Mathematica, a proprietary application primarily intended for symbolic calculations.
SymPy, a Python package that can solve ODEs symbolically.
Julia (programming language), a high-level language primarily intended for numerical computations.
SageMath, an open-source application that uses a Python-like syntax with a wide range of capabilities spanning several branches of mathematics.
SciPy, a Python package that includes an ODE integration module.
Chebfun, an open-source package, written in MATLAB, for computing with functions to 15-digit accuracy.
GNU R, an open-source computational environment primarily intended for statistics, which includes packages for ODE solving.
See also
Boundary value problem
Examples of differential equations
Laplace transform applied to differential equations
List of dynamical systems and differential equations topics
Matrix differential equation
Method of undetermined coefficients
Recurrence relation
Notes
References
Polyanin, A. D. and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003.
Bibliography
W. Johnson, A Treatise on Ordinary and Partial Differential Equations, John Wiley and Sons, 1913, in University of Michigan Historical Math Collection
Witold Hurewicz, Lectures on Ordinary Differential Equations, Dover Publications.
A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002.
D. Zwillinger, Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.
External links
EqWorld: The World of Mathematical Equations, containing a list of ordinary differential equations with their solutions.
Online Notes / Differential Equations by Paul Dawkins, Lamar University.
Differential Equations, S.O.S. Mathematics.
A primer on analytical solution of differential equations from the Holistic Numerical Methods Institute, University of South Florida.
Ordinary Differential Equations and Dynamical Systems lecture notes by Gerald Teschl.
Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC.
Modeling with ODEs using Scilab A tutorial on how to model a physical system described by ODE using Scilab standard programming language by Openeering team.
Solving an ordinary differential equation in Wolfram|Alpha
Differential calculus | Ordinary differential equation | [
"Mathematics"
] | 3,805 | [
"Differential calculus",
"Calculus"
] |
32,743,838 | https://en.wikipedia.org/wiki/Atomic-terrace%20low-angle%20shadowing | Atomic Terrace Low Angle Shadowing (ATLAS) is a surface science technique which enables the growth of planar nanowire or nanodot arrays using molecular beam epitaxy on a vicinal surface. ATLAS utilises the inherent step-and-terrace structure of the surface as a template for such nanostructures. The technique involves the low angle incidence of flux material on vicinal substrates. Vicinal substrates are composed of atomic terraces separated by atomic steps. The ATLAS technique allows for the fabrication of well defined planar arrays of plasmonic nanostructures, of dimensions unachievable by lithography.
A collimated beam of atoms or molecules is evaporated at an oblique angle to the substrate. This causes the steps to "shadow" the beam, and the molecules to be adsorbed only on the exposed parts of the steps in direct line of sight of the evaporator.
The principal attraction of the technique is its relative simplicity, as it does not involve multiple lithography steps and can be applied to metal, semiconductor or oxide surfaces alike.
The technique is a "bottom-up" approach and allows great control over the separation of nanostructures within the array, as well as their individual widths. The separation is controlled by the size of the atomic terraces of the substrate, which is determined by its miscut from the principal index; and the width of the nanostructures is controlled by the oblique angle of the deposition.
ATLAS has been shown to be a very versatile technique, with the growth of metallic, semi-conducting and magnetic nanowires and nanodots demonstrated using a variety of source materials and substrates.
Basic Principles
Figure 1(a) shows a schematic of the deposition in the "downhill" direction, that is, from an outer step edge to a lower terrace. The deposition angle β between the beam and surface is small (1°-3°) so that some areas of the terraces are exposed to the beam, and others are geometrically shadowed.
The deposition angle β determines the width of the nanostructures, according to the following relation:
where w is the nanostructure width, a is the height of one step, α is the miscut angle and β is the deposition angle between the incident beam and the surface (α and β are assumed to be small and are measured in radians).
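The printed relation is missing from this copy of the article. A plausible reconstruction from simple shadowing geometry (an assumption, not the published formula) is w = a/tan(α) − a/tan(α + β) ≈ aβ/(α(α + β)) for small angles, which the following Python sketch evaluates; all numbers are illustrative, not measured values from the article:

```python
import math

def nanowire_width(a_nm, alpha_rad, beta_rad):
    """Exposed strip width per terrace for downhill deposition (simple model)."""
    terrace = a_nm / math.tan(alpha_rad)            # terrace length a/tan(alpha)
    shadow = a_nm / math.tan(alpha_rad + beta_rad)  # shadow cast by each step
    return terrace - shadow

a = 3.0                    # nm, assumed step-bunch height
alpha = math.radians(5.7)  # miscut angle giving roughly 30 nm terraces
for beta_deg in (1.0, 2.0, 3.0):
    w = nanowire_width(a, alpha, math.radians(beta_deg))
    print(f"beta = {beta_deg:.0f} deg -> width ~ {w:.1f} nm")
```

Consistent with the text, in this model the width grows with the deposition angle β while the separation stays fixed by the terrace size.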
Figure 1(b) shows a similar situation, but this time with the substrate rotated by 180° so that the incident beam is now in the "uphill" direction, and nearly parallel to the surface. In this case, the step faces provide the bonding sites and the deposited material grows along the steps, similar to the step-flow growth mechanism.
In order to grow nanowires with a width of fifteen nanometers or less, the deposition temperature for both orientations should be chosen such that the mean free path of the adatoms on the surface is limited to a few nanometers.
Experimental Development
The ATLAS system was developed within the Applied Physics Group at the School of Physics, Trinity College, Dublin. The experimental procedure is relatively straightforward, when compared to lithography or other approaches, meaning that only standard equipment is needed.
The set-up consists of an ultrahigh vacuum chamber (base pressure in the low 10−10 Torr range), with the sample mounted at a large working distance (40-100 cm) from the evaporation source. This large distance provides the high collimation required for the ATLAS technique. The sample itself is mounted on a rotation stage and can be tilted through 200° with a precision of ±0.5°.
The substrate can be heated during deposition by either passing direct current through the sample for semiconductors or by driving current through a separate heating foil underneath the substrate for insulating oxides.
Versatility
The capabilities of the system were first tested by growing arrays of 10-30 nm wide metallic nanowires on two types of vicinal substrates, step-bunched Si(111) and α-Al2O3(0001). Deposition of Au and Ag onto these substrates yields arrays of wires with a width and height of 15 nm and 2 nm, respectively, separated by approximately 30 nm.
Since its introduction in 2008, ATLAS has been demonstrated as a simple technique to produce nanowires of a variety of materials down to a width of 15 nm and thickness of 2 nm, on several stepped substrates.
Limitations
Although ATLAS is a versatile technique, some limitations do exist. The initial growth of the nanowires is nucleated on certain preferential adsorption sites. This can form epitaxial seeds, which grow independently of each other, until they meet, which forms an overall polycrystalline wire. This polycrystallinity can affect the stability of the wire when exposed to air, and can increase the resistance due to its defective nature. It is an ongoing topic of research to increase the quality of nanowires by lattice matching, or increasing initial mobility through heating of the substrate.
Despite these limitations, ATLAS's result of a 15 nm width represents approximately a five-fold reduction in size compared to other shallow-angle techniques.
References
Nanomaterials
Nanoelectronics | Atomic-terrace low-angle shadowing | [
"Materials_science"
] | 1,076 | [
"Nanotechnology",
"Nanomaterials",
"Nanoelectronics"
] |
32,747,510 | https://en.wikipedia.org/wiki/Quintuple%20product%20identity | In mathematics the Watson quintuple product identity is an infinite product identity introduced by and rediscovered by and . It is analogous to the Jacobi triple product identity, and is the Macdonald identity for a certain non-reduced affine root system. It is related to Euler's pentagonal number theorem.
Statement
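The statement itself is missing from this copy of the article. One standard form of the identity (a reconstruction consistent with the literature cited below, e.g. Cooper 2006) is, for |q| < 1 and z ≠ 0:

$$\prod_{n\geq 1}\left(1-q^{n}\right)\left(1-zq^{n}\right)\left(1-z^{-1}q^{n-1}\right)\left(1-z^{2}q^{2n-1}\right)\left(1-z^{-2}q^{2n-1}\right)=\sum_{n=-\infty}^{\infty}q^{\frac{n(3n+1)}{2}}\left(z^{3n}-z^{-3n-1}\right)$$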
References
Foata, D., & Han, G. N. (2001). The triple, quintuple and septuple product identities revisited. In The Andrews Festschrift (pp. 323–334). Springer, Berlin, Heidelberg.
Cooper, S. (2006). The quintuple product identity. International Journal of Number Theory, 2(01), 115-161.
See also
Hirschhorn–Farkas–Kra septagonal numbers identity
Further reading
Subbarao, M. V., & Vidyasagar, M. (1970). On Watson’s quintuple product identity. Proceedings of the American Mathematical Society, 26(1), 23-27.
Hirschhorn, M. D. (1988). A generalisation of the quintuple product identity. Journal of the Australian Mathematical Society, 44(1), 42-45.
Alladi, K. (1996). The quintuple product identity and shifted partition functions. Journal of Computational and Applied Mathematics, 68(1-2), 3-13.
Farkas, H., & Kra, I. (1999). On the quintuple product identity. Proceedings of the American Mathematical Society, 127(3), 771-778.
Chen, W. Y., Chu, W., & Gu, N. S. (2005). Finite form of the quintuple product identity. arXiv preprint math/0504277.
Elliptic functions
Theta functions
Mathematical identities
Theorems in number theory
Infinite products | Quintuple product identity | [
"Mathematics"
] | 396 | [
"Mathematical theorems",
"Mathematical analysis",
"Algebra",
"Theorems in number theory",
"Infinite products",
"Mathematical identities",
"Mathematical problems",
"Number theory"
] |
32,747,596 | https://en.wikipedia.org/wiki/Structural%20coloration | Structural coloration in animals, and a few plants, is the production of colour by microscopically structured surfaces fine enough to interfere with visible light instead of pigments, although some structural coloration occurs in combination with pigments. For example, peacock tail feathers are pigmented brown, but their microscopic structure makes them also reflect blue, turquoise, and green light, and they are often iridescent.
Structural coloration was first described by English scientists Robert Hooke and Isaac Newton, and its principle—wave interference—explained by Thomas Young a century later. Young described iridescence as the result of interference between reflections from two or more surfaces of thin films, combined with refraction as light enters and leaves such films. The geometry then determines that at certain angles, the light reflected from both surfaces interferes constructively, while at other angles, the light interferes destructively. Different colours therefore appear at different angles.
In animals such as on the feathers of birds and the scales of butterflies, interference is created by a range of photonic mechanisms, including diffraction gratings, selective mirrors, photonic crystals, crystal fibres, matrices of nanochannels and proteins that can vary their configuration. Some cuts of meat also show structural coloration due to the exposure of the periodic arrangement of the muscular fibres. Many of these photonic mechanisms correspond to elaborate structures visible by electron microscopy. In the few plants that exploit structural coloration, brilliant colours are produced by structures within cells. The most brilliant blue coloration known in any living tissue is found in the marble berries of Pollia condensata, where a spiral structure of cellulose fibrils produces Bragg's law scattering of light. The bright gloss of buttercups is produced by thin-film reflection by the epidermis supplemented by yellow pigmentation, and strong diffuse scattering by a layer of starch cells immediately beneath.
Structural coloration has potential for industrial, commercial and military applications, with biomimetic surfaces that could provide brilliant colours, adaptive camouflage, efficient optical switches and low-reflectance glass.
History
In his 1665 book Micrographia, Robert Hooke described the "fantastical" colours of the peacock's feathers:
In his 1704 book Opticks, Isaac Newton described the mechanism of the colours other than the brown pigment of peacock tail feathers. Newton noted that
Thomas Young (1773–1829) extended Newton's particle theory of light by showing that light could also behave as a wave. He showed in 1803 that light could diffract from sharp edges or slits, creating interference patterns.
In his 1892 book Animal Coloration, Frank Evers Beddard (1858–1925) acknowledged the existence of structural colours:
But Beddard then largely dismissed structural coloration, firstly as subservient to pigments: "in every case the [structural] colour needs for its display a background of dark pigment;" and then by asserting its rarity: "By far the commonest source of colour in invertebrate animals is the presence in the skin of definite pigments", though he does later admit that the Cape golden mole has "structural peculiarities" in its hair that "give rise to brilliant colours".
Principles
Structure not pigment
Structural coloration is caused by interference effects rather than by pigments. Colours are produced when a material is scored with fine parallel lines, or formed of one or more parallel thin layers, or otherwise composed of microstructures on the scale of the colour's wavelength.
Structural coloration is responsible for the blues and greens of the feathers of many birds (the bee-eater, kingfisher and roller, for example), as well as many butterfly wings, beetle wing-cases (elytra) and (while rare among flowers) the gloss of buttercup petals. These are often iridescent, as in peacock feathers and nacreous shells such as of pearl oysters (Pteriidae) and Nautilus. This is because the reflected colour depends on the viewing angle, which in turn governs the apparent spacing of the structures responsible. Structural colours can be combined with pigment colours: peacock feathers are pigmented brown with melanin, while buttercup petals have both carotenoid pigments for yellowness and thin films for reflectiveness.
Principle of iridescence
Iridescence, as explained by Thomas Young in 1803, is created when extremely thin films reflect part of the light falling on them from their top surfaces. The rest of the light goes through the films, and a further part of it is reflected from their bottom surfaces. The two sets of reflected waves travel back upwards in the same direction. But since the bottom-reflected waves travelled a little farther – controlled by the thickness and refractive index of the film, and the angle at which the light fell – the two sets of waves are out of phase. When the waves are one or more whole wavelengths apart – in other words, at certain specific angles, they add (interfere constructively), giving a strong reflection. At other angles and phase differences, they can subtract, giving weak reflections. The thin film therefore selectively reflects just one wavelength – a pure colour – at any given angle, but other wavelengths – different colours – at different angles. So, as a thin-film structure such as a butterfly's wing or bird's feather moves, it seems to change colour.
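A minimal Python sketch of this principle (an illustration, not from the original text), assuming a single idealized free-standing film in air with refractive index n and thickness d, and the usual half-wavelength phase shift at the top surface only, so that constructive reflection satisfies 2nd·cos(θt) = (m − 1/2)λ:

```python
import math

def constructive_wavelengths(n, d_nm, theta_deg, orders=range(1, 6)):
    theta_t = math.asin(math.sin(math.radians(theta_deg)) / n)  # Snell's law
    path = 2 * n * d_nm * math.cos(theta_t)   # optical path difference
    return [path / (m - 0.5) for m in orders]

# Illustrative chitin-like film: n = 1.56, d = 250 nm.
for angle in (0, 30, 60):   # viewing angle in degrees
    visible = [w for w in constructive_wavelengths(1.56, 250, angle)
               if 380 <= w <= 750]
    print(angle, [round(w) for w in visible])
```

The strongly reflected wavelength shifts toward the blue as the viewing angle increases, which is the change of colour with angle described above.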
Mechanisms
Fixed structures
A number of fixed structures can create structural colours, by mechanisms including diffraction gratings, selective mirrors, photonic crystals, crystal fibres and deformed matrices. Structures can be far more elaborate than a single thin film: films can be stacked up to give strong iridescence, to combine two colours, or to balance out the inevitable change of colour with angle to give a more diffuse, less iridescent effect. Each mechanism offers a specific solution to the problem of creating a bright colour or combination of colours visible from different directions.
A diffraction grating constructed of layers of chitin and air gives rise to the iridescent colours of various butterfly wing scales as well as to the tail feathers of birds such as the peacock. Hooke and Newton were correct in their claim that the peacock's colours are created by interference, but the structures responsible, being close to the wavelength of light in scale (see micrographs), were smaller than the striated structures they could see with their light microscopes. Another way to produce a diffraction grating is with tree-shaped arrays of chitin, as in the wing scales of some of the brilliantly coloured tropical Morpho butterflies (see drawing). Yet another variant exists in Parotia lawesii, Lawes's parotia, a bird of paradise. The barbules of the feathers of its brightly coloured breast patch are V-shaped, creating thin-film microstructures that strongly reflect two different colours, bright blue-green and orange-yellow. When the bird moves the colour switches sharply between these two colours, rather than drifting iridescently. During courtship, the male bird systematically makes small movements to attract females, so the structures must have evolved through sexual selection.
Photonic crystals can be formed in different ways. In Parides sesostris, the emerald-patched cattleheart butterfly, photonic crystals are formed of arrays of nano-sized holes in the chitin of the wing scales. The holes have a diameter of about 150 nanometres and are about the same distance apart. The holes are arranged regularly in small patches; neighbouring patches contain arrays with differing orientations. The result is that these emerald-patched cattleheart scales reflect green light evenly at different angles instead of being iridescent. In Lamprocyphus augustus, a weevil from Brazil, the chitin exoskeleton is covered in iridescent green oval scales. These contain diamond-based crystal lattices oriented in all directions to give a brilliant green coloration that hardly varies with angle. The scales are effectively divided into pixels about a micrometre wide. Each such pixel is a single crystal and reflects light in a direction different from its neighbours.
Selective mirrors to create interference effects are formed of micron-sized bowl-shaped pits lined with multiple layers of chitin in the wing scales of Papilio palinurus, the emerald swallowtail butterfly. These act as highly selective mirrors for two wavelengths of light. Yellow light is reflected directly from the centres of the pits; blue light is reflected twice by the sides of the pits. The combination appears green, but can be seen as an array of yellow spots surrounded by blue circles under a microscope.
Crystal fibres, formed of hexagonal arrays of hollow nanofibres, create the bright iridescent colours of the bristles of Aphrodita, the sea mouse, a non-wormlike genus of marine annelids. The colours are aposematic, warning predators not to attack. The chitin walls of the hollow bristles form a hexagonal honeycomb-shaped photonic crystal; the hexagonal holes are 0.51 μm apart. The structure behaves optically as if it consisted of a stack of 88 diffraction gratings, making Aphrodita one of the most iridescent of marine organisms.
Deformed matrices, consisting of randomly oriented nanochannels in a spongelike keratin matrix, create the diffuse non-iridescent blue colour of Ara ararauna, the blue-and-yellow macaw. Since the reflections are not all arranged in the same direction, the colours, while still magnificent, do not vary much with angle, so they are not iridescent.
Spiral coils, formed of helicoidally stacked cellulose microfibrils, create Bragg reflection in the "marble berries" of the African herb Pollia condensata, resulting in the most intense blue coloration known in nature. The berry's surface has four layers of cells with thick walls, containing spirals of transparent cellulose spaced so as to allow constructive interference with blue light. Below these cells is a layer two or three cells thick containing dark brown tannins. Pollia produces a stronger colour than the wings of Morpho butterflies, and is one of the first instances of structural coloration known from any plant. Each cell has its own thickness of stacked fibres, making it reflect a different colour from its neighbours, and producing a pixellated or pointillist effect with different blues speckled with brilliant green, purple, and red dots. The fibres in any one cell are either left-handed or right-handed, so each cell circularly polarizes the light it reflects in one direction or the other. Pollia is the first organism known to show such random polarization of light, which, nevertheless, does not have a visual function, as the seed-eating birds that visit this plant species are not able to perceive polarised light. Spiral microstructures are also found in scarab beetles, where they produce iridescent colours.
Thin film with diffuse reflector, based on the top two layers of a buttercup's petals. The brilliant yellow gloss derives from a combination, rare among plants, of yellow pigment and structural coloration. The very smooth upper epidermis acts as a reflective and iridescent thin film; for example, in Ranunculus acris, the layer is 2.7 micrometres thick. The unusual starch cells form a diffuse but strong reflector, enhancing the flower's brilliance. The curved petals form a paraboloidal dish which directs the sun's heat to the reproductive parts at the centre of the flower, keeping it some degrees Celsius above the ambient temperature.
Surface gratings, consisting of ordered surface features due to the exposure of ordered muscle cells on cuts of meat. The structural coloration on meat cuts appears only after the ordered pattern of muscle fibrils is exposed and light is diffracted by the proteins in the fibrils. The coloration or wavelength of the diffracted light depends on the angle of observation and can be enhanced by covering the meat with translucent foils. Roughening the surface or removing water content by drying causes the structure to collapse and, thus, the structural coloration to disappear.
Interference from multiple total internal reflections can occur in microscale structures, such as sessile water droplets and biphasic oil-in-water droplets as well as polymer microstructured surfaces. In this structural coloration mechanism, light rays that travel by different paths of total internal reflection along an interface interfere to generate iridescent colour.
Variable structures
Some animals including cephalopods such as squid are able to vary their colours rapidly for both camouflage and signalling. The mechanisms include reversible proteins which can be switched between two configurations. The configuration of reflectin proteins in chromatophore cells in the skin of the Doryteuthis pealeii squid is controlled by electric charge. When charge is absent, the proteins stack together tightly, forming a thin, more reflective layer; when charge is present, the molecules stack more loosely, forming a thicker layer. Since chromatophores contain multiple reflectin layers, the switch changes the layer spacing and hence the colour of light that is reflected.
Blue-ringed octopuses spend much of their time hiding in crevices whilst displaying effective camouflage patterns with their dermal chromatophore cells. If they are provoked, they quickly change colour, becoming bright yellow with each of the 50-60 rings flashing bright iridescent blue within a third of a second. In the greater blue-ringed octopus (Hapalochlaena lunulata), the rings contain multi-layer iridophores. These are arranged to reflect blue–green light in a wide viewing direction. The fast flashes of the blue rings are achieved using muscles under neural control. Under normal circumstances, each ring is hidden by contraction of muscles above the iridophores. When these relax and muscles outside the ring contract, the bright blue rings are exposed.
Examples
In technology
Gabriel Lippmann won the Nobel Prize in Physics in 1908 for his work on a structural coloration method of colour photography, the Lippmann plate. This used a photosensitive emulsion fine enough for the interference caused by light waves reflecting off the back of the glass plate to be recorded in the thickness of the emulsion layer, in a monochrome (black and white) photographic process. Shining white light through the plate effectively reconstructs the colours of the photographed scene.
In 2010, the dressmaker Donna Sgro made a dress from Teijin Fibers' Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure of Morpho butterfly wing scales. The fibres are composed of 61 flat alternating layers, between 70 and 100 nanometres thick, of two plastics with different refractive indices, nylon and polyester, in a transparent nylon sheath with an oval cross-section. The materials are arranged so that the colour does not vary with angle. The fibres have been produced in red, green, blue, and violet.
Several countries and regions, including the U.S., European Union, and Brazil, use banknotes that include optically variable ink, which is structurally coloured, as a security feature. These pearlescent inks appear as different colours depending on the angle the banknote is viewed from. Because the ink is hard to obtain, and because a photocopier or scanner (which works from only one angle) cannot reproduce or even perceive the color-shifting effect, the ink serves to make counterfeiting more difficult.
Structural coloration could be further exploited industrially and commercially, and research that could lead to such applications is under way. A direct parallel would be to create active or adaptive military camouflage fabrics that vary their colours and patterns to match their environments, just as chameleons and cephalopods do. The ability to vary reflectivity to different wavelengths of light could also lead to efficient optical switches that could function like transistors, enabling engineers to make fast optical computers and routers.
The surface of the compound eye of the housefly is densely packed with microscopic projections that have the effect of reducing reflection and hence increasing transmission of incident light. Similarly, the eyes of some moths have antireflective surfaces, again using arrays of pillars smaller than the wavelength of light. "Moth-eye" nanostructures could be used to create low-reflectance glass for windows, solar cells, display devices, and military stealth technologies. Antireflective biomimetic surfaces using the "moth-eye" principle can be manufactured by first creating a mask by lithography with gold nanoparticles, and then performing reactive-ion etching.
See also
Animal coloration
Camouflage
Patterns in nature
References
Bibliography
Pioneering books
Beddard, Frank Evers (1892). Animal Coloration, An Account of the Principal Facts and Theories Relating to the Colours and Markings of Animals. Swan Sonnenschein, London.
--- 2nd Edition, 1895.
Hooke, Robert (1665). Micrographia, John Martyn and James Allestry, London.
Newton, Isaac (1704). Opticks, William Innys, London.
Research
Fox, D.L. (1992). Animal Biochromes and Animal Structural Colours. University of California Press.
Johnsen, S. (2011). The Optics of Life: A Biologist's Guide to Light in Nature. Princeton University Press.
Kolle, M. (2011). Photonic Structures Inspired by Nature . Springer.
General books
Brebbia, C.A. (2011). Colour in Art, Design and Nature. WIT Press.
Lee, D.W. (2008). Nature's Palette: The Science of Plant Color. University of Chicago Press.
Kinoshita, S. (2008). "Structural Color in the Realm of Nature". World Scientific Publishing
Mouchet, S. R., Deparis, O. (2021). "Natural Photonics and Bioinspiration". Artech House
External links
National Geographic News: Peacock Plumage Secrets Uncovered
Causes of Color: Peacock feathers
Butterflies and Gyroids – Numberphile
Animal coat colors
Color
Nanotechnology
Optical materials | Structural coloration | [
"Physics",
"Materials_science",
"Engineering"
] | 3,738 | [
"Materials science",
"Materials",
"Optical materials",
"Nanotechnology",
"Matter"
] |
61,588,101 | https://en.wikipedia.org/wiki/Alain%20Gachet | Alain Claude Christian Gachet is a French physicist specialized in geology, born in the French colony of Madagascar in 1951. He is the inventor of an algorithm used in a process known as WATEX that can detect the presence of deep groundwater . He is a natural resources entrepreneur and CEO of RTI Exploration.
Biography
Early years and education
The son of a forestry ranger, Alain Gachet grew up in an isolated region of northern Madagascar. His father was responsible for recording the inventory of Madagascar's botanical species diversity, the discoveries of which he shared with his son after his explorations of the island. His father was also active in the protection of the environment of the Madagascar mangroves. Alain Gachet has said that his childhood experiences in Madagascar instilled in him a love and respect for nature.
When he reached the age of 14, seven years after the independence of Madagascar, he moved to the capital Tananarive, where his father had been transferred. After discovering and reading the Bible, he developed a passion for biblical history and its related archeology, which led him to seek, in 1966, a summer residency in Israel at the kibbutz of Evron, in Galilee. During his stay, he had the opportunity to do an internship in geology and hydrogeology with experts from the University of Tel Aviv in the Sinai desert.
In 1969, his family settled in mainland France. After French post-secondary Preparatory Classes, he was accepted into the Ecole Nationale Superieure des Mines de Nancy from which he graduated in 1975.
Career as Petroleum Engineer at Elf Aquitaine
Alain Gachet began a career in 1978 with Elf Aquitaine and was assigned to the research and development program in the department responsible for the processing and interpretation of seismic data. He then participated in the oil exploration of the North Sea. He made his mark by inventing a method to identify new gas fields. For this, he earned the Elf Innovation Award.
From there he was sent to Gabon, then to the Middle East, and then to Kazakhstan and Russia, before leaving for Congo-Brazzaville. In 1996, in a "disagreement over the policy of the company" at the time of the Congo-Brazzaville Civil War, he decided to resign in order to create his own exploration company. Because of a non-competition clause, he was restricted from working in the oil industry for four years following his resignation.
Career as an inventor
Entrepreneur in the field of exploration.
Alain Gachet subsequently received specialized training in radar exploration and acquisition techniques in the United States. In 1999, he founded a mining exploration company, Radar Technologies International Exploration, aka RTI Exploration, which, at that time, focused on exploration and discovery of gold and ore deposits.
He then embarked on gold exploration in the rainforest of the Republic of Congo, then in Mali. He prospected for gold-bearing zones by panning the river bottoms for months with the Central African Pygmies in the equatorial forest, and thereby acquired a solid knowledge of the sub soils of the Congo. He combined this knowledge with his recently acquired skills in new radar technology which allowed Gachet to penetrate the clouds and deep canopy of the jungle and to locate the gold's source.
Alain Gachet was later commissioned by Shell, as a consultant, to prospect for an oil project in Sirte, Libya.
Genesis of using satellite radar to prospect for water
In June 2002, while studying satellite radar images taken of the Libyan desert for Shell Oil Exploration, Gachet identified unexpected features in the radar echoes of southern Sirte. These proved to be the signs of a gigantic leak, measuring billions of cubic meters, originating from the Great Man-Made River, a huge artificial underground pipeline four thousand kilometers long and four meters in diameter built by Colonel Gaddafi. As the bearer of bad news, the engineer quickly fled Libya but said he realized "he was onto something new and important".
Gachet says, "The experience gave me the idea that I could use radar frequencies to find underground water that could be used to help people". Over the next two years, he used geophysical, geological and satellite data in conjunction with various other disciplines (physics, chemistry, geophysics, seismology) and a complex algorithm he developed to render 3D maps of water occurrence probability in specific areas, indicating where and at what depth groundwater was likely to be located. The technique makes it possible to "remove the outer layers like an onion", and to thereby focus ground searches on interesting areas.
Testing and development
During the Darfur Crisis in 2004, more than 250,000 Sudanese refugees were forced to relocate to the desert landscape of eastern Chad. Providing water to the refugees was the most important priority for their survival. Every day without enough water meant the loss of 200 children's lives in the camps. UNOSAT contacted Alain Gachet to solve water supply issues.
RTI was commissioned by the Office of the United Nations High Commissioner for Refugees for the implementation of its method of radar detection and for the drilling of some 350 wells in eastern Chad and northern Sudan on the sites of camps sheltering 250,000 refugees from the War in Darfur, where water access had been identified as one of the major sources of the conflict. Alain Gachet and the drilling team traveled across the deserted region, comparing the satellite images with visual cues present in the field.
RTI developed maps using WATEX technology to identify target areas to drill for deep groundwater. In 2004, with the help of UNHCR, around 250,000 refugees in the Ouaddaï Region in Chad were provided water from this project and, in 2005 and 2006, with contributions from USGS and the U.S. Department of State, this project contributed to the survival of hundreds of thousands of internally displaced persons camps in Darfur in Sudan.
The success of the operation attracted the attention of Bill Woods, White House cartographer at the time, who called upon Alain Gachet in June 2005. The system of the French scientist has been appraised as "genius" by Dr. Saud Amer, head of the United States Geological Survey, and former United States Ambassador to the Republic of the Congo William Ramsay has stated that Alain Gachet "has provided a tool for humanity to address one of the most serious problems of the 21st century".
The United States Agency for International Development (USAID) entrusted RTI with a second mission in the Darfur region of Sudan. On the specific indications of Alain Gachet, 1,700 wells were drilled. Prior to RTI's involvement, the NGOs responsible for locating water sources had a success rate of around 33 percent. Using RTI, the rate of success was 98 percent.
Additional projects and assignments
RTI participated in the archaeological research of the Hebrew Mission in Jerusalem, on the supposed tomb of King Herod the Great.
In 2013, upon the request of the UN, Gachet and his team located a vast body of groundwater in the desert county of Turkana in Kenya: the Lotikipi Basin Aquifer, one of the largest aquifers known to date on the African continent. It contains 200 billion cubic meters of fresh water and covers an area of 4,164 km2. The aquifer is nine times the size of any other aquifer in Kenya and has the potential to supply the population with enough fresh water to last 70 years, or indefinitely if properly managed. The aquifer is about the size of Rhode Island and replenishes at a rate of 1.2 billion cubic meters a year. Gachet's company, RTI, established a new basis for the mapping of Kenya. Some wells remain open, but one planned for 160,000 nomads in the region was dismantled because the Kenyan government could no longer fund it.
In 2015, the Iraqi government appealed to Alain Gachet to find new water reserves in an attempt to relieve Iraq from the threat of water shortages generated upstream by the dams along the Tigris and Euphrates rivers. With the support of the European Union and UNESCO, Gachet delineated a map showing more than 67 aquifers, including 64 located in northern Iraq, over a territory of more than 1.68 million hectares.
Alain Gachet continues to focus on the implementation and development of the WATEX method to improve upon the exploration of water at depths of more than 80 meters below the earth's surface. Deep ground aquifers have been discovered using WATEX technology in Afghanistan, south-western Angola, Ethiopia, Eritrea, eastern Chad, Darfur in Sudan, Iraq, Gabon, Togo, and the Sultanate of Oman.
Since 2018, Alain Gachet and his company RTI Exploration have been working together with MINAE (Ministry of Environment, Energy, and Telecommunications of Costa Rica) and the U.S. Geological Survey to evaluate aquifers that will provide Costa Rica with clearer information related to its underground water sources. The project "Mapping of the Ground Water Hydric Resource in Costa Rica" uses WATEX technology, which provides images of Costa Rica's subterranean water sources, bypassing interference from surface obstacles like infrastructure and vegetation. This will allow Costa Rica to develop a better strategy for using its underground water sources to address the issues of drought and climate change.
Awards and honors
In January 2015, upon the recommendation of Ségolène Royal, the French Minister of Ecology at that time, Alain Gachet was awarded the rank of Chevalier of the French Legion of Honour by Yves Coppens. In the US, he was elected to the Space Technology Hall of Fame by NASA and the Space Foundation in 2016 for using modern space technologies for the progress of humanity.
In 2017, endorsed by the organizations that had previously accompanied him as well as by George Washington University, the University of Turin and the Canadian Space Agency, Gachet called for an increase in the drilling of aquifers and for the establishment of regulation on rapid agricultural development, relaying his message through television, lectures and writing. He has written a memoir of his experiences, published in 2015 by JC Lattès: Le sourcier qui fait jaillir l'eau du désert.
According to Gachet, only aquifers that are renewable can reasonably be exploited, because fossil water must be preserved for future generations. Together, however, aquifers represent by far the largest freshwater deposit on the planet: there is more fresh water hidden underground than is visible in lakes, rivers and glaciers, up to thirty times more according to NASA estimates.
Gachet stated in article published in The Independent that "over a billion people have no easy access to drinking water and that 1.8 million children die each year from diseases linked to drinking bad water". He added that "in another half century, there will be 5.5 billion - two thirds of the population - living in a state of severe water-shortage." In an article published in the French magazine Ouest France, Gachet points out that there is enough deep groundwater in Africa to transfigure the entire face of the continent - enough water to stop many wars, rebuild agriculture and restore dignity and hope to millions of men (translated from statement in French).
He directly links the European migration crisis of the 21st century to the desertification of the Sahel and the agricultural underdevelopment it entails; according to him, conflicts in the Middle East are also linked.
The Watex Method
WATEX is a portmanteau of the words "water" and "exploration". WATEX is an interdisciplinary approach to groundwater exploration, developed by Alain Gachet, involving a fusion of several types of measurements: geological, geophysical, climatic, and spatial remote sensing. Combining these data, a grid of probabilities guides the physical exploration, both on the surface and in the depths of the subsoil.
In cases where radar images do not allow for ground penetration beyond a depth of twenty meters, the WATEX system permits the inference of a sufficient number of parameters to reveal certain geological features up to four hundred meters under the surface, and the results are expressed on color maps. The WATEX technology produces detailed maps indicating where water has accumulated deep beneath the surface. The method is likened to peeling back the Earth's surface "like an onion".
The Sudan Darfur campaign conducted between 2005 and 2008 on 1,700 wells showed a WATEX success rate of 98%.
A 2006 report by George Washington University describes WATEX as a fusion of humanitarian intelligence, hydrology, geology, and geospatial analysis, and concludes that its use significantly reduces the risk and cost of water exploration, limiting ground surveys to only areas with high water potential.
Obstacles
The cost of deep drilling is thirty times greater than that of a conventional well: up to $200,000 to equip a well at 300 meters, compared to just $6,000 for a conventional well.
A 2017 publication by the Schiller Institute, "Extending the New Silk road to West Asia and Africa", states that "WATEX and other modern theories of hydrology prove the criticism that fossil water and completely confined aquifers is a myth". The report points out that "while a great deal of water is stored for thousands of years in some underground aquifers, a great amount of water is continuously recharging aquifers through very deep fracture systems, upwelling from great depths...".
Bibliography
Alain Gachet, L'homme qui fait jaillir l'eau du désert : à la recherche de l'eau profonde (translation of the title - The Man Who made Water Gush from the Desert: In Search of Deep Water) Paris, JC Lattès, 2015, 276 pages
Expositions
H. Staub & A. Gachet, Terra, Galerie Omnius, Arles, exhibition from April 2, 2016, to May 30, 2016.
Watex picture illustrating "The wonderful side of nature and the horrible side of man."
H. Staub & A. Gachet, Terra II, Galerie Omnius, Arles, exhibition from 4 July 2016 to 15 September 2016.
See also
RTI Exploration.
S. Amer & A. Gachet, "Groundwater exploration WATEX applications with Ground Penetrating Radars", USGS, Reston, Virginia, 2016.
References
Hydrogeology
Hydrology
Hydrologists
Knights of the Legion of Honour
1951 births
Living people | Alain Gachet | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,941 | [
"Hydrology",
"Hydrologists",
"Hydrogeology",
"Environmental engineering"
] |
61,590,350 | https://en.wikipedia.org/wiki/Hay%27s%20bridge | Hay's bridge is used to determine the Inductance of an inductor with a high Q factor. Maxwell's bridge is only appropriate for measuring the values for inductors with a medium quality factor. Thus, the bridge is the advanced form of Maxwell’s bridge.
One of the arms of a Hay's bridge has an accurately characterized capacitor used to balance the unknown inductance value. The other arms contain resistors.
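A short Python sketch of the textbook balance conditions for Hay's bridge (the standard derivation, not quoted from this article): R1 in series with the known capacitor C1 forms one arm, R2 and R3 are the purely resistive arms, and Lx with series resistance Rx models the unknown inductor:

```python
import math

def hay_balance(r1, r2, r3, c1, f_hz):
    w = 2 * math.pi * f_hz
    d = 1 + (w * c1 * r1) ** 2                     # common denominator term
    lx = r2 * r3 * c1 / d                          # unknown inductance (H)
    rx = (w ** 2) * (c1 ** 2) * r1 * r2 * r3 / d   # series resistance (ohm)
    q = 1 / (w * c1 * r1)                          # inductor quality factor
    return lx, rx, q

# Illustrative component values at a 1 kHz test frequency.
lx, rx, q = hay_balance(r1=1e3, r2=10e3, r3=10e3, c1=1e-9, f_hz=1e3)
print(f"Lx = {lx * 1e3:.1f} mH, Rx = {rx:.2f} ohm, Q = {q:.0f}")
```

Since Q = 1/(ωC1R1), the correction term (1 + 1/Q²) is negligible for high-Q inductors, which is why the bridge suits them.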
References
Electrical meters
Bridge circuits | Hay's bridge | [
"Technology",
"Engineering"
] | 99 | [
"Measuring instruments",
"Electrical meters"
] |
61,590,966 | https://en.wikipedia.org/wiki/Conditional%20symmetric%20instability | Conditional symmetric instability, or CSI, is a form of convective instability in a fluid subject to temperature differences in a uniform rotation frame of reference while it is thermally stable in the vertical and dynamically in the horizontal (inertial stability). The instability in this case develop only in an inclined plane with respect to the two axes mentioned and that is why it can give rise to a so-called "slantwise convection" if the air parcel is almost saturated and moved laterally and vertically in a CSI area. This concept is mainly used in meteorology to explain the mesoscale formation of intense precipitation bands in an otherwise stable region, such as in front of a warm front. The same phenomenon is also applicable to oceanography.
Principle
Hydrostatic stability
An air particle at a certain altitude will be stable if its adiabatically modified temperature during an ascent is equal to or cooler than that of the environment. Similarly, it is stable if its temperature during a descent is equal to or warmer than that of the environment. In the case where the temperature is equal, the particle will remain at the new altitude, while in the other cases, it will return to its initial level.
In the diagram on the right, the yellow line represents a raised particle whose temperature at first remains below that of the environment (stable air), which entails no convection. Then, in the animation, surface warming occurs and the raised particle remains warmer than the environment (unstable air). A measure of hydrostatic stability is the variation with height of the equivalent potential temperature (θe); a simple classifier is sketched after the list below:
If θe diminishes with altitude, the air mass is unstable.
If θe remains the same with altitude, the air mass is neutral.
If θe increases with altitude, the air mass is stable.
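A minimal Python sketch of this test (illustrative, not from the original text), classifying layer stability from θe values ordered from low to high altitude:

```python
def hydrostatic_stability(theta_e_profile):
    """Label each layer from consecutive theta-e values (K), bottom to top."""
    labels = []
    for lower, upper in zip(theta_e_profile, theta_e_profile[1:]):
        if upper < lower:
            labels.append("unstable")
        elif upper == lower:
            labels.append("neutral")
        else:
            labels.append("stable")
    return labels

print(hydrostatic_stability([330.0, 328.0, 328.0, 331.0]))
# ['unstable', 'neutral', 'stable']
```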
Inertial Stability
In the same way, a lateral displacement of an air particle changes its absolute vorticity ζa. This is given by the sum of the planetary vorticity, f, and ζg, the geostrophic (or relative) vorticity of the parcel:

ζa = f + ζg, where ζg = ∂vg/∂x − ∂ug/∂y

Where:
vg and ug are the meridional and zonal geostrophic velocities respectively.
x and y correspond to the zonal and meridional coordinates.
f is the Coriolis parameter, which describes the component of vorticity around the local vertical that results from the rotation of the reference frame.
ζg is the relative vorticity around the local vertical. It is found by taking the vertical component of the curl of the geostrophic velocity.
ζg can be positive, zero or negative depending on the conditions in which the move is made. As the absolute vorticity is almost always positive on the synoptic scale, one can consider that the atmosphere is generally stable for lateral movement. Inertial stability is low only when ζa is close to zero. Since f is always positive, this can be satisfied only on the anticyclonic side of a strong jet stream maximum or in a barometric ridge at altitude, where the velocity derivatives in the equation give a significantly negative ζg.
The variation of the angular momentum of the displaced particle relative to its environment indicates the stability:
If the momentum of the particle equals that of its new environment, the particle remains at the new position, because its momentum has not changed.
If the momentum of the particle is greater than that of the environment, the particle returns to its original position.
If the momentum of the particle is smaller than that of the environment, the particle continues its displacement.
Slantwise movement
Under certain stable hydrostatic and inertial conditions, slantwise displacement may, however, be unstable when the particle changes air mass or wind regime. The figure on the right shows such a situation. The displacement of the air particle is considered with respect to surfaces of absolute momentum (Mg), which increase from left to right, and surfaces of equivalent potential temperature (θe), which increase with height.
Lateral movement A
Horizontal accelerations (to the left or right of an Mg surface) are due to an increase or decrease in the Mg of the environment in which the particle moves. In these cases, the particle accelerates or slows down to adjust to its new environment. Particle A undergoes a horizontal acceleration that gives it positive buoyancy as it moves to colder air, and it decelerates as it moves to a region of smaller Mg. The particle rises and eventually becomes colder than its new environment. At this point, it has negative buoyancy and begins to descend. In doing so, its Mg increases and the particle returns to its original position.
Vertical displacement B
Vertical movements in this case result in negative buoyancy as the particle encounters warmer air (θe increases with height), and in horizontal acceleration as it moves to surfaces of larger Mg. As the particle goes back down, its Mg decreases to fit the environment and the particle returns to B.
Slantwise displacement C
Only case C is unstable. Horizontal acceleration combines with a vertical upward disturbance and allows oblique displacement. Indeed, the θe of the particle is larger than the θe of the environment, while the momentum of the particle is less than that of the environment. An oblique displacement thus produces positive buoyancy and an acceleration in the direction of the oblique displacement, which reinforces it.
The condition for having conditional symmetric instability in an otherwise stable situation is therefore that (a simple check is sketched after this list):
the slope of the θe surfaces is greater than that of the Mg surfaces;
the laterally displaced air is almost saturated.
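A minimal sketch of this criterion (illustrative, with assumed slope and humidity values), flagging CSI where a nearly saturated parcel sees θe surfaces sloping more steeply than Mg surfaces:

```python
def is_csi(slope_theta_e, slope_mg, relative_humidity, rh_threshold=0.9):
    """Both slopes are dz/dy of the respective surfaces at the same point."""
    nearly_saturated = relative_humidity >= rh_threshold
    return nearly_saturated and slope_theta_e > slope_mg

print(is_csi(slope_theta_e=0.02, slope_mg=0.01, relative_humidity=0.95))  # True
print(is_csi(slope_theta_e=0.02, slope_mg=0.03, relative_humidity=0.95))  # False
```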
Potential effects
CSI is usually embedded in large areas of vertical upward motion. The ideal situation is a geostrophic flow from the South with wind speeds that increase with height. The environment is well mixed and close to saturation. Since the flow is unidirectional, the u component of the wind can be set equal to zero, which establishes a symmetrical flow perpendicular to the temperature gradient in the air mass. This type of flow is typically found in baroclinic atmospheres with cold air to the west.
The image to the right shows such a situation in winter, with CSI associated with negative equivalent potential vorticity near a warm front. Banded snow forms along the front, near the low pressure area and the CSI region.
Slantwise convection
If a particle is climbing in a CSI zone, it will cool down and the water vapor will condense upon saturation, giving cloud and precipitation by oblique convection. For example, in front of a warm front, the air mass is stable because the mild air overrides a cold mass. The geostrophic equilibrium brings back any particle moving perpendicularly from the center of the depression towards it. However, an upwardly oblique displacement produced by synoptic-scale upward acceleration in a CSI layer gives parallel bands of heavy rainfall.
Conditional symmetric instability affects a layer that can be thin or very deep in the vertical, similar to hydrostatic convection. The thickness of the layer determines the enhancement of convective precipitation within a region of otherwise stratiform clouds. As the motion occurs in an area near saturation, the particle remains very close to the moist adiabatic lapse rate, which gives it a limited convective available potential energy (CAPE). The rate of climb in a slantwise convection zone ranges from a few tens of centimeters per second to a few meters per second. This is usually below the updraft speed needed for lightning in a cumulonimbus, i.e. 5 m/s, which limits the occurrence of lightning with CSI. It is however possible in:
The trailing precipitation region of mesoscale convective systems.
Wintertime convection because the lower and colder tropopause is helping the ionization of upward moving ice crystals.
In the eyewall during the deepening phase of mature hurricanes, although rarely, as this is a symmetrically neutral region and is generally free of lightning activity.
Slantwise convection bands have several characteristics:
They are parallel
They are parallel to the thermal wind
They move with the general circulation
The space between the bands is proportional to the thickness of the CSI layer
Subsidence
Conversely, if the particle slides downward, it warms up and becomes relatively less saturated, dissipating clouds. The snow produced at higher altitude by the slantwise convection also sublimates in the descending flow and accelerates it, giving descent speeds that can reach 20 m/s. This effect is associated with the descent to the ground of the sting jet.
References
External links
Fluid mechanics
Meteorological concepts
Weather prediction
Physical oceanography
Atmospheric thermodynamics
Severe weather and convection | Conditional symmetric instability | [
"Physics",
"Engineering"
] | 1,651 | [
"Weather prediction",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Weather",
"Civil engineering",
"Physical oceanography",
"Fluid mechanics"
] |
42,560,314 | https://en.wikipedia.org/wiki/LacUV5 | The lacUV5 promoter is a mutated promoter from the Escherichia coli lac operon which is used in molecular biology to drive gene expression on a plasmid. lacUV5 is very similar to the classical lac promoter, containing just 2 base pair mutations in the -10 hexamer region, compared to the lac promoter. LacUV5 is among the most commonly used promoters in molecular biology because it requires no additional activators and it drives high levels of gene expression.
The lacUV5 promoter sequence conforms more closely to the consensus sequence recognized by bacterial sigma factors than the traditional lac promoter does. Due to this, lacUV5 recruits RNA Polymerase more effectively, thus leading to higher transcription of target genes. Additionally, unlike the lac promoter, lacUV5 works independently of activator proteins or other cis regulatory elements (apart from the -10 and -35 promoter regions). While no activators are required, lacUV5 promoter expression can be regulated by the LacI repressor and can be induced with IPTG, which is an effective inducer of protein expression when used in the concentration range of 100 μM to 1.5 mM. Due to this control, the lacUV5 promoter is commonly found on expression plasmids and is used when controllable but high levels of a product are desired.
The lacUV5 mutation was first identified in 1970 in a study of lac promoter mutants that produce higher yields. Some of them, including UV5, have lost catabolite repression at the CAP site. Development into cloning vectors dates to 1982, when a UV5-carrying phage known as "λ h80 lacUV5 cI857" had its genome spliced with the HaeIII restriction enzyme to make plasmids carrying the fragment with UV5.
Sequence
Modern lacUV5 is seen in the BL21(DE3) strain, which carries both a lac operon with the standard promoter and a lacUV5 operon split by the DE3 prophage (and as a result driving the T7 RNA polymerase instead). The two important mutations lie in the -10 region of the alignment below.
lacUV5 TCACTCATTAGGCACCCCAGGCTTTACACTTTATGCTTCCGGCTCGTATAATGTGTGGAATTGTGAGCGGATAACAATTTCACACAGGAAACAGCT
LacZ TCACTCATTAGGCACCCCAGGCTTTACACTTTATGCTTCCGGCTCGTATGTTGTGTGAAATTGTGAGCGGATAACAATTTCACACAGGAAACAGCT
position ^-35 ^-10 ^+1
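A small Python sketch (not part of the article) that compares the two sequences above and prints every mismatch; the changes inside the -10 hexamer are the functionally important ones:

```python
lac_uv5 = ("TCACTCATTAGGCACCCCAGGCTTTACACTTTATGCTTCCGGCTCGTATAATGTGTGG"
           "AATTGTGAGCGGATAACAATTTCACACAGGAAACAGCT")
lac_wt  = ("TCACTCATTAGGCACCCCAGGCTTTACACTTTATGCTTCCGGCTCGTATGTTGTGTGA"
           "AATTGTGAGCGGATAACAATTTCACACAGGAAACAGCT")

# Walk both sequences in parallel and report each differing position.
for pos, (wt, uv5) in enumerate(zip(lac_wt, lac_uv5), start=1):
    if wt != uv5:
        print(f"position {pos}: {wt} -> {uv5}")
```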
References
Escherichia coli
Gene expression
Genetics techniques | LacUV5 | [
"Chemistry",
"Engineering",
"Biology"
] | 570 | [
"Genetics techniques",
"Gene expression",
"Genetic engineering",
"Model organisms",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Escherichia coli"
] |
42,563,034 | https://en.wikipedia.org/wiki/KiSAO | The Kinetic Simulation Algorithm Ontology (KiSAO) supplies information about existing algorithms available for the simulation of systems biology models, their characterization and interrelationships. KiSAO is part of the BioModels.net project and of the COMBINE initiative.
Structure
KiSAO consists of three main branches:
simulation algorithm
simulation algorithm characteristic
simulation algorithm parameter
The elements of the algorithm branch are linked to the characteristic and parameter branches using has characteristic and has parameter relationships, respectively. The algorithm branch itself is hierarchically structured using relationships which denote that the descendant algorithms were derived from, or specify, more general ancestors.
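A hypothetical sketch of how such KiSAO-style relationships could be represented as subject-predicate-object triples; the term names below are illustrative, not actual KiSAO identifiers:

```python
triples = [
    ("stochastic simulation algorithm", "has characteristic", "stochastic behaviour"),
    ("stochastic simulation algorithm", "has parameter", "random seed"),
    ("tau-leaping method", "is a", "stochastic simulation algorithm"),
]

# Print each relationship in a readable arrow form.
for subject, predicate, obj in triples:
    print(f"{subject} --{predicate}--> {obj}")
```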
See also
COMBINE
SED-ML
MIRIAM
SBO
TEDDY
References
Systems biology
Bioinformatics software
Free science software
Algorithms
Biological databases | KiSAO | [
"Mathematics",
"Biology"
] | 147 | [
"Bioinformatics software",
"Applied mathematics",
"Algorithms",
"Mathematical logic",
"Bioinformatics",
"Biological databases",
"Systems biology"
] |
42,564,019 | https://en.wikipedia.org/wiki/Rosalind%20%28education%20platform%29 | Rosalind is an educational resource and web project for learning bioinformatics through problem solving and computer programming. Rosalind users learn bioinformatics concepts through a problem tree that builds up biological, algorithmic, and programming knowledge concurrently or learn by topics, with the topic of Alignment, Combinatorics, Computational Mass Spectrometry, Heredity, Population Dynamics and so on. Each problem is checked automatically, allowing for the project to also be used for automated homework testing in existing classes.
Rosalind is a joint project between the University of California, San Diego and Saint Petersburg Academic University along with the Russian Academy of Sciences. The project's name commemorates Rosalind Franklin, whose X-ray crystallography with Raymond Gosling facilitated the discovery of the DNA double helix by James D. Watson and Francis Crick. It was recognized by Homolog.us as the Best Educational Resource of 2012 in their review of the Top Bioinformatics Contributions of 2012. It hosts over 88,000 problem solvers.
Rosalind was used to teach the first Bioinformatics Algorithms MOOC on Coursera in 2013, including interactive learning materials hosted on Stepic.
References
External links
Rosalind
Bioinformatics software
Biology websites
Computer programming | Rosalind (education platform) | [
"Technology",
"Engineering",
"Biology"
] | 252 | [
"Bioinformatics software",
"Computer programming",
"Software engineering",
"Bioinformatics",
"Computers"
] |
42,564,106 | https://en.wikipedia.org/wiki/C14H16N2O6S3 | The molecular formula C14H16N2O6S3 (molar mass: 404.48 g/mol) may refer to:
Aldesulfone sodium
Sulfoxone
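The quoted molar mass can be checked against standard atomic weights; a minimal Python sketch follows (rounded IUPAC atomic weights, so the last digit may differ slightly from the quoted value).

```python
# Sketch: recompute the molar mass of C14H16N2O6S3 from standard atomic
# weights (rounded IUPAC values; small rounding differences from the
# quoted 404.48 g/mol are expected).
atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}
formula = {"C": 14, "H": 16, "N": 2, "O": 6, "S": 3}

molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
print(f"{molar_mass:.2f} g/mol")  # ~404.47 g/mol
```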
Molecular formulas | C14H16N2O6S3 | [
"Physics",
"Chemistry"
] | 43 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
42,569,187 | https://en.wikipedia.org/wiki/NTERA-2 | The NTERA-2 (also designated NTERA2/D1, NTERA2, or NT2) cell line is a clonally derived, pluripotent human embryonal carcinoma cell line.
Characteristics
NTERA-2 cells exhibit biochemical and developmental properties similar to the cells of the early embryo, and can be used to study the early stages of human neurogenesis. The cells exhibit a high nucleo-cytoplasmic ratio, prominent nucleoli, and the expression of the glycolipid antigen SSEA-3. They also express nestin and vimentin, which are found in neuroepithelial precursor cells, as well as microtubule-associated proteins expressed in human neuroepithelium. NTERA-2 cells also accumulate cytoplasmic glycogen.
Differentiation
NTERA-2 cells differentiate when exposed to retinoic acid and lose expression of SSEA-3. Differentiation produces neurons via asymmetric cell division, and these cells form interconnected axon networks and express tetanus toxin receptors and neurofilament proteins. By 10–14 days of exposure to retinoic acid, NTERA-2 cells begin to take on the morphological characteristics of neurons, such as rounded cell bodies and processes. NTERA-2 cells can also produce a small number of oligodendrocyte-type cells, but they cannot differentiate into astrocytes.
Research
Because of their similarity to human embryonic stem cells, NTERA-2 cells are used to study the dopaminergic differentiation of neuronal precursor cells. They have also been proposed as an in vitro test system for developmental neurotoxicity.
History
NTERA-2 cells were originally isolated from a lung metastasis from a 22-year-old male patient with primary embryonal carcinoma of the testis. The tumor was xenografted onto a mouse, and from this cells were cloned into the NTERA-2 cell line.
References
External links
Cellosaurus entry for NTERA-2
Human cell lines
Stem cell research | NTERA-2 | [
"Chemistry",
"Biology"
] | 433 | [
"Translational medicine",
"Tissue engineering",
"Stem cell research"
] |
51,208,917 | https://en.wikipedia.org/wiki/Community%20search | Discovering communities in a network, known as community detection/discovery, is a fundamental problem in network science, which has attracted much attention in the past several decades. In recent years, with the tremendous growth of research on big data, another related but different problem, called community search, which aims to find the most likely community that contains the query node, has attracted great attention from both academia and industry. It is a query-dependent variant of the community detection problem. A detailed survey of community search, which reviews the recent studies, is available in the literature.
Main advantages
As pointed out in the first work on community search, published in SIGKDD'2010, many existing community detection/discovery methods consider the static community detection problem, where the graph is partitioned a priori with no reference to query nodes. Community search, in contrast, focuses on the most likely community containing the query vertex. The main advantages of community search over community detection/discovery are listed below:
(1) High personalization. Community detection/discovery often uses the same global criterion to decide whether a subgraph qualifies as a community. In other words, the criterion is fixed and predetermined. But in reality, communities for different vertices may have very different characteristics. Moreover, community search allows the query users to specify more personalized query conditions. In addition, the personalized query conditions enable the communities to be interpreted easily.
For example, a recent line of work focuses on attributed graphs, where nodes are associated with attributes such as keywords, and tries to find communities, called attributed communities, that exhibit both strong structural cohesiveness and keyword cohesiveness. The query users are allowed to specify a query node and some other query conditions: (1) a value k, the minimum degree for the expected communities; and (2) a set of keywords, which controls the semantics of the expected communities. The communities returned can be easily interpreted through the keywords shared by all the community members.
(2) High efficiency. With the striking boom of social networks in recent years, there are many real big graphs. For example, the user bases of Facebook and Twitter number in the billions. As community detection/discovery often finds all the communities in an entire social network, it can be very costly and time-consuming. In contrast, community search often works on a subgraph, which is much more efficient. Moreover, detecting all the communities in an entire social network is often unnecessary. For real applications like recommendation and social media markets, people often focus on the communities they are actually interested in, rather than all the communities.
Some recent studies have shown that, for million-scale graphs, community search often takes less than 1 second to find a well-defined community, which is generally much faster than many existing community detection/discovery methods. This also implies that, community search is more suitable for finding communities from big graphs.
(3) Support for dynamically evolving graphs. Almost all graphs in real life evolve over time. Since community detection often uses the same global criterion to find communities, it is not sensitive to updates of nodes and edges in graphs. In other words, the detected communities may lose freshness after a short period of time. On the contrary, community search can handle this easily, since it is able to search for communities in an online manner, based on a query request.
Metrics for community search
Community search often uses well-defined, fundamental graph metrics to formulate the cohesiveness of communities. Commonly used metrics are k-core (minimum degree), k-truss, k-edge-connectedness, etc. Among these measures, the k-core metric is the most popular one and has been used in many recent studies.
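As a concrete illustration of k-core-based community search, here is a minimal sketch using the networkx library: it keeps the connected component of the k-core that contains the query node. The karate club example graph and the choice of k are arbitrary, and real systems use far more elaborate formulations.

```python
# Sketch of a minimal k-core-based community search with networkx:
# keep the connected component of the k-core that contains the query node.
import networkx as nx

def community_search(G: nx.Graph, query, k: int):
    core = nx.k_core(G, k=k)  # maximal subgraph with minimum degree >= k
    if query not in core:
        return None           # the query node belongs to no k-core community
    # Restrict to the connected component containing the query node.
    nodes = nx.node_connected_component(core, query)
    return core.subgraph(nodes).copy()

G = nx.karate_club_graph()
community = community_search(G, query=0, k=3)
print(sorted(community.nodes())[:10])
```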
References
Community
Network theory | Community search | [
"Mathematics"
] | 785 | [
"Network theory",
"Mathematical relations",
"Graph theory"
] |
51,212,201 | https://en.wikipedia.org/wiki/Institute%20for%20Nuclear%20Research%20%28NASU%29 | Institute for Nuclear Research of the National Academy of Sciences of Ukraine (KINR) () is a research institute located in Kyiv, Ukraine. The Institute publishes the journal Nuclear Physics and Atomic Energy.
History
In 1944, towards the end of World War II, a department was created at the Institute of Physics in Kyiv to deal with a number of issues concerning nuclear physics and the use of atomic energy. To carry out the outlined activities, the following facilities were placed into operation in sequence: a U-120 cyclotron in 1956, a VVR-M (water-water) research reactor in 1960, and an EGP-5 electrostatic generator in 1964.
On 26 March 1970, the Presidium of the National Academy of Sciences of Ukraine (at that time the Ukrainian Academy of Sciences), in pursuance of the relevant resolution of the Council of Ministers of the Ukrainian SSR, adopted resolution No. 105 on the creation of the NASU Institute for Nuclear Research (IYaD) on the basis of a number of nuclear departments of the NASU Institute of Physics. It is guarded by the 22nd National Guard Brigade.
Major scientific directions
The main directions of scientific research are Nuclear Physics, Atomic Energy, Solid-state physics, Plasma Physics, Radiobiology and Radioecology.
Experimental facilities
Research Nuclear reactor WWR-M, Cyclotron U-120, Isochronous cyclotron U-240, 10 MeV Electrostatic Tandem Accelerator.
Directors
1970–1974 Mitrofan Pasichnyk
1974–1983 Oleh Nimets
1984–2015 Ivan Vyshnevskyi
2015– Vasyl Slisenko
Known scientists
Aleksandr Leipunskii
Vilen Strutinsky
Kostyantyn O. Terenetsky
Yuri G. Zdesenko
References
Nuclear research institutes
Research institutes in Kyiv
Science and technology in Kyiv
Research institutes in the Soviet Union
Institutes of the National Academy of Sciences of Ukraine
NASU Institute of Physics
Research institutes established in 1970
1970 establishments in the Soviet Union | Institute for Nuclear Research (NASU) | [
"Engineering"
] | 391 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
38,339,304 | https://en.wikipedia.org/wiki/J.%20H.%20Wilkerson%20%26%20Son%20Brickworks | J. H. Wilkerson & Son Brickworks was a historic abandoned brickworks and national historic district located at Milford, Kent County, Delaware. The district includes the sites of three contributing buildings and one contributing site at the brickworks, which operated from 1912 to 1957. Some of the sheds, machinery, kiln, and other structures which housed the machinery remained standing; others have deteriorated or collapsed. Last standing were the storage shed, the shed over the brick-making machine, and one of the drying sheds. All of the machinery was in place, as were other pieces of equipment used in the brick-making process. The walls of the kiln remain standing, just as they would have been left after the fired bricks were removed.
It was listed on the National Register of Historic Places in 1978. It is listed on the Delaware Cultural and Historic Resources GIS system as destroyed or demolished.
References
External links
Brick Making Machine website
Historic American Engineering Record in Delaware
Industrial buildings and structures on the National Register of Historic Places in Delaware
Historic districts on the National Register of Historic Places in Delaware
Buildings and structures in Kent County, Delaware
Milford, Delaware
Kilns
Brickworks in the United States
National Register of Historic Places in Kent County, Delaware
Demolished but still listed on the National Register of Historic Places | J. H. Wilkerson & Son Brickworks | [
"Chemistry",
"Engineering"
] | 256 | [
"Chemical equipment",
"Kilns"
] |
38,339,537 | https://en.wikipedia.org/wiki/Methylene%20group | A methylene group is any part of a molecule that consists of two hydrogen atoms bound to a carbon atom, which is connected to the remainder of the molecule by two single bonds. The group may be represented as >CH2 or −CH2−, where the '>' denotes the two bonds.
This stands in contrast to a situation where the carbon atom is bound to the rest of the molecule by a double bond, which is preferably called a methylidene group, represented as =CH2. Formerly the methylene name was used for both isomers. The name "methylene bridge" can be used for the single-bonded isomer, to emphatically exclude methylidene. The distinction is often important, because the double bond is chemically different from two single bonds.
The methylene group should be distinguished from the molecule called carbene. This was also formerly called methylene.
Activated methylene
The central carbon in a 1,3-dicarbonyl compound is known as an activated methylene group. Because of the flanking carbonyl groups, the hydrogens on this carbon are especially acidic, and the carbon can easily be deprotonated to give a stabilized carbanion (enolate).
See also
Carbene
Methyl group
Methine
References
Alkanediyl groups | Methylene group | [
"Chemistry"
] | 243 | [
"Substituents",
"Alkanediyl groups",
"Organic chemistry stubs"
] |
38,340,996 | https://en.wikipedia.org/wiki/Methylene%20bridge | In organic chemistry, a methylene bridge, methylene spacer, or methanediyl group is any part of a molecule with formula −CH2−; namely, a carbon atom bound to two hydrogen atoms and connected by single bonds to two other distinct atoms in the rest of the molecule. It is the repeating unit in the skeleton of the unbranched alkanes.
A methylene bridge can also act as a bidentate ligand joining two metals in a coordination compound, such as titanium and aluminum in Tebbe's reagent.
A methylene bridge is often called a methylene group or simply methylene, as in "methylene chloride" (dichloromethane, CH2Cl2). As a bridge in other compounds, for example in cyclic compounds, it is given the name methano. However, the term methylidene group (not to be confused with the term methylene group, nor the carbene methylidene) properly applies to the group when it is connected to the rest of the molecule by a double bond (=CH2), giving it chemical properties very distinct from those of a bridging group.
Reactions
Compounds possessing a methylene bridge located between two strong electron withdrawing groups (such as nitro, carbonyl or nitrile groups) are sometimes called active methylene compounds. Treatment of these with strong bases can form enolates or carbanions, which are often used in organic synthesis. Examples include the Knoevenagel condensation and the malonic ester synthesis.
Examples
Examples of compounds which contain methylene bridges include:
See also
Methyl group
Methylene group
Methyne
References
Functional groups | Methylene bridge | [
"Chemistry"
] | 332 | [
"Functional groups"
] |
38,344,104 | https://en.wikipedia.org/wiki/Chow%E2%80%93Rashevskii%20theorem | In sub-Riemannian geometry, the Chow–Rashevskii theorem (also known as Chow's theorem) asserts that any two points of a connected sub-Riemannian manifold, endowed with a bracket-generating distribution, are connected by a horizontal path in the manifold. It is named after Wei-Liang Chow, who proved it in 1939, and Petr Konstantinovich Rashevskii, who proved it independently in 1938.
The theorem has a number of equivalent statements, one of which is that the topology induced by the Carnot–Carathéodory metric is equivalent to the intrinsic (locally Euclidean) topology of the manifold. A stronger statement that implies the theorem is the ball–box theorem.
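For reference, the bracket-generating (Hörmander) condition assumed by the theorem can be stated as follows:

```latex
% A distribution \Delta \subset TM is bracket generating if iterated Lie
% brackets of its local sections span every tangent space:
\[
  T_pM \;=\; \operatorname{span}\bigl\{\, X(p),\ [X,Y](p),\ [X,[Y,Z]](p),\ \dots
  \;:\; X, Y, Z, \dots \text{ local sections of } \Delta \,\bigr\}
  \quad \text{for every } p \in M .
\]
```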
See also
Orbit (control theory)
References
Metric geometry
Theorems in geometry | Chow–Rashevskii theorem | [
"Mathematics"
] | 168 | [
"Mathematical theorems",
"Mathematical problems",
"Geometry",
"Theorems in geometry"
] |
38,344,422 | https://en.wikipedia.org/wiki/Discrete%20Mathematics%20%26%20Theoretical%20Computer%20Science | Discrete Mathematics & Theoretical Computer Science is a peer-reviewed open access scientific journal covering discrete mathematics and theoretical computer science. It was established in 1997 by Daniel Krob (Paris Diderot University). Since 2001, the editor-in-chief has been Jens Gustedt (Institut National de Recherche en Informatique et en Automatique).
Abstracting and indexing
The journal is abstracted and indexed in Mathematical Reviews and the Science Citation Index Expanded. According to the Journal Citation Reports, the journal has a 2011 impact factor of 0.465.
References
External links
Discrete mathematics journals
Theoretical computer science journals
Academic journals established in 1997
Open access journals
English-language journals | Discrete Mathematics & Theoretical Computer Science | [
"Mathematics"
] | 139 | [
"Discrete mathematics journals",
"Discrete mathematics"
] |
38,345,367 | https://en.wikipedia.org/wiki/Bass%20number | In mathematics, the ith Bass number of a module M over a local ring R with residue field k is the k-dimension of Ext_R^i(k, M). More generally, the Bass number of a module M over a ring R at a prime ideal p is the Bass number of the localization of M for the localization of R (with respect to the prime p). Bass numbers were introduced by Hyman Bass (1963).
The Bass numbers describe the minimal injective resolution of a finitely generated module M over a Noetherian ring: for each prime ideal p there is a corresponding indecomposable injective module, and the number of times this occurs in the ith term of a minimal injective resolution of M is the Bass number μi(p, M).
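In symbols, the standard definition and the way Bass numbers count indecomposable injectives read as follows:

```latex
% The i-th Bass number of M at the prime p, with k(p) the residue field
% of the localization R_p:
\[
  \mu_i(\mathfrak{p}, M) \;=\; \dim_{k(\mathfrak{p})}
  \operatorname{Ext}^i_{R_\mathfrak{p}}\!\bigl(k(\mathfrak{p}),\, M_\mathfrak{p}\bigr),
\]
% and the minimal injective resolution of M has the form
\[
  0 \to M \to I^0 \to I^1 \to \cdots, \qquad
  I^i \;\cong\; \bigoplus_{\mathfrak{p}} E(R/\mathfrak{p})^{\,\mu_i(\mathfrak{p}, M)} .
\]
```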
References
Commutative algebra
Homological algebra | Bass number | [
"Mathematics"
] | 154 | [
"Mathematical structures",
"Fields of abstract algebra",
"Category theory",
"Commutative algebra",
"Homological algebra"
] |
38,345,915 | https://en.wikipedia.org/wiki/Micrometer%20adjustment%20nut | On a manual milling machine, the micrometer adjustment nut limits the depth to which the cutting tool may plunge into the workpiece.
The nut is located on a threaded rod on the mill head. The machine operator moves it down by rotating it clockwise, or up by rotating it counter-clockwise. Moving the nut down increases, and moving it up reduces, the depth to which the cutting tool may plunge into the workpiece.
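A worked example of the underlying arithmetic: the change in permitted plunge depth per turn of the nut equals the thread pitch of the rod. The pitch value below is an assumed illustration, not a specification of any particular machine.

```python
# Illustrative arithmetic only: depth change per turn equals thread pitch.
# The 1.5 mm pitch is an assumed example value, not a machine specification.
thread_pitch_mm = 1.5   # assumed pitch of the threaded rod
turns_clockwise = 2.5   # clockwise turns move the nut down

depth_increase_mm = turns_clockwise * thread_pitch_mm
print(f"Permitted plunge depth increases by {depth_increase_mm:.2f} mm")  # 3.75 mm
```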
References
Machine tools | Micrometer adjustment nut | [
"Engineering"
] | 111 | [
"Machine tools",
"Industrial machinery"
] |
38,350,691 | https://en.wikipedia.org/wiki/Certification%20for%20Sustainable%20Transportation | The Certification for Sustainable Transportation is a national program housed at the University of Vermont Extension that seeks to promote the practice of using energy efficient modes of transportation. The CST's work centers on its eRating vehicle certification program, which is an eco-label for passenger transportation vehicles. The eRating uses a sustainability index which includes factors such as greenhouse gas emissions per passenger mile, emission levels of criteria pollutants, and in certain circumstances factors such as training for drivers and use of endorsed carbon offsets. Once a certain threshold is met, vehicles may qualify for e1, e2, e3, or e4 levels in the certification program.
Other key components of the CST's work are online and in person training programs. The CST offers training programs geared to help drivers and organizations eliminate all unnecessary idling and on eco-driving. These training programs are focused on helping reduce environmental impacts, save fuel, and save money.
The CST is now actively working with companies in 48 states and three Canadian provinces to prevent unnecessary emissions, reduce environmental impact, and decrease consumption of fossil fuels.
This program is not to be confused with the "E-Mark" vehicle equipment safety certification promulgated by the European Union since 2002 under EU Directive 72/245/EEC and amendments to the requirements of Directive 95/54/EC.
History
In 2007, the University of Vermont began the Green Coach Certification research project, which sought to investigate what efficiency standards would be best applied to motor coaches to promote greater energy sustainability. Research was conducted on actual motor coach companies. It also researched whether a certification program could help reduce environmental impacts from the motor coach industry by educating operators and executives about the benefits, both financial and environmental, of adopting fuel saving strategies and switching to alternative sources of fuel. Upon completion of this pilot program the Certification for Sustainable Transportation was founded as a way to expand the size and scope of the initial program. The CST now works well beyond the motor coach industry, deploying its driver training programs to taxi drivers and school bus operators, and offering its eRating certification to vehicle manufacturers, public transportation industries, and pedi-cab operators.
Accomplishments
The CST has partnered with several leading motor coach companies, including Megabus and Academy. The CST also works with the American Bus Association and the United Motorcoach Association to assist with each organization's Green Operator Awards.
Certification
An individual or organization may submit an application to be certified by the CST. If the application meets the proper criteria, the vehicle being examined is given a score. If that score exceeds a certain threshold, an eRating is given. Areas that are rated include greenhouse gas emissions per passenger mile, the use of low-emissions technology, the use of alternative fuels, the purchase of carbon offsets, participation in CST training programs, and the energy efficiency of other facilities associated with the company, such as the home office.
The CST awards eRatings on a per-vehicle basis. There are four levels of certification, ranging from e1 to e4. e1 is the lowest level of certification and means that the vehicle has met the requirements satisfactorily, whereas e4 certification means that the vehicle is at the highest level of energy efficiency.
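The threshold scheme just described can be sketched in code; the score scale and cutoffs below are invented for illustration and are not the CST's actual criteria.

```python
# Purely illustrative sketch of a threshold-based rating scheme like the
# one described above. The score scale and cutoffs are assumptions made
# for the example, not the CST's real criteria.
def e_rating(score):
    thresholds = [(90, "e4"), (75, "e3"), (60, "e2"), (45, "e1")]  # assumed cutoffs
    for cutoff, level in thresholds:
        if score >= cutoff:
            return level
    return None  # below the minimum certification threshold

print(e_rating(78))  # "e3"
```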
Training programs
The Certification for Sustainable Transportation offers two training programs to teach drivers how to be more fuel efficient.
Idle Free
This course teaches drivers about the inefficiency of idling (leaving a bus running while it is not driving) and shows them when they should idle and when they should not. Completing this course gives a driver more points towards a higher eRating. The CST uses a unique approach in this training, concentrating on dispelling myths about idling and helping individual drivers find and identify reasons they personally would like to go idle-free. The CST worked with drivers around the country to design this program, with the intent of having it come across as drivers speaking to other drivers.
Eco-Driving 101
This course teaches drivers techniques that they can use to cut their fuel consumption on the road, as well as to operate their vehicles more safely. Completing this course also gives a driver more points towards a higher eRating.
References
External links
Sustainable transport
University of Vermont
Organizations established in 2007
Environmental certification marks
2007 establishments in Vermont | Certification for Sustainable Transportation | [
"Physics"
] | 845 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
58,952,059 | https://en.wikipedia.org/wiki/Photonically%20Optimized%20Embedded%20Microprocessors | Photonically Optimized Embedded Microprocessors (POEM) is a DARPA program. It aims to demonstrate photonic technologies that can be integrated within embedded microprocessors and enable energy-efficient, high-capacity communications between the microprocessor and DRAM. Realizing POEM technology requires CMOS- and DRAM-compatible photonic links operating at high bit rates with very low power dissipation.
Current research
Research in this field is currently under way at the University of Colorado, the University of California, Berkeley, and the Nanophotonic Systems Laboratory (Ultra-Efficient CMOS-Compatible Grating Coupler Design).
References
External links
University of Colorado, Photonically Optimized Embedded Microprocessors
MIT Photonic Microsystems Group, Nanophotonic Systems Laboratory
Berkeley, Electrical Engineering and Computer Sciences
DARPA , News and events Electricity, Light, Join Forces to Advance Computing
Chen Sun, RISC-V Microprocessor Chip with Photonic I/O
Computer hardware | Photonically Optimized Embedded Microprocessors | [
"Technology",
"Engineering"
] | 203 | [
"Computer engineering",
"Computer hardware",
"Computer hardware stubs",
"Computer systems",
"Computer science",
"Computing stubs",
"Computers"
] |
58,955,266 | https://en.wikipedia.org/wiki/Igor%20Serafimovich%20Tashlykov | Igor Serafimovich Tashlykov (June 4, 1946 – June 7, 2016) was a Soviet and Belarusian physicist, who was awarded the Doctor of Physical and Mathematical Sciences degree (1989). He was a member of the Belarusian Physical Society (1995). He carried out research at the Research Institute of Applied Physical Problems (APP) of the Belarusian State University (1972-1989 - Senior Researcher), the Belarusian State Technological University (1989-2003 - Professor), the Maxim Tank Belarusian State Pedagogical University (BSPU) (2003-2007 - Dean of the Faculty of Physics; 2007-2013 - Head of the Department of General Physics; 2013-2016 - Professor of Physics and Methods of Teaching Physics).
Tashlykov was an expert in the field of the surface modification of solids using ion-beam technologies. In solid-state electronics, he developed criteria for controlling the transformation of the structure of ion-implanted gallium arsenide and silicon that were based on calculations of the energy density released in the elastic processes of cascade collisions of atomic nuclei. He also developed methods for modifying solids by atomic mixing and ion-assisted deposition of coatings. Tashlykov developed the theory for, and carried out experimental studies in connection with, the ion-assisted deposition of thin films.
Tashlykov received an Award from the President of Belarus for outstanding contributions to the social and economic development of the Republic in 2012. He also founded a research group devoted to the study of the physical characteristics and properties of the surfaces of materials such as metals, semiconductors, and elastomers. He was the author of three monographs published in Belarus and in the United States.
Biography
Tashlykov was born in North Korea to Serafim Dmitrievich Tashlykov, a pilot, and Nina Semenovna Tashlykova, a nurse, whilst they were stationed there with the Soviet armed forces, before he moved to Babruysk in 1952. He moved to Minsk with his parents in 1962 and finished secondary school No. 27 in 1964 with a silver medal, before entering the Belarusian State University to study physics, graduating in 1969 with honors from the Physics Faculty with a specialization in solid-state physics. He then started full-time postgraduate study before transferring to correspondence postgraduate study in 1970, whilst working as a junior researcher in the laboratory of experimental physics and physical electronics of the BSU. In 1971 he transferred to the Elionics Laboratory within the Research Institute of Applied Physical Problems (APP) at BSU. He became a senior researcher in 1972 and was awarded the degree of Candidate of Sciences in 1973 based on his thesis Investigation of radiation effects in bismuth and its alloys. In 1976 he became a senior research fellow, before completing his doctoral thesis on the subject Modification and study of structure in crystals with different types of bonds (Si, GaAs and Ni) at Kharkov State University in 1989.
After receiving his doctoral thesis he worked in the Department of Physics at the Belarusian Institute of Technology, later renamed into the Belarusian Technological University, until 2003, becoming a professor in 1992. In 2003, at the invitation of the rector of the Maxim Tank Belarusian State Pedagogical University, Kukharchik Petr Dmitrievich, he went to work as the Dean of the Physical Faculty. He was Head of the Department of Experimental Physics from 2007 and Professor of the Department of Physics and Methods of Teaching Physics from 2013.
On June 7, 2016, he died of a stroke in Minsk. He was buried in the honorary lot No. 114 of the columbarium of the Eastern (Moscow) cemetery in Minsk.
Scientific activity
Tashlykov was primarily focused on the modification of the structure and properties of solids using ion-beam and ion-plasma methods and technologies. This included the study of physical and chemical processes occurring during the interaction of accelerated ions with elastomers as well as the formation of thin film (coating) / substrate systems.
Postgraduate studies
Tashlykov began his scientific work by studying radiation defects in bismuth and its alloys in 1969 under the guidance of Georgy Alexandrovich Gumansky and Valery Ivanovich Prokoshin. Due to a lack of radiation sources, samples were irradiated on a microtron at the Physical Institute of the Academy of Sciences (PIAS) of the USSR and on a U-120 cyclotron in Kiev at the Institute for Nuclear Research of the Academy of Sciences of the Ukrainian SSR. As a result of this successful work, Tashlykov was recommended for a one-year internship at the Department of Ionometry of the University of Jena in East Germany in 1971–1972, where he studied protonography and ionometry, later called Rutherford backscattering spectroscopy (RBS) in combination with ion channeling. It was later noted that Tashlykov was among the first Soviet scientists to carry out experiments, in 1971–1972, using ion channeling for layer-by-layer analysis of radiation damage in single crystals.
These results were the basis for the development of the method of calculating the concentration of radiation defects in irradiated single crystals, using the method of ionometry, taking into account the multiple scattering of channeled helium ions. On September 25, 1973, Tashlykov defended his Ph.D. thesis on "The study of radiation effects in bismuth and its alloys" in the BSU.
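As background to the RBS and channeling analyses that recur throughout this article, the textbook kinematic factor K relates the energy E1 of a backscattered ion to its incident energy E0; this is standard RBS theory rather than a result specific to Tashlykov's work:

```latex
% Standard RBS kinematics: M_1 is the probe-ion mass, M_2 the target-atom
% mass, and \theta the scattering angle.
\[
  K \;=\; \left( \frac{\sqrt{M_2^2 - M_1^2 \sin^2\theta} \;+\; M_1\cos\theta}
                      {M_1 + M_2} \right)^{\!2},
  \qquad E_1 = K E_0 .
\]
```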
Work at the Belarusian State University
At the Belarusian State University, Tashlykov studied radiation disturbances in bismuth and its alloys by nuclear-physical methods. In 1975–1979 he initiated the creation of a research complex of nuclear-physical methods based on the horizontal ESU-2 electrostatic accelerator at the Belarusian State University. This legendary accelerator, number 2, built shortly after World War II in the laboratories of the Kharkov Institute of Physics and Technology, was used in the 1960s and 1970s as an electron accelerator at the Lebedev Physical Institute. With the assistance of the director of the Research Institute of the APP, Academician of the Academy of Sciences of Belarus A.N. Sevchenko, the head of the semiconductor laboratory, Academician B. M. Vul, passed this accelerator to Belarus.
Organization of a new research center based on ESU-2, the reconstruction of the accelerator (the creation of a high-frequency ion source and its control systems), the logistic support (transmission of a target chamber equipped with a two-axis precision goniometer) of the elionics laboratory was greatly assisted by well-known specialists: L.I. Pivovar (KPTI), G.M. Osetinsky and I.A. Chepurchenko from the Joint Institute for Nuclear Research (Dubna), as well as Tashlykov's colleague and friend Hardwig Treff of the University of Jena.
From the beginning of the 1970s, research work was under way at the BSU under the guidance of G. A. Gumansky on tasks identified by the State Committee on Science and Technology of the Council of Ministers of the USSR, associated with the ion-beam synthesis of binary and ternary compounds for optoelectronic materials. This research led to active international cooperation, in particular research stays at the University of Jena in 1971–1972, 1973, 1975, and 1977.
In 1977, at the International Conference on Atomic Collisions in Solids (ICACS) at the Moscow State University, Tashlykov became acquainted with John A. Davies from Chalk River Nuclear Laboratories and George Carter of the University of Salford in Manchester.
In 1979-1980 John A. Davies invited Tashlykov to undertake a traineeship at McMaster University and the Chalk River Nuclear Laboratories in Canada, during which time he studied defect formation in gallium arsenide during implantation of nitrogen, aluminium, phosphorus, arsenic, and antimony ions under varied conditions. Using Rutherford backscattering (RBS), the depth distribution profiles of the implanted As and Sb were established, and aluminum profiles in gallium arsenide were obtained by nuclear reaction analysis. These results, as well as the results of studies of the modification of the catalytic properties of nickel electrodes for water-alkaline electrolysis for the production of hydrogen, formed the basis of his doctoral dissertation; additional information was obtained about the mechanisms of formation of radiation damage and phase reconstruction of the structure in gallium arsenide under ion irradiation, and about the influence on these processes of the ion current density, the energy density released in the cascade of atomic collisions, the type of ions, and the irradiation temperature.
The results of the studies in Canada of ion implantation into electrodes used in alkali electrolysis for the production of hydrogen and oxygen were reported by Tashlykov at the Kurchatov Institute. In the early 1980s financing was provided to develop technical regulations for ion implantation of metal catalysts and other ions which led to the generation of 5 copyright certificates for creating active and corrosion-resistant electrodes.
Tashlykov was appointed to the Coordination Group on the methods of fast nuclear analysis of the Council of the USSR Academy of Sciences (later the Russian Academy of Sciences) on the application of methods of nuclear physics in related fields, and to the Coordination Group on Electrostatic Accelerators in the Council of the Academy of Sciences of the USSR (Section of accelerators of direct action and sources of charged particles), headed by V. A. Romanov (Obninsk Institute of Physics and Power Engineering). At the invitation of Spartak Belyaev, he consulted on Rutherford backscattering spectroscopy and ion channeling in the Department of Nuclear Physics of the Institute of Atomic Energy in 1987–1988. The developed regulations for ion-beam synthesis and doping of semiconductor crystals were introduced at the enterprises of the Ministry of Electronics Industry in 1987 and the USSR radio industry in 1988. The regulations for the creation of catalytically active and corrosion-resistant anodes for water-alkali electrolysis by the implantation of Ag+ ions into Ni, the use of which provides for a 10% reduction in energy costs per unit of production, were transferred to the organizations of the Ministry of Chemical Industry and the USSR Academy of Sciences. The economic effect amounted to 180 thousand Soviet rubles. Invaluable assistance during this period of research using nuclear-physical methods was rendered by the head of the Accelerators Laboratory of the Research Institute of Nuclear Physics of Moscow State University, candidate of physical and mathematical sciences, Laureate of the State Prize of the USSR, V.S. Kulikauskas.
In 1982, he led the organization and conduct of the All-Union Seminar on Methods of Instant Nuclear Analysis at the A.N. Sevchenko Scientific Research Institute of the APP of BSU. In 1985 and 1987, he was the scientific secretary of the All-Union schools on ion implantation and emission of channeled ions conducted at the Belarusian State University. In June 1985, Tashlykov arranged an invitation for John A. Davies, who delivered lectures at the BSU for Belarusian scientists and microelectronics specialists. In Moscow, John A. Davies met with L. I. Ivanov from the Institute of Metallurgy named after A.A. Baikov, who was the leader of the USSR part of the Apollo–Soyuz Test Project. Later L.I. Ivanov, being the deputy editor of the scientific journal "Physics and Chemistry of Material Processing", invited Tashlykov to join the editorial board of this journal, where he worked until 2016.
Tashlykov was invited by George Carter to continue studying the mechanisms of defect formation in semiconductors and metals under ion irradiation at the University of Salford in 1984. At the same time, together with John S. Colligon, he conducted experiments to study the effect of the energy density released in the cascade of atomic collisions on the processes of ion-assisted deposition of coatings on materials. In 1985, during a UNESCO trip to West Germany, the studies begun in Belarus and the UK were continued in collaboration with scientists from Germany at Heidelberg University and the Max Planck Institute for Nuclear Physics.
The second half of the 1980s was devoted to organizational work on the acquisition, installation, and launch at the Research Institute of the APP of a new ESU-2.5 produced by the firm High Voltage Engineering, and to summarizing the scientific results, in co-authorship with F. F. Komarov and M. A. Kumakhov, in the monograph «Nerazrushajushhij analiz poverhnostej tverdyh tel ionnymi puchkami (Неразрушающий анализ поверхностей твердых тел ионными пучками)» (Minsk, University Press, 1987, 256 pp.), later translated and published in English in 1990 by the American publishing house Gordon and Breach as "Nondestructive Ion Beam Analysis of Surfaces". At the same time, a doctoral thesis was written on "Modifying and studying the structure by nuclear physical methods in crystals with different types of chemical bonds (Si, GaAs, Ni)", defended at Kharkov State University on March 17, 1989.
Work at the Belarusian State Technological University
After starting to teach at the BSTU in August 1989, Tashlykov began researching the physical processes in the ion-assisted coating of materials and products under self-ion irradiation. Radiation-induced mechanisms of damage and interpenetration of components in the coating/substrate phase-separation region were established and studied, and the results were developed into patents. The new technology was introduced at enterprises of the Republic of Belarus (Baranovichi Automobile Aggregate Plant, OJSC Belarusrezinotechnika, Babruysk), resulting in the creation of mold surfaces that are corrosion-resistant and inert to elastomer adhesion for the manufacturing of rubber products. Tashlykov continued his research collaborations with scientists from Heidelberg University, the University of Jena, the University of Salford, and the Max Planck Institute for Nuclear Physics, studying the modification of the surface of solids.
Work at Belarusian State Pedagogical University
Tashlykov was the Dean of the Faculty of Physics (2003-2007), Head of the Department of General Physics (2007-2013), and Professor of the Department of Physics and Methods of Teaching Physics (2013-2016) at the Maxim Tank Belarusian State Pedagogical University (BSPU). After accepting the post of dean in 2003, Tashlykov worked to encourage research and teaching in the physics of functional coatings. In 2004, he started a research laboratory to study the surface of solids, equipped with an atomic force microscope as well as a device for precise measurement of the wetting contact angle.
In the 2000s, Tashlykov collaborated with the research group of P. V. Zhukovsky at the Lublin University of Technology, which led to joint research that was reported at the international conferences "New Electrical and Electronic Technologies and their Industrial Implementations" and "Ion Implantation and other Applications of Ions and Electrons".
From 2012, Tashlykov worked on the creation of new types of cheap and environmentally friendly thin-film absorbing layers and current-carrying contacts for solar cells. His research showed that application of the "hot wall" technique for obtaining thin films of SnPbS-system compounds with different elemental compositions significantly changes the structural and morphological characteristics of the films and improves their physical properties. In particular, a model of the surface wettability of nanosized films by distilled water was developed, which serves as an effective method for studying the characteristics of surfaces and the processes occurring on them.
Main scientific results
Tashlykov was the first Soviet scientist who used the channeling effect to study radiation defects in ion-implanted crystals. Based on the solution of the analytical equation, he developed a method for constructing profiles of radiation defects in crystals, which takes into account de-channeling of analyzing ions not only on defects, but also on thermal vibrations of atoms and their electrons.
The fundamentals of the technology of ion-beam synthesis of layers of solid solutions based on single- and multi-component systems of materials of solid-state electronics: silicon, silicon carbide (SiC), and ternary compounds based on gallium arsenide (AlxGa1-xAs, GaAs1−xPx) have been developed.
Mechanisms of defect formation (homogeneous, heterogeneous) and structural-phase rearrangements in semiconductor crystals during ion implantation have been established.
The following criteria were experimentally determined: the density of the released energy in cascades of atomic collisions, the ion current density, the ion energy values, the temperature implantation and annealing modes affecting the concentration, depth distribution, type of radiation defects in the near-surface layers of ion implanted semiconductors.
Developed regulations for ion-beam synthesis and doping of semiconductor crystals.
The scientific and technical materials "Application of ion beams for the analysis of solids" have been developed and implemented.
The complex of studies on ion-beam modification of electrode materials (Ni, Ti, graphite) used for the electrolysis of alkaline, acidic and other solutions was carried out. The regulations for the creation by implantation of Ag+ ions in Ni of catalytically active and corrosion-resistant anodes for water-alkaline electrolysis of water were developed and handed over to the Ministry of Chemical Industry and the USSR Academy of Sciences organizations.
Fundamental results have been obtained on the mechanisms and dynamics of radiation defect formation in metal crystals during ion implantation, and a model for oxidizing the anode surface in aqueous alkaline electrolysis has been constructed.
A theory of ion-assisted protective coating of rubber has been developed, which takes into account the capture of auxiliary gas atoms, surface sputtering, and the densities of the ionized and deposited neutral fractions generated by the ion source.
The "Method of deposition of coatings" has been experimentally proved and patented.
Technological regimes for the formation of superhard nanosized films on silicon substrates for functional elements of nano- and microelectronics have been developed. The studied radiation-controlled processes of mass transfer during the deposition of thin films allow controlling adhesion at the atomic level, as well as modifying the surface properties of the absorbing layers and back contacts for solar cells.
A vacuum method for the production of chalcogenide semiconductors Pb1−xSnx (S, Te) has been developed, and regularities have been established for the variation of their electro-physical and optical properties and structural characteristics, depending on the composition and processing conditions of production, the practical applications of these materials in the field of optoelectronics as thin-film solar cells and photo converters, have been determined.
Teaching and training
Tashlykov also contributed to the development and training of younger scientists. In particular:
The monograph "Non-destructive Analysis of Solids Surfaces by Ion Beams" is used as a textbook at the BSU
The method of resonance nuclear reactions for layer analysis of light impurities was used in a laboratory workshop on the "Backscattering method and nuclear reactions in elemental analysis of substance" for the training of students at the Kharkiv State University, Moscow State University, and Rostov State University.
Students worked under Tashlykov at BPSU, both in research laboratories and on the Problem Council
Tashlykov was involved with councils for the Defence of Doctoral dissertations at both BSPU and BSU.
Awards and honors
In the 1980s, Tashlykov was a member of the Councils of the USSR Academy of Sciences groups on direct-action accelerators and on methods of rapid nuclear analysis. He became a member of the New York Academy of Sciences in 1994 and the Belarusian Physical Society in 1995.
Honorable Diplomas of the Research Institute of Applied Physical Problems of BSU / Почетные грамоты НИИ прикладных физических проблем БГУ (1975, 1977, 1978, 1980, 1981, 1984, 1987).
Honorable Diplomas of Belarusian state University / Почетные грамоты Белорусского государственного университета (1981, 1983).
Honorable Diplomas of the Ministry of Education of the Republic of Belarus / Почетные грамоты Министерства образования Республики Беларусь (1982, 2005, 2012).
Honorable Diplomas of the Belarusian State Pedagogical University named after M.Tank / Почетные грамоты Белорусского государственного педагогического университета им. М. Танка (2006, 2009, 2016).
Badge of the Ministry of Education "Excellence in Education" / Знак Министерства образования «Отличник образования» (2014).
Grant of President of the Republic of Belarus for the development of information technologies in technical creative work / Грант Президента Республики Беларусь на разработку информационных технологий в техническом творчестве (2015).
Diploma of the State Committee on science and technology of the Republic of Belarus / Почетная грамота Государственного комитета по науке и технологиям Республики Беларусь (2016).
Selected publications
Ф.Ф.Комаров, М.А.Кумахов, И.С.Ташлыков. "Неразрушающий анализ поверхностей твердых тел ионными пучками". Мн., изд-во Университетское (1987) 256.
F.F. Komarov, M.A. Kumakhov, I.S. Tashlykov. "Non-destructive ion beam analysis of surface". London: Gordon and Breach Science Publishers (1990) 231.
I.S. Tashlykov. "Backscattering measurements of P+ implanted GaAs crystals". Nucl. Instrum. Methods, 170 (1980) 403-406.
I.S. Tashlykov. "Disorder dependence of ion implanted GaAs on the type of ion". Nucl. Instrum. Methods. Phys. Res., 203 (1982) 523-526.
G. Carter, M.J. Nobes, I.S. Tashlykov. "The influence of dose rate and analysis of procedures on measured damage in P+ ion implanted GaAs". Rad. Effects Letters, 85 (1984) 37–43.
I.S. Tashlykov, O.A. Slesarenko. J.S. Colligon, H. Kheyrandish. "Effect of atomic mixing on the electrochemical and corrosion properties of Ni-Ti surfaces". Surfacing Journal International, 1 (1986) 106–107.
G. Carter, J. Colligon, I.S. Tashlykov. "Evaluation of Coatings Produced by Low-Energy Ion Assisted Deposition of Co on Silicon". Material Science Forum, 248-249 (1997) 357–360.
I.S. Tashlykov. "A model of oxide layer growth on Ag+ and Pt+ ion implanted nickel anode in aqueous alkaline solution". Thin Solid Films, 307 (1997) 106–109.
I.S. Tashlykov, V.I. Kasperovich, M.G. Shadrukhin, A.V. Kasperovich, G.K. Wolf, W. Wesch. "Elastomer treatment by arc metal deposition assisted with self-ion irradiation". Surf. Coat. Technol., 116-119 (1999) 848–852.
I.S.Tashlykov, O.G.Bobrovich. "Radiation damage of Si wafers modified by means of thin layer ion assisted deposition". Vacuum, 78 (2005) 337–340.
I.S.Tashlykov, P.V.Zukowski, S.M.Baraishuk, O.M.Mikhalkovich. "Analysis of the composition of Ti-based thin films deposited on silicon by means of self-ion assisted deposition". Radiat. Eff. Defects Solids, 162 (2007) 637–641.
I.S.Tashlykov, A.I.Turavetz, V.F.Gremenok, K.Bente, D.M.Unuchak. "Topography and water wettability of HWVD produced (Pb, Sn)S2 thin films for solar cells". Przeglad Elektrotechniczny, 7 (2010) 118–121.
I.S. Tashlykov, A.I. Turavets, V.F. Gremenok, P. Żhukowski. "Elemental composition, topography and wettability of PbxSn1−xS thin films". Acta Physica Polonica A, 125 (2014) 1339–1343.
I.Tashlykov, P. Zhukowski, O. Mikhalkovich, S. Baraishuk. "Surface properties of Me/Si structures prepared by means of self-ion assisted deposition". Acta Physica Polonica A, 125 (2014) 1306–1308.
П.В. Коваль, А.С. Опанасюк, А.І. Туровець, І.С. Ташликов, А.Г. Понамарьов, П. Жуковскі. "Структура і елементний склад плівок Pb1−xSnxS". Журнал нано- и электронной физики, Т. 7, No. 2 (2015) 02013-1-02013-7.
References
Further reading
"Igor Serafimovich Tashlykov has passed away"- on the official website of the faculty of physics and mathematics of BSPU (in Russian) / Ушел из жизни Игорь Серафимович Ташлыков на официальном сайте физико-математического факультета БГПУ
Consolidated electronic catalogue of libraries of Belarus [Electron. resource] (in Russian) / Сводный электронный каталог библиотек Беларуси [Электрон. ресурс]
Scientific and pedagogical schools of the BSPU / editorial board. : A. V. Torkhova [et al.]; edited by A. V. Torkhova.- Minsk: BSPU, 2015. - pp. 118-136. (in Russian) / Научно-педагогические школы БГПУ / редкол. : А. В. Торхова [и др.] ; под ред. А. В. Торховой. – Минск : БГПУ , 2015. – С. 118—136.
Tashlykov Igor Serafimovich // Republic of Belarus: Encyclopedia: [in 7 volumes]. - Minsk, 2008. - V. 7. - P. 189–190. (in Russian) / Ташлыков Игорь Серафимович // Республика Беларусь : энциклопедия : [в 7 т.]. — Минск, 2008. — Т. 7. — С. 189–190.
1946 births
2016 deaths
Belarusian physicists
Belarusian State University alumni
Condensed matter physicists | Igor Serafimovich Tashlykov | [
"Physics",
"Materials_science"
] | 6,229 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
58,957,288 | https://en.wikipedia.org/wiki/RSA%20Award%20for%20Excellence%20in%20Mathematics | The RSA Conference (RSAC) Award for Excellence in Mathematics is an annual award. It is announced at the annual RSA Conference in recognition of innovations and contributions in the field of cryptography. An award committee of experts, which is associated with the Cryptographer's Track committee at the RSA Conference (CT-RSA), nominates to the award persons who are pioneers in their field, and whose work has had applied or theoretical lasting value; the award is typically given for the lifetime achievements throughout the nominee's entire career. Nominees are often affiliated with universities or involved with research and development in the information technology industry. The award is cosponsored by the International Association for Cryptologic Research.
While the field of modern cryptography started to be an active research area in the 1970s, it has already contributed heavily to Information technology and has served as a critical component in advancing the world of computing: the Internet, Cellular networks, and Cloud computing, Information privacy, Privacy engineering, Anonymity, Storage security, and Information security, to mention just a few sectors and areas. Research in Cryptography as a scientific field involves the disciplines of Mathematics, Computer Science, and Engineering. The award, which started in 1998, is one of the few recognitions fully dedicated to acknowledging experts who have advanced the field of cryptography and its related areas (another such recognition is achieving the rank of an IACR Fellow).
The first recipient of the award in 1998 was Shafi Goldwasser. Also, many of the award winners have gotten other recognitions, such as other prestigious awards, and the rank of fellow in various professional societies, etc.
Research in Cryptography is broad and is dedicated to numerous areas. In fact, the award has, over the years, emphasized the methodological contributions to the field which involve mathematical research in various ways, and has recognized achievements in many of the following crucial areas of research:
Some areas are in the general Computational number theory and Computational algebra fields, or in the fields of Information theory and Computational complexity theory, where proper mathematical structures are constructed or investigated as underlying mathematics to be employed in the field of cryptography;
Some areas are theoretical in nature, where new notions for Cryptographic primitives are defined and their security is carefully formalized as foundations of the field, some work is influenced by Quantum computing as well;
Some areas are dedicated to designing new or improved primitives from concrete or abstract mathematical mechanisms for Symmetric-key cryptography, Public-key cryptography, and Cryptographic protocols (such as Zero-knowledge proofs, Secure multi-party computations, or Threshold cryptosystems);
Some other areas are dedicated to Cryptanalysis: the breaking of cryptographic systems and mechanisms;
Yet some other areas are dedicated to the actual practice of cryptography and its efficient cryptographic hardware and software implementations, to developing and deploying new actual protocols (such as the Transport Layer Security and IPsec) to be used by information technology applications and systems. Also included are research areas where principles and basic methods are developed for achieving security and privacy in computing and communication systems.
To further read on various aspects of cryptography, from history to areas of modern research, see Books on cryptography.
In addition to the Award for Excellence in Mathematics, which recognizes lifetime achievement in the specific area of cryptographic research, the RSA Conference has also presented a separate lifetime achievement award in the more general field of information security. Past recipients of this award from the field of cryptography include:
Taher Elgamal (2009),
Whitfield Diffie (2010),
Ronald Rivest, Adi Shamir, and Leonard Adleman (2011), and
Martin Hellman (2012)
Past recipients
See also
List of computer science awards
List of mathematics awards
Notes
Here are a few examples of videos from the award ceremonies and interviews with award winners; these give some more information about each specific award year, and demonstrate the breadth of research behind each such award:
2009 Interview with V. Miller
2009 Interview with N. Koblitz
2013 Award ceremony: J.-J. Quisquater and C. P. Schnorr:
2014 Award ceremony: B. Preneel:
2015 Award ceremony: I.B. Damgård and H. Krawczyk
2016 Award ceremony: U. Maurer
2019 Award ceremony video: T. Rabin, and the Cryptographer's panel.
2020 Award ceremony video: J. Daemen and V. Rijmen.
2021 Award ceremony video: D. Pointcheval
2023 Award ceremony video.
2024 Award ceremony video.
References
Mathematics awards
Computer science awards
Cryptography | RSA Award for Excellence in Mathematics | [
"Mathematics",
"Technology",
"Engineering"
] | 945 | [
"Cybersecurity engineering",
"Cryptography",
"Computer science awards",
"Applied mathematics",
"Computer science",
"Mathematics awards",
"Science and technology awards"
] |
50,346,067 | https://en.wikipedia.org/wiki/Effective%20fragment%20potential%20method | The effective fragment potential (EFP) method is a computational approach designed to describe intermolecular interactions and environmental effects. It is a computationally inexpensive means to describe interactions in non-bonded systems. It was originally formulated to describe solvent effects in complex chemical systems, but it has undergone vast improvements in the past two decades and is currently used to represent intermolecular interactions (with molecules treated as rigid fragments) and in molecular dynamics (MD) simulations as well.
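The EFP fragment-fragment interaction energy is commonly described as a sum of physically motivated terms; a hedged sketch of the decomposition follows (the charge-transfer term appears only in some formulations of the method):

```latex
% Commonly cited decomposition of the EFP fragment-fragment interaction
% energy; the charge-transfer term appears only in some formulations.
\[
  E_{\text{int}} \;=\; E_{\text{Coul}} + E_{\text{pol}} + E_{\text{disp}}
                     + E_{\text{exch-rep}} \;\bigl(+\, E_{\text{ct}}\bigr) .
\]
```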
References
Models of computation
Intermolecular forces | Effective fragment potential method | [
"Chemistry",
"Materials_science",
"Engineering"
] | 107 | [
"Molecular physics",
"Materials science",
"Intermolecular forces"
] |
50,348,837 | https://en.wikipedia.org/wiki/XJB-5-131 | XJB-5-131 is a synthetic antioxidant. In a mouse model of Huntington's disease, it has been shown to reduce oxidative damage to mitochondrial DNA, and to maintain mitochondrial DNA copy number. XJB-5-131 also strongly protects against ferroptosis, a form of iron-dependent regulated cell death.
References
Antioxidants
Synthetic biology
Huntington's disease | XJB-5-131 | [
"Engineering",
"Biology"
] | 86 | [
"Synthetic biology",
"Biological engineering",
"Molecular genetics",
"Bioinformatics"
] |
52,740,225 | https://en.wikipedia.org/wiki/Phosphorus%20trifluorodichloride | Phosphorus trifluorodichloride is a chemical compound with the chemical formula PF3Cl2. The covalent molecule has a trigonal bipyramidal molecular geometry. The central phosphorus atom has sp3d hybridization, and the molecule has an asymmetric charge distribution. It is a colorless gas with a disagreeable odor, and it condenses to a liquid at −8 °C.
Phosphorus trifluorodichloride is formed by mixing phosphorus trifluoride with chlorine:
PF3 + Cl2 → PF3Cl2
The P-F bond length is 1.546 Å in the equatorial position and 1.593 Å in the axial position, and the P-Cl bond length is 2.004 Å. The chlorine atoms occupy equatorial positions in the molecule.
References
Phosphorus(V) compounds
Fluorine compounds
Chlorine(−I) compounds
Gases | Phosphorus trifluorodichloride | [
"Physics",
"Chemistry"
] | 185 | [
"Statistical mechanics",
"Gases",
"Phases of matter",
"Matter"
] |
48,868,479 | https://en.wikipedia.org/wiki/Aeropause | Aeropause is the region in which the functional effects of the atmosphere on man and craft begin to cease.
Background
In the 1950s, there were discussions between the U.S. Air Force School of Aviation Medicine and the Lovelace Foundation regarding how to support the efforts to accomplish manned travel in the upper atmosphere. A research plan was developed that encompassed the aeromedical, aeronautical, astrophysical, and biological aspects deemed vital to manned travel in the upper atmosphere. In November 1951, a symposium was held at San Antonio, Texas, focused on forecasting the future research needed for manned flight approaching the upper limits of the atmosphere. At the symposium, the presentations and discussions centered on four major disciplines: astrophysics, aeronautical engineering, radiobiology, and aviation medicine.
The Aeropause
At the time when the term aeropause became relevant to mankind's pursuits into the upper regions of Earth's atmosphere, no precise definition of the word existed. The judgement of experts in the various fields was solicited and input was sought to define the term. The vocabularies of the aeromedical and aeronautical disciplines, along with those of geophysics, astrophysics, and radiobiology, were not sufficient to depict this region of the atmosphere where human endeavors sought to venture and explore. The terms available, and those that came under consideration, were aerosphere, aeropause, and astronautics.
According to Clayton S. White, the expression aeropause is a coined word first spoken by Dr. Konrad Buettner of the Department of Space Medicine at Randolph Field during a conference. Again, referring to White: "aerosphere was visualized as that region in which flight was possible currently. The aeropause was the region just above this, to be different yesterday, today, and tomorrow, as progress in aviation ensued".
Heinz Haber of the Department of Space Medicine at Randolph Field, an expert in space medicine, sought a more functional definition and said that the "aeropause should be defined as that region in which the functional effects of the atmosphere on man and craft begin to cease". The aeronautical engineer defined the aeropause as those areas of the atmosphere where the physiological necessities of the aircrew became the limiting factors for the design of aircraft and supporting equipment.
The study of the aeropause requires a blend of several disciplines from the biological sciences and the physical sciences. Important fields necessary for research that involves the aeropause include aviation medicine, geophysics, astronomy, astrophysics, aeronautical engineering, biophysics, and radiobiology.
References
Aerospace
Aviation medicine
Space medicine | Aeropause | [
"Physics"
] | 538 | [
"Spacetime",
"Space",
"Aerospace"
] |
39,794,191 | https://en.wikipedia.org/wiki/Valid%20Time%20Event%20Code | Valid Time Event Code (VTEC) is a code used by the National Weather Service, a part of the National Oceanic and Atmospheric Administration (NOAA) of the United States government, to identify products and events.
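A primary VTEC string is a fixed-field, dot-delimited code; the following minimal parsing sketch assumes the standard /k.aaa.cccc.pp.s.####.yymmddThhnnZ-yymmddThhnnZ/ layout, and the sample string is hypothetical, not a real issued product:

import re
from datetime import datetime, timezone

VTEC_RE = re.compile(
    r"/(?P<pclass>[OTEX])\."     # product class (e.g. O = operational)
    r"(?P<action>[A-Z]{3})\."    # action code (e.g. NEW, CON, CAN, EXP)
    r"(?P<office>[A-Z]{4})\."    # issuing office identifier
    r"(?P<phenom>[A-Z]{2})\."    # phenomenon (e.g. TO = tornado)
    r"(?P<sig>[WAYS])\."         # significance: warning, watch, advisory, statement
    r"(?P<etn>\d{4})\."          # event tracking number
    r"(?P<begin>\d{6}T\d{4}Z)-"  # event begin time (UTC)
    r"(?P<end>\d{6}T\d{4}Z)/"    # event end time (UTC)
)

def parse_vtec(s):
    m = VTEC_RE.search(s)
    if not m:
        raise ValueError(f"not a P-VTEC string: {s!r}")
    d = m.groupdict()
    for key in ("begin", "end"):
        d[key] = datetime.strptime(d[key], "%y%m%dT%H%MZ").replace(tzinfo=timezone.utc)
    return d

print(parse_vtec("/O.NEW.KDMX.TO.W.0042.250401T2130Z-250401T2200Z/"))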
References
Automation
Encodings
National Weather Service
Weather events | Valid Time Event Code | [
"Physics",
"Engineering"
] | 53 | [
"Physical phenomena",
"Weather",
"Automation",
"Control engineering",
"Weather events"
] |
39,794,575 | https://en.wikipedia.org/wiki/Toyota%20TTC | Toyota TTC (Toyota Total Clean System) is a moniker used in Japan to identify vehicles built with emission control technology. This technology was installed so that vehicles would comply with Japanese emission regulations passed in 1968. The term was introduced in Japan and included an externally mounted badge on the trunk of equipped vehicles. The technology first appeared in January 1975 on the Toyota Crown, Toyota Corona Mark II, Toyota Corona, Toyota Chaser, Toyota Carina, Toyota Corolla, and Toyota Sprinter. There were three different versions initially introduced: TTC-C for Catalyst (installing a catalytic converter), TTC-V for Vortex (installing an exhaust gas recirculation valve), and TTC-L for Lean Burn (using a lean burn method). As Toyota's technology evolved, the three systems were eventually used in conjunction in future models.
The TTC-V was a licensed copy of Honda's CVCC system and was introduced in February 1975. It was only available in the Carina and Corona lines, and only on the 19R engine, a modified 18R. From March 1976, the TTC-V system was upgraded to meet the stricter 1976 emissions standards. The TTC-V engine was discontinued in 1977. The "Vortex" approach was also used with Mitsubishi's MCA-Jet technology, with Mitsubishi installing an extra valve in the cylinder head, as opposed to Honda's pre-chamber approach.
Toyota installed its emission control technology in select Daihatsu vehicles, as Toyota was a part owner. The system was labeled "DECS" (Daihatsu Economical Cleanup System). The first version to be installed was the DECS-C (catalyst) in the Daihatsu Charmant and the Consorte. As the Japanese emissions regulations continued to be tightened, the DECS-C system was replaced by the DECS-L (lean burn) method, which was also installed in the Daihatsu Fellow, on the Daihatsu A-series engine, the Daihatsu Charade, and the Daihatsu Delta.
See also
Vehicle emissions control
References
Engine technology
Engines
Automotive technology tradenames | Toyota TTC | [
"Physics",
"Technology"
] | 436 | [
"Physical systems",
"Machines",
"Engine technology",
"Engines"
] |
39,795,012 | https://en.wikipedia.org/wiki/Monochrome%20rainbow | A monochrome or red rainbow is an optical and meteorological phenomenon and a rare variation of the more commonly seen multicolored rainbow. Its formation process is identical to that of a normal rainbow (namely the reflection/refraction of light in water droplets), the difference being that a monochrome rainbow requires the sun to be close to the horizon; i.e., near sunrise or sunset. The low angle of the sun results in a longer distance for its light to travel through the atmosphere, causing shorter wavelengths of light, such as blue, green and yellow, to be scattered and leaving primarily red.
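The wavelength dependence of this filtering can be illustrated with a toy Rayleigh-scattering estimate; a minimal sketch assuming single scattering, an optical depth scaling as the inverse fourth power of wavelength, and an illustrative (not measured) reference optical depth:

import numpy as np

TAU_550 = 0.1                                  # assumed vertical Rayleigh optical depth at 550 nm
wavelengths = np.array([450.0, 550.0, 650.0])  # blue, green, red (nm)

def transmittance(airmass):
    # Transmitted fraction T = exp(-tau), with tau ~ lambda^-4 scaled by path length.
    tau = TAU_550 * (550.0 / wavelengths) ** 4 * airmass
    return np.exp(-tau)

for airmass in (1.0, 38.0):                    # overhead sun vs. sun at the horizon
    t = transmittance(airmass)
    print(f"airmass {airmass:5.1f}: blue {t[0]:.4f}  green {t[1]:.4f}  red {t[2]:.4f}")

At the horizon the blue transmittance collapses while a useful fraction of the red light survives, which is the condition for the monochrome rainbow described above.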
In the lower light environment in which the phenomenon most often forms, a monochrome rainbow can produce a highly dramatic effect.
In July 1877, Silvanus P. Thompson witnessed a red and orange rainbow over Lake Lucerne in Switzerland, which "showed only red and orange colours in place of its usual array of hues. No fewer than five supernumerary arcs were visible at the inner edge of the primary bow, and these showed red only." A few more such reports followed.
In the background of Raphael's Madonna of Foligno, there is an orange monochrome rainbow.
References
Further reading
Atmospheric optical phenomena
Rainbow | Monochrome rainbow | [
"Physics"
] | 245 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
39,796,176 | https://en.wikipedia.org/wiki/CAMEO3D | Continuous Automated Model EvaluatiOn (CAMEO) is a community-wide project to continuously evaluate the accuracy and reliability of protein structure prediction servers in a fully automated manner. CAMEO is a continuous and fully automated complement to the biennial CASP experiment.
Currently, CAMEO evaluates predictions for predicted three-dimensional protein structures (3D), ligand binding site predictions in proteins (LB), and model quality estimation tools (QE).
Workflow
CAMEO performs blind assessment of protein structure prediction techniques based on the weekly releases of newly determined experimental structures by the Protein Data Bank (PDB). The amino acid sequences of soon-to-be-released protein structures are submitted to the participating web-servers. The web-servers return their predictions to CAMEO, and predictions received before the experimental structures have been released are included in the assessment of prediction accuracy. In contrast to the CASP experiment, the comparison between prediction and reference data is fully automated, and therefore requires numerical distance measures which are robust against relative domain movements.
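One family of superposition-free measures compares intra-molecular C-alpha distance matrices of model and reference, so that rigid domain movements do not dominate the score; the following is an illustrative toy score in that spirit, not CAMEO's actual lDDT implementation:

import numpy as np

def local_distance_score(model_ca, ref_ca, inclusion_radius=15.0, tol=1.0):
    # Superposition-free similarity of two structures given as [N,3] arrays of
    # C-alpha coordinates in the same residue order: for every residue pair
    # closer than inclusion_radius in the reference, count it as preserved when
    # the model reproduces that pair distance within tol angstroms.
    d_ref = np.linalg.norm(ref_ca[:, None] - ref_ca[None, :], axis=-1)
    d_mod = np.linalg.norm(model_ca[:, None] - model_ca[None, :], axis=-1)
    n = len(ref_ca)
    mask = (d_ref < inclusion_radius) & ~np.eye(n, dtype=bool)
    preserved = np.abs(d_ref - d_mod) < tol
    return (preserved & mask).sum() / mask.sum()

# Toy example: a straight 10-residue chain vs. the same chain with a rigid
# sideways shift of its second half, mimicking a domain movement.
ref = np.stack([np.arange(10) * 3.8, np.zeros(10), np.zeros(10)], axis=1)
model = ref.copy()
model[5:, 1] += 4.0
print(f"score = {local_distance_score(model, ref):.3f}")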
History
CAMEO was developed as part of the Protein Model Portal module of the Structural Biology Knowledge Base as part of the Protein Structure Initiative. CAMEO is being developed by the computational structural biology group at the SIB Swiss Institute of Bioinformatics and the Biozentrum, University of Basel.
Earlier projects with similar aims were EVA and LiveBench.
See also
Protein structure prediction software
References
External links
CAMEO home page
Protein Model Portal
Bioinformatics
Computational chemistry | CAMEO3D | [
"Chemistry",
"Engineering",
"Biology"
] | 290 | [
"Bioinformatics",
"Theoretical chemistry",
"Computational chemistry",
"Biological engineering"
] |
60,444,111 | https://en.wikipedia.org/wiki/Fracture%20of%20soft%20materials | The fracture of soft materials involves large deformations and crack blunting before propagation of the crack can occur. Consequently, the stress field close to the crack tip is significantly different from the traditional formulation encountered in linear elastic fracture mechanics. Therefore, fracture analysis for these applications requires special attention.
The Linear Elastic Fracture Mechanics (LEFM) and K-field (see Fracture Mechanics) are based on the assumption of infinitesimal deformation, and as a result are not suitable for describing the fracture of soft materials. However, the general LEFM approach can be applied to understand the basics of fracture in soft materials.
The solution for the deformation and crack stress field in soft materials considers large deformation and is derived from the finite strain elastostatics framework and hyperelastic material models.
Soft materials (soft matter) are a class of materials that includes, for example, soft biological tissues as well as synthetic elastomers, and that is very sensitive to thermal variations. Hence, soft materials can become highly deformed before crack propagation.
Hyperelastic material models
Hyperelastic material models are utilized to obtain the stress–strain relationship through a strain energy density function. Relevant models for deriving stress-strain relations for soft materials are: Mooney-Rivlin solid, Neo-Hookean, Exponentially hardening material and Gent hyperelastic models. On this page, the results will be primarily derived from the Neo-Hookean model.
Generalized neo-Hookean (GNH)
The neo-Hookean model is generalized to account for a hardening factor through the strain energy density

W = \frac{\mu}{2b}\left[\left(1 + \frac{b}{n}\left(I_1 - 3\right)\right)^n - 1\right],

where b > 0 and n > 1/2 are material parameters, \mu is the shear modulus, and I_1 is the first invariant of the Cauchy–Green deformation tensor:

I_1 = \lambda_1^2 + \lambda_2^2 + \lambda_3^2,

where \lambda_1, \lambda_2, \lambda_3 are the principal stretches.
Specific Neo-Hookean model
Setting n=1, the strain energy function of the specific neo-Hookean model is recovered:

W = \frac{\mu}{2}\left(I_1 - 3\right).
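For intuition, the incompressible neo-Hookean model yields a closed-form uniaxial response; a minimal sketch assuming incompressibility (\lambda_1 = \lambda, \lambda_2 = \lambda_3 = \lambda^{-1/2}) and an illustrative shear modulus:

import numpy as np

mu = 0.5e6                          # assumed shear modulus, Pa (order of soft elastomers)
stretch = np.linspace(1.0, 4.0, 7)  # uniaxial stretch values

# Uniaxial tension of an incompressible neo-Hookean solid:
cauchy = mu * (stretch**2 - 1.0 / stretch)  # true (Cauchy) stress
nominal = cauchy / stretch                  # first Piola-Kirchhoff (nominal) stress

for lam, s_true, s_nom in zip(stretch, cauchy, nominal):
    print(f"lambda = {lam:.2f}   true = {s_true/1e6:6.3f} MPa   nominal = {s_nom/1e6:6.3f} MPa")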
Finite strain crack tip solutions (under large deformation)
Since LEFM is no longer applicable, alternative methods are adapted to capture large deformations in the calculation of stress and deformation fields. In this context the method of asymptotic analysis is of relevance.
Method of asymptotic analysis
The method of asymptotic analysis consists of analyzing the crack-tip asymptotically to find a series expansion of the deformed coordinates capable to characterize the solution near the crack tip. The analysis is reducible to a nonlinear eigenvalue problem.
The problem is formulated based on a crack in an infinite solid, loaded at infinity with uniform uni-axial tension under condition of plane strain (see Fig.1). As the crack deforms and progresses, the coordinates in the current configuration are represented by and in cartesian basis and and in polar basis. The coordinates and are functions of the undeformed coordinates () and near the crack tip, as r→0, can be specified as:
where the exponents and the functions describing the angular variation are a priori unknown.
In order to obtain the eigenvalues, the equation above is substituted into the constitutive model, which yields the corresponding nominal stress components. Then, the stresses are substituted into the equilibrium equations (the same formulation as in LEFM theory) and the boundary conditions are applied. The most dominant terms are retained, resulting in an eigenvalue problem for the unknown exponents and angular functions.
Deformation and stress field in a plane strain crack
For the case of a homogeneous neo-Hookean solid (n=1) under Mode I condition the deformed coordinates for a plane strain configuration are given by
where a and a second amplitude are unknown positive constants that depend on the applied loading and specimen geometry.
The leading terms for the nominal stress (or first Piola–Kirchhoff stress, denoted by on this page) are:
Thus, some components of the nominal stress are bounded at the crack tip, while the remaining components share the same singularity.
The leading terms for the true stress (or Cauchy stress, denoted by on this page),
Only one true stress component is completely defined by a, and it presents the most severe singularity. With that, it is clear that the singularity differs depending on whether the stress is given in the current or the reference configuration. Additionally, in LEFM the true stress field under Mode I has a singularity of r^{-1/2}, which is weaker than the singularity found here.
While in LEFM the near-tip displacement field depends only on the Mode I stress intensity factor, it is shown here that for large deformations the displacement depends on two parameters (a and a second amplitude for a plane strain condition).
Deformation and stress field in a plane stress crack
The crack tip deformation field for a Mode I configuration in a homogeneous material neo-Hookean solid (n=1) is given by
where a and c are positive independent amplitudes determined by far field boundary conditions.
The dominant terms of the nominal stress are
And the true stress components are
Analogously, the displacement depends on two parameters (a and c for a plane stress condition), and one of the terms again exhibits the stronger singularity.
The distribution of the true stress in the deformed coordinates (as shown in Fig. 1B) can be relevant when analyzing crack propagation and the blunting phenomenon. Additionally, it is useful when verifying experimental results of the deformation of the crack.
J-integral
The J-integral represents the energy that flows to the crack; hence, it is used to calculate the energy release rate, G. Additionally, it can be used as a fracture criterion. This integral is found to be path independent as long as the material is elastic and no damage to the microstructure occurs.
Evaluating J on a circular path in the reference configuration yields
for plane strain Mode I, where a is the amplitude of the leading-order term and A and n are material parameters from the strain-energy function.
For plane stress Mode I in a neo-Hookean material, J is given by
where b and n are material parameters of GNH solids. For the specific case of a neo-Hookean model, where n=1 and b=1, the J-integrals for plane stress and plane strain in Mode I are the same:
J-integral in the pure-shear experiment
The J-integral can be determined by experiments. One common experiment is pure shear in an infinitely long strip, as shown in Fig. 2. The top and bottom edges are clamped by grips, and the loading is applied by pulling the grips vertically apart by ±∆. This setup generates a condition of plane stress.
Under these conditions, the J-integral is evaluated, therefore, as

J = W(\lambda)\, h_0,

where W(\lambda) is the strain energy density at the applied stretch \lambda and h_0 is the height of the strip in the undeformed state. The function W(\lambda) is determined by measuring the nominal stress s(\lambda) acting on the strip stretched by \lambda:

W(\lambda) = \int_1^{\lambda} s(\lambda')\, \mathrm{d}\lambda'.
Therefore, from the imposed displacement of each grip, ±∆, it is possible to determine the J-integral for the corresponding nominal stress. With the J-integral, the amplitude (parameter a) of some true stress components can be found. The amplitudes of other stress components, however, depend on additional parameters such as c (e.g., under plane stress conditions) and cannot be determined by the pure-shear experiment. Nevertheless, the pure-shear experiment is very important because it allows the characterization of the fracture toughness of soft materials.
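A minimal data-reduction sketch for this experiment, assuming the pure-shear relation J = W(\lambda) h_0 above; the "measured" stress curve below is synthetic (a neo-Hookean pure-shear response), used only to stand in for experimental data:

import numpy as np

h0 = 10e-3                                     # undeformed strip height, m (assumed)
stretch = np.linspace(1.0, 2.0, 101)           # imposed stretch history
mu = 0.5e6                                     # Pa; used only to fabricate the "measurement"
nominal_stress = mu * (stretch - stretch**-3)  # synthetic pure-shear response

# W(lambda) = integral of s dlambda from 1 to lambda (cumulative trapezoid rule)
W = np.concatenate(([0.0], np.cumsum(
    0.5 * (nominal_stress[1:] + nominal_stress[:-1]) * np.diff(stretch))))
J = W * h0                                     # energy release rate, J/m^2

print(f"J at lambda = 1.5 : {J[50]:8.1f} J/m^2")
print(f"J at lambda = 2.0 : {J[-1]:8.1f} J/m^2")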
Interface cracks
To address the adhesive interaction between soft adhesives and rigid substrates, the asymptotic solution for an interface crack problem between a GNH material and a rigid substrate is specified. The interface crack configuration considered here is shown in Fig. 3, where lateral slip is disregarded.
For the special neo-Hookean case with n=1, the solution for the deformed coordinates is
which is equivalent to
According to the above equation, the crack on this type of interface is found to open with a parabolic shape. This is confirmed by plotting the normalized coordinates against one another for different ratios (see Fig. 4).
For the analysis of the interface between two GNH sheets with the same hardening characteristics, refer to the model described by Geubelle and Knauss.
See also
Fracture mechanics
Soft matter
J-integral
Neo-Hookean solid
Gent (hyperelastic model)
Mooney–Rivlin solid
Fracture of Biological Materials
References
Soft matter
Fracture mechanics | Fracture of soft materials | [
"Physics",
"Materials_science",
"Engineering"
] | 1,673 | [
"Structural engineering",
"Fracture mechanics",
"Soft matter",
"Materials science",
"Condensed matter physics",
"Materials degradation"
] |
60,446,213 | https://en.wikipedia.org/wiki/Composite%20glass | Composite glass is the collective term for a laminate having at least two glass panes, each connected by an adhesive intermediate layer of plastic (e.g., a casting resin or a thermoplastic composite film) that is highly tear-resistant and viscoelastic. Composite glass should not be confused with composite windows.
Applications
Windscreens of all kinds of vehicles as well as crash-proof glazing or pavement light used in the construction sector are part of the main fields of application. The composite film used mostly in the construction and automotive sectors is composed of polyvinyl butyral (PVB). Other customary intermediate layer materials include ethylene-vinyl acetate (EVA), polyacrylate (PA), poly(methyl methacrylate) (PMMA), polyurethane (PUR), etc.
Depending on the number, type and thickness of the glass panes used and intermediate layers, composite glasses are used as safety glass, sound-proof glass, fireproof glass, as well as throw-through-resistant, breakthrough-resistant or ballistic-resistant glass etc. Glazing which is particularly resistant is produced by means of a combination of glass panes having one or a plurality of panes made from polycarbonate. Smart glasses are also often manufactured as composite glass.
Since 2006, following the latest research results, PVB, EVA or TPU films enclosing the LED and SMD electronics mentioned above have been laminated between glass panes, making possible products such as luminous glass stairways and tables as well as other composite safety glass systems.
Recently, scientists in Queensland, Australia have developed a composite glass that could give phones an 'unbreakable' screen. This breakthrough could expand the understanding and applications of composite glass.
Examples
The so-called "pummel test" inter alia is used to control the quality of composite glass.
See also
Laminated glass
Reference list
Australian Broadcasting Corporation (ABC) News, "Composite glass breakthrough by Queensland researchers could help make phone screens 'unbreakable'"
Glass | Composite glass | [
"Physics",
"Chemistry"
] | 445 | [
"Homogeneous chemical mixtures",
"Amorphous solids",
"Unsolved problems in physics",
"Glass"
] |
60,447,403 | https://en.wikipedia.org/wiki/Shake%20it%20Up%20Australia%20Foundation | The Shake It Up Australia Foundation (SIUAF) is an Australian not-for-profit foundation founded in 2011 by Clyde and Greg Campbell. It is partnered with the Michael J. Fox Foundation (MJFF) to achieve the foundation's primary aim of "promoting and funding Parkinson's disease research in Australia to slow, stop and cure the disease". Together, MJFF and SIUAF are the largest non-government funders of Parkinson's research across multiple institutes in Australia. Since its founding, the foundation has co-funded 38 Parkinson's research projects across 12 institutes to the value of over $10.8 million. The foundation's funding model ensures that 100% of proceeds go towards Parkinson's research in Australia. This is possible because the founding directors cover all overhead costs and expenses. In January 2019, Shake It Up was one of the partner organisations in the Australian Parkinson's Mission, which was awarded a $30 million grant to test repurposed drugs in clinical trials.
Parkinson's disease
Parkinson's disease is a common neurological disease affecting 10 million people worldwide, with 38 Australians diagnosed each day. The disease is characterised by the loss of dopamine-producing cells accompanied by chronic inflammation within the brain. It is caused by the loss of cells in various areas of the brain, most significantly within the substantia nigra. In patients with Parkinson's, the loss of dopamine-producing neurons causes high levels of neuroinflammation, mostly affecting the substantia nigra. When dopamine levels are depleted, the motor system nerves are unable to control movement and coordination. Over time, the dopamine-producing cells are lost and motor symptoms occur. People often live with Parkinson's for a substantial time before diagnosis, by which point 80% of the dopamine-producing cells are already lost. Currently there is no definitive test to diagnose Parkinson's, but clinical trials are showing promising results in identifying biomarkers that could detect the disease with a simple blood test at an earlier stage.
Signs and symptoms
The most recognisable symptoms are "motor related" however non-motor symptoms can consist of autonomic dysfunction, sleep difficulties, sensory such as altered sense of smell and neuropsychiatric problems (mood, cognition, behaviour or thought alterations).
Four motor symptoms are considered the primary signs on which a diagnosis of Parkinson's is made: tremor, slowness of movement (bradykinesia), postural instability, and rigidity. The most common presenting symptom of Parkinson's is a slow, coarse tremor which disappears during voluntary movement and in the deeper stages of sleep. It typically appears in one hand and affects both hands as the disease progresses.
About Clyde
Originally from northern New South Wales, Clyde lives in northern Sydney with his wife Carolyn and three kids, Josh (20), Zoe (18) and Phoebe (14). Clyde started out as an electrical apprentice in NSW, which led to him forming his own company, Machinery Automation and Robotics, in 1987. This company grew to 70 staff servicing Australian and international clients with high-tech automation and robotic solutions. He had a vision for technology as a solution to problems and a goal of making a difference to individuals' lives.
In 2009, Clyde was diagnosed with Parkinson's disease after noticing a tremor in his left hand when holding notes at a company meeting. He faced the hardship of coming to terms with having an incurable disease that deteriorates progressively over time. After coming to terms with his prognosis, Clyde set out to learn as much as he could about Parkinson's disease and what was being done in the worldwide effort to find a cure. This cemented his goal of finding a cure for Parkinson's.
He found that, despite the multitude of foundations supporting Parkinson's research and Australia having some of the world's leading scientists specialising in Parkinson's research, they required financial assistance to accelerate the research on a global scale. This led to the launch of the Shake It Up Australia Foundation to increase awareness in Australia and increase funding to Australian institutions to find a cure. His vision in establishing the foundation was to "create an opportunity for everyone in Australia to make a difference".
Funding model
Shake It Up's funding model revolves around 100% of proceeds going towards funding research to cure Parkinson's disease, with the founders Clyde and Greg Campbell covering all overhead costs and expenses of the foundation. The decision reflected an awareness of the limited resources available in a competitive market and the need for funds allocated to the not-for-profit and social sector to be managed efficiently to accelerate the path to a cure. The foundation has directed over $10.8 million to Australian institutions for research.
Government funding
In January 2019, the federal government awarded a $36.8 million grant for Parkinson's medical research, through the Australian Parkinson's Mission, and for Parkinson's nurses, to help improve the lives of those diagnosed and find a way to slow, stop and cure the debilitating disease. This was achieved through the Australian Parkinson's Mission initiative, an international collaboration developed by the Garvan Institute of Medical Research, the Shake It Up Australia Foundation, Parkinson's Australia, The Cure Parkinson's Trust and The Michael J. Fox Foundation for Parkinson's Research. It is an innovative research project combining clinical trials with genomics research for people with Parkinson's.
$30 million of the grant will be used to test repurposed drugs in a world-first Parkinson's clinical trial designed by the Australian Parkinson's Mission, to identify effective, safe treatments and fast-track them to patients. Associate Professor Antony Cooper of the Garvan Institute of Medical Research will lead the research program. The mission will sequence the genome of each patient to identify unknown causes, discover biomarkers and determine whether there are Parkinson's subtypes that can be targeted with specific drugs. The head of the Neurodegeneration and Neurogenomics Program at the Garvan Institute said that the mission would be the "first step towards personalised medicine for Parkinson's patients" and further drug discovery. Its first clinical trial will randomly assign 300 patients to receive medications approved for existing conditions.
The remaining $6.8 million was allocated to Primary Health Networks over four years from 2019, to ease access to specialist nurses in the community for people with movement disorders, including Parkinson's disease.
Partnership with The Michael J. Fox Foundation
Clyde's goal of finding a cure for Parkinson's led him to the Michael J. Fox Foundation for Parkinson's Research in the USA and its work funding medical research targeted at finding better treatments and a cure. In 2011, the Shake It Up Australia Foundation established a collaboration with the Michael J. Fox Foundation, extending MJFF's Parkinson's drug development efforts to Australia.
The Michael J. Fox Foundation was founded in 2000 by actor Michael J. Fox and has funded more than $800 million in research to date. MJFF's approach of assessing, funding and project-managing research globally eliminates redundancy, ensures efficiency and unites the global community in finding a cure. This collaboration builds a strong foundation for promising research. The partnership allows the Shake It Up Foundation to be internationally competitive and strategically managed, and allows both parties to maximise capital raised from the Australian Parkinson's community to accelerate better research and treatments to slow, stop and cure the disease.
Research projects
SIUAF and MJFF fund strategic, non-redundant and internationally competitive research projects. They have funded 38 research projects within 12 different institutions since starting out in 2011. All research projects are assessed and validated by a panel of expert scientists at the Michael J. Fox Foundation to eliminate redundancy globally. Once the projects are approved, they are monitored and benchmarked by a team of PhDs and business-trained project managers. Listed below are research projects, funded or currently being funded by the Shake It Up Australia Foundation, which have shown positive outcomes towards slowing, stopping and curing Parkinson's disease:
Biomarker's research at La Trobe University
Shake It Up committed funding to Professor Andrew Hill and Dr Lesley Cheng at La Trobe University to test the power of extracellular vesicles, or cell particles, to detect Parkinson's via a simple blood test. The biological clue the test looks for relates to the blood cells' 'appetite' for oxygen, as patients with Parkinson's disease have white blood cells which consume oxygen four times faster than normal cells. The researchers are assessing whether this biomarker can be collected to determine a patient's neurological status. They used blood samples from The Australian Parkinson's Disease Registry to determine whether biomarkers could be detected both in early-onset patients and in advanced patients who had taken medication for upwards of 5 years. Human trials of this test, which picks up on a key biological biomarker found in the blood of Parkinson's patients, delivered a 95% accuracy rate.
La Trobe's development of the world-first diagnostic blood test has been labelled by Parkinson's patients and scientists as a "medical breakthrough" in the SMH. Professor Fisher stated that the test's early diagnosis and treatment abilities will provide "better outcomes and a greater quality of life for people with the condition". A definitive diagnostic test would enable early intervention, limiting the number of brain cells and the dopamine levels lost before diagnosis. La Trobe's research team estimates that the diagnostic blood test could be available within the next five years if additional funding is provided to speed up the development of the trial.
Inflammation study at the University of Queensland
The inflammasome study conducted at the University of Queensland produced a promising new therapy to stop Parkinson's disease progressing by counteracting brain inflammation caused by immune cells. The study primarily targeted ways to counteract immune cells, specifically the microglia, which are highly activated in patients with Parkinson's disease by the NLRP3 inflammasome. The NLRP3 inflammasome reacts to synuclein-containing protein clumps by increasing inflammatory signals, which contributes to the depletion of dopamine-producing cells within the brain, while recruiting more pathological proteins to create an intensifying cycle of neuroinflammation and synuclein buildup. The study found that a small molecule, MCC950, administered orally once a day, stopped the development of Parkinson's disease in several animal models. It was found to block NLRP3 activation within the brain and to prevent both the loss of dopamine-producing brain cells and neuroinflammation, resulting in significantly improved motor function, the leading symptom of Parkinson's disease. UQ bioscience researcher Professor Matt Cooper stated that the "MCC950 molecule effectively 'cooled the brains on fire', turning down microglial inflammatory activity, and allowing neurons to function normally."
Currently, there are no medications on the market that prevent brain cell loss in Parkinson's patients; current therapies focus on managing symptoms rather than halting the progression of the degenerative disease. The promising results from this study validate a new target for Parkinson's researchers and support the development of a potential drug to stop the progression of the disease. The drug is now being commercialised for clinical trials by Inflazome, a pioneering biotech company that develops small-molecule drugs that inhibit harmful inflammation within the brain. With the success of the extensive research studies undertaken to ensure the drug is safe, tolerable and efficacious, human trials of the drug are now attainable. Phase 1 of the clinical trials on healthy volunteers was expected to start in 2019, with Phase 2 trials on Parkinson's patients predicted to take place in 2020, dependent on the results of Phase 1.
References
External links
Health charities in Australia
Foundations based in Australia
Medical and health foundations
Biomedical research foundations
Parkinson's disease organizations | Shake it Up Australia Foundation | [
"Engineering",
"Biology"
] | 2,452 | [
"Biotechnology organizations",
"Biomedical research foundations"
] |
55,631,153 | https://en.wikipedia.org/wiki/Tensor%20representation | In mathematics, the tensor representations of the general linear group are those that are obtained by taking finitely many tensor products of the fundamental representation and its dual. The irreducible factors of such a representation are also called tensor representations, and can be obtained by applying Schur functors (associated to Young tableaux). These coincide with the rational representations of the general linear group.
More generally, a matrix group is any subgroup of the general linear group. A tensor representation of a matrix group is any representation that is contained in a tensor representation of the general linear group. For example, the orthogonal group O(n) admits a tensor representation on the space of all trace-free symmetric tensors of order two. For orthogonal groups, the tensor representations are contrasted with the spin representations.
The classical groups, like the symplectic group, have the property that all finite-dimensional representations are tensor representations (by Weyl's construction), while other representations (like the metaplectic representation) exist in infinite dimensions.
References
, chapters 9 and 10.
Bargmann, V., & Todorov, I. T. (1977). Spaces of analytic functions on a complex cone as carriers for the symmetric tensor representations of SO(n). Journal of Mathematical Physics, 18(6), 1141–1148.
Tensors | Tensor representation | [
"Engineering"
] | 273 | [
"Tensors"
] |
55,631,942 | https://en.wikipedia.org/wiki/Tantalcarbide | Tantalcarbide is a rare mineral of tantalum carbide with formula TaC. With a molecular weight of 192.96 g/mol, its primary constituents are tantalum (93.78%) and carbon (6.22%), and it has an isometric crystal system. It generally exhibits a bronze or brown to yellow color. On the Mohs hardness scale it registers as a 6–7. Tantalcarbide is generally found in a granular state. It is extremely dense at 14.6 g/cm³. Sub-conchoidal fracturing is exhibited.
Specimens are extremely rare in nature. It is the only known mineral to exhibit the composition of TaC.
Tantalum carbide powder is used for many real-world applications. Generally, however, it is not produced from the mineral tantalcarbide due to its rarity; instead it is prepared by other means.
Natural occurrence
Tantalcarbide in its natural state is extremely rare. Most specimens have been found in the middle Urals or mines in Italy. The first documented specimen was discovered in the Nizhnetagilsky District in the Middle Urals, by P. Walther in 1909.
Other locations have been documented in Western Australia and in Craveggia, Italy.
Etymology
The name tantalcarbide is quite clearly a reference to its primary constituents, tantalum and carbon. However, it was originally thought to be native tantalum in 1909. It was renamed tantalum carbide in 1926, then renamed tantalcarbide in 1966.
Properties
Tantalcarbide is hexoctahedral, with a space group of Fm3m. It is generally found as granular or tabular crystals, and quite often mixed with other sands.
It has an extremely high melting point of around 3800 °C, although actual testing of this has not been documented. Tantalcarbide is isostructural with niobocarbide, which is composed of niobium and carbon, and is coincidentally found in the same localities.
Use and applications
Tantalcarbide is found in quantities too small for it to be used commercially. However, tantalum carbide powders are used for tools or cermets.
References
Carbides
Carbide minerals
Native element minerals
Refractory materials
Superhard materials
Tantalum compounds | Tantalcarbide | [
"Physics"
] | 498 | [
"Refractory materials",
"Materials",
"Superhard materials",
"Matter"
] |
55,634,671 | https://en.wikipedia.org/wiki/Multilevel%20Monte%20Carlo%20method | Multilevel Monte Carlo (MLMC) methods in numerical analysis are algorithms for computing expectations that arise in stochastic simulations. Just as Monte Carlo methods, they rely on repeated random sampling, but these samples are taken on different levels of accuracy. MLMC methods can greatly reduce the computational cost of standard Monte Carlo methods by taking most samples with a low accuracy and corresponding low cost, and only very few samples are taken at high accuracy and corresponding high cost.
Goal
The goal of a multilevel Monte Carlo method is to approximate the expected value E[P] of the random variable P that is the output of a stochastic simulation. Suppose this random variable cannot be simulated exactly, but there is a sequence of approximations P_0, P_1, \ldots, P_L with increasing accuracy, but also increasing cost, that converges to P as L \to \infty. The basis of the multilevel method is the telescoping sum identity

E[P_L] = E[P_0] + \sum_{\ell=1}^{L} E[P_\ell - P_{\ell-1}],

that is trivially satisfied because of the linearity of the expectation operator. Each of the expectations E[P_\ell - P_{\ell-1}] is then approximated by a Monte Carlo method, resulting in the multilevel Monte Carlo method. Note that taking a sample of the difference P_\ell - P_{\ell-1} at level \ell requires a simulation of both P_\ell and P_{\ell-1}.
The MLMC method works if the variances V_\ell = V[P_\ell - P_{\ell-1}] \to 0 as \ell \to \infty, which will be the case if both P_\ell and P_{\ell-1} approximate the same random variable P. By the Central Limit Theorem, this implies that one needs fewer and fewer samples to accurately approximate the expectation of the difference P_\ell - P_{\ell-1} as \ell \to \infty. Hence, most samples will be taken on level 0, where samples are cheap, and only very few samples will be required at the finest level L. In this sense, MLMC can be considered as a recursive control variate strategy.
Applications
The first application of MLMC is attributed to Mike Giles, in the context of stochastic differential equations (SDEs) for option pricing; however, earlier traces are found in the work of Heinrich in the context of parametric integration. Here, the random variable P is known as the payoff function, and the elements of the sequence of approximations P_\ell use approximations to the sample path with a time step h_\ell that decreases with the level (e.g., h_\ell = T\,2^{-\ell}).
The application of MLMC to problems in uncertainty quantification (UQ) is an active area of research. An important prototypical example of these problems are partial differential equations (PDEs) with random coefficients. In this context, the random variable is known as the quantity of interest, and the sequence of approximations corresponds to a discretization of the PDE with different mesh sizes.
An algorithm for MLMC simulation
A simple level-adaptive algorithm for MLMC simulation is given below in pseudo-code.
repeat
    Take warm-up samples on all levels \ell = 0, \ldots, L
    Compute the sample variance V_\ell on all levels \ell = 0, \ldots, L
    Define the optimal number of samples N_\ell on all levels \ell = 0, \ldots, L
    Take additional samples on each level according to N_\ell
    if L \geq 2 then
        Test for convergence
    end
    if not converged then
        L \leftarrow L + 1
    end
until converged
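A compact, illustrative implementation of the multilevel idea for a geometric Brownian motion discretized by Euler–Maruyama, pricing a European call (the payoff, parameters, and tolerance are assumptions chosen for the example, not canonical values):

import numpy as np

rng = np.random.default_rng(0)
S0, r, sigma, T, K = 100.0, 0.05, 0.2, 1.0, 100.0  # assumed model parameters

def payoff(s):
    return np.exp(-r * T) * np.maximum(s - K, 0.0)  # discounted European call

def level_sampler(l, n):
    # n samples of P_l - P_{l-1} (or P_0 at l = 0), with the SAME Brownian
    # increments driving fine and coarse paths: this coupling makes V_l decay.
    nf = 2**l                                   # fine steps; coarse path has nf // 2
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(n, nf))
    Sf = np.full(n, S0)
    for i in range(nf):                         # fine Euler-Maruyama path
        Sf = Sf * (1.0 + r * hf + sigma * dW[:, i])
    if l == 0:
        return payoff(Sf)
    Sc = np.full(n, S0)
    hc = 2.0 * hf
    for i in range(nf // 2):                    # coarse path with summed increments
        Sc = Sc * (1.0 + r * hc + sigma * (dW[:, 2 * i] + dW[:, 2 * i + 1]))
    return payoff(Sf) - payoff(Sc)

L, n_warmup, eps = 5, 1000, 0.05
warmup = [level_sampler(l, n_warmup) for l in range(L + 1)]
var = np.array([w.var() for w in warmup])
cost = np.array([2.0**l for l in range(L + 1)])
# Optimal allocation N_l ~ sqrt(V_l / C_l), scaled so the total variance is eps^2 / 2:
N = np.ceil(2.0 / eps**2 * np.sqrt(var / cost) * np.sum(np.sqrt(var * cost))).astype(int)
estimate = sum(level_sampler(l, max(int(N[l]), n_warmup)).mean() for l in range(L + 1))
print(f"MLMC estimate: {estimate:.3f}")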
Extensions of MLMC
Recent extensions of the multilevel Monte Carlo method include multi-index Monte Carlo, where more than one direction of refinement is considered, and the combination of MLMC with the Quasi-Monte Carlo method.
See also
Monte Carlo method
Monte Carlo methods in finance
Quasi-Monte Carlo methods in finance
Uncertainty quantification
Partial differential equations with random coefficients
References
Monte Carlo methods
Numerical analysis
Sampling techniques
Stochastic simulation
Randomized algorithms
Articles with example pseudocode | Multilevel Monte Carlo method | [
"Physics",
"Mathematics"
] | 652 | [
"Monte Carlo methods",
"Computational mathematics",
"Computational physics",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
56,997,080 | https://en.wikipedia.org/wiki/LISAA%20School%20of%20Art%20%26%20Design | LISAA School of Art & Design, (L'Institut Supérieur des Arts Appliqués) is a French private college for applied arts education founded in 1986. LISAA has locations in Paris, Rennes, Nantes, Strasbourg, Bordeaux and Toulouse. The school is one of about 100 recognized by the French Ministry of Culture and Communication. Diplomas are offered in graphic design, animation & video games, interior architecture & design, and fashion.
History
In 1986, Michel Glize, architect and entrepreneur, founded LISAA.
In 2012, the school was sold to Galileo Global Education.
Teaching
The main concentrations in the academic curriculum are graphic design, animation & video games, interior architecture & design, and fashion. Many programs last five years (bac+5), while some are shorter (bachelors, BTS, MANAA).
Foundation year
Two one-year foundation courses are offered at LISAA:
Introductory course in applied arts
Foundation year in Architecture (preparatory class for schools of architecture)
Fashion courses
Master and bachelor diplomas exist for the fashion industry:
Master of Interior Decoration
Master of Journalist, blogger, influencer fashion/beauty & luxury
Master of Fashion Design & Business
Two or three year course of Artistic make-up artist
Bachelor (BTS) of Fashion Design / Pattern Making
Bachelor (BTS) of Fashion Design / Textiles, materials, surfaces
Bachelor (BTS) of Fashion Design / Textile Design
Master of Fashion & Luxury Management
Interior architecture and design courses
Master, bachelor diplomas are offered in the interior architecture and design field:
Master of Interior Decoration
Postgraduate program of Interior Architecture & Design (5 years)
Foundation year in Architecture (preparatory class for schools of architecture in one year)
Bachelor (BTS) of Interior Architecture
Master of Interior Architecture & Connected Design
Master of Interior Architecture & Service Design
Master of Interior Architecture & Global Design
Animation and video games
Bachelor, master diplomas are offered in the animation and video game field:
MBA Video Game Production (1 year)
Master Supervisor & Director Animation & Special Effects (2-year programme)
Master of Video Game Creative Director
Bachelor of 2-D/3-D Animation
Bachelor of 2-D Animation
Bachelor of 3-D Animation
Bachelor of 2-D/3-D Video Games
Bachelor of Visual Effects
Graphic and motion design
Bachelor, master diplomas are offered in the graphic and motion design field:
Bachelor (BTS) of Graphic Design / Print
Bachelor (BTS) of Graphic Design / Digital Media
Bachelor (BTS) of Graphic Design
Bachelor Motion Design
Bachelor Graphic Design
Master Digital Art Direction / Animated media
Master Digital Art Direction / UX Design
Master Digital Art Direction
Master Art Direction for Creative & Cultural Industries
Partnerships
Academic partnerships
LISAA has academic partnerships with schools in Europe or on the American continent:
Helmo (Liège / Belgium)
Vilnius College of Design (Vilnius / Lithuania)
Thomas More Mechelen (Malines / Belgium)
IED (Milan / Italy)
IED (Madrid / Spain)
IADE Instituto Superior de Design (Lisbon / Portugal)
VIA (The Netherlands)
KISD (Cologne / Germany)
Hochschule (Trier / Germany)
Manchester Metropolitan University (United Kingdom)
Abadir Academy (Italy)
University of Applied Sciences MACROMEDIA
Leeds College of Arts (United Kingdom)
ARTCOM (Morocco)
Nazareth College, Rochester, New York (USA)
Universidad IEU - (Mexico)
Rankings
The French student magazine "l'Étudiant" and "Le Figaro Etudiant" regularly rank LISAA in the top schools in France in various fields of applied arts.
International
LISAA School of Art & Design is a member of CUMULUS, the international association of art and design universities; for the design course, student exchanges are made with other member schools. Ten percent of the cohort is of foreign origin.
Ownership
The school was bought by the investment fund Galileo Global Education in 2012.
References
External links
Private universities and colleges in France
Art schools in France
Art schools in Paris
Design schools in France
Architecture schools in France
Industrial design
Education in Paris
Education in Île-de-France | LISAA School of Art & Design | [
"Engineering"
] | 819 | [
"Industrial design",
"Design engineering",
"Design"
] |
57,000,496 | https://en.wikipedia.org/wiki/EIDD-1723 | EIDD-1723, also known as EPRX-01723 or as progesterone 20E-[O-[(phosphonooxy)methyl]oxime] sodium salt, is a synthetic, water-soluble analogue of progesterone and a neurosteroid which was developed for the potential treatment of traumatic brain injury. It is a rapidly converted prodrug of EIDD-036 (EPRX-036; progesterone 20-oxime), which is considered to be the active form of the agent. Previous C3 and C20 oxime derivatives of progesterone, such as P1-185 (progesterone 3-O-(-valine)-E-oxime), were also developed and studied prior to EIDD-1723.
See also
List of neurosteroids § Inhibitory > Synthetic > Pregnanes
List of progestogen esters § Oximes of progesterone derivatives
References
Experimental drugs
Enones
Neuroprotective agents
Neurosteroids
Organic sodium salts
Phosphate esters
Pregnanes
Prodrugs
Steroid oximes | EIDD-1723 | [
"Chemistry"
] | 245 | [
"Organic sodium salts",
"Chemicals in medicine",
"Prodrugs",
"Salts"
] |
57,002,362 | https://en.wikipedia.org/wiki/Attractant | An attractant is any chemical that attracts an organism, e.g. i) synthetic lures; ii) aggregation and sex pheromones (intraspecific interactions); and iii) synomones (interspecific interactions).
Synomone
An interspecific semiochemical that is beneficial to both interacting organisms, the emitter and the receiver; e.g., the floral synomone of certain Bulbophyllum species (Orchidaceae) attracts fruit fly males (Tephritidae: Diptera) as pollinators. In this truly mutualistic inter-relationship, both organisms gain benefits in their respective sexual reproduction: the orchid flowers are pollinated, and the Dacini fruit fly males are rewarded with a sex pheromone precursor or booster. The floral synomones, which also act as rewards to pollinators, take the form of phenylpropanoids (e.g., methyl eugenol) and phenylbutanoids (e.g., raspberry ketone, zingerone, and anisyl acetone, or a combination of the three phenylbutanoids).
References
Chemical ecology
Semiochemicals | Attractant | [
"Chemistry",
"Biology"
] | 245 | [
"Biochemistry",
"Chemical ecology",
"Semiochemicals"
] |
36,929,631 | https://en.wikipedia.org/wiki/Operando%20spectroscopy | Operando spectroscopy is an analytical methodology wherein the spectroscopic characterization of materials undergoing reaction is coupled simultaneously with measurement of catalytic activity and selectivity. The primary concern of this methodology is to establish structure-reactivity/selectivity relationships of catalysts and thereby yield information about mechanisms. Other uses include those in engineering improvements to existing catalytic materials and processes and in developing new ones.
Overview and terms
In the context of organometallic catalysis, an in situ reaction involves the real-time measurement of a catalytic process using techniques such as mass spectrometry, NMR, infrared spectroscopy, and gas chromatography to help gain insight into functionality of the catalyst.
Approximately 90% of industrial precursor chemicals are synthesized using catalysts. Understanding the catalytic mechanism and active site is crucial to creating catalysts with optimal efficiency and maximal product yield.
In situ reactor cell designs are typically incapable of maintaining the pressure and temperature consistency required for true catalytic reaction studies, making these cells insufficient. Several spectroscopic techniques require liquid-helium temperatures, making them inappropriate for real-world studies of catalytic processes. Therefore, the operando method must combine in situ spectroscopic measurement techniques with true catalytic kinetic conditions.
Operando (Latin for working) spectroscopy refers to continuous spectra collection of a working catalyst, allowing for simultaneous evaluation of both structure and activity/selectivity of the catalyst.
History
The term operando first appeared in the catalytic literature in 2002. It was coined by Miguel A. Bañares, who sought to name the methodology in a way that captured the idea of observing a functional material, in this case a catalyst, under actual working (i.e., device operation) conditions. The first international congress on operando spectroscopy took place in Lunteren, Netherlands, in March 2003, followed by further congresses in 2006 (Toledo, Spain), 2009 (Rostock, Germany), 2012 (Brookhaven, USA), and 2015 (Deauville, France). The name change from in situ to operando for the research field of spectroscopy of catalysts under working conditions was proposed at the Lunteren congress.
The analytical principle of simultaneously measuring the structure, properties, and function of a material, either as a disassembled component or as part of a device under operating conditions, is not restricted to catalysis and catalysts. Batteries and fuel cells have been subject to operando studies with respect to their electrochemical function.
Methodology
Operando spectroscopy is a class of methodology, rather than a specific spectroscopic technique such as FTIR or NMR, and is a logical technological progression of in situ studies. Catalyst scientists would ideally like to have a "motion picture" of each catalytic cycle, whereby the precise bond-making or bond-breaking events taking place at the active site are known; this would allow a visual model of the mechanism to be constructed. The ultimate goal is to determine the structure-activity relationship of the substrate-catalyst species of the same reaction. Performing two experiments on a single reaction, i.e., running the reaction while acquiring real-time spectra of the reaction mixture, facilitates a direct link between the structures of the catalyst and intermediates and the catalytic activity/selectivity. Although monitoring a catalytic process in situ can provide information relevant to catalytic function, it is difficult to establish a perfect correlation because of the current physical limitations of in situ reactor cells. Complications arise, for example, for gas-phase reactions, which require large void volumes that make it difficult to homogenize heat and mass within the cell. The crux of a successful operando methodology, therefore, is related to the disparity between laboratory setups and industrial setups, i.e., the limitations of properly simulating the catalytic system as it proceeds in industry.
The purpose of operando spectroscopy is to measure the catalytic changes that occur within the reactor during operation using time-resolved (and sometimes spatially resolved) spectroscopy. Time-resolved spectroscopy theoretically monitors the formation and disappearance of intermediate species at the active site of the catalyst as bonds are made and broken in real time. However, current operando instrumentation often works only on the second or subsecond time scale, and therefore only relative concentrations of intermediates can be assessed. Spatially resolved spectroscopy combines spectroscopy with microscopy to determine the active sites of the catalyst studied and the spectator species present in the reaction.
Cell design
Operando spectroscopy requires measurement of the catalyst under (ideally) real working conditions, involving comparable temperature and pressure environments to those of industrially catalyzed reactions, but with a spectroscopic device inserted into the reaction vessel. The parameters of the reaction are then measured continuously during the reaction using the appropriate instrumentation, i.e., online mass spectrometry, gas chromatography or IR/NMR spectroscopy.
Operando instruments (in situ cells) must ideally allow for spectroscopic measurement under optimal reaction conditions. Most industrial catalysis reactions require excessive pressure and temperature conditions which subsequently degrades the quality of the spectra by lowering the resolution of signals. Currently many complications of this technique arise due to the reaction parameters and the cell design. The catalyst may interact with the components of the operando apparatus; open space in the cell can have an effect on the absorption spectra, and the presence of spectator species in the reaction may complicate analysis of the spectra. Continuing development of operando reaction-cell design is in line with working towards minimizing the need for compromise between optimal catalysis conditions and spectroscopy. These reactors must handle specific temperature and pressure requirements while still providing access for spectrometry.
Other requirements considered when designing operando experiments include reagent and product flow rates, catalyst position, beam paths, and window positions and sizes. All of these factors must also be accounted for while designing operando experiments, as the spectroscopic techniques used may alter the reaction conditions. An example of this was reported by Tinnemans et al., who noted that local heating by a Raman laser can give spot temperatures exceeding 100 °C. Also, Meunier reported that when using DRIFTS, there is a noticeable temperature difference (on the order of hundreds of degrees) between the crucible core and the exposed surface of the catalyst, due to losses caused by the IR-transparent windows necessary for analysis.
Raman spectroscopy
Raman spectroscopy is one of the easiest methods to integrate into a heterogeneous operando experiment, as these reactions typically occur in the gas phase, so there is very little interference and good data can be obtained for the species on the catalytic surface. In order to use Raman, all that is required is to insert a small probe containing two optical fibers for excitation and detection. Pressure and heat complications are essentially negligible due to the nature of the probe. Operando confocal Raman micro-spectroscopy has been applied to the study of fuel cell catalytic layers with flowing reactant streams and controlled temperature.
UV-vis spectroscopy
Operando UV-vis spectroscopy is particularly useful for many homogeneous catalytic reactions because organometallic species are often colored. Fiber-optical sensors allow monitoring of the consumption of reactants and production of product within the solution through absorption spectra. Gas consumption as well as pH and electrical conductivity can also be measured using fiber-optic sensors within an operando apparatus.
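Converting such fiber-optic absorbance traces to concentrations typically relies on the Beer–Lambert law; a minimal sketch in which the molar absorptivity, path length, and absorbance trace are all hypothetical values:

import numpy as np

epsilon = 1.2e3       # assumed molar absorptivity, L mol^-1 cm^-1
path = 1.0            # assumed optical path length of the probe, cm

t = np.linspace(0.0, 600.0, 7)           # time, s
absorbance = 0.85 * np.exp(-t / 180.0)   # synthetic decay of a colored reactant

# Beer-Lambert: A = epsilon * l * c  =>  c = A / (epsilon * l)
conc = absorbance / (epsilon * path)
for ti, ci in zip(t, conc):
    print(f"t = {ti:5.0f} s   c = {ci * 1e3:.3f} mmol/L")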
IR spectroscopy
One case study investigated the formation of gaseous intermediates in the decomposition of CCl4 in the presence of steam over La2O3 using Fourier-transform infrared spectroscopy. This experiment produced useful information about the reaction mechanism, the orientation of the active site, and which species compete for the active site.
X-ray diffraction
A case study by Beale et al. involved preparation of iron phosphate and bismuth molybdate catalysts from an amorphous precursor gel. The study found that there were no intermediate phases in the reaction, and it helped to determine kinetic and structural information. The article uses the dated term in situ, but the experiment is, in essence, an operando method. Although X-ray diffraction is not a spectroscopic method, it is often used as an operando method in various fields, including catalysis.
X-ray spectroscopy
X-ray spectroscopy methods can be used for genuine operando analyses of catalysts and other functional materials. The redox dynamics of sulfur at a Ni/GDC anode during solid oxide fuel cell (SOFC) operation at mid- and low-range temperatures have been studied by operando S K-edge XANES. Ni is a typical catalyst material for the anode in high-temperature SOFCs.
The operando spectro-electrochemical cell for this high temperature gas-solid reaction study under electrochemical conditions was based on a typical high temperature heterogeneous catalysis cell, which was further equipped with electric terminals.
Very early method development for operando studies on PEM-FC fuel cells was done by Haubold et al. at Forschungszentrum Jülich and HASYLAB. Specifically they developed plexiglas spectro-electrochemical cells for XANES, EXAFS and SAXS and ASAXS studies with control of the electrochemical potential of the fuel cell. Under operation of the fuel cell they determined the change of the particle size of and oxidation state and shell formation of the platinum electrocatalyst. In contrast to the SOFC operation conditions, this was a PEM-FC study in liquid environment under ambient temperature.
The same operando method is applied to battery research and yields information on the changes in the oxidation state of electrochemically active elements in a cathode, such as Mn, via XANES; information on the coordination shell and bond lengths via EXAFS; and information on microstructure changes during battery operation via ASAXS. Since lithium-ion batteries are intercalation batteries, information on the chemistry and electronic structure of the bulk during operation is of interest. For this, soft X-ray information can be obtained using hard X-ray Raman scattering.
Fixed energy methods (FEXRAV) have been developed and applied to the study of the catalytic cycle for the oxygen evolution reaction on iridium oxide. FEXRAV consists of recording the absorption coefficient at a fixed energy while varying at will the electrode potential in an electrochemical cell during the course of an electrochemical reaction. It allows a rapid screening of several systems under different experimental conditions (e.g., nature of the electrolyte, potential window), preliminary to deeper XAS experiments.
The soft X-ray regime (i.e., photon energies < 1000 eV) can be profitably used for investigating heterogeneous solid–gas reactions. In this case, it has been shown that XAS can be sensitive both to the gas phase and to the states of the solid surface.
Gas chromatography
One case study monitored the dehydrogenation of propane to propene using micro-GC. Reproducibility for the experiment was high. The study found that the activity of the catalyst (Cr/Al2O3) increased to a sustained maximum of 10% after 28 minutes, an industrially useful insight into the working stability of a catalyst.
Mass spectrometry
Use of mass spectrometry as a second component of an operando experiment allows optical spectra to be obtained before obtaining a mass spectrum of the analytes. Electrospray ionization allows a wider range of substances to be analyzed than other ionization methods, due to its ability to ionize samples without thermal degradation. In 2017, Frank Crespilho and coworkers introduced a new approach to operando differential electrochemical mass spectrometry (DEMS) aimed at evaluating enzyme activity. NAD-dependent alcohol dehydrogenase (ADH) enzymes for ethanol oxidation were investigated by DEMS. The broad mass spectra obtained under bioelectrochemical control, and with unprecedented accuracy, were used to provide new insight into enzyme kinetics and mechanisms.
Impedance spectroscopy
Applications
Nanotechnology
Operando spectroscopy has become a vital tool for surface chemistry. Nanotechnology, as used in materials science, involves active catalytic sites on a reagent surface with at least one dimension on the nanoscale, approximately 1–100 nm. As particle size decreases, surface area increases, resulting in a more reactive catalytic surface. The reduced scale of these reactions affords several opportunities while presenting unique challenges; for example, due to the very small size of the crystals (sometimes <5 nm), any X-ray diffraction signal may be very weak.
As catalysis is a surface process, one particular challenge in catalytic studies is resolving the typically weak spectroscopic signal of the catalytically active surface against that of the inactive bulk structure. Moving from the micro to the nano scale increases the surface to volume ratio of the particles, maximizing the signal of the surface relative to that of the bulk.
Furthermore, as the scale of the reaction decreases towards nano scale, individual processes can be discerned that would otherwise be lost in the average signal of a bulk reaction composed of multiple coincident steps and species such as spectators, intermediates, and reactive sites.
Heterogeneous catalysis
Operando spectroscopy is widely applicable to heterogeneous catalysis, which is used extensively in industrial chemistry.
An example of operando methodology for monitoring heterogeneous catalysis is the dehydrogenation of propane with molybdenum catalysts, commonly used in the industrial processing of petroleum. Mo/SiO2 and Mo/Al2O3 were studied with an operando setup involving EPR/UV-vis, NMR/UV-vis, and Raman spectroscopy. The study examined the solid molybdenum catalyst in real time. It was determined that the molybdenum catalyst exhibited propane dehydrogenation activity but deactivated over time. The spectroscopic data indicated the most likely catalytically active state during the production of propene. The deactivation of the catalyst was determined to be the result of coke formation and the irreversible formation of crystals that were difficult to reduce back to the active species. The dehydrogenation of propane can also be achieved with chromium catalysts, through the reduction of the chromium to a lower oxidation state. Propylene is one of the most important organic starting materials used globally, particularly in the synthesis of various plastics. Therefore, the development of effective catalysts to produce propylene is of great interest, and operando spectroscopy is of great value to their further research and development.
Homogeneous catalysis
Combining operando Raman, UV–vis, and ATR-IR spectroscopy is particularly useful for studying homogeneous catalysis in solution. Transition-metal complexes can perform catalytic oxidation reactions on organic molecules; however, many of the corresponding reaction pathways are still unclear. For example, an operando study of the oxidation of veratryl alcohol by a salcomine catalyst at high pH determined that the initial oxidation of the two substrate molecules to aldehydes is followed by the reduction of molecular oxygen to water, and that the rate-determining step is the detachment of the product. Understanding organometallic catalytic activity on organic molecules is extremely valuable for the further development of materials science and pharmaceuticals.
Gas or volatile organic compounds (VOC) sensing
A recent study from the laboratory of Günther Rupprechter shows that operando spectroscopy can also be used to investigate the performance of VOC-sensing semiconductor nanomaterials. To demonstrate this, operando spectroscopy was applied to directly investigate the room-temperature detection of methanol by metal oxide semiconductor composites (mainly anatase titanium dioxide nanoparticles with reduced graphene oxide) in gas sensors. Operando DRIFTS, along with resistance measurements, was employed to examine methanol interactions with the sensors. Moreover, mass spectrometry (MS) combined with resistance measurements revealed surface electrochemical reactions. Overall, the operando spectroscopy findings showed that the nanocomposite sensor's mechanism involves a combination of reversible physisorption and irreversible chemisorption of methanol, sensor modification over time, and electron/oxygen depletion and restoration, resulting in the formation of carbon dioxide and water.
References
Spectroscopy
Catalysis | Operando spectroscopy | [
"Physics",
"Chemistry"
] | 3,314 | [
"Catalysis",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Chemical kinetics",
"Spectroscopy"
] |
44,012,079 | https://en.wikipedia.org/wiki/Meter%20water%20equivalent | In physics, the meter water equivalent (often m.w.e. or mwe) is a standard measure of cosmic ray attenuation in underground laboratories. A laboratory at a depth of 1000 m.w.e. is shielded from cosmic rays equivalently to a lab 1,000 m below the surface of a body of water. Because laboratories at the same depth (in meters) can have greatly varied levels of cosmic ray penetration, the m.w.e. provides a convenient and consistent way of comparing cosmic ray levels in different underground locations.
Cosmic ray attenuation depends on the density of the material of the overburden, so the m.w.e. is defined as the product of depth and density (also known as an interaction depth). Because the density of water is 1 g/cm3, 1 m of water gives an interaction depth of 1 hg/cm2 (100 g/cm2). Some publications use hg/cm2 instead of m.w.e., although the two units are equivalent.
For example, the Waste Isolation Pilot Plant, located deep in a salt formation, achieves 1585 m.w.e. of shielding. Soudan Mine is only 8% deeper, but because it is in denser iron-rich rock it achieves 2100 m.w.e. of shielding, 32% more.
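Because the interaction depth is simply depth times density, the conversion is a one-liner; the following minimal Python sketch uses hypothetical round numbers (a 700 m overburden of mean density 2.4 g/cm3), not the figures for any particular laboratory:

def mwe(depth_m, density_g_cm3):
    # interaction depth in m.w.e.: depth (m) times overburden density
    # relative to water (1 g/cm^3)
    return depth_m * density_g_cm3

print(mwe(700, 2.4))  # 1680.0 m.w.e. for the hypothetical overburden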
Another factor that must be accounted for is the shape of the overburden. While some laboratories are located beneath a flat ground surface, many are located in tunnels in mountains. Thus, the distance to the surface in directions other than straight up is less than it would be assuming a flat surface. This can increase the muon flux by a factor of 4.
The usual conversion between m.w.e. and total muon flux is given by Mei and Hime:

Φμ(h) = (67.97 e^(−h/285) + 2.071 e^(−h/698)) × 10^−6 cm^−2 s^−1,

where h is the depth in m.w.e. and Φμ is the total muon flux per cm2⋅s. (The first term dominates for depths up to 1681.5 m.w.e.; below that, the second term dominates. Thus, for great depths, the factor of 4 mentioned above corresponds to a difference of 698 ln 4 ≈ 968 m.w.e.)
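A minimal Python sketch of this conversion, using the two-exponential parametrization as reconstructed above (treat the exact coefficients as an assumption to be checked against Mei and Hime's paper):

import math

def muon_flux(h_mwe):
    # total muon flux (per cm^2 per s) at depth h in m.w.e.,
    # two-exponential parametrization as given above
    return (67.97 * math.exp(-h_mwe / 285.0)
            + 2.071 * math.exp(-h_mwe / 698.0)) * 1e-6

print(muon_flux(1585))  # flux at a 1585 m.w.e. site
print(muon_flux(2100))  # flux at a 2100 m.w.e. site
# at great depth, adding 698*ln(4) ≈ 968 m.w.e. cuts the flux by about 4:
print(muon_flux(4000) / muon_flux(4000 + 698 * math.log(4)))  # ≈ 4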
Standard rock
In addition to m.w.e., underground laboratory depth can also be measured in meters of standard rock. Standard rock is defined to have mass number A = 22, atomic number Z = 11, and density 2.65 g/cm3. Because most laboratories are under earth and not underwater, the depth in standard rock is often closer to the actual underground depth of the laboratory.
Existing underground laboratories
Underground laboratories exist at depths ranging from just below ground level to approximately 6000 m.w.e. at SNOLAB and 6700 m.w.e. at the Jinping Underground Laboratory in China.
References
Equivalent units
Underground laboratories
Physical quantities | Meter water equivalent | [
"Physics",
"Mathematics"
] | 565 | [
"Physical phenomena",
"Equivalent quantities",
"Physical quantities",
"Quantity",
"Equivalent units",
"Physical properties",
"Units of measurement"
] |
44,012,526 | https://en.wikipedia.org/wiki/Hexaperchloratoaluminate | The hexaperchloratoaluminate ion is a polyatomic anion with the chemical formula [Al(ClO4)6]3−. It is composed of six perchlorate (ClO4−) anions bound to a central aluminium ion (Al3+), resulting in a net charge of −3. This ion is a highly oxidizing and reactive complex, similar to other hexacoordinate aluminium complexes such as hexanitratoaluminate.
The aluminium perchlorate salts formed with hexaperchloratoaluminate are of particular interest due to their potential uses as energetic materials. The series of hexaperchloratoaluminate salts includes lithium hexaperchloratoaluminate, ammonium hexaperchloratoaluminate, tetramethylammonium hexaperchloratoaluminate, and trinitronium hexaperchloratoaluminate. Each of these compounds possesses unique properties and may have potential applications in areas such as rocket propellants, pyrotechnics, and other explosive-based technologies.
Preparation
Hexaperchloratoaluminates can be synthesized by combining aluminium trichloride and various perchlorates in liquid sulfur dioxide at a temperature of −10 °C.
To form hexaperchloratoaluminates, one can heat aluminium nitrate in the presence of nitrosonium or nitronium perchlorate at a temperature of 125 °C:
10–14 [NO]ClO4 + Al(NO3)3 → [NO2]3[Al(ClO4)6] + (gaseous products),
6–10 [NO2]ClO4 + Al(NO3)3 → [NO2]3[Al(ClO4)6] + (gaseous products),
Obtaining hydrazinium hexaperchloratoaluminate in a highly pure form is problematic. According to the available studies, this compound can only be produced via a low-yielding synthesis route involving the reaction of aluminium chloride, hydrazinium perchlorate, and nitronium perchlorate.
Guanidinium hexaperchloratoaluminate can be synthesized via an analogous reaction.
References
Perchlorates
Aluminium complexes
Anions | Hexaperchloratoaluminate | [
"Physics",
"Chemistry"
] | 474 | [
"Matter",
"Anions",
"Perchlorates",
"Salts",
"Ions"
] |
44,012,866 | https://en.wikipedia.org/wiki/Real-root%20isolation | In mathematics, and more specifically in numerical analysis and computer algebra, real-root isolation of a polynomial consists of producing disjoint intervals of the real line that together contain all the real roots of the polynomial, each interval containing exactly one of them.
Real-root isolation is useful because the usual root-finding algorithms for computing the real roots of a polynomial may produce some real roots, but cannot generally certify having found all real roots. In particular, if such an algorithm does not find any root, one does not know whether this is because there is no real root. Some algorithms compute all complex roots, but, as there are generally far fewer real roots than complex roots, most of their computation time is spent computing non-real roots (on average, a polynomial of degree n has n complex roots but only of the order of log n real ones). Moreover, it may be difficult to distinguish the real roots from the non-real roots with small imaginary parts (see the example of Wilkinson's polynomial in the next section).
The first complete real-root isolation algorithm results from Sturm's theorem (1829). However, when real-root-isolation algorithms began to be implemented on computers, it appeared that algorithms derived from Sturm's theorem are less efficient than those derived from Descartes' rule of signs (1637).
Since the beginning of the 20th century, there has been active research on improving the algorithms derived from Descartes' rule of signs, producing very efficient implementations and computing their computational complexity. The best implementations can routinely isolate the real roots of polynomials of degree more than 1,000.
Specifications and general strategy
For finding the real roots of a polynomial, the common strategy is to divide the real line (or the interval of it where roots are searched) into disjoint intervals until each interval contains at most one root. Such a procedure is called root isolation, and a resulting interval that contains exactly one root is an isolating interval for that root.
Wilkinson's polynomial shows that a very small modification of one coefficient of a polynomial may change dramatically not only the value of the roots, but also their nature (real or complex). Also, even with a good approximation, when one evaluates a polynomial at an approximate root, one may get a result that is far from zero. For example, if a polynomial of degree 20 (the degree of Wilkinson's polynomial) has a root close to 10, the derivative of the polynomial at the root may be of the order of 10^12; this implies that an error of 10^−12 on the value of the root may produce a value of the polynomial at the approximate root of the order of 1. It follows that, except maybe for very low degrees, a root-isolation procedure cannot give reliable results without using exact arithmetic. Therefore, if one wants to isolate the roots of a polynomial with floating-point coefficients, it is often better to convert them to rational numbers, and then take the primitive part of the resulting polynomial, in order to obtain a polynomial with integer coefficients.
For this reason, although the methods described below work theoretically with real numbers, they are generally used in practice with polynomials with integer coefficients and intervals with rational endpoints. Also, the polynomials are always supposed to be square-free. There are two reasons for this. Firstly, Yun's algorithm for computing the square-free factorization costs less than twice the cost of computing the greatest common divisor of the polynomial and its derivative. As this may produce factors of lower degrees, it is generally advantageous to apply root-isolation algorithms only to polynomials without multiple roots, even when the algorithm does not require this. The second reason for considering only square-free polynomials is that the fastest root-isolation algorithms do not work in the presence of multiple roots.
For root isolation, one requires a procedure for counting the real roots of a polynomial in an interval without having to compute them, or at least a procedure for deciding whether an interval contains zero, one, or more roots. With such a decision procedure, one may work with a working list of intervals that may contain real roots. At the beginning, the list contains a single interval containing all roots of interest, generally the whole real line or its positive part. Then each interval of the list is divided into two smaller intervals. If one of the new intervals does not contain any root, it is removed from the list. If it contains one root, it is put in an output list of isolating intervals. Otherwise, it is kept in the working list for further divisions, and the process continues until all roots are eventually isolated.
Sturm's theorem
The first complete root-isolation procedure results from Sturm's theorem (1829), which expresses the number of real roots in an interval in terms of the number of sign variations of the values of a sequence of polynomials, called Sturm's sequence, at the ends of the interval. Sturm's sequence is the sequence of remainders that occur in a variant of the Euclidean algorithm applied to the polynomial and its derivative. When implemented on computers, it appeared that root isolation with Sturm's theorem is less efficient than the other methods described below. Consequently, Sturm's theorem is rarely used for effective computations, although it remains useful for theoretical purposes.
Descartes' rule of signs and its generalizations
Descartes' rule of signs asserts that the difference between the number of sign variations in the sequence of the coefficients of a polynomial and the number of its positive real roots is a nonnegative even integer. It follows that if this number of sign variations is zero, then the polynomial does not have any positive real roots, and if this number is one, then the polynomial has a unique positive real root, which is a simple root. Unfortunately, the converse is not true: a polynomial that has either no positive real root or a single positive simple root may have a number of sign variations greater than 1.
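Counting sign variations is straightforward; the following minimal Python sketch (an illustration, not part of any reference implementation) counts them and applies the rule to a sample polynomial:

def var(coeffs):
    # number of sign variations in a coefficient sequence, zeros skipped
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(a != b for a, b in zip(signs, signs[1:]))

# x^3 - x - 1 has coefficients [1, 0, -1, -1] (decreasing degree):
# one sign variation, hence exactly one positive real root (≈ 1.3247)
print(var([1, 0, -1, -1]))  # 1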
This has been generalized by Budan's theorem (1807) into a similar result for the real roots in a half-open interval (l, r]: if p(x) is a polynomial, and v is the difference between the numbers of sign variations of the sequences of the coefficients of p(x + l) and p(x + r), then v minus the number of real roots in the interval, counted with their multiplicities, is a nonnegative even integer. This is a generalization of Descartes' rule of signs, because, for r sufficiently large, there is no sign variation in the coefficients of p(x + r), and all real roots are smaller than r.
Budan's theorem may provide a real-root-isolation algorithm for a square-free polynomial (a polynomial without multiple roots): from the coefficients of the polynomial, one may compute an upper bound M on the absolute values of the roots and a lower bound m on the absolute values of the differences of two roots (see Properties of polynomial roots). Then, if one divides the interval (−M, M) into intervals of length less than m, every real root is contained in some interval, and no interval contains two roots. The isolating intervals are thus the intervals for which Budan's theorem asserts an odd number of roots.
However, this algorithm is very inefficient, as one cannot use a coarser partition of the interval (−M, M): if Budan's theorem gives a result larger than 1 for an interval of larger size, there is no way of ensuring that it does not contain several roots.
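To make the root-counting step concrete, here is a minimal Python sketch of Budan's count for a half-open interval (l, r], using a Taylor shift to compute the coefficients of p(x + a); the sign-variation counter from the previous sketch is repeated so the block runs on its own:

def var(coeffs):
    # sign variations, zeros skipped (as in the previous sketch)
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(a != b for a, b in zip(signs, signs[1:]))

def taylor_shift(coeffs, a):
    # coefficients (decreasing degree) of p(x + a), by repeated Horner steps
    c = list(coeffs)
    for i in range(len(c) - 1):
        for j in range(1, len(c) - i):
            c[j] += a * c[j - 1]
    return c

def budan_bound(p, l, r):
    # var(p(x + l)) - var(p(x + r)): exceeds the number of real roots
    # in (l, r] by a nonnegative even integer
    return var(taylor_shift(p, l)) - var(taylor_shift(p, r))

p = [1, -3, 2]               # x^2 - 3x + 2, roots 1 and 2
print(budan_bound(p, 0, 3))  # 2: both roots lie in (0, 3]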
Vincent's and related theorems
Vincent's theorem (1834) provides a method for real-root isolation, which is at the basis of the most efficient real-root-isolation algorithms. It concerns the positive real roots of a square-free polynomial (that is, a polynomial without multiple roots). If a1, a2, … is a sequence of positive real numbers, let

ck = a1 + 1/(a2 + 1/(⋯ + 1/ak))

be the kth convergent of the continued fraction [a1; a2, a3, …].
To prove his theorem, Vincent proved a result that is useful in its own right.
For working with real numbers, the ai may be chosen as arbitrary positive real numbers, but, as effective computations are done with rational numbers, it is generally convenient to suppose that they are integers.
The "small enough" condition has been quantified independently by Nikola Obreshkov, and Alexander Ostrowski:
For polynomials with integer coefficients, the minimum distance between roots may be lower-bounded in terms of the degree of the polynomial and the maximal absolute value of its coefficients; see Properties of polynomial roots. This allows the analysis of the worst-case complexity of algorithms based on Vincent's theorems. However, the Obreschkoff–Ostrowski theorem shows that the number of iterations of these algorithms depends on the distances between roots in the neighborhood of the working interval; therefore, the number of iterations may vary dramatically between different roots of the same polynomial.
James V. Uspensky gave a bound on the length of the continued fraction (the integer k in Vincent's theorem) needed to obtain zero or one sign variations.
Continued fraction method
The use of continued fractions for real-root isolation was introduced by Vincent, although he credited Joseph-Louis Lagrange for the idea, without providing a reference. For making an algorithm from Vincent's theorem, one must provide a criterion for choosing the ai that occur in his theorem. Vincent himself provided some choice (see below). Some other choices are possible, and the efficiency of the algorithm may depend dramatically on these choices. Below is presented an algorithm in which these choices result from an auxiliary function that will be discussed later.
For running this algorithm, one must work with a list of intervals represented by a specific data structure. The algorithm works by choosing an interval, removing it from the list, adding zero, one, or two smaller intervals to the list, and possibly outputting an isolating interval.
For isolating the real roots of a polynomial P of degree n, each interval is represented by a pair (A(x), M(x)), where A(x) is a polynomial of degree n and M(x) = (ax + b)/(cx + d) is a Möbius transformation with integer coefficients. One has

A(x) = (cx + d)^n P(M(x)),

and the interval represented by this data structure is the interval that has M(0) = b/d and M(∞) = a/c as end points. The Möbius transformation maps the roots of P in this interval to the roots of A in (0, +∞).
The algorithm works with a list of intervals that, at the beginning, contains the two intervals (P(x), x) and (P(−x), −x), corresponding to the partition of the reals into the positive and the negative ones (one may suppose that zero is not a root since, if it were, it would suffice to apply the algorithm to P(x)/x). Then for each interval (A(x), M(x)) in the list, the algorithm removes it from the list; if the number of sign variations of the coefficients of A is zero, there is no root in the interval, and one passes to the next interval. If the number of sign variations is one, the interval defined by M(0) and M(∞) is an isolating interval. Otherwise, one chooses a positive real number b for dividing the interval (0, +∞) into (0, b) and (b, +∞), and, for each subinterval, one composes M with a Möbius transformation that maps the subinterval onto (0, +∞), giving two new intervals to be added to the list. In pseudocode, this gives the following, where var(A) denotes the number of sign variations of the coefficients of the polynomial A.
function continued fraction is
input: P(x), a square-free polynomial,
output: a list of pairs of rational numbers defining isolating intervals
/* Initialization */
L := [(P(x), x), (P(–x), –x)] /* two starting intervals */
Isol := [ ]
/* Computation */
while L ≠ [ ] do
Choose (A(x), M(x)) in L, and remove it from L
v := var(A)
if v = 0 then exit /* no root in the interval */
if v = 1 then /* isolating interval found */
add (M(0), M(∞)) to Isol
exit
b := some positive integer
B(x) := A(x + b)
w := v – var(B)
if B(0) = 0 then /* rational root found */
add (M(b), M(b)) to Isol
B(x) := B(x)/x
add (B(x), M(b + x)) to L /* roots in (M(b), M(+∞)) */
if w = 0 then exit /* Budan's theorem */
if w = 1 then /* Budan's theorem again */
add (M(0), M(b)) to Isol
if w > 1 then
add ((1 + x)^n A(b/(1 + x)), M(b/(1 + x))) to L /* roots in (M(0), M(b)) */
The different variants of the algorithm depend essentially on the choice of b. In Vincent's papers and in Uspensky's book, one has always b = 1, with the difference that Uspensky did not use Budan's theorem for avoiding further bisections of the corresponding interval.
The drawback of always choosing b = 1 is that one has to perform many successive changes of variable of the form x → 1 + x. These may be replaced by a single change of variable, but, nevertheless, one still has to perform the intermediate changes of variable in order to apply Budan's theorem.
A way of improving the efficiency of the algorithm is to take for b a lower bound on the positive real roots, computed from the coefficients of the polynomial (see Properties of polynomial roots for such bounds).
Bisection method
The bisection method consists roughly of starting from an interval containing all the real roots of a polynomial and dividing it recursively into two parts until eventually getting intervals that contain either zero or one root. The starting interval may be of the form (−B, B), where B is an upper bound on the absolute values of the roots, such as those given in Properties of polynomial roots. For technical reasons (simpler changes of variable, simpler complexity analysis, the possibility of taking advantage of the binary representation of numbers in computers), the algorithms are generally presented as starting with the interval (0, 1). There is no loss of generality, as the changes of variable x → Bx and x → −Bx move respectively the positive and the negative roots into the interval (0, 1). (The single change of variable x → B(2x − 1), which maps (0, 1) onto (−B, B), may also be used.)
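For the initial bound B, the classical Cauchy bound suffices; the following minimal Python sketch (one possible choice among the bounds referred to above) computes it, together with a lower bound on the magnitudes of the nonzero roots obtained by applying the same bound to the reversed polynomial:

def cauchy_root_bound(coeffs):
    # coeffs in decreasing degree; every root z satisfies
    # |z| < 1 + max|a_i|/|a_n|, with a_n the leading coefficient
    lead = abs(coeffs[0])
    return 1 + max(abs(c) for c in coeffs[1:]) / lead

def root_magnitude_lower_bound(coeffs):
    # the roots of the reversed polynomial are the reciprocals of the
    # nonzero roots, so the same bound yields a lower bound
    # (assumes a nonzero constant term)
    return 1 / cauchy_root_bound(coeffs[::-1])

p = [1, -3, 2]                        # x^2 - 3x + 2, roots 1 and 2
print(cauchy_root_bound(p))           # 4.0: both roots are below 4
print(root_magnitude_lower_bound(p))  # 0.4: both roots are above 0.4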
The method requires an algorithm for testing whether an interval has zero, one, or possibly several roots, and, to guarantee termination, this testing algorithm must exclude the possibility of getting infinitely many times the output "possibility of several roots". Sturm's theorem and Vincent's auxiliary theorem provide such convenient tests. As the use of Descartes' rule of signs and Vincent's auxiliary theorem is much more computationally efficient than the use of Sturm's theorem, only the former is described in this section.
The bisection method based on Descartes' rule of signs and Vincent's auxiliary theorem was introduced in 1976 by Akritas and Collins under the name of modified Uspensky algorithm, and has been referred to as the Uspensky algorithm, the Vincent–Akritas–Collins algorithm, or the Descartes method, although Descartes, Vincent, and Uspensky never described it.
The method works as follows. For searching the roots in some interval, one first changes the variable to map the interval onto (0, 1), giving a new polynomial q(x). For searching the roots of q in (0, 1), one maps the interval (0, 1) onto (0, +∞) by the change of variable x → 1/(1 + x), giving a polynomial r(x) = (1 + x)^n q(1/(1 + x)). Descartes' rule of signs applied to the polynomial r gives indications on the number of real roots of q in the interval (0, 1), and thus on the number of roots of the initial polynomial in the interval that has been mapped onto (0, 1). If there is no sign variation in the sequence of the coefficients of r, then there is no real root in the considered intervals. If there is one sign variation, then one has an isolating interval. Otherwise, one splits the interval (0, 1) into (0, 1/2) and (1/2, 1), and one maps them onto (0, 1) by the changes of variable x → x/2 and x → (x + 1)/2. Vincent's auxiliary theorem ensures the termination of this procedure.
Except for the initialization, all these changes of variable consist of the composition of at most two very simple changes of variable: the scaling by two x → x/2, the translation x → x + 1, and the inversion x → 1/x, the latter consisting simply of reversing the order of the coefficients of the polynomial. As most of the computing time is devoted to changes of variable, the method of mapping every interval onto (0, 1) is fundamental for ensuring good efficiency.
Pseudocode
The following notation is used in the pseudocode that follows.
p(x) is the polynomial for which the real roots in the interval (0, 1) have to be isolated
var(q) denotes the number of sign variations in the sequence of the coefficients of the polynomial q
The elements of the working list have the form (c, k, q(x)), where
c and k are two nonnegative integers such that c < 2^k, which represent the interval (c/2^k, (c + 1)/2^k)
q(x) = 2^(kn) p((x + c)/2^k), where n is the degree of p (the polynomial q may be computed directly from p, c, and k, but it is less costly to compute it incrementally, as will be done in the algorithm; if p has integer coefficients, the same is true for q)
function bisection is
input: p(x), a square-free polynomial, such that p(0)p(1) ≠ 0,
for which the roots in the interval (0, 1) are searched
output: a list of triples (c, k, h),
representing isolating intervals of the form (c/2^k, (c + h)/2^k)
/* Initialization */
L := [(0, 0, p(x))] /* a single element in the working list L */
Isol := [ ]
n := degree(p)
/* Computation */
while L ≠ [ ] do
Choose (c, k, q(x)) in L, and remove it from L
if q(0) = 0 then
q(x) := q(x)/x
n := n – 1 /* A rational root found */
add (c, k, 0) to Isol
v := var((x + 1)^n q(1/(x + 1)))
if v = 1 then /* An isolating interval found */
add (c, k, 1) to Isol
if v > 1 then /* Bisecting */
add (2c, k + 1, 2^n q(x/2)) to L
add (2c + 1, k + 1, 2^n q((x + 1)/2)) to L
end
This procedure is essentially the one described by Collins and Akritas. The running time depends mainly on the number of intervals that have to be considered and on the changes of variable. There are ways of improving the efficiency, which have been an active subject of research since the publication of the algorithm, and mainly since the beginning of the 21st century.
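The following self-contained Python sketch implements the procedure above for a square-free polynomial with integer coefficients whose roots of interest lie in (0, 1); it is an illustration of the method, not a reproduction of Collins and Akritas's implementation, and it recomputes each transformed polynomial directly rather than incrementally:

def var(coeffs):
    # sign variations, zeros skipped
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(a != b for a, b in zip(signs, signs[1:]))

def taylor_shift(coeffs, a):
    # coefficients (decreasing degree) of q(x + a)
    c = list(coeffs)
    for i in range(len(c) - 1):
        for j in range(1, len(c) - i):
            c[j] += a * c[j - 1]
    return c

def descartes_bound_01(q):
    # var((x + 1)^n q(1/(x + 1))): bound on the number of roots of q in (0, 1);
    # reversing the coefficients realizes the inversion x -> 1/x
    return var(taylor_shift(q[::-1], 1))

def isolate_01(p):
    # returns triples (c, k, h): h = 1 for an isolating interval
    # (c/2^k, (c + 1)/2^k), h = 0 for the exact rational root c/2^k
    work, isol = [(0, 0, list(p))], []
    while work:
        c, k, q = work.pop()
        if q[-1] == 0:                      # q(0) = 0: rational root found
            q = q[:-1]
            isol.append((c, k, 0))
        v = descartes_bound_01(q)
        if v == 1:
            isol.append((c, k, 1))
        elif v > 1:                         # bisect
            left = [a * 2**i for i, a in enumerate(q)]   # 2^n q(x/2)
            work.append((2 * c, k + 1, left))
            work.append((2 * c + 1, k + 1, taylor_shift(left, 1)))  # 2^n q((x+1)/2)
    return isol

# 16x^2 - 16x + 3 has roots 1/4 and 3/4:
print(isolate_01([16, -16, 3]))  # [(1, 1, 1), (0, 1, 1)], i.e. (1/2, 1) and (0, 1/2)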
Recent improvements
Various ways of improving the Akritas–Collins bisection algorithm have been proposed. They include a method for avoiding the storage of a long list of polynomials without losing the simplicity of the changes of variable, the use of approximate arithmetic (floating-point and interval arithmetic) whenever it suffices for obtaining the correct number of sign variations, the use of Newton's method when possible, the use of fast polynomial arithmetic, shortcuts for long chains of bisections in the case of clusters of close roots, and bisections into unequal parts for limiting instability problems in polynomial evaluation.
All these improvements lead to an algorithm for isolating all the real roots of a polynomial with integer coefficients whose computational complexity, expressed with the soft O notation Õ (which omits logarithmic factors), is polynomial in n, k, and s,
where n is the degree of the polynomial, k is the number of nonzero terms, and s is the maximum number of digits of the coefficients.
The implementation of this algorithm appears to be more efficient than any other implemented method for computing the real roots of a polynomial, even in the case of polynomials having very close roots (previously the most difficult case for the bisection method).
References
Sources
Polynomials
Polynomial factorization algorithms
Real algebraic geometry
Computer algebra | Real-root isolation | [
"Mathematics",
"Technology"
] | 3,969 | [
"Polynomials",
"Computational mathematics",
"Computer algebra",
"Computer science",
"Algebra"
] |
44,014,090 | https://en.wikipedia.org/wiki/Yottabyte%20LLC | Yottabyte LLC was a software-defined data center (SDDC) company founded in 2010 and headquartered in Bloomfield Township, Oakland County, Michigan. Yottabyte also operates three physical data centers throughout the United States. Yottabyte software enables companies to build virtual data centers from industry standard server, storage, and networking gear and software.
History
Yottabyte was founded in 2010. Its founders include Greg Campbell (Vice President of Technology), Paul E. Hodges III (President and CEO), and Duane Tursi. The Yottabyte concept originated when Campbell and Hodges were sitting in a conference room tossing around ideas. Campbell (CTO) developed the Yottabyte software architecture.
Yottabyte LLC was named a "Cool Vendor in Compute Platform" by Gartner in 2016, and was a runner up for the Virtualization Trailblazers in 2015 by Tech Trailblazers.
In September 2016, Yottabyte partnered with the University of Michigan to accelerate data-intensive research. The project, known as the Yottabyte Research Cloud, gives scientists access to high-performance, secure and flexible computing environments that enables the analysis of sensitive data sets restricted by federal privacy laws, proprietary access agreements, or confidentiality requirements.
In May 2017, Yottabyte brought Michael J. Aloe on board as Senior Vice President of Sales & Operations. Aloe was announced as Chief Operations Officer (COO) in May 2018.
See also
Cloud storage
Computer data storage
Software-defined data center
Storage virtualization
References
Cloud platforms
Cloud computing providers
2010 establishments in Michigan
Companies based in Oakland County, Michigan | Yottabyte LLC | [
"Technology"
] | 333 | [
"Cloud platforms",
"Computing platforms"
] |
44,017,500 | https://en.wikipedia.org/wiki/Jerrold%20Meinwald | Jerrold Meinwald (January 16, 1927 – April 23, 2018) was an American chemist known for his work on chemical ecology, a field he co-founded with his colleague and friend Thomas Eisner. He was a Goldwin Smith Professor Emeritus of Chemistry at Cornell University. He was author or co-author of well over 400 scientific articles. His interest in chemistry was sparked by fireworks made with his friend Michael Cava when they were still in junior high school. Meinwald was also a music aficionado and studied flute with Marcel Moyse, widely regarded as the greatest flutist of his time.
Career
Jerrold Meinwald was born in 1927 in New York City. He studied chemistry at the University of Chicago where he earned his Bachelor of Science degree in 1948. He then went on to Harvard University where he obtained his Ph.D. with R.B. Woodward in 1952. A DuPont Fellowship brought him to Cornell, where he has spent most of his subsequent career.
Since the early 1960s, he worked, often in collaboration with Thomas Eisner, on chemical signalling in animals, particularly insects and other arthropods; he is regarded as one of the founders of the field of chemical ecology. A particular field of interest was the ways in which insects either use chemicals synthesised by the plants that they feed on, or use those plant chemicals as substrates from which to synthesize their own. A species on which he and Eisner published several times over decades is the moth Utetheisa ornatrix, which collects pyrrolizidine alkaloids from its food source and uses them as a deterrent to predators; the male also uses them as a pheromone and passes them on in its semen to the female, who uses them to make her eggs unpalatable.
In analysing the constituents of plant signalling, he developed a number of retrosynthetic techniques, including the Meinwald rearrangement, in which an epoxide is converted to a carbonyl compound in the presence of a Lewis acid; he also performed substantial research over forty years in NMR spectroscopy and in reactions for producing chiral derivatives in order to determine the absolute configurations of chiral molecules.
In 1981, Meinwald became a founding member of the World Cultural Council. He died in Ithaca on April 23, 2018, at the age of 91.
Awards
He won the National Medal of Science in 2012. He was a member of the National Academy of Sciences from 1969, a Fellow of the American Academy of Arts and Sciences from 1970, and a member of the American Philosophical Society from 1987. Other notable honours:
Distinguished Leadership Award, American Academy of Arts and Sciences (2016)
Grand Prix de la Fondation de la Maison de la Chimie, Paris, France (2006)
Nakanishi Prize, American Chemical Society (2014)
Roger Adams Award in Organic Chemistry, American Chemical Society (2005)
Chemical Pioneer Award, American Institute of Chemists (1997)
Silver Medal of the International Society of Chemical Ecology (1991)
Tyler Prize for Environmental Achievement (1990)
A. C. Cope Scholar Award, American Chemical Society (1989)
Fellow, American Academy of Arts and Sciences (1970)
Publications
Eisner, T, & Meinwald, J, Eds. (1995) Chemical Ecology: The Chemistry of Biotic Interaction. National Academy Press.
References
External links
Faculty page at Cornell University
A conversation with Jerrold Meinwald. Video on YouTube
Plenary Lecture (2014), Annual Meeting of the International Society of Chemical Ecology. Video on YouTube
1927 births
2018 deaths
American chemists
National Medal of Science laureates
University of Chicago alumni
Harvard University alumni
Cornell University faculty
Members of the United States National Academy of Sciences
Founding members of the World Cultural Council
Fellows of the American Academy of Arts and Sciences
Members of the American Philosophical Society
Chemical ecologists
Scientists from New York City
Benjamin Franklin Medal (Franklin Institute) laureates | Jerrold Meinwald | [
"Chemistry"
] | 789 | [
"Chemical ecologists",
"Chemical ecology"
] |
44,017,873 | https://en.wikipedia.org/wiki/Negative%20energy | Negative energy is a concept used in physics to explain the nature of certain fields, including the gravitational field and various quantum field effects.
Gravitational energy
Gravitational energy, or gravitational potential energy, is the potential energy a massive object has because it is within a gravitational field. In classical mechanics, two or more masses always have a gravitational potential. Conservation of energy requires that this gravitational field energy is always negative, so that it is zero when the objects are infinitely far apart. As two objects move apart and the distance between them approaches infinity, the gravitational force between them approaches zero from the positive side of the real number line and the gravitational potential approaches zero from the negative side. Conversely, as two massive objects move towards each other, the motion accelerates under gravity, causing an increase in the (positive) kinetic energy of the system; in order to conserve the total energy, the gravitational potential energy of the system must decrease by the same amount, becoming more negative.
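As a minimal illustration of this sign convention, the standard Newtonian potential energy of two point masses m1 and m2 at separation r is, in LaTeX notation:

U(r) = -\frac{G m_1 m_2}{r}, \qquad U(r) \to 0 \quad \text{as} \quad r \to \infty,

so U(r) is negative at every finite separation and becomes more negative as the masses approach each other, balancing the growth of the kinetic energy.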
A universe in which positive energy dominates will eventually collapse in a Big Crunch, while an "open" universe in which negative energy dominates will either expand indefinitely or eventually disintegrate in a Big Rip. In the zero-energy universe model ("flat" or "Euclidean"), the total amount of energy in the universe is exactly zero: its amount of positive energy in the form of matter is exactly cancelled out by its negative energy in the form of gravity. It is unclear which, if any, of these models accurately describes the real universe.
Black hole ergosphere
For a classically rotating black hole, the rotation creates an ergosphere outside the event horizon, in which spacetime itself begins to rotate, in a phenomenon known as frame-dragging. Since the ergosphere is outside the event horizon, particles can escape from it. Within the ergosphere, a particle's energy may become negative (via the relativistic rotation of its Killing vector). The negative-energy particle then crosses the event horizon into the black hole, with the law of conservation of energy requiring that an equal amount of positive energy should escape.
In the Penrose process, a body divides in two, with one half gaining negative energy and falling in, while the other half gains an equal amount of positive energy and escapes. This is proposed as the mechanism by which the intense radiation emitted by quasars is generated.
Quantum field effects
Negative energies and negative energy density are consistent with quantum field theory.
Virtual particles
In quantum theory, the uncertainty principle allows the vacuum of space to be filled with virtual particle-antiparticle pairs which appear spontaneously and exist for only a short time before, typically, annihilating themselves again. Some of these virtual particles can have negative energy. This behaviour plays a role in several important phenomena, as described below.
Casimir effect
In the Casimir effect, two flat plates placed very close together restrict the wavelengths of quanta which can exist between them. This in turn restricts the types and hence number and density of virtual particle pairs which can form in the intervening vacuum and can result in a negative energy density. Since this restriction does not exist or is much less significant on the opposite sides of the plates, the forces outside the plates are greater than those between the plates. This causes the plates to appear to pull on each other, which has been measured. More accurately, the vacuum energy caused by the virtual particle pairs is pushing the plates together, and the vacuum energy between the plates is too small to negate this effect since fewer virtual particles can exist per unit volume between the plates than can exist outside them.
Squeezed light
It is possible to arrange multiple beams of laser light such that destructive quantum interference suppresses the vacuum fluctuations. Such a squeezed vacuum state involves negative energy. The repetitive waveform of light leads to alternating regions of positive and negative energy.
Dirac sea
According to the theory of the Dirac sea, developed by Paul Dirac in 1930, the vacuum of space is full of negative energy. This theory was developed to explain the anomaly of negative-energy quantum states predicted by the Dirac equation. A year later, after work by Weyl, the negative energy concept was abandoned and replaced by a theory of antimatter. The following year, 1932, saw the discovery of the positron by Carl Anderson.
Quantum gravity phenomena
The intense gravitational fields around black holes create phenomena which are attributed to both gravitational and quantum effects. In these situations, a particle's Killing vector may be rotated such that its energy becomes negative.
Hawking radiation
Virtual particles can exist for a short period. When a pair of such particles appears next to a black hole's event horizon, one of them may get drawn in. This rotates its Killing vector so that its energy becomes negative and the pair have no net energy. This allows them to become real and the positive particle escapes as Hawking radiation, while the negative-energy particle reduces the black hole's net energy. Thus, a black hole may slowly evaporate.
Speculative suggestions
Wormholes
Negative energy appears in the speculative theory of wormholes, where it is needed to keep the wormhole open. A wormhole directly connects two locations which may be separated arbitrarily far apart in both space and time, and in principle allows near-instantaneous travel between them. However physicists such as Roger Penrose regard such ideas as unrealistic, more fiction than speculation.
Warp drive
A theoretical principle for a faster-than-light (FTL) warp drive for spaceships has been suggested, using negative energy. The Alcubierre drive is based on a solution to the Einstein field equations of general relativity in which a "bubble" of spacetime is constructed using a hypothetical negative energy. The bubble is then moved by expanding space behind it and shrinking space in front of it. The bubble may travel at arbitrary speeds and is not constrained by the speed of light. This does not contradict general relativity, as the bubble's contents do not actually move through their local spacetime.
Negative-energy particles
Speculative theoretical studies have suggested that particles with negative energies are consistent with relativistic quantum theory, with some noting interrelationships with negative mass and/or time reversal.
See also
Antimatter
Dark energy
Dark matter
Negative mass
Negative pressure
References
Inline notes
Bibliography
Lawrence H. Ford and Thomas A. Roman; "Negative energy, wormholes and warp drive", Scientific American January 2000, 282, Pages 46–53.
Roger Penrose; The Road to Reality, ppbk, Vintage, 2005. Chapter 30: Gravity's Role in Quantum State Reduction.
Energy (physics) | Negative energy | [
"Physics",
"Mathematics"
] | 1,345 | [
"Energy (physics)",
"Wikipedia categories named after physical quantities",
"Quantity",
"Physical quantities"
] |
32,752,045 | https://en.wikipedia.org/wiki/Linear%20optics | Linear optics is a sub-field of optics, consisting of linear systems, and is the opposite of nonlinear optics. Linear optics includes most applications of lenses, mirrors, waveplates, diffraction gratings, and many other common optical components and systems.
If an optical system is linear, it has the following properties (among others):
If monochromatic light enters an unchanging linear-optical system, the output will be at the same frequency. For example, if red light enters a lens, it will still be red when it exits the lens.
The superposition principle is valid for linear-optical systems. For example, if a mirror transforms light input A into output B, and input C into output D, then an input consisting of A and C simultaneously gives an output of B and D simultaneously.
Relatedly, if the input light is made more intense, then the output light is made more intense but otherwise unchanged.
These properties are violated in nonlinear optics, which frequently involves high-power pulsed lasers. Also, many material interactions including absorption and fluorescence are not part of linear optics.
Linear versus non-linear transformations (examples)
As an example, and using the Dirac bracket notations (see bra-ket notation), the transformation

|n⟩ → e^(inφ) |n⟩

is linear, while the transformation

|n⟩ → e^(in²φ) |n⟩

is non-linear. In the above examples, n is an integer representing the number of photons. The phase applied by the transformation in the first example is linear in the number of photons, while in the second example it is not. This specific nonlinear transformation plays an important role in optical quantum computing.
Linear versus nonlinear optical devices (examples)
Phase shifters and beam splitters are examples of devices commonly used in linear optics.
In contrast, frequency-mixing processes, the optical Kerr effect, cross-phase modulation, and Raman amplification are a few examples of nonlinear effects in optics.
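To illustrate the superposition property of a linear device, the following minimal Python/NumPy sketch models a lossless 50:50 beam splitter acting on two mode amplitudes (the real Hadamard-type matrix below is one common convention among several):

import numpy as np

BS = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2)  # unitary 50:50 beam-splitter matrix

A = np.array([1.0, 0.0])  # input A: light in mode 1 only
C = np.array([0.0, 1.0])  # input C: light in mode 2 only
B = BS @ A                # output for input A
D = BS @ C                # output for input C

# superposition: the simultaneous input A + C yields the output B + D
assert np.allclose(BS @ (A + C), B + D)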
Connections to quantum computing
One currently active field of research is the use of linear optics versus nonlinear optics in quantum computing. For example, one model of linear optical quantum computing, the KLM model, is universal for quantum computing, while another model, based on boson sampling, is believed not to be universal for quantum computing yet still seems able to solve some problems exponentially faster than a classical computer.
The specific nonlinear transformation presented above (called a "gate" when using computer science terminology) plays an important role in optical quantum computing: on the one hand, it is useful for deriving a universal set of gates; on the other hand, with (only) linear-optical devices and post-selection of specific outcomes plus a feed-forward process, it can be applied with high success probability and used for obtaining universal linear-optical quantum computing, as done in the KLM model.
See also
Optics
Quantum optics
Nonlinear optics
Linear optical quantum computing (LOQC)
KLM model for LOQC
Optical phase space
Optical physics
Nonclassical light
Optics | Linear optics | [
"Physics",
"Chemistry"
] | 607 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
32,752,933 | https://en.wikipedia.org/wiki/Du%20No%C3%BCy%E2%80%93Padday%20method | The du Noüy–Padday method is a minimized version of the du Noüy ring method replacing the large platinum ring with a thin rod that is used to measure equilibrium surface tension or dynamic surface tension at an air–liquid interface. In this method, the rod is oriented perpendicular to the interface, and the force exerted on it is measured. Based on the work of Padday, this method finds wide use in the preparation and monitoring of Langmuir–Blodgett films, ink & coating development, pharmaceutical screening, and academic research.
Detailed description
The du Noüy–Padday rod consists of a rod, usually on the order of a few millimeters square, making a small ring. The rod is often made from a composite metal material that may be roughened to ensure complete wetting at the interface. The rod is cleaned with water, alcohol, and a flame, or with strong acid, to ensure complete removal of surfactants. The rod is attached to a scale or balance via a thin metal hook. The Padday method uses the maximum pull force method, i.e. the maximum force due to the surface tension is recorded as the probe is first immersed ca. 1 mm into the solution and then slowly withdrawn from the interface. The main forces acting on a probe are the buoyancy (due to the volume of liquid displaced by the probe) and the weight of the meniscus adhering to the probe. This is an old, reliable, and well-documented technique.
An important advantage of the maximum pull force technique is that the receding contact angle on the probe is effectively zero. The maximum pull force is obtained when the buoyancy force reaches its minimum.
The surface tension measurement used in the Padday devices based on the du Noüy ring/maximum pull force method is explained further here:
The force acting on the probe can be divided into two components:
i) Buoyancy stemming from the volume displaced by the probe, and
ii) the mass of the meniscus of the liquid adhering to the probe.
The latter is in equilibrium with the surface tension force, i.e.

W = γ P cos θ,

where
P is the perimeter of the probe,
γ is the surface tension, and W the weight of the meniscus under the probe. In the situation considered here, the volume displaced by the probe is included in the meniscus.
θ is the contact angle between the probe and the solution that is measured, and is negligible (cos θ ≈ 1) for the majority of solutions with Kibron's probes.
Thus, the force measured by the balance is given by

F = Fb + γ P cos θ,

where
F is the force acting on the probe and
Fb is the force due to buoyancy.
At the point of detachment, the volume of the probe immersed in the solution vanishes, and thus also the buoyancy term. This is observed as a maximum in the force curve, which relates to the surface tension through

γ = Fmax / (P cos θ).
The above derivation holds for ideal conditions. Non-idealities, e.g. from defect probe shape, are partly compensated in the calibration routine using a solution with known surface tension.
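A minimal Python sketch of this final relation, with a hypothetical rod perimeter and force reading chosen purely for illustration:

import math

def surface_tension(f_max_newton, perimeter_m, cos_theta=1.0):
    # gamma = F_max / (P cos theta); cos_theta ≈ 1 for complete wetting
    return f_max_newton / (perimeter_m * cos_theta)

# hypothetical cylindrical rod of 0.51 mm diameter:
P = math.pi * 0.51e-3               # perimeter in m
print(surface_tension(1.16e-4, P))  # ≈ 0.072 N/m, roughly water at 25 °C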
Advantages and practice
Unlike the du Noüy ring method, no correction factors are required when calculating surface tensions. Due to its small size, the rod can be used in high-throughput instruments that use a 96-well plate to determine the surface tension. The small diameter of the rod allows its use in small volumes of liquid, with 50 µl samples being used in some devices.
In addition, the rod also allows use of the Wilhelmy method, because the rod is not completely removed during measurements. In this way, the dynamic surface tension can be used for accurate determination of surface kinetics on a wide range of timescales.
The Padday technique also offers low operator variance and does not need an anti-vibration table. This advantage over other devices allows the Padday devices to be used in the field easily. The rod, when made of composite material, is also less likely to bend and is cheaper than the more costly platinum ring used in the du Noüy method.
In a typical experiment, the rod is lowered to the surface being analyzed using a manual or automatic device until a meniscus is formed, and then raised so that the bottom edge of the rod lies on the plane of the undisturbed surface. One disadvantage of this technique is that the rod cannot be buried into the surface to measure the interfacial tension between two liquids.
Practical uses
The practical use of an instrument with a single probe is that it allows for the development of a high-throughput device. A high-throughput surface tension device can be used for formulation in real time, for understanding the penetration of drugs through the blood–brain barrier (BBB), understanding the solubility of drugs, developing a screen to test a drug's toxicity, determining the physicochemical properties of oxidized phospholipids, and developing new surfactants/polymers.
Penetration of drugs in the BBB
The physicochemical profiling of poorly soluble drug candidates, performed using a HTS surface tension device, allowed prediction of penetration through the blood–brain barrier.
Development of a screen to test a drugs toxicity
Drug–lipid complexes were characterized with a high-throughput surface tension device to predict phospholipidosis, in particular for cationic drugs.
Understanding the solubility of drugs
Drug solubility has previously been determined by the shaker method. A 96-well high-throughput device has allowed the development of a new method to test drugs.
Oxidized Phospholipids
The physicochemical properties of oxidized lipids were characterized using a high-throughput device. Since these oxidized lipids are expensive and available only in small quantities, a surface tension device requiring only a small volume of sample is preferable.
Development of new surfactant/polymers
The surface tension profiles of branched copolymer solutions were measured using a HTS surface tensiometer as a function of polymer concentration to produce pH-triggered aggregation of emulsion droplets.
See also
Tensiometer (surface tension)
du Noüy ring method
Sessile drop technique
Wilhelmy plate method
References
Surface science
Fluid mechanics | Du Noüy–Padday method | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,240 | [
"Civil engineering",
"Fluid mechanics",
"Condensed matter physics",
"Surface science"
] |
32,760,552 | https://en.wikipedia.org/wiki/Megadrought | A megadrought is an exceptionally severe drought, lasting for many years and covering a wide area.
Definition
There is no exact definition of a megadrought. The term was first used by Connie Woodhouse and Jonathan Overpeck in their 1998 paper, 2000 Years of Drought Variability in the Central United States. In this, it referred to two periods of severe drought in the US – one at the end of the 13th century and the other in the middle of the 16th century. The term was then popularised as a similar severe drought affected the Southwestern US from the year 2000.
Benjamin Cook suggested that the definition be a drought which is exceptionally severe compared to the weather during the previous 2,000 years. This was still quite imprecise, and so research has suggested quantitative measures based on a standardized precipitation index.
Causes
Past megadroughts in North America have been associated with persistent multiyear La Niña conditions (cooler than normal water temperatures in the tropical eastern Pacific Ocean).
Impact
Megadroughts have historically led to the mass migration of humans away from drought affected lands, resulting in a significant population decline from pre-drought levels. They are suspected of playing a primary role in the collapse of several pre-industrial civilizations, including the Ancestral Puebloans of the North American Southwest, the Khmer Empire of Cambodia, the Maya of Mesoamerica, the Tiwanaku of Bolivia, and the Yuan Dynasty of China.
The African Sahel region in particular has suffered multiple megadroughts throughout history, with the most recent lasting from approximately 1400 AD to 1750 AD. North America experienced at least four megadroughts during the Medieval Warm Period.
Historical evidence
There are several sources for establishing the past occurrence and frequency of megadroughts, including:
When megadroughts occur, lakes dry up and trees and other plants grow in the dry lake beds. When the drought ends, the lakes refill; when this happens, the trees are submerged and die. In some locations these trees have remained preserved and can be studied, giving accurate radiocarbon dates, and the tree rings of the same long-dead trees can be studied. Such trees have been found in Mono and Tenaya lakes in California, Lake Bosumtwi in Ghana, and various other lakes.
Dendrochronology, the dating and study of annual rings in trees. The tree-ring data indicate that the Western U.S. states have experienced droughts that lasted ten times longer than anything the modern U.S. has seen. Based upon data derived from annual tree rings, NOAA has recorded patterns of drought covering most of the U.S. for every year since 1700. Certain species of trees have given evidence over a longer period, in particular Montezuma cypress and bristlecone pine trees. The University of Arkansas has produced a 1238-year tree-ring-based chronology of weather conditions in central Mexico by examining core samples taken from living Montezuma cypress trees.
Sediment core samples taken at the volcanic caldera in Valles Caldera, New Mexico and other locations. The cores from Valles Caldera go back 550,000 years and show evidence of megadroughts that lasted as long as 1,000 years during the mid-Pleistocene Epoch during which summer rains were almost non-existent. Plant and pollen remains found in core samples from the bottom of lakes have been also studied and added to the record.
Fossil corals on Palmyra Atoll. Using the relationship between tropical Pacific sea surface temperatures and the oxygen isotope ratio in living corals to convert fossil coral records into sea surface temperatures. This has been used to establish the occurrence and frequency of La Niña conditions.
During a 200-year megadrought in the Sierra Nevada that lasted from the 9th to the 12th centuries, trees would grow on newly exposed shoreline at Fallen Leaf Lake; then, as the lake grew once again, the trees were preserved under cold water. However, a 2016–2017 expedition by the Undersea Voyager Project found evidence that the ancient trees did not grow there during an ancient drought, but rather slid into the lake during one of the many seismic events that have occurred in the Tahoe Basin since it was formed.
The 2000–present southwestern North American megadrought was the driest 22-year period in the region since at least 800. Both 2002 and 2021 were drier than any other years in nearly 300 years and were, respectively, the 11th and 12th driest years between 800 and 2021. The atmospheric rivers of 2024 brought some of the region's wettest conditions since 2004.
References
External links
Global Drought Information System Current worldwide drought conditions
US Drought Monitor Current U.S. drought conditions
Persistent drought in North America: a climate modeling and paleoclimate perspective Lamont–Doherty Earth Observatory of Columbia University
Droughts
Water supply
Weather hazards | Megadrought | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 965 | [
"Physical phenomena",
"Hydrology",
"Weather hazards",
"Weather",
"Environmental engineering",
"Water supply"
] |
57,389,807 | https://en.wikipedia.org/wiki/Correspondence%20%28algebraic%20geometry%29 | In algebraic geometry, a correspondence between algebraic varieties V and W is a subset R of V×W, that is closed in the Zariski topology. In set theory, a subset of a Cartesian product of two sets is called a binary relation or correspondence; thus, a correspondence here is a relation that is defined by algebraic equations. There are some important examples, even when V and W are algebraic curves: for example the Hecke operators of modular form theory may be considered as correspondences of modular curves.
However, the definition of a correspondence in algebraic geometry is not completely standard. For instance, Fulton, in his book on intersection theory, uses the definition above. In the literature, however, a correspondence from a variety X to a variety Y is often taken to be a subset Z of X×Y such that Z is finite and surjective over each component of X. Note the asymmetry in this latter definition: it speaks of a correspondence from X to Y rather than a correspondence between X and Y. The typical example of the latter kind of correspondence is the graph of a function f:X→Y. Correspondences also play an important role in the construction of motives (cf. presheaf with transfers).
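The relational viewpoint also explains how correspondences compose, which is what makes them usable in place of morphisms when building categories of motives. The LaTeX sketch below records the standard set-theoretic formulation; note that in intersection theory the composition is actually taken on cycle classes, and properness hypotheses are needed for the image to remain closed.

```latex
% Correspondences compose like binary relations. Given closed subsets
%   R \subseteq V \times W   and   S \subseteq W \times U,
% their composite lives in V \times U:
\[
  S \circ R \;=\;
  \operatorname{pr}_{V\times U}\!\left(
    \operatorname{pr}_{V\times W}^{-1}(R)\;\cap\;
    \operatorname{pr}_{W\times U}^{-1}(S)
  \right) \;\subseteq\; V \times U,
\]
% where all projections are taken from the triple product
% V \times W \times U. When R and S are graphs of morphisms,
% this recovers ordinary composition of functions.
```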
See also
Adequate equivalence relation
References
Algebraic geometry | Correspondence (algebraic geometry) | [
"Mathematics"
] | 259 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
47,147,824 | https://en.wikipedia.org/wiki/Cell%20mechanics | Cell mechanics is a sub-field of biophysics that focuses on the mechanical properties and behavior of living cells and how they relate to cell function. It encompasses aspects of cell biophysics, biomechanics, soft matter physics and rheology, mechanobiology and cell biology.
Eukaryotic
Eukaryotic cells are cells that contain membrane-bound organelles, a membrane-bound nucleus, and more than one linear chromosome. Being much more complex than prokaryotic cells (cells without a true nucleus), eukaryotes must protect their organelles from outside forces.
Plant
Plant cell mechanics combines principles of biomechanics and mechanobiology to investigate the growth and shaping of plant cells. Plant cells, like animal cells, respond to externally applied forces, for example by reorganizing their cytoskeletal network. The presence of a considerably rigid extracellular matrix, the cell wall, however, endows plant cells with a particular set of properties. Chiefly, the growth of plant cells is controlled by the mechanics and chemical composition of the cell wall. A major part of research in plant cell mechanics is directed toward measuring and modeling cell wall mechanics to understand how modifying its composition and mechanical properties affects cell function, growth and morphogenesis.
Animal
Because animal cells do not have cell walls to protect them as plant cells do, they require other specialized structures to sustain external mechanical forces. All animal cells are encased within a cell membrane made of a thin lipid bilayer that protects the cell from the outside environment. Using receptors composed of protein structures, the cell membrane admits selected molecules into the cell. Inside the cell membrane lies the cytoplasm, which contains the cytoskeleton. A network of filamentous proteins (microtubules, intermediate filaments, and actin filaments) makes up the cytoskeleton and helps maintain the cell's shape. Working together, the three types of polymers can organize themselves to counter applied external forces and resist deformation. However, there are differences among the three polymers.
The primary structural component of the cytoskeleton is the actin filament. The narrowest of the three polymer types, with a diameter of 7 nm, and the most flexible, actin filaments are typically found at the very edge of the cytoplasm in animal cells. Formed by linking polymers of a protein called actin, they help give cells shape and structure and can transport protein packages and organelles. Furthermore, actin filaments can be assembled and disassembled quickly, allowing them to take part in cell motility.
On the other hand, intermediate filaments are more permanent structures, with a diameter of 8 to 10 nm. Composed of numerous fibrous protein strands wound together, intermediate filaments mainly bear tension and retain the shape and structure of the cell by securing the nucleus and other organelles in their designated areas.
The largest cytoskeletal structure of the three is the microtubule, with a diameter of 25 nm. Unlike actin filaments, microtubules are stiff, hollow structures that radiate outward from the microtubule organizing center (MTOC). Composed of tubulin proteins, microtubules are dynamic structures that shrink or grow with the removal or addition of tubulin subunits. In terms of cell mechanics, microtubules' main purposes are to resist compressive cellular forces and to act as a transportation system for motor proteins.
It has also been shown that melanin can affect the mechanical properties of cells. Research by Sarna's team found that heavily pigmented melanoma cells have a Young's modulus of about 4.93 kPa, versus only 0.98 kPa for non-pigmented ones. In another experiment they found that the elasticity of melanoma cells matters for metastasis and growth: non-pigmented tumors were larger than pigmented ones and spread much more easily. They showed that melanoma tumors contain both pigmented and non-pigmented cells, so that the tumors can be both drug-resistant and metastatic.
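Young's modulus values of this kind are typically extracted from atomic force microscope (AFM) indentation measurements by fitting a contact model. The Python sketch below inverts the standard Hertz model for a spherical indenter; the numerical inputs are illustrative assumptions, not values from the Sarna study, and real analyses fit the full force-indentation curve rather than a single point.

```python
import math

def youngs_modulus_hertz(force_n: float, indentation_m: float,
                         tip_radius_m: float, poisson_ratio: float = 0.5) -> float:
    """Invert the Hertz model for a spherical indenter on a flat sample:
        F = (4/3) * E / (1 - nu**2) * sqrt(R) * delta**1.5
    Returns the Young's modulus E in pascals."""
    if indentation_m <= 0 or tip_radius_m <= 0:
        raise ValueError("indentation and tip radius must be positive")
    return (0.75 * force_n * (1.0 - poisson_ratio**2)
            / (math.sqrt(tip_radius_m) * indentation_m**1.5))

# Illustrative numbers only: a 1 nN force at 500 nm indentation with a
# 2.5 um spherical tip gives ~1 kPa, the stiffness scale typical of cells.
print(round(youngs_modulus_hertz(1e-9, 500e-9, 2.5e-6)))  # ~1006 Pa
```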
Measuring
Because cells are tiny, soft objects that must be measured differently than materials like metal, plastic, and glass, new techniques have been developed for the accurate measurement of cell mechanics. The variety of techniques can be divided into two categories: force application techniques and force sensing techniques.
In the case of walled cells, such as plant or fungal cells, a stiff, anisotropic, and curved cell wall encapsulates the cells, so special considerations and tailored approaches may be required compared to the methods used to measure the mechanics of animal cells.
Force application
Force application techniques use the cell's deformation in response to an applied force as a way to measure cell mechanical properties. There are several types of force application techniques, including:
Micropipette aspiration uses suction applied through a small-diameter glass pipette. Measuring the length of aspiration caused by the suction pressure can reveal several cell mechanical properties.
Cantilever manipulation operates through a magnetic, electrical, or mechanical interaction between a probe and the surface of the cell that gives off a signal that can be used to measure mechanical properties.
Optical techniques involve the use of trapped photons to manipulate cells. The photons change direction based on the cell's refractive index, which causes a change in momentum, leading to a force applied to the cell.
Magnetic techniques utilize ferromagnetic beads incorporated into the cell or attached to specific receptors on the cell. When a magnetic force is applied, the stretch of the membrane can be measured to calculate mechanical properties.
Substrate strain measures elasticity by stretching the cell. The elasticity of the cell provides information about motility and adhesion.
Compression applies pressure to the entire cell. By tracking changes in the cell's shape, compression measures mechanical responses to force.
Flow techniques use the Reynolds number, a dimensionless quantity in fluid mechanics, to determine whether the cell is subject to laminar, transitional, or turbulent flow (see the sketch after this list).
Acoustic force spectroscopy can be used to extract mechanical properties of single cells.
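For the flow technique, the regime classification itself is a one-line computation. The sketch below is a minimal illustration; the thresholds quoted are the conventional ones for pipe flow and shift with chamber geometry, and the example numbers are assumptions rather than values from any particular experiment.

```python
def reynolds_number(density_kg_m3: float, velocity_m_s: float,
                    length_m: float, viscosity_pa_s: float) -> float:
    """Re = rho * v * L / mu, with L a characteristic length of the flow."""
    return density_kg_m3 * velocity_m_s * length_m / viscosity_pa_s

def flow_regime(re: float) -> str:
    """Conventional pipe-flow thresholds; boundaries shift with geometry."""
    if re < 2300.0:
        return "laminar"
    if re < 4000.0:
        return "transitional"
    return "turbulent"

# Water-like medium moving at 1 mm/s through a 500 um chamber:
re = reynolds_number(1000.0, 1e-3, 500e-6, 1e-3)
print(re, flow_regime(re))  # 0.5 laminar -- cells see smooth, laminar flow
```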
Force sensing
Wrinkling membranes require placing the cell on a flexible silicone envelope. As the cell contracts, the magnitude of its forces can be estimated from the length and number of wrinkles.
Traction force microscopy detects deformations by comparing images of the movement of fluorescent beads adhered to the cell (a minimal displacement-estimation sketch follows this list). A user-friendly software package is available for download free of charge; it includes both fast Fourier transform traction cytometry and parameter-free Bayesian Fourier transform traction cytometry.
Cantilever sensing can detect surface stresses by attaching micromechanical beams to one end of the cell.
Bioreactors allow the measurement of multicellular forces in a three-dimensional system, while external forces are applied at the same time. This enables better results and more accurate data from complex experiments.
When adherent cells are excited by acoustic waves, they start to generate acoustic microstreaming flow. The velocity magnitude of this flow near the cell membrane is directly proportional to the stiffness (i.e., modulus of elasticity) of the cell.
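In traction force microscopy, the first computational step is recovering the bead displacement field by comparing the stressed and relaxed images. The sketch below shows the core idea on the simplest possible case, a single rigid shift recovered by FFT phase correlation; production pipelines apply this patch-by-patch to build a full displacement field and then invert it for tractions (for example, with the Fourier-transform traction cytometry mentioned above). The names and test data here are illustrative.

```python
import numpy as np

def phase_correlation_shift(img_a: np.ndarray, img_b: np.ndarray) -> tuple:
    """Estimate the integer-pixel shift taking img_b onto img_a, from the
    peak of the normalized cross-power spectrum (FFT phase correlation)."""
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # keep only phase information
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold peak indices into signed shifts (FFT output is periodic).
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic check: roll a random "bead image" by (3, -5) and recover the shift.
rng = np.random.default_rng(0)
reference = rng.random((128, 128))
deformed = np.roll(reference, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(deformed, reference))  # (3, -5)
```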
Research
Researchers who study cell mechanics are interested in the mechanics and dynamics of the assemblies and structures that make up the cell, including membranes, the cytoskeleton, organelles, and cytoplasm, and in how they interact to give rise to the emergent properties of the cell as a whole.
A particular focus of many cell-mechanical studies has been the cytoskeleton, which (in animal cells) can be thought of as consisting of:
actomyosin assemblies (F-actin, myosin motors, and associated binding, nucleating, capping, stabilizing, and crosslinking proteins),
microtubules and their associated motor proteins (kinesins and dyneins),
intermediate filaments,
other assemblies such as spectrins and septins.
The active, non-equilibrium, and non-linear rheological properties of cellular assemblies have been a key focus of recent research. Another point of interest is how cell-cycle-related changes in cytoskeletal activity affect global cell properties, such as the intracellular pressure increase during mitotic cell rounding.
References
See also
Elasticity of cell membranes
Biophysics | Cell mechanics | [
"Physics",
"Biology"
] | 1,758 | [
"Applied and interdisciplinary physics",
"Biophysics"
] |
47,150,296 | https://en.wikipedia.org/wiki/Aquatic%20Conservation%3A%20Marine%20and%20Freshwater%20Ecosystems | Aquatic Conservation: Marine and Freshwater Ecosystems is a bimonthly peer-reviewed scientific journal published by Wiley-Blackwell. The journal is dedicated to publishing original papers that relate specifically to freshwater, brackish or marine habitats and encouraging work that spans these ecosystems. According to the Journal Citation Reports, the journal has a 2014 impact factor of 2.136.
References
External links
Ecology journals
Conservation biology
English-language journals
Academic journals established in 1991
Wiley-Blackwell academic journals
Bimonthly journals
1991 establishments in the United Kingdom | Aquatic Conservation: Marine and Freshwater Ecosystems | [
"Biology",
"Environmental_science"
] | 103 | [
"Environmental science journals",
"Conservation biology",
"Ecology journals"
] |
47,153,428 | https://en.wikipedia.org/wiki/Nashwa%20Eassa | Nashwa Abo Alhassan Eassa is a nano-particle physicist from Sudan. She is an assistant professor of physics and Dean of the Deanship of Scientific Research at Al-Neelain University in Khartoum.
Education
Eassa received her BSc in physics from the University of Khartoum in 2004. Sudanese students who wish to pursue further education must study abroad; for women this is difficult for both financial reasons and cultural ones restricting travel. Despite this, she earned her Master of Science in nanotechnology and materials physics from Sweden's Linköping University in 2007. During her time at Linköping, tuition fees were not an issue for international students, but she still needed to cover her living expenses; to do so she worked at McDonald's and distributed advertising.
Through an OWSD PhD fellowship she attended Nelson Mandela Metropolitan University from August 2008 to September 2012, graduating with her PhD. During the first year of her stay she gave birth to her first child, which made her studies more difficult.
In August 2008, the OWSD PhD fellowship motivated her to continue her studies in South Africa, where she balanced the demands of her studies, motherhood, and marriage. Despite a demanding supervisor, she completed her PhD and gained research experience that was unavailable in Sudan due to a lack of facilities. Her achievements earned her an honorary doctorate from the Faculty of Science and Engineering at Linköping University, in recognition of her support for women who want to pursue research careers in Sudan. She emphasizes her desire to help women not only in Sudan but also in Egypt, Yemen, Ghana, Benin, and Syria. Reflecting on her experience, she states, "Support from other women in the academic world may be decisive to them continuing their career."
Career
She has been a lecturer and assistant professor of physics at Al-Neelain University since 2007 and earned her PhD from Nelson Mandela Metropolitan University (NMMU) in 2012. Since 2013, Eassa has been pursuing a post-doctoral fellowship in nanophotonics at NMMU. She founded the non-governmental organization Sudanese Women in Sciences in 2013 and is a member of the Organization for Women in Science for the Developing World and of the South African Institute of Physics. From 2016 to 2020, Eassa also served as Vice President for the Arab region of the Organization for Women in Science for the Developing World.
In 2015, Eassa won the Elsevier Foundation Award for Early Career Women Scientists in the Developing World. The award recognized her research on lessening film accumulation on the surface of high-speed semiconductors.
Eassa is involved in the development of nanotube structures and titanium oxide nanoparticles. She is also involved in projects to develop methods to split water molecules for hydrogen collection and to sanitize water with solar radiation.
She has been a candidate as Arab Countries Vice-President for Organization for Women in Science for the Developing World.
As an assistant professor in Khartoum, she has earned prestigious awards such as the Elsevier Foundation Award and the Lindau prize, which brought her considerable international visibility and opportunities. She aspires to inspire the next generation of women scientists in Sudan by improving training opportunities and fostering support through initiatives such as an OWSD National Chapter.
Nashwa's journey as a Sudanese woman physicist is an inspiring one: she overcame cultural and academic challenges to achieve international recognition. In Sudan, societal norms restrict women's independence, but she pursued her passion for physics despite limited resources and a lack of stability.
Publications
Eassa's professional work appears in publications such as the African Journal of Engineering and the proceedings of the 2021 International Congress of Advanced Technology and Engineering. Aside from her publications on nanoparticles, she has also integrated her push for gender equality and for closing the gender gap in science and technology into articles such as her 2015 "Development of Sudanese Women in Physics".
Advocacy work
Eassa has contributed to multiple works detailing the roadblocks African scientists face. Eassa states that African governments' lack of attention to scientific research is the main cause of the low funding that researchers receive. This trickles down to individual researchers not developing the skills they need and working in poor research environments. Eassa has also conducted research on the demographics of female physicists in Sudan. The research, conducted at Al Neelain University in Khartoum, highlights a university where the enrollment of female undergraduate physics students is double that of male students. Eassa collected data about the female students' interest in continuing their post-secondary education and the effect of financial hardship on that decision. This research showed that an overwhelming majority of female physicists at the university held lecturer positions, owing to the lack of female physicists attaining higher degrees.
References
External links
Sudanese Women in Sciences
Year of birth missing (living people)
Living people
Nanotechnologists
Academic staff of Neelain University
Academic staff of Nelson Mandela University
Nelson Mandela University alumni
Particle physicists
Sudanese scientists
University of Khartoum alumni | Nashwa Eassa | [
"Physics",
"Materials_science"
] | 1,010 | [
"Nanotechnology",
"Particle physicists",
"Particle physics",
"Nanotechnologists"
] |