Dataset columns: id (int64, 39 to 79M); url (string, 31–227 chars); text (string, 6–334k chars); source (string, 1–150 chars); categories (list, 1–6 items); token_count (int64, 3–71.8k); subcategories (list, 0–30 items)
13,584,590
https://en.wikipedia.org/wiki/Electrokinetic%20phenomena
Electrokinetic phenomena are a family of several different effects that occur in heterogeneous fluids, or in porous bodies filled with fluid, or in a fast flow over a flat surface. The term heterogeneous here means a fluid containing particles. Particles can be solid, liquid or gas bubbles with sizes on the scale of a micrometer or nanometer. There is a common source of all these effects—the so-called interfacial 'double layer' of charges. Influence of an external force on the diffuse layer generates tangential motion of a fluid with respect to an adjacent charged surface. This driving force might be an electric field, a pressure gradient, a concentration gradient, or gravity. In addition, the moving phase might be either the continuous fluid or the dispersed phase. Family Various combinations of the driving force and moving phase determine the various electrokinetic effects. According to J. Lyklema, the complete family of electrokinetic phenomena includes: electrophoresis, as motion of charged particles under the influence of an electric field; electro-osmosis, as motion of liquid in a porous body under the influence of an electric field; diffusiophoresis, as motion of particles under the influence of a chemical potential gradient; capillary osmosis, as motion of liquid in a porous body under the influence of a chemical potential gradient; sedimentation potential, as the electric field generated by sedimenting colloid particles; streaming potential/current, as either the electric potential or the current generated by fluid moving through a porous body, or relative to a flat surface; colloid vibration current, as the electric current generated by particles moving in fluid under the influence of ultrasound; electric sonic amplitude, as the ultrasound generated by colloidal particles in an oscillating electric field. Further reading There are detailed descriptions of electrokinetic phenomena in many books on interface and colloid science. See also Isotachophoresis Onsager reciprocal relations Surface charge Cationization of cotton References Colloidal chemistry Condensed matter physics Soft matter Non-equilibrium thermodynamics Electrochemistry
Electrokinetic phenomena
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
411
[ "Colloidal chemistry", "Non-equilibrium thermodynamics", "Soft matter", "Phases of matter", "Materials science", "Colloids", "Surface science", "Electrochemistry", "Condensed matter physics", "Matter", "Dynamical systems" ]
13,584,830
https://en.wikipedia.org/wiki/Francisco%20Bol%C3%ADvar%20Zapata
Francisco Gonzalo Bolívar Zapata (born March 1948, in Mexico City) is a Mexican biochemist and professor. After receiving his PhD in biochemistry from the National Autonomous University of Mexico (UNAM), he joined the Research Center for Genetic Engineering and Biotechnology (now known as the Institute of Biotechnology) at the same university, undertaking studies in molecular biology and biotechnology and becoming one of the most important researchers working on the development of techniques for the use and characterization of the cell's genetic material. His studies have significantly contributed to the design, construction and characterization of molecular vehicles for the transfer and expression of DNA (deoxyribonucleic acid). In 1977 he worked on the production of human proteins such as insulin and somatostatin in bacteria using genetic engineering techniques. Francisco Bolívar Zapata has been a member of several expert committees of UNESCO and the WHO, and has published over a hundred articles and books. He is a member of UNAM's Board of Directors and The National College. He received the Prince of Asturias Award in 1991 and the TWAS Prize in 1997. In September 2012, he was appointed to the presidential transition team of Enrique Peña Nieto to be responsible for science, innovation, and technology. This was accompanied by a presidential pledge to invest 1% of GNP in these fields, following the recently passed law on science and technology. References Department of Cell Engineering and Biocatalysis, Institute of Biotechnology, UNAM Founding of the Center for Genetic Engineering and Biotechnology and its transformation to the Institute of Biotechnology, in Spanish Appointment to presidential transition team, in Spanish External links The Prince of Asturias Foundation, Prince of Asturias Award on Scientific and Technical Research 1991 Francisco Bolívar Biography in Spanish Mexican biochemists Members of El Colegio Nacional (Mexico) Scientists from Mexico City 1948 births Living people Members of the Mexican Academy of Sciences TWAS laureates 21st-century Mexican scientists 20th-century Mexican scientists
Francisco Bolívar Zapata
[ "Chemistry" ]
387
[ "Biochemistry stubs", "Biochemists", "Biochemist stubs" ]
13,585,914
https://en.wikipedia.org/wiki/Liquibase
Liquibase is an open-source database-independent library for tracking, managing and applying database schema changes. It was started in 2006 to allow easier tracking of database changes, especially in an agile software development environment. Overview All changes to the database are stored in text files (XML, YAML, JSON or SQL) and identified by a combination of an "id" and "author" tag as well as the name of the file itself. A list of all applied changes is stored in each database, which is consulted on all database updates to determine what new changes need to be applied. As a result, there is no database version number, but this approach allows Liquibase to work in environments with multiple developers and code branches. Liquibase automatically creates the DatabaseChangeLog and DatabaseChangeLogLock tables when you first execute a changelog file. Major functionality The following is a list of major features: Over 30 built-in database refactorings Extensibility to create custom changes Update database to current version Rollback last X changes to database Rollback database changes to particular date/time Rollback database to "tag" SQL for database updates and rollbacks can be saved for manual review Stand-alone IDE and Eclipse plug-in "Contexts" for including/excluding change sets to execute Database diff report Database diff changelog generation Ability to generate a changelog from an existing database Database change documentation generation DBMS check, user check, and SQL check preconditions Ability to split change log into multiple files for easier management Executable via command line, Apache Ant, Apache Maven, servlet container, or Spring Framework. Support for 10 database systems Commercial version Liquibase (formerly Datical) is both the largest contributor to the Liquibase project and the developer of Liquibase Enterprise – a commercial product which provides the core Liquibase functionality plus additional features. Change Forecasting: Forecast upcoming changes to be executed before they are run to determine how those changes will impact your data. Rules Engine to enforce corporate standards and policies. Supports database stored logic: functions, stored procedures, packages, table spaces, triggers, sequences, user defined types, synonyms, etc. Compare Databases enables you to compare two database schemas to identify change and easily move it to your change log. Change Set Wizard to easily define and capture database changes in a database neutral manner. Deployment Plan Wizard for modeling and managing your logical deployment workflow. Plug-ins to Jenkins, Bamboo, UrbanCode, CA Release Automation (Nolio), Serena Release Automation, BMC Bladelogic, Puppet, Chef, as well as all popular source control systems like SVN, Git, TFS, CVS, etc. Liquibase products, including Liquibase Enterprise (formerly known as Datical DB), are used by DBAs, release managers, DevOps teams, application owners, architects, and developers involved in the application release process. They manage database schema changes together with application code in a programmatic fashion that eliminates errors and delays and enables rapid Agile releases. Liquibase commercial products build upon the Liquibase data model approach for managing data-structure-specific content across application versions as they advance from development to test to production environments. Datical previews the impact of schema changes in any environment before deployment, thus mitigating risk and resulting in smoother and faster application changes.
Liquibase developer, Nathan Voxland, is an executive at Liquibase (formerly Datical). Sample Liquibase changelog file:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog/1.3"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog/1.3
                        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-1.3.xsd">
  <preConditions>
    <dbms type="oracle"/>
  </preConditions>
  <changeSet id="1" author="author1">
    <createTable tableName="persons">
      <column name="id" type="int" autoIncrement="true">
        <constraints primaryKey="true" nullable="false"/>
      </column>
      <column name="name" type="varchar(50)"/>
    </createTable>
  </changeSet>
  <changeSet id="2" author="author2" context="test">
    <insert tableName="persons">
      <column name="id" value="1"/>
      <column name="name" value="Test1"/>
    </insert>
    <insert tableName="persons">
      <column name="id" value="2"/>
      <column name="name" value="Test2"/>
    </insert>
  </changeSet>
</databaseChangeLog>

Related tools alembic DBmaestro Flyway References External links Database administration tools Java platform Agile software development
Liquibase
[ "Technology" ]
1,120
[ "Computing platforms", "Java platform" ]
13,586,375
https://en.wikipedia.org/wiki/Windows%20Libraries%20for%20OS/2
Windows Libraries for OS/2 Development Kit (WLO) is a collection of dynamic-link libraries for OS/2 that allow Win16 applications to run on OS/2. See also Microsoft Windows Cardfile References External links WLO 1.0 download (archived) Compatibility layers OS/2
Windows Libraries for OS/2
[ "Technology" ]
62
[ "Computing platforms", "OS/2" ]
9,553,738
https://en.wikipedia.org/wiki/Ensemble%20Kalman%20filter
The ensemble Kalman filter (EnKF) is a recursive filter suitable for problems with a large number of variables, such as discretizations of partial differential equations in geophysical models. The EnKF originated as a version of the Kalman filter for large problems (essentially, the covariance matrix is replaced by the sample covariance), and it is now an important data assimilation component of ensemble forecasting. EnKF is related to the particle filter (in this context, a particle is the same thing as an ensemble member) but the EnKF makes the assumption that all probability distributions involved are Gaussian; when it is applicable, it is much more efficient than the particle filter. Introduction The ensemble Kalman filter (EnKF) is a Monte Carlo implementation of the Bayesian update problem: given a probability density function (PDF) of the state of the modeled system (the prior, often called the forecast in geosciences) and the data likelihood, Bayes' theorem is used to obtain the PDF after the data likelihood has been taken into account (the posterior, often called the analysis). This is called a Bayesian update. The Bayesian update is combined with advancing the model in time, incorporating new data from time to time. The original Kalman filter, introduced in 1960, assumes that all PDFs are Gaussian (the Gaussian assumption) and provides algebraic formulas for the change of the mean and the covariance matrix by the Bayesian update, as well as a formula for advancing the mean and covariance in time provided the system is linear. However, maintaining the covariance matrix is not feasible computationally for high-dimensional systems. For this reason, EnKFs were developed. EnKFs represent the distribution of the system state using a collection of state vectors, called an ensemble, and replace the covariance matrix by the sample covariance computed from the ensemble. The ensemble is operated with as if it were a random sample, but the ensemble members are not really independent, as each EnKF step ties them together. One advantage of EnKFs is that advancing the PDF in time is achieved by simply advancing each member of the ensemble. Derivation Kalman filter Let denote the -dimensional state vector of a model, and assume that it has Gaussian probability distribution with mean and covariance , i.e., its PDF is Here and below, means proportional; a PDF is always scaled so that its integral over the whole space is one. This , called the prior, was evolved in time by running the model and now is to be updated to account for new data. It is natural to assume that the error distribution of the data is known; data have to come with an error estimate, otherwise they are meaningless. Here, the data is assumed to have Gaussian PDF with covariance and mean , where is the so-called observation matrix. The covariance matrix describes the estimate of the error of the data; if the random errors in the entries of the data vector are independent, is diagonal and its diagonal entries are the squares of the standard deviation (“error size”) of the error of the corresponding entries of the data vector . The value is what the value of the data would be for the state in the absence of data errors.
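For concreteness, the two Gaussian densities just described can be written out explicitly in one common notation. The symbols used below (state x with prior mean μ and covariance Q, data vector d, observation matrix H, data error covariance R) are chosen here for this sketch rather than taken from the article:

p(x) \propto \exp\!\left( -\tfrac{1}{2}\,(x-\mu)^{\mathsf T} Q^{-1} (x-\mu) \right), \qquad
p(d \mid x) \propto \exp\!\left( -\tfrac{1}{2}\,(d-Hx)^{\mathsf T} R^{-1} (d-Hx) \right).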
Then the probability density of the data conditional on the system state , called the data likelihood, is The PDF of the state and the data likelihood are combined to give the new probability density of the system state conditional on the value of the data (the posterior) by Bayes' theorem, The data is fixed once it is received, so denote the posterior state by instead of and the posterior PDF by . It can be shown by algebraic manipulations that the posterior PDF is also Gaussian, with the posterior mean and covariance given by the Kalman update formulas where is the so-called Kalman gain matrix. Ensemble Kalman Filter The EnKF is a Monte Carlo approximation of the Kalman filter, which avoids evolving the covariance matrix of the PDF of the state vector . Instead, the PDF is represented by an ensemble, a matrix whose columns are the ensemble members, and it is called the prior ensemble. Ideally, ensemble members would form a sample from the prior distribution. However, the ensemble members are not in general independent except in the initial ensemble, since every EnKF step ties them together. They are deemed to be approximately independent, and all calculations proceed as if they actually were independent. Replicate the data into an matrix so that each column consists of the data vector plus a random vector from the -dimensional normal distribution . If, in addition, the columns of are a sample from the prior probability distribution, then the columns of form a sample from the posterior probability distribution. To see this in the scalar case with : Let , and Then . The first sum is the posterior mean, and the second sum, in view of the independence, has a variance , which is the posterior variance. The EnKF is now obtained simply by replacing the state covariance in the Kalman gain matrix by the sample covariance computed from the ensemble members (called the ensemble covariance), that is: Implementation Basic formulation Here we follow. Suppose the ensemble matrix and the data matrix are as above. The ensemble mean and the covariance are where and denotes the matrix of all ones of the indicated size. The posterior ensemble is then given by where the perturbed data matrix is as above. Note that since is a covariance matrix, it is always positive semidefinite and usually positive definite, so the inverse above exists and the formula can be implemented by the Cholesky decomposition. In, is replaced by the sample covariance where and the inverse is replaced by a pseudoinverse, computed using the singular-value decomposition (SVD). Since these formulas are matrix operations with dominant Level 3 operations, they are suitable for efficient implementation using software packages such as LAPACK (on serial and shared memory computers) and ScaLAPACK (on distributed memory computers). Instead of computing the inverse of a matrix and multiplying by it, it is much better (several times cheaper and also more accurate) to compute the Cholesky decomposition of the matrix and treat the multiplication by the inverse as solution of a linear system with many simultaneous right-hand sides. Observation matrix-free implementation Since we have replaced the covariance matrix with the ensemble covariance, this leads to a simpler formula where ensemble observations are directly used without explicitly specifying the matrix . More specifically, define a function of the form The function is called the observation function or, in the inverse problems context, the forward operator.
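In the same assumed notation as above, the Kalman update and its ensemble version referred to in this passage take the following standard form (a sketch of the usual formulas, not a quotation from the source). For the exact filter,

K = Q H^{\mathsf T} \left( H Q H^{\mathsf T} + R \right)^{-1}, \qquad
\mu^{a} = \mu + K\,(d - H\mu), \qquad
Q^{a} = (I - KH)\,Q,

and, writing the ensemble as X = [x_1, \dots, x_N] with ensemble mean \bar{x}, anomalies A = X - \bar{x}\,\mathbf{1}^{\mathsf T} and sample covariance C = \tfrac{1}{N-1} A A^{\mathsf T}, the perturbed-observations EnKF update is

X^{a} = X + C H^{\mathsf T} \left( H C H^{\mathsf T} + R \right)^{-1} (D - H X),

where the columns of the data matrix D are the data vector d plus independent draws from N(0, R).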
The value of is what the value of the data would be for the state assuming the measurement is exact. Then the posterior ensemble can be rewritten as where and with Consequently, the ensemble update can be computed by evaluating the observation function on each ensemble member once and the matrix does not need to be known explicitly. This formula also holds for an observation function with a fixed offset , which also does not need to be known explicitly. The above formula has been commonly used for a nonlinear observation function , such as the position of a hurricane vortex. In that case, the observation function is essentially approximated by a linear function from its values at ensemble members. Implementation for a large number of data points For a large number of data points, the multiplication by becomes a bottleneck. The following alternative formula is advantageous when the number of data points is large (such as when assimilating gridded or pixel data) and the data error covariance matrix is diagonal (which is the case when the data errors are uncorrelated), or cheap to decompose (such as banded due to limited covariance distance). Using the Sherman–Morrison–Woodbury formula with gives which requires only the solution of systems with the matrix (assumed to be cheap) and of a system of size with right-hand sides. See for operation counts. Further extensions The EnKF version described here involves randomization of data. For filters without randomization of data, see. Since the ensemble covariance is rank deficient (there are many more state variables, typically millions, than the ensemble members, typically less than a hundred), it has large terms for pairs of points that are spatially distant. Since in reality the values of physical fields at distant locations are not that much correlated, the covariance matrix is tapered off artificially based on the distance, which gives rise to localized EnKF algorithms. These methods modify the covariance matrix used in the computations and, consequently, the posterior ensemble is no longer made only of linear combinations of the prior ensemble. For nonlinear problems, EnKF can create a posterior ensemble with non-physical states. This can be alleviated by regularization, such as penalization of states with large spatial gradients. For problems with coherent features, such as hurricanes, thunderstorms, firelines, squall lines, and rain fronts, there is a need to adjust the numerical model state by deforming the state in space (its grid) as well as by correcting the state amplitudes additively. In 2007, Ravela et al. introduced the joint position-amplitude adjustment model using ensembles, and systematically derived a sequential approximation which can be applied to both EnKF and other formulations. Their method does not make the assumption that amplitudes and position errors are independent or jointly Gaussian, as others do. The morphing EnKF employs intermediate states, obtained by techniques borrowed from image registration and morphing, instead of linear combinations of states. Formally, EnKFs rely on the Gaussian assumption. In practice they can also be used for nonlinear problems, where the Gaussian assumption may not be satisfied.
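As an illustration of the basic formulation above, a minimal stochastic (perturbed-observations) EnKF analysis step can be sketched in NumPy as follows. This is not the article's reference implementation; the function and variable names are chosen here, and localization, inflation and a general observation function are omitted.

import numpy as np

def enkf_analysis(X, d, H, R, rng=None):
    """One stochastic EnKF analysis step (perturbed observations).

    X : (n, N) prior ensemble; columns are ensemble members
    d : (m,)   data vector
    H : (m, n) observation matrix
    R : (m, m) data error covariance
    """
    rng = np.random.default_rng() if rng is None else rng
    N = X.shape[1]
    m = d.shape[0]

    # Ensemble mean and anomalies (deviations from the mean)
    x_mean = X.mean(axis=1, keepdims=True)
    A = X - x_mean

    # Sample (ensemble) covariance used in place of the exact state covariance
    C = A @ A.T / (N - 1)

    # Perturbed data: each column is d plus a draw from N(0, R)
    D = d[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T

    # X^a = X + C H^T (H C H^T + R)^{-1} (D - H X),
    # implemented as a linear solve rather than an explicit matrix inverse
    S = H @ C @ H.T + R
    return X + C @ H.T @ np.linalg.solve(S, D - H @ X)

# Toy usage: 3-variable state, 2 observed components, 50 ensemble members
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 50))
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
R = 0.1 * np.eye(2)
d = np.array([0.5, -0.2])
Xa = enkf_analysis(X, d, H, R, rng)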
Related filters attempting to relax the Gaussian assumption in EnKF while preserving its advantages include filters that fit the state PDF with multiple Gaussian kernels, filters that approximate the state PDF by Gaussian mixtures, a variant of the particle filter with computation of particle weights by density estimation, and a variant of the particle filter with thick tailed data PDF to alleviate particle filter degeneracy. See also Data assimilation Particle filter Recursive Bayesian estimation References External links EnKF webpage TOPAZ, real-time forecasting of the North Atlantic ocean and Arctic sea-ice with the EnKF EnKF-C, a compact framework for data assimilation into large-scale layered geophysical models with the EnKF PDAF – Parallel Data Assimilation Framework – an open-source software for data assimilation providing different variants of the EnKF Linear filters Nonlinear filters Bayesian statistics Signal estimation Monte Carlo methods
Ensemble Kalman filter
[ "Physics" ]
2,204
[ "Monte Carlo methods", "Computational physics" ]
9,553,854
https://en.wikipedia.org/wiki/Artin%E2%80%93Rees%20lemma
In mathematics, the Artin–Rees lemma is a basic result about modules over a Noetherian ring, along with results such as the Hilbert basis theorem. It was proved in the 1950s in independent works by the mathematicians Emil Artin and David Rees; a special case was known to Oscar Zariski prior to their work. An intuitive characterization of the lemma involves the notion that a submodule N of a module M over some ring A with specified ideal I holds a priori two topologies: one induced by the topology on M, and the other when considered with the I-adic topology over A. Then the Artin–Rees lemma dictates that these topologies actually coincide, at least when A is Noetherian and M finitely generated. One consequence of the lemma is the Krull intersection theorem. The result is also used to prove the exactness property of completion. The lemma also plays a key role in the study of ℓ-adic sheaves. Statement Let I be an ideal in a Noetherian ring R; let M be a finitely generated R-module and let N be a submodule of M. Then there exists an integer k ≥ 1 so that, for n ≥ k, Proof The lemma immediately follows from the fact that R is Noetherian once necessary notions and notations are set up. For any ring R and an ideal I in R, we set (B for blow-up.) We say a decreasing sequence of submodules is an I-filtration if ; moreover, it is stable if for sufficiently large n. If M is given an I-filtration, we set ; it is a graded module over . Now, let M be an R-module with the I-filtration by finitely generated R-modules. We make an observation: is a finitely generated module over if and only if the filtration is I-stable. Indeed, if the filtration is I-stable, then is generated by the first terms and those terms are finitely generated; thus, is finitely generated. Conversely, if it is finitely generated, say, by some homogeneous elements in , then, for , each f in can be written as with the generators in . That is, . We can now prove the lemma, assuming R is Noetherian. Let . Then are an I-stable filtration. Thus, by the observation, is finitely generated over . But is a Noetherian ring since R is. (The ring is called the Rees algebra.) Thus, is a Noetherian module and any submodule is finitely generated over ; in particular, is finitely generated when N is given the induced filtration; i.e., . Then the induced filtration is I-stable again by the observation. Krull's intersection theorem Besides the use in the completion of a ring, a typical application of the lemma is the proof of Krull's intersection theorem, which says: for a proper ideal I in a commutative Noetherian ring that is either a local ring or an integral domain. By the lemma applied to the intersection , we find k such that for , Taking , this means or . Thus, if A is local, by Nakayama's lemma. If A is an integral domain, then one uses the determinant trick (that is a variant of the Cayley–Hamilton theorem and yields Nakayama's lemma): In the setup here, take u to be the identity operator on N; that will yield a nonzero element x in A such that , which implies , as is a nonzerodivisor. For both a local ring and an integral domain, the "Noetherian" cannot be dropped from the assumption: for the local ring case, see local ring#Commutative case. For the integral domain case, take to be the ring of algebraic integers (i.e., the integral closure of in ). If is a prime ideal of A, then we have: for every integer . Indeed, if , then for some complex number . Now, is integral over ; thus in and then in , proving the claim.
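For reference, the conclusion of the lemma in the statement above, and the form of the Krull intersection theorem it yields, are usually written as follows; these are the standard formulations, supplied here for completeness in the notation of the statement:

I^{n} M \cap N = I^{\,n-k} \bigl( (I^{k} M) \cap N \bigr) \quad \text{for all } n \ge k,

\bigcap_{n \ge 1} I^{n} = 0 \quad \text{for a proper ideal } I \text{ of a commutative Noetherian ring that is local or an integral domain.}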
Footnotes References gives a somehow more precise version of the Artin–Rees lemma. External links Commutative algebra Lemmas in algebra Module theory Theorems in ring theory
Artin–Rees lemma
[ "Mathematics" ]
917
[ "Theorems in algebra", "Lemmas in algebra", "Fields of abstract algebra", "Module theory", "Commutative algebra", "Lemmas" ]
9,554,041
https://en.wikipedia.org/wiki/XM%20PCR
The XM PCR is a satellite receiver sold by XM Radio and discontinued in 2004, amidst piracy concerns. Programs allowed users to record every song played on an XM channel, quickly and cheaply building an MP3 library. History The Personal Computer Receiver (PCR) was first announced in 2003. The next year, XM pulled the PCRs from the market, reportedly due to music piracy. Enhancements Several enhancements have been created for the PCR, both software and hardware. In the software arena, PCR Replacement programs have been sprouting up on Internet forums and web sites. These are software packages that replace the interface included with the PCR, XMMT. Several features have been added to these new programs, including the ability to rip songs and build an MP3 library, time shift shows so that the user can listen at a more convenient time, control the radio via a web browser, and stream audio to other computers. Some web sites also offer a playlist log, which allows a user to browse a list of all the recently played songs or shows. A hardware modification has also been discovered that allows the addition of a TOSLINK optical output, allowing users to connect the PCR to the optical digital input on a home theater receiver. Replacements The XM Direct receiver, also marketed as the XM Commander, can now serve the same purpose as the PCR. While the XM Direct is intended for automotive use, the unit itself is controlled by RS-232 command signals, and so is easily adapted to PC control. When combined with a "smart cable", which is really just a USB to Serial cable and a wiring adapter to connect to the XM Direct's control port, the XM Direct supports some features not found on the original PCR. The XM Mini tuner may also hold promise for hardware tweakers. It uses the newest XM tuner and is much smaller than the XM Direct. Like the Direct, the Mini is designed to be used with an external system, in this case a home theater receiver. Unlike the Direct, the Mini is also capable of receiving XM's newest technologies, including HD audio. References XM Satellite Radio Satellite broadcasting
XM PCR
[ "Engineering" ]
454
[ "Telecommunications engineering", "Satellite broadcasting" ]
9,555,007
https://en.wikipedia.org/wiki/Virtual%20telecine
A virtual telecine is a piece of video equipment that can play back data files in real time. The colorist-video operator controls the virtual telecine like a normal telecine, although without controls like focus and framing. The data files can be from a Spirit DataCine, motion picture film scanner (like a Cineon), CGI animation computer, or a professional acquisition video camera. The normal input data file standard is DPX. The output data files are often used in digital intermediate post-production using a film recorder for film-out. The control room for the virtual telecine is called the color suite. The 2000 movie O Brother, Where Art Thou? was scanned with a Spirit DataCine and color corrected with a VDC-2000 and a Pandora Int. Pogle Color Corrector with MegaDEF. A Kodak Lightning II film recorder was used to output the data back onto film. Virtual telecines are also used in film restoration. Another advantage of a virtual telecine is that once the film is on the storage array, the frames may be played over and over again without damage or dirt to the film. This would be the case for outputting to different TV standards (NTSC or PAL) or formats (pan and scan, letterboxed, or other aspect ratios). Restoration, special effects, color grading, and other changes can be applied to the data file frames before playout. Virtual telecine is like a "tape to tape" color correction process, but with the differences of higher resolution (2k or 4k) and the use of film restoration tools along with standards and aspect ratio tools. 2k virtual DataCine products The first virtual telecines were by Philips, now Grass Valley, a Thomson SA brand: VDC-2000 Virtual DataCine Specter FS Virtual DataCine These are able to play out 2k data files in non-linear real time. Size, rotation and color correction-color grading are all able to be done in real time controlled by a telecine color corrector. A Silicon Graphics-SGI computer, an Origin 2000, is used to play the data files to "Spirit DataCine hardware". The Virtual DataCine can output SDTV (NTSC or PAL), HDTV high definition, or DPX (or TIF) data files, the same as the DataCine. The first-generation input/output interface for data files was optical fiber HIPPI cabling (up to 6 frame/s at 2k); the next-generation interface is GSN fibre optic (up to 30 frame/s at 2k). GSN is also called HIPPI-6400 and was later renamed GSN (for Gigabyte System Network). The SAN hard disks are interfaced to the Virtual DataCine by dual FC (Fibre Channel) cables. Real-time 2k color correction is done by Pandora International's Pogle with a MegaDEF. Input and output 3D LUTs (look-up tables) are also used to control the look and standard of the clips. On a Spirit DataCine, Phantom TransferEngine software running on an SGI computer, or Bones Linux-based software, is used to record the DPX files from the Spirit DataCine. These files are stored in the virtual telecine or on a SAN hard disk storage array. The end product was accomplished by playing the DPX files back through the Spirit DataCine's process electronics and Pandora International's MegaDEF colour correction system. The VDC-2000, Specter and Specter FS are made in Weiterstadt-Darmstadt, Germany by Grass Valley, a Thomson SA brand (for former names see Philips Broadcast and Robert Bosch GmbH, Fernseh Division). Real-time virtual telecines HDTV 4:2:2 and, better, 4:4:4 RGB equipment can be used as a virtual telecine. In this case, standard HDTV video products can be used in a post-production workflow.
As faster computers and SANs (storage area networks) came on the market, more real-time 2k virtual telecines appeared. SDTV is easier to output in real time than HDTV or 2k or 4k display resolution files. Limitations to speed are: color correction, resizing of aspect ratio, dirt removal, special effects, motion picture credits, and other restoration. The bandwidth of the hardware also limits real-time playout: CPU, interface, SAN, memory, software and hardware. Some current virtual telecines are: Da Vinci Systems Splice Bones by DFT Digital Film Technology in Weiterstadt, Germany. Filmlight Baselight Marquise Technologies MIST prime and MIST i/o SpectSoft RaveHD and Rave2K DFT Flexxity Non-real-time virtual telecines A number of products are on the market that can output frames in less than real time. These can be used to output DPX data files, but are too slow for HDTV. For some digital intermediate work 4k data is needed. These large 4k display resolution files cannot be transferred in real time. See also Digital cinematography Direct to Disk Recording Fernseh Hard disk recorder Lustre (file system) Telecine Further reading Film production Television technology Video hardware
Virtual telecine
[ "Technology", "Engineering" ]
1,077
[ "Information and communications technology", "Electronic engineering", "Television technology", "Video hardware" ]
9,555,814
https://en.wikipedia.org/wiki/Trans-Earth%20injection
A trans-Earth injection (TEI) is a propulsion maneuver used to set a spacecraft on a trajectory which will intersect the Earth's sphere of influence, usually putting the spacecraft on a free return trajectory. The maneuver is performed by a rocket engine. From the Moon The spacecraft is usually in a parking orbit around the Moon at the time of TEI, in which case the burn is timed so that its midpoint is opposite the Earth's location upon arrival. Uncrewed space probes have also performed this maneuver from the Moon, starting with Luna 16's direct ascent traverse from the lunar surface in 1970. On the Apollo missions, it was performed by the restartable Service Propulsion System (SPS) engine on the Service Module, after the undocking of the Lunar Module (LM) if one was carried. An Apollo TEI burn lasted approximately 150 seconds, providing a posigrade velocity increase of 1,000 m/s (3,300 ft/s). It was first performed by the Apollo 8 mission on December 25, 1968. It was more recently performed by the propulsion module of the Chandrayaan-3 mission on 13 October 2023. List of missions that performed a trans-Earth injection A total of 17 missions have performed such a maneuver. NASA has performed it the most (10 times), followed by the Soviet Union (3 times), China (3 times), and India (once). These missions are, in order: Apollo 8 Apollo 10 Apollo 11 Apollo 12 Luna 16 Apollo 14 Apollo 15 Luna 20 Apollo 16 Apollo 17 Luna 24 Clementine Chang'e 5-T1 Chang'e 5 Artemis 1 Chandrayaan-3 Chang'e 6 From outside the Earth-Moon system In 2004, from outside the Earth-Moon system, the Stardust probe comet dust return mission performed TEI after visiting Comet Wild 2. See also Lunar orbit insertion Trans-lunar injection Trans-Mars injection References Astrodynamics Spacecraft propulsion Exploration of the Moon Apollo program
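As a rough worked example using the Apollo figures quoted above (a burn of roughly 150 seconds giving a 1,000 m/s posigrade velocity increase), the average acceleration during the burn is

\bar{a} \approx \frac{\Delta v}{\Delta t} = \frac{1000\ \text{m/s}}{150\ \text{s}} \approx 6.7\ \text{m/s}^2 \approx 0.68\,g.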
Trans-Earth injection
[ "Astronomy", "Engineering" ]
397
[ "Aerospace engineering", "Astrodynamics", "Astronomy stubs", "Spacecraft stubs" ]
9,556,231
https://en.wikipedia.org/wiki/Bole%20hill
A bole hill (also spelt bail hill) was a place where lead was formerly smelted in the open air. The bole was usually situated at or near the top of a hill where the wind was strong. Totley Bole Hill on the western fringes of Sheffield consisted of a long low wall with two shorter walls at right angles to it at each end. At the base of a bole, long, great trees called blocks were laid. On these was laid blackwork, partly smelted ore, about half a yard thick. Then came ten or twelve trees called shankards. On top of these, three or four courses of fire trees were laid with fresh ore. This was ignited and burnt for about 48 hours. This smelted the lead, which ran down channels provided for the purpose and was cast into sows of about 11 hundredweight. A single firing produced 16 fothers of lead (about 18 tons) from 160 loads of ore (about 40 tons) and 30 tons of wood. Much of the ore was left incompletely smelted, having become blackwork. Some of this was smelted in a foot-pump blown furnace, but some was left to be used when the bole was next fired. Bole smelting was replaced by smelting in smeltmills in the late 16th century. That was in turn replaced by smelting in cupolas, a variety of reverberatory furnace, in the 18th century. Further reading D. Kiernan and Robert van de Noort, 'Bole smelting in Derbyshire' in L. Willies and D. Cranstone (eds.), Boles and Smeltmills (Historical Metallurgy Society, 1992), 19-21. Other articles in the same work. R. F. Tylecote, A History of Metallurgy (2nd edn, Institute of Materials, London 1992), 90, 113. Lead Metallurgy Metallurgical processes
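Taken at face value, the figures quoted above imply a recovery of very roughly 45 per cent lead by mass from the ore in a single firing, not counting the blackwork left over for later smelting:

\text{yield} \approx \frac{18\ \text{tons of lead}}{40\ \text{tons of ore}} \approx 45\%.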
Bole hill
[ "Chemistry", "Materials_science", "Engineering" ]
407
[ "Metallurgical processes", "Metallurgy", "nan", "Materials science" ]
9,556,567
https://en.wikipedia.org/wiki/Ethics%20of%20cloning
In bioethics, the ethics of cloning concerns the ethical positions on the practice and possibilities of cloning, especially of humans. While many of these views are religious in origin, some of the questions raised are faced by secular perspectives as well. Perspectives on human cloning are theoretical, as human therapeutic and reproductive cloning are not commercially used; animals are currently cloned in laboratories and in livestock production. Advocates support the development of therapeutic cloning in order to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology. Opponents of cloning have concerns that the technology is not yet developed enough to be safe, and that it could be prone to abuse, either in the form of clones raised as slaves, or leading to the generation of humans from whom organs and tissues would be harvested. Opponents have also raised concerns about how cloned individuals could integrate with families and with society at large. Religious groups are divided, with some opposing the technology as usurping God's place and, to the extent embryos are used, destroying a human life; others support therapeutic cloning's potential life-saving benefits. Cloning of animals is opposed by animal rights groups due to the number of cloned animals that suffer from malformations before they die, and while meat of cloned animals has been approved by the US FDA, its use is opposed by some other groups concerned about food safety. Philosophical debate The various forms of cloning, particularly human cloning, are controversial. There have been numerous demands for all progress in the human cloning field to be halted. Most scientific, governmental and religious organizations oppose reproductive cloning. The American Association for the Advancement of Science (AAAS) and other scientific organizations have made public statements suggesting that human reproductive cloning be banned until safety issues are resolved. Serious ethical concerns have been raised by the future possibility of harvesting organs from clones. Advocates of human therapeutic cloning believe the practice could provide genetically identical cells for regenerative medicine, and tissues and organs for transplantation. Such cells, tissues, and organs would neither trigger an immune response nor require the use of immunosuppressive drugs. Both basic research and therapeutic development for serious diseases such as cancer, heart disease, and diabetes, as well as improvements in burn treatment and reconstructive and cosmetic surgery, are areas that might benefit from such new technology. One bioethicist, Jacob M. Appel of New York University, has gone so far as to argue that "children cloned for therapeutic purposes" such as "to donate bone marrow to a sibling with leukemia" may someday be viewed as heroes. Proponents claim that human reproductive cloning also would produce benefits to couples who cannot otherwise procreate. In the early 2000s Severino Antinori and Panos Zavos stirred controversy when they publicly stated plans to create a fertility treatment that allows parents who are both infertile to have children with at least some of their DNA in their offspring.
In Aubrey de Grey's proposed SENS (Strategies for Engineered Negligible Senescence), one of the considered options to repair the cell depletion related to cellular senescence is to grow replacement tissues from stem cells harvested from a cloned embryo. There are also ethical objections. Article 11 of UNESCO's Universal Declaration on the Human Genome and Human Rights asserts that the reproductive cloning of human beings is contrary to human dignity, that a potential life represented by the embryo is destroyed when embryonic cells are used, and there is a significant likelihood that cloned individuals would be biologically damaged, due to the inherent unreliability of cloning technology. Ethicists have speculated on difficulties that might arise in a world where human clones exist. For example, human cloning might change the shape of family structure by complicating the role of parenting within a family of convoluted kinship relations. For example, a female DNA donor would be the clone's genetic twin, rather than mother, complicating the genetic and social relationships between mother and child as well as the relationships between other family members and the clone. In another example, there may be expectations that the cloned individuals would act identically to the human from which they were cloned, which could infringe on the right to self-determination. Proponents of animal rights argue that non-human animals possess certain moral rights as living entities and should therefore be afforded the same ethical considerations as human beings. This would negate the exploitation of animals in scientific research on cloning, cloning used in food production, or as other resources for human use or consumption. Religious views Religious views of cloning are mixed. Jainism and Hinduism Hindu views on cloning are very diverse. While some Hindu people view therapeutic cloning as necessary to fix childlessness, others believe it is immoral to tamper with nature. The Sanatan Dharm (meaning the eternal set of duties for humans, a name by which many people refer to Hinduism) approves therapeutic cloning but does not approve of human cloning. In Hinduism, one view holds that the creator, or Brahman, is not so insecure as to lay restrictions on scientific endeavours. Another view restricts human cloning. In Jainism, the birth of Mahavira is depicted as an operation of embryo transfer. In modern-day India, there have been clones of livestock species. Examples include Garima from the National Dairy Research Institute located in Karnal, where many other clones of bovine species have been developed. Judaism The Jewish view on cloning is unclear, but some Orthodox rabbis allow cloning as a method of reproduction if no other method is available. Judaism also treats all life equally, even if it was formed by cloning. Liberal Jewish groups oppose the cloning of humans. Christianity Most of the Christian churches, including the World Council of Churches and the United Methodist Church, oppose research into the cloning of either human embryos or whole humans. The Roman Catholic Church, under the papacy of Benedict XVI, condemned the practice of human cloning, in the magisterial instruction Dignitas Personae, stating that it represents a "grave offense to the dignity of that person as well as to the fundamental equality of all people." Many conservative Christian groups have opposed human cloning and the cloning of human embryos, since they believe that life begins at the moment of conception.
Other Christian denominations such as the United Church of Christ do not believe a fertilized egg constitutes a living being, but they still oppose the cloning of embryonic cells. Islam There are different views when it comes to cloning in Islam; some scholars are of the view that human reproductive cloning is absolutely forbidden, whilst others are of the view that there are some exceptions. Animal cloning is allowed in Islam only if it brings benefits to all people and no harm is caused to the animal used in the cloning process. Animal cloning Cloned animals are used in medical research, pet cloning or for food. Pet cloning In 2005, an article in The Hastings Center Report said: Critics of pet cloning typically offer three objections: (1) the cloning process causes animals to suffer; (2) widely available pet cloning could have bad consequences for the overwhelming numbers of unwanted companion animals; and, (3) companies that offer pet cloning are deceiving and exploiting grieving pet owners. Cloning animals for food On December 28, 2006, the U.S. Food and Drug Administration (FDA) approved the consumption of meat and other products from cloned animals. Cloned-animal products were said to be indistinguishable from those of non-cloned animals. Furthermore, companies would not be required to provide labels informing the consumer that the meat comes from a cloned animal. In 2007, some meat and dairy producers did propose a system to track all cloned animals as they move through the food chain, suggesting that a national database system integrated into the National Animal Identification System could eventually allow food labeling. However, as of 2013 no tracking system exists, and products from cloned animals are sold for human consumption in the United States. Critics have raised objections to the FDA's approval of cloned-animal products for human consumption, arguing that the FDA's research was inadequate, inappropriately limited, and of questionable scientific validity. Several consumer-advocate groups are working to encourage a tracking program that would allow consumers to become more aware of cloned-animal products within their food. A 2013 review noted that there is widespread misunderstanding about cloned cattle, and found that cloned cattle that reached adulthood and entered the food supply were substantially equivalent to conventional cattle with respect to the quality of meat and milk, and with respect to their reproductive capability. In 2015, the European Union voted to ban the cloning of farm animals (cattle, pigs, sheep, goats, and horses), and the sale of cloned livestock, their offspring, and products derived from them, such as meat and milk. The ban excluded cloning for research, and for the conservation of rare breeds and endangered species. However, no law was passed after the vote. As of 2024, horse cloning continues to be legal in the EU, with the Zangersheide registry in Belgium offering three cloned stallions for breeding. In popular culture Orphan Black, a Canadian television series, explores the ethics of cloning through its clone protagonists. The ethics of creating clones for organ harvesting (similar to savior siblings) is explored in Never Let Me Go by Kazuo Ishiguro. The ethical dilemma of cloning deceased loved ones as well as questions regarding the right of clones to exist alongside their original counterparts as equals are present in the Japanese version of the film Mewtwo Strikes Back and its accompanying radio drama.
References Further reading Seyyed Hassan Eslami Ardakani, Human Cloning in Catholic and Islamic Perspectives, University of Religions and Denominations, 2007 Cloning
Ethics of cloning
[ "Technology", "Engineering", "Biology" ]
2,070
[ "Bioethics", "Cloning", "Genetic engineering", "Ethics of science and technology" ]
9,556,706
https://en.wikipedia.org/wiki/Arthur%20Lynch%20%28politician%29
Arthur Alfred Lynch (16 October 1861 – 25 March 1934) was an Irish Australian civil engineer, physician, journalist, author, soldier, anti-imperialist and polymath. He served as an MP in the UK House of Commons as a member of the Irish Parliamentary Party, representing Galway Borough from 1901 to 1902, and later West Clare from 1909 to 1918. Lynch fought on the Boer side during the Boer War in South Africa, for which he was sentenced to death but later pardoned. He supported the British war effort in the First World War, raising his own Irish battalion in Munster towards the end of the war. Australian years Lynch was born at Smythesdale near Ballarat, Victoria, the fourth of 14 children. His father, John Lynch, was an Irish Catholic surveyor and civil engineer, and his mother Isabella (née MacGregor) was Scottish. John Lynch was a founder and first president of the Ballarat School of Mines and a captain under Peter Lalor at the Eureka Stockade rebellion (1854), and he wrote a book about it, Austral Light (1893–94), later republished as The Story of the Eureka Stockade. Arthur Lynch was educated at Grenville College, Ballarat (where he was "entranced" by differential calculus), and the University of Melbourne, where he took the degrees of BA in 1885 and MA in 1887. Lynch qualified as a civil engineer and practised this profession for a short period in Melbourne. Europe and Ireland Lynch left Australia and went to Berlin, where he studied physics, physiology and psychology at the University of Berlin in 1888–1889. He had a particular respect for Hermann von Helmholtz. Moving to London, Lynch took up journalism. In 1892, he contested Galway Borough as a Parnellite candidate, but was defeated. Lynch met Annie Powell (daughter of the Rev. John D. Powell) in Berlin and they were married in 1895. They were to have no children. In Lynch's words, the marriage "never lost its happiness" (My Life Story, p. 85). In 1898 he was Paris correspondent for the Daily Mail. Boer brigade When the Second Boer War broke out, Lynch was sympathetic to the Boers and decided to go to South Africa as a war correspondent. In Pretoria, he met General Louis Botha, and decided to join the Boer side. Lynch raised the Second Irish Brigade, which consisted of Irishmen, Cape colonists and others opposed to the British. He was given the rank of Colonel and saw limited active service. In his comprehensive history of Australia's Boer War, Craig Wilcox said it was misleading to call the 70 or so men in the Irish unit raised by Lynch "a brigade"; rather, he suggested that "the publicity that comes from spectacular gestures..." made Lynch appear "a romantic warrior" and that his actions "flattered many Irishmen and women...". In contrast, Antony O'Brien's fictional Bye-Bye Dolly Gray is kinder to Lynch's showy South African exploits and his uitlanders. Michael Davitt, who travelled to South Africa, included photos of Lynch with his brigade on the veldt in The Boer Fight for Freedom. Conviction and pardon From South Africa, Lynch went to the United States, and then returned to Paris, from where he again stood for Galway Borough in November 1901 and was elected in his absence as MP. On going to London, Lynch was arrested for his pro-Boer activities and held on remand for eight months. Lynch was tried for treason before three judges, and on 23 January 1903 was found guilty and sentenced to be hanged. This sentence was immediately commuted to a life sentence, and a year later Lynch was released "on licence" by the Balfour government.
In July 1907, Lynch was pardoned, and in 1909 he was again elected Member of Parliament, this time for West Clare, in Ireland. Munster battalion During World War I, Lynch volunteered for the New British Army. He raised a private 10th Battalion, Royal Munster Fusiliers, and was given the rank of colonel, although he and his unit never saw active service at the front. At the end of the war, Lynch chose to stand as a Labour candidate in the newly created Battersea South for the 1918 general election. He finished second to the Conservative candidate. He had qualified as a physician many years earlier, and began to practise medicine in London, at Haverstock Hill. He died in London on 25 March 1934. Publications Lynch wrote and published a large number of books ranging from poetry to a sophisticated attempt to refute Albert Einstein's theory of relativity. His verse was clever and satirically Byronic, and his essays and studies show much reading and acuteness of mind. E. Morris Miller, himself a professor of philosophy, mentions Lynch's "high reputation as a critical and philosophical writer especially for his contributions to psychology and ethics" (Australian Literature, p. 273). His publications include: Modern Authors (1891) Approaches the Poor Scholar's Quest of a Mecca (1892) A Koran of Love (1894) Our Poets (1894) Religio Athletae (1895) Human Documents (1896) Prince Azreel (1911) Psychology; A New System (two vol.; 1912) Purpose and Evolution (1913) Sonnets of the Banner and the Star (1914) Ireland: Vital Hour (1915) Poppy Meadows, Roman Philosophique (1915) La Nouvelle Ethique (1917) L'Evolution dans ses Rapports avec l'Ethique (1917) Moments of Genius (1919) The Immortal Caravel (1920) Moods of Life (1921) O'Rourke the Great (1921) Ethics, an Exposition of Principles (1922) Principles of Psychology (1923) Seraph Wings (1923) My Life Story (1924) Science, Leading and Misleading (1927) The Rosy Fingers (1929) The Case Against Einstein (1932) Notes References John Lynch, The Story of the Eureka Stockade: Epic Days of the Early Fifties at Ballarat, (1895). Republished 1947(?) and later by Ballarat Heritage Services, Ballarat, 1999. Popular culture Antony O'Brien, Bye-Bye Dolly Gray, Artillery Publishing, Hartwell, 2006. (a novel that includes several sympathetic scenes involving Lynch's exploits on the Colenso, Johannesburg and Transvaal front during 1899 and 1900) At the Boer War Craig Wilcox. (2002), Australia's Boer War: The War in South Africa, 1899-1902, Oxford. A blunt appraisal of A.A.'s actions in the war Michael Davitt. (1902) The Boer Fight For Freedom: From the Beginning of Hostilities to the Peace of Pretoria, Funk & Wagnalls, New York, 3rd ed. External links 1861 births 1934 deaths University of Melbourne alumni Australian non-fiction writers Australian poets Royal Munster Fusiliers officers Irish Parliamentary Party MPs UK MPs 1900–1906 UK MPs 1906–1910 UK MPs 1910 UK MPs 1910–1918 Australian prisoners sentenced to death People convicted of treason against the United Kingdom Prisoners sentenced to death by the United Kingdom Australian civil engineers Recipients of British royal pardons Relativity critics People from Smythesdale, Victoria Members of the Parliament of the United Kingdom for County Galway constituencies (1801–1922) Members of the Parliament of the United Kingdom for County Clare constituencies (1801–1922) Labour Party (UK) parliamentary candidates 19th-century Australian writers People from the Colony of Victoria Foreign volunteers in the Second Boer War
Arthur Lynch (politician)
[ "Physics" ]
1,522
[ "Relativity critics", "Theory of relativity" ]
9,556,852
https://en.wikipedia.org/wiki/Quark%20epoch
In physical cosmology, the quark epoch was the period in the evolution of the early universe when the fundamental interactions of gravitation, electromagnetism, the strong interaction and the weak interaction had taken their present forms, but the temperature of the universe was still too high to allow quarks to bind together to form hadrons. The quark epoch began approximately 10−12 seconds after the Big Bang, when the preceding electroweak epoch ended as the electroweak interaction separated into the weak interaction and electromagnetism. During the quark epoch, the universe was filled with a dense, hot quark–gluon plasma, containing quarks, leptons and their antiparticles. Collisions between particles were too energetic to allow quarks to combine into mesons or baryons. The quark epoch ended when the universe was about 10−6 seconds old, when the average energy of particle interactions had fallen below the binding energy of hadrons. The following period, when quarks became confined within hadrons, is known as the hadron epoch. See also Timeline of the early universe Chronology of the universe Cosmology References Further reading Big Bang Physical cosmology
Quark epoch
[ "Physics", "Astronomy" ]
249
[ "Cosmogony", "Astronomical sub-disciplines", "Big Bang", "Theoretical physics", "Astrophysics", "Physical cosmology" ]
9,557,604
https://en.wikipedia.org/wiki/Tire%20uniformity
Tire uniformity refers to the dynamic mechanical properties of pneumatic tires as strictly defined by a set of measurement standards and test conditions accepted by global tire and car makers. These standards include the parameters of radial force variation, lateral force variation, conicity, ply steer, radial run-out, lateral run-out, and sidewall bulge. Tire makers worldwide employ tire uniformity measurement as a way to identify poorly performing tires so they are not sold to the marketplace. Both tire and vehicle manufacturers seek to improve tire uniformity in order to improve vehicle ride comfort. Force variation background The circumference of the tire can be modeled as a series of very small spring elements whose spring constants vary according to manufacturing conditions. These spring elements are compressed as they enter the road contact area, and recover as they exit the footprint. Variation in the spring constants in both radial and lateral directions causes variations in the compressive and restorative forces as the tire rotates. Given a perfect tire, running on a perfectly smooth roadway, the force exerted between the car and the tire will be constant. However, a normally manufactured tire running on a perfectly smooth roadway will exert a varying force into the vehicle that will repeat every rotation of the tire. This variation is the source of various ride disturbances. Both tire and car makers seek to reduce such disturbances in order to improve the dynamic performance of the vehicle. Tire uniformity parameters Axes of measurement Tire forces are divided into three axes: radial, lateral, and tangential (or fore-aft). The radial axis runs from the tire center toward the tread, and is the vertical axis running from the roadway through the tire center toward the vehicle. This axis supports the vehicle's weight. The lateral axis runs sideways across the tread. This axis is parallel to the tire mounting axle on the vehicle. The tangential axis is the one in the direction of the tire travel. Radial force variation Insofar as the radial force is the one acting upward to support the vehicle, radial force variation describes the change in this force as the tire rotates under load. As the tire rotates and spring elements with different spring constants enter and exit the contact area, the force will change. Consider a tire supporting a load running on a perfectly smooth roadway. It would be typical for the force to vary up and down from this value. A variation between would be characterized as a radial force variation (RFV). The radial force variation can be expressed as a peak-to-peak value, which is the maximum minus minimum value, or any harmonic value as described below. Some tire manufacturers mark the sidewall with a red dot to indicate the location of maximal radial force and runout, the high spot. A yellow dot indicates the point of least weight. Use of the dots is specified in the Technology Maintenance Council's RP243 performance standard. To compensate for this variation, tires are supposed to be installed with the red dot near the valve stem, assuming the valve stem is at the low point, or with the yellow dot near the valve stem, assuming the valve stem is at the heavy point. Harmonic analysis Radial force variation, as well as all other force variation measurements, can be shown as a complex waveform. This waveform can be expressed according to its harmonics by applying the Fourier transform (FT). FT permits one to parameterize various aspects of the tire dynamic behavior.
The first harmonic, expressed as radial force first harmonic (RF1H), describes the force variation magnitude that exerts a pulse into the vehicle one time for each rotation. Radial force second harmonic (RF2H) expresses the magnitude of the radial force that exerts a pulse twice per revolution, and so on. Often, these harmonics have known causes, and can be used to diagnose production problems. For example, a tire mold installed with 8 segments may thermally deform so as to induce an eighth harmonic, so the presence of a high radial force eighth harmonic (RF8H) would point to a mold sector parting problem. RF1H is the primary source of ride disturbances, followed by RF2H. Higher harmonics are less problematic because the rotational speed of the tire at highway speeds, multiplied by the harmonic number, places the disturbances at frequencies high enough that they are damped or overcome by other vehicle dynamic conditions. Lateral force variation Insofar as the lateral force is the one acting side-to-side along the tire axle, lateral force variation describes the change in this force as the tire rotates under load. As the tire rotates and spring elements with different spring constants enter and exit the contact area, the lateral force will change. As the tire rotates it may exert a lateral force on the order of , causing steering pull in one direction. It would be typical for the force to vary up and down from this value. A variation between would be characterized as a lateral force variation (LFV). The lateral force variation can be expressed as a peak-to-peak value, which is the maximum minus minimum value, or any harmonic value as described above. Lateral force is signed, such that when mounted on the vehicle, the lateral force may be positive, making the vehicle pull to the left, or negative, pulling to the right. Tangential force variation Insofar as the tangential force is the one acting in the direction of travel, tangential force variation describes the change in this force as the tire rotates under load. As the tire rotates and spring elements with different spring constants enter and exit the contact area, the tangential force will change. As the tire rotates it exerts a high traction force to accelerate the vehicle and maintain its speed at constant velocity. Under steady-state conditions it would be typical for the force to vary up and down from this value. This variation would be characterized as tangential force variation (TFV). In a constant velocity test condition, the tangential force variation would be manifested as a small speed fluctuation occurring every rotation due to the change in rolling radius of the tire. Conicity Conicity is a parameter based on lateral force behavior. It is the characteristic that describes the tire's tendency to roll like a cone. This tendency affects the steering performance of the vehicle. In order to determine conicity, lateral force must be measured in both the clockwise (LFCW) and counterclockwise (LFCCW) directions. Conicity is calculated as one-half the difference of the values, keeping in mind that clockwise and counterclockwise values have opposite signs. Conicity is an important parameter in production testing. In many high-performance cars, tires with equal conicity are mounted on the left and right sides of the car in order that their conicity effects will cancel each other and generate a smoother ride performance, with little steering effect. This necessitates the tire maker measuring conicity and sorting tires into groups of like-values.
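As a rough illustration of the conicity calculation just described, the following Python sketch (with hypothetical force values, not taken from any tire-industry software) derives conicity from signed lateral force readings taken in the clockwise and counterclockwise rolling directions, using the sign convention stated above in which the two readings carry opposite signs.

```python
def conicity(lf_cw, lf_ccw):
    """Conicity from signed lateral force readings.

    Per the convention described above, the clockwise (LFCW) and
    counterclockwise (LFCCW) readings are recorded with opposite signs,
    so conicity is one-half the difference of the two values.
    """
    return 0.5 * (lf_cw - lf_ccw)

# Hypothetical readings in newtons for one tire:
lf_cw = 42.0    # lateral force measured rolling clockwise
lf_ccw = -38.0  # lateral force measured rolling counterclockwise

print(conicity(lf_cw, lf_ccw))  # 40.0, i.e. 40 N of conicity
```

A production line could then bin tires by this value so that left- and right-side tires with matching conicity cancel each other on the vehicle, as the sorting practice above suggests.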
Ply steer Ply steer describes the lateral force a tire generates due to asymmetries in its carcass as it rolls forward with zero slip angle and may be called pseudo side slip. It is the characteristic that is usually described as the tire's tendency to “crab walk”, or move sideways while maintaining a straight-line orientation. This tendency affects the steering performance of the vehicle. In order to determine ply steer, the lateral force generated is measured as the tire rolls both forward and back, and ply steer is then calculated as one-half the sum of the values, keeping in mind that the values have opposite signs. Radial run-out Radial run-out (RRO) describes the deviation of the tire's roundness from a perfect circle. The radial run-out can be expressed as the peak-to-peak value as well as harmonic values. Radial run-out imparts an excitation into the vehicle in a manner similar to radial force variation. It is most often measured near the tire's centerline, although some tire makers have adopted measurement of radial run-out at three positions: left shoulder, center, and right shoulder. Some tire manufacturers mark the sidewall with a red dot to indicate the location of maximal radial force and runout. Lateral run-out Lateral run-out (LRO) describes the deviation of the tire's sidewall from a perfect plane. LRO can be expressed as the peak-to-peak value as well as harmonic values. LRO imparts an excitation into the vehicle in a manner similar to lateral force variation. LRO is most often measured in the upper sidewall, near the tread shoulder. Sidewall bulge and depression Given that the tire is an assembly of multiple components that are cured in a mold, there are many process variations that cause cured tires to be classified as rejects. Bulges and depressions in the sidewall are such defects. A bulge is a weak spot in the sidewall that expands when the tire is inflated. A depression is a strong spot that does not expand to the same extent as the surrounding area. Both are deemed visual defects. Tires are measured in production to identify those with excessive visual defects. Bulges may also indicate defective construction conditions such as missing cords, which pose a safety hazard. As a result, tire makers impose stringent inspection standards to identify tires with bulges. Sidewall bulge and depression are also referred to as bulge and dent, or bumpy sidewall. Tire uniformity measurement machines Tire uniformity machines are special-purpose machines that automatically inspect tires for the tire uniformity parameters described above. They consist of several subsystems, including tire handling, chucking, measurement rims, bead lubrication, inflation, load wheel, spindle drive, force measurement, and geometry measurement. The tire is first centered, and the bead areas are lubricated to assure a smooth fitment to the measurement rims. The tire is indexed into the test station and placed on the lower chuck. The upper chuck lowers to make contact with the upper bead. The tire is inflated to the set point pressure. The load wheel advances to contact the tire and apply the set loading force. The spindle drive accelerates the tire to the test speed. Once speed, force, and pressure are stable, load cells measure the force exerted on the load wheel by the tire. The force signal is processed in analog circuitry, and then analyzed to extract the measurement parameters.
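The last step above, turning the measured force signal into uniformity parameters, can be sketched in a few lines of Python. This is a hypothetical illustration with a synthetic waveform and made-up amplitudes, not production machine code; it computes the peak-to-peak radial force variation and the first few harmonic magnitudes over one revolution with a discrete Fourier transform, in the spirit of the harmonic analysis section. Whether a harmonic is reported as an amplitude or as a peak-to-peak value is a convention choice; this sketch uses amplitudes.

```python
import numpy as np

def uniformity_parameters(radial_force, n_harmonics=3):
    """Peak-to-peak RFV and harmonic magnitudes (RF1H, RF2H, ...) from
    radial force samples covering exactly one tire revolution."""
    samples = np.asarray(radial_force, dtype=float)
    peak_to_peak = samples.max() - samples.min()
    spectrum = np.fft.rfft(samples)
    # Amplitude of harmonic k is 2/N * |X_k|; k = 1 is once per revolution.
    amplitudes = 2.0 * np.abs(spectrum[1:n_harmonics + 1]) / samples.size
    return peak_to_peak, amplitudes

# Synthetic revolution: 1000 N mean load, a 6 N first harmonic,
# a 2 N second harmonic, and a little measurement noise.
angle = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
force = 1000 + 6 * np.cos(angle) + 2 * np.cos(2 * angle)
force = force + np.random.default_rng(0).normal(0.0, 0.2, angle.size)

pp, (rf1h, rf2h, rf3h) = uniformity_parameters(force)
print(f"peak-to-peak RFV ~ {pp:.1f} N, RF1H ~ {rf1h:.1f} N, RF2H ~ {rf2h:.1f} N")
```

A dominant RF1H in such a readout corresponds to the once-per-revolution disturbance described earlier, while an unexpectedly strong RF8H, for example, would hint at the eight-segment mold problem mentioned in the harmonic analysis section.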
Tires are marked according to various standards that may include radial force variation (RFV) high point angle, side of positive conicity, and conicity magnitude. Other types of uniformity machines There are numerous variations and innovations among several tire uniformity machine makers. The standard test speed for tire uniformity machines is 60 r/min against a standard load wheel, which approximates a road speed of 5 miles per hour. High speed uniformity machines, which reach 250 km/h and higher, are used in research and development environments. High speed uniformity machines have also been introduced for production testing. Machines that combine force variation measurement with dynamic balance measurement are also in use. Tire uniformity correction Radial and lateral force variation can be reduced at the tire uniformity machine via grinding operations. In the center grind operation, a grinder is applied to the tread center to remove rubber at the high point of radial force variation. On the top and bottom tread shoulders, grinders are applied to reduce the size of the road contact area, or footprint, and the resulting force variation. Top and bottom grinders can be controlled independently to reduce conicity values. Grinders are also employed to correct excessive radial run-out. Effects of tire variations can also be reduced by mounting the tire in such a way that unbalanced rims and valve stems help compensate for imperfect tires. Geometry measurement systems Radial run-out, lateral run-out, conicity, and bulge measurements are also performed on the tire uniformity machine. There are several generations of measurement technologies in use. These include Contact Stylus, Capacitive Sensors, Fixed-Point Laser Sensors, and Sheet-of-Light Laser Sensors. Contact stylus Contact Stylus technology utilizes a touch-probe to ride along the tire surface as it rotates. Analog instrumentation senses the movement of the probe, and records the run-out waveform. When used to measure radial runout, the stylus is fitted to a large-area paddle that can span the voids in the tread pattern. When used to measure lateral runout on the sidewall, the stylus runs in a very narrow smooth track. The contact stylus method is one of the earliest technologies, and requires considerable effort to maintain its mechanical performance. The small area-of-interest in the sidewall area limits the effectiveness in discerning sidewall bulges and depressions elsewhere on the sidewall. Capacitive sensors Capacitive Sensors generate a dielectric field between the tire and sensor. As the distance between the tire and the sensor varies, the voltage and/or current properties of the dielectric field change. Analog circuitry is employed to measure the field changes and record the run-out waveform. Capacitive sensors have a larger area-of-interest, on the order of 10 mm, compared to the very narrow contact stylus method. The capacitive sensor method is one of the earliest technologies, and has proven highly reliable; however, the sensor must be positioned very close to the tire surface during measurement, so collisions between tire and sensor have led to long-term maintenance problems. In addition, some sensors are very sensitive to moisture and humidity and can produce erroneous readings. The 10 mm area-of-interest also means that bulge measurement is limited to a small portion of the tire.
Capacitive sensors employ void filtering to remove the effect of the voids between the tread lugs in radial runout measurement, and letter filtering to remove the effect of raised letters and ornamentation on the sidewall. Fixed-point laser sensors Fixed-Point Laser Sensors were developed as an alternative to the above methods. Lasers combine the narrow-track area-of-interest with a large stand off distance from the tire. In order to cover a larger area-of-interest, mechanical positioning systems have been employed to take readings at multiple positions in the sidewall. Fixed-Point Laser sensors employ void filtering to remove the effect of the voids between the tread lugs in radial run-out measurement, and letter filtering to remove the effect of raised letters and ornamentation on the sidewall. Sheet-of-light laser systems Sheet-of-light laser (SL) systems were introduced in 2003, and have emerged as the most capable and reliable run-out, bulge and depression measurement methods. Sheet-of-light sensors project a laser line instead of a laser point, and thereby create a very large area-of-interest. Sidewall sensors can easily span an area from the bead area to the tread shoulder, and inspect the complete sidewall for bulge and depression defects. Large radial sensors can span 300mm or more to cover the entire tread width. This enables characterization of RRO in multiple tracks. Sheet-of-light sensors also feature stand off distances large enough to assure no collisions with the tire. Two-dimensional tread void filtering and sidewall letter filtering are also employed to eliminate these characteristics from the runout measurements. References Tires Vehicle technology
Tire uniformity
[ "Engineering" ]
3,141
[ "Vehicle technology", "Mechanical engineering by discipline" ]
9,559,591
https://en.wikipedia.org/wiki/Tire%20balance
Tire balance, also called tire unbalance or tire imbalance, describes the distribution of mass within an automobile tire or the entire wheel (including the rim) on which it is mounted. When the wheel rotates, asymmetries in its mass distribution may cause it to apply periodic forces and torques to the axle, which can cause ride disturbances, usually as vertical and lateral vibrations, and this may also cause the steering wheel to oscillate. The frequency and magnitude of this ride disturbance usually increase with speed, and vehicle suspensions may become excited when the rotating frequency of the wheel equals the resonant frequency of the suspension. Tire balance is measured in factories and repair shops by two methods: with static balancers and with dynamic balancers. Tires with large unbalances are downgraded or rejected. When tires are fitted to wheels at the point of sale, they are measured again on a balancing machine, and correction weights are applied to counteract their combined unbalance. Tires may be rebalanced if the driver perceives excessive vibration. Tire balancing is distinct from wheel alignment. Static balance Static balance requires the wheel center of mass to be located on its axis of rotation, usually at the center of the axle on which it is mounted. Static balance can be measured by a static balancing machine where the tire is placed on a vertical, non-rotating spindle. If the center of mass of the tire is not located on this vertical axis, then gravity will cause the axis to deflect. The amount of deflection indicates the magnitude of the unbalance, and the orientation of the deflection indicates the angular location of the unbalance. In tire manufacturing factories, static balancers use sensors mounted to the spindle assembly. In tire retail shops, static balancers are usually non-rotating bubble balancers, where the magnitude and angle of the unbalance are indicated by the center bubble in an oil-filled glass sighting gauge. While some very small shops that lack specialized machines still use bubble balancers, these have largely been replaced by balancing machines in larger shops. Dynamic balance Dynamic balance requires that a principal axis of the tire's moment of inertia be aligned with the axis about which the tire rotates, usually the axle on which it is mounted. In the tire factory, the tire and wheel are mounted on a balancing machine test wheel, and the assembly is rotated at 100 RPM (10 to 15 mph, with recent high-sensitivity sensors) or higher, such as 300 RPM (55 to 60 mph, with typical low-sensitivity sensors), and forces of unbalance are measured by sensors. These forces are resolved into static and couple values for the inner and outer planes of the wheel, and compared to the unbalance tolerance (the maximum allowable manufacturing limits). If the tire is not checked, it has the potential to cause vibration in the suspension of the vehicle on which it is mounted. In tire retail shops, tire/wheel assemblies are checked on a spin-balancer, which determines the amount and angle of unbalance. Balance weights are then fitted to the outer and inner flanges of the wheel. Although dynamic balance is theoretically better than static balance, because both dynamic and static imbalances can be measured and corrected, its effectiveness is disputed because of the flexible nature of the rubber. A tire in a free spinning machine may not experience the same centrifugal distortion, heat distortion, or weight and camber that it would on a vehicle.
Dynamic balancing may therefore create new unintended imbalances. Dynamic balancing has traditionally required removing the wheel from the vehicle, but sensors installed in modern cars, such as for anti-lock brakes, could enable estimating the imbalance while driving. Physics To a first approximation, which neglects deformations due to its elasticity, the wheel is a rigid rotor that is constrained to rotate about its axle. If a principal axis of the wheel's moment of inertia is not aligned with the axle, due to an asymmetric mass distribution, then an external torque, perpendicular to the axle, is necessary to force the wheel to rotate about the axle. This additional torque must be provided by the axle and its orientation rotates continuously with the wheel. The reaction to this torque, by Newton's Third Law is applied to the axle, which transfers it to the suspension and can cause it to vibrate. Automotive technicians can reduce this vibration to an acceptable level when balancing the wheel by adding small masses to the inner and outer wheel rims that bring the principal axis into alignment with the axle. Vehicle vibration Vibration in automobiles may occur for many reasons, such as wheel unbalance, imperfect tire or wheel shape, brake pulsation, and worn or loose driveline, suspension, or steering components. Unbalance can result from collision-induced wheel deformations, uneven tire wear, or a shift of the tire on the rim. In some cases, losing a counterweight or bumping the curb hard can lead to wheel unbalance. Foreign material, such as road tar, stones, ice, or snow, that is stuck in a tire's tread or otherwise adhered to the tire or wheel may also cause a temporary unbalance and subsequent vibration. Uneven weight distribution in the wheel and tire assembly can result from manufacturing inaccuracies, uneven tread wear, damage over time, or improper tire mounting. Environmental consequences Every year, millions of small weights are attached to wheels by automotive technicians balancing them. Traditionally, these weights have been made of lead; it is estimated that up to of lead, having fallen off car wheels, ended up in the environment. According to the US Environmental Protection Agency, worldwide these total more than 20,000 tonnes of lead annually, and therefore the use of less-toxic materials is encouraged. In Europe, lead weights have been banned since 2005; in the US, some states have also banned them. Alternatives are weights made of lead alloys that include zinc or copper, or weights that are altogether lead-free. See also Speed wobble Rotordynamics References Tires Vehicle technology
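To put a rough number on the physics described above, the once-per-revolution force from a small residual unbalance can be estimated as F = m·r·ω², where m is the unbalance mass, r its distance from the axis, and ω the wheel's angular speed. The following Python sketch is purely illustrative, using made-up values rather than figures from the article.

```python
import math

def unbalance_force(mass_kg, radius_m, wheel_rpm):
    """Rotating force (in newtons) from a point unbalance mass at a given
    radius: F = m * r * omega**2.  The force sweeps around with the wheel
    once per revolution, which is why unbalance is felt as a vibration
    whose frequency rises with road speed."""
    omega = wheel_rpm * 2.0 * math.pi / 60.0  # angular speed in rad/s
    return mass_kg * radius_m * omega ** 2

# Hypothetical case: 10 g of residual unbalance at a 0.20 m rim radius,
# with the wheel turning at 840 rpm (very roughly highway speed for a
# common passenger-car tire).
print(round(unbalance_force(0.010, 0.20, 840), 1), "N")  # about 15.5 N
```

In this idealized picture, a correction weight with an equal mass-radius product fitted opposite the heavy spot cancels the rotating force, which is essentially what the balancing machines described above compute; real corrections are split between the inner and outer rim flanges to address couple unbalance as well.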
Tire balance
[ "Engineering" ]
1,224
[ "Vehicle technology", "Mechanical engineering by discipline" ]
9,560,227
https://en.wikipedia.org/wiki/E-research
The term e-Research (alternately spelled eResearch) refers to the use of information technology to support existing and new forms of research. This extends cyber-infrastructure practices established in STEM fields such as e-Science to cover all other research areas, including HASS fields such as digital humanities. Principles Practices in e-Research typically aim to improve efficiency, interconnectedness and scalability across the full research data lifecycle: collection, storage, analysis, visualisation and sharing of data. E-Research therefore involves collaboration of researchers (often in a multi-disciplinary team) with data scientists and computer scientists, data stewards and digital librarians, and significant information and communication technology infrastructure. In addition to human resources, it often requires physical infrastructure for data-intensive activities, often using high performance computing systems such as grid computing. Applications Examples of e-Research problems range across disciplines, which include: Modelling of ecosystems or economies Exploration of human genome structures Studies of large linguistic corpora Integrated social policy analyses In Australia Specialist services, centres or programmes instituted to support Australian data and technology intensive research operate under the umbrella term eResearch. In March 2012, representatives from these eResearch groups came together to discuss the need to build a "collaborative program to strengthen eResearch and address issues facing the sector nationally". The Australian eResearch Organisation (AeRO) emerged from this forum as "a collaborative organisation of national and state-based research organisations to advance eResearch implementation and innovation in Australia". Professionals working in Australian eResearch annually convene a conference known as eResearch Australasia. See also Berkeley Open Infrastructure for Network Computing (BOINC) e-Science References External links New Zealand eScience Infrastructure (NeSI) Centre for eResearch, University of Auckland eResearch, the University of Michigan Research Paper Service Oxford e-Research Centre Centre for eResearch and Digital Innovation (CeRDI) at Federation University University of Cape Town eResearch Centre Research Information technology
E-research
[ "Technology" ]
419
[ "Information and communications technology", "Information technology" ]
9,560,337
https://en.wikipedia.org/wiki/Energy%20security
Energy security is the association between national security and the availability of natural resources for energy consumption (as opposed to household energy insecurity). Access to cheap energy has become essential to the functioning of modern economies. However, the uneven distribution of energy supplies among countries has led to significant vulnerabilities. International energy relations have contributed to the globalization of the world, leading to energy security and energy vulnerability at the same time. Renewable resources and significant opportunities for energy efficiency and transitions exist over wide geographical areas, in contrast to other energy sources, which are concentrated in a limited number of countries. Rapid deployment of wind power and solar power and energy efficiency, and technological diversification of energy sources, would result in significant energy security benefits. Threats The modern world relies on a vast energy supply to fuel everything from transportation and communication to security and health delivery systems. Peak oil expert Michael Ruppert has claimed that for every kilocalorie of food produced in the industrial world, 10 kilocalories of oil and gas energy are invested in the forms of fertilizer, pesticide, packaging, transportation, and running farm equipment. Energy plays an important role in the national security of any given country as a fuel to power the economic engine. Some sectors rely on energy more heavily than others; for example, the Department of Defense relies on petroleum for approximately 77% of its energy needs. Not every sector is as critical as the others; some have greater importance to energy security. Threats to a nation's energy security include: Political/Domestic instability of major energy-producing countries (e.g. change in leadership's environmental values, or regime change) Reliance on foreign countries for oil Foreign in-state conflict (e.g. religious civil wars) Foreign exporters' interests (e.g. Quid Pro Quo/blackmail/extortion) Foreign non-state actors targeting the supply and transportation of oil resources (e.g. theft) Manipulation of energy supplies (e.g. mega-corporation or state-backed racketeering) Competition over energy sources (e.g. biofuel (biodiesel, bioethanol) vs oil (crude, distilled fuel) vs coal vs natural gas vs nuclear vs wind vs solar vs hydro (dam, pumped)) Unreliable energy stores (e.g. long time to spin a turbine to create power, or Li-ion battery grid explosion, or pumped hydro dam becoming clogged) Attacks on supply infrastructure (e.g. hackers stopping flow pumps inside a pipeline or intentionally surging an electrical grid to over/underload it) Terrorism (e.g. napalming oil and/or fuel reserves) Accidents (e.g. shoddy weld causing debris buildup in a pipeline) Natural disasters (e.g. wind turbine collapsing from a major earthquake) Political and economic instability caused by war or other factors, such as strike action, can also prevent the proper functioning of the energy industry in a supplier country. For example, the nationalization of oil in Venezuela has triggered strikes and protests from which Venezuela's oil production rates have yet to recover. Exporters may have political or economic incentives to limit their foreign sales or cause disruptions in the supply chain. Since Venezuela's nationalization of oil, anti-American Hugo Chávez threatened to cut off supplies to the United States more than once.
The 1973 oil embargo against the United States is an historical example in which oil supplies were cut off to the United States due to U.S. support of Israel during the Yom Kippur War. This has been done to apply pressure during economic negotiations—such as during the 2007 Russia–Belarus energy dispute. Terrorist attacks targeting oil facilities, pipelines, tankers, refineries, and oil fields are so common they are referred to as "industry risks". Infrastructure for producing the resource is extremely vulnerable to sabotage. One of the worst risks to oil transportation is the exposure of the five ocean chokepoints, like the Iranian-controlled Strait of Hormuz. Anthony Cordesman, a scholar at the Center for Strategic and International Studies in Washington, D.C., warns, "It may take only one asymmetric or conventional attack on a Ghawar Saudi oil field or tankers in the Strait of Hormuz to throw the market into a spiral." New threats to energy security have emerged in the form of the increased world competition for energy resources due to the increased pace of industrialization in countries such as India and China, as well as due to the increasing consequences of climate change. Although still a minority concern, the possibility of price rises resulting from the peaking of world oil production is also starting to attract the attention of at least the French government. Increased competition over energy resources may also lead to the formation of security compacts to enable an equitable distribution of oil and gas between major powers. However, this may happen at the expense of less developed economies. The Group of Five, precursors to the G8, first met in 1975 to coordinate economic and energy policies in the wake of the 1973 Arab oil embargo, a rise in inflation and a global economic slowdown. Long-term security The impact of the 1973 oil crisis and the emergence of the OPEC cartel was a particular milestone that prompted some countries to increase their energy security. Japan, almost totally dependent on imported oil, steadily introduced the use of natural gas, nuclear power, high-speed mass transit systems, and implemented energy conservation measures. The United Kingdom began exploiting North Sea oil and gas reserves, and became a net exporter of energy into the 2000s. Increasing energy security is also one of the reasons behind a block on the development of natural gas imports in Sweden. Greater investment in native renewable energy technologies and energy conservation is envisaged instead. India is carrying out a major hunt for domestic oil to decrease its dependency on OPEC, while Iceland is well advanced in its plans to become energy independent by 2050 through deploying 100% renewable energy. Short-term security Petroleum Petroleum, otherwise known as "crude oil", has become the resource most used by countries all around the world, including Russia, China and the United States of America. With all the oil wells located around the world, energy security has become a main issue to ensure the safety of the petroleum that is being harvested. In the middle east, oil fields have become main targets for sabotage due to how heavily countries rely on oil. Many countries hold strategic petroleum reserves as a buffer against the economic and political impacts of an energy crisis. For example, all 31 members of the International Energy Agency hold a minimum of 90 days of their oil imports. 
These countries also committed to passing legislation to develop an emergency response plan in the case of oil supply shocks and other short-term threats to energy security. The value of such reserves was demonstrated by the relative lack of disruption caused by the 2007 Russia-Belarus energy dispute, when Russia indirectly cut exports to several countries in the European Union. Due to peak oil theories and the need to curb demand, the United States military and Department of Defense have made significant cuts and have been making a number of attempts to find more efficient ways to use oil. Natural gas Compared to petroleum, reliance on imported natural gas creates significant short-term vulnerabilities. The gas conflicts between Ukraine and Russia of 2006 and 2009 serve as vivid examples of this. Many European countries saw an immediate drop in supply when Russian gas supplies were halted during the Russia-Ukraine gas dispute in 2006. Natural gas has been a viable source of energy in the world. Consisting mostly of methane, natural gas is produced using two methods: biogenic and thermogenic. Biogenic gas comes from methanogenic organisms located in marshes and landfills, whereas thermogenic gas comes from the anaerobic decay of organic matter deep under the Earth's surface. Russia is one of the three current leading countries in natural gas production, alongside the US and Saudi Arabia. In the European Union, security of gas supply is protected by Regulation 2017/1938 of 25 October 2017, which concerns "measures to safeguard the security of gas supply" and took the place of the previous regulation 994/2010 on the same subject. EU policy operates on a number of regional groupings, a network of common gas security risk assessments, and a "solidarity mechanism", which would be activated in the event of a significant gas supply crisis. A bilateral solidarity agreement was signed between Germany and Denmark on 14 December 2020. The proposed UK-EU Trade and Cooperation Agreement "provides for a new set of arrangements for extensive technical cooperation ... particularly with regard to security of supply". Nuclear power Uranium for nuclear power is mined and enriched in countries including Canada (23% of the world's total in 2007), Australia (21%), Kazakhstan (16%) and more than 10 other countries. Uranium is mined and fuel is manufactured significantly in advance of need. Nuclear fuel is considered by some to be a relatively reliable power source, uranium being more common in the Earth's crust than tin, mercury or silver, though a debate over the timing of peak uranium does exist. Nuclear power is seen as a means to reduce carbon emissions. Although generally considered a viable energy resource, nuclear power remains controversial due to the risks associated with it. Another factor in the debate over nuclear power is the concern from people or companies regarding the location of a nuclear energy plant or the disposal of radioactive waste nearby. In 2022, nuclear power provided 10% of the world's total electricity. The most notable use of nuclear power within the United States is in U.S. Navy aircraft carriers and submarines, which have been exclusively nuclear-powered for several decades. These classes of ship provide the core of the Navy's power, and as such are the single most noteworthy application of nuclear power in the United States. Renewable energy The deployment of renewable fuels: Increases the diversity of electricity sources, reducing strangleholds of one fuel type.
Increases backup energy via biofuel reserves. Increases backup electricity stores via batteries that can produce and/or store electricity. Contributes to the flexibility of the rigid electrical grid via local generation (independent of easily targeted centralized power distributors). Increases resistance to threats to energy security. For countries where growing dependence on imported gas is a significant energy security issue, renewable technologies can provide alternative sources of electric power as well as possibly displacing electricity demand through direct heat production (e.g. geothermal and burning fuels for heat and electricity). Renewable biofuels for transport represent a key source of diversification from petroleum products. As the finite resources that have been so crucial to survival in the world decline day by day, countries will begin to realize that renewable fuel sources will be more vital than ever before. Moreover, renewable energy resources are more evenly distributed than fossil fuels and, as a result, can improve energy security and reduce geopolitical tensions among states. Geothermal (renewable and clean energy) can indirectly reduce the need for other sources of fuel. By using heat from within the Earth to heat water, the steam created can not only power electricity-generating turbines, but also eliminate the need to consume electricity to create hot water for showers, washing machines, dishwashers, sterilizers, and more; geothermal is one of the cleanest and most efficient options, requiring only fuel to dig deep holes, hot-water pumps, and tubing to distribute the hot water. Geothermal not only helps energy security, but also food security via year-round heated greenhouses. Hydroelectric power, already incorporated into many dams around the world, produces a great deal of energy, usually on demand, and is easy to dispatch because the dams control the gravity-fed water allowed through gates that spin turbines located inside the dam. Biofuels have been researched relatively thoroughly, using several different sources such as sugary corn (very inefficient) and cellulose-rich switchgrass (more efficient) to produce ethanol, and fat-rich algae to produce a synthetic crude oil (or algae-derived ethanol, which is very inefficient); these options are substantially cleaner than the consumption of petroleum. "Most life cycle analysis results for perennial and ligno-cellulosic crops conclude that biofuels can supplement anthropogenic energy demands and mitigate green house gas emissions to the atmosphere". Using net-carbon-positive oil to fuel transportation is a major source of greenhouse gases, and any one of these developments could replace the energy we derive from oil. Traditional fossil fuel exporters (e.g. Russia) who built their countries' wealth from memorialized plant remains (fossil fuels) and have not yet diversified their energy portfolio to include renewable energy have greater national energy insecurity. In 2021, global renewable energy capacity made record-breaking growth, increasing by 295 gigawatts (295 billion watts) despite supply chain issues and high raw material prices. The European Union was especially impactful—its annual additions increased nearly 30% to 36 gigawatts in 2021. The International Energy Agency's 2022 Renewable Energy Market Update predicts that the global capacity of renewables would increase by an additional 320 gigawatts.
For context, that would almost entirely cover the electricity demand of Germany. However, the report cautioned that current public policies are a threat to future renewable energy growth: "the amount of renewable power capacity added worldwide is expected to plateau in 2023, as continued progress for solar is offset by a 40% decline in hydropower expansion and little change in wind additions." Solar power is generally less vulnerable to enemy action than large fossil fuel and hydro plants and can be more quickly repaired. See also By area :Category:Energy policy by country Cebu Declaration on East Asian Energy Security Energy Independence and Security Act of 2007 Energy Security Act Energy security of Afghanistan Energy security of the People's Republic of China U.S. Energy Independence Economic Energy price Energy supply Oil Shockwave Peak oil Strategic Eco-nationalism Energy and Environmental Security Initiative Energy independence Energy policy Energy security and renewable technology Energy storage Energy superpower Global strategic petroleum reserves High Speed Rail International Energy Agency International Energy Forum International Risk Governance Council National security Nationalization of oil supplies Pro-nuclear movement Strategic reserve References Further reading Deese, David A. (1979). "Energy: Economics, Politics, and Security". International Security. 4 (3): 140–153. External links Journal of Energy Security Institute for the Analysis of Global Security: Energy Security Research United States Energy Security Council Energy and Environmental Security Initiative (EESI) NATO and Energy Security Security Security Security National security
Energy security
[ "Environmental_science" ]
3,009
[ "Energy economics", "Environmental social science", "Energy policy" ]
9,560,493
https://en.wikipedia.org/wiki/Smeltmill
Smeltmills were water-powered mills used to smelt lead or other metals. The older method of smelting lead on wind-blown bole hills began to be superseded by artificially-blown smelters. The first such furnace was built by Burchard Kranich at Makeney, Derbyshire in 1554, but produced poorer-quality lead than the older bole hill. William Humfrey (the Queen's assay master and a leading shareholder in the Company of Mineral and Battery Works) introduced the ore hearth from the Mendips about 1577. This was initially blown by a foot-blast, but was soon developed into a water-powered smelt mill at Beauchief (now a suburb of Sheffield). A typical smelt mill had an orehearth and a slaghearth, the latter being used to reprocess slags from the orehearth in order to recover further lead from the slag. Further reading L. Willies, 'Lead: ore preparation and smelting' in J. Day and R. F. Tylecote, The Industrial Revolution in Metals (Institute of Metals, London 1991), 93-102. Various articles in L. Willies and D. Cranstone (eds.), Boles and Smeltmills (Historical Metallurgy Society, 1992). M. B. Donald, Elizabethan Monopolies (Oliver & Boyd, Edinburgh 1961), 142-78. See also Derbyshire lead mining history. External links North Pennine Smelt Mills – Interactive mapping and information on North Pennine Smelt Mills (Northern Mine Research Society) Yorkshire Smelt Mills – Interactive mapping and information on Yorkshire Smelt Mills (Northern Mine Research Society) Metallurgical processes Lead Smelting
Smeltmill
[ "Chemistry", "Materials_science" ]
365
[ "Metallurgical processes", "Metallurgy", "Smelting" ]
9,561,423
https://en.wikipedia.org/wiki/Quinine%20total%20synthesis
The total synthesis of quinine, a naturally occurring antimalarial drug, was developed over a 150-year period. The development of synthetic quinine is considered a milestone in organic chemistry, although it has never been produced industrially as a substitute for naturally occurring quinine. The subject has also been attended with some controversy: Gilbert Stork published the first stereoselective total synthesis of quinine in 2001, in the process casting doubt on the earlier claim by Robert Burns Woodward and William Doering in 1944, arguing that the final steps required to convert their last synthetic intermediate, quinotoxine, into quinine would not have worked had Woodward and Doering attempted to perform the experiment. A 2001 editorial published in Chemical & Engineering News sided with Stork, but the controversy was eventually laid to rest when Robert Williams and coworkers successfully repeated Woodward's proposed conversion of quinotoxine to quinine in 2007. Chemical structure The aromatic component of the quinine molecule is a quinoline with a methoxy substituent. The amine component has a quinuclidine skeleton and the methylene bridge in between the two components has a hydroxyl group. The substituent at the 3 position is a vinyl group. The molecule is optically active with five stereogenic centers (the N1 and C4 constituting a single asymmetric unit), making synthesis potentially difficult because it is one of 16 stereoisomers. Quinine total synthesis timeline 1817: First isolation of quinine from the cinchona tree by Pierre Joseph Pelletier and Joseph Caventou. 1853: Louis Pasteur obtains quinotoxine (or quinicine in older literature) by acid-catalysed isomerization of quinine. 1856: Sir William Henry Perkin attempts quinine synthesis by oxidation of N-allyltoluidine, based on the erroneous idea that two equivalents of this compound with chemical formula C10H13N plus three equivalents of oxygen yield one equivalent of C20H24N2O2 (quinine's chemical formula) and one equivalent of water. His oxidations with other toluidines set him on the path to discover mauveine. The commercial importance of mauveine eventually led to the birth of the chemical industry. 1907: The correct atom connectivity is established by Paul Rabe. 1918: Paul Rabe and Karl Kindler synthesize quinine from quinotoxine, reversing the Pasteur chemistry. The lack of experimental details in this publication would become a major issue in the Stork–Woodward controversy almost a century later. The first step in this sequence is sodium hypobromite addition to quinotoxine to give an N-bromo intermediate, possibly with structure 2. The second step is organic oxidation with sodium ethoxide in ethanol. Because of the basic conditions, the initial product quininone interconverts with quinidinone via a common enol intermediate and mutarotation is observed. In the third step the ketone group is reduced with aluminum powder and sodium ethoxide in ethanol and quinine can be identified. Quinotoxine is the first relay molecule in the Woodward/Doering claim. 1939: Rabe and Kindler re-investigate a sample left over from their 1918 experiments and identify and isolate quinine (again) together with the diastereomers quinidine, epi-quinine and epi-quinidine. 1940: Robert Burns Woodward signs on as a consultant for the Polaroid Corporation at the request of Edwin H. Land. Quinine is of interest to Polaroid for its light polarizing properties. 1943: Prelog and Proštenik interconvert an allylpiperidine called homomeroquinene and quinotoxine.
Homomeroquinene (the second relay molecule in the Woodward/Doering claim) is obtained in several steps from the biomolecule cinchonine (related to quinidine but without the methoxy group): The key step in the assembly of quinotoxine is a Claisen condensation: 1944: Robert Burns Woodward and W. E. Doering report the synthesis of quinine, starting from 7-hydroxyisoquinoline. Although the title of their one-page publication is The total synthesis of quinine it is oddly not the synthesis of quinine but that of the precursor homomeroquinene (racemic) and then with groundwork already provided by Prelog a year earlier to quinotoxine (enantiopure after chiral resolution) that is described. Woodward and Doering argue that Rabe in 1918 already proved that this compound will eventually give quinine but do not repeat Rabe's work. In this project 27-year-old assistant professor Woodward is the theorist and postdoc Doering (age 26) the bench worker. According to William, Bob was able to boil water but an egg would be a challenge. As many natural quinine resources were tied up in the enemy-held Dutch East Indies, synthetic quinine was a promising alternative for fighting malaria on the battlefield and both men become instant war heroes making headlines in the New York Times, Newsweek and Life. 1944: The then 22-year-old Gilbert Stork writes to Woodward asking him if he did repeat Rabe's work. 1945: Woodward and Doering publish their second lengthy quinine paper. One of the two referees rejects the manuscript (too much historic material, too much experimental details and poor literary style with inclusion of words like adumbrated and apposite) but it is published without changes nonetheless. 1974: Kondo and Mori synthesize racemic vinylic gamma-lactones, a key starting material in Stork's 2001 quinine synthesis. The starting materials are trans-2-butene-1,4-diol and ethyl orthoacetate and the key step is a Claisen rearrangement 1988: Ishibashi & Taniguchi resolve said lactone to enantiopure compounds via chiral resolution: In this process the racemic lactone reacts in aminolysis with (S)-methylbenzylamine assisted by triethylaluminum to a diastereomeric pair of amides which can be separated by column chromatography. The S-enantiomer is converted back to the S-lactone in two steps by hydrolysis with potassium hydroxide and ethylene glycol followed by azeotropic ring closure. 2001: Gilbert Stork publishes his stereoselective quinine synthesis. He questions the validity of the Woodward/Doering claim: "the basis of their characterization of Rabe’s claim as “established” is unclear". M. Jacobs, writing in The Chemical & Engineering News, is equally critical. 2007: Researcher Jeffrey I. Seeman in a 30-page review concludes that the Woodward–Doering–Rabe–Kindler total synthesis of quinine is a valid achievement. He notes that Paul Rabe was an extremely experienced alkaloid chemist, that he had ample opportunity to compare his quinine reaction product with authentic samples and that the described 1918 chemistry was repeated by Rabe although not with quinotoxine itself but still with closely related derivatives. 2008: Smith and Williams revisit and confirm Rabe's d-quinotoxine to quinine route. 2018: Nuno Maulide and his team report the total synthesis of quinine via C–H activation, including analogues with improved antimalarial activity Stork quinine total synthesis The Stork quinine synthesis starts from chiral (S)-4-vinylbutyrolactone 1. 
The compound is obtained by chiral resolution and in fact, in the subsequent steps all stereogenic centers are put in place by chiral induction: the sequence does not contain asymmetric steps. The lactone is ring-opened with diethylamine to amide 2 and its hydroxyl group is protected as a tert-butyldimethyl silyl ether (TBS) in 3. The C5 and C6 atoms are added as tert-butyldiphenylsilyl (TBDPS) protected iodoethanol in a nucleophilic substitution of acidic C4 with lithium diisopropylamide (LDA) at −78 °C to 4 with correct stereochemistry. Removal of the silyl protecting group with p-toluenesulfonic acid to alcohol 4b and ring-closure by azeotropic distillation returns the compound to lactone 5 (direct alkylation of 1 met with undisclosed problems). The lactone is then reduced to the lactol 5b with diisobutylaluminum hydride and its liberated aldehyde reacts in a Wittig reaction with methoxymethylenetriphenylphosphine (delivering the C8 atom) to form enol ether 6. The hydroxyl group is replaced in a Mitsunobu reaction by an azide group with diphenylphosphoryl azide in 7 and acid hydrolysis yields the azido aldehyde 8. The methyl group in 6-methoxy-4-methylquinoline 9 is sufficiently acidic for nucleophilic addition of its anion (by reaction with LDA) to the aldehyde group in 8 to form 10 as a mixture of epimers. This is of no consequence for stereocontrol because in the next step the alcohol is oxidized in a Swern oxidation to ketone 11. A Staudinger reaction with triphenylphosphine closes the ring between the ketone and the azide to the tetrahydropyridine 12. The imine group in this compound is reduced to the amine 13 with sodium borohydride with the correct stereospecificity. The silyl protecting group is removed with hydrogen fluoride to alcohol 14 and then activated as a mesyl leaving group by reaction with mesyl chloride in pyridine which enables the third ring closure to 15. In the final step the C9 hydroxyl group was introduced by oxidation with sodium hydride, dimethylsulfoxide and oxygen with quinine to epiquinine ratio of 14:1. Woodward–Doering formal quinine total synthesis The 1944 Woodward–Doering synthesis starts from 7-hydroxyisoquinoline 3 for the quinuclidine skeleton which is somewhat counter intuitive because one goes from a stable heterocyclic aromatic system to a completely saturated bicyclic ring. This compound (already known since 1895) is prepared in two steps. The first reaction step is condensation reaction of 3-hydroxybenzaldehyde 1 with (formally) the diacetal of aminoacetaldehyde to the imine 2 and the second reaction step is cyclization in concentrated sulfuric acid. Isoquinoline 3 is then alkylated in another condensation by formaldehyde and piperidine and the product is isolated as the sodium salt of 4. Hydrogenation at 220 °C for 10 hours in methanol with sodium methoxide liberates the piperidine group and leaving the methyl group in 5 with already all carbon and nitrogen atoms accounted for. A second hydrogenation takes place with Adams catalyst in acetic acid to tetrahydroisoquinoline 6. Further hydrogenation does not take place until the amino group is acylated with acetic anhydride in methanol but by then 7 is again hydrogenated with Raney nickel in ethanol at 150 °C under high pressure to decahydroisoquinoline 8. The mixture of cis and trans isomers is then oxidized by chromic acid in acetic acid to the ketone 9. 
Only the cis isomer crystallizes and is used in the next reaction step, a ring opening with the alkyl nitrite ethyl nitrite and sodium ethoxide in ethanol to give 10, with a newly formed carboxylic ester group and an oxime group. The oxime group is hydrogenated to the amine 11 with platinum in acetic acid, and alkylation with iodomethane gives the quaternary ammonium salt 12 and subsequently the betaine 13 after reaction with silver oxide. Quinine's vinyl group is then constructed by Hofmann elimination with sodium hydroxide in water at 140 °C. This process is accompanied by hydrolysis of both the ester and the amide group, but it is not the free amine that is isolated but the urea 14, obtained by reaction with potassium cyanate. In the next step the carboxylic acid group is esterified with ethanol and the urea group replaced with a benzoyl group. The final step is a Claisen condensation of 15 with ethyl quininate 16, which after acidic workup yields racemic quinotoxine 17. The desired enantiomer is obtained by chiral resolution with the chiral dibenzoyl ester of tartaric acid. The conversion of this compound to quinine is based on the Rabe–Kindler chemistry discussed in the timeline. External links Quinine Total Syntheses @ SynArchive.com Quinine story at Harvard.edu References Total synthesis Quinine
Quinine total synthesis
[ "Chemistry" ]
2,786
[ "Total synthesis", "Chemical synthesis" ]
9,561,628
https://en.wikipedia.org/wiki/Theology%20of%20creationism%20and%20evolution
The theology of creation and evolution is theology that deals with issues concerning the universe, the life, and especially man, in terms of creation or evolution. Creationism Creationism is the religious belief that the universe and life originated "from specific acts of divine creation", as opposed to the scientific conclusion that they came about through natural processes such as evolution. Churches address the theological implications raised by creationism and evolution in different ways. Evolution Most contemporary Christian leaders and scholars from many mainstream churches, such as Roman Catholic, Anglican and some Lutheran denominations, reject reading the Bible as though it could shed light on the physics of creation instead of the spiritual meaning of creation. According to the Archbishop of Canterbury, Rowan Williams, "[for] most of the history of Christianity there's been an awareness that a belief that everything depends on the creative act of God, is quite compatible with a degree of uncertainty or latitude about how precisely that unfolds in creative time." The Roman Catholic Church now explicitly accepts the theory of evolution, (albeit with most conservatives and traditionalists within the Church in dissent), as do Anglican scholars such as John Polkinghorne, arguing that evolution is one of the principles through which God created living beings. Earlier examples of this attitude include Frederick Temple, Asa Gray and Charles Kingsley, who were enthusiastic supporters of Darwin's theories on publication, and the French Jesuit priest and geologist Pierre Teilhard de Chardin, who saw evolution as confirmation of his Christian beliefs, despite condemnation from Church authorities for his more speculative theories. Liberal theology assumes that Genesis is a poetic work, and that just as human understanding of God increases gradually over time, so does the understanding of his creation. In fact, both Jews and Christians have been considering the idea of the creation narrative as an allegory (instead of an historical description) long before the development of Darwin's theory of evolution. Two notable examples are Saint Augustine (4th century) who, on theological grounds, argued that everything in the universe was created by God in the same instant, (and not in seven days as a plain account of Genesis would require) and the 1st century Jewish scholar Philo of Alexandria, who wrote that it would be a mistake to think that creation happened in six days, or in any set amount of time. See also Anti-intellectualism Faith and rationality References Creationism Evolution and religion Intelligent design controversies
Theology of creationism and evolution
[ "Biology" ]
483
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
9,561,807
https://en.wikipedia.org/wiki/Michinori%20Yamashita
Michinori Yamashita (born 1953 in Japan) is a professor (Japanese mathematician) at the Rissho University. He studied at the Sophia University under Yukiyoshi Kawada and Kiichi Morita. External links Home page at Rissho Univ. 20th-century Japanese mathematicians 21st-century Japanese mathematicians Number theorists Living people 1953 births Date of birth missing (living people) Academic staff of Rissho University Sophia University alumni
Michinori Yamashita
[ "Mathematics" ]
91
[ "Number theorists", "Number theory" ]
9,562,761
https://en.wikipedia.org/wiki/Microsoft%20Office%20PerformancePoint%20Server
Microsoft Office PerformancePoint Server is a business intelligence software product released in 2007 by Microsoft. The product was generally an integration of the acquisitions from ProClarity - the Planning Server and Monitoring Server - into Microsoft's SharePoint server product line. Although the product was discontinued in 2009, its dashboard, scorecard, and analytics capabilities were incorporated into SharePoint 2010 and later versions. PerformancePoint Server also provided a planning and budgeting component directly integrated with Excel. History Microsoft offered preview releases of PerformancePoint Server starting in mid-2006. Previews of the product were formed from Business Scorecard Manager 2005 and the Planning Server component. The ProClarity and Great Plains acquisitions brought additional analytics and planning/reporting capabilities, as well as the companion products ProClarity 6.3 and FRx. PerformancePoint Server was officially released in November 2007. Microsoft discontinued PerformancePoint Server as an independent product in 2009 and folded its dashboard, scorecard and analytics capabilities into PerformancePoint Services in SharePoint Server 2010. Monitoring Server Component Business monitoring capabilities, including dashboards, scorecards & key performance indicators, navigable reports for deeper analysis, strategy maps, and linked filtering, are provided by PerformancePoint's Monitoring Server component. A Dashboard Designer application that is distributed from Monitoring Server enables business analysts or IT administrators to: create & test data source connections create views that use those data connections assemble the views into a dashboard deploy the dashboard as a SharePoint page Dashboard Designer saved content and security information back to the Monitoring Server. Data source connections, such as OLAP cubes or relational tables, were also made through Monitoring Server. After a dashboard had been published to the Monitoring Server database, it would be deployed as a SharePoint page and shared with other users as such. When the pages were opened in a web browser, Monitoring Server updated the data in the views by connecting back to the original data sources. Planning Server Component PerformancePoint's Planning Server component supported maintenance of logical business models, budget & approval workflows, and enterprise data sources, and it followed Generally Accepted Accounting Principles. Planning Server made use of Excel for input and line-of-business reporting, as well as SQL Server for storing and processing business models. Management Reporter Component The Management Reporter component was designed to perform financial reporting and could read PerformancePoint Planning models directly. A development kit was also available to allow this component to read other models. References External links PerformancePoint Server 2007 Developer Portal Data Puzzle PerformancePoint Insider Performance Point Planning being discontinued Microsoft Office servers Business intelligence software Data management
Microsoft Office PerformancePoint Server
[ "Technology" ]
508
[ "Data management", "Data" ]
9,563,449
https://en.wikipedia.org/wiki/Dichlorosilane
Dichlorosilane, or DCS as it is commonly known, is a chemical compound with the formula H2SiCl2. In its major use, it is mixed with ammonia (NH3) in LPCVD chambers to grow silicon nitride in semiconductor processing. A higher DCS:NH3 ratio (i.e. 16:1) usually results in lower-stress nitride films. History Dichlorosilane was originally prepared by Stock and Somieski by the reaction of SiH4 with hydrogen chloride. Dichlorosilane reacts with water vapor to give, initially, monomeric prosiloxane (H2SiO). Monomeric H2SiO polymerizes rapidly upon condensation or in solution. Reactions and formation Most dichlorosilane results as a byproduct of the reaction of HCl with silicon, a reaction intended to give trichlorosilane. Disproportionation of trichlorosilane is the preferred route: 2 SiHCl3 ⇌ SiCl4 + SiH2Cl2 Hydrolysis Stock and Somieski completed the hydrolysis of dichlorosilane by putting the solution of H2SiCl2 in benzene in brief contact with a large excess of water. A large-scale hydrolysis was done in a mixed ether/alkane solvent system at 0 °C, which gave a mixture of volatile and nonvolatile [H2SiO]n. Fischer and Kiegsmann attempted the hydrolysis of dichlorosilane in hexane, using NiCl2⋅6H2O as the water source, but the system failed. They did, however, complete the hydrolysis using dilute Et2O/CCl4 at -10 °C. The purpose of completing the hydrolysis of dichlorosilane is to collect the concentrated hydrolysis products, distill the solution, and retrieve a solution of [H2SiO]n oligomers in dichloromethane. These methods were used to obtain cyclic polysiloxanes. Another purpose for hydrolyzing dichlorosilane is to obtain linear polysiloxanes, which can be done by many different complex methods. The hydrolysis of dichlorosilane in diethyl ether, dichloromethane, or pentane gives cyclic and linear polysiloxanes. Decomposition Su and Schlegel studied the decomposition of dichlorosilane using transition state theory (TST) with calculations at the G2 level. Wittbrodt and Schlegel worked with these calculations and improved them using the QCISD(T) method. The primary decomposition products were determined by this method to be SiCl2 and SiClH. Ultrapurification Dichlorosilane must be ultrapurified and concentrated in order to be used for the manufacturing of semiconducting epitaxial silicon layers, which are used for microelectronics. The buildup of the silicon layers produces thick epitaxial layers, which creates a strong structure. Advantage of use Dichlorosilane is used as a starting material for semiconducting silicon layers found in microelectronics. It is used because it decomposes at a lower temperature and has a higher growth rate of silicon crystals. Safety hazards It is a chemically active gas, which will readily hydrolyze and self-ignite in air. Dichlorosilane is also very toxic, and preventative measures must be used for any experiment involving the use of the chemical. Hazards also include skin and eye irritation and toxicity by inhalation. References External links Safety data sheet for dichlorosilane from Praxair® Chlorosilanes Inorganic silicon compounds
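The disproportionation route fixes the mass balance between trichlorosilane feed and dichlorosilane product. As a rough illustration (a minimal sketch with textbook molar masses, not process data; real reactors run well below full conversion and recycle the SiCl4), the theoretical yield can be computed as follows:

```python
# Theoretical dichlorosilane (DCS) yield from trichlorosilane (TCS) disproportionation:
#   2 SiHCl3 <=> SiCl4 + SiH2Cl2
# Illustrative stoichiometry only; actual conversion per pass is much lower.

M = {"Si": 28.09, "H": 1.008, "Cl": 35.45}  # molar masses, g/mol

m_tcs = M["Si"] + M["H"] + 3 * M["Cl"]       # SiHCl3  ~135.4 g/mol
m_dcs = M["Si"] + 2 * M["H"] + 2 * M["Cl"]   # SiH2Cl2 ~101.0 g/mol
m_stc = M["Si"] + 4 * M["Cl"]                # SiCl4   ~169.9 g/mol

# 2 mol TCS give 1 mol DCS and 1 mol SiCl4, so per kilogram of TCS at full conversion:
dcs_per_kg_tcs = m_dcs / (2 * m_tcs)
stc_per_kg_tcs = m_stc / (2 * m_tcs)

print(f"DCS:   {dcs_per_kg_tcs:.3f} kg per kg TCS")   # ~0.373
print(f"SiCl4: {stc_per_kg_tcs:.3f} kg per kg TCS")   # ~0.627
```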
Dichlorosilane
[ "Chemistry" ]
768
[ "Inorganic silicon compounds", "Inorganic compounds" ]
9,563,728
https://en.wikipedia.org/wiki/Rudolf%20Luneburg
Rudolf Karl Lüneburg (30 March 1903, Volkersheim (Bockenem) – 19 August 1949, Great Falls, Montana; after his emigration at first Lueneburg, later Luneburg, sometimes misspelled Luneberg or Lunenberg) was a professor of mathematics and optics at the Dartmouth College Eye Institute. He was born in Germany, received his doctorate at Göttingen, and emigrated to the United States in 1935. His work included an analysis of the geometry of visual space as expected from physiology and the assumption that the angle of vergence provides a constant measure of distance. From these premises he concluded that near-field visual space is hyperbolic. See also Luneburg lens Luneburg method 1903 births 1949 deaths Emigrants from Nazi Germany to the United States Geometers Optical physicists Dartmouth College faculty 20th-century German mathematicians Academic staff of Leiden University University of Göttingen alumni New York University faculty University of Southern California faculty Brown University faculty
Rudolf Luneburg
[ "Mathematics" ]
205
[ "Geometers", "Geometry" ]
17,533,891
https://en.wikipedia.org/wiki/Virtual%20manipulatives%20for%20mathematics
Virtual manipulatives for mathematics are digital representations of physical mathematics manipulatives used in classrooms. The goal of this technology is to allow learners to investigate, explore and derive mathematical concepts using concrete models. Common manipulatives include base ten blocks, coins, 3D blocks, tangrams, rulers, fraction bars, algebra tiles, geoboards, geometric planes, and solid figures. Use in special education Virtual math manipulatives are sometimes included in the general academic curriculum as assistive technology for students with physical or mental disabilities. Students with disabilities are often still able to participate in activities using virtual manipulatives even if they are unable to engage in physical activity. Further reading Moyer, P. S., Bolyard, J. J., & Spikell, M. A. (2000). What are virtual manipulatives? Teaching Children Mathematics, 8(6), 372–377. Moyer, P. S., Niezgoda, D., & Stanley, J. (2005). Young children's use of virtual manipulatives and other forms of mathematical representations. In W. J. Masalaski & P. C. Elliot (Eds.), Technology-Supported Mathematics Learning Environments (pp. 17–34). Reston, VA: National Council of Teachers of Mathematics. Ortiz, Enrique (2017). Pre-service teachers' ability to identify and implement cognitive levels in mathematics learning. Issues in the Undergraduate Mathematics Preparation of School Teachers (IUMPST): The Journal (Technology), 3, pp. 1–14. Ortiz, Enrique, Eisenreich, Heidi & Tapp, Laura (2019). Physical and virtual manipulative framework conceptions of undergraduate pre-service teachers. International Journal for Mathematics Teaching and Learning, 20(1), 62–84. External links Pre-service teachers' ability to identify and implement cognitive levels in mathematics learning. Physical and virtual manipulative framework conceptions of undergraduate pre-service teachers. References Mathematical manipulatives
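To make the idea of a digital model concrete, the sketch below (an illustrative toy example, not taken from any of the tools or papers cited above) represents a whole number the way a base-ten-block manipulative would, as counts of hundreds flats, tens rods and unit cubes that a learner could rearrange on screen:

```python
# Toy model of a base-ten-block virtual manipulative.

def base_ten_blocks(n: int) -> dict:
    """Decompose a non-negative integer into base-ten block counts."""
    if n < 0:
        raise ValueError("expected a non-negative integer")
    return {"hundreds": n // 100, "tens": (n % 100) // 10, "ones": n % 10}

def render(blocks: dict) -> str:
    """Crude text rendering: one symbol per block, as an on-screen tray might show."""
    return ("[100] " * blocks["hundreds"]
            + "[10] " * blocks["tens"]
            + "[1] " * blocks["ones"]).strip()

b = base_ten_blocks(237)
print(b)          # {'hundreds': 2, 'tens': 3, 'ones': 7}
print(render(b))  # [100] [100] [10] [10] [10] [1] [1] [1] [1] [1] [1] [1]
```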
Virtual manipulatives for mathematics
[ "Mathematics" ]
491
[ "Recreational mathematics", "Mathematical manipulatives" ]
17,534,875
https://en.wikipedia.org/wiki/Pill%20thermometer
A pill thermometer is an ingestible thermometer that allows a person's core temperature to be continuously monitored. It was developed by NASA in collaboration with Johns Hopkins University for use with astronauts. Since then the pill has been used by mountain climbers, football players, cyclists and F1 drivers, and in the mining industry. The thermometer pill is currently manufactured by the company HQ Inc under the brand name CorTemp. References External links HQ Inc - the manufacturer of the pill The pill's patent Transcript of the NOVA episode, "Deadly Ascent", which features the pill prominently Thermometers NASA spin-off technologies
Pill thermometer
[ "Technology", "Engineering" ]
131
[ "Thermometers", "Measuring instruments" ]
17,538,021
https://en.wikipedia.org/wiki/Petitgrain
Petitgrain is an essential oil that is extracted from the leaves and green twigs of the bitter orange tree (Citrus aurantium ssp. amara) via steam distillation. It is also known as petitgrain bigarade. Etymology Petitgrain (Fr.: "little grain") takes its name from the fact that it used to be extracted from the unripe small green fruits of the plant. Production Its main regions of production are Paraguay and France, with the former's product being of higher odour tenacity. The oil has a greenish woody orange smell that is widely used in perfumery and found in colognes. Though distilled from the same botanical species as neroli and bitter orange essential oil, petitgrain bigarade oil possesses its own characteristically unique aroma. The oil is distilled from the leaves and sometimes the twigs and branches of the tree, whereas neroli is distilled from the blossoms and bitter orange oil is typically cold pressed from the rinds of the fruits. Petitgrain mandarin (Petit grain Mandarine) is distilled from leaves and branches of trees producing mandarin fruit. Use It is used in perfumery and aromatherapy as a fresh-scented essential oil. As of 1923, it was part of the formula for Pepsi-Cola. References Essential oils Oranges (fruit)
Petitgrain
[ "Chemistry" ]
282
[ "Essential oils", "Natural products" ]
17,538,047
https://en.wikipedia.org/wiki/Willam%E2%80%93Warnke%20yield%20criterion
The Willam–Warnke yield criterion is a function that is used to predict when failure will occur in concrete and other cohesive-frictional materials such as rock, soil, and ceramics. The yield criterion has the functional form f(I1, J2, J3) = 0, where I1 is the first invariant of the Cauchy stress tensor, and J2 and J3 are the second and third invariants of the deviatoric part of the Cauchy stress tensor. There are three material parameters (σc, the uniaxial compressive strength; σt, the uniaxial tensile strength; σb, the equibiaxial compressive strength) that have to be determined before the Willam–Warnke yield criterion may be applied to predict failure. In terms of these invariants, the Willam–Warnke yield criterion can be expressed as a relation between √J2, a function λ that depends on the Lode angle θ and the three material parameters, and a quantity B that depends only on the material parameters. The function λ can be interpreted as a friction angle which depends on the Lode angle, and the quantity B is interpreted as a cohesion pressure. The Willam–Warnke yield criterion may therefore be viewed as a combination of the Mohr–Coulomb and the Drucker–Prager yield criteria. Willam–Warnke yield function In the original paper, the three-parameter Willam–Warnke yield function was expressed in terms of the first invariant of the stress tensor I1, the second invariant of the deviatoric part of the stress tensor J2, the yield stress in uniaxial compression σc, and the Lode angle θ given by θ = (1/3) arccos[(3√3/2) J3/J2^(3/2)]. The locus of the boundary of the stress surface in the deviatoric stress plane is expressed in polar coordinates by a quantity r(θ), which is built from two position vectors rt and rc. The quantities rt and rc describe the position vectors at the locations θ = 0° and θ = 60°, and can be expressed in terms of σc, σb and σt (here σb is the failure stress under equi-biaxial compression and σt is the failure stress under uniaxial tension). A further parameter of the model is likewise given in terms of the three strengths. The Willam–Warnke yield condition can also be written in the Haigh–Westergaard coordinates (ξ, ρ, θ). Modified forms of the Willam–Warnke yield criterion An alternative form of the Willam–Warnke yield criterion in Haigh–Westergaard coordinates is the Ulm–Coussy–Bazant form, written in terms of two quantities that are interpreted as friction coefficients. For the yield surface to be convex, the Willam–Warnke yield criterion requires that these coefficients satisfy two inequality constraints. See also Yield (engineering) Yield surface Plasticity (physics) References Chen, W. F. (1982). Plasticity in Reinforced Concrete. McGraw Hill. New York. External links Kaspar Willam and E.P. Warnke (1974). Constitutive model for the triaxial behavior of concrete Palko, J. L. (1993). Interactive reliability model for whisker-toughened ceramics The "Chunnel" Fire. I: Chemoplastic softening in rapidly heated concrete by Franz-Josef Ulm, Olivier Coussy, and Zdeněk P. Bažant. Plasticity (physics) Solid mechanics Yield criteria
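Since every form of the criterion is built from the invariants I1, J2, J3 and the Lode angle, the sketch below shows how those ingredients are computed from a stress tensor. It is only the invariant bookkeeping: the Willam–Warnke surface itself (the r(θ) interpolation and the strength parameters σc, σt, σb from the 1974 paper) is deliberately not reproduced here, so the final yield check is left as a placeholder.

```python
import numpy as np

def stress_invariants(sigma: np.ndarray):
    """Return I1, J2, J3 and the Lode angle (radians) for a 3x3 Cauchy stress tensor.

    Assumes a non-hydrostatic stress state (J2 > 0).
    """
    I1 = np.trace(sigma)
    s = sigma - (I1 / 3.0) * np.eye(3)      # deviatoric part of the stress tensor
    J2 = 0.5 * np.tensordot(s, s)           # J2 = (1/2) s:s
    J3 = np.linalg.det(s)
    # Lode angle: cos(3*theta) = (3*sqrt(3)/2) * J3 / J2**(3/2)
    cos3t = np.clip(1.5 * np.sqrt(3.0) * J3 / J2**1.5, -1.0, 1.0)
    theta = np.arccos(cos3t) / 3.0
    return I1, J2, J3, theta

# Example: uniaxial compression of 30 MPa (tension-positive sign convention)
sigma = np.diag([-30.0, 0.0, 0.0])
I1, J2, J3, theta = stress_invariants(sigma)
print(I1, np.sqrt(J2), np.degrees(theta))   # -30.0, ~17.32, 60 deg (compressive meridian)

# A full Willam-Warnke check would now evaluate r(theta) from sigma_c, sigma_t and
# sigma_b and compare sqrt(J2) against the resulting surface; that step is omitted.
```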
Willam–Warnke yield criterion
[ "Materials_science" ]
624
[ "Deformation (mechanics)", "Plasticity (physics)" ]
17,538,745
https://en.wikipedia.org/wiki/Multimedia%20telephony
The 3GPP/NGN IP Multimedia Subsystem (IMS) multimedia telephony service (MMTel) is a global standard based on the IMS, offering converged, fixed and mobile real-time multimedia communication using media capabilities such as voice, real-time video, text, file transfer and sharing of pictures, audio and video clips. With MMTel, users can add and drop media during a session: a user can start with chat, add voice (for instance mobile VoIP), add another caller, add video, share media and transfer files, and drop any of these without losing the session or having to end it. MMTel is one of the registered ICSI (IMS Communication Service Identifier) feature tags. Description The MMTel standard is a joint project between the 3GPP and ETSI/TISPAN standardization bodies. It is today the only global standard that defines an evolved telephony service enabling real-time multimedia communication with the characteristics of a telephony service over fixed broadband, fixed narrowband and mobile access types. MMTel also provides a standardized network-to-network interface (NNI). This allows operators to interconnect their networks, which in turn enables users belonging to different operators to communicate with each other using the full set of media capabilities and supplementary services defined within the MMTel service definition. One of the main differences of the MMTel standard is that, in contrast to legacy circuit-switched telephony services, IP transport is used over the mobile access. This means that the mobile access technologies in main focus for MMTel are access types such as high-speed packet access (HSPA), 3GPP Long-Term Evolution (LTE) and EDGE Evolution, all of which are developed with efficient IP transport in mind. MMTel allows a single SIP session to control virtually all MMTel supplementary services and MMTel media. All available media components can easily be accessed or activated within the session. Employing a single session for all media parts means that no additional sessions need to be set up to activate video, to add new users, or to start transferring a file. Even though it is possible to manage such user scenarios with several sessions – for instance, using a circuit-switched voice service that is complemented with a packet-switched video session, a messaging service or both – there are some concrete benefits to MMTel's single-session approach. A single SIP session in an all-IP environment benefits conferencing; in particular, lip synchronization, which is quite complex when the voice part is carried over a circuit-switched service and the video part is carried over a packet-switched service. In fixed-mobile convergence scenarios, the single-session approach enables all media parts of the multimedia communication solution to interoperate. References External links 3GPP Stage 1 (requirements) Stage 3 (protocol) Media handling and interactions Charging UE conformance test specifications (protocol conformance) UE conformance test specifications (Implementation Conformance Statement) UE conformance test specifications (TTCN Test Suite) IMS services Multimedia Network architecture Voice over IP Mobile telecommunications standards Telecommunications infrastructure
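As a rough illustration of the single-session idea, the sketch below builds a simplified SDP offer describing audio, video and an MSRP messaging stream inside one session; adding or dropping a medium then only changes the corresponding m= line in a re-offer. The addresses, ports and codec choices are invented for the example, and this is not the normative MMTel signalling, just the general SIP/SDP mechanism it relies on:

```python
# Illustrative only: a toy SDP offer for one session carrying messaging, voice and
# video. Real MMTel signalling adds the MMTel ICSI feature tag, preconditions and
# supplementary-service machinery defined by 3GPP, none of which is shown here.

def sdp_offer(ip: str, media: dict) -> str:
    lines = [
        "v=0",
        f"o=user 2890844526 2890844526 IN IP4 {ip}",
        "s=MMTel example session",
        f"c=IN IP4 {ip}",
        "t=0 0",
    ]
    if "message" in media:
        lines += [f"m=message {media['message']} TCP/MSRP *", "a=accept-types:text/plain"]
    if "audio" in media:
        lines += [f"m=audio {media['audio']} RTP/AVP 96", "a=rtpmap:96 AMR-WB/16000"]
    if "video" in media:
        lines += [f"m=video {media['video']} RTP/AVP 97", "a=rtpmap:97 H264/90000"]
    return "\r\n".join(lines) + "\r\n"

# Start with chat only, then "add voice and video" by re-offering more m= lines
# within the same SIP session.
print(sdp_offer("192.0.2.10", {"message": 49170}))
print(sdp_offer("192.0.2.10", {"message": 49170, "audio": 49172, "video": 49174}))
```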
Multimedia telephony
[ "Technology", "Engineering" ]
654
[ "Network architecture", "Computer networks engineering", "Mobile telecommunications standards", "Mobile telecommunications", "Multimedia", "IMS services" ]
17,539,184
https://en.wikipedia.org/wiki/Opaque%20travel%20inventory
An opaque inventory is the market for selling unsold travel inventory at a discounted price. The inventory is called "opaque" because the specific suppliers (i.e. hotel, airline, etc.) remain hidden until after the purchase has been completed. This is done to prevent sales of unsold inventory from cannibalizing full-price retail sales. According to TravelClick, the opaque channel accounted for 6% of all hotel reservations for major brands in 2012, up 2% from 2010. The primary consumers of opaque inventories are price-conscious people whose primary aim is the cheapest possible travel and who are less concerned with the specifics of their travel plans. Hotel discounts of 30-60% are typical, and bargains are greater at higher-star hotels. While one has control over the dates and times of a travel itinerary, the downside is that these purchases are absolutely non-refundable and non-changeable and, as noted above, the specific hotel or airline is not revealed until after purchase. The main sources of opaque inventories are Hotwire.com and Priceline.com, but Travelocity.com and Expedia.com also offer opaque booking options. Hotwire has a fixed pricing model, where it sells a room at a fixed price with a limited description of a given venue, whereas Priceline offers both a similar fixed pricing model and a bidding model where travelers bid for a hotel room from among a group of hotels of a given star rating and location. Typically hotel deals are greater than airline discounts on opaque travel sites, mainly because airlines have limited seating and also take monetary cuts when publishing discounted fares, whereas a hotel sells to opaque sites to fill empty rooms. In response to these opaque travel sites, there are also 'decoder' sites that use feedback from recent purchasers to help construct a profile of the opaque travel property. References Travel technology Inventory optimization Computer reservation systems
Opaque travel inventory
[ "Technology" ]
395
[ "Computer reservation systems", "Computer systems" ]
17,539,252
https://en.wikipedia.org/wiki/Security%20level%20management
Security level management (SLM) comprises a quality assurance system for information system security. The aim of SLM is to display the information technology (IT) security status transparently across an organization at any time, and to make IT security a measurable quantity. Transparency and measurability are the prerequisites for improving IT security through continuous monitoring. SLM is oriented towards the phases of the Deming Cycle/Plan-Do-Check-Act (PDCA) Cycle: within the scope of SLM, abstract security policies or compliance guidelines at a company are transposed into operative, measureable specifications for the IT security infrastructure. The operative aims form the security level to be reached. The security level is checked permanently against the current status of the security software used (malware scanner, update/patch management, vulnerability scanner, etc.). Deviations can be recognised at an early stage and adjustments made to the security software. In corporate contexts, SLM typically falls under the range of duties of the chief security officer (CSO), the chief information officer (CIO), or the chief information security officer (CISO), who report directly to an executive board on IT security and data availability. Classification SLM is related to the disciplines of security information management (SIM) and security event management (SEM) (as well as their combined practice, security information and event management (SIEM)), which Gartner defines as follows: […] SIM provides reporting and analysis of data primarily from host systems and applications, and secondarily from security devices — to support security policy compliance management, internal threat management and regulatory compliance initiatives. SIM supports the monitoring and incident management activities of the IT security organization […]. SEM improves security incident response capabilities. SEM processes near-real-time data from security devices, network devices and systems to provide real-time event management for security operations. […] SIM and SEM relate to the infrastructure for realising superordinate security aims, but are not descriptive of a strategic management system with aims, measures, revisions and actions to be derived from this. SLM unites the requisite steps for realising a measurable, functioning IT security structure in a management control cycle. SLM can be categorised under the strategic panoply of IT governance, which, via suitable organisation structures and processes, ensures that IT supports corporate strategy and objectives. SLM allows CSOs, CIOs and CISOs to prove that SLM is contributing towards protecting electronic data relevant to processes adequately, and therefore makes a contribution in part to IT governance. Procedure Defining the Security Level (Plan): Each company specifies security policies. It defines aims in relation to the integrity, confidentiality, availability and authority of classified data. In order to be able to verify compliance with these specifications, concrete objectives for the security software used in the company must be derived from the abstract security policies. A security level consists of a collection of measurable limiting and threshold values. Limits and thresholds must be defined separately for different system classes of the network, for example, because the local IT infrastructure and other framework conditions must be taken into account. 
Overarching security policies therefore result in different operational objectives, such as: security-relevant software updates should be installed on all workstations in the network no later than 30 days after their release, and on certain server and host systems no later than 60 days after release. The IT control manual Control Objectives for Information and Related Technologies (COBIT) provides companies with instructions on transposing subordinate, abstract aims into measurable objectives in a few steps. Collecting and Analysing Data (Do): Information on the current status of the systems in a network can be obtained from the log data and the status reports of the management consoles of the security software used. Monitoring solutions that analyse the security software of different vendors can simplify and accelerate data collection. Checking the Security Level (Check): SLM provides continual comparison of the defined security level with the actual values collected. Automated real-time comparison supplies companies with continuous monitoring of the security situation of the entire company network. Adjusting the Security Structure (Act): Efficient SLM allows trend analyses and long-term comparative assessments to be made. By continuously monitoring the security level, weak spots in the network can be identified at an early stage and proactive adjustments can be made to the security software to improve system protection. Standards Besides defining the specifications for engineering, introducing, operating, monitoring, maintaining and improving a documented information security management system, ISO/IEC 27001 also defines the specifications for implementing suitable security mechanisms. ITIL, a collection of best practices for IT control processes, goes far beyond IT security. In this context, it supplies criteria for how security officers can conceive IT security as an independent, qualitatively measurable service and integrate it into the universe of business-process-oriented IT processes. ITIL also works from the top down with policies, processes, procedures and job-related instructions, and assumes that both superordinate and operative aims need to be planned, implemented, controlled, evaluated and adjusted. See also Information security Security management IT management Information security management References External links COBIT Summary and material from the ISACA ISO/IEC 27000 International Organization for Standardization ITIL Summary and material from AXELOS Data security
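The Check phase is essentially a comparison of measured indicators against the thresholds that define the security level. A minimal sketch of that comparison is shown below, using the patch-age objectives from the example above as thresholds; the host names, field names and data source are invented for illustration, and a real deployment would pull these values from the management consoles of the security tools in use:

```python
# Schematic "Check" phase: compare measured values against the defined security level.
# Thresholds follow the example objectives in the text (30 days for workstations,
# 60 days for servers); everything else here is made up for illustration.

SECURITY_LEVEL = {          # maximum allowed patch age in days, per system class
    "workstation": 30,
    "server": 60,
}

inventory = [               # would normally come from patch-management reports
    {"host": "ws-0113", "class": "workstation", "patch_age_days": 12},
    {"host": "ws-0247", "class": "workstation", "patch_age_days": 41},
    {"host": "srv-db01", "class": "server", "patch_age_days": 75},
]

def check_security_level(hosts, level):
    """Return the hosts that deviate from the defined security level."""
    return [h for h in hosts if h["patch_age_days"] > level[h["class"]]]

for deviation in check_security_level(inventory, SECURITY_LEVEL):
    # In the Act phase these findings would trigger adjustments (e.g. forced updates).
    print(f"{deviation['host']}: patch age {deviation['patch_age_days']} d "
          f"exceeds limit {SECURITY_LEVEL[deviation['class']]} d")
```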
Security level management
[ "Engineering" ]
1,087
[ "Cybersecurity engineering", "Data security" ]
17,539,307
https://en.wikipedia.org/wiki/International%20Society%20of%20Limnology
The International Society of Limnology (SIL) is an international scientific society that disseminates information among limnologists, those who study all aspects of inland waters, including their physics, chemistry, biology, geology, and management. It was founded by August Thienemann and Einar Naumann in 1922 as the International Association of Theoretical and Applied Limnology and Societas Internationalis Limnologiae. It had about 2800 members in 2008. SIL celebrated its 100th anniversary at a meeting in Berlin, Germany, in August 2022. SIL publishes the following scientific publications: the journal Fundamental and Applied Limnology:Archiv für Hydrobiologie ; prior to 2007, it was called Archiv für Hydrobiologie. Communications (Mitteilungen), irregular publication. Limnology in Developing Countries, a book series. Congress proceedings, until 2007, published as Verhandlungen Internationale Vereinigung für theoretische und angewandte Limnologie. SIL has discontinued publication of the Verhandlungen and has replaced it with a peer-reviewed journal entitled Inland Waters. The new journal was launched at the 31st SIL Congress in Cape Town 2010, with first publication in 2011. The journal is supported by the electronic submission and tracking system of the Freshwater Biological Association. Manuscripts will be published consecutively online (as accepted) and quarterly in paper format. Access to the electronic version is provided to all SIL members and subscribers. Congresses 1922 Germany 1923 Austria 1925 USSR 1927 Italy 1930 Hungary 1932 Netherlands 1934 Yugoslavia 1937 France 1939 Sweden 1948 Switzerland 1950 Belgium 1953 Britain 1956 Finland 1959 Austria 1962 United States 1965 Poland 1968 Israel 1971 USSR 1974 Canada 1977 Denmark 1980 Japan 1983 France 1987 New Zealand 1989 Germany 1992 Spain 1995 Brazil 1998 Ireland 2001 Australia 2004 Finland 2007 Canada (Above list from Jones, 2010) 2010 South Africa 2013 Hungary 2016 Italy 2018 China 2021 South Korea 2022 Germany References External links SIL official web site Limnology Earth sciences organizations International scientific organizations Hydrology organizations
International Society of Limnology
[ "Environmental_science" ]
407
[ "Hydrology", "Hydrology organizations" ]
17,539,448
https://en.wikipedia.org/wiki/Rosaleen%20Love
Rosaleen Love (born 1940) is an Australian science journalist and writer. She has a PhD in the history and philosophy of science from the University of Melbourne. She has written works on the Great Barrier Reef and other science or conservation topics. She has also written science fiction, which has been noted for her use of irony and feminism. She has been nominated for the Ditmar Award six times, and won the Chandler Award in 2009. Bibliography Collections The Total Devotion Machine and Other Stories (1989) Evolution Annie and Other Stories (1993) Secret Lives of Books (2014) Chapterbooks The Traveling Tide (2005) Short fiction "The Laws of Life" (1985) in The Total Devotion Machine and Other Stories "Trickster" (1986) in The Total Devotion Machine and Other Stories "Alexia and Graham Bell" (1986) in Aphelion Science Fiction Magazine, Summer 1986/1987 (ed. Peter McNamara) "No Resting Place" (1987) in The Total Devotion Machine and Other Stories "The Sea-Serpent of Sandy Cape" (1987) in The Total Devotion Machine and Other Stories "Power Play" (1987) in The Total Devotion Machine and Other Stories "The Invisible Woman" (1988) in The Total Devotion Machine and Other Stories "If You Go Down to the Park Today" (1989) in The Total Devotion Machine and Other Stories "The Total Devotion Machine" (1989) in The Total Devotion Machine and Other Stories "Bat Mania" (1989) in The Total Devotion Machine and Other Stories "Tanami Drift" (1989) in The Total Devotion Machine and Other Stories "Dolphins and Deep Thought" (1989) in The Total Devotion Machine and Other Stories "The Bottomless Pit" (1989) in The Total Devotion Machine and Other Stories "Where Are They?" (1989) in The Total Devotion Machine and Other Stories "The Children Don't Leave Home Any More" (1989) in The Total Devotion Machine and Other Stories "The Tea Room Tapes" (1989) in The Total Devotion Machine and Other Stories "Tremendous Potential for Tourism" (1989) in The Total Devotion Machine and Other Stories "The Heavenly City, Perhaps" (1990) in Evolution Annie and Other Stories "Hovering Rock" (1990) in Aurealis #2 (ed. Stephen Higgins, Dirk Strasser) "Turtle Soup" (1990) in Eidolon (Australian magazine), Spring 1990 (ed. Jeremy G. Byrne) "Cosmic Dusting" (1991) in Evolution Annie and Other Stories "Evolution Annie" (1991) in Evolution Annie and Other Stories "The Palace of the Soul" (1991) in Evolution Annie and Other Stories "Strange Things Grow at Chernobyl" (1991) in Evolution Annie and Other Stories "Blue Venom" (1991) in Eidolon (Australian magazine), Spring 1991 (ed. Jeremy G. Byrne) "Holiness" (1992) in Intimate Armageddons (ed. Bill Congreve) "Mortal Remains" (1993) in Crank!, Fall 1993 (ed. Bryan Cholfin) "Starbaby" (1993) in Overland Summer 1993 "A Pattern to Life" (1993) in Evolution Annie and Other Stories "The Daughters of Darius" (1993) in Evolution Annie and Other Stories "Bubbles in the Cosmic Saucepan" (1993) in The Traveling Tide "Sex and Death" (1995) in Eidolon (Australian magazine), Winter 1995 (ed. Jeremy G. Byrne) "The Reef Builders" (1997) with Karen Joy Fowler and Maureen F. McHugh and Terry Bisson in Omni Online, May 1997 (ed. Ellen Datlow) "Alexander's Feats" (1997) in Eidolon (Australian magazine), Issue 25/26 (ed. Jeremy G. Byrne, Richard Scriven) "Real Men" (1998) in Dreaming Down-Under (ed. Jack Dann, Janeen Webb) "Two Recipes for Magic Beans" (1998) in Dreaming Down-Under (ed. Jack Dann, Janeen Webb) "The Worst Thing in the World" (1999) in Ghosts and Ghoulies (ed. 
Paul Collins, Meredith Costain) "The Gate of Heaven" (2003) in Forever Shores (ed. Margaret Winch, Peter McNamara) "In the Shadow of the Stones" (2003) in Southern Blood: New Australian Tales of the Supernatural (ed. Bill Congreve) "The Raptures of the Deep" (2003) in Gathering the Bones (ed. Ramsey Campbell, Jack Dann, Dennis Etchison) "Once Giants Roamed the Earth" (2005) in Daikaiju! Giant Monster Tales (ed. Robin Pen, Robert Hood), and The Traveling Tide "GoGo" (2005) in The Traveling Tide "Wanderer 8" (2005) in The Elastic Book of Numbers (ed. Allen Ashley) Anthologies edited If Atoms Could Talk (1987) Non-fiction Reefscape: Reflections on the Great Barrier Reef (2000) Essays Ursula K. le Guin and Therolinguistics (1998) The Onion Skin Theory of Identity, the Paint Pot Theory of Gender, and the Blu-Tack Theory of Position (1999) Star Drover (2001) In Tribulation and with Jubilee: On Pilgrimage with Bridie King (2005) References External links Australian science journalists Australian science fiction writers University of Melbourne alumni 1940 births Living people Australian women science fiction and fantasy writers Australian women journalists Women science writers
Rosaleen Love
[ "Technology" ]
1,122
[ "Women science writers", "Women in science and technology" ]
17,539,485
https://en.wikipedia.org/wiki/Oceanic%20Worldwide
Oceanic is an American manufacturer of scuba gear. It was founded by Bob Hollis in 1972 and is based in San Leandro, California, United States. Its products include dive computers, rebreathers and a novel diving mask incorporating a heads-up display of information. History In 1972, Robert Hollis founded the parent company American Underwater Products, which did business as Oceanic. Aeris, originally also a brand of American Underwater Products, founded in 1998, was merged with Oceanic in 2014. The Aeris brand covered a wide range of recreational scuba equipment, including regulators, dive computers, buoyancy compensators, harnesses, masks, fins, and snorkels. In 2017, Huish Outdoors acquired the Oceanic and Hollis brands from AUP. Products Rebreathers They developed the Phibian CCS50 and CCS100 rebreathers; Stuart Clough of Undersea Technologies developed the Phibian's electronics package. With its purpose-built training facility, Oceanic UK, working closely with American Divers International, offered training and familiarisation courses developed and delivered by Stuart Clough and Paul Morrall. They have developed military rebreathers for use by frogmen and naval work divers, for example the US Navy MK-25 and the MK-16 mixed-gas rebreather. Data mask Oceanic developed the first HUD-style mask, an eyes-and-nose diving mask with a built-in LCD display, commercially known as the DataMask, capable of providing various dive data from an on-board diving computer. Dive computers Oceanic manufactures several dive computers for recreational divers. Oceanic's computer division Pelagic Pressure Systems was sold to Aqua Lung in 2015. OC1, OCi VT 4.1 Atom 3.1 Geo 2.0 Wrist Computer VEO 1.0, 2.0, 3.0 B.U.D. Back up dive computer Wetsuit for a penguin The company developed, in early 2008, a custom wetsuit for an alpha-male African penguin at Steinhart Aquarium who was suffering from problems maintaining core body temperature due to feather loss. External links References Underwater diving engineering Underwater diving equipment manufacturers Rebreather makers
Oceanic Worldwide
[ "Engineering" ]
446
[ "Underwater diving engineering", "Marine engineering" ]
17,539,575
https://en.wikipedia.org/wiki/Chernobyl%20Recovery%20and%20Development%20Programme
Chernobyl Recovery and Development Programme (CRDP) is developed by the United Nations Development Programme and aims at ensuring return to normal life as a realistic prospect for people living in regions affected by Chernobyl disaster. The Programme provides continuing support to the Government of Ukraine for elaboration and implementation of development-oriented solutions for the regions. The CRDP, part of the United Nations Development Programme activities in Ukraine, has been launched based on the recommendations of “The Human Consequences of the Chernobyl Nuclear Accident. A strategy for Recovery” , the joint report by UN agencies initiated in February 2002. Since 2003 the CRDP is constantly working to mitigate long-term social, economic and environmental consequences of the Chernobyl catastrophe, to create more favorable living conditions and to promote sustainable human development in the Chernobyl-affected regions. In partnerships with international organizations, oblast, rayon and state administrations, village councils, scientific institutions, non-governmental organizations and private business, CRDP supports community organizations and helps them to implement their initiatives on economic, social development and environmental recovery. In addition, the CRDP distributes information about the Chernobyl catastrophe internationally and within Ukraine. CRDP activities With a strong emphasis on economic development, the project builds a sustainable national framework supporting the return to normal life in the region and in particular focuses on the following areas: Strategic solutions to support sustainable local economic development: provision of ongoing advisory support to the Government and assisting in the elaboration of development-oriented solutions for the rehabilitation of the Chornobyl-affected regions. Enabling local governance environment to foster economic development - enhancement of local authorities' capacities to transparently define and implement local development strategies, deliver public services, and foster local economic development, including support of strategic planning at rayon level and enhancement of local economic development agencies capacities to facilitate local economic development, provide services for business and authorities in the region. Consolidation of community-based recovery and development –involvement of a larger number of affected communities in recovery and development processes, ensuring the introduction of strong national ownership of the approach that addresses specific needs of communities, undergoes revision of radioactive-contamination zones; and targets youth-specific issues in the region such as access to ICT technologies and the Internet. Human security through local information provision - development of national capacities to sustain community-based information provision network for the Chornobyl- affected regions and enhancement of local authorities' capacities to improve public awareness and levels of human security in communities living around nuclear facilities based on the Chornobyl lessons learned. 
CRDP achievements and results 2003-2008 CRDP ensured the shift of the national strategy on Chornobyl and the improvement of national programmes for the mitigation of Chornobyl catastrophe consequences via advisory support on national policy and regional cooperation issues provided to the Government of Ukraine, hosting of various round tables, support for studies on Chornobyl-specific policy issues, organizing conferences, and facilitating participation of CRDP's national and international partners in the dialogue. As a result, the New National Programme on Chernobyl for 2006–2010 adopted by the Parliament of Ukraine incorporates key recovery-oriented recommendations. At the 20th Chornobyl Anniversary commemorative conferences, the UN/UNDP Chornobyl strategy was largely based on CRDP's experiences. Since 2005 the Chernobyl Economic Development Forum, initiated by CRDP, has worked effectively as a platform for the elaboration of strategies for sustainable development of territories, attracting investments into the region, and creating favourable conditions for partnerships between businesses, local authorities and communities for recovery and development of affected territories. More than 800 media publications (news reports and articles, TV and radio reports) were released based on advocacy and awareness campaigns on the developmental approach for Chernobyl. A number of round-table meetings, donor visits, and press trips were organised. The CRDP strategic approach was shared at the sub-regional level with colleagues from Belarus and the Russian Federation and acknowledged at the highest United Nations level in UN General Assembly resolutions in 2005 and 2007. The principle of "partnership between community organisations and authorities for recovery and development" was successfully introduced in the region. 279 community organisations (COs) were formed in 192 villages in Ukraine, involving over 20,000 community members. COs resolve important socio-economic problems in villages: they reconstruct water pipelines and provide gasification; reconstruct schools, baths, village health centres, and ambulatories; create youth, public and service centres; etc. Community organisations implemented more than 191 recovery and development projects totaling over 18 million UAH, 6.6 million of which were contributed by CRDP. Nearly 200,000 people benefited from community-driven development projects supported by UNDP/CRDP. Community organisations successfully mobilized significant financial resources for the implementation of their own priority projects. On average, for the implementation of one project, a community organisation itself contributed 20% of the total amount, local village and rayon authorities – 40%, CRDP – 31%, and other sponsors – 9%. Eight Regional Economic Development Agencies were established: 3 in rayons of Zhytomyr oblast (Brusyliv, Korosten and Ovruch), 2 in Kyiv Oblast (Borodyanka and Ivankiv), 2 in Rivne oblast (Rokytne and Dubrovytsya) and 1 in Chernihiv oblast (Ripky rayon). The Agencies of Regional Economic Development provide everyday consultation to private entrepreneurs and active citizens who want to establish a private business, and organise lifelong education for various groups of people in the affected areas. The Agencies ensure equal opportunities for everybody regardless of gender, age and race. A model Youth Centre: an institution specially designed to respond to young people's needs. 
The Youth Centre is composed of a gym, computer equipment, Internet, and a meeting room for self-organised training, classes, etc. The Youth Centre has become both a human resource centre and a social enterprise for the village. During 2004-2007, CRDP supported the establishment of 35 Youth Centres in Chernobyl-affected areas. Two Internet clubs at local schools and 11 rural youth centres were connected to the Internet and offered the opportunity to enjoy everyday virtual communication with peers from around the world. Development of Internet and Communication Technologies is one of CRDP's priorities as it raises the opportunity for youth development in Chernobyl-affected rural areas. Over 20 titles of information materials (brochures, booklets, films, posters, CDs) on the Chernobyl catastrophe consequences and conditions for secure living at contaminated territories developed and distributed by CRDP in cooperation with leading scientific institutions. A series of 'trainings for teachers and medical workers on issues of radiation security and healthy lifestyles was conducted. CRDP developed the film ‘Alphabet of Understanding’ and the publication ‘Teachers Guidebook on the Chernobyl Accident’, which was supported by the Ministry of Emergencies for mass production and dissemination in 2007. Areas of Interest CRDP works in the 4 most Chernobyl-affected oblasts (provinces) in Ukraine, namely the Kyiv, Zhytomyr, Chernihiv and Rivne Oblasts. External links Chernobyl Recovery and Development Programme Chernobyl.info United Nations Development Programme in Ukraine. Aftermath of the Chernobyl disaster United Nations Development Programme
Chernobyl Recovery and Development Programme
[ "Technology" ]
1,481
[ "Aftermath of the Chernobyl disaster", "Environmental impact of nuclear power" ]
17,539,964
https://en.wikipedia.org/wiki/Luisa%20Ottolini
Luisa Ottolini (born July 10, 1954, in Tortona, province of Alessandria, Italy) is an Italian physicist. Biography In 1978, Luisa Ottolini graduated in Physics at the University of Pavia. From 1982 to 1986, she was the Head of the Structuristic Section at the Istituto Sperimentale dei Metalli Leggeri (I.S.M.L.) in Novara. In 1987, she activated the Strategic Project of the National Research Council of Italy (CNR) An Ion Microprobe for Advanced Researches in the Earth Sciences, with the installation at the "Centro di Studio per la Cristallografia Strutturale" in Pavia of the first, and so far only, National Laboratory of Secondary Ion Mass Spectrometry (SIMS) in the Earth Sciences. Since that time she has been the Head of the SIMS Lab. Starting from 1989, she activated the National SIMS service for university and CNR institutions of the Earth Science Committee (05), following more than 90 research projects. In 2002-2005 she coordinated a research unit in Pavia, sponsored by the European Framework Project EUROMELT (European Community's Human Potential Programme, contract HPRN-CT-2002-00211). Between December 2005 and September 2017 she was the Head of the CNR Institute of Geosciences and Geo-resources (IGG), Section of Pavia. She has co-authored more than 150 international ISI publications, of which 5 in Nature; more than 200 abstracts at international and national meetings; and 35 monographs and internal reports. Main scientific interests Her research activities mainly concerned the use of SIMS for the quantitative measurement of low-concentration constituents, of light elements (lithium, beryllium and boron) and volatile elements (hydrogen, fluorine, chlorine, carbon) in geological samples, with particular reference to the investigation of the physical/chemical processes underlying the production of secondary ions, aiming at overcoming the limitations of the technique (interferences and non-linear effects, "matrix effects"); and the development, set-up and optimization of SIMS procedures for trace elements, light and volatile elements, and ultra-trace elements in the frame of petrologic, geochemical and crystal-chemical studies, with particular reference to the investigation of melt inclusions, silicate minerals, artificial glasses, chemically complex silicate and non-silicate matrices, and experimental charges. Awards and honors 2004 - Awards from CNR in the celebration of the 80th anniversary of the foundation of CNR, as one of the 12 CNR female researchers who contributed to the development of the scientific progress of Italy. 2004 - The name Ferri-ottoliniite was given to a new amphibole end-member (IMA-CNMMN 2001-67A) to acknowledge the "fundamental contribution of L. Ottolini to the advancement of ion-probe analysis of minerals, with particular reference to light elements". 2006 - Inclusion of L. Ottolini in Who's Who in the World (23rd Edition by Marquis, Who's Who, Philadelphia, PA, USA) and in Who's Who in Science and Engineering. Since 2013 - Partner of the University of Liège. References External links CNR - National Research Council of Italy CNR - Institute of Geosciences and Geo-resources (IGG)-Section of Pavia IGG CNR 1954 births 20th-century Italian women scientists 20th-century Italian physicists 21st-century Italian women scientists 21st-century Italian physicists Living people Italian women physicists Mass spectrometrists People from Tortona Rare earth scientists 20th-century Italian women 21st-century Italian women
Luisa Ottolini
[ "Physics", "Chemistry" ]
766
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
17,540,331
https://en.wikipedia.org/wiki/Departure%20resistance
Departure resistance is a quality of an aircraft which enables it to remain in controlled flight and resist entering potentially dangerous less-controlled maneuvers such as spin. Depending on its design, an aircraft may be more or less likely to leave (depart from) controlled flight when the pilot applies extreme control inputs. Good departure resistance characteristics allow the pilot to remain in control of the aircraft in such situations. Departure resistance is considered to contribute more towards flight safety than departure recovery. Departure recovery is the ability of an aircraft to return to controlled flight once in a certain uncontrolled maneuver. Being able to recover from spin is an example of departure recovery. References External links New High AOA Departure Criteria for High Agility Fighters Aerospace engineering Aviation safety
Departure resistance
[ "Engineering" ]
146
[ "Aerospace engineering" ]
17,540,734
https://en.wikipedia.org/wiki/Sandwich%20panel
A sandwich panel is any structure made of three layers: a low-density core (PIR, mineral wool, XPS), and a thin skin-layer bonded to each side. Sandwich panels are used in applications where a combination of high structural rigidity and low weight is required. The structural functionality of a sandwich panel is similar to the classic I-beam, where two face sheets primarily resist the in-plane and lateral bending loads (similar to flanges of an I- beam), while the core material mainly resists the shear loads (similar to the web of an I-beam). The idea is to use a light/soft but thick layer for the core and strong but thin layers for face sheets. This results in increasing the overall thickness of the panel, which often improves the structural attributes, like bending stiffness, and maintains or even reduces the weight. Sandwich panels are an example of a sandwich-structured composite: the strength and lightness of this technology makes it popular and widespread. Its versatility means that the panels have many applications and come in many forms: the core and skin materials can vary widely and the core may be a honeycomb or a solid filling. Enclosed panels are termed cassettes. Applications One obvious application is in aircraft, where mechanical performance and weight-saving are essential. Transportation and automotive applications also exist. In building and construction, these prefabricated products designed for use as building envelopes. They appear in industrial and office buildings, in clean and cold rooms and also in private houses, whether renovation or new-build. They combine a high-quality product with high flexibility regarding design. They generally have a good energy-efficiency and sustainability. In packaging, applications include fluted polypropylene boards and polypropylene honeycomb boards. Types 3D-printed biopolymer panels Due to the ability of 3D printers to fabricate complex sandwich panels there has recently been a flourishing of research in this area covering energy absorption, natural fiber, with continuous synthetic fibers, and for vibration. The promise of this technology is for new geometric complexities in sandwich panels not possible with other fabrication processes. SIP Structural insulated panels or structural insulating panels (commonly referred to as SIPs) are panels used as a building material. ACP Aluminium composite panels (ACP), made of aluminium composite material (ACM), are flat panels consisting of two thin coil-coated aluminium sheets bonded to a non-aluminium core. ACPs are frequently used for external cladding or facades of buildings, insulation, and signage. ACP is mainly used for external and internal architectural cladding or partitions, false ceilings, signage, machine coverings, container construction, etc. Applications of ACP are not limited to external building cladding, but can also be used in any form of cladding such as partitions, false ceilings, etc. ACP is also widely used within the signage industry as an alternative to heavier, more expensive substrates. ACP has been used as a light-weight but very sturdy material in construction, particularly for transient structures like trade show booths and similar temporary elements. It has recently also been adopted as a backing material for mounting fine art photography, often with an acrylic finish using processes like Diasec or other face-mounting techniques. 
ACP material has been used in famous structures as Spaceship Earth, VanDusen Botanical Garden, and the Leipzig branch of the German National Library. These structures made optimal use of ACP through its cost, durability, and efficiency. Its flexibility, low weight, and easy forming and processing allow for innovative design with increased rigidity and durability. Where the core material is flammable, the usage must be considered. The standard ACP core is polyethylene (PE) or polyurethane (PU). These materials do not have good fire-resistant (FR) properties unless specially treated and are therefore not generally suitable as a building material for dwellings; several jurisdictions have banned their use completely. Arconic, owner of the Reynobond brand, cautions the prospective buyer. Concerning the core, it says that distance of the panel from the ground is a determinant of "which materials are safer to use". In a brochure it has a graphic of a building in flames, with the caption "[a]s soon as the building is higher than the firefighters’ ladders, it has to be conceived with an incombustible material". It shows that the Reynobond polyethylene product is for up to circa 10 meters; the fire-retardant product (c. 70% mineral core) from there to up to c. 30 meters, the height of the ladder; and the European A2-rated product (c. 90% mineral core) for anything above that. In this brochure, Fire Safety in High-rise Buildings: Our Fire Solutions, product specification is only given for the last two products. The cladding materials, in this case having the highly combustible Polyethylene (PE) core, were implicated as the principal cause of the rapid spread of flame in the 2017 Grenfell Tower fire in London. It has also been involved in high-rise building fires in Melbourne, Australia; France; the United Arab Emirates; South Korea; and the United States. Fire-rated cores (typically designated as "FR" by the manufacturers) are a safer alternative as they have a maximum of 30% Polyethylene Content, and will self-extinguish in the absence of heat/ventilation. As with any building product, fitness for use is dependent on multiple other products and methods. In the case of ACP, building codes in USA have many requirements related to the wall assembly depending on the materials used and the building type. When these building codes are followed, the FR core products are safe. Note that the term ACP does not apply to sandwich panels with Mineral Wool cores, which fall under the category of Insulated Metal Panels (IMP). The aluminium sheets can be coated with polyvinylidene fluoride (PVDF), fluoropolymer resins (FEVE), or polyester paint. Aluminium can be painted in any kind of colour, and ACPs are produced in a wide range of metallic and non-metallic colours as well as patterns that imitate other materials, such as wood or marble. The core is commonly low-density polyethylene (PE), or a mix of low-density polyethylene and mineral material to exhibit fire retardant properties. 3A Composites (formerly Alcan Composites & Alusuisse) invented aluminium composites in 1964 - as a joint invention with BASF- and commercial production of Alucobond commenced in 1969. The product was patented in 1971, a patent which expired in 1991. After the expiration of the patent several companies started commercial production such as Reynobond (1991), Alpolic (Mitsubishi Chemicals, 1995), etalbond (1995). Today, it is estimated that more than 200 companies across the world are producing ACP. 
History Sandwich panel construction techniques have experienced considerable development in the last 40 years. Previously, sandwich panels were considered products suitable only for functional constructions and industrial buildings. However, their good insulation characteristics, their versatility, quality and appealing visual appearance, have resulted in a growing and widespread use of the panels across a huge variety of buildings. Code of practice Sandwich panels require the CE mark to be sold in Europe. The European sandwich panel standard is EN14509:2013 Self-supporting double-skin metal-faced insulating-panels - Factory-made products – Specifications. Sandwich panels quality can be certified by applying the quality level EPAQ Characteristics The qualities that have produced the rapid growth in the use of sandwich panels, particularly in construction, include: Thermal resistance Sandwich panels have λ-values from 0.024 W/(m·K) for polyurethane to 0.05 W/(m·K) for mineral wool. Therefore, they can achieve different U-values depending on the core and the thickness of the panel. The installation of a system with sandwich panels minimizes thermal bridges through the joints. Acoustic insulation The assessed sound reduction measurement lies at approx. 25 dB for PU elements and at approx. 30 dB for MW elements. Mechanical properties The space between the supports can be up to 11 m (walls), depending on the type of panel used. Normal applications have spaces between the supports that are approx. 3 m – 5 m. The thickness of panels is from 40 mm up to more than 200 mm. The density of sandwich panels range from 10 kg/m2 up to 35 kg/m2, depending on the foam and metal thickness, decreasing time and effort in: transportation, handling and installation. All these geometric and material properties influence the global/local failure behavior of the sandwich panels under different loading conditions such as indentation, impact, fatigue and bending. Fire behaviour Sandwich panels have different fire behaviours, resistance and reaction, depending on: the foam, the metal thickness, the coating, etc. The user will need to choose between the different sandwich panel types, depending on the requirements. Research by the Association of British Insurers and the Building Research Establishment in the UK highlighted that "sandwich panels do not start a fire on their own, and where these systems have been implicated in fire spread, the fire has often started in high risk areas such as cooking areas, subsequently spreading as a result of poor fire risk management, prevention and containment measures". There is evidence that when sandwich panels are used to clad a building it can contribute to the rapid spread of fire up the outside of the building itself. As an architect put it, in choosing the core material for a sandwich panel "I only use the mineral wool ones because your gut tells you it is not right to wrap a building in plastic". In 2000 Gordon Cooke, a leading fire safety consultant, reported that "the use of plastic foam cored sandwich panels ... is difficult to justify when considering life safety". He said the panels "can contribute to the severity and speed of fire development" and this has led to "massive fire losses". 
Design of a cavity between the cladding and the exterior wall of the building (or its sheath of insulation) is also significant: flames can occupy the cavity and be drawn upwards by convection, elongating to create secondary fires, and do so "regardless of the materials used to line the cavities". Impermeability The assembly system of sandwich panels helps create airtight and watertight buildings. See also Sandwich theory Sandwich-structured composite Composite honeycomb Hill yield criteria Plate theory Thermal insulation Acoustic insulation Mineral wool References External links PPA-Europe: European Association for Panels and Profiles IFBS: Internationaler Verband für den Metallleichtbau SNPPA: Syndicat National du Profilage des Produits Plats en Acier EURIMA: European Insulation Manufacturers Association PU Europe: European polyurethane insulation industry ISOPA: European Diisocyanate and Polyol Producers Association MFB: Alliance of European metal associations Building engineering Building insulation materials Building materials Composite materials Aluminium composite panels Photography equipment Printing
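The thermal-resistance figures quoted under Characteristics translate directly into a U-value once a core material and thickness are chosen. The sketch below does that arithmetic; the surface-resistance values are typical handbook assumptions, the metal skins' own resistance is neglected, and the result is only indicative, not a substitute for EN 14509 declared values:

```python
# Indicative U-value of a sandwich panel from core conductivity and thickness.
# U = 1 / (R_si + d/lambda + R_se); thin steel skins contribute negligibly and are ignored.
# R_si/R_se are typical internal/external surface resistances (assumed values).

LAMBDA = {"PUR/PIR": 0.024, "mineral wool": 0.050}   # W/(m.K), figures from the text

def u_value(core: str, thickness_m: float, r_si: float = 0.13, r_se: float = 0.04) -> float:
    r_total = r_si + thickness_m / LAMBDA[core] + r_se   # m^2.K/W
    return 1.0 / r_total                                  # W/(m^2.K)

for core in LAMBDA:
    for d_mm in (60, 100, 160):
        print(f"{core:12s} {d_mm:4d} mm  U = {u_value(core, d_mm / 1000):.2f} W/m2K")
# e.g. a 100 mm PUR/PIR core gives roughly U = 0.23 W/m2K
```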
Sandwich panel
[ "Physics", "Engineering" ]
2,321
[ "Building engineering", "Composite materials", "Construction", "Materials", "Building materials", "Civil engineering", "Matter", "Architecture" ]
17,541,500
https://en.wikipedia.org/wiki/List%20of%20countries%20by%20energy%20intensity
The following are lists of countries by energy intensity, or total energy consumption per unit GDP. Our World in Data (2021/22) The following is a list of countries by energy intensity as published by Our World in Data for the year 2022. It is given in units of kilowatt-hours per constant year 2011 international dollar of GDP. World Resources Institute (2003) The following is a list of countries by energy intensity as published by the World Resources Institute for the year 2003. It is given in units of tonnes of oil equivalent per million constant year 2000 international dollars. * indicates "Energy consumption in COUNTRY or TERRITORY" or "Energy in COUNTRY or TERRITORY" links. World energy intensity of GDP at purchasing power parities from 2006 to 2009 The following table displays the energy intensity in the world by koe/$05p (Kilogram oil equivalent per USD at constant exchange rate, price and purchasing power parities of the year 2005), by region and by country. The energy intensity figures are published by Enerdata and are also available in its energy review for 2011. The energy intensity is the ratio of primary energy consumption over gross domestic product measured in constant US $ at purchasing power parities (a small worked example of this ratio is given at the end of this entry). In 2009, energy intensity in OECD countries remained stable at 0.15 koe/$05p, with 0.12 koe/$05p in both the European Union and Japan and 0.17 koe/$05p in the USA. It remained particularly high in the CIS (0.35 koe/$05p) as well as in Africa (0.25 koe/$05p) and the Middle East (0.26 koe/$05p). In Asia, energy intensity reached 0.22 koe/$05p. By contrast, Latin America posted a relatively low ratio of 0.14 koe/$05p. See also List of countries by energy consumption and production List of countries by carbon intensity of GDP List of countries by energy consumption per capita List of countries by renewable electricity production References Sources Energy economics
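The ratio defined above can be illustrated with a small sketch; the figures below are hypothetical assumptions and are not taken from the tables.

# Energy intensity = primary energy consumption / GDP.
# Assumed figures: 120 Mtoe of primary energy, GDP of 800 billion constant US$ (PPP).
primary_energy_mtoe = 120.0
gdp_billion_usd_ppp = 800.0

primary_energy_koe = primary_energy_mtoe * 1e9   # 1 Mtoe = 1e9 kgoe (koe)
gdp_usd = gdp_billion_usd_ppp * 1e9

intensity_koe_per_usd = primary_energy_koe / gdp_usd
print(round(intensity_koe_per_usd, 2))            # 0.15 koe/$, comparable to the OECD figure
print(round(intensity_koe_per_usd * 11.63, 2))    # ~1.74 kWh/$, since 1 koe is about 11.63 kWh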
List of countries by energy intensity
[ "Environmental_science" ]
422
[ "Energy economics", "Environmental social science" ]
17,541,903
https://en.wikipedia.org/wiki/John%20Scott%20Medal
The John Scott Award, created in 1816 as the John Scott Legacy Medal and Premium, is presented to men and women whose inventions have improved the "comfort, welfare, and happiness of human kind" in a significant way. Since 1919 the Board of Directors of City Trusts of Philadelphia has presented this award, as recommended by an advisory committee. In 1822 the first awards were given to thirteen people by the Philadelphia Society for Promoting Agriculture, entrusted by the "Corporation of the city of Philadelphia". The druggist John Scott of Edinburgh organized a $4,000 fund which, after his death in 1815, was administered by a merchant until the first award, a copper medal and "an amount not to exceed twenty dollars", was given in 1822. (At the time, $20 could buy one ox or a 12-volume encyclopedia.) Several hundred recipients have since been selected by the City Council of Philadelphia, which decides from the annual list of nominees made by the Franklin Institute. Notable recipients Most awards have been given for inventions in science and medicine. Notable recipients include: Luis W. Alvarez Frederick G. Banting John Bardeen James Black William T. Bovie Ralph L. Brinster Marie Curie William Duane Thomas Edison Alexander Fleming Peter Koch Irving Langmuir Edwin Land Christian J. Lambertsen Luther D. Lovekin Benoît Mandelbrot Guglielmo Marconi Edgar Sharp McFadden Humberto Fernandez Moran Kary B. Mullis Jonas Salk Glenn Seaborg Richard E. Smalley Nikola Tesla Wright brothers Robert Burns Woodward David Gestetner Recent winners See also Carl Roman Abt, a past recipient (1889) References External links The award webpage The Franklin Institute: The John Scott Legacy Medal Medals awarded by The Franklin Institute between 1822 and 2017. Awards established in 1816 Invention awards History of Philadelphia Franklin Institute awards American awards 1816 establishments in Pennsylvania
John Scott Medal
[ "Technology" ]
369
[ "Science and technology awards", "Invention awards" ]
17,542,516
https://en.wikipedia.org/wiki/Axanthism
Axanthism is a mutation that interferes with an animal's ability to produce yellow pigment. The mutation affects the number of xanthophores and carotenoid vesicles, sometimes causing them to be completely absent. Erythrophores and iridophores, which are responsible for red coloration and light reflecting pigments respectively, may also be affected. Axanthism is most obvious in green animals, specifically amphibians, making them appear blue. Green coloration in animals is caused by iridophores reflecting blue wavelengths of light back through the carotenoids in the xanthophores. In the absence of xanthophores and carotenoids, the blue light is unaltered and reflected back normally. Animals that are normally yellow will appear white if affected by axanthism. While axanthism commonly makes green animals blue, it can also make the animal gray or even black, making it appear as if the animal has melanism; the two can be distinguished, however, because axanthic animals are slightly lighter, while melanistic animals produce more melanophores. When iridophores are affected by axanthism, the animal typically becomes duller or darker in coloration due to the smaller amount of light reflected. Typically it is only the skin that is affected, and the eyes still have iridophores. The opposite of axanthism is xanthochromism, which is an excess of yellow coloration. In amphibians There are three basic types of axanthism in amphibians: complete to partial blue coloration, complete or partial gray or dark coloration, and normal coloration with black eyes. These are not distinct categories, and there can be amphibians that have a combination of these. The first one is most common in the family Ranidae, which is also the family that happens to be most commonly affected by axanthism. It is not yet known exactly why axanthism occurs in amphibians and whether it is genetic or environmental. Axanthism seems to be most common in North America, and is more common in northern regions; there have been over one hundred reports of blue individuals of the green frog (Lithobates clamitans), but only one is from the southeastern United States. Axanthism is also most common in frogs, with salamanders and newts having almost no cases. Axanthic individuals are usually at higher risk of predation by sight-hunting predators than normally colored amphibians. Axanthism can affect the camouflage and aposematic patterns of amphibians, making these individuals stand out more or rendering their defenses useless. However, individuals that are darker than normal may have an advantage in thermoregulation, which is especially important in ectothermic vertebrates. References Genetic disorders with no OMIM Disturbances of pigmentation
Axanthism
[ "Biology" ]
585
[ "Disturbances of pigmentation", "Pigmentation" ]
17,543,136
https://en.wikipedia.org/wiki/Structured%20Stream%20Transport
In computer networking, Structured Stream Transport (SST) is an experimental transport protocol that provides an ordered, reliable byte stream abstraction similar to TCP's, but enhances and optimizes stream management to permit applications to use streams in a much more fine-grained fashion than is feasible with TCP streams. External links SST home page Transport layer protocols Network protocols
Structured Stream Transport
[ "Technology" ]
77
[ "Computing stubs", "Computer network stubs" ]
17,543,372
https://en.wikipedia.org/wiki/C%20space
In the mathematical field of functional analysis, the space denoted by c is the vector space of all convergent sequences of real numbers or complex numbers. When equipped with the uniform norm ‖x‖∞ = sup_n |x_n|, the space becomes a Banach space. It is a closed linear subspace of the space of bounded sequences, ℓ∞, and contains as a closed subspace the Banach space c0 of sequences converging to zero. The dual of c is isometrically isomorphic to ℓ1, as is that of c0. In particular, neither c nor c0 is reflexive. In the first case, the isomorphism of ℓ1 with the dual of c is given as follows. If (x0, x1, x2, ...) is an element of ℓ1, then the pairing with an element (y1, y2, ...) in c is given by x0 lim_{n→∞} y_n + ∑_{i=1}^∞ x_i y_i. This is the Riesz representation theorem on the ordinal ω + 1 (the one-point compactification of the natural numbers, on which c is realized as the space of continuous functions). For c0, the pairing between (x_i) in ℓ1 and (y_i) in c0 is given by ∑_{i=1}^∞ x_i y_i. See also References Banach spaces Functional analysis Normed spaces Norms (mathematics)
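A small numerical sketch of the norm and pairing above; the infinite sums are truncated at a finite cutoff, and the particular sequences are illustrative assumptions.

# y_n = 1 + 1/n lies in c (limit 1); x = (x_0, x_1, ...) is absolutely summable.
N = 100_000
y = [1.0 + 1.0 / n for n in range(1, N + 1)]         # element of c, limit 1
x = [0.5] + [(-0.5) ** i for i in range(1, N + 1)]   # element of l^1

sup_norm_y = max(abs(t) for t in y)                  # uniform norm, here 2.0
l1_norm_x = sum(abs(t) for t in x)                   # about 1.5

pairing = x[0] * 1.0 + sum(x[i] * y[i - 1] for i in range(1, N + 1))
print(sup_norm_y, round(l1_norm_x, 4), round(pairing, 4))
# |pairing| <= l1_norm_x * sup_norm_y, illustrating that the functional is bounded.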
C space
[ "Mathematics" ]
169
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations", "Norms (mathematics)" ]
17,543,385
https://en.wikipedia.org/wiki/Warthin%E2%80%93Finkeldey%20cell
A Warthin–Finkeldey cell is a type of giant multinucleate cell found in hyperplastic lymph nodes early in the course of measles and also in HIV-infected individuals, as well as in Kimura disease, and more rarely in a number of neoplastic (e.g. lymphoma) and non-neoplastic lymph node disorders. Their origin is uncertain, but they have previously been shown to stain with markers similar to those of follicular dendritic cells, including CD21. Under the light microscope, these cells consist of a large, grape-like cluster of nuclei. References Lymphatic organ diseases Histopathology
Warthin–Finkeldey cell
[ "Chemistry" ]
148
[ "Histopathology", "Microscopy" ]
17,543,768
https://en.wikipedia.org/wiki/Pseudo-order
In constructive mathematics, pseudo-order is a name given to certain binary relations appropriate for modeling continuous orderings. In classical mathematics, its axioms constitute a formulation of a strict total order (also called linear order), which in that context can also be defined in other, equivalent ways. Examples The constructive theory of the real numbers is the prototypical example where the pseudo-order formulation becomes crucial. A real number is less than another if there exists (one can construct) a rational number greater than the former and less than the latter. In other words, here x < y holds if there exists a rational number z such that x < z < y. Notably, for the continuum in a constructive context, the usual trichotomy law does not hold, i.e. it is not automatically provable. The axioms in the characterization of orders like this are thus weaker (when working using just constructive logic) than alternative axioms of a strict total order, which are often employed in the classical context. Definition A pseudo-order is a binary relation < satisfying the three conditions: It is not possible for two elements to each be less than the other. That is, for all x and y, ¬(x < y ∧ y < x). Every two elements for which neither one is less than the other must be equal. That is, for all x and y, ¬(x < y ∨ y < x) → x = y. For all x, y, and z, if x < y then either x < z or z < y. That is, for all x, y and z, x < y → (x < z ∨ z < y). Auxiliary notation There are common constructive reformulations making use of contrapositions and the valid equivalences ¬(φ ∨ ψ) ↔ (¬φ ∧ ¬ψ) as well as (φ → ¬ψ) ↔ (ψ → ¬φ). The negation of the pseudo-order of two elements defines a reflexive partial order, x ≤ y := ¬(y < x). In these terms, the first condition reads x < y → x ≤ y and it really just expresses the asymmetry of <. It implies irreflexivity, as familiar from the classical theory. Classical equivalents to trichotomy The second condition exactly expresses the anti-symmetry of the associated partial order, (x ≤ y ∧ y ≤ x) → x = y. With the above two reformulations, the negation signs may be hidden in the definition of a pseudo-order. A natural apartness relation on a pseudo-ordered set is given by x # y := (x < y) ∨ (y < x). With it, the second condition exactly states that this relation is tight, ¬(x # y) → x = y. Together with the first axiom, this means equality can be expressed as negation of apartness. Note that the negation of equality is in general merely the double-negation of apartness. Now the disjunctive syllogism may be expressed as (x # y ∧ ¬(x < y)) → y < x. Such a logical implication can classically be reversed, and then this condition exactly expresses trichotomy. As such, it is also a formulation of connectedness. Discussion Asymmetry The non-contradiction principle for the partial order states that ¬(x ≤ y ∧ y < x), or equivalently ¬¬(x ≤ y ∨ y < x), for all elements. Constructively, the validity of the double-negation exactly means that there cannot be a refutation of any of the disjunctions in the classical claim x ≤ y ∨ y < x, whether or not this proposition represents a decidable problem. Using the asymmetry condition, the above also implies ¬¬(x ≤ y ∨ y ≤ x), the double-negated strong connectedness. In a classical logic context, "≤" thus constitutes a (non-strict) total order. Co-transitivity The contrapositive of the third condition exactly expresses that the associated relation ≤ (the partial order) is transitive. So that property is called co-transitivity. Using the asymmetry condition, one quickly derives the theorem that a pseudo-order is actually transitive as well. Transitivity is a common axiom in the classical definition of a linear order. 
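As a sanity check of the three conditions above, the following sketch verifies them by brute force for the usual strict order on a handful of rational numbers, where the order is decidable. The sample values are illustrative assumptions; on the constructive reals the interest of the axioms is precisely that they hold even though trichotomy is not provable.

from fractions import Fraction
from itertools import product

pts = [Fraction(n, d) for n, d in [(0, 1), (1, 3), (1, 2), (2, 3), (1, 1)]]

def lt(a, b):
    return a < b   # the usual strict order on the sample points

# Condition 1: no two elements are each less than the other.
cond1 = all(not (lt(x, y) and lt(y, x)) for x, y in product(pts, repeat=2))
# Condition 2: if neither x < y nor y < x, then x = y (classically: x < y or y < x or x = y).
cond2 = all(lt(x, y) or lt(y, x) or x == y for x, y in product(pts, repeat=2))
# Condition 3 (comparison / co-transitivity): x < y implies x < z or z < y.
cond3 = all((not lt(x, y)) or lt(x, z) or lt(z, y)
            for x, y, z in product(pts, repeat=3))

print(cond1, cond2, cond3)   # True True True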
The third condition is also called comparison (as well as weak linearity): for any nontrivial interval given by some x and some y above it, any third element z is either above the lower bound (x < z) or below the upper bound (z < y). Since this is an implication of a disjunction, it ties to the trichotomy law as well. And indeed, having a pseudo-order on a Dedekind-MacNeille-complete poset implies the principle of excluded middle. This impacts the discussion of completeness in the constructive theory of the real numbers. Relation to other properties This section assumes classical logic. At least then, the following properties can be proven: If R is a co-transitive relation, then R is also quasitransitive; R satisfies axiom 3 of semiorders; incomparability w.r.t. R is a transitive relation; and R is connex iff it is reflexive. Sufficient conditions for a co-transitive relation R to be transitive also are: R is left Euclidean; R is right Euclidean; R is antisymmetric. A semi-connex relation R is also co-transitive if it is symmetric, left or right Euclidean, transitive, or quasitransitive. If incomparability w.r.t. R is a transitive relation, then R is co-transitive if it is symmetric, left or right Euclidean, or transitive. See also Constructive analysis Indecomposability (intuitionistic logic) Linear order Notes References Constructivism (mathematics) Order theory
Pseudo-order
[ "Mathematics" ]
1,034
[ "Mathematical logic", "Order theory", "Constructivism (mathematics)" ]
17,543,924
https://en.wikipedia.org/wiki/Spotted%20green%20pigeon
The spotted green pigeon or Liverpool pigeon (Caloenas maculata) is a species of pigeon which is most likely extinct. It was first mentioned and described in 1783 by John Latham, who had seen two specimens of unknown provenance and a drawing depicting the bird. The taxonomic relationships of the bird were long obscure, and early writers suggested many different possibilities, though the idea that it was related to the Nicobar pigeon (C. nicobarica) prevailed, and it was therefore placed in the same genus, Caloenas. Today, the species is only known from a specimen kept in World Museum, Liverpool. Overlooked for much of the 20th century, it was recognised as a valid extinct species by the IUCN Red List only in 2008. It may have been native to an island somewhere in the South Pacific Ocean or the Indian Ocean, and it has been suggested that a bird referred to as titi by Tahitian islanders was this bird. In 2014, a genetic study confirmed it as a distinct species related to the Nicobar pigeon, and showed that the two were the closest relatives of the extinct dodo and Rodrigues solitaire. The surviving specimen is long, and has very dark, brownish plumage with a green gloss. The neck feathers are elongated, and most of the feathers on the upperparts and wings have a yellowish spot on their tips. It has a black bill with a yellow tip, and the end of the tail has a pale band. It has relatively short legs and long wings. It has been suggested it had a knob on its bill, but there is no evidence for this. Unlike the Nicobar pigeon, which is mainly terrestrial, the physical features of the spotted green pigeon suggest it was mainly arboreal, and fed on fruits. The spotted green pigeon may have been close to extinction by the time Europeans arrived in its native area, and may have disappeared due to over-hunting and predation by introduced animals around the 1820s. Taxonomy The spotted green pigeon was first mentioned and described by the English ornithologist John Latham in his 1783 work A General Synopsis of Birds. Latham stated that he had seen two specimens, in the collections of the English major Thomas Davies and the naturalist Joseph Banks, but it is uncertain how these ended up in the respective collections, and their provenance is unknown. Though Banks received many specimens from the British explorer James Cook, and Davies received specimens from contacts in New South Wales, implying a location in the South Pacific Ocean, there are no records of spotted green pigeons having been sent from these sources. After Davies' death, his specimen was bought in 1812 by Edward Smith-Stanley, 13th Earl of Derby, who kept it in Knowsley Hall. Smith-Stanley's collection was transferred to the Derby Museum in 1851, where the specimen was prepared from the original posed mount (which had perhaps been taxidermised by Davies himself) into a study skin. This museum later became World Museum, where the specimen is housed today, but Banks' specimen is now lost. Latham also mentioned a drawing of a spotted green pigeon in the collection of the British antiquarian Ashton Lever, but it is unknown which specimen this was based on; it could have been either, or a third individual. Latham included an illustration of the spotted green pigeon in his 1823 work A General History of Birds, and though the basis for his illustration is unknown, it differs from Davies' specimen in some details. 
It is possible that it was based on the drawing in the Leverian collection, since Latham stated that this drawing showed the end of the tail as "deep ferruginous" (rust-coloured), a feature also depicted in his own illustration. The spotted green pigeon was scientifically named by the German naturalist Johann Friedrich Gmelin in 1789, based on Latham's description. The original binomial name Columba maculata means "spotted pigeon" in Latin. Latham himself accepted this name, and used it in his 1790 work Index ornithologicus. Since Latham appears to have based his 1783 description on Davies' specimen, this can therefore be considered the holotype specimen of the species. Subsequent writers were uncertain about the validity and relationships of the species; the English naturalist James Francis Stephens suggested that it belonged in the fruit pigeon genus Ptilinopus in 1826, and the German ornithologist Johann Georg Wagler instead suggested that it was a juvenile Nicobar pigeon (Caloenas nicobarica) in 1827. The Italian zoologist Tommaso Salvadori listed the bird in an appendix about "doubtful species of pigeons, which have not yet been identified" in 1893. In 1898, the Scottish ornithologist Henry Ogg Forbes supported the validity of the species, after examining Nicobar pigeon specimens and concluding that none resembled the spotted green pigeon at any stage of development. He therefore considered it a distinct species of the same genus as the Nicobar pigeon, Caloenas. In 1901, the British zoologist Walter Rothschild and the German ornithologist Ernst Hartert agreed that the pigeon belonged to Caloenas, but suggested that it was probably an "abnormity", though more than one specimen had been recorded. The spotted green pigeon was only sporadically mentioned in the literature throughout the 20th century; little new information was published, and the bird remained an enigma. In 2001, the English writer Errol Fuller suggested that the bird had been historically overlooked because Rothschild (an avid collector of rare birds) dismissed it as an aberration, perhaps because he did not own the surviving specimen himself. Fuller considered it a valid, extinct species, and also coined an alternative common name for the bird: the Liverpool pigeon. On the basis of Fuller's endorsement, BirdLife International listed the spotted green pigeon as "Extinct" on the IUCN Red List in 2008; it was previously "Not Recognized". In 2001, the British ornithologist David Gibbs stated that the spotted green pigeon was only superficially similar to the Nicobar pigeon, and possibly distinct enough to warrant its own genus (related to Ptilinopus, Ducula, or Gymnophaps). He also hypothesised that the bird might have inhabited a Pacific island, based on stories told by Tahitian islanders to the Tahitian scholar Teuira Henry in 1928 about a green and white speckled bird called titi. The American palaeontologist David Steadman disputed the latter claim in a book review, noting that titi is an onomatopoeic word (resembling the sound of the bird) used especially for shearwaters (members of Procellariidae) in east Polynesia. The English ornithologists Julian P. Hume and Michael Walters, writing in 2012, agreed with Gibbs that the bird warranted generic status. 
In 2020, after examining historical texts to clarify the origin and extinction date of the spotted green pigeon, the French ornithologist Philippe Raust pointed out that the information in Henry's 1928 book Ancient Tahiti was not gathered by her, but by her grandfather, the English reverend John Muggridge Orsmond, who collected ancient Tahitian traditions during the first half of the 19th century. The book devotes several pages to birds of Tahiti and its surroundings, including extinct ones, and the entry that Gibbs had linked to the spotted green pigeon reads: "The titi, which cried “titi”, now extinct in Tahiti, was speckled green and white and it was the shadow of the mountain gods". The titi was included in the paragraph relating to pigeons, which suggests it was well recognised as such, and Raust found it consistent with the spotted green pigeon. Though Steadman had dismissed the idea based on the name titi also being used for shearwaters, Raust pointed out that this name was used for a wider variety of birds with vocalisations that sound like "titi". Raust also noted that an 1851 Tahitian dictionary compiled by the Welsh reverend John Davies included the word tītīhope’ore, which was used as a synonym for the titi of Henry in a 1999 dictionary. Based on definitions in Davies' dictionary, Raust translated this name as "tītī without (a long) tail." Raust suggested this name was used to distinguish it from the long-tailed koel (Urodynamis taitensis), which is called "tītī oroveo", and which is somewhat similar to Latham's illustration of the spotted green pigeon, being dark brown with paler spots and underparts. Raust believed that the study of these texts reinforced the Tahitian origin of the spotted green pigeon, and suggested this could be confirmed if possible bones of this species were one day found on Tahiti and analysed for DNA. So far, few paleontological sites on Tahiti have been studied, and fossils found there have not yet revealed unknown species. The only specimen is today catalogued as NML-VZ D3538 at World Museum, Liverpool, where it is locked under environmentally-controlled conditions outside public view. It is kept in a cabinet in the museum's basement with other particularly important bird specimens, such as other extinct birds and type specimens. The store room is monitored for pests, and the temperature and humidity is kept stable to avoid infestations and damp. The specimen is handled with gloves when examined by researchers, and exposure to light is limited. Evolution In 2014, an ancient DNA analysis by the geneticist Tim H. Heupink and colleagues compared the genes of the only spotted green pigeon specimen with that of other pigeons, based on samples extracted from two of its feathers. One of the resulting phylogenetic trees (or cladograms) is shown below: The spotted green pigeon was shown to be closest to the Nicobar pigeon. The genetic distance between the two was more than is seen within other pigeon species, but similar to the distance between different species within the same genus. This confirmed that the two were distinct species in the same genus, and that the spotted green pigeon was a unique, extinct taxon. The genus Caloenas was placed in a wider clade in which most members showed a mixture of arboreal (tree-dwelling) and terrestrial (ground-dwelling) traits. That Caloenas was placed in such a morphologically diverse clade may explain why many different relationships have previously been proposed for the members of the genus. 
A third species in the genus Caloenas, the Kanaka pigeon (C. canacorum), is only known from subfossils discovered in New Caledonia and Tonga. This species was larger than the two other members of the genus, so it is unlikely that it represents the same species as the spotted green pigeon. The possibility that the spotted green pigeon was a hybrid between other species can also be disregarded based on the genetic results. The genetic findings were confirmed by a 2016 study by the geneticist André E. R. Soares and colleagues, which managed to assemble complete mitochondrial genomes from eleven pigeon species, including the spotted green pigeon. The distribution of the Nicobar pigeon and the Kanaka pigeon (which does not appear to have had diminished flight abilities) suggests dispersal through island hopping and an origin for the spotted green pigeon in Oceania or southeast Asia. The fact that the closest relatives of Caloenas were the extinct subfamily Raphinae (first demonstrated in a 2002 study), which consists of the dodo from Mauritius and the Rodrigues solitaire from Rodrigues, indicates that the spotted green pigeon could also have originated somewhere in the Indian Ocean. In any case, it seems most likely that the bird inhabited an island location, like its relatives. That the Caloenas pigeons were grouped in a clade at the base of the lineage leading to Raphinae indicates that the ancestors of the flightless dodo and Rodrigues solitaire were able to fly, and reached the Mascarene Islands by island hopping from south Asia. Description Latham's 1823 description of the spotted green pigeon from A General History of Birds (expanded from the one in A General Synopsis of Birds) reads as follows: Most literature addressing the spotted green pigeon simply repeated Latham's descriptions, adding little new information, until Gibbs published a more detailed description in 2001, followed by the museum curator Hein van Grouw in 2014. The surviving specimen measures in length, though study specimens are often stretched or compressed during taxidermy, and may therefore not reflect the length of a living bird. The weight has not been recorded. The spotted green pigeon appears to have been smaller and slenderer than the Nicobar pigeon, which reaches , and the Kanaka pigeon appears to have been 25% larger than the latter. At in length, the tail is longer than that of the Nicobar pigeon, but the head is smaller in relation. The bill is , and the tarsus measures . Though the wings of the specimen appear to be short and rounded, and have been described as being long, vanGrouw discovered that the five outer primary feathers have been pulled out of each wing, and suggested that the wings would therefore had been about longer in life, around in total. This is in accordance with Latham's 1823 illustration, which shows a bird with longer wings. The bill of the specimen is black with a yellowish tip, and the cere is also black, with feathers on the upper side, almost to the nostrils. The lores are naked, and the upper part of the head is sooty black. The rest of the head is mostly brownish-black. The feathers of the nape and the neck are slightly bifurcated and have a dark green gloss, the latter with coppery reflections. The feathers of the neck are elongated (sometimes referred to as hackles), and some of those on the sides and lower part have paler spots near the tips. Most of the feathers on the upperparts and wings are dark brown or brownish-black with a dark green gloss. 
Almost all of these feathers have a triangular, yellowish-buff spot at their tips. The spots are almost whitish on some of the scapular feathers, vague and dark on the primary feathers. The underside of the wings is black with browner flight feathers, which have a pale spot or band at the tips. The breast is brownish-black with a faint green sheen. The tail is blackish with a dark green sheen, brownish-black on the underside, with a narrow, cinnamon-coloured band at the end. This differs from the rust-coloured tail-tip apparently shown in the drawing owned by Lever, and Latham's own illustration. The legs are small and slender, have long toes, large claws, and a comparatively short tarsus, whereas the Nicobar pigeon has shorter claws and a longer tarsus. When examining the specimen, van Grouw noticed that the legs had at one point been detached and are now attached opposite their natural positions. The short feathering of the legs would therefore have been attached to the inner side of the upper tarsus in life, not the outer. The plate accompanying Forbes' 1898 article shows the feathers on the outer side, and depicts the legs as pinkish, whereas they are yellow in the skin. The spotted green pigeon has at times been described as having a knob at the base of its bill, similar to that of the Nicobar pigeon. This idea seems to have originated with Forbes, who had the bird depicted with this feature, perhaps due to his conviction that it was a species of Caloenas; it was depicted with a knob as late as 2002. This is despite the fact that the surviving specimen does not possess a knob, and Latham did not mention or depict this feature, so such depictions are probably incorrect. The artificial eyes of the specimen were removed when it was prepared into a study skin, but red paint around the right eye-socket suggests that it was originally intended to have red eyes. It is unknown whether this represents the natural eye colour of the bird, yet the eyes were also depicted as red in Latham's illustration, which does not appear to have been based on the existing specimen. Forbes had the iris depicted as orange and the skin around the eye as green, though this was probably guesswork. The triangular spots of the spotted green pigeon are not unique among pigeons, but are also seen in the spot-winged pigeon (Columba maculosa) and the speckled pigeon (C. guinea), and are the result of lack of melanin deposition during development. The yellow-buff coloured spots are very worn, while less worn feathers have white tips; this indicates that the former were stained during life or represent a different stage of plumage, and that the latter were fresher. The plumage of the spotted green pigeon was distinct in being very soft compared to that of other pigeons, perhaps due to the body feathers being proportionally long. The hackles were not as elongated as those of the Nicobar pigeon, and the feathers did not differ from those of other pigeons in their microstructure. The plumage was also distinct in being very pigmented, except for the tips of the feathers, and even the down was dark, unlike that of most other birds (a feature otherwise seen in aberrant plumage). Though the plumage of the spotted green pigeon resembles that of the Nicobar pigeon in some respects, it is also similar to that of species in the imperial pigeon genus Ducula. The metallic-green colouration is commonly found among them, and similar hackles can be seen in the goliath imperial pigeon (D. goliath). The Polynesian imperial pigeon (D. 
aurorae) has similar soft feathers, and immature individuals of this species and the Pacific imperial pigeon (D. pacifica) have plumage different from that of juvenile and adult birds until they moult. Therefore, vanGrouw found it possible that the dull, brownish-black underparts of the surviving spotted green pigeon specimen represents the plumage of an immature bird, as the adults of similar birds have stronger and more glossy iridescence. He suggested that the brighter bird with paler underparts and whiter wing tips seen in Latham's illustration may represent the adult plumage. Behaviour and ecology The behaviour of the spotted green pigeon was not recorded, but theories have been proposed based on its physical features. Gibbs found the delicate, part-feathered legs and long tail indicative of at least partially arboreal habits. After noting that the wings were not short after all, van Grouw stated that the bird would not have been terrestrial, unlike the related Nicobar pigeon. He pointed out that the overall proportions and shape of the bird (longer tail, shorter legs, primary feathers probably reaching the middle of the tail) was more similar to the pigeons of the genus Ducula. It may therefore have been ecologically similar to those birds, have likewise been strongly arboreal, and kept to the dense canopy of forests. By contrast, the mainly terrestrial Nicobar pigeon forages on the forest floor. The dark eyes of the Nicobar pigeon are typical of species that forage on forest floors, whereas the coloured bill and presumably coloured eyes of the spotted green pigeon are similar to those of frugivorous (fruit-eating) pigeons. The feet of the spotted green pigeon are also similar to those of pigeons that forage in trees. The slender bill indicates that it fed on soft fruits. Believing that the wings were short and round, Gibbs thought the bird was not a strong flyer, and therefore not nomadic (periodically moving from place to place). In spite of the evidently longer wings which might have made it a strong flyer, vanGrouw also thought it would have been a sedentary (mostly staying at the same location) bird, that preferred not to fly across open water, similar to species in Ducula. It may have had a limited distribution on a small, remote island, which may explain why its provenance remains unknown. Raust pointed out that the fact that Polynesians considered the titi to emanate from mountain gods suggests that it lived in remote, high-altitude forests. Extinction The spotted green pigeon is most likely extinct, and may already have been close to extinction by the time Europeans arrived in its native area. The species may have disappeared due to being over-hunted and being preyed upon by introduced animals. Hume suggested that the bird may have survived until the 1820s. Raust agreed that the bird's extinction occurred during the 1820s, pointing out that the entry of the titi in Henry's 1928 book was based on much older accounts. References Caloenas Extinct birds Bird extinctions since 1500 Birds described in 1789 Taxa named by Johann Friedrich Gmelin Collection of the World Museum Species known from a single specimen
Spotted green pigeon
[ "Biology" ]
4,212
[ "Individual organisms", "Species known from a single specimen" ]
17,544,970
https://en.wikipedia.org/wiki/Damage-associated%20molecular%20pattern
Damage-associated molecular patterns (DAMPs) are molecules within cells that are a component of the innate immune response released from damaged or dying cells due to trauma or an infection by a pathogen. They are also known as danger signals, and alarmins because they serve as warning signs to alert the organism to any damage or infection to its cells. DAMPs are endogenous danger signals that are discharged to the extracellular space in response to damage to the cell from mechanical trauma or a pathogen. Once a DAMP is released from the cell, it promotes a noninfectious inflammatory response by binding to a pattern recognition receptor (PRR). Inflammation is a key aspect of the innate immune response; it is used to help mitigate future damage to the organism by removing harmful invaders from the affected area and start the healing process. As an example, the cytokine IL-1α is a DAMP that originates within the nucleus of the cell which, once released to the extracellular space, binds to the PRR IL-1R, which in turn initiates an inflammatory response to the trauma or pathogen that initiated the release of IL-1α. In contrast to the noninfectious inflammatory response produced by DAMPs, pathogen-associated molecular patterns (PAMPs) initiate and perpetuate the infectious pathogen-induced inflammatory response. Many DAMPs are nuclear or cytosolic proteins with defined intracellular function that are released outside the cell following tissue injury. This displacement from the intracellular space to the extracellular space moves the DAMPs from a reducing to an oxidizing environment, causing their functional denaturation, resulting in their loss of function. Outside of the aforementioned nuclear and cytosolic DAMPs, there are other DAMPs originated from different sources, such as mitochondria, granules, the extracellular matrix, the endoplasmic reticulum, and the plasma membrane. Overview DAMPs and their receptors are characterized as: History Two papers appearing in 1994 anticipated the deeper understanding of innate immune reactivity, pointing towards the subsequent understanding of the nature of the adaptive immune response. The first came from transplant surgeons who conducted a prospective randomized, double-blind, placebo-controlled trial. Administration of recombinant human superoxide dismutase (rh-SOD) in recipients of cadaveric renal allografts demonstrated prolonged patient and graft survival with improvement in both acute and chronic rejection events. They speculated that the effect was related to SOD's antioxidant action on the initial ischemia/reperfusion injury of the renal allograft, thereby reducing the immunogenicity of the allograft. Thus, free radical-mediated reperfusion injury was seen to contribute to the process of innate and subsequent adaptive immune responses. The second study suggested the possibility that the immune system detected "danger", through a series of what is now called damage-associated molecular pattern molecules (DAMPs), working in concert with both positive and negative signals derived from other tissues. Thus, these papers anticipated the modern sense of the role of DAMPs and redox, important, apparently, for both plant and animal resistance to pathogens and the response to cellular injury or damage. Although many immunologists had earlier noted that various "danger signals" could initiate innate immune responses, the "DAMP" was first described by Seong and Matzinger in 2004. 
Examples DAMPs vary greatly depending on the type of cell (epithelial or mesenchymal) and injured tissue, but they all share the common feature of stimulating an innate immune response within an organism. Protein DAMPs include intracellular proteins, such as heat-shock proteins or HMGB1, and materials derived from the extracellular matrix that are generated following tissue injury, such as hyaluronan fragments. Non-protein DAMPs include ATP, uric acid, heparin sulfate and DNA. In humans Protein DAMPs High-mobility group box 1: HMGB1, a member of the HMG protein family, is a prototypical chromatin-associated LSP (leaderless secreted protein), secreted by hematopoietic cells through a lysosome-mediated pathway. HMGB1 is a major mediator of endotoxin shock and is recognized as a DAMP by certain immune cells, triggering an inflammatory response. It is known to induce inflammation by activating NF-κB pathway by binding to TLR, TLR4, TLR9, and RAGE (receptor for advanced glycation end products). HMGB1 can also induce dendritic cell maturation via upregulation of CD80, CD83, CD86 and CD11c, and the production of other pro-inflammatory cytokines in myeloid cells (IL-1, TNF-a, IL-6, IL-8), and it can lead to increased expression of cell adhesion molecules (ICAM-1, VCAM-1) on endothelial cells. DNA and RNA: The presence of DNA anywhere other than the nucleus or mitochondria is perceived as a DAMP and triggers responses mediated by TLR9 and DAI that drive cellular activation and immunoreactivity. Some tissues, such as the gut, are inhibited by DNA in their immune response because the gut is filled with trillions of microbiota, which help break down food and regulate the immune system. Without being inhibited by DNA, the gut would detect these microbiota as invading pathogens, and initiate an inflammatory response, which would be detrimental for the organism's health because while the microbiota may be foreign molecules inside the host, they are crucial in promoting host health. Similarly, damaged RNAs released from UVB-exposed keratinocytes activate TLR3 on intact keratinocytes. TLR3 activation stimulates TNF-alpha and IL-6 production, which initiate the cutaneous inflammation associated with sunburn. S100 proteins: S100 is a multigenic family of calcium modulated proteins involved in intracellular and extracellular regulatory activities with a connection to cancer as well as tissue, particularly neuronal, injury. Their main function is the management of calcium storage and shuffling. Although they have various functions, including cell proliferation, differentiation, migration, and energy metabolism, they also act as DAMPs by interacting with their receptors (TLR2, TLR4, RAGE) after they are released from phagocytes. Mono- and polysaccharides: The ability of the immune system to recognize hyaluronan fragments is one example of how DAMPs can be made of sugars. Nonprotein DAMPs Purine metabolites: Nucleotides (e.g., ATP) and nucleosides (e.g., adenosine) that have reached the extracellular space can also serve as danger signals by signaling through purinergic receptors. ATP and adenosine are released in high concentrations after catastrophic disruption of the cell, as occurs in necrotic cell death. Extracellular ATP triggers mast cell degranulation by signaling through P2X7 receptors. Similarly, adenosine triggers degranulation through P1 receptors. Uric acid is also an endogenous danger signal released by injured cells. 
Adenosine triphosphate (ATP) and uric acid, which are purine metabolites, activate NLR family, pyrin domain containing (NLRP) 3 inflammasomes to induce IL-1β and IL-18. In plants DAMPs in plants have been found to stimulate a fast immune response, but without the inflammation that characterizes DAMPs in mammals. Just as with mammalian DAMPs, plant DAMPs are cytosolic in nature and are released into the extracellular space following damage to the cell caused by either trauma or pathogen. The major difference in the immune systems between plants and mammals is that plants lack an adaptive immune system, so plants can not determine which pathogens have attacked them before and thus easily mediate an effective immune response to them. To make up for this lack of defense, plants use the pattern-triggered immunity (PTI) and effector-triggered immunity (ETI) pathways to combat trauma and pathogens. PTI is the first line of defense in plants and is triggered by PAMPs to initiate signaling throughout the plant that damage has occurred to a cell. Along with the PTI, DAMPs are also released in response to this damage, but as mentioned earlier they do not initiate an inflammatory response like their mammalian counterparts. The main role of DAMPs in plants is to act as mobile signals to initiate wounding responses and to promote damage repair. A large overlap occurs between the PTI pathway and DAMPs in plants, and the plant DAMPs effectively operate as PTI amplifiers. The ETI always occurs after the PTI pathway and DAMP release, and is a last resort response to the pathogen or trauma that ultimately results in programmed cell death. The PTI- and ETI-signaling pathways are used in conjunction with DAMPs to rapidly signal the rest of the plant to activate its innate immune response and fight off the invading pathogen or mediate the healing process from damage caused by trauma. Plant DAMPs and their receptors are characterized as: Many mammalian DAMPs have DAMP counterparts in plants. One example is with the high-mobility group protein. Mammals have the HMGB1 protein, while Arabidopsis thaliana has the HMGB3 protein. Clinical targets in various disorders Preventing the release of DAMPs and blocking DAMP receptors would, in theory, stop inflammation from an injury or infection and reduce pain for the affected individual. This is especially important during surgeries, which have the potential to trigger these inflammation pathways, making the surgery more difficult and dangerous to complete. The blocking of DAMPs also has theoretical applications in therapeutics to treat disorders such as arthritis, cancer, ischemia reperfusion, myocardial infarction, and stroke. These theoretical therapeutic options include: Preventing DAMP release – proapoptotic therapies, platinums, ethyl pyruvate Neutralizing or blocking DAMPs extracellularly – anti-HMGB1, rasburicase, , etc. Blocking the DAMP receptors or their signaling – RAGE small molecule antagonists, TLR4 antagonists, antibodies to DAMP-R DAMPs can be used as biomarkers for inflammatory diseases and potential therapeutic targets. For example, increased S100A8/A9 is associated with osteophyte progression in early human osteoarthritis, suggesting that S100 proteins can be used as biomarkers for the diagnosis of the progressive grade of osteoarthritis. Furthermore, DAMP can be a useful prognostic factor for cancer. This would improve patient classification, and a suitable therapy would be given to patients by diagnosing with DAMPs. 
The regulation of DAMP signaling can be a potential therapeutic target to reduce inflammation and treat diseases. For example, administration of neutralizing HMGB1 antibodies or truncated HMGB1-derived A-box protein ameliorated arthritis in collagen-induced arthritis rodent models. Clinical trials with HSP inhibitors have also been reported. For nonsmall-cell lung cancer, HSP27, HSP70, and HSP90 inhibitors are under investigation in clinical trials. In addition, treatment with dnaJP1, which is a synthetic peptide derived from DnaJ (HSP40), had a curative effect in rheumatoid arthritis patients without critical side effects. Taken together, DAMPs can be useful therapeutic targets for various human diseases, including cancer and autoimmune diseases. DAMPs can trigger re-epithelialization upon kidney injury, contributing to epithelial–mesenchymal transition, and potentially, to myofibroblast differentiation and proliferation. These discoveries suggest that DAMPs drive not only immune injury, but also kidney regeneration and renal scarring. For example, TLR2-agonistic DAMPs activate renal progenitor cells to regenerate epithelial defects in injured tubules. TLR4-agonistic DAMPs also induce renal dendritic cells to release IL-22, which also accelerates tubule re-epithelialization in acute kidney injury. Finally, DAMPs also promote renal fibrosis by inducing NLRP3, which also promotes TGF-β receptor signaling. References Further reading Damage Associated Molecular Pattern Molecules Group at University of Pittsburgh Immunology
Damage-associated molecular pattern
[ "Biology" ]
2,575
[ "Immunology" ]
17,545,070
https://en.wikipedia.org/wiki/List%20of%20instruments%20used%20in%20microbiological%20sterilization%20and%20disinfection
This is a list of instruments used in microbiological sterilization and disinfection. Instrument list References Microbiology equipment
List of instruments used in microbiological sterilization and disinfection
[ "Biology" ]
28
[ "Microbiology equipment" ]
17,545,102
https://en.wikipedia.org/wiki/Instruments%20used%20in%20medical%20laboratories
This is a list of instruments used in general in laboratories, including: Biochemistry Microbiology Pharmacology Instrument list Image gallery References Medical equipment Biochemistry methods Laboratory equipment Microbiology equipment Clinical pathology
Instruments used in medical laboratories
[ "Chemistry", "Biology" ]
39
[ "Biochemistry methods", "Medical equipment", "Microbiology equipment", "Biochemistry", "Medical technology" ]
17,545,110
https://en.wikipedia.org/wiki/Instruments%20used%20in%20microbiology
Instruments used specially in microbiology include: Instrument list As well as those "used in microbiological sterilization and disinfection" (see relevant section). Image gallery References Medical equipment Microbiology equipment
Instruments used in microbiology
[ "Biology" ]
45
[ "Medical technology", "Medical equipment", "Microbiology equipment" ]
17,545,130
https://en.wikipedia.org/wiki/Instruments%20used%20in%20pathology
Instruments used specially in pathology are as follows: Instrument list Gallery References Medical equipment Pathology
Instruments used in pathology
[ "Biology" ]
18
[ "Medical equipment", "Pathology", "Medical technology" ]
17,545,148
https://en.wikipedia.org/wiki/Instruments%20used%20in%20general%20medicine
Image gallery Notes References Medical equipment
Instruments used in general medicine
[ "Biology" ]
8
[ "Medical equipment", "Medical technology" ]
17,545,275
https://en.wikipedia.org/wiki/List%20of%20instruments%20used%20in%20forensics
Instruments used in forensics, including autopsy dissections, are as follows: Instrument list Serological, chemical and genetic testing is carried out by specialists in the respective branches. Image gallery References Anatomy Medical equipment
List of instruments used in forensics
[ "Biology" ]
43
[ "Medical equipment", "Anatomy", "Medical technology" ]
17,545,297
https://en.wikipedia.org/wiki/Instruments%20used%20in%20obstetrics%20and%20gynecology
The following is a list of instruments that are used in modern obstetrics and gynaecology. See also Instruments used in general medicine Medical procedure Women's health References Obstetrics Gynaecology Medical devices Medical lists
Instruments used in obstetrics and gynecology
[ "Biology" ]
48
[ "Medical devices", "Medical technology" ]
17,545,313
https://en.wikipedia.org/wiki/List%20of%20instruments%20used%20in%20ophthalmology
This is a list of instruments used in ophthalmology. Instrument list A complete list of ophthalmic instruments can be found below: Image gallery References Ophthalmology instruments Instruments
List of instruments used in ophthalmology
[ "Biology" ]
40
[ "Medical equipment", "Medical technology" ]
17,545,414
https://en.wikipedia.org/wiki/The%20budgetary%20rule
The budgetary rule is a rule concerning the usage of capital gains from the Government Pension Fund Global of Norway. The rule states that a maximum of 3% of the fund's value should be allocated to the yearly government budget. Its main stated justification is to avoid the Dutch disease in the Norwegian economy due to the large influx of oil-sourced revenue. It is believed that the fund will grow by more than 3% yearly over time, which makes it possible to allocate up to 3% to the yearly budget without decreasing the value of the fund. The rule was introduced in 2001 during the first Stoltenberg cabinet, and has broad cross-party support. The rule was last changed, from 4% to 3%, in February 2017. Every party in the parliament was in favour of the change, except the right-wing Progress Party. See also Golden Rule (fiscal policy) References External links About the government pension fund, from the Norwegian government Energy economics Energy in Norway Public finance of Norway 2001 introductions 2001 establishments in Norway
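The arithmetic of the rule can be sketched briefly; the fund value used below is a hypothetical assumption, not an actual valuation.

# The rule caps the transfer to the annual budget at 3% of the fund's value (4% before 2017).
fund_value_nok = 12_000e9      # assumed fund value: 12,000 billion NOK
cap_rate = 0.03

max_transfer = fund_value_nok * cap_rate
print(f"{max_transfer / 1e9:.0f} billion NOK")   # 360 billion NOK

If the fund's expected long-run return exceeds the cap rate, spending up to the capped amount leaves the fund's value intact, which is the stated rationale for the rule.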
The budgetary rule
[ "Environmental_science" ]
212
[ "Environmental social science stubs", "Energy economics", "Environmental social science" ]
17,545,444
https://en.wikipedia.org/wiki/Aflibercept
Aflibercept, sold under the brand names Eylea and Zaltrap among others, is a medication used to treat wet macular degeneration and metastatic colorectal cancer. It was developed by Regeneron Pharmaceuticals. It is an inhibitor of vascular endothelial growth factor (VEGF). Aflibercept is a recombinant fusion protein consisting of the extracellular domains of human VEGF receptors 1 and 2 fused to the Fc portion of human IgG1. By acting as a soluble decoy for the natural VEGF receptors, aflibercept inhibits their activation, thereby reducing angiogenesis. Medical uses Aflibercept (Eylea) is indicated for the treatment of people with neovascular (wet) age-related macular degeneration, macular edema following retinal vein occlusion, diabetic macular edema, diabetic retinopathy, and retinopathy of prematurity. Aflibercept (Zaltrap), in combination with fluorouracil, leucovorin, and irinotecan (known as FOLFIRI), is indicated for the treatment of people with metastatic colorectal cancer that is resistant to, or has progressed following, an oxaliplatin-containing regimen. It is used for the treatment of wet macular degeneration and is administered as an intravitreal injection, that is, into the eye. For cancer treatment, it is given intravenously in combination with fluorouracil, leucovorin, and irinotecan. In July 2014, aflibercept (Eylea) was approved for the treatment of people with visual impairment due to diabetic macular edema. In May 2019, the US FDA expanded the indication for aflibercept to include all stages of diabetic retinopathy. In February 2023, the US FDA approved aflibercept (Eylea) as a treatment for retinopathy of prematurity. Contraindications Aflibercept (Eylea) is contraindicated in people with infections or active inflammations of or near the eye, while aflibercept (Zaltrap) has no contraindications. Adverse effects Common adverse effects of the eye formulation include conjunctival hemorrhage, eye pain, cataract, vitreous detachment, floaters, and ocular hypertension. Aflibercept (Zaltrap) has adverse effects typical of anti-cancer drugs, such as reduced blood cell count (leukopenia, neutropenia, thrombocytopenia), gastrointestinal disorders like diarrhea and abdominal pain, and fatigue. Another common effect is hypertension (increased blood pressure). Interactions No interactions are described for either formulation. Mechanism of action In wet macular degeneration, abnormal blood vessels grow in the choriocapillaris, a layer of capillaries in the eye, leading to blood and protein leakage below the macula. Aflibercept (Zaltrap) binds to circulating VEGFs and acts like a "VEGF trap". It thereby inhibits the activity of the vascular endothelial growth factor subtypes VEGF-A and VEGF-B, as well as of placental growth factor (PGF), inhibiting the growth of new blood vessels in the choriocapillaris or the tumor, respectively. The aim of the cancer treatment is, in effect, to starve the tumor. Composition Aflibercept is a recombinant fusion protein consisting of vascular endothelial growth factor (VEGF)-binding portions from the extracellular domains of human VEGF receptors 1 and 2, that are fused to the Fc portion of the human IgG1 immunoglobulin. History Regeneron commenced clinical testing of aflibercept in cancer in 2001. In 2003, Regeneron signed a major deal with Aventis to develop aflibercept in the field of cancer. 
In 2004 Regeneron started testing the compound, locally delivered, in proliferative eye diseases, and in 2006 Regeneron and Bayer signed an agreement to develop the eye indications. Society and culture Legal status In November 2011, the US Food and Drug Administration (FDA) approved aflibercept for the treatment of wet macular degeneration. In August 2012, the US FDA approved aflibercept (Zaltrap) for use in combination with 5-fluorouracil, folinic acid and irinotecan to treat adults with metastatic colorectal cancer that is resistant to, or has progressed following, an oxaliplatin‑containing regimen. To avoid confusion with the version that is injected into the eye, the FDA assigned a new name, ziv-aflibercept, to the active ingredient. In November 2012, the European Medicines Agency (EMA) approved aflibercept (Eylea) for the treatment of wet macular degeneration. In February 2013, the European Medicines Agency (EMA) approved aflibercept (Zaltrap) for the treatment of adults with metastatic colorectal cancer for whom treatment based on oxaliplatin has not worked or the cancer got worse. Aflibercept (Zaltrap) is used with irinotecan, 5-fluorouracil, and folinic acid. In August 2023, the FDA approved aflibercept (Eylea) for the treatment of wet age-related macular degeneration, diabetic macular edema, and diabetic retinopathy. Biosimilars Yesafili was approved for medical use in the European Union in September 2023. In May 2024, aflibercept-jbvf (Yesafili) and aflibercept-yszy (Opuviz) were approved for medical use in the United States. Aflibercept-mrbb (Ahzantive) was approved for medical use in the United States in June 2024. It is a biosimilar to Eylea. In August 2024, aflibercept-abzv (Enzeevu) was approved for medical use in the United States. It is a biosimilar to Eylea. In August 2024, aflibercept-ayyh (Pavblu) was approved for medical use in the United States. It is a biosimilar to Eylea. In September 2024, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Opuviz, intended for the treatment of neovascular (wet) age-related macular degeneration, visual impairment due to macular edema secondary to retinal vein occlusion (branch RVO or central RVO), visual impairment due to diabetic macular edema (DME) and visual impairment due to myopic choroidal neovascularization (myopic CNV). The applicant for this medicinal product is Samsung Bioepis NL B.V. Opuviz is a biosimilar medicinal product that is highly similar to the reference product Eylea. In September 2024, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Afqlir, intended for the treatment of neovascular (wet) age-related macular degeneration, visual impairment due to macular edema secondary to retinal vein occlusion (branch RVO or central RVO), visual impairment due to diabetic macular edema (DME) and visual impairment due to myopic choroidal neovascularization (myopic CNV). The applicant for this medicinal product is Sandoz GmbH. Afqlir is a biosimilar medicinal product that is highly similar to the reference product Eylea. Afqlir was authorized for use in the EU in November 2024. 
In November 2024, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Ahzantive, intended for the treatment of neovascular (wet) age-related macular degeneration, visual impairment due to macular edema secondary to retinal vein occlusion (branch RVO or central RVO), visual impairment due to diabetic macular edema (DME) and visual impairment due to myopic choroidal neovascularisation (myopic CNV). The applicant for this medicinal product is Klinge Biopharma GmbH. Ahzantive is a biosimilar medicinal product that is highly similar to the reference product Eylea. Ahzantive was approved for medical use in the European Union in January 2025. In November 2024, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Baiama, intended for the treatment of neovascular (wet) age-related macular degeneration, visual impairment due to macular edema secondary to retinal vein occlusion (branch RVO or central RVO), visual impairment due to diabetic macular edema (DME) and visual impairment due to myopic choroidal neovascularisation (myopic CNV). The applicant for this medicinal product is Formycon AG. Baiama is a biosimilar medicinal product that is highly similar to the reference product Eylea. Baiama was approved for medical use in the European Union in January 2025. In December 2024, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Eydenzelt, intended for the treatment of neovascular (wet) age-related macular degeneration (AMD), visual impairment due to macular oedema secondary to retinal vein occlusion (branch RVO or central RVO), visual impairment due to diabetic macular oedema (DME) and visual impairment due to myopic choroidal neovascularisation (myopic CNV). The applicant for this medicinal product is Celltrion Healthcare Hungary Kft. Eydenzelt is a biosimilar medicinal product. It is highly similar to the reference product Eylea. Economics In March 2015, aflibercept was one of a group of drugs delisted from the UK Cancer Drugs Fund. In 2017, injections of aflibercept (HCPCS code J0178) were responsible for the most billing to Medicare Part B, at . Research In March 2011, aflibercept failed its primary endpoint of overall survival in the Vital phase III trial for second-line treatment of locally advanced or metastatic non-small cell lung cancer, although it improved the secondary endpoint of progression-free survival. In April 2011, aflibercept improved its primary endpoint of overall survival in the Velour phase III clinical trial for second-line treatment for metastatic colorectal cancer. Aflibercept was also in a phase III trial for hormone-refractory metastatic prostate cancer . A 2016 Cochrane Review examined outcomes comparing aflibercept versus ranibizumab injections in over 2400 people with neovascular AMD, from two randomized controlled trials. Both treatment options yielded similar improvements in visual acuity and morphological outcomes, though the authors note that the aflibercept treatment regimen has the potential to reduce treatment burden and risks from frequent injections. 
A 2017 review update studying the effects of anti-VEGF drugs on diabetic macular edema found that while all three studied treatments have advantages over laser therapy, there was moderate evidence that aflibercept is significantly favored in all measured efficacy outcomes over ranibizumab and bevacizumab, after one year, longer term advantages were unclear. References External links Angiogenesis inhibitors Drugs developed by Bayer Engineered proteins Ophthalmology drugs Sanofi
Aflibercept
[ "Biology" ]
2,553
[ "Angiogenesis", "Angiogenesis inhibitors" ]
17,545,909
https://en.wikipedia.org/wiki/Coates%20graph
In mathematics, the Coates graph or Coates flow graph, named after C.L. Coates, is a graph associated with the Coates' method for the solution of a system of linear equations. The Coates graph Gc(A) associated with an n × n matrix A is an n-node, weighted, labeled, directed graph. The nodes, labeled 1 through n, are each associated with the corresponding row/column of A. If entry aji ≠ 0 then there is a directed edge from node i to node j with weight aji. In other words, the Coates graph for matrix A is the one whose adjacency matrix is the transpose of A. See also Flow graph (mathematics) Mason graph References Application-specific graphs Linear algebra
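A minimal sketch of the construction described above, in Python (the helper name coates_graph_edges is illustrative, not part of any standard library): for every nonzero entry a_ji it emits a directed edge from node i to node j carrying that entry as its weight, so the adjacency matrix of the resulting graph is the transpose of A.

def coates_graph_edges(A):
    # Nodes are labelled 1..n, matching the rows/columns of the n x n matrix A.
    # Entry a_ji != 0 (row j, column i) yields a directed edge i -> j with weight a_ji.
    n = len(A)
    edges = []
    for i in range(n):          # source node (column index i)
        for j in range(n):      # target node (row index j)
            if A[j][i] != 0:
                edges.append((i + 1, j + 1, A[j][i]))
    return edges

# Example: A = [[2, 0], [1, 3]] gives the edges (1 -> 1, weight 2), (1 -> 2, weight 1), (2 -> 2, weight 3).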
Coates graph
[ "Mathematics" ]
160
[ "Linear algebra", "Algebra" ]
17,545,919
https://en.wikipedia.org/wiki/Institute%20of%20Combinatorics%20and%20its%20Applications
The Institute of Combinatorics and its Applications (ICA) is an international scientific organization formed in 1990 to increase the visibility and influence of the combinatorial community. In pursuit of this goal, the ICA sponsors conferences, publishes a bulletin and awards a number of medals, including the Euler, Hall, Kirkman, and Stanton Medals. It is based in Duluth, Minnesota, and its operations office is housed at the University of Minnesota Duluth. The institute was minimally active between 2010 and 2016 and resumed its full activities in March 2016. Membership The ICA has over 800 members in over forty countries. Membership is at three levels. Members are those who have not yet completed a Ph.D. Associate Fellows are younger members who have received the Ph.D. or have published extensively; normally an Associate Fellow should hold the rank of assistant professor. Fellows are expected to be established scholars and typically have the rank of associate professor or higher. Some members are involved in highly theoretical research; there are members whose primary interest lies in education and instruction; and there are members who are heavily involved in the applications of combinatorics in statistical design, communications theory, cryptography, computer security, and other practical areas. Although being a fellow of the ICA is not itself a highly selective honor, the ICA also maintains another class of members, "honorary fellows", people who have made "pre-eminent contributions to combinatorics or its applications". The number of living honorary fellows is limited to ten at any time. The deceased honorary fellows include H. S. M. Coxeter, Paul Erdős, Haim Hanani, Bernhard Neumann, D. H. Lehmer, Leonard Carlitz, Robert Frucht, E. M. Wright, and Horst Sachs. Living honorary fellows include S. S. Shrikhande, C. R. Rao, G. J. Simmons, Vera Sós, Henry Gould, Carsten Thomassen, Neil Robertson, Cheryl Praeger, and R. M. Wilson. Publication The ICA publishes the Bulletin of the ICA, a journal that combines publication of survey and research papers with news of members and accounts of future and past conferences. It appears three times a year, in January, May and September, and usually consists of 128 pages. Beginning in 2017, the research articles in the Bulletin have been made available on an open access basis. Medals The ICA awards the Euler Medals annually for distinguished career contributions to combinatorics by a member of the institute who is still active in research. It is named after the 18th-century mathematician Leonhard Euler. The ICA awards the Hall Medals, named after Marshall Hall, Jr., to recognize outstanding achievements by members who are not over age 40. The ICA awards the Kirkman Medals, named after Thomas Kirkman, to recognize outstanding achievements by members who are within four years past their Ph.D. The winners of the medals for the years between 2010 and 2015 were decided by the ICA Medals Committee between November 2016 and February 2017 after the ICA resumed its activities in 2016. In 2016, the ICA voted to institute an ICA medal to be known as the Stanton Medal, named after Ralph Stanton, in recognition of substantial and sustained contributions, other than research, to promoting the discipline of combinatorics. The Stanton Medal honours significant lifetime contributions to promoting the discipline of combinatorics through advocacy, outreach, service, teaching and/or mentoring. 
At most one medal per year is to be awarded, typically to a Fellow of the ICA. List of Euler Medal winners List of Hall Medal winners List of Kirkman Medal winners List of Stanton Medal winners References External links Official Website Mathematical societies Organizations established in 1990 Organizations based in Winnipeg Mathematics awards
Institute of Combinatorics and its Applications
[ "Technology" ]
779
[ "Science and technology awards", "Mathematics awards" ]
17,546,914
https://en.wikipedia.org/wiki/SAM1
SAM1, or "Semiempirical ab initio Model 1", is a semiempirical quantum chemistry method for computing molecular properties. It is an implementation the general Neglect of Differential Diatomic Overlap (NDDO) integral approximation, and is efficient and accurate. Related methods are AM1, PM3 and the older MNDO. SAM1 was developed by M.J.S. Dewar and co-workers at the University of Texas and the University of Florida. Papers describing the implementation of the method and its results were published in 1993 and 1994. The method is implemented in the AMPAC program produced by Semichem SAM1 builds on the success of the Dewar-style semiempirical models by adding two new aspects to the AM1/PM3 formalism: Two-electron repulsion integrals (TERIs) are computed from a minimal basis set of contracted Gaussian functions, as opposed to the previously used multipole expansion. Note that the NDDO approximation is still in effect, and that only a few of the possible TERIs are explicitly computed. The values of the explicit TERIs are scaled using empirically-derived functions to obtain experimentally relevant results. One-center two-electron repulsion integrals (OCTEs) are derived initially to reproduce atomic properties. These values are then fixed and carried forward as further elemental parameterization proceeds. The performance of SAM1 for C, H, O, N, F, Cl, Br, and I was claimed to be superior to other semiempirical methods. Especially noteworthy were the smaller systematic errors for heats for formation. . See also Semi-empirical quantum chemistry method NDDO References Semiempirical quantum chemistry methods
SAM1
[ "Chemistry" ]
357
[ "Quantum chemistry stubs", "Quantum chemistry", "Theoretical chemistry stubs", "Computational chemistry", "Physical chemistry stubs", "Semiempirical quantum chemistry methods" ]
12,021,272
https://en.wikipedia.org/wiki/Nortel%20Certifications
Nortel Certifications are IT professional certifications for Nortel products and IEEE standard technologies. There were two major levels of certification, Specialist and Expert. Within each level were three sublevels; technology, support, and design. The certification exams were administered at Prometric testing centers. Various training courses were held by Nortel regional training partners, such as Global Knowledge. As Nortel began to exit certain businesses and divest itself, they retired various certification training programs and exams. They retired the GSM-related programs in March 2008, Optical and Carrier VoIP programs in September 2009, and lastly the Enterprise programs in November 2010. Nortel Certified Technology Specialist (NCTS) The Technology certification is a starting point for validating your education of basic technology knowledge, and creates a good foundation to build on for all professional certifications. Nortel Certified Support Specialist (NCSS) The NCSS certification demonstrates application of entry level knowledge with regard to the installation, administration, configuration, maintenance, and troubleshooting of Nortel products and solutions. Nortel Certified Design Specialist (NCDS) The NCDS certification demonstrates application of entry level knowledge with regard to the planning, design, and engineering of a network consisting of Nortel products. Expert Certifications Nortel Certified Technology Expert (NCTE) The NCTE certification builds upon the knowledge required for the NCTS certification. A NCTE demonstrates advanced technical proficiency with underlying network technologies and protocols. Nortel Certified Support Expert (NCSE) The NCSE certification builds upon the knowledge required for the NCSS certification. A NCSE demonstrates application of advanced level knowledge with regard to the installation, administration, configuration, maintenance, and troubleshooting of complex networks consisting of multiple products and networking functionality. Nortel Certified Design Expert (NCDE) The NCDE certification builds upon the knowledge required for the NCDS certification. A NCDE demonstrates application of advanced level knowledge with regard to the planning, design, and engineering of complex networks consisting of multiple products and underlying functionality. Architect Certification Nortel Certified Architect (NCA) The NCA certification was Nortel's highest credential and required expert-level knowledge of all three core areas: technology, design, and support. References External links Nortel's Training and Certification Website Professional titles and certifications Information technology qualifications Nortel
Nortel Certifications
[ "Technology" ]
468
[ "Computer occupations", "Information technology qualifications" ]
12,022,054
https://en.wikipedia.org/wiki/Cannon%27s%20algorithm
In computer science, Cannon's algorithm is a distributed algorithm for matrix multiplication for two-dimensional meshes first described in 1969 by Lynn Elliot Cannon. It is especially suitable for computers laid out in an N × N mesh. While Cannon's algorithm works well in homogeneous 2D grids, extending it to heterogeneous 2D grids has been shown to be difficult. The main advantage of the algorithm is that its storage requirements remain constant and are independent of the number of processors. The Scalable Universal Matrix Multiplication Algorithm (SUMMA) is a more practical algorithm that requires less workspace and overcomes the need for a square 2D grid. It is used by the ScaLAPACK, PLAPACK, and Elemental libraries. Algorithm overview When multiplying two n×n matrices A and B, we need n×n processing nodes arranged in a 2D grid.
// PE(i, j)
k := (i + j) mod N;
a := a[i][k];
b := b[k][j];
c[i][j] := 0;
for (l := 0; l < N; l++) {
  c[i][j] := c[i][j] + a * b;
  concurrently {
    send a to PE(i, (j + N − 1) mod N);
    send b to PE((i + N − 1) mod N, j);
  } with {
    receive a' from PE(i, (j + 1) mod N);
    receive b' from PE((i + 1) mod N, j);
  }
  a := a';
  b := b';
}
We need to select k in every iteration for every Processor Element (PE) so that processors don't access the same data for computing c[i][j]. Therefore processors in the same row / column must begin summation with different indexes. If, for example, PE(0,0) calculates a[0][0]·b[0][0] in the first step, PE(0,1) chooses a[0][1]·b[1][1] first. The selection of k := (i + j) mod n for PE(i,j) satisfies this constraint for the first step. In the first step we distribute the input matrices between the processors based on the previous rule. In the next iterations we choose a new k' := (k + 1) mod n for every processor. This way every processor will continue accessing different values of the matrices. The needed data is then always at the neighbouring processors. A PE(i,j) then needs the a from PE(i,(j + 1) mod n) and the b from PE((i + 1) mod n,j) for the next step. This means that a has to be passed cyclically to the left and b cyclically upwards. The results of the multiplications are summed up as usual. After n steps, each processor has calculated each of the products a[i][k]·b[k][j] once, and their sum is thus the searched c[i][j]. After the initial distribution, each processor only has to store the data for the next step: the intermediate result of the previous sum, an a and a b. This means that all three matrices only need to be stored in memory once, evenly distributed across the processors. Generalisation In practice we have far fewer processors than matrix elements. We can replace the matrix elements with submatrices, so that every processor processes more values. The scalar multiplication and addition then become sequential matrix multiplication and addition. If the p processors are arranged in a √p × √p grid, the width and height of the submatrices will be n/√p. The runtime of the algorithm is the sum of the time for the initial distribution of the matrices in the first step, the time for the sequential calculation of the intermediate results, and the communication time, which is determined by the time it takes to establish a connection and the time to transmit a byte. A disadvantage of the algorithm is that there are many connection setups, with small message sizes. It would be better to be able to transmit more data in each message. See also Systolic array References External links Distributed algorithms Matrix multiplication algorithms Mesh networking
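The following is a minimal sequential simulation of the generalised (block) form of the algorithm, written as a sketch in Python with NumPy; the function name cannon_matmul and the assumption that the matrix dimension is divisible by the grid width q are illustrative choices, not part of the original formulation. Each entry Ab[i][j] plays the role of the block held by PE(i, j); the initial skewing and the cyclic left/up shifts mirror the send/receive steps of the pseudocode above.

import numpy as np

def cannon_matmul(A, B, q):
    # Simulate a q x q grid of processors, each holding one (n/q) x (n/q) block.
    n = A.shape[0]
    assert A.shape == B.shape == (n, n) and n % q == 0
    s = n // q
    blocks = lambda M: [[M[i*s:(i+1)*s, j*s:(j+1)*s].copy() for j in range(q)] for i in range(q)]
    Ab, Bb = blocks(A), blocks(B)
    Cb = [[np.zeros((s, s)) for _ in range(q)] for _ in range(q)]
    # Initial alignment: PE(i, j) starts with A-block (i, i+j) and B-block (i+j, j).
    Ab = [[Ab[i][(i + j) % q] for j in range(q)] for i in range(q)]
    Bb = [[Bb[(i + j) % q][j] for j in range(q)] for i in range(q)]
    for _ in range(q):
        # Local multiply-accumulate on every PE.
        for i in range(q):
            for j in range(q):
                Cb[i][j] += Ab[i][j] @ Bb[i][j]
        # Shift A-blocks one step to the left and B-blocks one step upwards (cyclically).
        Ab = [[Ab[i][(j + 1) % q] for j in range(q)] for i in range(q)]
        Bb = [[Bb[(i + 1) % q][j] for j in range(q)] for i in range(q)]
    return np.block(Cb)

# Usage check:
# A, B = np.random.rand(6, 6), np.random.rand(6, 6)
# assert np.allclose(cannon_matmul(A, B, 3), A @ B)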
Cannon's algorithm
[ "Technology" ]
826
[ "Wireless networking", "Mesh networking" ]
12,023,302
https://en.wikipedia.org/wiki/Monoblepharidomycetes
Members of the Monoblepharidomycetes have a filamentous thallus that is either extensive or simple and unbranched. They frequently have a holdfast at the base. In contrast to other taxa in their phylum, some reproduce using autospores, although many do so through zoospores. Oogamous sexual reproduction may also occur. In addition to the type genus, the order Monoblepharidales includes Harpochytrium and Oedogoniomyces. Taxonomy Based on the work of "The Mycota: A Comprehensive Treatise on Fungi as Experimental Systems for Basic and Applied Research" and synonyms from "Part 1- Virae, Prokarya, Protists, Fungi". Class Monoblepharidomycetes Schaffner 1909 Order Monoblepharidales Schröter 1883 Family Gonapodyaceae Petersen 1909 Genus Gonapodya Fischer 1892 Genus Monoblepharella Sparrow 1940 Family Harpochytriaceae Emerson & Whisler 1984 Genus Harpochytrium Lagerheim 1890 [Fulminaria Gobi 1900; Rhabdium Dangeard 1903 non Wallroth 1833 non Schrammen 1936 non Schaum 1859] Family Monoblepharidaceae Fischer 1892 Genus Monoblepharis Cornu 1871 [Diblepharis Lagerheim 1900; Monoblephariopsis Laibach 1927] Family Oedogoniomycetaceae Barr 1990 Genus Oedogoniomyces Kobayasi & Ôkubo 1954 Family Telasphaerulaceae Longcore et T.Y. James 2017 Genus Telasphaerula Longcore et T.Y. James 2017 References Chytridiomycota
Monoblepharidomycetes
[ "Biology" ]
351
[ "Fungus stubs", "Fungi" ]
12,023,401
https://en.wikipedia.org/wiki/Dermaseptin
Dermaseptins are a family of peptides isolated from skin of the frog genus Phyllomedusa. The sequence of the dermaseptins varies greatly but due to the presence of lysine residues all are cationic and most have the potential to form amphipathic helices in water or when integrated with the lipid bilayer of the bacterial membrane. Clear separation of two lobes of positive and negative intramolecular electrostatic potential is thought to be important in cytotoxic activity. Dermaseptins are typically 27-34 amino acid residues in length and were the first vertebrate peptides demonstrated as having a lethal effect on the filamentous fungi implicated in severe opportunistic infections accompanying immunodeficiency syndrome and immunosuppressive drug therapy. Dermaseptin use in a novel drug delivery system has been proposed. The system is based on the affinity of dermaseptins for the plasma membrane of human erythrocytes. After transient loading of the cells with the non-toxic dermaseptin S4 analogue K4–S4(1–13)a, the peptide is transported in the systemic circulation to distant microbial targets. Upon reaching a microorganism for which it has greater affinity the dermaseptin derivative is spontaneously transferred to the microbial membrane where it exerts its membrane-lytic activity. See also Antimicrobial resistance References Amphibian toxins Peptides
Dermaseptin
[ "Chemistry" ]
303
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
12,024,142
https://en.wikipedia.org/wiki/Tube%20furnace
A tube furnace is an electric heating device used to conduct syntheses and purifications of inorganic compounds and occasionally in organic synthesis. One possible design consists of a cylindrical cavity surrounded by heating coils that are embedded in a thermally insulating matrix. Temperature can be controlled via feedback from a thermocouple. More elaborate tube furnaces have two (or more) heating zones useful for transport experiments. Some digital temperature controllers provide an RS-232 interface, and permit the operator to program segments for uses like ramping, soaking, sintering, and more. Advanced materials in the heating elements, such as molybdenum disilicide (MoSi2) offered in certain models can now produce working temperatures up to 1800 °C. This facilitates more sophisticated applications. Common material for the reaction tubes include alumina, Pyrex, and fused quartz, or in the case of corrosive materials molybdenum or tungsten tubes can be used. The tube furnace was invented in the first decade of the 20th century and was originally used to manufacture ceramic filaments for Nernst lamps and glowers. Illustrative applications An example of a material prepared using a tube furnace is the superconductor YBa2Cu3O7. A mixture of finely powdered CuO, BaO, and Y2O3, in the appropriate molar ratio, contained in a platinum or alumina "boat," is heated in a tube furnace at several hundred degrees under flowing oxygen. Similarly tantalum disulfide is prepared in a tube furnace followed by purification, also in a tube furnace using the technique of chemical vapor transport. Because of the availability of tube furnaces, chemical vapor transport has become a popular technique not only in industry (see van Arkel–de Boer process) but also in the research laboratory. Tube furnaces can also be used for thermolysis reactions, involving either organic or inorganic reactants. One such example is the preparation of ketenes which may employ a tube furnace in the 'ketene lamp'. Flash vacuum pyrolysis often utilize a fused quartz tube, usually packed with quartz or ceramic beads, which is heated at high temperatures. References Furnaces Laboratory equipment Kilns ru:Трубчатая_печь
Tube furnace
[ "Chemistry", "Engineering" ]
475
[ "Chemical equipment", "Furnaces", "Kilns", "Combustion engineering" ]
12,024,353
https://en.wikipedia.org/wiki/Blastocladiomycota
Blastocladiomycota is one of the currently recognized phyla within the kingdom Fungi. Blastocladiomycota was originally the order Blastocladiales within the phylum Chytridiomycota until molecular and zoospore ultrastructural characters were used to demonstrate it was not monophyletic with Chytridiomycota. The order was first erected by Petersen for a single genus, Blastocladia, which was originally considered a member of the oomycetes. Accordingly, members of Blastocladiomycota are often referred to colloquially as "chytrids." However, some feel "chytrid" should refer only to members of Chytridiomycota. Thus, members of Blastocladiomycota are commonly called "blastoclads" by mycologists. Alternatively, members of Blastocladiomycota, Chytridiomycota, and Neocallimastigomycota lumped together as the zoosporic true fungi. Blastocladiomycota contains 5 families and approximately 12 genera. This early diverging branch of kingdom Fungi is the first to exhibit alternation of generations. As well, two (once) popular model organisms—Allomyces macrogynus and Blastocladiella emersonii—belong to this phylum. Morphology Morphology in Blastocladiomycota varies greatly. For example, members of Coelomycetaceae are simple, unwalled, and plasmodial in nature. Some species in Blastocladia are monocentric, like the chytrids, while others are polycentric. The most remarkable are those members, such as Allomyces that demonstrate determinant, differentiated growth. Reproduction/life cycle Sexual reproduction As stated above, some members of Blastocladiomycota exhibit alternation of generations. Members of this phylum also exhibit a form of sexual reproduction known as anisogamy. Anisogamy is the fusion of two sexual gametes that differ in morphology, usually size. In Allomyces, the thallus (body) is attached by rhizoids, and has an erect trunk on which reproductive organs are formed at the end of branches. During the haploid phase, the thallus forms male and female gametangia that release flagellated gametes. Gametes attract one another using pheromones and eventually fuse to form a Zygote. The germinated zygote produces a diploid thallus with two types of sporangia: thin-walled zoosporangia and thick walled resting spores (or sporangia). The thin walled sporangia release diploid zoospores. The resting spore serves as a means of enduring unfavorable conditions. When conditions are favorable again, meiosis occurs and haploid zoospores are released. These germinate and grow into haploid thalli that will produce “male” and “female” gametangia and gametes. Asexual reproduction Similar to Chytridiomycota, members of Blastocladiomycota produce asexual zoospores to colonize new substrates. In some species, a curious phenomenon has been observed in the asexual zoospores. From time to time, asexual zoospores will pair up and exchange cytoplasm but not nuclei. Ecological roles Similar to Chytridiomycota, members of Blastocladiomycota are capable of growing on refractory materials, such as pollen, keratin, cellulose, and chitin. The best known species, however, are the parasites. Members of Catenaria are parasites of nematodes, midges, crustaceans, and even another blastoclad, Coelomyces. Members of the genus Physoderma and Urophlyctis are obligate plant parasites. Of economic importance is Physoderma maydis, a parasite of maize and the causal agent of brown spot disease. Also of importance are the species of Urophlyctis that parasitize alfalfa. 
However, ecologically, Physoderma are important parasites of many aquatic and marsh angiosperms. Also of human interest, for health reasons, are members of Coelomomyces, an unusual parasite of mosquitoes that requires an alternate crustacean host (the same one parasitized by members of Catenaria) to complete its life cycle. Others that are ecologically interesting include a parasite of water bears and the zooplankter Daphnia. Taxonomy Based on the work of Philippe Silar and "The Mycota: A Comprehensive Treatise on Fungi as Experimental Systems for Basic and Applied Research" and synonyms from "Part 1- Virae, Prokarya, Protists, Fungi". Phylum Blastocladiomycota Tehler, 1988 ex James 2006 [Allomycota Cavalier-Smith 1981; Allomycotina Cavalier-Smith 1998] Class Physodermatomycetes Tedersoo et al. 2018 Order Physodermatales Cavalier-Smith 2012 [Physodermatineae] Family Physodermataceae [Urophlyctidaceae Schroeter 1886] Genus Paraphysoderma Boussiba, Zarka & James 2011 Genus Physoderma Wallroth 1833 [Oedomyces Saccardo ex Trabut 1894] Genus Urophlyctis Schröter 1886 Class Blastocladiomycetes James 2006 [Allomycetes Cavalier-Smith 1998] Order Blastocladiales Petersen 1909 sensu Cavalier-Smith 2012 [Allomycales; Blastocladiineae] Genus Endoblastidium Codreanu 1931 Genus Polycaryum Stempell 1901 Genus Nematoceromyces Doweld 2013 Genus Blastocladiella Matthews 1937 [Clavochytridium Couch & Cox 1939; Sphaerocladia Stüben 1939] Family Coelomomycetaceae Couch 1962 Genus Callimastix Weissenberg 1912 Genus Coelomycidium Debaisieux 1919 Genus Coelomomyces Keilin 1921 [Zografia Bogayavlensky 1922] Family Sorochytriaceae Dewel 1985 Genus Sorochytrium Dewel 1985 Family Catenariaceae Couch 1945 Genus Catenaria Sorokin 1889 non Roussel 1806 [Perirhiza Karling 1946] Genus Catenophlyctis Karling 1965 Family Blastocladiaceae Petersen 1909 Genus Blastocladiopsis Sparrow 1950 Genus Microallomyces Emerson & Robertson 1974 Genus Blastocladia Reinsch 1877 Genus Allomyces Butler 1911 [Septocladia Coker & Grant 1922] References External links Aquatic fungi Fungus phyla Fungi by classification
Blastocladiomycota
[ "Biology" ]
1,427
[ "Fungi", "Eukaryotes by classification", "Fungi by classification" ]
12,024,508
https://en.wikipedia.org/wiki/Small-bias%20sample%20space
In theoretical computer science, a small-bias sample space (also known as -biased sample space, -biased generator, or small-bias probability space) is a probability distribution that fools parity functions. In other words, no parity function can distinguish between a small-bias sample space and the uniform distribution with high probability, and hence, small-bias sample spaces naturally give rise to pseudorandom generators for parity functions. The main useful property of small-bias sample spaces is that they need far fewer truly random bits than the uniform distribution to fool parities. Efficient constructions of small-bias sample spaces have found many applications in computer science, some of which are derandomization, error-correcting codes, and probabilistically checkable proofs. The connection with error-correcting codes is in fact very strong since -biased sample spaces are equivalent to -balanced error-correcting codes. Definition Bias Let be a probability distribution over . The bias of with respect to a set of indices is defined as where the sum is taken over , the finite field with two elements. In other words, the sum equals if the number of ones in the sample at the positions defined by is even, and otherwise, the sum equals . For , the empty sum is defined to be zero, and hence . ϵ-biased sample space A probability distribution over is called an -biased sample space if holds for all non-empty subsets . ϵ-biased set An -biased sample space that is generated by picking a uniform element from a multiset is called -biased set. The size of an -biased set is the size of the multiset that generates the sample space. ϵ-biased generator An -biased generator is a function that maps strings of length to strings of length such that the multiset is an -biased set. The seed length of the generator is the number and is related to the size of the -biased set via the equation . Connection with epsilon-balanced error-correcting codes There is a close connection between -biased sets and -balanced linear error-correcting codes. A linear code of message length and block length is -balanced if the Hamming weight of every nonzero codeword is between and . Since is a linear code, its generator matrix is an -matrix over with . Then it holds that a multiset is -biased if and only if the linear code , whose columns are exactly elements of , is -balanced. Constructions of small epsilon-biased sets Usually the goal is to find -biased sets that have a small size relative to the parameters and . This is because a smaller size means that the amount of randomness needed to pick a random element from the set is smaller, and so the set can be used to fool parities using few random bits. Theoretical bounds The probabilistic method gives a non-explicit construction that achieves size . The construction is non-explicit in the sense that finding the -biased set requires a lot of true randomness, which does not help towards the goal of reducing the overall randomness. However, this non-explicit construction is useful because it shows that these efficient codes exist. On the other hand, the best known lower bound for the size of -biased sets is , that is, in order for a set to be -biased, it must be at least that big. Explicit constructions There are many explicit, i.e., deterministic constructions of -biased sets with various parameter settings: achieve . The construction makes use of Justesen codes (which is a concatenation of Reed–Solomon codes with the Wozencraft ensemble) as well as expander walk sampling. 
achieve . One of their constructions is the concatenation of Reed–Solomon codes with the Hadamard code; this concatenation turns out to be an -balanced code, which gives rise to an -biased sample space via the connection mentioned above. Concatenating Algebraic geometric codes with the Hadamard code gives an -balanced code with . achieves . achieves which is almost optimal because of the lower bound. These bounds are mutually incomparable. In particular, none of these constructions yields the smallest -biased sets for all settings of and . Application: almost k-wise independence An important application of small-bias sets lies in the construction of almost k-wise independent sample spaces. k-wise independent spaces A random variable over is a k-wise independent space if, for all index sets of size , the marginal distribution is exactly equal to the uniform distribution over . That is, for all such and all strings , the distribution satisfies . Constructions and bounds k-wise independent spaces are fairly well understood. A simple construction by achieves size . construct a k-wise independent space whose size is . prove that no k-wise independent space can be significantly smaller than . Joffe's construction constructs a -wise independent space over the finite field with some prime number of elements, i.e., is a distribution over . The initial marginals of the distribution are drawn independently and uniformly at random: . For each with , the marginal distribution of is then defined as where the calculation is done in . proves that the distribution constructed in this way is -wise independent as a distribution over . The distribution is uniform on its support, and hence, the support of forms a -wise independent set. It contains all strings in that have been extended to strings of length using the deterministic rule above. Almost k-wise independent spaces A random variable over is a -almost k-wise independent space if, for all index sets of size , the restricted distribution and the uniform distribution on are -close in 1-norm, i.e., . Constructions give a general framework for combining small k-wise independent spaces with small -biased spaces to obtain -almost k-wise independent spaces of even smaller size. In particular, let be a linear mapping that generates a k-wise independent space and let be a generator of an -biased set over . That is, when given a uniformly random input, the output of is a k-wise independent space, and the output of is -biased. Then with is a generator of an -almost -wise independent space, where . As mentioned above, construct a generator with , and construct a generator with . Hence, the concatenation of and has seed length . In order for to yield a -almost k-wise independent space, we need to set , which leads to a seed length of and a sample space of total size . Notes References Pseudorandomness Theoretical computer science
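As a concrete illustration of the definitions above, the following sketch (Python; the helper names bias and max_bias are illustrative) computes the empirical bias of a multiset of n-bit strings with respect to an index set, and then the maximum over all non-empty index sets by brute force, which is only feasible for small n. A multiset is ε-biased exactly when this maximum is at most ε; for the uniform distribution over all of {0,1}^n the maximum is 0.

from itertools import combinations

def bias(samples, I):
    # |Pr[parity over I is 0] - Pr[parity over I is 1]| for the empirical distribution
    # given by the multiset `samples` of 0/1 tuples.
    parities = [sum(x[i] for i in I) % 2 for x in samples]
    p_one = sum(parities) / len(parities)
    return abs((1 - p_one) - p_one)

def max_bias(samples):
    # Maximum bias over all non-empty index sets (exponential in n; illustration only).
    n = len(samples[0])
    return max(bias(samples, I)
               for r in range(1, n + 1)
               for I in combinations(range(n), r))

# The uniform distribution over {0,1}^3, written out as all eight strings, has bias 0
# with respect to every non-empty index set:
# samples = [tuple((v >> k) & 1 for k in range(3)) for v in range(8)]
# max_bias(samples) == 0.0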
Small-bias sample space
[ "Mathematics" ]
1,334
[ "Theoretical computer science", "Applied mathematics" ]
12,024,873
https://en.wikipedia.org/wiki/G12/G13%20alpha%20subunits
{{DISPLAYTITLE:G12/G13 alpha subunits}} G12/G13 alpha subunits are alpha subunits of heterotrimeric G proteins that link cell surface G protein-coupled receptors primarily to guanine nucleotide exchange factors for the Rho small GTPases to regulate the actin cytoskeleton. Together, these two proteins comprise one of the four classes of G protein alpha subunits. G protein alpha subunits bind to guanine nucleotides and function in a regulatory cycle, and are active when bound to GTP but inactive and associated with the G beta-gamma complex when bound to GDP. G12/G13 are not targets of pertussis toxin or cholera toxin, as are other classes of G protein alpha subunits. G proteins G12 and G13 regulate actin cytoskeletal remodeling in cells during movement and migration, including cancer cell metastasis. G13 is also essential for receptor tyrosine kinase-induced migration of fibroblast and endothelial cells. Genes GNA12 () GNA13 See also Second messenger system G protein-coupled receptor Heterotrimeric G protein Gs alpha subunit Gi alpha subunit Gq alpha subunit Rho family of GTPases References External links Peripheral membrane proteins
G12/G13 alpha subunits
[ "Chemistry" ]
272
[ "G proteins", "Signal transduction" ]
12,024,999
https://en.wikipedia.org/wiki/IQ%20motif%20containing%20GTPase%20activating%20protein
IQ motif containing GTPase activating protein (IQGAP) is a carrier protein. It is associated with the Rho GTP-binding protein. Genes IQGAP1, IQGAP2, IQGAP3 See also IQ calmodulin-binding motif References GTP-binding protein regulators
IQ motif containing GTPase activating protein
[ "Chemistry" ]
65
[ "Biochemistry stubs", "Protein stubs" ]
12,026,133
https://en.wikipedia.org/wiki/East-Asian%20Planet%20Search%20Network
East-Asian Planet Search Network (EAPSNET) is an international collaboration between China, Japan, and Korea. Each facility, BOAO (Korea), Xinglong (China), and OAO (Japan), has a 2m class telescope, a high dispersion echelle spectrograph, and an iodine absorption cell for precise RV measurements, looking for extrasolar planets. Discovery NOTE: HD 119445 b is a brown dwarf candidate. References Exoplanet search projects Science and technology in East Asia
East-Asian Planet Search Network
[ "Astronomy" ]
111
[ "Astronomy projects", "Exoplanet search projects" ]
12,026,220
https://en.wikipedia.org/wiki/Carcinogenic%20bacteria
Cancer bacteria are infectious bacterial organisms that are known or suspected to cause cancer. While cancer-associated bacteria have long been considered to be opportunistic (i.e., infecting healthy tissues after cancer has already established itself), there is some evidence that bacteria may be directly carcinogenic. Evidence has shown that specific stages of cancer can be associated with pathogenic bacteria. The strongest evidence to date involves the bacterium H. pylori and its role in gastric cancer. Oncoviruses are viral agents that are similarly suspected of causing cancer. Known to cause cancer Helicobacter pylori colonizes the human stomach and duodenum. It is described as a Class 1 carcinogen. In some cases it can cause stomach cancer and MALT lymphoma. Animal models have demonstrated Koch's third and fourth postulates for the role of Helicobacter pylori in the causation of stomach cancer. The mechanism by which H. pylori causes cancer may involve chronic inflammation or the direct action of some of its virulence factors; for example, CagA has been implicated in carcinogenesis. Another bacterium in this genus is Helicobacter hepaticus, which causes hepatitis and liver cancer in mice. Chronic inflammation Chronic inflammation contributes to the pathogenesis of several types of malignant diseases, but it is particularly important for H. pylori. Following an H. pylori infection, many circulating immune cells, including neutrophils, are recruited to the infection site. To destroy the pathogens, neutrophils produce substances with antimicrobial activities, such as oxidants like reactive oxygen species (ROS) and reactive nitrogen species (RNS). H. pylori can survive the induced oxidative stress by producing antioxidant enzymes such as catalase. However, the overproduction of ROS and RNS induces various types of DNA damage in the infected gastric cells. At the same time, H. pylori is known to down-regulate major DNA repair pathways. As a result, genomic and mitochondrial mutations accumulate, leading to genomic instability - a well-known hallmark of cancer - in the gastric cells. CagA The virulence factor CagA in H. pylori has been linked to the development of gastric cancer. Once CagA is injected into the cytoplasm, it can change gastric cell signaling in both a phosphorylation-dependent and -independent manner. Phosphorylated CagA affects cell adhesion, spreading and migration, but can also induce the release of the proinflammatory chemokine IL-8. Additionally, interactions of the CRPIA motif in non-phosphorylated CagA were shown to lead to the persistent activation of the PI3K/Akt pathway, a pathway that is often overly active in many human cancers. This leads to the activation of the pro-inflammatory NF-κB and β-catenin pathways as well as increased gastric cell proliferation. Furthermore, CagA has also been found to increase tumor suppressor gene hypermethylation, thereby inhibiting the tumor suppressor genes. This is achieved by upregulating the methyltransferase DNMT1 via the AKT–NF-κB pathway. Lastly, CagA also induces the expression of the enzyme spermine oxidase (SMOX), which converts spermine to spermidine. As a by-product, H2O2 is produced, which causes ROS accumulation and contributes to the oxidative stress that the gastric cells experience during chronic inflammation. Speculative links A number of bacteria have associations with cancer, although their possible role in carcinogenesis is unclear. 
Salmonella Typhi has been linked to gallbladder cancer but may also be useful in delivering chemotherapeutic agents for the treatment of melanoma, colon and bladder cancer. Bacteria found in the gut may be related to colon cancer but may be more complicated due to the role of chemoprotective probiotic cancers. Microorganisms and their metabolic byproducts, or impact of chronic inflammation, may also be linked to oral cancers. The relationship between cancer and bacteria may be complicated by different individuals reacting in different ways to different cancers. History In 1890, the Scottish pathologist William Russell reported circumstantial evidence for the bacterial cause of cancer. In 1926, Canadian physician Thomas Glover reported that he could consistently isolate a specific bacterium from the neoplastic tissues of animals and humans. One review summarized Glover's report as follows: Glover was asked to continue his work at the Public Health Service (later incorporated into the National Institutes of Health) completing his studies in 1929 and publishing his findings in 1930. He asserted that a vaccine or anti-serum manufactured from his bacterium could be used to treat cancer patients with varying degrees of success. According to historical accounts, scientists from the Public Health Service challenged Glover's claims and asked him to repeat his research to better establish quality control. Glover refused and opted to continue his research independently; not seeking consensus, Glover's claims and results led to controversy and are today not given serious merit. In 1950, a Newark-based physician named Virginia Livingston published a paper claiming that a specific Mycobacterium was associated with neoplasia. Livingston continued to research the alleged bacterium throughout the 1950s and eventually proposed the name Progenitor cryptocides as well as developed a treatment protocol. Ultimately, her claim of a universal cancer bacterium was not supported in follow up studies. In 1990 the National Cancer Institute published a review of Livingston's theories, concluding that her methods of classifying the cancer bacterium contained "remarkable errors" and it was actually a case of misclassification - the bacterium was actually Staphylococcus epidermidis. Other researchers and clinicians who worked with the theory that bacteria could cause cancer, especially from the 1930s to the 1960s, included Eleanor Alexander-Jackson, William Coley, William Crofton, Gunther Enderlein, Franz Gerlach, Josef Issels, Elise L'Esperance, Milbank Johnson, Arthur Kendall, Royal Rife, Florence Seibert, Wilhelm von Brehmer, and Ernest Villequez. Alexander-Jackson and Seibert worked with Virginia Livingston. Some of the researchers published reports that also claimed to have found bacteria associated with different types of cancers. See also List of oncogenic bacteria Infectious causes of cancer List of human diseases associated with infectious pathogens Oncovirus References Bacteria Infectious causes of cancer
Carcinogenic bacteria
[ "Biology" ]
1,368
[ "Prokaryotes", "Microorganisms", "Bacteria" ]
12,026,279
https://en.wikipedia.org/wiki/Pancreatic%20ribonuclease%20family
Pancreatic ribonuclease family (, RNase, RNase I, RNase A, pancreatic RNase, ribonuclease I, endoribonuclease I, ribonucleic phosphatase, alkaline ribonuclease, ribonuclease, gene S glycoproteins, Ceratitis capitata alkaline ribonuclease, SLSG glycoproteins, gene S locus-specific glycoproteins, S-genotype-assocd. glycoproteins, ribonucleate 3'-pyrimidino-oligonucleotidohydrolase) is a superfamily of pyrimidine-specific endonucleases found in high quantity in the pancreas of certain mammals and of some reptiles. Specifically, the enzymes are involved in endonucleolytic cleavage of 3'-phosphomononucleotides and 3'-phosphooligonucleotides ending in C-P or U-P with 2',3'-cyclic phosphate intermediates. Ribonuclease can unwind the RNA helix by complexing with single-stranded RNA; the complex arises by an extended multi-site cation-anion interaction between lysine and arginine residues of the enzyme and phosphate groups of the nucleotides. Notable family members Bovine pancreatic ribonuclease is the best-studied member of the family and has served as a model system in work related to protein folding, disulfide bond formation, protein crystallography and spectroscopy, and protein dynamics. The human genome contains 8 genes that share the structure and function with bovine pancreatic ribonuclease, with 5 additional pseudo-genes. The structure and dynamics of these enzymes are related to their diverse biological functions. Other proteins belonging to the pancreatic ribonuclease superfamily include: bovine seminal vesicle and brain ribonucleases; kidney non-secretory ribonucleases; liver-type ribonucleases; angiogenin, which induces vascularisation of normal and malignant tissues; eosinophil cationic protein, a cytotoxin and helminthotoxin with ribonuclease activity; and frog liver ribonuclease and frog sialic acid-binding lectin. The sequence of pancreatic ribonucleases contains four conserved disulfide bonds and three amino acid residues involved in the catalytic activity. Human genes Human genes encoding proteins containing this domain include: ANG, RNASE1, RNASE10, RNASE12, RNASE2, RNASE3, RNASE4, RNASE6, RNASE7, and RNASE8. Cytotoxicity Some members of the pancreatic ribonuclease family have cytotoxic effects. Mammalian cells are protected from these effects due to their extremely high affinity for ribonuclease inhibitor (RI), which protects cellular RNA from degradation by pancreatic ribonucleases. Pancreatic ribonucleases that are not inhibited by RI are approximately as toxic as alpha-sarcin, diphtheria toxin, or ricin. Two pancreatic ribonucleases isolated from the oocytes of the Northern leopard frog - amphinase and ranpirnase - are not inhibited by RI and show differential cytotoxicity against tumor cells. Ranpirnase was studied in a Phase III clinical trial as a treatment candidate for mesothelioma, but the trial did not demonstrate statistical significance against primary endpoints. References Ribonucleases EC 3.1.27 Protein domains
Pancreatic ribonuclease family
[ "Biology" ]
795
[ "Protein domains", "Protein classification" ]
12,026,653
https://en.wikipedia.org/wiki/Round%20Oak%20Steelworks
The Round Oak Steelworks was a steel production plant in Brierley Hill, West Midlands (formerly Staffordshire), England. It was founded in 1857 by Lord Ward, who later became, in 1860, The 1st Earl of Dudley, as an outlet for pig iron made in the nearby blast furnaces. During the Industrial Revolution, the majority of iron-making in the world was carried out within 32 kilometres of Round Oak. For the first decades of operation, the works produced wrought iron. However, in the 1890s, steelmaking was introduced. At its peak, thousands of people were employed at the works. The steelworks was the first in the United Kingdom to be converted to natural gas, which was supplied from the North Sea. The works were nationalized in 1951, privatized in 1953 and nationalized again in 1967 although the private firm Tube Investments continued to part manage the operations at the site. The steelworks closed in December 1982. History The Round Oak Iron Works The Ward family, Lords of Dudley Castle, came to own and control a wide range of industrial concerns in the Black Country of the nineteenth century. The family owned land in the region as well as extensive mineral rights. In 1855, the Dudley Estate commenced the construction of the Round Oak Iron Works at Brierley Hill under the supervision of the estate's mineral agent, Richard Smith. The site was next to the Dudley Canal and two railway systems: the public railway run by the Oxford, Worcester and Wolverhampton Railway and the Pensnett Railway, a mineral line owned by the Dudley Estate itself. Also nearby were the Level New Furnaces (also known as the New Level Furnaces) where blast furnaces, owned by the Dudley Estate could supply pig iron for the new iron works at Round Oak. The iron works commenced production in 1857. It was a large-scale operation: on its opening it employed 600 men, and the equipment included 28 puddling furnaces and five mills. In 1862, the works won a Prize Medal at the International Exhibition. The works were extended between 1865 and 1868, and were then capable of producing 550 tons of finished iron per week. In 1889, the company started producing chains and ships' cables. Steel production Demand for iron began to fall from the 1870s as steel production began to compete with traditional iron products, and it was decided to convert the plant to steel production. In 1890, the Dudley Estate sold the iron works to a new public company, which would aim to convert to steel production. The price of the works was set at £110,000 of which £10,000 was paid in cash and the remainder by a mortgage provided by the Dudley estate itself. The company passed to the Lancashire Trust and Mortgage Insurance Corporation Limited, which floated the shares in the firm to the public, receiving £135,000. The new company was called the Earl of Dudley's Round Oak Iron and Steel Works and was incorporated on 16 April 1891. The chairman of the new company was Mr Richard Dalgleish and the managing director was Mr R. Smith Casson. Steel was first produced in August 1894. However, the company had run into financial difficulty and on 26 November 1894, the company went into liquidation, resulting in repossession by the Dudley Estate. A new company, The Earl of Dudley's Round Oak Works Ltd, was established on 15 July 1897 under the ownership of the Dudley family. The chairman of the new company was The 2nd Earl of Dudley, and the managing director was George Hatton. 
The steel plant, built next to the ironworks, included three 17-ton furnaces of the open-hearth type, a 30-inch cogging mill and a 28-inch finishing mill. In 1904, the works were described as consisting of: "iron works for the manufacture of high-class bar iron ; chain works for the manufacture of chain ; and steel works for the manufacture of Siemens-Martin steel in bars of every variety of section". It was also stated that the steelworks "comprise five large open-hearth-steel melting furnaces, standing in a shop 350 ft. long by 90 ft. wide." The Bertrand-Thiel process of making steel was being used at the works. The works prospered until just after the First World War when the firm faced a financial crisis due to a national depression combined with weaknesses at the plant itself It was the Earl's son, the then Viscount Ednam, who tackled the problems by taking specialist financial advice. In 1923, a new board of directors was constituted. In 1927, it was reported that the equipment at the plant included a 90-ton tilting furnace, two 50-ton and three 40-ton fixed open-hearth furnaces, in addition to a 30-inch cogging mill and a 28-inch finishing mill. The company's name was changed to Round Oak Steel Works Limited on 14 December 1936. At the end of the Second World War, it was found necessary to carry out a modernisation of the plant, which cost over £4,000,000, the financing coming from the Finance Corporation for Industry Limited. The works were nationalised by the British Government in 1951, but were sold to Tube Investments in 1953. Tube Investments paid £1.4 million and took responsibility for the repayment of loans totaling £4.2million. The chairman of Tube Investments, Ivan Stedeford, took over as chairman of Round Oak Steel Works. Steel was produced at the works using basic electric arc and open hearth methods. Principal products included alloy and carbon steel bars (case hardening, bright drawing, free cutting, machining, hot and cold forging), special sections, railway bearing plates, rounds, squares, flats, angles, channels, joists, billets, blooms, slabs and large forging ingots. Round Oak manufactured a weldable extra high-strength steel under the brand name, 'Thirty-Oak'. The 3½ mile-long railway between the steelworks and Baggeridge closed in September 1966. The plant was re-nationalized in 1967, becoming part of British Steel although the works continued to be part managed by Tube Investments. By the 1970s, the plant's future was in doubt and the workforce was shrinking. The plant had employed around 3,000 workers at its peak, but by 1982 that figure had fallen to around 1,200. An image of the works was used for the cover art for the Depeche Mode album Some Great Reward released in 1984. Closure By the late 1970s, jobs were being axed at the plant, which had employed around 2,500 people at its peak, and British Steel was planning to close it down completely. The plant finally closed on 23 December 1982, after 125 years of steel production, with the loss more than 1,200 jobs. Brierley Hill had already been hit hard by the economic downturn of the late 1970s and early 1980s, but the closure of Round Oak saw unemployment in the town peak at around 25% — one of the highest rates of any town or city in Britain at this time. The closure came in spite of a fierce argument by local Conservative MP John Blackburn that the plant was still profitable and should be retained. 
Demolition work took place during 1984, when the land was purchased by Don and Roy Richardson, who in October that year were given the go-ahead to build a shopping complex on nearby farmland. Redevelopment of the site The farmland which stood in the shadow of Round Oak Steelworks was designated by the Government as an Enterprise Zone in 1980, being extended to include the site of the works in 1984 — the same year that the Round Oak buildings were demolished. In October 1984, Dudley Metropolitan Borough Council approved the plans of local twin brothers Don and Roy Richardson to build a retail park and shopping mall on the farmland. The first retail units were occupied in the autumn of 1985, and by April 1986 the first phase of the Merry Hill Shopping Centre had been completed. The site was gradually expanded until the final phase opened in November 1989. Merry Hill brought thousands of jobs to the local area and spearheaded a region-wide transition from manufacturing to services as the key employer of local workers, although many of the new shopping centre's jobs were occupied by people who had worked in other locations, mostly Dudley town centre, until the retailers decided to relocate to new units at Merry Hill. The first businesses did not move onto the steelworks site until December 1990, when new offices were completed as part of the Waterfront development. Despite the closure of the works in 1982, a steel terminal was opened on the adjacent railway in August 1986 and is still in use. References External links Archive images from the Express & Star Buildings and structures in the Metropolitan Borough of Dudley Ironworks and steelworks in England Brierley Hill 1857 establishments in England 1982 disestablishments in England
Round Oak Steelworks
[ "Chemistry" ]
1,787
[ "Metallurgical industry of the United Kingdom", "Metallurgical industry by country" ]
12,026,760
https://en.wikipedia.org/wiki/Oligonucleotidase
Oligonucleotidase (, oligoribonuclease) is an exoribonuclease derived from Flammulina velutipes. This enzyme catalyses the following chemical reaction 3'-end directed exonucleolytic cleavage of viral RNA-DNA hybrid References External links EC 3.1.13
Oligonucleotidase
[ "Chemistry", "Biology" ]
74
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
12,026,951
https://en.wikipedia.org/wiki/RiteSite
RiteSite.com is an online career development center for senior business and non-profit executives. It is supported by executive membership fees; there is no charge to employers and recruiters to advertise executive jobs paying over $100,000/year. Services to executive members include facilities for online global introduction and networking, connection with leading (“Rites-Honored”) executive search firms, and resume posting in two searchable databases—one with access restricted to members of the leading search firms and the other open to the World Wide Web. Members receive twice-weekly career development publications. Job opportunities are published within the site and members are also directed to personally appropriate positions published elsewhere. Matrixed classifications To facilitate interaction, both executive members and job opportunities are classified in a Linnaeus-like manner using 57 industries and, within each, 17 functions. This method of finding exactly what is sought is at the core of RiteSite. In addition, the site also provides conventional keyword searching. Origin and background John Lucht, author of the Rites of Passage at $100,000+ executive career handbook, began identifying and publishing a list of retainer-compensated U.S. and Canadian executive search firms and included it in the initial (1986) edition of Rites of Passage. In 1995, RiteSite.com was founded to make that information downloadable. The automatic sending of resumes to the executive's choices among the top search firms (using the executive's own email address) was launched in 2004. The posting of executive jobs and identity-revealed and identity-concealed resumes was implemented in 2001. Computerized matching of executives for personal networking unhampered by geographic location began in 2003. External links Company web site Personal development
RiteSite
[ "Biology" ]
349
[ "Personal development", "Behavior", "Human behavior" ]
12,027,149
https://en.wikipedia.org/wiki/Fire%20cut
In the construction of masonry buildings, a fire cut or fireman's cut is a diagonal chamfer of the end of a joist or beam where it enters a masonry wall. If the joist burns through somewhere along its length, damage to the wall is prevented as the fire cut allows the joist to fail and still leave the masonry wall standing. Without fire-cut joists, a burnt joist that fails would rotate its unchamfered end as it deflects downwards, damaging the masonry wall at the connection point and possibly pulling the wall inwards. References Masonry Carpentry
Fire cut
[ "Engineering" ]
130
[ "Construction", "Civil engineering", "Civil engineering stubs", "Masonry" ]
12,027,484
https://en.wikipedia.org/wiki/Nuclease%20S1
Nuclease S1 () is an endonuclease enzyme that splits single-stranded DNA (ssDNA) and RNA into oligo- or mononucleotides. This enzyme catalyses the following chemical reaction Endonucleolytic cleavage to 5'-phosphomononucleotide and 5'-phosphooligonucleotide end-products Although its primary substrate is single-stranded, it can also occasionally introduce single-stranded breaks in double-stranded DNA or RNA, or DNA-RNA hybrids. The enzyme hydrolyses single-stranded regions in duplex DNA such as loops or gaps. It also cleaves a strand opposite a nick on the complementary strand. It has no sequence specificity. Well-known versions include S1, found in Aspergillus oryzae (yellow koji mold), and Nuclease P1, found in Penicillium citrinum. Members of the S1/P1 family are found in both prokaryotes and eukaryotes and are thought to be associated with programmed cell death and tissue differentiation. Furthermore, they are secreted extracellularly, that is, outside the cell. Their function and distinguishing features mean they have potential to be exploited in the field of biotechnology. Nomenclature Alternative names include endonuclease S1 (Aspergillus), single-stranded-nucleate endonuclease, deoxyribonuclease S1, Aspergillus nuclease S1, Neurospora crassa single-strand specific endonuclease, S1 nuclease, single-strand endodeoxyribonuclease, single-stranded DNA specific endonuclease, single-strand-specific endodeoxyribonuclease, single strand-specific DNase and Aspergillus oryzae S1 nuclease. Structure Most nucleases with EC 3.1.30.1 activity are homologous to each other in a protein domain family called Nuclease S1/P1. Members of this family, including P1 and S1, are glycoproteins with several distinguishing features: a requirement for three zinc ion cofactors, common active-site motifs, a requirement for an acidic pH for catalysis, three glycans bound to asparagine residues via N-glycosylation, and two disulphide bridges between cysteine residues. These requirements and distinguishing features are responsible for the enzyme's functional efficacy; all are needed for enzyme functionality. The three zinc ions are vital for catalysis. The first two zincs activate the attacking water in hydrolysis whilst the third zinc ion stabilizes the leaving oxyanion. Properties Aspergillus nuclease S1 is a monomeric protein with a molecular weight of 38 kilodaltons. It requires Zn2+ as a cofactor and is relatively stable against denaturing agents like urea, SDS, or formaldehyde. The optimum pH for its activity lies between 4 and 4.5. Aspergillus nuclease S1 is known to be inhibited somewhat by 50 μM ATP and nearly completely by 1 mM ATP. 50% inhibition has been shown at 85 μM dAMP and 1 μM dATP, whereas cAMP does not inhibit the enzyme. Mechanism This zinc-dependent nuclease protein domain produces 5' nucleotides and cleaves phosphate groups from 3' nucleotides. Additionally, the side chain and backbone of a tryptophan located in the active-site cavity support the action of one of the zinc ions. Such mechanisms are essential to the catalytic function of the enzyme. Uses Aspergillus nuclease S1 is used in the laboratory as a reagent in nuclease protection assays. In molecular biology, it is used for removing single-stranded tails from DNA molecules to create blunt-ended molecules and for opening hairpin loops generated during synthesis of double-stranded cDNA.
See also Mung bean nuclease, similar activity but probably not homologous; DNA/RNA non-specific endonuclease, non-homologous family with similar DNA/RNA activity but accepts double-stranded substrate better References Further reading Zinc enzymes EC 3.1.30 Nucleases Protein domains Protein families
Nuclease S1
[ "Biology" ]
946
[ "Protein families", "Protein domains", "Protein classification" ]
12,027,634
https://en.wikipedia.org/wiki/ASK%20Group
ASK Group, Inc., formerly ASK Computer Systems, Inc., was a producer of business and manufacturing software. It is best remembered for its Manman enterprise resource planning (ERP) software and for Sandra Kurtzig, the company's founder and one of the early female pioneers in the computer industry. At its peak, ASK had 91 offices in 15 countries before Computer Associates acquired the company in 1994. Beginning and growth (1972–1982) ASK was started in 1972 by Sandra Kurtzig in California. She left her job as a marketing specialist at General Electric and invested $2,000 of her savings to start the company in the apartment she shared with her HP salesman husband. At first, the firm built software for a variety of business applications. ASK was incorporated in 1974. In 1978, Kurtzig came up with ASK's most significant product, named Manman (originally "MaMa"), a contraction of manufacturing management. Manman was an ERP program that ran on Hewlett-Packard HP-3000 minicomputers. Manman helped manufacturing companies plan materials purchases, production schedules, and other administrative functions on a scale that was previously possible only on large, costly mainframe computers. Manman initially had a five-figure software price and was aimed at small and medium-sized manufacturers. Small companies desiring the least expensive implementation could use the software on a time-sharing contract. During the era when Manman was only running on HP-3000 systems, ASK would buy systems at a discount and resell them "with its programs for $125,000 to $300,000" as turnkey systems. The name ASK initially stood for Arie and Sandra Kurtzig, although Arie was not an employee. Somewhat later, with her husband working for Hewlett-Packard (HP) and the software subsequently marketed both for HP's computers and those sold by Digital Equipment Corporation (DEC), Kurtzig said that the "A" stood for Associates. Manman was an enormous success and quickly came to dominate the market for manufacturing systems and software. ASK's fortunes rose as a result. The corporation went public in 1981. Two years later, Sandra Kurtzig's personal stake in the firm was worth more than $40 million. Plateau (1983–1989) Software Dimensions: (March 1983 - June 1984) In March 1983, ASK made its first acquisition, purchasing a privately held software company named Software Dimensions, Inc., publisher of Accounting Plus, for $6 million. After acquiring Software Dimensions, Kurtzig renamed it ASK Micro and launched an aggressive marketing program. ASK over-hired and mismanaged the sales channel for the product, angering existing sellers and ballooning the cash burn rate for the company; the product faltered. In June 1984, Kurtzig announced that she was shutting down ASK Micro, at a cost of $1 million, and auctioning off the rights to Accounting Plus. ASK also failed at rescaling Manman to run on personal computers. Of the company's failings in the emerging personal computer market, Kurtzig told BusinessWeek, "We have our fingerprints all over the murder weapon" that killed Software Dimensions. ASK never truly found its footing in the microcomputer market, and struggled to keep its market share from being eroded by competitors who offered similar solutions on smaller platforms. Manman: lower prices, other declines (1984-1989) By the fall of 1984, ASK planned to offer a version of its original product, Manman, for about one-third of its previous price. 
Lower-priced minicomputers from Hewlett-Packard and Digital Equipment Corporation (DEC), the product's two hardware platforms, made this possible. The company hoped to protect its market share with smaller companies and emergent middle-range manufacturers. However, by 1985, ASK declined as its customers reduced expenditures. Exacerbating the problem, Kurtzig and her family also began selling off large blocks of their stock holdings in the company, which triggered a shareholder lawsuit. Kurtzig also backed away from ASK's day-to-day operations. In 1984, Kurtzig named Ronald W. Branniff president of the company, and in 1985 he took over her post of chief executive officer as well. Kurtzig attributed her declining interest in the business to family pressures, along with other factors. Divorced from her husband, Kurtzig devoted more time to raising her two sons, who were aged 12 and 9 at the time. Although the company remained profitable, ASK's earnings and sales declined in 1986, falling to $5.89 million on revenues of $76 million. ASK acquired NCA Corporation for $43 million in cash in 1987 which was a significant premium for a competitor that was beating them in two out of every three deals. Despite these small advances, ASK was losing ground to its competitors. In its research and development activities, ASK began to focus nearly all of its resources on upgrading and improving existing products instead of creating new ones. Salespeople had long been bedeviled with having to sell a primitive, conversational, scrolling user interface (not long afterwards, the problem was that although not everyone knew what a relational database was, everyone wanted one.) ASK had lost its entrepreneurial edge. In the meantime, Kurtzig had spent her time traveling, writing her autobiography, and investing in other technology companies, but this proved to be unfulfilling. In mid-1989 the ASK managing board approached Kurtzig and asked her to resume an active role in the company, and she accepted their invitation. Kurtzig spearheaded ASK's purchase of Data 3 Systems for $18.7 million, a privately owned competitor to ASK. In addition to this complementary expansion, Kurtzig began to revamp the way her old company had been run, shifting organization and priorities to new products. She changed such minor, but important, details as the quality of the food and beer at the company's Friday evening celebrations in an effort to reconnect upper level management with the company's employees. As part of this effort, Kurtzig instituted 360 degree reviews (where employees review bosses), hired entrepreneurial managers, spearheaded product entry into IBM and Sun Microsystems platforms, and opened international offices in Europe and Asia. The improvements resulted in 1989's earnings of $13.5 million. Decline and sale (1990–1994) In 1990, ASK purchased the Ingres Corporation, a declining software company that developed the database management system called Ingres. The deal called for 30 percent of ASK to be sold to Hewlett-Packard and Electronic Data Systems (EDS) for a total of $60 million, which in turn enabled ASK to pay $110 million for Ingres. ASK's stockholders complained about this strange multi-way financing move. Shareholder James Lennane, who held ten percent of the company's shares, announced he would try to oust the company's board of directors at the next shareholders' meeting. Despite this, Kurtzig's deal proceeded as planned. 
ASK already made use of Ingres software in its own work, linking the accounting and manufacturing departments of its clients to its own database. Hewlett-Packard made the hardware upon which much of ASK's software ran, and ASK resold Hewlett-Packard products as part of its software packages. Both Hewlett-Packard and EDS had strong histories of involvement with manufacturing businesses, and this heritage promised to open more potential markets for ASK. Although this seemed like good news, ASK had mediocre results over the next several quarters, due to a lull in business while the company tried to bring new products to market. With its new purchases, ASK had moved beyond its original scope to become a much larger, global, diversified company. The unified ASK and Ingres group had yearly revenues of $400 million. In the early 1990s, ASK concentrated on the development and introduction of new products designed to provide communication between different computer systems and programs. In 1992 the company introduced Manman/X, an update of its flagship product. Manman/X was built on the code base of a product called Triton 2.2d, from a little-known Dutch company called Baan. ASK had acquired the rights to the code base and distribution in the 1990s. In 1992 ASK was restructured to better reflect the nature of its operations. The company was renamed ASK Group, Inc., and comprised three business units — ASK Computer Systems, Data 3, and Ingres. With the merger of ASK and Ingres completed, Kurtzig replaced herself as CEO in 1991, but remained non-executive chairman until 1992. Although ASK appeared to be on solid footing to face the computer industry's challenging, competitive environment, its fortunes continued to decline. ASK's annual revenues reached nearly $1 billion before the company was acquired by Computer Associates in 1994. ManMan product family Manman was a family of enterprise resource planning (ERP) software marketed for Hewlett-Packard HP-3000 and Digital Equipment Corporation (DEC) minicomputers. Its vendor, ASK Group, founded by Sandra Kurtzig, sold this software from 91 offices in 15 countries at the company's peak. By 1994 annual sales reached nearly $1 billion, and the company was acquired by Computer Associates (CA); both the software and CA subsequently declined. The product family's name, Manman, was "short for manufacturing management." Its components included: Manman/AP: an accounts payable program. Since both HP and DEC's computers were time-sharing systems, the entry of data was done interactively. Vendor names and supplier payables could be viewed and, if necessary, revised from a computer terminal. Manman/MFG: to help plan and track the manufacturing process. Manman/OMAR: order management/AR. Orders were tracked by this software "until payment is received." Manman/GL: general ledger Some of the ideas for these application programs came from founder Kurtzig's exposure to several areas within "General Electric, known to be synonymous with a well-run manufacturing operation." Modules for payroll, budgeting and other analysis were also sold by ASK. During Manman's early era when it was only running on HP-3000 systems, ASK would buy systems at a discount and resell them "with its programs for $125,000 to $300,000" as turnkey systems. In the mid-1980s, HP's hardware required direct connection of the terminals to the minicomputer. As a result, there was a limit to the number of terminals that could be installed. 
Digital Equipment added DECnet to their systems, allowing more terminals to be connected to their computers. ASK quickly created a version of their software that ran on the Digital hardware. This allowed larger companies to switch hardware platforms while continuing to use the MANMAN/Mfg software suite. Seagate Technology's corporate headquarters and four disk drive production facilities required more user connections than the HP systems would allow. As a result of HP's system limitations, Seagate Technology was one of the first companies to take advantage of the Digital Equipment VAX platform. Seagate became one of over 750 companies to use the MANMAN suite on VAX hardware. HP soon added networking. In the end, over 2,000 companies used MANMAN/MFG on HP hardware systems. References Defunct companies based in California CA Technologies History of software Software companies established in 1972 Software companies disestablished in 1994 1994 disestablishments in California
ASK Group
[ "Technology" ]
2,366
[ "History of software", "History of computing" ]
12,027,750
https://en.wikipedia.org/wiki/Harpic
Harpic is the brand name of a toilet cleaner launched in the United Kingdom in 1932 by Reckitt and Sons (now Reckitt). It is currently available in Africa, the Middle East, South Asia, the Asia-Pacific, Europe, and the Americas. The toilet cleaning products marketed under the brand name include liquids, tablets, wipes, brush systems, toilet rim blocks, and in-cistern blocks. It contains hydrochloric acid (10%) as the active ingredient, along with butyl oleylamine and other ingredients, in an aqueous solution. History The original toilet cleaner was invented by Harry Pickup (hence the origin of the name Harpic), who was based in Roscoe Street, Scarborough, in North Yorkshire. He also invented Oxypic, which was a sealant used in cast iron heating systems, and patented the Lock & Lift circular manhole covers, which were used initially by the British military. The company also produced the steel components used on the Mulberry harbours during the D-Day landings. Advertising UK advertisements from the 1930s onwards used the slogan Cleans Round The Bend (for that reason, the name is occasionally used as slang for crazy – George Macdonald Fraser uses that sense in his autobiographical "Quartered Safe Out Here" when talking about an idiosyncratic British officer commanding an irregular unit). The 2008 Harpic advertisement, Send for the Experts, featured Tom Reynolds. Ingredients Ingredients: Hydrochloric acid, Hydroxyethyl oleylamine, Cetyltrimethylammonium bromide, Ammonium chloride, Methyl salicylate, Butylated hydroxytoluene, Acid Blue 25, Acid red 88, Deionized Water. Products The Harpic product range was: Harpic Bathroom Cleaner Harpic Power Plus Harpic Max Rim Block Harpic Hygienic Harpic All in 1 Harpic Active Fresh Toilet Cleaner Harpic 100% Limescale Remover Toilet Cleaner Harpic Flushmatic Harpic 10X Max Clean (in Malaysia and other regions) References External links Reckitt Benckiser Product Information Website Cleaning product brands Cleaning product components Cleaning products Reckitt brands British inventions Companies based in Scarborough, North Yorkshire
Harpic
[ "Chemistry", "Technology" ]
454
[ "Cleaning products", "Components", "Cleaning product components", "Products of chemical industry" ]
12,028,764
https://en.wikipedia.org/wiki/Anamorphosis%20%28biology%29
Anamorphosis or anamorphogenesis is the process of postembryonic development and moulting in Arthropoda that results in the addition of abdominal body segments, even after sexual maturity. Examples of this mode of development occur in proturans and millipedes. Protura hatch with only 8 abdominal segments and add the remaining 3 in subsequent moults. These new segments arise behind the last abdominal segment, but in front of the telson. In myriapods, euanamorphosis is when the addition of new segments continues during each moult, without there being a fixed number of segments for the adult, teloanamorphosis is when the moulting ceases once the adult has reached a fixed number of segments, and hemianamorphosis is when a fixed number of segments is reached, after which moulting continues with segments only growing in size, not number. References Developmental biology Arthropod morphology
Anamorphosis (biology)
[ "Biology" ]
199
[ "Behavior", "Developmental biology", "Reproduction" ]
12,031,361
https://en.wikipedia.org/wiki/Walkability
In urban planning, walkability is the accessibility of amenities by foot. It is based on the idea that urban spaces should be more than just transport corridors designed for maximum vehicle throughput. Instead, it should be relatively complete livable spaces that serve a variety of uses, users, and transportation modes and reduce the need for cars for travel. The term "walkability" was primarily invented in the 1960s due to Jane Jacobs' revolution in urban studies. In recent years, walkability has become popular because of its health, economic, and environmental benefits. It is an essential concept of sustainable urban design. Factors influencing walkability include the presence or absence and quality of footpaths, sidewalks or other pedestrian rights-of-way, traffic and road conditions, land use patterns, building accessibility, and safety, among others. Factors One proposed definition for walkability is: "The extent to which the built environment is friendly to the presence of people living, shopping, visiting, enjoying or spending time in an area". A study attempted to comprehensively and objectively measure subjective qualities of the urban street environment. Using ratings from an expert panel, it was possible to measure five urban design qualities in terms of physical characteristics of streets and their edges: imageability, enclosure, human scale, transparency and complexity. Walkability relies on the interdependencies between density, mix, and access in synergy. The urban DMA (Density, Mix, Access) is a set of synergies between the ways cities concentrate people and buildings, how they mix different people and activities, and the access networks used to navigate through them. These factors cannot be taken singularly. Rather than an ideal functional mix, there is a mix of mixes and interdependencies between formal, social, and functional mixes. Likewise, walk-able access cannot be reduced to any singular measure of connectivity, permeability, or catchment but is dependent on destinations and geared to metropolitan access through public transit nodes. While DMA is based on walkability measures, popular "walk score" or "rate my street" websites offer more metrics to connect urban morphology with better environmental and health outcomes. Density Density is an interrelated assemblage of buildings, populations, and street life. It is a crucial property of walkability because it concentrates more people and places within walkable distances. There is difficulty determining density due to populations oscillating from the suburbs to the urban center. Moreover, measures of density can differ dramatically for different morphologies and building typologies. Density may be conflated with building height, contributing to the confusion. The ratio between the floor area and the site area is generally known as the Floor Area Ratio (FAR, also called Plot Ratio and Floor Space Index). For example, a ten-story building on 10% of the site has the same floor area as a single-story building with 100% site coverage. Secondly, the measure of dwellings/hectare is common but particularly blunt. It depends on the functional mix, household size, and dwelling size in relation to building or population densities. Larger houses will produce higher building densities for the same population, and larger households will lead to higher populations for the same number of dwellings. 
In functionally mixed neighborhoods, housing will be just one component of the mix and therefore not a measure of building or population density. The census-based density of residents/hectare is another common measure, but it does not include those who work there. Functional mix When each neighbourhood has a mixture of homes, schools, work and other places people want to visit, the distances between these places are shortened. This makes it more attractive for people to walk. The idea of a functional mix contrasts with the early 20th century modernist vision, which was that each zone in a city should have a single function. This mix is sometimes visualised with the "home, work, visit" triangle. The extremes of the triangle represent zones where one can only work, or visit, or live. A walkable city has few of these zones. Instead, there are places where one can combine at least two of the three functions. When a town or city has smaller plot sizes, it is easier to create a multi-functional neighbourhood. Access networks The access networks of a city enable and constrain pedestrian flows; they represent the capacity or possibility to walk. Like density and mix, these are properties embodied in urban form and facilitate more efficient pedestrian flows. Access networks are also multi-modal and need to be understood from the perspective of those who choose between modes of walking, cycling, public transport, and cars. Public transport trips are generally coupled with walkable access to the transit stop. Walking will primarily be chosen for trips of up to 10 minutes if it is the fastest mode and other factors are equal. Walking has the advantage of a much more predictable trip time than public transport or cars, which must allow for delays caused by poor service, congestion, and parking. Major infrastructural factors include access to mass transit, presence and quality of footpaths, buffers to moving traffic (planter strips, on-street parking or bike lanes) and pedestrian crossings, aesthetics, nearby local destinations, air quality, shade or sun in appropriate seasons, street furniture, traffic volume and speed, and wind conditions. Walkability is also examined based on the surrounding built environment. Reid Ewing and Robert Cervero's five D's of the built environment—density, diversity, design, destination accessibility, and distance to transit—heavily influence an area's walkability. Combinations of these factors influence an individual's decision to walk. History Before cars and bicycles were mass-produced, walking was the main way to travel. It was the only way to get from place to place for much of human history. In the 1920s, economic growth led to increased automobile manufacturing. Cars were also becoming more affordable, leading to the rise of the automobile during the Post–World War II economic expansion. Jane Jacobs' classic book The Death and Life of Great American Cities remains one of the most influential books in the history of American city planning, especially concerning the future developments of the walkability concept. She coined the terms "social capital", "mixed primary uses", and "eyes on the street", which were adopted professionally in urban design, sociology, and many other fields. While there has been a push towards better walkability in cities in recent years, there are still many obstacles that need to be cleared to achieve more complete and cohesive communities where residents won't have to travel as far to get to where they need to go. 
For example, the average time it has taken American commuters to get to work has actually increased, from 25 minutes in 2006 to 27.6 minutes in 2019, so much remains to be done if walkability is to be realized and a lessened reliance on cars is to come to fruition. Benefits Health Walkability indices have been found to correlate with both lower Body Mass Index (BMI) and higher levels of physical activity in local populations. Physical activity can prevent chronic diseases, such as cardiovascular disease, diabetes, hypertension, obesity, depression, and osteoporosis. Thus, for instance, an increase in neighborhood Walk Score has been linked with both better cardiometabolic risk profiles and a decreased risk of heart attacks. The World Cancer Research Fund and American Institute for Cancer Research released a report recommending that new developments be designed to encourage walking, on the grounds that walking contributes to a reduction in cancer. A further justification for walkability is founded upon evolutionary and philosophical grounds, contending that gait is important to cerebral development in humans. In addition, walkable neighborhoods have been linked to higher levels of happiness, health, trust, and social connections in comparison with more car-oriented places. Less walkable environments, in contrast, are associated with higher BMIs and higher rates of obesity. This is particularly true for the more car-dependent environments of US suburban sprawl. Compared to walking and biking, driving as a commuting option is associated with higher levels of obesity. There are well-established links between the design of an urban area (including its walkability and land use policy) and health outcomes for that community. Socioeconomic Walkability has also been found to have many socioeconomic benefits, including accessibility, cost savings both to individuals and to the public, student transport (which can include walking buses), increased efficiency of land use, increased livability, economic benefits from improved public health, and economic development, among others. The benefits of walkability are best guaranteed if the entire system of public corridors is walkable - not limited to certain specialized routes. More sidewalks and increased walkability can promote tourism and increase property value. In recent years, the demand for housing in a walkable urban context has increased. The term "Missing Middle Housing", as coined by Daniel Parolek of Opticos Design, Inc., refers to multi-unit housing types (such as duplexes, fourplexes, bungalow courts, and mansion apartments not bigger than a large house), which are integrated throughout most walkable pre-1940s neighborhoods but became much less common after World War II, hence the term "missing." These housing types are often integrated into blocks with primarily single-family homes, to provide diverse housing choices and generate enough density to support transit and locally-serving commercial amenities. Auto-focused street design diminishes walking and the needed "eyes on the street" provided by the steady presence of people in an area. Walkability increases social interaction, the mixing of populations, and the average number of friends and associates where people live; it also reduces crime (with more people walking and watching over neighborhoods, open space and main streets) and increases sense of pride and volunteerism. Socioeconomic factors contribute to willingness to choose walking over driving. 
Income, age, race, ethnicity, education, household status, and having children in a household all influence walking travel. Environmental One of the benefits of improving walkability is a decrease in the automobile footprint of the community. Carbon emissions can be reduced if more people choose to walk rather than drive or use public transportation, so proponents of walkable cities describe improving walkability as an important tool for adapting cities to climate change. The benefits of lower emissions include improved health conditions and quality of life, less smog, and less of a contribution to global climate change. Further, cities that developed under guiding philosophies like walkability typically see lower levels of noise pollution in their neighborhoods. This goes beyond just making communities quieter to live in; less noise pollution can also mean greater biodiversity. Studies have shown that noise pollution can disrupt certain senses that animals rely on to find food, reproduce, and avoid predators, which can weaken ecosystems in an already human-dominated environment. Society depends on these ecosystems for many ecological services, such as provisioning, regulating, cultural/tourism, and supporting services, and any degradation of these services goes beyond affecting the aesthetics of a neighborhood or community; it can have serious implications for the livability and wellbeing of entire regions. Cities that have a relatively high walkability score also tend to have a higher concentration of green spaces, which facilitate a more walkable city. These green spaces can assist with regulating ecological services such as flood control, improving the quality of both air and water, and carbon sequestration, all while also improving the attractiveness of the city or town in which they are located. Increasing walkability Many communities have embraced pedestrian mobility as an alternative to older building practices that favor automobiles. This shift includes a belief that dependency on cars is ecologically unsustainable. Automobile-oriented environments engender dangerous conditions for motorists and pedestrians and are generally bereft of aesthetics. A type of zoning called form-based coding is a tool that some American cities, like Cincinnati, are employing to improve walkability. The COVID-19 pandemic gave birth to proposals for radical change in the organization of the city, in particular in Barcelona with the publication of the Manifesto for the Reorganisation of the City – written by architecture theorist Massimo Paolini – in which the elimination of the car and the consequent pedestrianization of the whole city is one of the critical elements, as well as the proposed inversion of the concept of the sidewalk. There are several ways to make a community more walkable: Buffers: Vegetation buffers, such as grass areas between the street and the sidewalk, make sidewalks safer, absorb carbon dioxide from automobile emissions, and assist with water drainage. Moving obstructions: Removing signposts and utility poles can increase the walkable width of the sidewalk. Quality maintenance and proper sidewalk lighting reduce obstructions, improve safety, and encourage walking. Sidewalk gaps: Sidewalks can be implemented where there are "sidewalk gaps," with priority given to areas where walking is encouraged, such as around schools or transit stations. Campaigns such as Atlanta, Georgia's safe transit routes provide safer access to transit stops for pedestrians. 
There are several aspects to consider when implementing new sidewalks, such as sidewalk width. The Americans with Disabilities Act (ADA) requires that sidewalks be at least five feet in width. Pedestrian zone: New infrastructure and pedestrian zones replace roads for better walkability. Cities undertake pedestrian projects for better traffic flow by closing automobile access and only allowing pedestrians to travel. Projects such as the High Line and the 606 Trail increase walkability by connecting neighborhoods, using landscape architectural elements to create visually aesthetic green space and allowing for physical activity. Towns can also be modified to be pedestrian villages. Curb extensions: Curb extensions decrease the radii of the corners of the curb at intersections, calm traffic, and reduce the distance pedestrians have to cross. On streets with parking, curb extensions allow pedestrians to see oncoming traffic better where they otherwise would be forced to walk into the street to see past parked cars. Striped crosswalks, or zebra crossings, also provide safer crossings because they provide better visibility for both drivers and pedestrians. Improving crosswalk safety also increases walkability. Improving safety: Monitoring and improving safety in neighborhoods can make walking a more attractive option. Safety is the primary concern among children when choosing how to get to and from school. Ensuring safer walking areas by keeping paths well-maintained and well-lit can encourage walkability. Work from home: working from home completely eliminates any travel time associated with work and allows for people to use the time spent commuting, an average of 27.6 minutes in America. An increase in people working from home in recent years after the COVID 19 pandemic not only has cut down on fossil fuels burned, but also has other benefits like improving productivity. Improving destinations: Create a destination within walking distance of every home where people can partake in indoor and outdoor games, sports, dance, food, etc. Although exclusive to children, these destinations sometimes exist in the form of schools. Measuring One way of assessing and measuring walkability is to undertake a walking audit. An established and widely used walking audit tool is PERS (Pedestrian Environment Review System) which has been used extensively in the UK. A simple way to determine the walkability of a block, corridor or neighborhood is to count the number of people walking, lingering and engaging in optional activities within a space. This process is a vast improvement upon pedestrian level of service (LOS) indicators, recommended within the Highway Capacity Manual. However it may not translate well to non-Western locations where the idea of "optional" activities may be different. In any case, the diversity of people, and especially the presence of children, seniors and people with disabilities, denotes the quality, completeness and health of a walkable space. A number of commercial walkability scores also exist: Walk Score is a company that creates a walkability index based on the distance to amenities such as grocery stores, schools, parks, libraries, restaurants, and coffee shops. Walk Score's algorithm awards maximum points to amenities within 5 minutes' walk (.25 mi), and a decay function assigns points for amenities up to 30 minutes away. Scores are normalized from 0 to 100. Walkonomics was a web app that combines open data and crowdsourcing to rate and review the walkability of each street. 
As of 2011, Walkonomics claimed to have ratings for every street in England (over 600,000 streets) and New York City., although it stopped service in 2018. RateMyStreet is a website that uses crowdsourcing, Google Maps and a five star rating system to allow users to rate the walkability of their local streets. Users can rate a street using eight different categories: Crossing the street, pavement/sidewalk width, trip hazards, wayfinding, safety from crime, road safety, cleanliness/attractiveness, and disabled peoples' access. Mapping A newly developing concept is the transit time map (sometimes called a transit shed map), which is a type of isochrone map. These are maps (often online and interactive) that display the areas of a metropolis which can be reached from a given starting point, in a given amount of travel time. Such maps are useful for evaluating how well-connected a given address is to other possible urban destinations, or conversely, how large a territory can quickly get to a given address. The calculation of transit time maps is computationally intensive, and considerable work is being done on more efficient algorithms for quickly producing such maps. To be useful, the production of a transit time map must take into consideration detailed transit schedules, service frequency, time of day, and day of week. Moreover, the recent development of computer vision and street view imagery has provided significant potential to automatically assess spaces for pedestrians from the ground level. See also Alley Forced rider Form-based code Free-range parenting Greening International charter for walking Jaywalking Missing Middle Housing New Urbanism Non-motorist Permeability (spatial and transport planning) Street reclamation Trail ethics Walking Walking audit Walking bus Walking distance measure Walking tour References Further reading Dovey, Kim & Pafka, Elek (2019). "What is walkability? The urban DMA", Urban Studies. Leyden, Kevin M. (2003). "Social Capital and the Built Environment: The Importance of Walkable Neighborhoods." American Journal of Public Health. Volume 93: 1546-1551 Speck, Jeff. (2012). Walkable City: How Downtown Can Save America, One Step at a Time. Macmillan. External links levelofservice.com, Walkability tools research and walking level of service calculator. walkscore.com, an online tool that maps walk-scores, a walkability index that is based on a number of measurable variables. Urban studies and planning terminology Sustainable transport Pedestrian infrastructure
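The scoring approach described above for Walk Score can be illustrated with a small sketch. The real algorithm and its amenity weights are proprietary, so the decay shape, thresholds, and function names below are illustrative assumptions only, not the actual method (Python):
# Illustrative only: full points for amenities within about a 5-minute walk,
# decaying linearly to zero at 30 minutes, then normalised to 0-100.
# The real Walk Score algorithm and its amenity weights are proprietary;
# every number and name here is invented for the example.
def amenity_points(walk_minutes, full_credit=5, cutoff=30):
    if walk_minutes <= full_credit:
        return 1.0
    if walk_minutes >= cutoff:
        return 0.0
    return (cutoff - walk_minutes) / (cutoff - full_credit)

def walkability_score(amenity_walk_times):
    """Average the per-amenity points and scale to a 0-100 score."""
    if not amenity_walk_times:
        return 0.0
    total = sum(amenity_points(t) for t in amenity_walk_times)
    return round(100 * total / len(amenity_walk_times), 1)

print(walkability_score([3, 8, 12, 35]))  # prints 65.0 for this example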
Walkability
[ "Physics" ]
3,819
[ "Physical systems", "Transport", "Sustainable transport" ]
12,032,057
https://en.wikipedia.org/wiki/LG%20G1500
The LG G1500 or G1500 is a GSM mobile phone made by LG Electronics with a monochrome LCD display. It supports GPRS, which is notable because handsets in its category rarely included a GPRS feature.
LG G1500
[ "Technology" ]
55
[ "Mobile technology stubs", "Mobile phone stubs" ]
12,033,035
https://en.wikipedia.org/wiki/Mentha%20%C3%97%20gracilis
Mentha × gracilis (syn. Mentha × gentilis L.; syn. Mentha cardiaca (S.F. Gray) Bak.) is a hybrid mint species within the genus Mentha, a sterile hybrid between Mentha arvensis (cornmint) and Mentha spicata (native spearmint). It is cultivated for its essential oil, used to flavour spearmint chewing gum. It is known by the common names of gingermint, redmint and Scotchmint in Europe, and as Scotch spearmint in North America. History Gingermint is a naturally occurring hybrid indigenous throughout the overlapping native regions of cornmint and spearmint in Europe and Asia. It was first introduced to North America by a gardener in Wisconsin in 1908; due to the Scottish origin of the variety and its similarity in flavour to spearmint, it is known there as Scotch spearmint. From Wisconsin it spread as a crop throughout the US Midwest and later to the Pacific Northwest states of Washington, Oregon, and Idaho, where the majority of global Scotch spearmint production is now concentrated. In 1990 it was brought from the Pacific Northwest to southern Alberta and Saskatchewan in Canada, which has become the second largest production region supplying about 25% of the North American market. Cultivation As a sterile hybrid gingermint does not set seed; it is instead propagated by plant cuttings from the runners of healthy mints. It is most commonly cultivated for steam distillation of its essential oil. Production is concentrated in North America north of the 41st parallel; below the 40th parallel north summer day lengths are insufficiently long to produce quality essential oil. In the Pacific northwest Scotch spearmint oil, along with native spearmint oil, is protected by a marketing board for farmers. In 2000, 89.4 tonnes of spearmint oil were produced in the US Midwest, 420.9 tonnes in the Pacific northwest, and 167.9 tonnes in Canada. An additional 10 tonnes were produced in India and limited quantities were produced in France and Argentina. Diseases Verticillium wilt is a major constraint in Mentha × gracilis cultivation. Uses Since the smell of ginger mint wards off rats and mice, it is placed in granaries to keep pests away from the grain. In North America, Scotch spearmint essential oil is used in flavouring chewing gum and candies. It is used as the traditional flavouring of Scotch mint candies. In Vietnamese cuisine the fresh herb is used as a flavouring in chicken or beef pho. As a medicinal herb it is used to treat fevers, headaches, and digestive ailments. References gracilis Hybrid plants
Mentha × gracilis
[ "Biology" ]
558
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
14,704,581
https://en.wikipedia.org/wiki/Oncoantigen
An oncoantigen is a surface or soluble tumor antigen that supports tumor growth. A major problem of cancer immunotherapy is the selection of tumor cell variants that escape immune recognition. The notion of oncoantigen was set forth in the context of cancer immunoprevention to define a class of persistent tumor antigens not prone to escape from immune recognition. Features of oncoantigens Extracellular localization Localization of oncoantigens outside tumor cells allows recognition by antibodies if downregulation of class I major histocompatibility complex (MHC-I) molecules prevents T cell recognition. Most tumor antigens are intracellular proteins. Circulating antibodies do not penetrate inside cells, hence intracellular proteins are only recognized by T cells as MHC-I-bound antigenic peptides exposed on the surface of tumor cells. However downmodulation or complete loss of MHC-I expression occurs in most human tumors, making them altogether invisible to the immune system of the host. When tumor cells downregulate MHC-I, only antigens expressed on the cell surface and/or secreted in the extracellular fluids can be recognized by antibodies. Support of the neoplastic phenotype Loss of oncoantigen expression is unlikely, because oncoantigens support tumor growth. Loss of tumor antigen expression is another cause of escape from immune recognition. This occurs because most tumor antigens are not essential for tumor growth. Hence loss of expression does not decrease the fitness of cancer cells. In contrast, downmodulation of molecules like oncogene products, which are essential for tumor growth, would impair tumor cells. The complete dependence (also called "addiction") of tumor growth from a given gene product can cease if further genetic alterations occur that activate alternative signaling pathways. Thus, the persistence of oncoantigens is not an absolute property, but rather a feature of specific stages of tumor development. Identification of oncoantigens The prototypic oncoantigen is HER2/neu, a membrane tyrosine kinase similar to the epidermal growth factor receptor (EGFR, or HER-1), expressed in about one-fourth of breast cancers. Vaccines against HER2/neu were shown to prevent mammary carcinoma in HER2/neu transgenic mice and are being tested for cancer therapy in humans. Monoclonal antibodies against HER-2 (e.g. trastuzumab) are approved for therapy of human breast cancer. Other molecules fulfilling the definition of oncoantigen are EGFR/HER-1, the mucin MUC1 and the idiotype of B and T cell malignancies. Further candidates are receptor tyrosine kinases and growth factors, but in most cases the induction of effective anti-tumor immune responses against such molecules remains to be demonstrated. Most tumor antigens are not oncoantigens, either because they are intracellular molecules, like cancer-testis antigen such as MAGE family members, or because they appear to be dispensable without significant alterations of tumorigenicity, like the carcinoembryonic antigen (CEA) or the prostate specific antigen (PSA). Novel strategies will be required to identify new oncoantigens amenable to human application. Applications of oncoantigens Prevention of mouse mammary carcinoma with vaccines against HER2/neu led to the development of the oncoantigen concept, thanks to the addiction of transgenic tumors to HER-2 expression and to the fundamental role of vaccine-induced anti-HER-2 antibodies in the arrest of tumor development. 
Oncoantigens are thought to be the ideal target for immunologic prevention of cancer in individuals at risk, because the continuous generation of precancerous or early cancerous cells might easily lead over time to the emergence of antigen- or MHC-loss escape variants. As escape variants are a major cause of failure also in cancer immunotherapy, it is likely that targeting oncoantigens with vaccines or antibodies will have a stronger clinical impact than attempts at targeting other tumor antigens. The problem so far in using vaccines in oncoantigen research is that the vaccines are typically not long lasting. This is because of the heterogeneous nature of cancer cells. Vaccinations may help the immune system locate certain oncoantigens such as MET, RET, CD20 and CD22. However; cells that evade the immune system begin to populate and thus cause the growth of a more resistant tumor. There is a use of oncoantigens as markers for faster diagnosis of cancer. The oncoantigens presented from cancerous cells can be used in genomics as biomarkers. This can help in faster and easier diagnosis of cancer. Oncoantigens may have a use in cancer research in the future through such advances. References Carcinogenesis Immunology
Oncoantigen
[ "Biology" ]
1,031
[ "Immunology" ]
14,704,760
https://en.wikipedia.org/wiki/EN%2062262
The European Standard EN 62262 — the equivalent of international standard IEC 62262 (2002) — relates to IK (impact protection) ratings. This is an international numeric classification for the degrees of protection provided by enclosures for electrical equipment against external mechanical impacts. It provides a means of specifying the capacity of an enclosure to protect its contents from external impacts. The IK Code was originally defined in European Standard BS EN 50102 (1995, amended 1998). Following its adoption as an international standard in 2002, the European standard was renumbered EN 62262. Before the advent of the IK code, a third numeral had been occasionally added to the closely related IP Code on ingress protection, to indicate the level of impact protection — e.g. IP66(9). Nonstandard use of this system was one of the factors leading to the development of this standard, which uses a separate two-numeral code to distinguish it from the old differing systems. The standard came into effect in October 1995 and conflicting national standards had to be withdrawn by April 1997. IK ratings help to classify products by their resistance to impacts of a given kinetic energy, while EN 62262 specifies the way enclosures should be mounted when tests are carried out, the atmospheric conditions that should prevail, the number of impacts (5) and their (even) distribution, and the size, style, material, dimensions etc. of the various types of hammer designed to produce the energy levels required. In 2021, an additional code, IK11, representing an impact energy of 50 joules, was added. (Notes to the rating table: * not protected according to the standard; HR 100 Rockwell hardness according to ISO 2039/2; Fe 490-2 according to ISO 1052, Rockwell hardness HR 50 to HR 58 according to ISO 6508.) References External links BS EN 62262:2002 IEC 62262 Ed. 1.0 b:2002 DIN IEC 50102:1999 Electrical safety 62262 Hardness tests
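As a rough illustration of how the IK classification is used in practice, the sketch below maps each code to an impact energy in joules as commonly tabulated for IEC/EN 62262 (including the IK11 level mentioned above); the values and the helper function are given for illustration only and should be checked against the standard itself (Python):
# Impact energy (joules) per IK code, as commonly tabulated for IEC/EN 62262.
# IK00 carries no rated protection; values are quoted for illustration only
# and should be verified against the standard text.
IK_ENERGY_J = {
    "IK00": None, "IK01": 0.14, "IK02": 0.2, "IK03": 0.35, "IK04": 0.5,
    "IK05": 0.7, "IK06": 1.0, "IK07": 2.0, "IK08": 5.0, "IK09": 10.0,
    "IK10": 20.0, "IK11": 50.0,  # IK11 (50 J) was added in 2021
}

def minimum_ik(required_energy_j):
    """Return the lowest IK code whose rated impact energy meets the requirement."""
    for code, energy in sorted(IK_ENERGY_J.items()):
        if energy is not None and energy >= required_energy_j:
            return code
    return None

print(minimum_ik(5))    # IK08
print(minimum_ik(30))   # IK11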
EN 62262
[ "Materials_science" ]
396
[ "Hardness tests", "Materials testing" ]
14,704,878
https://en.wikipedia.org/wiki/Table%20of%20permselectivity%20for%20different%20substances
This is a table of permselectivity for different substances in the glomerulus of the kidney in renal filtration. References Physiology
Table of permselectivity for different substances
[ "Biology" ]
32
[ "Physiology" ]
14,705,292
https://en.wikipedia.org/wiki/Floyd%27s%20triangle
Floyd's triangle is a triangular array of natural numbers used in computer science education. It is named after Robert Floyd. It is defined by filling the rows of the triangle with consecutive numbers, starting with a 1 in the top left corner, so its first rows are 1; 2 3; 4 5 6; 7 8 9 10; 11 12 13 14 15. The problem of writing a computer program to produce this triangle has been frequently used as an exercise or example for beginning computer programmers, covering the concepts of text formatting and simple loop constructs. Properties The numbers along the left edge of the triangle are the lazy caterer's sequence and the numbers along the right edge are the triangular numbers. The nth row sums to n(n² + 1)/2, the magic constant of an n × n magic square. Summing up the row sums in Floyd's triangle reveals the doubly triangular numbers, triangular numbers with an index that is triangular: 1 = 1 = T(T(1)); 1 + (2 + 3) = 6 = T(T(2)); 1 + (2 + 3) + (4 + 5 + 6) = 21 = T(T(3)). Each number in the triangle is smaller than the number below it by the index of its row. See also Pascal's triangle References External links Floyd's triangle at Rosetta code Triangles of numbers Computer programming Computer science education
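As a concrete example of the programming exercise described above, a short loop is enough to print the triangle; the sketch below is in Python, though the article does not tie the exercise to any particular language:
def floyds_triangle(rows):
    """Print Floyd's triangle: consecutive numbers, one more entry per row."""
    value = 1
    width = len(str(rows * (rows + 1) // 2))  # width of the largest entry, for alignment
    for row in range(1, rows + 1):
        entries = []
        for _ in range(row):
            entries.append(str(value).rjust(width))
            value += 1
        print(" ".join(entries))

floyds_triangle(5)
# Output:
#  1
#  2  3
#  4  5  6
#  7  8  9 10
# 11 12 13 14 15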
Floyd's triangle
[ "Mathematics", "Technology", "Engineering" ]
248
[ "Computer programming", "Combinatorics", "Software engineering", "Computer science education", "Computer science", "Triangles of numbers", "Computers" ]
14,705,432
https://en.wikipedia.org/wiki/Packet%20concatenation
Packet concatenation is a computer networking optimization that coalesces multiple packets under a single header. The use of packet concatenation reduces the overhead at the physical and link layers. See also Frame aggregation Packet aggregation References Computer networking Packets (information technology)
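The overhead saving can be made concrete with a toy calculation; the framing format below (an 8-byte header and a 2-byte per-payload length prefix) is invented for illustration and does not correspond to any real protocol (Python):
# Toy illustration of packet concatenation: many small payloads share a single
# header instead of each carrying its own. Header and length-prefix sizes are
# made up for the example, not taken from any real protocol.
HEADER_BYTES = 8

def bytes_separate(payloads):
    """Bytes on the wire when every payload is sent in its own packet."""
    return sum(HEADER_BYTES + len(p) for p in payloads)

def bytes_concatenated(payloads):
    """Bytes on the wire when payloads are coalesced under one header.
    A 2-byte length prefix per payload lets the receiver split them apart."""
    return HEADER_BYTES + sum(2 + len(p) for p in payloads)

payloads = [b"x" * 40 for _ in range(10)]   # ten 40-byte payloads
print(bytes_separate(payloads))      # 480
print(bytes_concatenated(payloads))  # 428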
Packet concatenation
[ "Technology", "Engineering" ]
51
[ "Computer networking", "Computer engineering", "Computer network stubs", "Computer science", "Computing stubs" ]
14,705,454
https://en.wikipedia.org/wiki/3C%20212
3C 212 is a quasar located in the constellation Cancer. At redshift 1.048, it is one of the more luminous and distant AGNs (active galactic nuclei) observed. 3C 212 is classified as radio-loud and is considered to be a prototype "red quasar", with a faint optical counterpart. In addition, it is said to contain an X-ray absorber. References External links Detailed CCD image of 3C 212 based on 60 min total exposure Quasars 212 Cancer (constellation)
3C 212
[ "Astronomy" ]
111
[ "Cancer (constellation)", "Galaxy stubs", "Astronomy stubs", "Constellations" ]
14,705,543
https://en.wikipedia.org/wiki/Civic%20intelligence
Civic intelligence is an "intelligence" that is devoted to addressing public or civic issues. The term has been applied to individuals and, more commonly, to collective bodies, like organizations, institutions, or societies. Civic intelligence can be used in politics by groups of people who are trying to achieve a common goal. Social movements and political engagement in history might have been partly involved with collective thinking and civic intelligence. Education, in its multiple forms, has helped some countries to increase political awareness and engagement by amplifying the civic intelligence of collaborative groups. Increasingly, artificial intelligence and social media, modern innovations of society, are being used by many political entities and societies to tackle problems in politics, the economy, and society at large. The concept Like the term social capital, civic intelligence has been used independently by several people since the beginning of the 20th century. Although there has been little or no direct contact between the various authors, the different meanings associated with the term are generally complementary to each other. The first usage identified was made in 1902 by Samuel T. Dutton, Superintendent of Teachers College Schools on the occasion of the dedication of the Horace Mann School when it noted that "increasing civic intelligence" is a "true purpose of education in this country." More recently, in 1985, David Matthews, president of the Kettering Foundation, wrote an article entitled Civic Intelligence in which he discussed the decline of civic engagement in the United States. A still more recent version is Douglas Schuler's "Cultivating Society's Civic Intelligence: Patterns for a New 'World Brain'". In Schuler's version, civic intelligence is applied to groups of people because that is the level where public opinion is formed and decisions are made or at least influenced. It applies to groups, formal or informal, who are working towards civic goals such as environmental amelioration or non-violence among people. This version is related to many other concepts that are currently receiving a great deal of attention including collective intelligence, civic engagement, participatory democracy, emergence, new social movements, collaborative problem-solving, and Web 2.0. When Schuler developed the Liberating Voices pattern language for communication revolution, he made civic intelligence the first of 136 patterns. Civic intelligence is similar to John Dewey's "cooperative intelligence" or the "democratic faith" that asserts that "each individual has something to contribute, and the value of each contribution can be assessed only as it entered into the final pooled intelligence constituted by the contributions of all". Civic intelligence is implicitly invoked by the subtitle of Jared Diamond's 2004 book, Collapse: Why Some Societies Choose to Fail or Succeed and to the question posed in Thomas Homer-Dixon's 2000 book Ingenuity Gap: How Can We Solve the Problems of the Future? that suggests civic intelligence will be needed if humankind is to stave off problems related to climate change and other potentially catastrophic occurrences. With these meanings, civic intelligence is less a phenomenon to be studied and more of a dynamic process or tool to be shaped and wielded by individuals or groups. Civic intelligence, according to this logic, can affect how society is built and how groups or individuals can utilize it as a tool for collective thinking or action. 
Civic intelligence sometimes involves large groups of people, but other times it involves only a few individuals. civic intelligence might be more evidently seen in smaller groups when compared to bigger groups due to more intimate interactions and group dynamics. Robert Putnam, who is largely responsible for the widespread consideration of "social capital", has written that social innovation often occurs in response to social needs. This resonates with George Basalla's findings related to technological innovation, which simultaneously facilitates and responds to social innovation. The concept of "civic intelligence," an example of social innovation, is a response to a perceived need. The reception that it receives or doesn't receive will be in proportion to its perceived need by others. Thus, social needs serve as causes for social innovation and collective civic intelligence. Civic intelligence focuses on the role of civil society and the public for several reasons. At a minimum, the public's input is necessary to ratify important decisions made by business or government. Beyond that, however, civil society has originated and provided the leadership for a number of vital social movements. Any inquiry into the nature of civic intelligence is also collaborative and participatory. Civic intelligence is inherently multi-disciplinary and open-ended. Cognitive scientists address some of these issues in the study of "distributed cognition." Social scientists study aspects of it with their work on group dynamics, democratic theory, social systems, and many other subfields. The concept is important in business literature ("organizational learning") and in the study of "epistemic communities" (scientific research communities, notably). Civic intelligence and politics Politically, civic intelligence brings people together to form collective thoughts or ideas to solve political problems. Historically, Jane Addams was an activist who reformed Chicago's cities in terms of housing immigrants, hosting lecture events on current issues, building the first public playground, and conducting research on cultural and political elements of communities around her. She is just one example of how civic intelligence can influence society. Historical movements in America such as those related to human rights, the environment, and economic equity have been started by ordinary citizens, not by governments or businesses. To achieve changes in these topics, people of different backgrounds come together to solve both local and global issues. Another example of civic intelligence is how governments in 2015 came together in Paris to formulate a plan to curb greenhouse gas emission and alleviate some effects of global warming. Politically, no atlas of civic intelligence exists, yet the quantity and quality of examples worldwide is enormous. While a comprehensive "atlas" is not necessarily a goal, people are currently developing online resources to record at least some small percentage of these efforts. The rise in the number of transnational advocacy networks, the coordinated worldwide demonstrations protesting the invasion of Iraq, and the World Social Forums that provided "free space" for thousands of activists from around the world, all support the idea that civic intelligence is growing. Although smaller in scope, efforts like the work of the Friends of Nature group to create a "Green Map" of Beijing are also notable. 
Political engagement of citizens sometimes comes from the collective intelligence of engaging local communities through political education. Traditional examples of political engagement include voting, discussing issues with neighbors and friends, working for a political campaign, attending rallies, and forming political action groups. Today, social and economic scientists such as Jason Corburn and Elinor Ostrom continue to analyze how people come together to achieve collective goals such as sharing natural resources, combating diseases, formulating political action plans, and preserving the natural environment. One study suggests that it might be helpful for educational institutions such as colleges or even high schools to educate students on the importance of civic intelligence in politics so that better choices can be made when tackling societal issues through a collective citizen intelligence. Harry C. Boyte, in an article he wrote, argues that schools serve as a sort of "free space" for students to engage in the community efforts described above. Schools, according to Boyte, empower people to take action in their communities, thus rallying increasing numbers of people to learn about politics and form political opinions. He argues that this chain reaction is what then leads to civic intelligence and the collective effort to solve specific problems in local communities. One study shows that citizens who are more informed and more attentive to the world of politics around them are more politically engaged at both the local and national level. Another study, aggregating the results of 70 articles about political awareness, finds that political awareness is important in the onset of citizen participation and the voicing of opinion. In recent years, there has been a shift in how citizens stay informed and become attentive to the political world. Although traditional political engagement methods are still used by most individuals, particularly older people, there is a trend towards social media and the internet in terms of political engagement and civic intelligence. Economics and civic engagement Civic intelligence is involved in economic policymaking and decision-making around the world. According to one article, community members in Olympia, Washington, worked with local administrations and experts on affordable housing improvements in the region. This collaboration utilized the tool of civic intelligence. In addition, the article argues that nonprofit organizations can facilitate local citizen participation in discussions about economic issues such as public housing and wage rates. In Europe, according to the RSA's report on its Citizens' Economic Council, democratic participation and discussion have positive impacts on economic issues in society such as poverty, housing, the wage gap, healthcare, education, and food availability. The report emphasizes citizen empowerment, clarity and communication, and building legitimacy around economic development. The RSA's economic council is working towards advancing more crowdsourced economic ideas and increasing the expertise of fellows who will advise policymakers on engaging citizens in the economy. The report argues that increasing citizen engagement makes governments more legitimate through increased public confidence, stakeholder engagement, and government political commitment.
Ideas such as creating citizen juries, citizen reference panels, and the devolution of policymaking are explored in more depth in the report. Collective civic intelligence is seen by the RSA as a tool to improve economic conditions in society. Globally, civic participation and intelligence interact with the needs of businesses and governments. One study finds that increased local economic concentration is correlated with decreased levels of civic engagement because citizens' voices are drowned out by the needs of corporations. In this situation, governments overvalue the needs of big corporations compared with the needs of groups of individual citizens. This study points out that corporations can negatively impact civic intelligence if citizens are not given enough freedom to voice their opinions regarding economic issues. The study shows that the US has faced civic disengagement in the past three decades due to the monopolization of opinion by corporations. On the other hand, if a government supports local capitalism and civic engagement equally, there might be beneficial socioeconomic outcomes such as more income equality, less poverty, and less unemployment. The article adds that in a period of global development, local forces of civic intelligence and innovation will likely benefit citizens' lives and distinguish one region from another in terms of socioeconomic status. The concept of civic health is introduced by one study as a key component of the wellbeing of a local or national economy. According to the article, civic engagement can increase citizens' professional skills, foster a sense of trust in communities, and allow a greater amount of community investment from citizens themselves. Artificial intelligence One recent prominent example of civic intelligence in the modern world is the creation and improvement of artificial intelligence. According to one article, AI enables people to propose solutions, communicate with each other more effectively, obtain data for planning, and tackle societal issues from across the world. In 2018, at the second annual AI for Good Global Summit, industry leaders, policymakers, research scientists, and AI enthusiasts came together to formulate plans and ideas regarding how to use artificial intelligence to solve modern societal issues, including political problems in countries of different backgrounds. The summit proposed ideas regarding how AI can benefit safety, health, and city governance. The article mentions that in order for artificial intelligence to achieve effective use in society, researchers, policymakers, community members, and technology companies all need to work together to improve it. By this logic, it takes coordinated civic intelligence to make artificial intelligence work. There are some shortcomings to artificial intelligence. According to one report, AI is increasingly being used by governments to limit citizens' civil freedoms through authoritarian rule and restrictive regulations. Technology and automated systems are used by powerful governments to dismiss civic intelligence. There is also concern about losing civic intelligence and human jobs if AI were to replace many sectors of the economy and political landscapes around the world. AI carries the dangerous possibility of getting out of control and replicating destructive behaviors that might be detrimental to society.
However, according to one article, if world communities work together to form international standards, improve AI regulation policies, and educate people about AI, political and civil freedom might be more easily achieved. Social media Recent shifts towards modern technology, social media, and the internet influence how civic intelligence interacts with politics in the world. New technologies expand the reach of data and information to more people, and citizens can engage with each other or the government more openly through the internet. Civic intelligence can take the form of an increased presence among groups of individuals, and the speed at which it emerges is intensified as well. The internet and social media play roles in civic intelligence. Social media platforms like Facebook, Twitter, and Reddit have become popular sites for political discovery, and many people, especially younger adults, choose to engage with politics online. There are positive effects of social media on civic engagement. According to one article, social media has connected people in unprecedented ways. People now find it easier to form democratic movements, engage with each other and with politicians, voice opinions, and take action virtually. Social media has been incorporated into people's lives, and many people obtain news and other political ideas from online sources. One study explains that social media increases political participation through more direct forms of democracy and a bottom-up approach to solving political, social, or economic issues. The idea is that social media will lead people to participate politically in novel ways beyond the traditional actions of voting, attending rallies, and supporting candidates in person. The study argues that this leads to new ways of enacting civic intelligence and political participation. Thus, the study points out that social media is designed to gather civic intelligence in one place, the internet. Another article, featuring an Italian case study, finds that civic collaboration is important in helping a healthy government function in both local and national communities. The article explains that there seem to be more individualized political actions and efforts when people choose to innovate new ways of political participation. Thus, one group's actions of political engagement might be entirely different from those of another group. However, social media also has some negative effects on civic intelligence in politics and economics. One study explains that even though social media might have increased direct citizen participation in politics and economics, it might have also opened more room for misinformation and echo chambers. More specifically, trolling, the spread of false political information, the theft of personal data, and the use of bots to spread propaganda are all examples of negative consequences of the internet and social media. According to the article, these negative results harm civic intelligence because citizens have trouble distinguishing lies from truths in the political arena. Thus, civic intelligence would either be misleading or vanish altogether if a group is relying on false sources or misleading information. Another article points out that a filter bubble is created through group isolation as a result of group polarization. False information and deliberate deception about political agendas play a major role in forming citizens' filter bubbles.
People are conditioned to believe what they want to believe, so citizens who focus on one-sided political news might form their own filter bubbles. One research article found that Twitter increases users' political knowledge while Facebook decreases it. The article points out that different social media platforms can affect users differently in terms of political awareness and civic intelligence. Thus, social media might have uncertain political effects on civic intelligence. References Information society Collective intelligence Active citizenship
Civic intelligence
[ "Technology" ]
3,135
[ "Computing and society", "Information society" ]
14,705,768
https://en.wikipedia.org/wiki/Cees%20Dekker
Cornelis "Cees" Dekker (born 7 April 1959 in Haren, Groningen) is a Dutch physicist, and Distinguished University Professor at Delft University of Technology. He is known for his research on carbon nanotubes, single-molecule biophysics, and nanobiology. Biography Born in Haren, Groningen in 1959, Dekker studied at University of Utrecht, where he received a PhD in Experimental Physics in 1988. In 1988 Dekker started his academic career as Assistant Professor at the University of Utrecht; in these years he also worked in the United States as Visiting Researcher at IBM Research. It was during this period that Dekker carried out research on magnetic spin systems and on noise in superconductors and semiconductors. In 1993 he was appointed as Associate Professor at Delft University of Technology. In the mid-1990s Dekker and his team achieved success with the discovery of the electronic properties of carbon nanotubes, the first single-molecule transistor and other nanoscience. In 1999 he was appointed to the Antoni van Leeuwenhoek Professorship, a chair for outstanding young scientists. In 2000, he was appointed in a regular full professorship in Molecular Biophysics at the Faculty of Applied Sciences at Delft. In 2007, he was appointed as a Distinguished University Professor at Delft. From 2010 to 2012, he was the inaugurating Chair of a new Department of Bionanoscience at the Delft University. From 2010 until 2018, Dekker acted as the Director of the Kavli Institute of Nanoscience at Delft. From 2015 until 2020 he was Royal Academy Professor of the Royal Netherlands Academy of Arts and Sciences. Dekker has been awarded a number of national and international prizes, including the 2001 Agilent Europhysics Prize, the 2003 Spinozapremie, the 2012 Nanoscience Prize, and the 2021 Nano Research Award. He also was granted an honorary doctorate from Hasselt University, Belgium. In recognition of his achievements, Dekker was elected Member of the Royal Netherlands Academy of Arts and Sciences in 2003, Fellow of the American Physical Society and the Institute of Physics and in 2014 he was awarded Knight of the Order of the Netherlands Lion. Work Dekker started his research on single carbon nanotubes in 1993 when he set up a new line of research to study electrical transport through single organic molecules between nanoelectrodes. In 1996 a breakthrough was realized with carbon nanotubes. This was achieved in a collaboration with the group of Nobel laureate Richard Smalley. STM and nanolithography techniques were used to demonstrate that these nanotubes are quantum wires at the single-molecule level, with outstanding physical properties. Many new phenomena were discovered, and he and his research group established a leading position in this field of research. Dekker and his research group discovered new physics of nanotubes as well as explored the feasibility of molecular electronics. In 1998, they were the first to build a transistor based on a single nanotube molecule. Since 2000, Dekker has shifted the main focus of his work towards biophysics where he studies the properties of single biomolecules and cells using the tools of nanotechnology. This change of field was driven by his fascination for the remarkable functioning of biological molecular structures, as well as by the long-term perspective that many interesting discoveries can be expected in this field. 
Current lines of research in his biophysics group are in the areas of: Nanopores for sequencing of DNA and proteins Biophysics of chromatin maintenance Bottom-up biology, working towards synthetic cells Research achievements Source: 1980s 1988, first realization of a model two-dimensional spin glass and verification of its dynamics 1990s 1990, first measurement of the quantum size effect in the noise of quantum point contacts 1991, demonstration of a new vortex-glass phase in high-temperature superconductors 1996, first mesoscopic charge-density-wave devices; and first electrical measurements on a single metal nanocluster between nanoelectrodes 1997, discovery that carbon nanotubes behave as quantum coherent molecular wires 1998, discovery that carbon nanotubes act as chirality-dependent semiconductors or metals; and discovery of room-temperature transistors made from a single nanotube molecule 1999, first measurement of the wavefunction of single molecular orbitals of carbon nanotubes; and discovery of kink heterojunctions of carbon nanotubes which gave decisive evidence for a new Luttinger description of interacting electrons in nanotubes 2000s 2000, discovery that nanotubes can carry extraordinarily large current densities; resolved the controversial issue of electronic transport through DNA molecules by measurements of insulating behavior at the single-molecule level; and demonstration of an AFM technique for single-molecule manipulation of nanotubes 2001, discovery of single-electron transistors at room temperature based on nanotubes; realization of the first logic circuits with carbon nanotube devices; and discovery of the molecular structure of DNA repair enzymes with AFM 2002, exploration of new assembly routes with carbon nanotubes functionalized with DNA 2003, demonstrated the first biosensors made out of a carbon nanotube; resolved the structure and mechanism of DNA repair proteins; and discovery of a new technique for fabricating solid-state nanopores for DNA translocation 2004, discovery of new physics in the translocation of DNA through nanopores; first experimental study of ion conduction in nanofluidic channels; first electrochemistry with individual single-wall carbon nanotubes; STM detection and control of phonons in carbon nanotubes; first electrical docking of microtubules on kinesin-coated nanostructures; first biophysical characterization of the mechanical properties of double-stranded RNA; and first single-molecule study of DNA translocation by a restriction-modification enzyme.
2005, discovery of the mechanism of DNA uncoiling by topoisomerase enzymes; discovery of long-range conformational changes in Mre11/DNA repair complexes; and first force measurements on a DNA molecule in a nanopore 2006, first demonstration of molecular sorting in a lab on a chip using biomotors; discovery of nanobubbles in solid-state nanopores; and first estimate of electrokinetic energy conversion in a nanofluidic channel 2007, first real-time detection of strand exchange in homologous recombination by RecA; discovery of a low persistence length of ends of microtubules; and resolved the mechanism of biosensing with carbon nanotubes 2008, first observation of protein-coated DNA translocation through nanopores; resolved the origin of the electrophoretic force on DNA in nanopores; discovered a significant velocity increase of microtubules in electric fields; discovered an anomalous electro-hydrodynamic orientation of microtubules; and resolved the origin of noise in carbon nanotubes in liquid 2009, discovery of a new phenotype for bacteria in narrow nanofluidic slits; and first detection of local protein structures along DNA using solid-state nanopores 2010s 2010, developed a new way (‘wedging transfer’) to manipulate nanostructures; first report of DNA translocation through graphene nanopores; and realized hybrid nanopores by directed insertion of α-hemolysin into solid-state nanopores 2011, first in vitro measurements of transport across a single biomimetic nuclear pore complex; development of multiplexed magnetic tweezers for kilo-molecule experiments; and resolved the mechanism of homology recognition in DNA homologous recombination 2012, discovery that nucleoid occlusion underlies the accuracy of bacterial cell division; and first ever study of the dynamics DNA supercoils and the discovery of supercoil hopping 2013, controlled shaping of live bacterial cells into arbitrary shapes; and discovery of spontaneous fluctuations in the handedness of histone tetrasomes 2014, first study of Min protein oscillations in shape-shifted bacteria 2015, discovery that condensin is a highly flexible protein structure; and first detection of DNA knots using nanopores 2018, first direct visual proof for DNA loop-extrusion by SMC proteins 2019, first visualization of the circular chromosome of E coli bacteria. 2020s 2020, discovered a new type of loops in chromatin (Z loops) 2021, developed a nanopore electro-osmotic trap for the label-free study of single proteins 2021, demonstrated unlimited re-reading of single proteins using nanopore sequencing 2022, showed nontopological DNA loop extrusion by the SMC passage of huge roadblocks 2022, realized a nanoscale turbine built from DNA origami on a nanopore 2023, developed a one-component division machinery for synthetic cells Other interests Dekker is a Christian and active in the discussion about the relationship between science and religion, a topic on which he has co-edited several books. In 2005 Dekker became involved in Netherlands-wide discussions about Intelligent Design, a movement that he has since clearly distanced himself from. Dekker advocates that science and religion are not in opposition but can be harmonized. He wrote the foreword to the Dutch translation of ‘The Language of God' by Francis Collins, the former director of the National Institutes of Health. Like Collins, Dekker is a proponent of theistic evolution. He actively debates with creationists in the Netherlands. 
In 2015 he co-wrote a children's book that explained evolutionary creation to young children. It was translated into English as 'Science Geek Sam and his Secret Logbook'. He also co-wrote 'Dawn: A Proton's Tale of All That Came to Be', a book that combines the scientific narrative about the evolution of the cosmos with the Christian creation story. Reception Dekker has more than 400 publications, including more than 30 papers in Nature and Science. Thirteen of his group's publications have been cited more than 1000 times, and in 2001, his group's work was selected as Breakthrough of the Year by the journal Science. References External links CV of Dekker 1959 births Living people Carbon scientists Academic staff of the Delft University of Technology Dutch biophysicists Dutch Christian writers Dutch nanotechnologists 20th-century Dutch physicists Fellows of the American Physical Society Fellows of the Institute of Physics Knights of the Order of the Netherlands Lion Members of the Royal Netherlands Academy of Arts and Sciences People from Haren, Groningen Spinoza Prize winners Theistic evolutionists United Pentecostal and Evangelical Churches members Utrecht University alumni Academic staff of Utrecht University 21st-century Dutch physicists
Cees Dekker
[ "Biology" ]
2,166
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
14,706,356
https://en.wikipedia.org/wiki/Algorithms%20%2B%20Data%20Structures%20%3D%20Programs
Algorithms + Data Structures = Programs is a 1976 book written by Niklaus Wirth covering some of the fundamental topics of system engineering and computer programming, particularly the idea that algorithms and data structures are inherently related. For example, if one has a sorted list, one will use a search algorithm optimal for sorted lists. The book is one of the most influential computer science books of its time and, like Wirth's other work, has been used extensively in education. The Turbo Pascal compiler written by Anders Hejlsberg was largely inspired by the Tiny Pascal compiler in Niklaus Wirth's book. Chapter outline Chapter 1 - Fundamental Data Structures Chapter 2 - Sorting Chapter 3 - Recursive Algorithms Chapter 4 - Dynamic Information Structures Chapter 5 - Language Structures and Compilers Appendix A - the ASCII character set Appendix B - Pascal syntax diagrams See also Code: The Hidden Language of Computer Hardware and Software References External links ETH Zurich / N. Wirth / Books / Compilerbau: Algorithms + Data Structures = Programs (archive.org link) N. Wirth, Algorithms and Data Structures (1985 edition, updated for Oberon in August 2004. Pdf at ETH Zurich) (archive.org link) Computer programming books History of computing Computer science books 1976 non-fiction books Prentice Hall books
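The relationship the article describes, that the choice of data structure largely determines the sensible algorithm, can be made concrete with a short sketch. The following Python fragment is the editor's illustration only (the book itself uses Pascal), and the function names are arbitrary: a sorted array admits an O(log n) binary search, while unsorted data forces an O(n) linear scan.

```python
# Illustrative sketch (not from Wirth's book): the data structure chosen
# determines which search algorithm is appropriate.
from bisect import bisect_left

def binary_search(sorted_items, target):
    """O(log n) search; valid only because the input is kept sorted."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

def linear_search(items, target):
    """O(n) fallback for unsorted data: every element must be inspected."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

if __name__ == "__main__":
    data = [3, 9, 17, 25, 40, 62, 88]          # sorted list -> binary search
    print(binary_search(data, 40))              # 4
    print(linear_search([9, 3, 88, 40], 40))    # 3
```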
Algorithms + Data Structures = Programs
[ "Technology" ]
259
[ "Computing stubs", "Computer book stubs", "History of computing", "Computers" ]
14,706,722
https://en.wikipedia.org/wiki/3C%209
3C 9 is a lobe-dominated quasar located in the constellation Pisces. When its redshift was measured in 1965, it was the most distant object then known, and it was the first object found with a redshift in excess of 2. References External links Wikisky image of 3C 9 Image of 3C 9 by PanSTARRS Quasars 009 Pisces (constellation) 2817473
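As a hedged illustration of what a redshift just above 2 implies for observation (this calculation is the editor's addition, not part of the article), an emitted wavelength is stretched by a factor of 1 + z; the example below uses the standard rest-frame Lyman-alpha wavelength and an assumed example value of z = 2.01, just above the article's "in excess of 2".

```python
# Editor's illustration: how a redshift z > 2 shifts an emitted spectral line.
# Observed wavelength = (1 + z) * rest-frame wavelength.
LYMAN_ALPHA_NM = 121.6   # standard rest-frame Lyman-alpha wavelength, nm

def observed_wavelength(rest_nm: float, z: float) -> float:
    return (1.0 + z) * rest_nm

z = 2.01                  # assumed example value, just above 2
print(f"{observed_wavelength(LYMAN_ALPHA_NM, z):.0f} nm")   # ~366 nm
# The line is shifted from the far ultraviolet toward the near-ultraviolet/optical,
# where it becomes accessible to ground-based telescopes.
```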
3C 9
[ "Astronomy" ]
88
[ "Pisces (constellation)", "Constellations" ]
14,706,811
https://en.wikipedia.org/wiki/3C%20191
3C 191 is a quasar located in the constellation Cancer. It lies at redshift z = 1.95 and is hosted by an elliptical galaxy. The quasar contains a radio jet known to have a high rotation measure, with a thin-shell configuration created by a form of wind inside the central regions. According to studies, the quasar is producing outflowing winds at an energy rate of 1.9 × 10^45 erg s^−1. The elliptical host of 3C 191 is said to have a gas density of ~0.17 cm^−3 with a magnetic field measuring ~2.5 × 10^−3 μG. In addition, 3C 191 contains a number of absorption lines. References Quasars 191 2817585 Cancer (constellation)
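For readers more used to SI units, the quoted wind power converts directly; the snippet below is an editor's illustration using the article's 1.9 × 10^45 erg s^−1 figure and the nominal IAU solar luminosity.

```python
# Editor's illustration: converting the quoted outflow power to SI units
# and to solar luminosities.
ERG_PER_JOULE = 1.0e7       # 1 J = 1e7 erg
L_SUN_W = 3.828e26          # nominal solar luminosity, W (IAU value)

outflow_erg_per_s = 1.9e45
outflow_w = outflow_erg_per_s / ERG_PER_JOULE
print(f"{outflow_w:.1e} W")                 # ~1.9e38 W
print(f"{outflow_w / L_SUN_W:.1e} L_sun")   # ~5e11 solar luminosities
```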
3C 191
[ "Astronomy" ]
160
[ "Cancer (constellation)", "Galaxy stubs", "Astronomy stubs", "Constellations" ]
14,706,853
https://en.wikipedia.org/wiki/Fire%20sale
A fire sale is the sale of goods at extremely discounted prices. The term originated in reference to the sale of goods at a heavy discount due to fire damage. It may or may not be defined as a closeout, the final sale of goods to zero inventory. They are said to occur in the financial markets when bidders who value assets highly are prevented from bidding on them, depressing the average selling price below what it otherwise would be. This lowering of the price can cause even further issues because it may be inaccurately perceived as signalling negative information. History The term is adapted from reference to the sale of fire-damaged goods at reduced prices. In Proceedings of the Fitchburg [Mass.] Historical Society and Papers Relating to the History of the Town Read by Some of the Members the following entry is found: In December, 1856, the account of an extensive fire in the American House mentions the following occupants: E. B. Gee, clothing; T. B. Choate, drugs; J. C. Tenney, boots and shoes; Maraton Upton, dry goods; and M. W. Hayward, groceries. Maraton Upton removed his stock to No. 9 Rollstone block, and advertised "Extraordinary fire sale; customers are invited to call and examine goods which are still warm." The term also has a counterpart in "railroad salvage", the discount sale of goods damaged in derailment or other accidents. According to Plutarch, Crassus would show up to a burning building with a fire brigade and offer to put out the fire, or sometimes to buy the property at an outrageous discount. If the owner agreed, he would put out the fire. If they refused, he would stand by and let it burn. Sports In professional sports, a fire sale occurs when a team trades many of its veteran players, especially expensive star players, to other teams for less expensive and usually younger players. Teams usually have a fire sale for financial reasons. The term is generally thought of as different from merely "rebuilding" a team, because during a rebuilding process, teams often obtain players who are already in the major leagues or who are close to being major-league-ready, while retaining at least some of their key veterans (such as a franchise player) while also getting players from their minor league system; most rebuilding teams have few veterans remaining to jettison in the first place. On the other hand, trades in a fire sale often bring a team draft picks and prospects who have little to no major-league experience in their sport, in exchange for proven, experienced veterans. The term comes from the perception that the team is trying to get rid of all its players. Baseball In sports, the term fire sale is especially used in Major League Baseball, where the most infamous fire sale occurred in 1997. Weeks after winning the 1997 World Series, the Florida Marlins began trading away several of their high salary players and key cogs in the championship run, with Moisés Alou and Al Leiter among the first of many to go throughout the off-season and well into the 1998 season. This ended any realistic chance of the Marlins' defending their title. They plummeted to a 54–108 record in 1998, the worst ever by a defending World Series champion. Owner and manager Connie Mack put the Philadelphia Athletics through a fire sale twice. In 1914, the Athletics lost the 1914 World Series to the "Miracle Braves" in a four-game sweep. Mack traded, sold or released most of the team's star players soon after, in what could be considered as the first fire sale in organized sports. 
The second and more consequential fire sale came after the 1933 season. On December 12, Mack traded Mickey Cochrane to the Detroit Tigers for Johnny Pasek and $100,000 and then traded Pasek and former 20-game winner George Earnshaw to the Chicago White Sox for Charlie Berry and $20,000. On the same day he traded another future Hall of Famer, sending Lefty Grove, Max Bishop and Rube Walberg to the Boston Red Sox for Rabbit Warstler, Bob Kline and $125,000. Between 1933 and 1935 the Athletics also traded or sold Jimmie Foxx, Al Simmons, Doc Cramer, and Jimmie Dykes. The Athletics would never come close to having the success they had before the fire sale and the team eventually moved to Kansas City in 1955. The Athletics went through another major fire sale in anticipation of a move to Las Vegas. In April 2023, the Athletics ceased negotiations with the city of Oakland, California, where the team had played since 1968, in favor of constructing a stadium on a plot of land near Interstate 15. This was eventually changed to the plot of land where the Tropicana hotel and casino sits due to problems with the land that was originally chosen. The fire sale was seen by many as a way to provide a reason for this move. In response, Athletics fans hung several banners deriding team ownership and calling for the team to remain in Oakland at the Coliseum, which were controversially cropped out on the MLB.tv broadcast. The fire sale had taken place since the beginning of the 2022 season, when the Athletics suffered their worst season since 1979, losing 102 games. Despite fan protests, during the 2023 All-Star Game, Rob Manfred confirmed that relocation papers had been filed and that relocation proceedings had begun following the approval of a Nevada legislative package providing funding for the move. Another infamous fire sale occurred in 1994. The Montreal Expos ended the strike-shortened 1994 season with the best record in the majors. But by the start of the following season, many of the team's young stars had either been traded or lost to free agency. The Expos never really recovered on or off the field from this, and moved to Washington, D.C., as the Nationals in 2005. The Miami Marlins had another controversial fire sale in the 2012–2013 offseason. In a massive, 12-player trade with the Toronto Blue Jays, they sent away such players as Josh Johnson, José Reyes, and Mark Buehrle. In the 2017–18 offseason, the Marlins had yet another fire sale. After longtime owner Jeffrey Loria sold the team to a group led by former MLB player Derek Jeter and Bruce Sherman, nearly all of the team's star players, notably Giancarlo Stanton, Christian Yelich, and Marcell Ozuna, were traded to different teams across the major leagues. The motive for the team dismantling was primarily to help lessen payroll to pay off the organization's outstanding debt. The team's popularity notably declined in 2018 as a result of the fire sale, with the team's attendance falling to the lowest in Major League Baseball. Another infamous fire sale took place immediately after the 2021-22 MLB lockout, when the Cincinnati Reds traded, refused to sign, or waived most of their star players, including Nick Castellanos, Jesse Winker, and Eugenio Suarez. This led to fan protests on social media and at Great American Ball Park, which in turn led to Phil Castellini telling fans "Where are you gonna go?" and "Be careful what you ask for" in an interview with WLW. 
The Reds went on to only win 3 of their first 25 games, and ended up with a record of 62–100, which was only the 2nd time the Reds lost at least 100 games in a season. Other sports The Ottawa Senators had a fire sale during the 2018–19 season. Several star players like Matt Duchene, Mike Hoffman, Mark Stone and team captain Erik Karlsson were traded in exchange for declining veterans, prospects and draft picks. Many perceived the fire sale as Senators' management being unwilling to sign stars to long-term, expensive extensions, as many of them would be eligible to become free agents after the season. The Senators would finish dead last in 2018–19, while their former stars found success on other teams, and fan attendance and support dropped significantly due to a lack of good will with Senators ownership and the fanbase. In association football, clubs which go through financial troubles or are relegated and dropped into a lower division may need to go through a fire sale to reduce their operating expenses or replace the revenue lost from leaving the superior division. Players who are deemed to have overinflated wages or a lack of quality may be sold for low or no transfer fees in order to get their wage off the books. See also Suggested retail price References Pricing Fire Bankruptcy
Fire sale
[ "Chemistry" ]
1,735
[ "Combustion", "Fire" ]
14,706,985
https://en.wikipedia.org/wiki/Anthony%20L.%20Turkevich
Anthony Leonid Turkevich (July 23, 1916 – September 7, 2002) was an American radiochemist who was the first to determine the composition of the Moon's surface using an alpha scattering spectrometer on the Surveyor 5 mission in 1967. Early life and education Turkevich was born on July 23, 1916, in Manhattan, New York, at the bishop's house attached to Saint Nicholas Russian Orthodox Cathedral. His father, Leonid Turkevich, was dean at the time, and later became the Metropolitan of the Orthodox Church in North America. He had two brothers. Turkevich studied at Dartmouth College and obtained his bachelor's degree in chemistry in 1937. He completed his Ph.D. at Princeton University on the structure of small molecules in 1940. Career Turkevich moved to the Department of Physics at the University of Chicago as a research assistant with Robert Mulliken where he studied molecular spectroscopy and nuclear fission products. In 1942, during World War II, he joined the Manhattan Project, working initially at Columbia University. The Columbia laboratory group was asked to move to Chicago as part of the project and from 1943 to 1945 he worked at the Metallurgical Laboratory or "Met Lab", at the University of Chicago. He investigated the separation of uranium isotopes by gaseous diffusion of uranium hexafluoride and the radiochemistry of reactor products, such as plutonium, that are generated by neutron capture in uranium. In 1945, he transferred to Los Alamos, and was involved with the Trinity test, the first detonation of a nuclear device, near Alamogordo, New Mexico, on July 16, 1945. Turkevich was one of several scientists who estimated the amount of energy released in the explosion. He then transferred to Edward Teller's theory group to study nuclear fusion and establish whether producing a thermonuclear weapon was feasible, one of many challenges faced by scientists at Los Alamos that led to the development and use of the Monte Carlo method. He worked with Nicholas Metropolis and Stanley Frankel using the ENIAC computer. Turkevich returned to the Department of Chemistry at the University of Chicago as an assistant professor in 1946. In July 1946, Turkevich and Seymour Katcoff suggested that nuclear explosions could be monitored by measuring the atmospheric concentration of the radioactive isotope krypton-85, a fission product. Turkevich wrote a letter to Philip Morrison proposing that atmospheric sampling could be used to estimate the number of fissions that had occurred in nuclear reactors and atmospheric atom bomb tests. The history of this aspect of Turkevich's work didn't become public until it was declassified in 1997. He also worked on the peaceful uses of nuclear energy. For this latter work, he received the 1969 Atoms for Peace Award. Personal life Turkevich married Ireene ("Renee") in September 1948. They had a son and a daughter. Turkevich had two brothers. His elder brother, John Turkevich (1907 – 1998), was Eugene Higgins Professor of Chemistry at Princeton, and his younger brother, Nicholas L. Turkevich (1918 – 2007), was an international advertising executive. Anthony Turkevich died on September 7, 2002, in Lexington, Virginia, aged 86. References 1916 births 2002 deaths 20th-century American chemists Nuclear chemists Atoms for Peace Award recipients Members of the United States National Academy of Sciences Dartmouth College alumni Princeton University alumni University of Chicago faculty Manhattan Project people
Anthony L. Turkevich
[ "Chemistry" ]
691
[ "Nuclear chemists" ]
14,707,384
https://en.wikipedia.org/wiki/3C%2047
3C 47 is a Seyfert galaxy / lobe-dominated quasar located in the constellation Pisces. It was the first quasar found with the classic double radio-lobe structure. References Quasars Seyfert galaxies Pisces (constellation) 047 2817500
3C 47
[ "Astronomy" ]
64
[ "Pisces (constellation)", "Galaxy stubs", "Astronomy stubs", "Constellations" ]
14,708,063
https://en.wikipedia.org/wiki/Enthalpy%E2%80%93entropy%20compensation
In thermodynamics, enthalpy–entropy compensation is a specific example of the compensation effect. The compensation effect refers to the behavior of a series of closely related chemical reactions (e.g., reactants in different solvents or reactants differing only in a single substituent), which exhibit a linear relationship between one of the following kinetic or thermodynamic parameters for describing the reactions: Between the logarithm of the pre-exponential factors (or prefactors) and the activation energies, ln A_i = α + E_a,i/(Rβ), where the series of closely related reactions are indicated by the index i, A_i are the preexponential factors, E_a,i are the activation energies, R is the gas constant, and α, β are constants. Between enthalpies and entropies of activation (enthalpy–entropy compensation), ΔH_i‡ = α + βΔS_i‡, where ΔH_i‡ are the enthalpies of activation and ΔS_i‡ are the entropies of activation. Between the enthalpy and entropy changes of a series of similar reactions (enthalpy–entropy compensation), ΔH_i = α + βΔS_i, where ΔH_i are the enthalpy changes and ΔS_i are the entropy changes. When the activation energy is varied in the first instance, we may observe a related change in pre-exponential factors. An increase in A tends to compensate for an increase in E_a, which is why we call this phenomenon a compensation effect. Similarly, for the second and third instances, in accordance with the Gibbs free energy equation, with which we derive the listed equations, ΔH scales proportionately with ΔS. The enthalpy and entropy compensate for each other because of their opposite algebraic signs in the Gibbs equation. A correlation between enthalpy and entropy has been observed for a wide variety of reactions. The correlation is significant because, for linear free-energy relationships (LFERs) to hold, one of three conditions for the relationship between enthalpy and entropy for a series of reactions must be met, with the most commonly encountered scenario being that which describes enthalpy–entropy compensation. The empirical relations above were noticed by several investigators beginning in the 1920s, since which the compensatory effects they govern have been identified under different aliases. Related terms Many of the more popular terms used in discussing the compensation effect are specific to their field or phenomena. In these contexts, the unambiguous terms are preferred. The misapplication of and frequent crosstalk between fields on this matter have, however, often led to the use of inappropriate terms and a confusing picture. For the purposes of this entry, different terms may refer to what may seem to be the same effect, but in each case either a term is being used as a shorthand (isokinetic and isoequilibrium relationships are different, yet are often grouped together synecdochically as isokinetic relationships for the sake of brevity) or it is the correct term in context. This section should aid in resolving any uncertainties. (see Criticism section for more on the variety of terms) compensation effect/rule : umbrella term for the observed linear relationship between: (i) the logarithm of the preexponential factors and the activation energies, (ii) enthalpies and entropies of activation, or (iii) between the enthalpy and entropy changes of a series of similar reactions. enthalpy-entropy compensation : the linear relationship between either the enthalpies and entropies of activation or the enthalpy and entropy changes of a series of similar reactions.
isoequilibrium relation (IER), isoequilibrium effect : On a Van 't Hoff plot, there exists a common intersection point describing the thermodynamics of the reactions. At the isoequilibrium temperature β, all the reactions in the series should have the same equilibrium constant (K_i). isokinetic relation (IKR), isokinetic effect : On an Arrhenius plot, there exists a common intersection point describing the kinetics of the reactions. At the isokinetic temperature β, all the reactions in the series should have the same rate constant (k_i). isoequilibrium temperature : used for thermodynamic LFERs; refers to β in the equations, where it possesses dimensions of temperature. isokinetic temperature : used for kinetic LFERs; refers to β in the equations, where it possesses dimensions of temperature. kinetic compensation : an increase in the preexponential factors tends to compensate for the increase in activation energy: ln A_i = α + E_a,i/(Rβ). Meyer–Neldel rule (MNR) : primarily used in materials science and condensed matter physics; the MNR is often stated as the observation that a plot of the logarithm of the preexponential factor against activation energy is linear (ln σ_0 varies linearly with E_a) for a conductivity of the Arrhenius form σ = σ_0 exp(−E_a/(k_B T)), where σ_0 is the preexponential factor, E_a is the activation energy, σ is the conductivity, k_B is the Boltzmann constant, and T is temperature. Mathematics Enthalpy–entropy compensation as a requirement for LFERs Linear free-energy relationships (LFERs) exist when the relative influence of changing substituents on one reactant is similar to the effect on another reactant, and include linear Hammett plots, Swain–Scott plots, and Brønsted plots. LFERs are not always found to hold, and to see when one can expect them to, we examine the relationship between the free-energy differences for the two reactions under comparison. The extent to which the free energy of the new reaction is changed, via a change in substituent, is proportional to the extent to which the reference reaction was changed by the same substitution. A ratio of the free-energy differences is the reaction quotient or constant Q, so that Q = δΔG_new/δΔG_ref. The above equation may be rewritten as the difference (δ) in free-energy changes (ΔG): δΔG_new = Q δΔG_ref. Substituting the Gibbs free-energy equation (ΔG = ΔH − TΔS) into the equation above yields a form that makes clear the requirements for LFERs to hold: δΔH_new − TδΔS_new = Q(δΔH_ref − TδΔS_ref). One should expect LFERs to hold if one of three conditions is met: The ΔH's are coincidentally the same for both the new reaction under study and the reference reaction, and the ΔS's are linearly proportional for the two reactions being compared. The ΔS's are coincidentally the same for both the new reaction under study and the reference reaction, and the ΔH's are linearly proportional for the two reactions being compared. The ΔH's and ΔS's are linearly related to each other for both the reference reaction and the new reaction. The third condition describes the enthalpy–entropy effect and is the condition most commonly met. Isokinetic and isoequilibrium temperature For most reactions the activation enthalpy and activation entropy are unknown, but, if these parameters have been measured and a linear relationship is found to exist (meaning an LFER was found to hold), the following equation describes the relationship between ΔH‡ and ΔS‡: ΔH‡ = βΔS‡ + constant. Inserting the Gibbs free-energy equation and combining like terms produces the following equation: ΔG‡ = ΔH‡ − TΔS‡ = (β − T)ΔS‡ + constant, where the constant term is the same regardless of substituent and ΔS‡ is different for each substituent. In this form, β has the dimension of temperature and is referred to as the isokinetic (or isoequilibrium) temperature.
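The common intersection point implied by the isokinetic temperature can be checked numerically. The short Python sketch below is the editor's illustration with assumed values: a set of Arrhenius parameters constructed to obey the kinetic-compensation relation ln A_i = α + E_a,i/(Rβ) yields rate constants that all coincide at T = β.

```python
# Editor's numerical check: Arrhenius lines built with kinetic compensation
# (ln A_i = alpha + E_a_i / (R * beta)) intersect at the isokinetic temperature beta.
import numpy as np

R = 8.314            # gas constant, J/(mol K)
beta = 350.0         # assumed isokinetic temperature, K
alpha = 2.0          # assumed compensation constant (sets ln A)

E_a = np.array([60e3, 80e3, 100e3])      # activation energies, J/mol
ln_A = alpha + E_a / (R * beta)          # compensated preexponential factors

def ln_k(T):
    """Arrhenius: ln k_i = ln A_i - E_a_i / (R * T)."""
    return ln_A - E_a / (R * T)

print(np.round(ln_k(300.0), 3))  # rate constants differ away from beta
print(np.round(ln_k(beta), 3))   # all equal alpha (= 2.0) at T = beta
```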
Alternately, the isokinetic (or isoequilibrium) temperature may be reached by observing that, if a linear relationship is found, then the difference between the ΔH‡'s for any closely related reactants will be related to the difference between the ΔS‡'s for the same reactants: δΔH‡ = βδΔS‡. Using the Gibbs free-energy equation, δΔG‡ = δΔH‡ − TδΔS‡ = (β − T)δΔS‡. In both forms, it is apparent that the difference in Gibbs free-energies of activation (δΔG‡) will be zero when the temperature is at the isokinetic (or isoequilibrium) temperature and hence identical for all members of the reaction set at that temperature. Beginning with the Arrhenius equation and assuming kinetic compensation (obeying ln A_i = α + E_a,i/(Rβ)), the isokinetic temperature may also be given by β = 1/(Rb), where b is the slope of the plot of ln A_i against E_a,i. The reactions will have approximately the same value of their rate constant at an isokinetic temperature. History In a 1925 paper, F.H. Constable described the linear relationship observed for the reaction parameters of the catalytic dehydrogenation of primary alcohols with copper-chromium oxide. Phenomenon explained The foundations of the compensation effect are still not fully understood though many theories have been brought forward. Compensation of Arrhenius processes in solid-state materials and devices can be explained quite generally from the statistical physics of aggregating fundamental excitations from the thermal bath to surmount a barrier whose activation energy is significantly larger than the characteristic energy of the excitations used (e.g., optical phonons). To rationalize the occurrences of enthalpy-entropy compensation in protein folding and enzymatic reactions, a Carnot-cycle model in which a micro-phase transition plays a crucial role was proposed. In drug receptor binding, it has been suggested that enthalpy-entropy compensation arises due to an intrinsic property of hydrogen bonds. A mechanical basis for solvent-induced enthalpy-entropy compensation has been put forward and tested at the dilute gas limit. There is some evidence of enthalpy-entropy compensation in biochemical or metabolic networks, particularly in the context of intermediate-free coupled reactions or processes. However, a single general statistical mechanical explanation applicable to all compensated processes has not yet been developed. Criticism Kinetic relations have been observed in many systems and, since their conception, have gone by many terms, among which are the Meyer-Neldel effect or rule, the Barclay-Butler rule, the theta rule, and the Smith-Topley effect. Generally, chemists will talk about the isokinetic relation (IKR), from the importance of the isokinetic (or isoequilibrium) temperature, condensed matter physicists and material scientists use the Meyer-Neldel rule, and biologists will use the compensation effect or rule. An interesting homework problem appears following Chapter 7: Structure-Reactivity Relationships in Kenneth Connors's textbook Chemical Kinetics: The Study of Reaction Rates: From the last four digits of the office telephone numbers of the faculty in your department, systematically construct pairs of "rate constants" as two-digit numbers times 10^−5 s^−1 at temperatures 300 K and 315 K (obviously the larger rate constant of each pair to be associated with the higher temperature). Make a two-point Arrhenius plot for each faculty member, evaluating ΔH‡ and ΔS‡. Examine the plot of ΔH‡ against ΔS‡ for evidence of an isokinetic relationship. The existence of any real compensation effect has been widely derided in recent years and attributed to the analysis of interdependent factors and chance.
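The statistical objection illustrated by the Connors exercise can be reproduced in a few lines. The sketch below is the editor's illustration (random numbers stand in for the telephone digits): two-point "rate constants" over a narrow temperature range produce activation parameters that appear strongly correlated even though the data carry no chemistry at all. For brevity it correlates E_a with ln A rather than ΔH‡ with ΔS‡; the artifact is the same.

```python
# Editor's illustration of the artifact: random two-point "rate constants"
# measured at 300 K and 315 K give an apparent compensation correlation.
import numpy as np

R, T1, T2 = 8.314, 300.0, 315.0
rng = np.random.default_rng(1)

k1 = rng.uniform(10, 99, 50) * 1e-5        # fabricated rate constants at 300 K, s^-1
k2 = k1 * rng.uniform(1.0, 5.0, 50)        # larger value assigned to 315 K

# Two-point Arrhenius analysis for each fabricated "compound".
E_a = R * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)   # activation energy, J/mol
ln_A = np.log(k1) + E_a / (R * T1)                  # log preexponential factor

r = np.corrcoef(E_a, ln_A)[0, 1]
print(f"apparent compensation correlation: r = {r:.3f}")   # comes out close to 1
```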
Because the physical roots remain to be fully understood, it has been called into question whether compensation is a truly physical phenomenon or a coincidence due to trivial mathematical connections between parameters. The compensation effect has been criticized in other respects, namely for being the result of random experimental and systematic errors producing the appearance of compensation. The principal complaint lodged states that compensation is an artifact of data from a limited temperature range or from a limited range for the free energies. In response to the criticisms, investigators have stressed that compensatory phenomena are real, but appropriate and in-depth data analysis is always needed. The F-test has been used to such an aim: the mean deviation of the points from lines constrained to pass through a common isokinetic point is compared with the mean deviation of the points from the unconstrained lines. Appropriate statistical tests should be performed as well. W. Linert wrote in a 1983 paper: There are few topics in chemistry in which so many misunderstandings and controversies have arisen as in connection with the so-called isokinetic relationship (IKR) or compensation law. Up to date, a great many chemists appear to be inclined to dismiss the IKR as being accidental. The crucial problem is that the activation parameters are mutually dependent because of their determination from the experimental data. Therefore, it has been stressed repeatedly, the isokinetic plot (i.e., ΔH‡ against ΔS‡) is unfit in principle to substantiate a claim of an isokinetic relationship. At the same time, however, it is a fatal error to dismiss the IKR because of that fallacy. Common among all defenders is the agreement that stringent criteria for the assignment of true compensation effects must be adhered to. References Thermodynamics Chemical thermodynamics
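If measured activation parameters are available, the compensation (isokinetic) temperature defined at the start of this entry is simply the slope of ΔH‡ plotted against ΔS‡. The Python sketch below is the editor's illustration with synthetic data and assumed values; in real use the same fit would be applied to experimentally determined activation parameters, subject to the statistical caveats discussed above.

```python
# Editor's illustration: estimating beta from the linear relation
# dH_i = alpha + beta * dS_i using synthetic activation parameters.
import numpy as np

rng = np.random.default_rng(0)
beta_true = 310.0                          # assumed isokinetic temperature, K
alpha_true = 55_000.0                      # assumed intercept, J/mol

dS = rng.uniform(-120.0, -40.0, 12)        # entropies of activation, J/(mol K)
dH = alpha_true + beta_true * dS + rng.normal(0.0, 300.0, dS.size)  # small scatter

beta_fit, alpha_fit = np.polyfit(dS, dH, 1)   # slope -> beta, intercept -> alpha
print(f"beta  = {beta_fit:.1f} K")
print(f"alpha = {alpha_fit:.0f} J/mol")
```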
Enthalpy–entropy compensation
[ "Physics", "Chemistry", "Mathematics" ]
2,496
[ "Chemical thermodynamics", "Thermodynamics", "Dynamical systems" ]
14,708,275
https://en.wikipedia.org/wiki/CLASS%20B1359%2B154
CLASS B1359+154 is a quasar, or quasi-stellar object, that has a redshift of 3.235. A group of three foreground galaxies at a redshift of about 1 are behaving as gravitational lenses. The result is a rare example of a sixfold multiply imaged quasar. See also Twin Quasar Einstein Cross References External links Simbad data on QSO B1359+154 Image QSO B1359+154 Six-Image CLASS Gravitational Lens SIMBAD data Gravitationally lensed quasars Gravitational lensing Boötes
CLASS B1359+154
[ "Physics", "Astronomy" ]
123
[ "Galaxy stubs", "Boötes", "Astronomy stubs", "Astrophysics", "Constellations", "Astrophysics stubs" ]
14,708,457
https://en.wikipedia.org/wiki/RHO%20protein%20GDP%20dissociation%20inhibitor
RHO protein GDP dissociation inhibitor of Rho proteins (rho GDI) regulates GDP/GTP exchange. The protein plays an important role in the activation of the oxygen superoxide-generating NADPH oxidase of phagocytes. This process requires the interaction of membrane-associated cytochrome b559 with 3 cytosolic components: p47-phox, p67-phox and a heterodimer of the small G-protein p21Rac1 and rho GDI. The association of p21rac and GDI inhibits dissociation of GDP from p21rac, thereby maintaining it in an inactive form. The proteins are attached via a lipid tail on p21rac that binds to the hydrophobic region of GDI. Dissociation of these proteins might be mediated by the release of lipids (e.g., arachidonate and phosphatidate) from membranes through the action of phospholipases. The lipids may then compete with the lipid tail on p21rac for the hydrophobic pocket on GDI. Human proteins containing this domain ARHGDIA; ARHGDIB; ARHGDIG; References Protein domains Peripheral membrane proteins
RHO protein GDP dissociation inhibitor
[ "Biology" ]
256
[ "Protein domains", "Protein classification" ]